Oh, I meant within! I guess that is ambiguous, I figured within = inside, and outside = expired. I'll edit.
Honestly what really egged me on was that I told them I might take them to small claims, and their response was sending a bunch of small claims cases they won.
That's not the point. They were gleeful about their behaviour. It's even more despicable than the faux-kind "oh we are so sorry for your trouble and you are a valued customer, but computer says no."
My first thought is "support a tiny subset of svg that probably still covers 90% of real-world use cases".
I do feel that there are two distinct types of svg - "bunch of paths with fills" and "clever dangerous stuff" - where most real SVGs are of the former type.
Fully expect this to be shot down by someone that's thought about this problem for longer than the 120 seconds I just spent. :)
This is what happens when there isn't an adult in the room to rein things in: you get project overreach. SVGs should never have supported scripting. You want scripting in SVGs, fine, make it a different file format.
I can't imagine the cumulative number of man hours wasted on this problem when the vast majority of users were just looking for a way to make their logos look sharp.
Or you can literally just manipulate your SVG through the DOM in an external JS script... I still have no idea what the original motivation behind scripts in SVGs was.
Yes, that was a large part of the thrust back in the day. Even if it wasn't officially a goal of the SVG working group, there was a lack of an open standards-based alternative to what Flash was able to do, and the developers of the SVG standard saw that adding animation/tweening wouldn't take much given what browsers were already becoming capable of.
A little bit of a, a little bit of b. To displace Flash, even if you don't like it, SVG has to have Flash-like features to appeal to those who do use it and steal them away.
While SVG is a web technology, for the longest time you had to install SVG support as a browser plug-in. I remember installing Adobe SVG viewer around 2000. It was used for interactive visualizations.
I don't remember precisely, but I don't think you could script it from the DOM - I don't see how that could work if it's a plugin.
I think you're right, but the lack of an industry standard for this kind of thing kills it. People want to be able to take the output of whatever tool they use that exports SVG and put it in a browser, which isn't an unfair request. But you'd have no guarantee the filter wouldn't reject the tool's output for using some obscure SVG functionality.
I'd love to see an agreed standard like OpenGL vs OpenGL ES for SVG. SVG-ES. Everyone agrees on the static, non-scripted elements that should work.
The way linked SVGs render from within img tags is basically perfect for SVG images (which as I understand is not standardized but is largely the same across browsers). External resources and scripting are blocked while still rendering nearly all SVGs correctly. And of course, any CSS is scoped to the SVG.
If someone formalizes this as a new format, please give it a new name! tvg tiny vector graphics? savg safe vector graphics?
And keep the scope as simple as possible so it actually ships! Don’t try implementing a binary format or something.
Maybe I'm missing something as I am not a frontend developer, but when you embed SVGs in an img tag as part of a Phoenix LiveView or even just a static component, you no longer get the ability to dynamically change paths/fills/colors with events coming from the server. Even if it's as simple as having a shape that you want to fill with a brand/highlight color, which at least for me is a common use case.
> My first thought is "support a tiny subset of svg that probably still covers 90% of real-world use cases".
It sounds like the linked post was about someone using a blacklist instead of a whitelist. It doesn't matter how tiny your subset is if you allow through stuff you don't recognize.
For the most part svg is safe. The dangerous parts are pretty obvious - the script tag, the image tag, the feImage tag, attributes starting with on, embedding html in <foreignObject>, DTD tricks, namespace tricks, CSS that loads external stuff (keep in mind presentational attributes too - it's not just the style attribute/tag).
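A rough allowlist-style sketch of that idea in Python. The element list and the "strip on* attributes" rule are illustrative, nowhere near a complete safe set - CSS in style attributes/tags, hrefs, and DTD handling would all still need work:

```python
import xml.etree.ElementTree as ET

# Illustrative allowlist; a real sanitizer needs a far more careful list.
ALLOWED_TAGS = {"svg", "g", "path", "rect", "circle", "ellipse",
                "line", "polyline", "polygon", "defs", "title", "desc"}

def local_name(tag):
    # Strip the namespace: "{http://www.w3.org/2000/svg}path" -> "path"
    return tag.rsplit("}", 1)[-1]

def sanitize(svg_text):
    root = ET.fromstring(svg_text)

    def clean(elem):
        # Drop any child element whose tag is not on the allowlist.
        for child in list(elem):
            if local_name(child.tag) not in ALLOWED_TAGS:
                elem.remove(child)
            else:
                clean(child)
        # Drop event-handler attributes like onclick/onload.
        for attr in list(elem.attrib):
            if local_name(attr).lower().startswith("on"):
                del elem.attrib[attr]

    clean(root)
    return ET.tostring(root, encoding="unicode")
```

So `<script>` children and `onclick="..."` attributes get dropped while plain paths survive - but note this whitelists elements only; attribute values still need their own pass.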
W3C has been defining SVG Native, but it hasn't progressed much lately — mostly because there hasn't been any interest in it. SVG Native is a small subset of SVG 2.0 which doesn't support scripting, animations or any external references. https://svgwg.org/specs/svg-native/
So if you are building something where you control every SVG ever produced and rendered then this is totally reasonable.
If you ever need to interface with other tools that generate SVG you now need to have a way of essentially transpiling SVG from the wild into your tamed SVGs. Oftentimes this is done by hand, by a software developer and designer (sometimes the same person).
And this is for basic functionality that your designers expect and have trivial controls for in their vector editors, like "add a drop shadow."
The article goes into some issues with sanitization itself, and except for <script> these are a bunch of reasonable things that someone might expect to work or not have issues with. Sandboxing rendering isn't an unreasonable approach if you're not writing the parser and renderer yourself.
I wonder if it would be best if this was at the browser level as some sort of new format. Otherwise surely it would be really slow/cumbersome to deal with these in ‘user space’
I would say that a proper sanitizer should remove any attribute that has /https?:/ in it. Maybe it should allow access to a subtree of a blessed domain you control, where stuff like textures is stored.
A lot of SVG animation uses JS for some reason. It would be interesting to see if sanitisers strip CSS and SMIL animation, I don't see any security reasons to do so.
> changes within .git directories occur far too often and over so many files that the Backblaze software simply would not be able to keep up.
I don’t really understand that. I’m using Windows File History, and while it’s limited to backing up changes only every 15 minutes, and is writing to a local network drive, it doesn’t seem to have any trouble with .git directories.
> File changes within .git directories occur far too often [...]
That's a crazy statement. The cloud backup system I use can be configured to how often it should bother even looking for new files, and for the section where I have my .git repos (they're actually "bare" git repos and I push to them, locally) I've set it to every two hours. Which is actually overkill because they absolutely do not change that quickly.
This is idiotic. All they have to do is schedule them and then introduce enough hysteresis to not constantly churn on their end. Even backing up at most once a day would be better than this.
I had a back and forth with them about .git folders a couple of years back and their defence was something like "we are a consumer product - not a professional developer product. Pay for our business offering"
But if that's truly their stance, then they are being deceptive about their non-business offering at the point of sale.
EDIT - see my other comment where I found the actual email
Well I do pay for their business product, I have a "Business Groups" package with a few dozen endpoints all backing up for $99/year per machine.
According to support's reply just now, my backups are crippled just like every other customer. No git, no cloud synced folders, even if those folders are fully downloaded locally.
(This is also my personal backup strategy for iCloud Drive: one Mac is set to fully download complete iCloud contents, and that Mac backs up to Backblaze.)
Professional? We indeed use git at the company where I work, but there we have a dedicated backup system used by professionals. No BB involved.
I, on the other hand, as a private consumer, use git for all my hobby projects and note-taking. And my language learning. Of course I do, or I couldn't keep track of what I'm doing over the years, and I wouldn't be able to sort things out. There's nothing professional there. Are BB saying that if you try to do something in an orderly and controlled manner, then it's "professional" and shouldn't be backed up? If that's their stance then no wonder people are leaving BB. I for sure won't ever recommend them again.
You can probably get around this problem by compressing the file and uploading it in a .zip. Google Files allows for making zip files at least, so I don't think it's a rare feature.
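A quick sanity check of why that works: zipping is lossless, so the extracted file is byte-for-byte identical, EXIF and all. Sketch in Python (the "photo" bytes are a fake stand-in, not a real JPEG):

```python
import io
import zipfile

def roundtrip(name, data):
    # Zip the bytes in memory, then extract them again.
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(name, data)
    buf.seek(0)
    with zipfile.ZipFile(buf) as zf:
        return zf.read(name)

# Fake JPEG/EXIF-ish header bytes, purely for demonstration.
photo = b"\xff\xd8\xff\xe1" + b"Exif\x00\x00" + b"\x01" * 32
assert roundtrip("photo.jpg", photo) == photo
```

Since the OS only strips location data from media file types it recognizes, an opaque .zip passes through untouched.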
I think the linked spec suggestion makes the most sense: make the feature opt-in in the file picker, probably require the user to grant location permissions when uploading files with EXIF location information.
yeah it does sound kind of dodgy that there's no option even for advanced users to bypass this; I would guess it's mainly a moat to protect Google Photos. I wonder if online photo competitors are finding a workaround, since searching your photos by location seems like a big feature there
I don't know when Google's EXIF protections are supposed to kick in, but so far my photos auto-synced to Nextcloud still contain location information as expected.
I don't think this has anything to do with Google Photos. People fall victim to doxxing or stalking or even location history tracking by third party apps all the time because they don't realize their pictures contain location information. It's extra confusion to laypeople now that many apps (such as Discord) will strip EXIF data but others (websites, some chat apps) don't.
> It's extra confusion to laypeople now that many apps (such as Discord) will strip EXIF data but others (websites, some chat apps) don't.
You've given me a lot of sympathy for the young'uns whose first experiences on the web might have been with EXIF-safe apps. Then one day they use a web browser to send a photo, and there's an entirely new behavior they've never learned.
> Then one day they use a web browser to send a photo, and there's an entirely new behavior they've never learned.
The article is actually about Google's web browser stripping the EXIF location-data when uploading a photo to a webpage, and the author complains about that behavior.
This is not an implementation detail of the browser itself. Android Chrome behaves that way because the app didn't request the required permission for that data from the OS (which would prompt the user), so the files it receives to upload already have the data removed
Thank you! I meant my comment for anyone who's not on the very latest version: anyone who has experienced Android or another OS with inconsistent privacy-related behaviors for as long as that OS has been around. Yes, now, the issue I'm talking about is solved for the general public on the latest Android devices! At reported cost to power users.
Just to add some more context: The change was applied in Android 10, which was released in 2019.
At the OS level there is no reduction in functionality; the implementation just ensures that the user agrees to share their location data with an app, and until that has been agreed it is not shared (so as not to hinder normal app operation).
Now the fact that the Chrome app doesn't ask for the user's permission is another topic, with its own (huge) complexity: if the user declines to share their location data with a webpage, and Android can only enforce this for known media file types (while e.g. Windows cannot do this for ANY filetype, and on iOS I believe the user cannot even choose not to have it stripped), Chrome actually cannot commit to any decision taken by the user.
It's a known dilemma in the W3C, the Browser should ensure user privacy but for binary data it technically can't...
You're replying to someone who is talking about a native app, but the overall issue here is about web apps. Chrome and Firefox don't request the appropriate permission (which, as things stand right now, is probably the safer choice), and there's no way for a website to signal to the browser that it wants that permission, so that the browser could prompt the user only for websites that ask for it, and persist the allow/deny response, similarly to how general location permission works via the JS location APIs.
Seems to be quite simple: an app which wants to access this info just needs to set the permission for it.
Chrome doesn't seem to request that permission, so the OS doesn't provide the location data to the app. So Chrome ended up in this state by doing nothing rather than by explicitly doing something...
If your app targets Android 10 (API level 29) or higher and needs to retrieve unredacted EXIF metadata from photos, you need to declare the ACCESS_MEDIA_LOCATION permission in your app's manifest, then request this permission at runtime.
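For reference, that manifest declaration (per the Android docs) is a one-liner; the runtime side then goes through the usual permission-request flow:

```xml
<!-- AndroidManifest.xml: needed to read unredacted EXIF location data
     when targeting Android 10 (API level 29) or higher -->
<uses-permission android:name="android.permission.ACCESS_MEDIA_LOCATION" />
```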
That's not sufficient. We need a standardized attribute on the HTML form to request the permission as well. If Chrome requests the permission, great, but that's not fine-grained enough for a web browser.
Well yes, agreed, but as stated Chrome didn't end up with this behavior because they did something; the browser behaves like this because they didn't implement any logic for this permission.
A standardized attribute on an HTML form would be difficult to define, because in this context the page just requests/receives a binary file, so a generic "strip embedded location information" decision from the user would be hard to enforce and uphold (also, by whom?).
In this case Android only knows the file structure and EXIF because the file is requested by Chrome from a media library in the OS, not a file manager.
W3C keeps thinking about this data-minimization topic repeatedly [0], so far they managed to define the principles [1], but enforcing them technically is quite hard if any kind of content can be submitted from a storage to a webpage...
Ideally this should be something search engines handle - but they do a poor job in specialised areas like code repos.
It's helpful to have a github mirror of your "real" repo (or even just a stub pointing to the real repo if you object to github strongly enough that mirroring there is objectionable to you).
One day maybe there will be an aggregator that indexes repos hosted anywhere. But in many ways that will be back to square one - a single point of failure.
The Fediverse seems to dislike global search. Or is that just a mastodon thing?
IMHO - I disagree, but it depends on point of view, so this is not "you are wrong" but "in my view it's not like that".
I think it’s the role of the software vendor to offer a package for a modern platform.
Not the role of OS vendor to support infinite legacy tail.
I don’t personally ever need generational program binary compatibility. What I generally want is data compatibility.
I don’t want to operate on my data with decades old packages.
My point of view is either you innovate or offer backward compatibility. I much prefer forward thinking innovation with clear data migration path rather than having binary compatibility.
If I want 100% reproducible computing I think viable options are open source or super stable vendors - and in the latter case one can license the latest build. Or using Windows which mostly _does_ support backward binaries and I agree it is not a useless feature.
Software shouldn't rot. If you ignore the cancer of everything as a subscription service, algorithms don't need to be tweaked every 6 months. A tool for accounting or image editing or viewing text files or organizing notes can be written well once and doesn't need to change.
Most software that was ever written was done so by companies that no longer exist, or by people (not working for a software company) no longer associated with the company they wrote the tool for. In many of these cases the source is not available, so there is no way to recompile it or update it for a new platform, but the tool works as well as ever.
It makes honest people feel rewarded, valued and acknowledged. It teaches people who wish to follow the rules and conform to social norms what those norms are and where we actually draw the line in practice.
Looked at slightly differently, given a split between high trust and low trust preventing conversions from high to low is similarly important to inducing conversions from low to high.
A few months inside or a few months outside?
Because that seems to determine who's being unreasonable in this.