Hah, that's a good point. When I think of it, the first Android skins, like HTC Sense (~3.0), were quite customizable. Every widget had a few stylistic variants, and I think you could even change the look of many components in the OS. Windows was very customizable until around Vista, too. I suppose people don't buy products for their theming features anymore.
I don't entirely agree with other commenters saying it's uninspired. It is neutral, but many functional considerations go into making a UI framework, and neutrality serves an important purpose.
However, given Material's popularity, I think it's inevitable that poorly designed, unergonomic apps will cheapen M3 a lot in the coming years. The same happened with Material 2. It used to be associated with clean, professionally developed apps; then it became associated with the worst of the worst and a lot of mediocre stuff, too. Sturgeon's Law is not kind to these things.
When Matias was in charge of Material, he said the purpose of design guidance isn't to raise the peaks but to fill the valleys. An expert can come up with something that's more appealing/usable than slinging the components together, but someone without that expertise should be able to make something pretty compelling by following the practices set out by people who had it.
Material 2 is often accused of feeling corporate and boring, but I disagree with that accusation.
Before Material, indie apps on Android were big grey buttons and unpadded text on a black background. Not everything that tries to use Material does a good job, but the starting point is better now than it used to be.
People say that wider Linux availability would make it mainstream. However, Chromebooks are among the most widely available laptops. The software is 100% compatible with hardware, and in many cases, the Play Store is included to address the lack of software. That is more than enough for casual computing and office work, two massive segments of the PC user market. And people still don't like them: ChromeOS's market share is similar to that of all the other Linux distributions.
I think the Windows and macOS brands have become lifestyle choices. Windows is the "gamer" and "corporate" choice. macOS is the "student" and "luxury" choice. Linux is the "hacker" choice (they use Arch, by the way). Like iOS vs Android, Xbox vs PlayStation, Toyota vs BMW, and all other brand tribalisms, it seems like most people are emotionally drawn to one or another.
> The software is 100% compatible with hardware, and in many cases, the Play Store is included to address the lack of software
The problem is that the Play Store and the Linux environment on ChromeOS both run in VMs.
On a machine with good specs, this is perfectly fine. But when cheaper ChromeOS devices ship with 4 GB of RAM, older MediaTek SoCs, and eMMC storage instead of SSDs, it's just an outright bad experience.
If Google starts pushing Android Desktop as a replacement for ChromeOS, I think that could be interesting. Being able to run the Play Store without the overhead of a VM could make Android a much better experience than ChromeOS.
> On a machine with good specs, this is perfectly fine.
I think the VMs are fine on the type of machine most people would buy for Windows/macOS. Chromebooks go exceptionally low-spec at the low end, to the point that I'd say their lowest-spec machines probably aren't direct competition for Windows laptops, wouldn't you agree?
You can buy a PC with Linux off the shelf in some countries. In practice, it's an open secret that the machines are for people who don't want to pay for a Windows license but will use Windows anyway.
I think it's a 3D visualization of Earth with simulated clouds. You can ask an AI to generate a GIS layer to visualize an event. Then you can ask about parts of the event in chat.
> Learning from copyrighted works to create new ones has never been protected by copyright
The term "learning" (I presume from "machine learning") shoulders a lot of weight. If we describe the situation more precisely, it involves commercially exploiting literature and other text media to produce a statistical corpus of texts, which is then commercially exploited. It's okay if that is licensed, but none of the AI companies bothered to license said original texts. Some (allegedly) just downloaded torrents of books, which is clear as day piracy. It has little to do with "learning" as used in common English — a person naturally retaining some knowledge of what they've consumed. Plain English "learning" doesn't describe the whole of what's happening with LLMs at all! It's a borrowed term, so let's not pretend it isn't.
What's happening is closer to buying some music cassettes, ripping parts of songs off them into various mixtapes, and selling them. The fact that the new cassettes "learned" the contents of the old ones, or that the songs are now jumbled up, doesn't change that the mixtape maker never had a license to copy the bits of music for commercial exploitation in the first place. After the infringement is done, the rest is smoke and mirrors...
>The term "learning" (I presume from "machine learning") shoulders a lot of weight. If we describe the situation more precisely, it involves commercially exploiting literature and other text media to produce a statistical corpus of texts, which is then commercially exploited.
It's "commercially exploiting literature" in the same sense that an author would be if they read a bunch of novels and then wrote their own based on what they learned from the pre-existing texts. The whole point in dispute is whether that turns into infringement when an AI does it.
By labeling only one of them as "commercially exploiting literature" but not the other, you're failing to distinguish them in any meaningful way, and basically arguing from name-calling.
>It has little to do with "learning" as used in common English — a person naturally retaining some knowledge of what they've consumed. Plain English "learning" doesn't describe the whole of what's happening with LLMs at all! It's a borrowed term, so let's not pretend it isn't.
That's fair: you can't just call them both "learning" and call it a day. But then the burden's on you to show how machine learning breaks from the time-honored tradition of license-free learning/"updating what you write based on having viewed other works". What's different? What is it about machine learning that makes it infringement in a way that it isn't when humans update their weights from having seen copyrighted works?
>What's happening is closer to buying some music cassettes, ripping parts of songs off them into various mixtapes, and selling them. The fact that the new cassettes "learned" the contents of the old ones, or that the songs are now jumbled up, doesn't change that the mixtape maker never had a license to copy the bits of music for commercial exploitation in the first place.
Okay, but (as above) to make that case, you'd need to identify where "acceptable" learning/"updating what you write based on having viewed other works" crosses over into the infringing mixtape example, and I have yet to see anyone try beyond "they're evil corps, it must be bad somehow".
I’m sure some grifters won’t get their second Mercedes, but sites with no content and just ads disappearing would be a wonderful, almost dream-like outcome for the internet. It might even solve the dead internet problem to a degree.
There’s no way the advertising industry giants will let it happen. But the thought alone clearly illustrates the damaging effects of advertising.
Keep in mind that the fines are intended to be progressive. If they don't quit their current practices now that it is clear how the law should be interpreted, the next fine will be substantially larger.
But is 250k euros an appropriate fine for the personally identifiable information that’s been collected and associated with behavioural metrics, political preferences, confidential health data, and other private data points by the 600+ companies that make up IAB and their partners?
This is less than 500 euros per company. They probably pay more each month to host the illegally collected data.
And they probably have the data for millions of EU citizens. Maybe a billion+ profiles worldwide. Granted, the numbers are pulled out of thin air, but what’s a reasonable estimate if not that?
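To make the scale concrete, here's a quick back-of-the-envelope sketch in Python. The fine amount and company count come from the discussion above; the profile count is an illustrative guess, as admitted:

```python
# Back-of-the-envelope: how the 250k euro fine spreads across IAB members.
# The figures are taken from this thread's discussion, not official sources.
fine_eur = 250_000
companies = 600  # "600+ companies that make up IAB"

per_company = fine_eur / companies
print(f"Fine per company: ~{per_company:.0f} EUR")  # well under 500 EUR

# If the data covers, say, 450 million EU residents (illustrative guess):
eu_profiles = 450_000_000
print(f"Fine per profile: ~{fine_eur / eu_profiles:.6f} EUR")
```

However you tweak the guessed profile count, the fine works out to a fraction of a cent per person tracked.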
What do you think happens if they are caught again? By then the precedent has been set. Easy decision: fine them again, and since the previous fine obviously didn't work, increase it. Courts have no patience for repeat offenders.
Also, it sends a signal to wannabe competitors of this company that there are laws, and there are consequences for breaking them.
And of course given that these companies have money, there are going to be lawyers paying attention to see if they can get at that money in some way. Germany is almost as bad on that front as California. Lots of enterprising lawyers here. So, one successful court case can trigger many more once the precedent is set.
The fine is nothing, but their core selling point (selling ads without bothering to ask for consent) has been exposed and ruled illegal. The implication is also that data collected for years by those 600+ advertising agencies has been collected illegally, though I doubt deletion of that data will be enforced without a second suit.
As I always say, you can’t outlaw being an asshole. But I am curious about what sort of assholery we will see next. Maybe all tracking will become “legitimate interest” (I’m kidding, please don’t actually entrench that garbage any more than it already is).
Technically Meta got fined on the basis of the DMA, not the GDPR (which I still don't fully understand). It's illegal according to my own interpretation of the GDPR too, but enforcement is seemingly non-existent.
All these fines are coming, but corporate lawyers stall as much as they can. Then, they appeal first-instance court decisions to stall some more. And they do get fined, 3-7 years down the road. Then, they change tactics just enough to violate a different law. If they were to change the nature of the crime more often, they'd open themselves to more prosecution.
But big tech can handle a few government penalties every decade. It even creates a moat: artificial barriers to market entry. The multiplicity of penalties is insurmountable for new market entrants, but pocket change for the established ones. For example, the UK Online Safety Act is putting all the small social media sites out of business in the UK, but it won't change moderation standards at Facebook. Ergo, it has become Meta's moat. "If a fine is set for a crime, then it's only a crime for poor people".
Tech is full of clever, fast people who run circles around slow-moving government bureaucracies (even judicial ones). To keep pace, courts would need to resolve these cases in weeks: one week for the first instance, one week for appeals. That is the tempo that would stop big tech. Twenty-seven fines with real bite per year would have the intended effect.
But we're talking about a "landmark" GDPR win in this thread that took about 5 years. And the fine so far works out to less than 500 euros per data collector (a 250k euro fine split across the 600+ companies in IAB). It will not even warrant a footnote in these companies' financial statements at the end of the year; they'll just put it in operating expenses (along with the 1,500 euro office coffee machine, which costs 3x more than the privacy violations). Meanwhile, a small blogger collecting analytics data incorrectly may not have much to eat in the month they get fined 500 euros (not that they will have had much to eat during the months of expensive court proceedings), yet they, too, risk the full extent of the penalties.
The actual options are: either you pay with your data or with your wallet. Which makes sense since, you know, journalists like to eat and eating costs money.
But it is illegal to pay with your data. That is the whole point. There shouldn’t be a choice to make here. Journalists should be able to eat and you should be able to read articles without being spied on.