Why? What's the difference between that and one of the many, many concealed camera options that you don't even notice? Just that it's noticeable? I don't think that's a good enough reason for yet-more-regulation. You're already being recorded everywhere you go in public by the authorities, and often by people standing right next to you unnoticed, so just act accordingly.
Because they will be popular: lots of people will buy them and use them all the time, leading to much more generalized surveillance than the concealed options, which only a tiny fraction of people would ever buy or use (and which we should also regulate).
> What's the difference between that and one of the many, many concealed camera options that you don't even notice?
The latter is literally illegal, at least in my country and I hope in any civilized country. If your point is that there's no difference between glasses and other forms of creep cams and the glasses should be illegal too, I concur!
The owner of the private space generally has authority to deny this already, there's no need for an additional law.
In the US at least, any private homeowner/renter can deny entry to their property, barring legal warrants and exceptional circumstances. A business can have a policy, and is generally legally protected as long as the policy is 1) equally applied, and 2) does not violate the ADA... A court would have to weigh in on whether such glasses are allowed under the ADA... but I suspect there's already a case where a movie theater banned such glasses, and they would probably(?) win, since the individuals in question could be expected to have non-recording glasses.
While technically "there is no expectation of privacy in public space," I do see a categorical difference between creating a stored, AI-processed record of random people and "people can see what you do while you're out and about". That argument was valid before the mass automation we see now; today it is more a fig leaf than an argument.
I do not remember every single person I see on the street. What makes it OK for some guy, who will also forget me, to create a stored, persistent, AI-processed set of videos of me?
I do find the idea of a glasses version of an action cam quite cool, but we are talking about smart glasses from Meta here, which is a different thing.
We are talking about a network of streaming cameras moving around, filming. These videos are stored, still without any specifics about their purpose or when the data will be deleted.
Besides, the people being filmed do not choose or consent to it; they might not even be aware that they are being filmed. This is not like a phone, where you at least have a chance to see it. The person doing the filming chooses to film. Or they might not be aware they are still filming. They might also be one update away from always-on. If Amazon did it with Alexa, Meta can do it with the glasses.
Of course, there are CCTV cameras, but, at least in Europe, their use is very specific. You have to be informed about whom to contact about the data, as well as the purpose of the recording and how long it will be stored. Even there, the scope is much more limited than a random guy filming people without their consent.
The collection is one problem. The usage is another. We know the footage is used to train AI, but for what? Generative AI? Something else? Under the GDPR the purpose of the collection should be known, but in this context it is extremely murky.
Based on existing technology, it would be possible for them to use facial recognition on these videos to track individuals, building profiles as they go, including location history. These profiles could even be linked to the identities of people who have been tagged in photos before. While it might be extremely difficult now, it might become possible later. Making it possible might even be what the AI training is about. The data exist, and it is unclear how long they will be kept, or whether the purpose of processing will change.
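To sketch what I mean, here is roughly how such cross-sighting tracking could work. This is purely illustrative, not a claim about what Meta actually does with the footage: embed_face is a stand-in for a real face-embedding model (ArcFace, FaceNet, and similar), and the similarity threshold is arbitrary.

    import numpy as np

    known: dict[str, np.ndarray] = {}                  # profile id -> reference face embedding
    profiles: dict[str, list[tuple[str, float]]] = {}  # profile id -> (location, timestamp) sightings

    def embed_face(face_crop: np.ndarray) -> np.ndarray:
        # Stand-in for a real face-embedding model; returns a fixed-size vector.
        raise NotImplementedError

    def record_sighting(face_crop: np.ndarray, location: str, ts: float) -> None:
        emb = embed_face(face_crop)
        emb /= np.linalg.norm(emb)
        # Match against every known profile by cosine similarity (0.6 is arbitrary).
        for pid, ref in known.items():
            if float(ref @ emb) > 0.6:
                profiles[pid].append((location, ts))
                return
        pid = f"person_{len(known)}"  # unseen face: open a new, as-yet-unnamed profile
        known[pid] = emb
        profiles[pid] = [(location, ts)]

Once a profile accumulates sightings, linking it to a name only takes one previously tagged photo.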
It would be bad enough if it were any company, but we are talking about Meta, the company that brought us the Cambridge Analytica scandal, a company that knowingly let its users be scammed by ads for profit. Profit over ethics has been part of their DNA from the start, not an exception.
The most important real use case of devices like this is as accessibility tech. Blind people everywhere are talking about devices like this.
It's the same with phones. I know blind people who have been harassed for holding their phones up to things as though they are taking pictures, but in fact they're using the camera on their phone to render signage legible to them, or having their phone (or a person on the other end) read it.
Banning this in a way that doesn't in practice cause problems for visually impaired people would be difficult. It might also be difficult to do in a way that doesn't harm, for instance, accountability for cops who are acting in public.
The impulse to "ban" is sometimes a bit naive imo.
Please don't slander the most open AI company in the world, even more open than some non-profit labs at universities. DeepSeek is famous for publishing everything. They might take a while to publish source code, but it's almost always there, and their papers are extremely pro-social, helping the broader open AI community. This is part of why they struggle to get funded: investors hate openness. And in China they struggle against the political and hiring power of the big tech companies.
And DeepSeek often comes up with very cool new approaches to AI that the rest of the field then copies. Some of those copiers have 10x or 100x the GPU training budget, and that budget is their moat to stay competitive.
I think they were reading GP's comment as a correction, like "not open source, just open weights". I'm not sure if their reading was accurate, but I enjoyed their high-effort comment nonetheless.
X is full of "open weights!" corrections as a dog whistle by the anti-China crowd. And they are right about models from the Chinese Big Tech, but completely wrong about DeepSeek.
Correct. We have open-weight models from OpenAI, Facebook, Mistral, DeepSeek, Z.ai, MiniMax, and all sorts of other companies. Most of them have fantastic and open licensing terms.
If we can't build the weights, then we don't have the source. I'm not entirely sure what an open-source model would even look like, but I am confident that these binary blobs that we are loading into llama.cpp and vllm aren't the equivalent of source code. We have absolutely no idea what sort of data went into them.
This is fine. It isn't slanderous. It is what we have, and it is awesome. Just because it is awesome doesn't make it open source.
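To put it concretely, here is everything an "open weights" release actually gives you. A minimal sketch using llama-cpp-python; the model filename is made up:

    from llama_cpp import Llama  # pip install llama-cpp-python

    # You can download and run the blob...
    llm = Llama(model_path="./some-open-weights.Q4_K_M.gguf")
    out = llm("What data were you trained on?", max_tokens=64)
    print(out["choices"][0]["text"])
    # ...but there is no equivalent of `make` here: no training data, no
    # training code, no way to rebuild the weights or audit what went into them.

You can run it, fine-tune it, quantize it. What you can't do is the open-source thing: rebuild it from source and inspect what produced it.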
It’s not slander to say something true. These are open weights, not open source. They don’t provide the training data or the methodology required to reproduce these weights.
So you can’t see what facts are pruned out, what biases were applied, etc. Even more importantly, you can’t make a slightly improved version.
This model is about as open source as a Windows XP installation ISO.
Clearly they felt the big backlash when version 5 was released, and now they are afraid of another response like that. In effect, for the user it will likely be only a small update.
I'd really like to see improvements like these:
- Some technical proof that data is never read by OpenAI.
- Proof that no logs of my data or derived data are saved.

Etc.