
Not a fan of regulation in general, but would love to see a ban of cameras on glasses used in public spaces.

Why? What's the difference between that and one of the many, many concealed camera options that you don't even notice? Just that it's noticeable? I don't think that's a good enough reason for yet-more-regulation. You're already being recorded everywhere you go in public by the authorities, and often by people standing right next to you unnoticed, so just act accordingly.

“You're already being recorded everywhere you go in public by the authorities”

You are the frog being boiled.


Because they will be popular and lots of people will buy them and use them all the time, leading to much more generalized surveillance than the concealed options that only a tiny, tiny fraction of people would buy or use (and that we should also regulate).

> What's the difference between that and one of the many, many concealed camera options that you don't even notice?

The latter is literally illegal, at least in my country and I hope in any civilized country. If your point is that there's no difference between glasses and other forms of creep cams and the glasses should be illegal too, I concur!


The problem is if it becomes socially normalized. If you're using a concealed camera and someone notices, you're a creep/asshole.

Yet more regulation? We have regulation for these glasses already?

Aren’t there countries that make it mandatory to blot out faces of people on videos if they didn’t consent?


If anything they should be banned in private spaces, like if someone wearing them enters someone's home etc.

There is no expectation of privacy in public.


The owner of the private space generally has authority to deny this already, there's no need for an additional law.

In the US at least, any private homeowner/renter can deny entry to their property, barring legal warrants and exceptional circumstances. A business can have a policy, and is generally legally protected as long as the policy is 1) equally applied, and 2) does not violate ADA... A court would have to weigh in if glasses are allowed or not for ADA... but I suspect there's already a case where a movie theater banned such glasses and they would probably(?) win, since such individuals could be expected to have non-recording glasses.


While technically "there is no expectation of privacy in public space," I do see a categorical difference between creating a stored, AI-processed record of random people and "people can see what you do while you're out and about". That argument was valid before the mass automation we see now, but now it is more a fig leaf than an argument.

I do not remember every single person I see on the street. What makes it OK for some guy who will also forget me to create a stored, persistent, AI-processed set of videos of me?

I do find the idea of a glasses version of an action cam quite cool, but we are talking about smart glasses from Meta here, which is a different thing.

We are talking about a network of streaming cameras moving around, filming. These videos are stored, still without any specifics about their purpose or when the data will be deleted.

Besides, the people being filmed do not choose or consent to be filmed; they might not even be aware that they are being filmed. This is not like a phone, where you at least have a chance to see it. The person doing the filming chooses to film. Or they might not be aware they are still filming. They might also be one update away from always-on. If Amazon did it with Alexa, Meta can do it with the glasses.

Of course, there are CCTV cameras, but, at least in Europe, their use is very specific. You have to be informed about who to contact about the data, as well as the purpose of the recording and how long it will be stored. There, too, the scope is much more limited than a random guy filming people without their consent.

The collection is one problem. The usage is another. We know they are used to train AI for unspecified uses. Generative AI? Something else? Under the GDPR the purpose of the collection should be known, but in this context it is extremely murky.

Based on existing technology, it would be possible for them to use facial recognition on these videos to track individuals, building profiles as they go, including location. These profiles could even be linked to the identity of people who have been tagged in photos before. While it might be extremely difficult now, it might be possible later. Making it possible might even be what the AI training is about. The data exist, and it is unclear how long it will be kept, or whether the purpose of processing will change.
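To make the tracking concern concrete, here is a minimal, purely hypothetical sketch of the kind of linkage described above: given face embeddings extracted from different videos (the embeddings, locations, and threshold here are toy values I made up, not anything from Meta's systems), nearest-neighbor matching on cosine similarity is enough to group repeated sightings of one person into a location profile.

```python
# Hypothetical sketch of cross-video profile linkage via face embeddings.
# All names and data here are illustrative assumptions, not a real pipeline.
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def link_sightings(sightings, threshold=0.9):
    """Greedily group (location, embedding) sightings into per-person profiles."""
    profiles = []  # each profile: {"embeddings": [...], "locations": [...]}
    for location, emb in sightings:
        for p in profiles:
            # Compare against the profile's first embedding; a match
            # means we treat this as the same person seen again.
            if cosine(p["embeddings"][0], emb) >= threshold:
                p["embeddings"].append(emb)
                p["locations"].append(location)
                break
        else:
            profiles.append({"embeddings": [emb], "locations": [location]})
    return profiles

# Toy data: two near-identical embeddings (same face), one distinct.
sightings = [
    ("cafe",    [0.90, 0.10, 0.00]),
    ("park",    [0.89, 0.12, 0.01]),  # links to the "cafe" sighting
    ("station", [0.00, 0.10, 0.95]),
]
for p in link_sightings(sightings):
    print(p["locations"])
```

The point of the sketch is that nothing here is exotic: once embeddings and timestamps are stored, building a movement profile is a few lines of post-processing, which is why retention and purpose limits matter more than the capture itself.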

It would be bad enough if it were any company, but we are talking about Meta, a company that brought us the Cambridge Analytica scandal. A company that knowingly let its users be scammed by ads for profit. Profit over ethics has been part of their DNA from the start, not an exception.


We're talking about exactly this: https://www.youtube.com/watch?v=IRELLH86Edo

The most important real use case of devices like this is as accessibility tech. Blind people everywhere are talking about devices like this.

It's the same with phones. I know blind people who have been harassed for holding their phones up to things as though they are taking pictures, but in fact they're using the camera on their phone to render signage legible to them, or having their phone (or a person on the other end) read it.

Banning this in a way that doesn't in practice cause problems for visually impaired people would be difficult. It might also be difficult to do in a way that doesn't harm, for instance, accountability for cops who are acting in public.

The impulse to "ban" is sometimes a bit naive imo.


Tomorrow: We used all your data to train our latest model, Mythos. That was a mistake. Now go away.

I assume they use this model to be able to train new models with user data.

If they think they are respecting their users' privacy by doing so, they are very very wrong.

Open weight!

Please don't slander the most open AI company in the world. Even more open than some non-profit labs from universities. DeepSeek is famous for publishing everything. They might take a bit to publish source code, but it's almost always there. And their papers are extremely pro-social, helping the broader open AI community. They struggle to get funded because investors hate openness. And in China they struggle against the political and hiring power of the big tech companies.

Just this week they published a serious foundational library for LLMs https://github.com/deepseek-ai/TileKernels

Others worth mentioning:

https://github.com/deepseek-ai/DeepGEMM a competitive foundational library

https://github.com/deepseek-ai/Engram

https://github.com/deepseek-ai/DeepSeek-V3

https://github.com/deepseek-ai/DeepSeek-R1

https://github.com/deepseek-ai/DeepSeek-OCR-2

They have 33 repos and counting: https://github.com/orgs/deepseek-ai/repositories?type=all

And DeepSeek often has very cool new approaches to AI copied by the rest. Many others copied their tech. And some of those have 10x or 100x the GPU training budget and that's their moat to stay competitive.

The models from Chinese Big Tech and some of the small ones are open weights only (and allegedly benchmaxxed; see https://xcancel.com/N8Programs/status/2044408755790508113). Not the same.


DeepSeek's models are indeed open weight. Why do you feel that pointing this out would be considered slander?

I think they were reading GP's comment as a correction. Like "not open-source, just open weight". I'm not sure if their reading was accurate but I enjoyed their high effort comment nonetheless

X is full of "open weights!" corrections as a dog whistle by the anti-China crowd. And they are right about models from the Chinese Big Tech, but completely wrong about DeepSeek.

>> Truly open source coming from China.

> Open weight!

They clearly were implying it's not open source.


Correct. We have open-weight models from OpenAI, Facebook, Mistral, DeepSeek, Z.ai, MiniMax, and all sorts of other companies. Most of them have fantastic and open licensing terms.

If we can't build the weights, then we don't have the source. I'm not entirely sure what an open-source model would even look like, but I am confident that these binary blobs that we are loading into llama.cpp and vllm aren't the equivalent of source code. We have absolutely no idea what sort of data went into them.

This is fine. It isn't slanderous. It is what we have, and it is awesome. Just because it is awesome doesn't make it open source.


It’s not slander to say something true. These are open weights, not open source. They don’t provide the training data or the methodology required to reproduce these weights.

So you can’t see what facts are pruned out, what biases were applied, etc. Even more importantly, you can’t make a slightly improved version.

This model is as open source as a Windows XP installation ISO.


> These are open weights, not open source.

Did you even read my comment?


I did. Show me the source code.

> DeepSeek is famous for publishing everything. They might take a bit to publish source code but it's almost always there.

they-might-take-a-bit-to-publish


Weights are the source, training data is the compiler

Training data == source code, training algorithm == compiler, model weights == compiled binary.

Training algorithm is the programmer, weights are the code that you run in an interpreter

isn't it more like the data is the source, the training process is the compiler, and the weights are the binary output.

Clearly they felt a big backlash when version 5 was released. Now they are afraid of another response like this. And effectively, for the user it will likely only be a small update.

I'd really like to see improvements like these:

- Some technical proof that data is never read by OpenAI.
- Proof that no logs of my data or derived data are saved.
- etc.

I don't think this is technically possible without something like homomorphic encryption, which poses too large of a runtime cost for use in LLMs.

They don't even try to prove it another way.

Clearly the intention behind this is to get access to user data (user code).

or "How we make money with your images 2.0".

Would be super interested if any person on this planet uses this as their daily driver.

Yes, it's much less annoying than macOS on my M1 MacBook.

I do, for work at least. Works nicely, aside from the lack of USB-C monitor support (mine has an HDMI output, so it's not a huge deal for me).

In the dev branch now! https://asahilinux.org/2026/02/progress-report-6-19/

I will attempt to use Asahi as my daily driver once this is officially released.


I do.

Soon everyone will run local models for simple stuff like that.

