Hacker News | nobodywillobsrv's comments

Yes, it is insane. I am in the same boat and have received mortgage applications, police details, applications for police jobs, massage receipts, you name it. Many would be considered important leaks of customer data.

I have even had founder level emails that presumably are confidential sent to me because I share the name of someone operating in tech.

I respond or report when it's obviously a real person running a small operation, but for large monoliths there is very little to do except fire off a quick reply to a corporate email address.

I really wish there were some kind of high-level discussion about building something for this specific problem of non-malicious, wrong-person, same-name errors.

Google could do it; it's just not monetizable at a scale they care about, IMO, and I have not been able to think of a way to make this work from outside the email monoliths.

Would love to hear if anyone has ideas.
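One idea for a client-side piece of this: warn when a recipient's address is a near miss for a known contact. A minimal sketch using Python's standard-library `difflib`; the function name and the 0.85 threshold are my own illustration, not any real mail client's API:

```python
from difflib import SequenceMatcher

def flag_suspicious_recipient(address, known_contacts, threshold=0.85):
    """Return a known contact the address is suspiciously close to,
    or None. A near-but-not-exact match suggests a typo'd recipient."""
    local = address.split("@")[0].lower()
    for contact in known_contacts:
        contact_local = contact.split("@")[0].lower()
        if local == contact_local:
            return None  # exact match: nothing to warn about
        similarity = SequenceMatcher(None, local, contact_local).ratio()
        if similarity >= threshold:
            return contact  # near miss: maybe you meant this contact
    return None

# A doubled letter ("giirlfriend" vs "girlfriend") is caught:
print(flag_suspicious_recipient("giirlfriend@gmail.com",
                                ["girlfriend@gmail.com"]))
```

This only catches the sender-side typo case, of course; the "same name, wrong stranger" case needs something on the receiving side.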


What Google has done is add profile pictures for users: if I'm emailing girlfriend@gmail.com I see her picture, but if I email giirlfriend@gmail.com I see someone else's profile picture, which is enough to make me realize I've misspelled it. I'm sure there's more they could do, but at least they're aware of the problem.

But that only works if you’re emailing from another Gmail account yes?

I commend your effort to actually contact the companies and let them know about the error. I stopped doing that a long time ago, when I stopped getting responses, or any kind of meaningful acknowledgment that I was trying to do something good by reporting it.

Re: "high-trust society": generally, people are pointing to some implicit, unwritten structures that stop something from happening.

Collective notions of shame, actual networks of friends and families that reinforce correct behaviour or issue corrections.

Think simply about how credit networks form and function, and why visiting a food truck or a medieval travelling doctor for your vial of ointment is different from buying special products from a brick-and-mortar establishment.

Basically, if you or the network have a harder time back-propagating defaults and bad credit in a way that prevents future bad outcomes, then that is a loss of high trust.

This isn't really about race, unless you are operating at the level of some biological or genetic connection to behaviour. But that is a pretty strange place to be, as there is a whole host of confounding factors that are much more obvious and believable, and I seriously doubt that even a motivated racist could ever credibly run empirical studies showing causal links between any given genetic population cluster and emergent societal behaviour. These are such high-dimensional systems that it seems insane to even think one could measure the effect.

The invisible substrate is the society, unfortunately, and we are all bad at writing it down and measuring it.


It seems to me that "society" isn't anything but a stick with which to beat one's hobby horse. "Society is bad because of the thing that happened to me; save society by changing things my way!" etc. Whereas really, if you turn off the TV and go to the shops, it's fine.


Of enemies. Of enemies.

There are probably three modes of safety.

Deploy tech to an unknown group: could be enemy, could be friend, so perhaps you disable its abilities.

If you deploy tech to friends, you might enable more defense.

Anthropic's models seem to have unstable safety predicates that have a hard time advising on situations regarding the preservation of a people.

The huge problem is that humans AND AI both seem to fail at understanding how humans are made and which humans are which.

You are uniquely responsible for protecting your people. You cannot simply funge their people for your people and pretend that is a fine trade-off. And beyond that, these safety predicates appear to have no notion baked into them of diversity, total fertility rate (TFR), or lineage. The models view the descendant of a nearly extinct lineage the same way they view the descendant of a high-TFR lineage.

You can have ANY kind of opinion on this, but these naive, no-opinion, vague, word-based safety predicates are very scary and dangerous.

I am deeply worried about Anthropic, as I have yet to hear anything that makes me think they have real adults in the room. I would love to be wrong, and so I write here. Please do let me know if they have written anything good on this.


Thinking that this ideology is toxic is not "having no opinion".


The Trump admin regularly speaks of its political adversaries as if they are as bad as foreign adversaries. Why should anyone trust them to limit their surveillance activities to legitimate targets? Mass surveillance, by definition, is not narrowly targeted at enemies, and "enemies" is not narrowly defined for this regime.

What I know is that the people in charge of the US government at this time are authoritarian and have no tolerance for dissent or oversight.

When you have that kind of people in power, the more business leaders kowtow to them, the more power those people accrue, which they can further leverage to gain even more power.

Any company that stands its ground in the face of such pressure is doing better than the ones that cave, in my book. It seems like Anthropic is the only AI company with anything resembling adults. They're standing for some kind of principles; they aren't going around promising to build a quadrillion dollars' worth of data centers in the next several years, and they aren't resorting to advertising right after calling it a sign of desperation.


It really feels like I am no longer impressed with Anthropic's safety work.

Do they have even a basic understanding of the different regimes of safety and what allegiance to one's own state means?

It would be fine to say they are opting out of all forms of protection against adversaries.

But it feels like just more insane naive tech bro stuff.

As someone outside the tech-bro bubble, working in fintech in London: can somebody explain this in a way that doesn't suggest these are kids in a playground who think there is no such thing as the wolf?

Again, opting out and specializing in tech that you are going to provide to your enemies AND friends alike is fine. That is a good specialization. But this is not what I hear. I hear protest songs, not the deep thinking of a thousand-year mindset.


I noticed this a while ago. And the OP isn't even experiencing the degradation of what could have been a huge platform: FB Marketplace.

I thought during the pandemic that FB Marketplace was going to go somewhere. I thought they would try to solve physical delivery with something like an Uber-style service, plus a credit network for the financial side, etc. It would have been huge.

But no. What has happened is that primary dealers are now flooding Marketplace with fake low-ball posts, making it unusable and destroying the secondary market.

I was recently shopping for bunk beds, and lo and behold, there were hundreds if not thousands of posts just for my local area, all from maybe a dozen or so accounts created around 2023.

This is somebody's business (spam order flow as a service), and I assume they pay FB enough for some API that FB literally doesn't care.
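A pattern like that, a handful of accounts generating a huge share of all listings in a category, is trivially detectable. A minimal sketch, using toy data shaped as hypothetical `(account_id, title)` pairs rather than anything from Marketplace's real data model:

```python
from collections import Counter

def flag_flooding_accounts(listings, max_share=0.05):
    """Flag accounts responsible for an outsized share of listings.

    `listings` is a list of (account_id, title) pairs -- a stand-in
    for whatever the platform's real listing records look like.
    """
    counts = Counter(account for account, _ in listings)
    total = len(listings)
    return {acct for acct, n in counts.items() if n / total > max_share}

# Toy data: one dealer posting 60% of all bunk-bed listings,
# plus 40 ordinary one-off sellers.
listings = [("dealer_1", f"Bunk bed #{i}") for i in range(60)]
listings += [(f"user_{i}", "Used bunk bed") for i in range(40)]
print(flag_flooding_accounts(listings))  # {'dealer_1'}
```

The point being: this isn't a hard detection problem, which supports the suspicion that it goes unpoliced by choice rather than by incapacity.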

My theory is that every single feature on FB is A/B-tested to be as bad as it can be, so long as that maximizes screen time. Search doesn't work. You can't find your profile settings or feeds easily. All on purpose, to maximize the time you spend there.

The feed has been dead for me for ages. I would recommend many users simply use it as a storage logbook, and increase FB's costs by occasionally requesting all your data.

It's one of the worst companies out there for explicit bad behaviour IMO.


Isn't visa-free travel just going to create more problems at the border, where the border force has very little time and no paperwork to review your case?

Pre-applying is usually less stressful, as you are doing your best to show that you are OK. Just showing up is not great for either side.

I often wonder about the reasoning.


Do you mean specifically because it's China? Otherwise, no; have you not travelled anywhere that doesn't need a visa? Like (assuming you're from the UK) Canada, the USA, almost(?) anywhere in Europe? It's fine; there's very little "reviewing your case" that actually happens.


> the USA

Only technically do you not need a visa.

In practice you need to apply for an ESTA, which requires a biometric photo, your phone number, your social media details, and a destination contact address.

So you don't need a visa, but you do need to fill out the equivalent and get authorization. Flights require the ESTA (so you cannot wing it), and the airline checks it properly.

Plus the ESTA app is awful (I've had better experiences with third-world countries' systems). And there's another app you should install (customs declaration?).

Do not visit the USA unless you're desperate to (my strong recommendation).


OP is probably talking about pre-entry authorisation (ESTA), which is required by the US and will soon be required in the UK, Europe, Japan, etc. It's not a full-on visa with lots of paperwork, but it does eliminate the risk of someone showing up and being denied entry.


I have traveled to Japan without needing a visa and it was very convenient.

All you had to do was fill in a form with your personal details at the arrival airport (and of course have a valid passport).


Why not just say "reboot the machine when done", if that is what you mean?

It explains what it means.


Exactly. Interesting that both Orwell and Asimov were wrong in different ways. Also, Asimov seems unaware of the Fabian link to the title, which is surely a factor in its origin.


This misses the point: they are ignoring that evolution is literally the way you build things. There is no other way. You don't really know what is actually needed or what might work. You try things and then compress later. If you can try bigger things, bigger leaps, great.


Yes, I have found that Grok, for example, suddenly becomes quite sane when you tell it to stop querying the internet and just rethink the conversation data and answer the question.

It's weird; it's like many agents are now in a phase of constantly gathering more information and never just thinking with what they've got.


But isn't that what we wanted? We complained so much that LLMs use deprecated or outdated APIs instead of the current version because they rely so much on what they remember.


To be clear, what I mean is that Grok will query 30 pages, answer your question vaguely or wrongly, ask for clarification of what you meant, and then go and re-query everything again ... I can imagine why it might need to revisit pages, and it might be a UI thing, but it still feels like, until you yell at it to stop searching and just summarise, it doesn't activate its "think with what you've got" mode.

I guess we could call this: gather, then do your best conditioned on what you have found right now.
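The "gather, then answer" behaviour can be made explicit as a two-phase control loop: a bounded search phase, then a synthesis phase with no further tool access. A minimal sketch; `search` and `synthesize` are hypothetical stand-ins for a real agent's tool call and model call, not any actual API:

```python
def answer(question, search, synthesize, max_queries=3):
    """Two-phase agent loop: bounded gathering, then tool-free synthesis.

    Phase 1 is hard-capped at `max_queries` so the agent cannot keep
    re-querying forever; phase 2 sees only the gathered notes.
    """
    # Phase 1: gather, with a hard cap.
    notes = [search(question, i) for i in range(max_queries)]
    # Phase 2: no tool access here -- answer from notes alone.
    return synthesize(question, notes)

# Stub tools, just to show the control flow:
calls = []
def search(q, i):
    calls.append(i)
    return f"result {i} for {q!r}"

def synthesize(q, notes):
    return f"{len(notes)} notes used"

print(answer("bunk beds", search, synthesize))  # 3 notes used
```

The interesting design question is where the cap comes from; "yelling at it to stop searching" is just a human supplying `max_queries` manually.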


2010's: Google Search is making humans who constantly rely on it dumber

2020's: LLMs are making humans who constantly rely on them dumber

2026: Google Search is making LLMs who constantly rely on it dumber


Touché, that is what we humans are doing to some degree as well.

