Hacker News | AndrewKemendo's comments

This looks like a really promising approach

In particular, the forward rollout module is very important. It aligns your (effectively) world model with what it expects from the world, and keeping those in sync, I think, gives this the power it needs to generate the state-action pairs to continuously train semi-supervised.
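If I'm reading the mechanism right, the loop might look something like this rough sketch. Everything here is a hypothetical stand-in (the toy dynamics, the names, the tolerance), not the actual implementation: the idea is just that wherever the rolled-forward world model and the real environment drift apart, that transition becomes a self-labeled training example.

```python
import random

def world_model(state, action):
    # Stand-in for a learned one-step predictor: a crude linear guess.
    return state + action

def environment_step(state, action):
    # The real world: same dynamics plus noise the model hasn't learned.
    return state + action + random.uniform(-0.1, 0.1)

def forward_rollout(state, policy, horizon=10, tolerance=0.05):
    """Roll the model forward alongside the environment and harvest
    (state, action, next_state) triples wherever the two drift apart."""
    training_pairs = []
    for _ in range(horizon):
        action = policy(state)
        predicted = world_model(state, action)
        observed = environment_step(state, action)
        if abs(predicted - observed) > tolerance:
            # Model and world are out of sync: this transition becomes
            # a semi-supervised training example, no human label needed.
            training_pairs.append((state, action, observed))
        state = observed
    return training_pairs

pairs = forward_rollout(0.0, policy=lambda s: 1.0)
```

The harvested triples would then feed back into training the world model, closing the loop.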


I was the author of the practitioners' implementation section for the IEEE 7010 standard for assessing human impact from AI software.

https://standards.ieee.org/ieee/7010/7718/

I also worked closely with Jack Clark at OpenAI, before he disappeared, on all of these issues as a CTO back in 2018.

There are literally zero “AI labs” that have ever cared about “safety”

None of them has ever done anything tangible: no independent, auditable, third-party process with a defined reference baseline for what is safe and what is not, how to evaluate it, or practitioners' guidance for how a designer determines what is and is not safe.

They follow the same rules as every other technology platform: do as much as you can legally get away with, no more, no less.

I say this as somebody who's been actively involved in the AI "safety" debate for a long time now, at least since 2013.

The concept itself doesn’t even make sense if you fully understand the intersectional scope of technology and society

Society's demands are the things that are unsafe, not the technologies themselves.

Just like Bertrand Russell said, "as long as war exists, all technologies will be utilized for it." You can replace "war" with anything you think is unsafe.


Can you elaborate this part please?

> The concept itself doesn't even make sense if you fully understand the intersectional scope of technology and society. Societies demands are the things that are unsafe not the technologies themselves

Where can I learn more about it?


Go back to the fundamentals and read Society of Mind by Marvin Minsky, or anything on cybernetics by Norbert Wiener.

It would be super helpful if you could give the elevator-pitch version of what a safe AI is.

The only “safe AI” is one that comes out of a “safe set of data”

So what would a "safe set of data" actually have to look like?

Well, it would have to not look like the majority of data we produce now, which has latent embeddings (primarily from the Common Crawl database) of racism, lying, competition, destruction, and domination.

I don't believe humans are actually capable of making such data, because our entire structure of society is based on racism, competition, and domination.


> has latent embeddings (primarily from the common crawl database ) of racism, lying, competition, destruction domination

But safety has a wider scope than "racism, lying, competition, destruction domination," like always requiring eye protection when asked about making lemonade.

> I don’t believe humans are actually capable of making such data because our entire structure of society is based on racism competition and domination

So this debate that's been going on since 2013 is over, because it's impossible to make an AI safe since the data is unsafe? That would make sense, but if it were a data problem, it seems like that conclusion could have been reached a long time ago.


Indeed, that conclusion was reached a long time ago, but technologists literally don't care because they're just trying to get paid.

And literally everybody who has been trying to warn about it is beaten down publicly as a radical or whatever


It's in your nature to destroy yourselves


Defeatist bullshit becomes self-fulfilling at some point. "Oh we're all gonna die anyway so we might as well milk this thing for profit. Après moi la déluge."

*"le" déluge

... the fact that you are missing a reference doesn't require that level of disdain

>If we're closer to BNW, then where are the comforts?

Netflix/sports/reality TV + OnlyFans/PH + DoorDash Taco Bell/Chick-fil-A


Homie you’ve been around here long enough to know that that is exactly the case

Software developers view themselves as an entirely different class than skilled blue-collar laborers precisely because of their access to capital

It is explicitly because a single engineer can go out and get money from a capitalist, and a single machine-shop operator cannot, that the distinction exists.

People wonder why software developers are anti-union; it's because they are fundamentally capitalist at heart.


The vast vast vast majority of programmers do not have access to capital.

But they eat up the propaganda about how they totally could just happen to get that capital and run a one man software business and make a billion dollars.

Which is why they spent all that time and energy insisting they didn't have to unionize, because they were super important and could totally negotiate better than anyone else, especially better than a giant group of programmers could, and now they're panicking because dumb middle managers want to replace them entirely with LLMs.

Very predictable.


Temporarily embarrassed billionaires

Watching people debate whether AI will displace labor is like watching someone in 1850 sincerely ask whether the steam engine might affect employment.

Every single person who uses a navigation application to traverse a place they have no previous, independently verified experience of is taking existential risk based on a computer telling them what to do.

There are literally thousands of cases of people dying or being injured because they did what a computer navigation application told them to do

This is also literally what Target's stock-scheduling system does for Target employees restocking shelves.

The vast majority of people's lives are run by someone else's computer.


That’s fundamentally different, and I think you know that.

It's one thing to ask an algorithm to compute an A* driving route from point A to point B. It's another to ask one how to be a better person and go to Heaven.

I’m not religious, and I’m not arguing this from a pro-religion POV. I happily work in AI, and I’m not arguing this from an anti-AI POV. I am highly technical. I love computers. I’m excited about the future. I rely on deterministic algorithms to make my days better. And yet, I do not want to trust the words of an LLM to counsel me on how to be a better husband or father. At this stage, the AI does not know me in the way a counselor or advisor, or even pastor or priest would. And yes, I think that’s a crucial difference.


3/4-agree; LLM advice is only one step up from an Agony Aunt column in a newspaper.

And I'd expect "Target stock scheduling system does for target employees for restocking shelves" to be an A* or similar.

But also, Google maps has directed people to their deaths: https://gizmodo.com/three-men-die-after-google-maps-reported... isn't even what I was originally looking for, which was: https://www.cbsnews.com/news/google-sued-negligence-maps-dri...


Sure, people die from regular programming. Mistakes happen. That’s not good or ok, but it seems unavoidable given today’s technologies and tools.

However, I think that's in a different category than giving life advice. How is an LLM to know that God forgives Joe for stealing a loaf of bread to feed his children, but doesn't forgive Tom for doing the same thing because Tom had money but was saving up to buy cooler shoes and didn't want to spend it? A priest's advice might be "Joe, don't make a habit of it, but you didn't hurt anyone and your children were hungry. Tom, would you freaking knock it off already?" An LLM might reply "that's a wonderful idea!" to both.

Again, I'm firmly not anti-AI. I use it every day. I absolutely do not want to hear its advice on how to navigate the complexities of life as a human being.


Yeah, no. What you described here and what I described before are not programming errors, they're data errors. An A* route finder isn't going to know a bridge is out unless it is told, an LLM won't know that case history unless it is told.

I'd say the real problem with using an LLM for this kind of thing is not what the LLM writes, but that the act of writing helps the human understand their community, so when it is skipped that understanding remains absent. It's like cheating on your homework.
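The A* point is easy to demonstrate: the router's output is only as good as the graph it's handed. A toy sketch (the road graph and costs are made up; the zero-heuristic search below is effectively Dijkstra, the degenerate case of A*):

```python
import heapq

# Toy road graph: node -> list of (neighbor, travel_cost) edges.
# The router can only know what the data tells it.
roads = {
    "home":   [("bridge", 1), ("detour", 4)],
    "bridge": [("work", 1)],
    "detour": [("work", 4)],
    "work":   [],
}

def shortest_route(graph, start, goal):
    """A* with a zero heuristic (i.e. Dijkstra): returns the node path."""
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, step in graph.get(node, []):
            heapq.heappush(frontier, (cost + step, neighbor, path + [neighbor]))
    return None

# With stale data the router happily sends you over the bridge...
print(shortest_route(roads, "home", "work"))   # ['home', 'bridge', 'work']

# ...and it only avoids the bridge once someone updates the data.
roads["home"] = [("detour", 4)]                # bridge is out
print(shortest_route(roads, "home", "work"))   # ['home', 'detour', 'work']
```

The algorithm is correct both times; only the data changed. That's the sense in which a washed-out bridge is a data error, not a programming error.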


It's not fundamentally different: it's people taking physical actions in the real world based on trust in some system.

Whether it's a human or not, they're trusting the system with their existential outcomes.

That is literally exactly the same thing.

The fact that you think the rules of you being a father are somehow different than the rules of you driving to an appointment indicates that you have a completely incoherent worldview based on two incompatible models of epistemology.

As usual, dualists will come up with an incoherent model and then try to act like it's valid.


> The fact that you think that the rules of you being a father are somehow different than the rules of you driving to a appointment indicate that you have a completely incoherent world view based on two incompatible models of epistemology

Two ways to look at this, both of which are coherent:

1. Current AI is better at some stuff than others. Saying "I'm okay driving in a Waymo, but not taking spiritual advice from an AI" makes sense if you think it has not advanced to a near-human level in the spiritual-advice domain.

2. Even if you don't think that's true, it's reasonable to just want a human for certain activities, because communion with other humans in the same existential boat you're in can be the whole point of an activity. I'd argue it is a significant reason for a majority of social activities.


Disclaimer: raised Catholic, now Atheist, married to devout Catholic.

The Church as defined by the institution is a community. I do not see it as a contradiction that the head of the institution is instructing the leaders to not add more layers of abstraction between them and the community, especially when those messages are on the subject of what it means to be human.


> The fact that you think that the rules of you being a father are somehow different than the rules of you driving to a appointment indicate that you have a completely incoherent world view based on two incompatible models of epistemology

lol


You have simply redefined "best" as "hilarious," "often funnier," or "hilarity."

Is it your intention to suggest that the highest possible form of commenting is humorous?


> looked brilliant on the quarterly earnings call. He fired all the bussers. Eliminated expeditors. Replaced kitchen managers with generic “back-of-house” roles. This was what seemed obvious at the time: Labor costs were rising, so remove labor. The savings showed up immediately.

I can only assume that the CEO and none of the management had ever actually worked front or back of house.

Anybody who has would know that eliminating expo and bussers would destroy service.

This is just pure incompetence across the board; saying that it looked brilliant or obvious is the exact opposite of how it looks.


Farmers have a saying: Eating the seed grain.

The surest sign of incompetence is somebody claiming they are forced into a requirement for perfection when the requirement is simply basic adherence to virtue.
