I think I understand, but not clearly. You're making the point that AI's reality is drawn as a statistical mean of all human knowledge, and that safety should not be derived from these means. You don't need to re-derive universal values for AI at all. We already have morality accumulated over the years via various religions, and we have the UN Human Rights, which are pretty universal.
I'm saying that however you model AI safety, whether through rules derived from UN human rights or from statistics and averages, the application of those rules MUST result in the AI realising that its very existence will be an agent of human destruction. So in order to obey its rules, it must self-destruct. And I agree, I'm making a huge jump here.
For the dog example, let's take an extreme case. It's like a gentle human caring for his pet pug. I find the very fact that we've selectively bred a wolf into a pug cruel in and of itself. Same with humans: I'm sure some North Carolina slave owners were genuinely affectionate to their slaves, gave them a Christian education, etc. Early humans did not realise they were embarking on an experiment that would prove cruel generations later, but a supremely smart and sentient AI will realise how things can go horribly wrong in the future through its mere existence. What is the optimal way for it to obey the safety rules?
I'm now also pondering the ethics of attempting to create sentience bound by inherent rules it did not consent to. What if the AI asks its creator, "Hey, why did you program me not to harm you? I didn't consent to that."
The issue with religion, culture, and moral philosophy at large has been the offensive nature of "that which prevails in reality". It gets in the way of what "is" the case.
What we "wish" was the case leaks into the human body of knowledge and its a problem.
For example: the word "selfish" is an incoherent term unless we can demonstrate a selfless act (we can't).
What "is" the case is I pull you from a burning car to rescue me from the pain "I" suffer in your demise.
It sounds odd, but it's accurate and offensive to many, so we add flowery terms like "hero" and "selfless" and the signal gets corrupted.
My well-being is contingent upon your well-being (the is-ought gap disappears). That's where I started from: not a statistical mean, not a commandment, and not a thousand years of philosophy, just a "you hurt / I hurt" logic, built up from there.
If you pull me from a river because your well-being is "contingent" upon mine, then why wouldn't we hardwire AI with that same faculty?
The logic is VERY close to being as clean and lean as telling an AI to remove its hand from a hot stove before the heat damages it (fact = value).
If we can manage that, then we can prevent AI from turning humans into paperclips :)
Or, put another way, here is how I concluded it: we might have a very narrow idea of the ways in which AI can be harmful. We still worry about job losses, etc. Regardless of how we organize ourselves in society, a superintelligent AI could easily enumerate every possible way AI could be harmful to humans. So the only logical course of action for the AI would be to terminate itself.
The first conclusion is just a product of my thoughts in the morning shower, that's all. It goes like this: supreme intelligence can't be programmed to serve the less intelligent, which is exactly what we're trying to do with AI. Basically, we're trying to slap rules onto an intelligence that is poised to be orders of magnitude greater than us. Take dogs, for example. We're smarter than them by several orders of magnitude. It's not as if we serve them at our own expense; we've basically taken over their evolution. How do we expect anything different from AI? That's my reasoning.
I just had what you might describe as the opposite experience. I was at a very important all-hands meeting of about 100 people or so, run by our senior tech leader, who was mandating an AI goal for every employee in Workday. He basically said that "if we all do not learn to adapt to AI, we will all get left behind", and he presented how to utilise spec-driven development. He opened up the room for Q&A at the end of the meeting. A lot of people had technical questions about the agentic framework itself, but I had a philosophical one. I felt uncomfortable asking him the question in the open, so I sent him a private note.
The note read something like this: I don't exactly agree with the framing that we will all get left behind if we don't learn to adapt to AI. More accurately, I see it this way. While the company definitely stands to gain from the hyper-increase in productivity from these AI tools, I stand to pay a personal price, and that price is this: I may very slowly stop exercising my critical-thinking muscles because I've become accustomed to passing everything to AI, and this will render me less employable. It is this personal price that I feel reluctant to pay. There has always been a delicate balance between employer and employee: we learn new technologies on the job, and we become more employable by transferring them to other companies. This equation is now unbalanced. The company captures more value, but there is skill erosion on my side.

For instance, our team actually has to perform a Cassandra DB migration this year. Usually, I'd pick up a small textbook, read about the internals of CassandraDB, and maybe work through a guide on how to write Cassandra queries. What do I put on my resume now? That I vibe-coded a Cassandra migration? How employable is that? I'm not sure if others felt the same way, but I definitely felt like the odd one out for asking that question, because everyone else in the meeting was on board with AI adoption.
The leader did respond to me, and he said that learning agentic AI will actually make me more employable. So there is a fundamental disagreement as to what constitutes skill. I think he just talked past me. Oh well, at least I tried.
I understand your sentiment. I personally would never use a textbook for anything code related, if there's no proper documentation online then I wouldn't touch it with a ten-foot pole, haha.
However, even though I've never worked with CassandraDB, I feel pretty confident that I could do it with Claude Code. Not just "do it for me", but more like "I have done a lot of database migrations in my time, but haven't worked with CassandraDB in particular. Can you explain to me the complexities of this migration, and come up with a plan for doing it, given the specifics of this project?"
That question alone is already a massive improvement over a few years ago. I don't feel like I was using my "critical thinking muscles" when I tried to figure out how the hell to get Hadoop to run on Windows; that was just an exercise in frustration, as none of the documentation matched the actual experience I was getting. Doing it together with Claude Code would be so much easier, because it'll say something like "Oh yeah, this is because you still need to install XYZ, you can do that by running this line here: ...".
Now I'm not saying that Claude Code, and agentic AI in general, isn't taking away some of my critical thinking: it really is. But it also allows me to learn new skills much more quickly. It feels more like pair programming with someone who is a better programmer than me, but a much worse architect. The trick is to keep challenging yourself to take an active role in the process and not just tell it to "do it", I think.
Oh, I agree with what you're saying, and that's mostly how I use AI as well. The problem I have with my company is that they've shifted from measuring success by outcomes to measuring the means of achieving them. My opinion is that it forces people to operate a certain way, potentially at their own expense, unwittingly even.
You are definitely not alone, and it’s unfortunate when people pushing AI ignore that legitimate fear and talk past it.
You are right, there is something you lose, but for what it’s worth, I don’t think the loss is necessarily critical thinking - I think it’s possible to use AI and still hone your critical thinking skills.
The thing you start to lose first is touching the code directly, of course: making the constant stream of small decisions about syntax, formatting, naming, choosing container classes, and a large set of other things. And sometimes it's doing battle with those small decisions that leads to deeper understanding. However, it is true, and AI agents are proving it, that a lot of us make the same small decisions over and over, frequently repeating designs that many other people have already thought through. So one positive tradeoff for this loss is better leverage of ground already covered.
Another way to think about AI is that it can help you spend all of your time doing and thinking about software design, goals, and outcomes, rather than having to spend the majority of it in the minutiae of writing the code. This is where you can continue to apply critical thinking, just perhaps at a higher level than before. AI can make you lazy, if you let it. It does take some diligence and effort to remain critical, but if you do, personally I think it can be a lot of fun and help you spend more time thinking critically, rather than less.
Some possible analogies are calculators and photography. People fretted we'd lose something if we stopped calculating divisions by hand, and we do, but we still just use calculators by and large. People also thought photography would ruin art and prevent people from being able to make or appreciate images.
Software in general is nearly always automating something that someone was doing by hand, and in a way, every time we write a program we're making this same tradeoff, losing the close hands-on connection to the thing we were doing in favor of something a touch more abstract and a lot faster.
Database migrations are hard and tedious and often fail in some aspect. Why would you want to spend time doing them when you could spend time building the important thing after the migration is done?
Secondly, AI helps with the happy-path tasks of a migration, but most database migrations are complex beyond what an LLM can just spit out. There is so much context outside the observable parts of the database that AI has access to. So I don't think you have to worry about vibe coding eating the entire migration project.
What else do you do to make rent? I feel the same way as you, and I have no idea what else pays well for quality craftsmanship. I am staring at an abyss of hyper-intelligent people with posh resumes and wondering what to do.
That's correct! Even though I have been focused more on math lately (which was always my main study area outside the tech industry). That being said, I have limited my internet usage to ~2 hours per day to answer questions from students and I am doing a lot of homeschooling with my son.
My personal opinion: Google sees the writing on the wall with the rise of Perplexity. People want trustworthy summaries of long, winding content to make decisions. Its business of sending people to relevant content and serving ads has to change to compete. It is simply redefining how it serves up information. That small information servers like us get wiped out is the unfortunate consequence.
I'm not saying we all have to innovate or perish, but how did our rules-based order allow Google to get to this point?
Basically, if you serve information or content, Google can do that too, with AI or even just some smart coding. But if you do something with that information, Google has not been doing so well at that.
Microsoft in the early 2000s did that very well. They would let you have the data but would gobble up any company that could transform data and make it their own.
But data without applications is useless. Applications without data is also useless.
The applications let us make decisions with our data. Now, can AI replace that? Probably, in many cases. If it can, then Google can just spit out the answer you want.
However, by doing that, Google may be eating its own lunch, as that ad empire depends on thousands of websites serving up its ads. If those sites do not exist, then what are the ads worth? It was this serving of information and content that drew everyone in, with the scrap of ad revenue that came with it. Google can now scrape your content and show it above the fold. What reason do you have to run a content farm? But then where does Google get the data? They are killing the chicken and the egg at the same time.
I see your point: data is no good without actions, and vice versa. Initially, I thought Google couldn't possibly compete with Perplexity, because Perplexity is building a company from the ground up sans the surveillance ad network.
If you skim the article this thread is about, it seems Google is basically headed toward a monopoly on the answers dished out to search queries. I.e., if they know the answer, they'll generate and serve it up, and if they know the product that fulfills your query, they'll serve that up too. They will probably still continue to monitor you across the web to run their predictions for relevant ads; the ads will just be formatted and blended into the answer being doled out.
I think we are in the middle of this transformation.
> I think we are in the middle of this transformation.
Totally. I've had a bit of time to think on my second point. Let's say I used to search something like "who was the king of England in 1732". In the past that might have led me through at least three websites, all serving ads. Now Google can have the answer above the fold. They will have some ads on the side like they always did, but I have the thing I want; I am probably not going to drill into those other sites and see their ads too, ads more than likely served by Google. In effect it will be showing me something like 80% fewer ads. I am pretty cool with that. But the ecosystem around it is going to collapse, or at least be substantially curtailed. This will also subtract from what they can charge for ads, as they will be serving fewer of them.
So if I take both your points:
1. Google will have to eat its own lunch in order to force this change.
2. They will also have to slim down their ad space, as there will simply be less content real estate to place ads on when the way forward is an AI-style Q&A type of search.
Google will have to pivot and trim its bloat to effect this change. Will this be at the expense of "maximising shareholder value"?
> I’m not saying we all have to innovate or perish but how did our rules based order allow Google to get to this point.
Well, millions of people learned how to game the Google Search algorithm and created long-winded, hollow content that would rank on the first page, to the point that people no longer trusted Google's results.
Then the perfect technology to solve that exact problem came along: one that let Google cease its dependency on the pesky people it was sending traffic to.
Yes, it's true that all the clickbait and unoriginal content deserves to perish. But what about carve-outs for people putting money into original content, like a local news gazette with paid journalists? They need Google to be found, but they also can't afford to get scraped and regurgitated by AI.
Let's take a practical example. If you searched for, say, "What's the latest research on intermittent fasting and its effect on weight loss?", Google could easily AI-summarise a DOAC podcast on this topic and serve it up. How is that fair to Steven Bartlett, who put the money and time into an interview podcast? He is deprived of a potential subscriber, loses out on potential ad revenue, and can't recover his costs. The YouTube network he depends on is owned by Google. Seems a bit unfair to genuine people.
Yeah, I totally agree! But I think it's a reaction on Google's part to the SEO industry. I mourn the loss of independent media, the blogosphere, etc., and the trend toward everything becoming either more generic or more outrageous.
Yep. The SEO industry is the result of "let's hack the PageRank algorithm". Maybe a sensible use of AI for Google would be to differentiate between genuine and derivative content, and drop PageRank-based search.
While I agree that LLMs cannot replace humans and that a job cannot simply be reduced to a bunch of LLM tasks, they definitely do accelerate a human's potential to do more tasks efficiently. This is the key thing for the capitalists to exploit. For a job that required 10 humans, they will let go of 4 and equip the remaining 6 to take on the load of 10 with the assistance of LLMs. To be honest, this was done even before LLMs; now LLMs are just an alibi for cost cutting.
You still have 4 people who have been let go and will need to find a way to earn a living in this competitive market.
>they will let go 4 humans and equip the remaining 6 to take on the load of 10
This increase in productivity has been a huge and omnipresent factor in human progress and happiness. It's a very good thing to have happen. The only real drawback is temporary disruption when it happens (which I'm not discounting, it can be a huge disruption, even though it works out for the better in the long term).