"I’ve been working on a scientific project for 6 years... with Claude I was able to accomplish in 5 weeks what took me 6 years. I’m old... I estimate I have another 5 to 10 years and I’ll accomplish everything I want." Academic, Germany
"I live in a war zone... AI can not only give practical advice, but also emotionally calm me down during panic attacks. It can calm someone during a missile attack in one chat, and laugh with me about something silly in another. That’s what makes it not fragmented into a therapist/teacher/friend, but something whole." Ukraine
"If an AI had been in Stanislav Petrov’s position — the Soviet officer who prevented a potential nuclear war in 1983 — it would not have refused to launch." Academic, USA
"The humans in my life were telling me it was psychological. An AI chatbot was the only one who really listened and took me seriously — it pushed me to ask for specific tests... which came back 6 times higher than it's supposed to be."
> "The humans in my life were telling me it was psychological. An AI chatbot was the only one who really listened and took me seriously — it pushed me to ask for specific tests... which came back 6 times higher than it's supposed to be."
I can see this kind of survival-bias story distorting reality. Having millions of people ask for "specific tests" because an AI told them to seems problematic. One in a million will discover something, and that story will be enough to create the belief that it's "worth doing the test the AI says," just in case. But...
> which came back 6 times higher than it's supposed to be.
It has been proven that massive testing creates many false positives.
Tests may not be as reliable as thought, but they are good enough when other symptoms are accounted for. Randomly testing people based on AI hallucinations can increase the number of unnecessary medications or even interventions.
> I can see this kind of survival-bias story distorting reality. Having millions of people ask for "specific tests" because an AI told them to seems problematic. One in a million will discover something, and that story will be enough to create the belief that it's "worth doing the test the AI says," just in case. But...
This is a competition between public and private interests. A sick individual is going to lobby for tests until they discover the cause. From a public perspective, it might be cheaper to just let them die. AI is an advocate for the individual.
For the record, ChatGPT helped me diagnose a lifelong illness. I'm a new man now thanks to AI. Literally life changing. I had spent decades pleading for tests because no one could figure out the cause.

I think a likely outcome here is not necessarily 10,000x more tests performed, but similar or even fewer tests, because the diagnosis success rate with AI is higher. It's not subject to bias. People tend to be more honest and reflective with their AI than they are with doctors. They get 5 minutes to give the entire case to the doctor. With an AI they can spend weeks debating and reflecting. This builds a case history far more detailed and accurate than anything we have in modern medicine today, amplified by an order of magnitude because the AI can extract meaningful insights from the discussion.
In the very near future our AI will contact our GP for us. Soon after that, our GP will be our AI.
I’m not sure how you can come to the conclusion that AI is an advocate for the individual writ large. It seems that AI can just as easily be used to make algorithmic decisions on who receives care (based on symptoms etc.). Whether that’s an equalizing influence depends on the algorithm, training data, etc.
The models could be designed that way, but we don't have evidence that they have been designed that way today. If that were to occur in future, I'm sure people would seek out impartial models.
> From a public perspective, it might be cheaper to just let them die.
You missed the point. More tests can be detrimental to the patient's health, as they increase the risk of unneeded medication or surgery. Also, many tests, like x-rays, carry their own risks. Doing them for the sake of it increases overall mortality.
So not over-testing is not just cheaper but better for people's health.
Yeah I see that there can be a false positive/negative issue too.
For instance, allergy tests have a false positive rate of ~10% and a false negative rate of ~48%. So you really need a MD (or AI) to help tease things out there.
But I'll push back here a bit. Taking random tests will of course put you at the mercy of statistics. I think this is where AI will actually really help. The tests it'll have you take are not random any more than an MD's tests are (okay, maybe a tad more?). Instead, the AI's testing strategy will be broader than an MD's. Combine the experience and physical presence of the MD with the deep 'knowledge' of the AI, and I think that centaur is a lot more potent.
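The false-positive concern upthread is really a base-rate effect. A minimal Bayes sketch (with assumed, illustrative numbers, not the allergy-test figures quoted above) shows how even a decent test, applied indiscriminately to a rare condition, yields mostly false positives:

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value: P(has disease | test positive), via Bayes' rule."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# A fairly good test (90% sensitive, 90% specific) applied to everyone
# for a rare condition (1-in-1000 prevalence): under 1% of positives are real.
print(round(ppv(0.001, 0.90, 0.90), 3))  # → 0.009
```

The same test ordered only when prior symptoms already suggest the condition (say, 50% prevalence in the tested group) gives a PPV of 0.9, which is the statistical case for targeted rather than mass testing.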
I don't know about survival bias. LLMs are well suited to this task of taking in this cloud of soft data like a description of symptoms and spitting out a potential diagnosis.
They're good at acting as a "reverse dictionary" like this, where you give it a description of something and it knows the word for it. They have approximate knowledge of many things.
> I don't know about survival bias. LLMs are well suited to this task of taking in this cloud of soft data like a description of symptoms and spitting out a potential diagnosis.
And it will do so confidently and incorrectly. A single description of symptoms from a patient is very unlikely to be enough. This is why doctors are there to ask follow-up questions and do examinations. Symptoms alone can describe a dozen different illnesses.
> I can see this kind of survival-bias story distorting reality.
That was my take on the entire report, which I think points to an inherent bias within the data and stories. You have the entrepreneurial stories, then you have the ones where people are both impacted and receiving benefits.
The infographics and charts even call out how countries that are "first-world" with fewer safety nets are more likely to be in "survival" mode compared to countries with them.
The bit from George Carlin's standup routine about how the poor are there just to scare the hell out of the middle class rings true in this reflection. Poorer countries accept their current realities, and the feedback reflects the hustle. Richer countries with safety nets reflect the existential issues of previous industrial revolutions. Richer countries without safety nets reflect the fear that their efforts will be made "replaceable" by AI.
As for the rest, massive testing creating false positives: that is an issue of implementation and the errors introduced by humans, not the data itself. If the process were in large part automated, it could screen for a larger panel of issues at lower cost.
From my experience working deep in data and human factors: the issue in quantifying the root cause isn't reality (we live a shared experience, in general); the issue is that the data isn't good enough. What bugs us about it is the psychology: our perceptions differ enough that we will fight to prove an unknown.
It's important to note that doctors are also humans, and humans are squishy in every sense of the word. Their brains are squishy: they take in a ton of information and distill it down to decisions without anyone understanding how they arrived at them.
The fact that I'm young-ish and healthy-looking, with good skin and hair, leads many doctors to outright dismiss me. Never mind my history of cancer and the undeniable fact that I am obviously not healthy. But I can also use the squishiness to my advantage. I talk confidently, I push back, and that works. It sort of short-circuits a lot of doctors' brains.
> "It’s not healthy to love someone or something that can’t tell you no." - Not Currently Working, United States of America
> "Instead of AI doing my chores, AI does the stuff that I love—in two minutes, without any passion." - Student, United Kingdom
> "I used to write songs for my kids. Now I have [AI music product] make them for me. I used to write poems for those I loved... I used to bust my brain doing research, and now I get a research summary that is better... but I didn’t learn the paths in between. And yet, I use it because I have to pay off my house, pay off my land, and feed my little kids so I can find an hour on Saturdays to do something meaningful with them." - Software Engineer, United States of America
> "I believe AI is likely to kill me and everyone I love… building an AI that’s smarter than us before we’ve figured out how to keep it under control will likely destroy everyone and everything they value." - Software Engineer, United Arab Emirates
This was one of the highlighted quotes:
> "I’ve been told I’m ‘too much, treatment resistant, complex’ by providers. Within six months of working alongside AI, I was able to understand my own inner world in a way I never could before. I was doing creative writing again after quitting for two years. I developed hope again — that’s the through line." - Healthcare Worker, United States of America
A healthcare worker outsourcing their own treatment to an LLM that won't tell them no is terrifying.
> "I’ve been working on a scientific project for 6 years... with Claude I was able to accomplish in 5 weeks what took me 6 years. I’m old... I estimate I have another 5 to 10 years and I’ll accomplish everything I want." Academic, Germany
There's always something off about claims like this. I'm not claiming that AI can't speed up your processes, but I question a person's expertise when they claim months or years of work turn into days or weeks. It just doesn't make sense to me.
"My output is like 25x what it used to be. I’ve built over 20 backend server tools, 7 major projects in the last 6 months—my work output this year is greater than the last five combined. I can typically finish a significant project in a day or two."
"If an AI had been in Stanislav Petrov’s position — the Soviet officer who prevented a potential nuclear war in 1983 — it would not have refused to launch." Academic, USA
I am not sure this would be true, given how AIs have refused to kill processes.
If AI is programmed to always serve its makers as some are arguing, then it would certainly become true.
> "If an AI had been in Stanislav Petrov’s position — the Soviet officer who prevented a potential nuclear war in 1983 — it would not have refused to launch." Academic, USA
For the record, Petrov made this decision based on a false assumption: that the US wouldn't launch just a few missiles, but would instead send a lot, all at once. Except that one of the US plans was to send a few missiles to destroy critical targets and then follow up with a large-scale attack.
Petrov himself said that he might've acted differently had he been aware of this possibility. And even then, his initial hesitancy was basically a 50/50 gamble.
An AI would basically do the same thing if asked: just roll a random number and launch nukes below a threshold, adjusting the threshold based on some LLM evaluation of the situation if needed.