
> An AI will not pick up on any of that.

It will if it trains on data like that. It's all about the training data.


Unfortunately the training data is absolute garbage.

Diagnostic standards in medicine (at least in emergency medicine, but I think in other specialties too) are largely a joke -- ultimately it's often either autopsy or "expert consensus."

We get to bill more for more serious diagnoses. The number of patients I see with a "stroke" or "heart attack" diagnosis who clearly had no such thing is truly wild.

We can be sued for tens of millions of dollars for missing a serious diagnosis, even if we know an alternative explanation is more likely.

If AI is able to beat an average doctor, it will be due to alleviating perverse incentives. But I can't imagine where we could get training data that would let it be any less of a fountain of garbage than many doctors.

Without a large amount of good training data, how could AI possibly be good at doctoring IRL?


You just get 1M doctors to wear body cams for a year. Now you have a model that has thousands of times your experience with patients, has encyclopedic knowledge of every ailment (including ones that never present in your geography), has read all the latest papers, etc.

I don't understand how you think this doesn't win vs a human doctor.


This wouldn't solve the problem of diagnostic standards. Let's say you are a pediatrician and want to predict which kids with bronchiolitis will develop respiratory failure and need the ICU versus the ones who can go home. How do you determine from the body cams which kids had bronchiolitis in the first place? Bronchiolitis is a clinical diagnosis with symptoms that overlap with other respiratory illnesses such as asthma, bacterial pneumonia, croup, foreign body ingestion, etc.

You would have footage of the doctors diagnosing them. I don't understand what you're asking. The body cams have microphones too, in case that wasn't clear.

How is training on bad data going to give you better results than the current system?

What kind of embedding helps the AI learn to do a physical exam?

Not to mention patient privacy: I can't even take a still photo of a patient in my current system (even with a hospital-owned camera).


In healthcare, HIPAA/GDPR equivalents would block this. Let's be realistic in our discussion; this is not the same as Google buying up a library's worth of books, scanning them, and destroying them.

There are other countries, and the patients in them all have similar data.

Other countries actually don't necessarily have a similar mix of ailments, median patient appearance, or style of communication, or even the same recommended courses of action, and most of the ones with more sophisticated medical care also have strict medical privacy laws. If you're genuinely unaware of this, I'm not sure you're in a position to be making "one year with a camera, how hard can it be" arguments...

(Where AI is likely to actually excel in medicine is parsing datasets that are much easier to do context-free number crunching on than ER rooms, some of which physicians don't even have access to...)


I think you're being silly if you think the amount of money at stake here, not to mention the health of billions of people, is going to be stymied by privacy laws.

To give this more credit than it perhaps deserves: training aside, getting the situational data into the context is a more significant problem here.

Pt's chart is complex/wrong? Gotta ingest that into context.

Chart contains images/scanned and not OCR'd text? Gotta do an image recognition pass.

Diagnosis needs to know what the pt's wearing (i.e. radiation badge)? Gotta do an image recognition pass.

Diagnosis needs to know what the weather's like? Internet API access of some kind. Hope the WAN/API are all working! If they're not, do you fail open or closed? (A sketch of that choice follows below.)

Patient might be lying? Gotta do video/audio analysis to assess that likelihood--oh, and train a model that fully solves one of the holy grails of computer vision/audio analysis reliably and with a super low false-positive rate before you do. And if it guesses wrong, enjoy the incredibly easy-to-prosecute lawsuit.

Patient might be lying, but the biggest clue is e.g. smell of alcohol on their breath? Now you need some sort of olfactory sensor kit and training for it--a lot more than just "low quality body cam and a mic".

Patient's ODing on a street drug that became abundant in the last few months? Gotta somehow learn about recent local medical/police history that post-dates the training set, or else you might be pouring gas on a fire if you give them Narcan. And that's assuming you know enough to search for information about that drug, and that they didn't lie to you about what they took. Addicts never do that.
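
To make the fail-open/fail-closed question concrete, here's a minimal sketch; the endpoint, helper name, and parameters are hypothetical, not anyone's real clinical system:

    import requests

    def fetch_weather(city, fail_open=False):
        # Hypothetical external lookup feeding the diagnostic context.
        try:
            resp = requests.get(
                "https://api.example.com/weather",  # placeholder endpoint
                params={"city": city},
                timeout=2,  # a hung WAN call must not stall the whole pass
            )
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if fail_open:
                return None  # fail open: diagnose without weather context
            raise  # fail closed: abort and escalate to a human

Either branch is a liability tradeoff: fail open and you might miss an environmental factor; fail closed and the whole pipeline stalls on a flaky network.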

Failures in each of those systems bring down the chance of an effective diagnosis, so they need a fairly obsessive amount of model introspection/thinking/double-checking, and humans on standby as a fallback if the AI's less than confident (assuming that LLMs can be given a sense of a confidence level in the future, versus the current state of the art of "text-predict a guess about what your confidence level might be").
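
As a back-of-the-envelope illustration (the numbers are made up, and the assumption of independent failures is generous):

    from math import prod

    # Toy example: six chained subsystems, each 95% reliable.
    subsystem_reliabilities = [0.95] * 6
    overall = prod(subsystem_reliabilities)
    print(f"End-to-end reliability: {overall:.1%}")  # ~73.5%

Every extra ingestion or recognition pass multiplies in another failure rate.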

Put that all together, and even with the AI compute speed available years from now and a perfectly trained futuristic model that's preternaturally good at this stuff, I'm not sure that the reliability and, more importantly, the turnaround time of that diagnostic pass are going to be any good compared to a human ER doc.


The user will be adversarial and will probably learn new tricks to trick the machine; this is not solvable (only) via training data.

We have that expression: "garbage in, garbage out."

My sense is that doctors and AI would be doing a lot better if they were just doing medicine, not being a contact surface for failures of housing, mental health and addiction services, and social systems. Drug seeking and the rest should be non-issues, but drug seekers are informed and adaptive adversaries.


I visited and got the 401, but that doesn't mean whatever triggered it isn't automated.

The reasonable assumption to make when something changes is that it had nothing to do with me. Because 99.99999% of the time, it didn't.


I dunno, if they got ID #15, and the site shut down immediately after (for everyone), it doesn’t seem like a crazy stretch.

Like, if a page gets hundreds of thousands of visitors, then your assumption is reasonable. For a page that might get dozens of visitors over its lifetime, it's a much less certain assumption.


It's unlikely, in my opinion as someone who maintains a lot of websites, because it's long odds that I'm even at my desk at any given time, let alone monitoring and panicking over what visitors are clicking on.

Is it possible that it happened that way? Sure. But it's more likely that it didn't.


Do you run any honeypots? You realise the point of a honeypot is, unlike a normal website, to monitor exactly what visitors are clicking on so the trapper can react?

They were supposed to shut down after #12, but they got busy, then had to take that day off to get the kids to the doctor, and it fell by the wayside. Eventually, the notification for #15 arrived and the dev panicked that it should have gone down weeks ago.

Why would they have Cloudflare Turnstile? Are they worried about getting DDoS-ed?

Cloudflare has successfully made its products so common that people use them without giving a second thought to whether or not it makes sense.

Coming back to point out that Cloudflare is probably the most common way of hiding your server's IP if you are running a greyzone or illegal service, and it's useful for running many websites on the same VPS without reverse DNS busting you.
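
The reverse-DNS point is easy to demonstrate; a minimal sketch (the IP is a documentation-range placeholder, not a real server):

    import socket

    ip = "203.0.113.7"  # placeholder address
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # PTR lookup
        print(f"{ip} resolves back to {hostname}")  # can expose the host
    except socket.herror:
        print(f"no PTR record for {ip}")

With Cloudflare proxying, visitors only ever see Cloudflare's IPs, so a lookup like this never points at the origin server.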

Sure, but that has nothing to do with Turnstile. You turn it on when your site is getting hammered by bots, which seems odd for that particular site.

DDoS websites get DDoSed by their competitors all the time.

To get credibility.

That's interesting, but how is anyone supposed to prove it? They would have to get their hands on your prompts.

Leaks, whistleblowers. Some circumstantial evidence will also do if there's enough of it, like hallucinated parts of the code that do absolutely nothing and can't be explained as, e.g., leftovers from a refactor.

> They would have to get their hands on your prompts

Unless you are running a local model, your prompts are almost certainly logged by your inference provider, and would only be a subpoena away?


This is going to be the most important job going forward: the guy in charge of making sure production secrets are out of CC's reach. (It's not safe for any dev to have them anywhere on their filesystem.)

Are you a Batman villain by any chance?

To soak them in water. And then toss the water (very important). If you try to use the water from soaking you will regret it.

> To soak them in water. And then toss the water (very important).

I always do that, but I wonder if the companies that can them do it too.


But cyberpunk is the best kind of dystopia!

Sorry for my foul language, but I think we will turn into cybershit if things go bad.

SpaceX was profitable before the xAI thing happened. Now I imagine they're way in the red.

I've been thinking the opposite. It sucks to be in the generation of workers that are displaced by AI. It's going to be great to be in the generation where work just isn't something that humans are expected to do.


That's what the whole UBI thing was about though. People did see this coming and wanted to preempt it. I'm not sure whether it would've worked, but people did try to come up with solutions for this transition period.


There's still plenty of time to figure it out. You're making it sound like it's already too late.


I really wouldn't want to be in the post-mass-employment era as part of the class with no economic or military power, totally dependent on handouts.


Yes, because you think of it as a handout. But the generation born into it will think of it as an entitlement.


We are never going to live in a society that doesn't expect people to work. There may not be enough work for half the population, but people will still be expected to work to live. We already live in a society that could feed every last poor person and we still choose not to, cuz "but muh tax dollars!"


I mean, assuming we don't hit some limit with AI, we're going to get to the point where the best way humans can affect productivity is to just get out of the way.

