Yes, my thoughts exactly. Productivity by definition creates things, hopefully valuable things. Is all the extra burn on chatbots worth the cost? Has Uber somehow gotten dramatically more efficient and effective due to this massive budget overrun? Or have they just given people shiny and expensive ways to push the same work around?
That sounds like a hack from late-game Factorio: pollute enough that you can just pull iron filings right out of the air. Everyone wins! Except the meatbags who need to breathe the air …
This article is filled with emotional triggers designed to drive engagement. Even the title. It can be hard to separate those things from objective facts.
Putting an LLM in front of it helps me focus on the facts.
There are also too many things to read. Before LLMs, my default would have been to ignore this article.
At least now I learned some things (mostly about the Gallup poll, which had source data).
I do think some people will outsource critical thinking to LLMs - but they also amplify critical thinking by doing a lot of the filtering and organizing, letting me focus on the things I think are important.
> This article is filled with emotional triggers designed to drive engagement. Even the title. It can be hard to separate those things from objective facts.
> Putting an llm in front of it helps me focus on the facts.
This argument reminds me of one of Ted Chiang's short stories about "lookism," which (iirc) was the natural tendency to favor people who are attractive. In the story, a new technology was developed that could interact with a person's brain to "turn off" their lookism and instead just consider what a person brings to the table, without your brain factoring in your own attraction to them.
I won't spoil the story, but a little arms race develops in the technology to "turn off" natural human reactions to things like attraction, emotion in speech, etc., so that users won't be swayed by them in advertising, political campaigns, anything that could possibly have an agenda. By the end, people using the technology are described as highly autistic – unable to perceive any human emotional context, triggers or attraction – so that they're able to interpret just a person's intent and not be manipulated by the underlying motivations.
It's an interesting story, and your use of LLMs to cut out the "emotional triggers" from an article and get just the "objective facts" reminds me of it.
> Putting an llm in front of it helps me focus on the facts.
This used to be a very important skill taught in high school and perfected in university. We have lost something if people cannot focus even for short reads.
Years ago I had my blood pressure taken by a nurse; this was when they did it manually, squeezing the pressure cuff bulb by hand and listening with a stethoscope. The doctor came in later, saw the numbers and frowned, and took my pressure again. She (both were women) ended up with a reading much more within my normal range.
I asked, joking, “So are you just better than her?” “No,” my doctor replied, “She’s better. She gets more practice. I have a better stethoscope.”
The pressure cuff + stethoscope combo is called a sphygmomanometer. It's a pretty fascinating piece of technology: A heartbeat is only audible in the earpiece when the cuff is compressing between someone's systolic and diastolic pressure.
To use it, you inflate the cuff until you no longer hear a heartbeat in the earpiece, then start releasing pressure slowly. As it comes down, note where on the dial you first hear the heartbeat. That's the systolic pressure. Keep listening, and note where the heartbeat disappears. That's the diastolic pressure.
And if you use a mercury sphygmomanometer, you can actually see those pulses appear and then disappear. (It's harder to see them with a gauge-based one.)
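The procedure above can be sketched in a few lines of code. This is just a toy model with made-up numbers (the `audible` check and the 120/80 values are illustrative assumptions, not real signal processing): Korotkoff sounds are audible only while cuff pressure sits between the systolic and diastolic pressures, so the reading falls out of noting where sound starts and stops during deflation.

```python
def audible(cuff_mmhg, systolic=120, diastolic=80):
    """True when a heartbeat would be heard through the stethoscope
    (toy model: sounds occur only between diastolic and systolic)."""
    return diastolic < cuff_mmhg < systolic

def read_bp(start=180, step=2):
    """Deflate the cuff in small steps; record where sounds appear
    (systolic) and where they disappear (diastolic)."""
    systolic = diastolic = None
    pressure = start
    while pressure > 0 and diastolic is None:
        if audible(pressure):
            if systolic is None:
                systolic = pressure   # first audible beat
        elif systolic is not None:
            diastolic = pressure      # sounds just stopped
        pressure -= step
    return systolic, diastolic

print(read_bp())  # → (118, 80)
```

The step size matters just like deflation speed does in practice: release the cuff too fast (a large `step`) and you overshoot the true systolic reading.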
I'm an anesthesiologist; we will sometimes use a pulse oximeter below the cuff as a quick estimate. With practice you can estimate SBP to within 5 mm Hg or so, which is more than enough for our needs.
I have a much higher BP when I first go to the office than after I'm sitting in the exam room for a bit.
Usually they call me back to the hallway where they check my weight, then have me sit in a chair and check my temperature, pulse ox and BP, with maybe only a minute sitting down before they do the BP check. My BP is usually in the "hypertension" range there.
But, if they come back to the exam room after I've been sitting in that quiet room for 5 or 10 minutes and check my BP, it's almost always in the "normal" BP range (same as what I see when I check it at home).
Doctor calls it "white coat hypertension", I call it "rushed BP check in the hallway".
Then you will notice when your HCP ignores those instructions, like wrapping the cuff around your shirt-sleeve, prompting you to talk while the measurement is taken, or allowing you to sit with your legs crossed.
BP monitors are often poorly calibrated. The instructions for my home monitors suggest bringing the device into the clinic for calibration, and then the clinician says "we don't do that!"
Manual sphygmomanometer readings won't have an automatic digital readout, and require the human HCP to interpret, announce and record the numbers.
If the nurse got a reading well outside normal range she should have repeated it to confirm, especially if it was inconsistent with your overall presentation.
It is happening, just as planned and predicted. Last I checked every so-called AI company is swimming in cash, and so are their founders and leaders. You didn’t think the economic surplus would be evenly distributed, did you?
I was talking about OpenAI, referencing their original mission statement. Other companies have said similar things.
Rugpull means they "pulled the rug out" from underneath those who believed their mission statement, and decided instead to take over the world and screw everyone else.
I’ve been in tech and medicine too. Consider that any “HUGE” effect in this context is likely exaggerated, especially for something as prosaic as a note-taking assistant.
As a patient sitting with a doctor, I don’t care how standardized the notes are. I don’t care about anyone’s NPS score. I do want the doctor to connect with me, but I also remember not too long ago when doctors did this anyway, without any assistance from robots.
> I also remember not too long ago when doctors did this anyway, without any assistance from robots.
Or with assistance from other humans.
The last time I had surgery, every time I met with the surgeon (about six times), he had an intern following him around with a Thinkpad, typing in everything said.
The intern has the ability to understand context, idiomatic expressions, emotion, and a dozen other important and useful things that an AI transcription will never capture.
That’s probably not an intern. Doctors with enough pull can get dedicated scribes like this, but they aren’t cheap, which is why most doctors don’t get them.
> I’ve been in tech and medicine too. Consider that any “HUGE” effect in this context is likely exaggerated, especially for something as prosaic as a note-taking assistant.
Imagine your doctor, head down, writing down everything you say. Now imagine your doctor looking you in the eye and listening intently. Which do you think feels better to the patient? That is "huge". Anything that helps improve patient care with little effort and cost IS HUGE to us. That feeling of the doctor being present and invested helps patient outcomes. THAT is also huge, even if it's a few percent.
We're healing people, we're not looking for a unicorn startup, a few percent improvement IS HUGE to us.
> As a patient sitting with a doctor, I don’t care how standardized the notes are.
Yes you do: better notes mean better care, because the next time you're seen your records are clean, understandable, and compliant with regulations and best practices. Better notes mean doctors are following protocols. Better notes mean fewer claim rejections, and fewer claim rejections means less money wasted arguing with insurance companies. Better notes also mean the data is more easily used for research, which leads to new treatments and better outcomes.
> I don’t care about anyone’s NPS score.
Ever had a doctor with a bad bedside manner? Missed a diagnosis? Skips appointments on Fridays? Tracking NPS scores can help with that. Every data point is useful, and patient satisfaction is massive.
> I do want the doctor to connect with me,
Ok, well, most people DO want this, most people DO want to have a good relationship with their doctor where they feel heard and cared about rather than just another widget on a conveyor belt.
> but I also remember not too long ago when doctors did this anyway, without any assistance from robots.
I also remember when doctors weren't constantly overruled by insurance companies. Ever heard of a Prior Auth? That's when your doctor writes a prescription or an order and then the insurance company makes the doctor call them back and say "yes, I did this on purpose, yes the patient really needs this." Then a bureaucrat at the insurance company will decide if the doctor is right or not. Usually those bureaucrats aren't even doctors. That's illegal, but happens every day.
Anything I can do to help my doctors provide better care for our patients, I'll do. I've dealt with scribes for 12 years and I genuinely think these AI scribes are an amazing use of the technology. We don't have to hire human scribes, and our doctors are freed up to deal with the patient thanks to a documentation helper.
I evaluated quite a number of these tools before we rolled any out; I've been researching them for two years. Dragon with Copilot is not a good tool, for example. There was another we evaluated; I just did a search on them, and their story today is wildly different from what it was 18 months ago, when I discovered they were lying through their teeth about the tech. I see they claim to have secured a $70m round in 2024 (which I know is a lie) and more since, so maybe they can actually do what they say now, but I couldn't trust them, so I kept evaluating.
I'm not an AI truster, AI isn't a panacea, but it DOES have uses, and this is one I've seen make a positive difference. I'm not an insurer, I work for providers, my goal is helping my docs provide the best care, so I promise I'm not going to roll out bullshit tech or things that would endanger our patients. My reputation is on the line, and I take that incredibly seriously too.
A decade seems good to you? We’re still just talking about heat and pressure, well-understood problems. There’s no excuse for a machine like this not to outlive the original owner. Anything else is planned obsolescence or a manufacturing defect.
I’ve had similar experiences. After watching it for a decade, I think it’s mostly over-active pattern recognition combined with a flood of incoming information. I believe I’m careful with the information I consume, but compared with 25 years ago it’s literally orders of magnitude more.
IOW, maybe, it’s easier to find a needle in a haystack if you have a magnet (brain with pattern recognition) and live in a blizzard of haystacks (online today).
I feel like we had a golden opportunity, years ago, to do something about Ticketmaster. In 1994 Pearl Jam, one of the biggest bands in the world at that point, boycotted and sued Ticketmaster. I wished at the time more bands had stood up and said, “Enough.” It would have worked.
But it’s easy to scare an individual artist, or make them feel like they’re locked into a contract, and fame is such a precipice. I suppose that makes it hard for them to work together for their own good.
Ironically, artists sometimes complain about Ticketmaster and its stranglehold, but again, it takes some special bravery to actually do something about it.