There's a video by Siliconversations [0] about it. Medicine is first and foremost limited by high-quality data, not intelligence. If OpenAI built a superhuman AGI tomorrow, it would not change a thing about the state of cancer treatment, at least not for a while.
Trying to design a cancer cure by setting a trillion dollars alight on AI is like trying to achieve UBI by funneling citizens' taxes into Polymarket and hoping the winnings fund a free supermarket.
I don't think the above poster is talking about finding novel treatments, but rather about aiding in diagnosis and navigating existing treatment options.
We always wish that our doctors would stay up to date on all of the current medical literature as they practice, and some of them do. In theory, AI systems could greatly accelerate a person's ability to retrieve and extract insights from the current body of knowledge.
Of course, that is highly fraught, but, in theory, I think I see what they're going for.
How can we be sure of that when we don't even know what improved "intelligence" might look like in this context? Especially given the increased importance of "big data" (genomics, proteomics, metabolomics etc.) to the field and the sheer amount of obscure data that's currently buried in all sorts of archival sources and might be resurfaced with some "intelligence".
Yes. But unfortunately that domain suffers from ambiguity which LLMs are bad at.
Medical treatment has never been about asking questions and getting perfect answers. Excellent doctors and nurse practitioners have a great intuition for which questions to ask based on cues during patient assessment.
What exactly does “personalized medical treatment” entail?
Writing prescriptions?
Ok, I can see how AI could theoretically do that (assuming it doesn’t hallucinate and kill a bunch of people). Oh and don’t think it’ll be so easy to give AI the legal authority to prescribe controlled substances. And insurance companies may take issue with expensive prescriptions written by a chat bot.
Perform surgeries? Stitch wounds?
That’s decades away. And that also opens a legal can of worms. Maybe the AI lawyers can figure something out.
There is a lot of demand still coming, for sure, but I think I'm more optimistic. Ready to eat my hat on this, but:
- higher prices will result in huge demand destruction too. Currently we're burning a lot of tokens just because they're cheap, but a lot of heavy users are going to spend the time moving flows over to Haiku or on-prem micro models the moment pricing becomes a topic.
- data centers do not take that long to build. There are probably bottlenecks in weird places like transformers that will cause some hiccups, but Nvidia's new stuff is way more efficient and the overall pipeline of capacity coming online is massive.
- we will probably still see more optimization at the harness level: better caching, a better mix of smaller models for some uses, etc.
These companies have so much money, and at least Anthropic and OpenAI are playing for winner-take-all stakes, with competition from the smaller players too. I think they're going to keep feeding us for free to win favour for quite a while still.
I agree and I am amazed at how much money some individuals and also a friend's company burn on token costs. I get huge benefits from this tech just using gemini-cli and Antigravity a few times a week, briefly. I also currently invest about $15/month in GLM-5.1 running Hermes Agent on a small dedicated VPS - fantastically good value for getting stuff done and this requires little of my time besides planning what I need done.
I think the token burners are doing it wrong. I think that long term it is better to move a little slower, do most analysis and thinking myself, and just use AI when the benefits are large while taking little of my time and money to use the tools.
I'm not sure if it's a correct impression, but my impression is still that AWS is the "devil you know" and Cloudflare is less predictable, with more individual decision-making from higher-ups.
I guess they got that reputation years ago when the founders (?) got into public spats about what they would and wouldn't host. AWS is more lawyers and committees and seems more anonymous, so people don't necessarily like it more but they do trust it to be what it looks like more.
Cloudflare will predictably shut down your account until you pay $150k. They will not transfer out any of your domains or files - they will be inaccessible until you pay $150k.
There have been stories about people with heavy internet traffic (generally media streaming I think) being more or less shut down unless they upgrade their cloudflare plan (to enterprise I guess). Some were posted on HN in the past.
I think I'm saying the opposite on point 3. He has no _obligation_ to us and has full rights to 'take away' as he sees fit, but we still have the right to give our opinion about that process, and to make comparisons and contrasts with other similar products that are run differently
If you think somehow publishing FOSS means you get some right to decide how people use it, or anything besides the licensing of the code, you severely misunderstand what exactly FOSS is about.
Author here. I was actually surprised to learn this too. I reached for Ruby and Django as examples of non-commercial frameworks, and before writing this I didn't know about the $1M backing either.
I guess I'd have a hard time turning down that kind of money for something I cared about, so no judgement on the creators who make these choices, but I do think it's something we need to understand the effects of as community members.
This overlap is frequently a threat to many people here. For whatever reason, unlike every other profession on earth, writing about the thing you're an expert/interested in while making money from it (or even the potential to make money) is frowned upon. Disregard it.
I set it up and had some fun, but it was super janky and regularly broke, especially the WhatsApp integration.
Now I have a separate plugged-in MacBook running NixOS (that Claude set up) and a single long-running Claude Code process with a channel to a Telegram bot. This means I can talk to it much like I could with OpenClaw, but it's much simpler (no weird soul.md etc). It feels more powerful than just Claude Code directly, as it can set up software, build me throwaway websites with research, and "do" things. But it's a lot more stable and feels more controllable, because I understand how it works and don't have to worry about it signing up to some social media platform and getting poisoned by another claw.
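For anyone curious what that wiring roughly looks like: a minimal sketch of bridging a chat channel to a long-running CLI process over its stdin/stdout pipes. The Telegram side is left out (it would just call `relay()` from a message handler), and the agent is stubbed with `cat` here; in the actual setup it would be the interactive `claude` process. All names below are illustrative, not from any real bot framework.

```python
import subprocess

def start_agent(cmd):
    """Spawn the long-running agent process with line-buffered text pipes."""
    return subprocess.Popen(
        cmd,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
        bufsize=1,  # line-buffered, so each message flushes promptly
    )

def relay(proc, message):
    """Forward one chat message to the agent and return its reply line."""
    proc.stdin.write(message + "\n")
    proc.stdin.flush()
    return proc.stdout.readline().rstrip("\n")

# Stub the agent with `cat` so the plumbing is demonstrable offline;
# a Telegram handler would call relay() and send the result back to chat.
agent = start_agent(["cat"])
print(relay(agent, "hello agent"))  # cat echoes the line back
agent.stdin.close()
agent.wait()
```

The nice property of this shape is that the bot layer stays dumb: all state lives in the one long-running process, which is what makes the setup feel simpler than an orchestration framework.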
The self-hosted jank is real. atmita.com goes cloud-native instead (not based on OpenClaw, built from scratch), so there's no pi or mac mini in the loop. Managed OAuth covers the usual apps, and it's API-first so you can wire in anything else directly.
it's not necessarily about people self hosting it, it's about people preferring to pay for hosted stuff that is open source (e.g. I pay for Plausible).
Now it's a lot easier to rewrite open source stuff to get around licensing requirements and have an LLM watch the repo and copy all improvements and fixes, so the bar for a competitor to come along and get 10 years of work for free is a lot lower.