Hacker News | aurareturn's comments

  Today, I’m going to argue that this is the sleeper story we’re not paying enough attention to, and I’ll also share a few suggestions for how to navigate this time of extreme volatility and risk.
Unless I'm living under a rock, everyone is already talking about the GCC's capital.

The core of Iran's strategy is to hurt the GCC financially which hurts the US financially. This is literally THE story. It's not the sleeper story. It's the main story and why stocks have been swinging like crazy.


  Luckily debt will be solved by the power of AGI, right? Just one more data centre! One more GPU! It can nearly write a basic three tier application with only 10 critical security vulnerabilities all by itself!
If you read the article, it says the default is directly related to the sell off of software stocks, which are heavy private credit borrowers.

What caused the SaaS apocalypse? Gen AI.

I'm long on AI hardware companies for this reason.


We're already seeing large software companies figure out that they don't need 5,000 developers. They probably only need 1,000 or maybe even fewer.

However, the number of software companies being started is booming which should result in net neutral or net positive in software developer employment.

Today: 100 software companies employ 1,000 developers each[0]

Tomorrow: 10,000 software companies employ 10 developers each[1]

The net is the same.

[0]https://x.com/jack/status/2027129697092731343

[1]https://www.linkedin.com/news/story/entrepreneurial-spirit-s...
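The back-of-the-envelope arithmetic above can be checked directly:

```python
# Total developer employment under each scenario from the comment above.
today = 100 * 1_000      # 100 companies x 1,000 devs each
tomorrow = 10_000 * 10   # 10,000 companies x 10 devs each
print(today, tomorrow, today == tomorrow)  # 100000 100000 True
```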


Don't count all those chickens before they hatch. There might be more started but do they all survive? Think back to the dot-com boom/crash for an example of where that initial gold rush didn't just magically ramp forever. There were fits and starts as the usefulness of the technology was figured out.

Why will we need 1000 companies tomorrow to do the same thing that 100 companies are doing today? If they are really so efficient because of AI then won't 10 companies be able to solve the same problems?

Because that car repair company with 3 local stores previously couldn't justify building custom software to make their business more efficient and aligned with what they need. The cost was too high. Now they might be able to.

Plenty of businesses need very custom software but couldn't realistically build it before.


I see no way that company would save more money by hiring an experienced developer compared to paying the yearly invoice on a COTS product doing the same thing today. The only way this works is with a severe wage-suppressing effect.

Off the shelf software could still cost thousands per year and I'm sure they don't do everything the shops need them to do.

Car repair companies won’t see a meaningful improvement to their bottom line with more custom software. Will it increase the number of cars per employee per day they can repair?

I do bespoke work like this, but mostly to replace software that’s starting to cost mid 5 figure amounts per year for a SaaS setup and the support phone line has been replaced by an LLM chat bot.

For the same reason there were more bank branches after the cost-per-branch was reduced.

Right now, software is really expensive; so 1) economics tends to favor large pieces of software which solve many different kinds of problems, and 2) loads of things that should be automatable simply aren't being automated with software.

With the cost of software dropping, it makes more sense to have software targeted towards specific niches. Companies will do more in-house development, more things will be automated than were being automated before.

Of course nobody knows what will happen; but it's entirely possible that the demand for people capable of driving Claude Code to produce useful software will explode.


What makes you think they'll be doing the same thing?

There are always more problems to be solved. Some of them just weren’t financially feasible before.

This is one of the key "inefficiencies" of the private sector: there might be one winner at the end of the day providing the product that fills the market niche, but there were always multiple competitors giving it a go in the meantime.

A recent example: Mitchell Hashimoto was pointing out that he wasn't "first to market" with his product(s), he was (at least) SEVENTH.


Almost tautologically it's not "inefficient" to do so, because free market economics has decided that all the attempts are mathematically worth it, for a high-margin low-marginal-cost product like software.

I'm a little lost as to why seven teams duplicating effort is more "efficient" in any sense of the word than one or two teams working iteratively toward the same goal.

If this were seven government-funded teams solving the same problem, people would lose their minds over the 'waste'. But when private companies do it, we call it efficient market competition. The duplication is the same; we just frame it differently.

Edit: fixed some typos caused by fat fingers on a phone keyboard


The benefit from having a 5% better product that hundreds of millions of people will use is worth the duplicated effort in the beginning. The numbers just make sense.

>If this were seven government funded teams solving the same problem

The problem here is "government funded" - the trials are not rationalized by free-market economics. That is, a 5% better product in the end would not be worth seven competing developments initially.


> The benefit from having a 5% better product that hundreds of millions of people will use is worth the duplicated effort in the beginning. The numbers just make sense.

This assumes that the duplicated effort arrives at a solution that is better than if it were done by a single team.

> >If this were seven government funded teams solving the same problem

> The problem here is "government funded" - the trials are not rationalized by free-market economics. That is, a 5% better product in the end would not be worth seven competing developments initially.

I think you're saying that 5% is worth it when the free market does it, but 5% gain isn't when the government does it?

I'm hoping you're not, because that's impossible: the end result is precisely the same.


> The duplication is the same

It is not. Seven teams all working under one leadership is quite different to seven leaderships each working with one team.

When different governments (e.g. USA and USSR), and thus different leaderships, are both trying to solve the same problem (e.g. travel to the moon), that too is considered efficient competition.


Oh, so seven /leaderships/ is what's made the difference?

If a government did this (e.g., seven independent agencies competing for a moon landing), people would call it "fragmented," "uncoordinated," and "bureaucratic infighting."


Seven independent government agencies are still an arm of the same leadership.

When complete organizational separation is introduced, the concerns you speak of go away. In the USA, the ARPA (you might recognize that name from the thing you're using right now) program regularly enables "seven" independent leaders to tackle a problem and this is widely considered a resounding success.


No true Scotsman

Remember, when it comes to government — at least a democratic one — the people complaining are also the leadership. Think about it from their perspective:

- If they do a good job with leadership, only one team will be necessary. Anything else is truly a waste.

- If they do a poor job with leadership, every team will fail. Any more than one is also truly a waste[1].

The latter is the most likely outcome, of course. Now, when you absolve yourself from the process then those points still apply, but now you have several leaders duking it out to see which one doesn't fail. But, for the same reasons, those leaders each only benefit from having one team.

[1] You could argue that all teams are truly a waste, but one team is necessary to show that leadership failed. That brings abstract value, even if it fails to deliver the intended value. You don't know until you try.


If all people complaining are the leadership - then so are all the customers (potential, or otherwise)

The repeated movement of the goalposts here is only evidence of the no-true-Scotsman strategy being employed.


> If all people complaining are the leadership - then so are all the customers

Not necessarily. Unless you think a global democratic government formed overnight?

> The repeated movement of the goalposts here

Whatever it is you are reading in other threads has no relevance to this one.


> Not necessarily. Unless you think a global democratic government formed overnight?

This is a distraction. Whether it's 300 million voters or 300 million iPhone users, both groups act as the ultimate arbiter of value. If a customer stops paying, the "leadership" of a company fails. If a voter stops voting for one party, or the other, the "leadership" of a state fails. The mechanical result on the "seven teams" is identical: the unsuccessful ones are defunded.

Further, this proves the detachment from reality you are bringing to the conversation - everybody in the private sector knows the golden rule - your customers ARE your employers

THEY dictate what they will pay for, and therefore what can be sold (unless you are a fan of monopolies forcing people to buy things they do not want to)

> Whatever it is you are reading in other threads has no relevance to this one.

Your dishonesty only highlights your bad faith, and as such we are done here.


> If a voter stops voting for one party, or the other, the "leadership" of a state fails.

Political parties in democracy are quite literally labor unions. The people in them do not independently lead the state, they are merely employees, hired by the leadership. You know, that's what you host elections for — to choose which employee you want to hire from the set of candidates who want the job. They may act as sub-leaders within the capacity of their job, but they are not the top leaders we are talking about. "Leadership" here was never intended to be about "middle managers".

That seems pretty obvious, but perhaps this confusion is the source of your misunderstandings?

> and as such we are done here.

Done with what? Thinking other threads are related to this one? That is a good idea.


Do the booming companies pay the same as the ones who did layoffs? If you're laid off from Meta or other top tier paying company (the behemoths doing layoffs) you might have a tough time matching your compensation.

But do they need to? If a <role X> job at a top tier company making $600k is eliminated and two <role X> jobs at a "more average" company making $300k replace it; is that really a bad thing? Clearly, there's some details being glossed over, but "one job paying more than a person really needs" being replaced by "two jobs, each paying more than a person really needs" might just be good for society as a whole.

It doesn't seem too bad when you cherry pick an outlier example, but what about when the person making $100k now makes $50k?

I'm sure the retort of the AI optimist will be that AI will make the things that person buys cheaper, and there may be truth to that when it comes to things that people buy with disposable income...

But how likely is AI to make actual essentials like housing and food cheaper?


Are there that many people at top tier companies making 100k? I was under the impression that they were top tier because they paid really well.

There's likely going to be a separation between the top earners and the average.

I.e. if a top tier dev makes $1m today, they'll make $5m in the future. If the average dev makes $100k today, they'll maybe make $60k.

AI likely enables the best of the best to be much more productive while your average dev will see more productivity but less overall.


I think this is assuming that the labor market knows how to identify the direct value of devs. This already seems to be a problem across the board regardless of job role.

I think solo founders or small software companies where top tier devs can have huge ownership will be making top dollar.

Can you give an example of something a solo founder might now make top dollar on that he previously couldn't?

I think a solo dev can make a $1b company whereas it was impossible before.

Yes I understand but so far I don't know what such a company could look like, or even in what industry it would be.

The number of software companies being started is probably at least partially the result of people not being able to find a job and starting a company as a last resort.

I think this is true in the short/medium term, hence the confusing picture of layoffs but a growing number of tech roles overall. The limit may be just millions of companies with one tech person and a team of agents doing their bidding.

Maybe software engineers will be like your personal lawyer, or plumber. Every business will have a software engineer on call, whether it's a small grocery store or a kindergarten.

Previously, software devs were just way too expensive for small businesses to employ. You can't do much with just 1 dev in the past anyway. No point in hiring one. Better go with an agency or use off the shelf software that probably doesn't fill all your needs.


And the differentiator will be (even more than it is now) product vision since AI-enhanced engineering abilities will be more level.

Only because VC companies are throwing money at them. How many of them are actually profitable and sustainable long term?

Ah, so that explains why job growth is at a steady pace and the software industry hasn’t been experiencing net negative job growth the past year or so.

How silly of me to rely on reality when it’s so obvious that AI is benefiting us all.


I think you're being sarcastic? I'm not sure.

Anyways, this is the start. Companies are adjusting. You hear a lot about layoffs, but not rising unemployment. But we're in a high interest environment with disruptions left and right. Companies are trying to figure out what their strategy is going forward.

I don't expect to see a boom in software developer hiring. I think it'll just be flat or small growth.


I was being sarcastic.

We are in negative growth, and the current leadership class keeps talking about all the people they can get rid of.

Look at the Atlassian layoff notice yesterday for example where they lied to our faces by saying they were laying off people to invest more in AI but they totally aren’t replacing people with AI.


> We're already seeing large software companies figure out that they don't need 5,000 developers. They probably only need 1,000 or maybe even fewer.

Long-term, they will need none. I believe that software will be made obsolete by AI.

Why use AI to build software for automating specific tasks, when you can just have the AI automate those tasks directly?

Why have AI build a Microsoft Excel clone, when you can just wave your receipts at the AI and say "manage my expenses"?

Enjoy your "AI-boosted productivity" while it lasts.


> Long-term, they will need none. I believe that software will be made obsolete by AI.

I think this is a bit hyperbolic. Someone still needs to review and test the code, and for embedded systems I find full replacement unlikely.

For SaaS platforms you’ll see a dramatic reduction, maybe like 80% but it’ll still have a handful of devs.

Factories didn’t completely eliminate assembly line workers, you just need far fewer of them to make sure the cogs turn the way they should.


> Someone still needs to review and test the code, and if the code is for embedded systems I find it unlikely.

I feel like you didn't understand my comment. I am predicting that there is no code to review. You simply ask the AI to do stuff and it does it.

Today, for example, you can ask ChatGPT to play chess with you, and it will. You don't need a "chess program," all the rules are built into the LLM.

Same goes for SaaS. You don't need HR software; you just need an LLM that remembers who is working for the company. Like what a "secretary" used to be.


> I feel like you didn't understand my comment. I am predicting that there is no code to review. You simply ask the AI to do stuff and it does it.

I didn’t, and thanks for clarifying for me.

This doesn’t pass the sniff test for me though - someone needs to train the models, which requires code. If AI can do everything for you, then what’s the differentiator as a business? Everything can be in chatGPT but that’s not the only business in existence. If something goes wrong, who is gonna debug it? Instead of API requests you would debug prompt requests maybe.

We already hate talking to a robot for waiting on calls, automated support agents, etc. I don’t think a paying customer would accept that - they want a direct line to a person.

I can buy the argument that the backend will be entirely AI and you won’t need to be managing instances of servers and databases but the front end will absolutely need to be coded. That will need some software engineering - we might get a role that is a weird blend of product + design + coding but that transformation is already happening.

Honestly the biggest change I see is that the chat interface will be on equal footing with the browser. You might have some app that can connect to a bunch of chat interfaces that is good at something, and specializations are going to matter even more.

It was a bit of a word vomit so thanks for coming to my TED Talk.


> I don’t think a paying customer would accept that - they want a direct line to a person.

What the customer wants only matters insofar as they are willing to pay for it. Sure, I'd rather talk to a person... But I'm not willing to pay 100x as much for a service that's only marginally better. Same reason I don't fly first class, as miserable as coach is.

Someone may want to pay for a boutique human lawyer/banker/coder/professor, maybe as a status symbol, the same way people pay $20k for an ugly handbag. But I think most people will take the cheaper and almost as good option, when the difference in quality is far overshadowed by the difference in price.

> someone needs to train the models, which requires code.

I'm not sure that training llms is a coding problem, but it doesn't much matter: llms can train each other.

> If AI can do everything for you, then what’s the differentiator as a business?

Good question. My gut says there isn't: all money flows to the model providers, everyone else is a serf at best parasiting on someone else's model.


Good points. People might not pay 100x for something but it’s all about perceived value. Part of a successful business is to identify the perceived value, and find out your PMF while being different enough from the competition. It’ll be interesting to see how things play out, we are in such early days still.

We hate talking to robots because they are largely useless when we have anything out of routine. We love talking to robots when we would ordinarily wait 30 minutes for a 3-minute conversation.

Because AI agents are tool users. Why does AI need to research 2026 tax code changes and then try to one-shot your taxes when it can just use Turbotax to do it for you? Turbotax has the latest 2026 tax changes coded into the app. I'd feel much more confident if AI uses Turbotax to do my taxes than to try to one-shot it.
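A minimal sketch of the tool-use pattern described above, with a toy flat-tax "tool" standing in for TurboTax. `flat_tax_tool`, `run_agent`, and the flat-rate formula are all hypothetical stand-ins for illustration, not any real agent API:

```python
def flat_tax_tool(income: float, rate: float = 0.2) -> float:
    """Deterministic 'tool': a toy flat-tax calculation (hypothetical)."""
    return round(income * rate, 2)

# Registry of deterministic tools the model may choose from.
TOOLS = {"flat_tax": flat_tax_tool}

def run_agent(llm_decision: dict) -> float:
    """Dispatch the model's tool choice to deterministic code.

    In a real agent, llm_decision would come from the model's
    structured output; here it is supplied by hand.
    """
    tool = TOOLS[llm_decision["tool"]]
    return tool(**llm_decision["args"])

print(run_agent({"tool": "flat_tax", "args": {"income": 50_000.0}}))  # 10000.0
```

The point of the pattern is that the arithmetic happens in audited, deterministic code; the model only picks the tool and its arguments.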

> Turbotax has the latest 2026 tax changes coded into the app.

How does TurboTax implement the latest tax changes? My guess is that before the decade is over, the answer is "an LLM does it."


Yes but I’ll be glad to pay for human oversight at TurboTax.

Anyways, formulas are a lot better than one shot.


LLM technology will never achieve 100% accuracy in its output. There is an inherent non-determinism. Tasks that require 100% accuracy cannot be handled by LLMs alone. If an LLM is used to replace HR, it will inevitably do something wrong, and a human will need to be in the loop to correct it.

Same goes for chess, there will always be a chance that it makes an illegal move. Same goes for code, there will always be a chance that it produces the wrong code.

Maybe a new AI technology will be developed that doesn't have the innate non-determinism, but we don't have that now.
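One common mitigation for the non-determinism described above is to wrap the model's sampled output in a deterministic validator, escalating to a human when validation keeps failing. A minimal sketch, using a random stub in place of a real LLM and a hard-coded move list in place of a real rules engine:

```python
import random

# Hypothetical legal-move set; a real system would compute this
# from the board state with a deterministic rules engine.
LEGAL_MOVES = {"e2e4", "d2d4", "g1f3"}

def fake_model() -> str:
    # Stand-in for a nondeterministic LLM suggestion (sometimes illegal).
    return random.choice(["e2e4", "e2e5", "d2d4", "zz99"])

def validated_move(max_tries: int = 100) -> str:
    """Only let deterministically-verified output reach the user."""
    for _ in range(max_tries):
        move = fake_model()
        if move in LEGAL_MOVES:   # deterministic legality check
            return move
    raise RuntimeError("escalate to a human in the loop")

print(validated_move() in LEGAL_MOVES)  # True
```

The LLM stays nondeterministic, but the system's observable output is constrained by the deterministic check.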


Relevant to your comment is this link from today's HN front page, about adapting LLMs to perform deterministic calculations.

https://www.percepta.ai/blog/can-llms-be-computers


So even in your example you still need to have someone to ask the AI to play chess. So there will still be a need for someone somewhere to ask the AI to do something and supervise it or guide it in the right direction.

You've misunderstood my position. My argument is not that "AIs can operate independently and don't need supervision," but rather that "AIs are able or will soon be able to perform complex behaviors directly without having to create traditional software first." The chess example is illustrative because you can play chess with the AI without first asking the AI to implement chess-playing software. This means that software is obsolete, not people.

> Why use AI to build software for automating specific tasks, when you can just have the AI automate those tasks directly?

Speed, cost, security, job/task management

Next question


> Speed, cost, security, job/task management

All of that will inevitably be solved.

50 years ago, using a personal computer was an extravagant luxury. Until it wasn't.

30 years ago, carrying a powerful computer in your pocket was unthinkable. Until it wasn't.

Right now, it's cheaper to run your accounting math on dedicated adder hardware. But LLMs will only get cheaper. When you can run massive LLMs locally on your phone, it's hard to justify not using them for everything.


Not until power access/generation is MUCH cheaper. Long, long, long way off.

If I can run 50,000 fixed tasks that cost me $0.834/hr but OpenAI is costing $37/hr and the automation takes 40x as long and can make TERRIBLE errors why the fuck would I not move to the deterministic system?

Also, battery life of mobile devices.
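Taking the commenter's figures at face value (they are the commenter's own numbers, not measurements), the implied cost gap works out to roughly three orders of magnitude:

```python
# Per-unit-of-work cost ratio implied by the figures in the comment above.
fixed_cost_per_hr = 0.834   # deterministic system, $/hr
llm_cost_per_hr = 37.0      # LLM-based system, $/hr
slowdown = 40               # LLM takes 40x as long per task

ratio = (llm_cost_per_hr * slowdown) / fixed_cost_per_hr
print(round(ratio))  # 1775
```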


These exact arguments could have been made 50 years ago about why laptops are impossible.

But now, we not only have laptops, we run horribly inefficient GUIs in horribly inefficient VMs on them.

The dollar-per-compute trend goes ever downward.


It will never ever be as cheap as a cron job and a shell script. There is a limit to how efficient using an LLM to do a job is, versus using an LLM to create a job. There is a large difference in compute and power resources between the two. Don't mistake one for the other.

> It will never ever be as cheap as as cron job and a shell script.

Yes. That's precisely why my company runs dBase 7 on a fleet of old 386DX machines from Compaq. /s

Running obsolete software will be cheaper, but the value provided by the newer technology will make the difference insignificant.


I don't think so, because that efficiency carries over and scales.

Why do 50,000 tasks with an LLM when I can do 64,467,235 without an LLM that the LLM created for the same cost on probably far lower cost hardware?


Because in their ideal world, you won't have your own hardware beyond a secured thin client running only "approved" programs running on their servers.

> If I can run 50,000 fixed tasks that cost me $0.834/hr but OpenAI is costing $37/hr and the automation takes 40x as long and can make TERRIBLE errors why the fuck would I not move to the deterministic system?

Because you'll be outcompeted by people who make the best of the nondeterministic system.


Mass media is staunchly anti-AI. I'm betting that inside The Guardian, they're heavily using AI to help them research and write articles. But publicly, they're anti-AI.

Maybe.

I'd say AI makes a lot more sense at the Guardian than it does Amazon.

If AI at the Guardian gets a few facts wrong, no big deal. Most readers probably won't notice. So better luck next time.

But if AI at Amazon gets a few facts wrong, millions of customers can be directly impacted. And this is hard not to notice.

It doesn't take much noticing before customers start going elsewhere.


You can ask AI to verify your changes too.

AI shouldn’t just be used to speed up coding. It can improve quality through testing too.

Amazon had outages caused by humans in the past.


> Amazon had outages caused by humans in the past.

Everyone has had and will most likely continue to have outages. The question is --- what is the trend? Are they increasing or decreasing with increased use of AI at Amazon?

Amazon has allegedly discussed internally the “trend of incidents” related to “Gen-AI assisted changes”.

“Folks, as you likely know, the availability of the site and related infrastructure has not been good recently,” Dave Treadwell, a senior vice-president at the group, told employees in an email, also seen by the FT.

https://www.ft.com/content/7cab4ec7-4712-4137-b602-119a44f77...


Wasn't Claude already used with Palantir to choose Iran bombing targets?

I don't know exactly how I would feel if the software I created selected a school to bomb and then suggested bombing the rescue parties trying to find / save any unexploded children 40 minutes later (double tap strategy to kill rescue parties and/or medics).

It wouldn't be good though.


That 'let claude wing it, then send for review' approach that your lazy coworker uses is now how the largest military in the world operates. No big drama.

Fortunately for the government, there's no lack of "I'm just here for the tech, keep politics out of this" developers

  Companies are getting desperate to show AI adoption as right now the numbers just don’t add up.
All compute companies say they don't have enough compute to meet demands. Why do you think there isn't enough AI adoption to justify the investment?

“Demand” is mostly their training of models, which they’ve yet to demonstrate is a profitable business.

Just because you’re struggling to get raw materials for your business doesn’t make it a good business. Without strong enterprise adoption ASAP (which is what’s seriously suffering) things are going to hit the fan real quick.


With respect, I don't think you've used the latest models or seen Anthropic's hockey-stick enterprise revenue numbers. They are so busy outfitting the Fortune 500, you can't even get someone in sales to respond to emails. I've been waiting for months and so have others.

This will sound snarky, so forgive me, but I honestly don't know the answer. Is this actually true? Is there a reliable source containing statistics on LLM compute usage that includes training vs inference for the whole market?

I don’t understand why people don’t just use Gemini or some other AI web search to get an answer to these kinds of questions quickly (I excluded the sources, you can get them if you ask the same question).

> While AI training is often the most intense and expensive process for a single model, the majority of total AI compute usage (approximately 90%) is used for inference.

> Here is the breakdown of why this is the case:

> Inference as a High-Volume Activity: Inference occurs every time a user interacts with an AI model (e.g., asking ChatGPT a question, using image recognition, or generating code). While a model is trained once (or updated infrequently), it runs millions or billions of inferences continuously.

> Cost Scaling: Training is a massive, one-time upfront cost, while inference is an ongoing, daily operational cost. As the number of AI users grows, the demand for inference compute scales faster than the need for training new, large models.

> The Shift to Efficiency: While early AI hype focused on the immense compute needed for training, the industry has shifted toward making inference cheaper and faster through specialized hardware and techniques like optimization, quantization, and small language models (SLMs).


Gemini is not a reliable source. You posted the only part of the AI response that isn't useful in verifying whether it is true.

Sure, I guess. I asked Gemini to give me some markdown of citations and the claims made that address the question:

https://share.google/aimode/v3Y9P3rYIx1oj9VI2

And I finally figured out how to get links to answers instead of just inlining the content as before. Anyways, there it is. We live in a time where questions like "Does inference or training use more compute?" can be answered quickly by just pasting it into a search box.


The revenue numbers are public for the major AI companies. That's probably the best estimate for "inference for the whole market" we have, since most of that inference is billed in either API usage or subscriptions, and it won't include any in-house usage such as training.

Most of the compute is actually used for inference (90% if Gemini is to be trusted).

Do you have source?

Just ask the slop generator of your choice. Making sense of all that smoke and mirrors without AI support has become impossible.

"enough compute" will be when there is no more hardware for use outside of their walled garden, at which point they can control what they want

  Good products come from tight cycles: ship something, listen to users, iterate. Token economics break that cycle by introducing a competing optimization target. The team stops asking "what do our developers need?" and starts asking "what supports the token narrative?"
In other words, the team starts asking "How can we maximize the token price while delivering as little product value as possible"?

This is why 99.99% of crypto projects are a scam.

No, your token investors don't give a damn what you deliver. They only care about the price of the token. Lie if you have to. Hype up your project like it's the greatest thing in the world. Do whatever it takes, even securities fraud.

When teams discover that lying does more for the token price than actually building, they quickly switch incentives. Now they'll just lie, sell tokens, repeat, until a final rug pull to scam the remaining bag holders.


Yes - but the sad thing is how badly this has bled back to the real markets. That's how you get things like https://en.wikipedia.org/wiki/Nikola_Corporation

I'm concerned we may not be able to pull back from a low-trust society in which most investments are fraudulent; eventually it will become impossible to raise money for real ventures!


If you'll humor a cheeky substitution:

> No, your VC investors don't give a damn what you deliver. They only care about the valuation. Lie if you have to. Hype up your project like it's the greatest thing in the world. Do whatever it takes, even securities fraud.

People are quite good at recognizing this dynamic amongst crypto startups.

Yet they pretend it's not the driving force in both the VC world and Big Tech.


Not the same, because VCs can only make money when the startup gets acquired by a bigger company or goes to IPO. Both of those require professional due diligence. So it's far harder to fool investors than with crypto, which preys on the least sophisticated investors.

The due diligence stops the fraud that is rampant in crypto. It doesn't change the incentive structure of hype-over-substance though.

So long as you aren't (caught) overtly lying about the startup, all hype is fair game. Sam Altman can spout his ridiculous claims until the sun explodes.

The reason I left the securities fraud part of the quote in is that the line is demarcated entirely by what the SEC will enforce, not by what's actually illegal under the law. (And under the current admin, the SEC isn't gonna do shit.) There are a lot of tech startups doing securities fraud that'd get them hit by regulators in any other part of the West.


Clever comparison, but the key difference is there’s no mechanism for a rug pull for most startups. Unless they reach a huge valuation, the stock is absolutely not liquid. There’s no way to cash out.

The incentives are the same. Rug-pulls just make it faster to cash out.

> There’s no way to cash out.

There are precisely two: Go for an IPO, or get acquired by a major tech firm.

Both of these run near-exclusively on hype. So long as the company isn't showing actively fraudulent numbers, you can IPO with a terrible product that doesn't turn a profit.


It's not just that crypto lets you cash out faster - it lets you do it with zero notice, accountability, or diligence.

Startup exits (IPO or acquisition) often have a big chunk of hype associated with them. But often the hype is backed by factual numbers of revenue or user-base. Even if it's pure hype, there will be mountains of legal paperwork. Hundreds if not thousands of hours spent by professional lawyers checking that whoever is putting up the money really is getting what they're paying for, even if what they're buying is a dream. If not, somebody has broken the law (fraud) or a shocking amount of incompetence has occurred. Your typical crypto scam thrives because there are no such procedural guarantees.


Yep!

People in the Bitcoin space have been screaming at the top of their lungs about this for decades at this point, but it's hard to work against the marketing machine that comes from these ICOs.


The weird thing is that this outcome was always obvious.

Token-driven projects were clearly just penny stock boiler room scams dressed up in a trenchcoat made of jargon whitepapers.


Author here. The essay's argument is actually the opposite of that. The team was talented. Proof-of-Transfer was a real technical contribution. The SEC qualification was historic. What I'm describing is how structural incentives bent a legitimate effort toward narrative optimization over time. That's a harder problem than fraud. There's no villain, just a system that rewards the wrong things. Reducing it to "scams" makes it too easy and misses the lesson for anyone building with a financial instrument attached.

Why not just provide more compute for, say, a 1-billion-token context for each user to mimic continuous learning, then retrain the model in the background to fold in the learnings?

The user wouldn’t know whether the continuous learning came from the context or from the retrained model. It wouldn’t matter.

Continuous learning seems to be a compute and engineering problem.
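The scheme proposed above (rolling context plus background retraining) can be sketched roughly as follows; `generate` and `finetune` are stubs standing in for hypothetical model and training calls, not any real API:

```python
def generate(context: list) -> str:
    # Stub standing in for a hypothetical model call conditioned on context.
    return f"(reply based on {len(context)} turns of context)"

def finetune(examples: list) -> int:
    # Stub standing in for a hypothetical background training job.
    return len(examples)

class ContinualAssistant:
    def __init__(self, context_budget: int = 1_000_000_000):
        self.context: list[str] = []   # rolling "memory" of the session
        self.context_budget = context_budget
        self.pending: list[str] = []   # interactions queued for retraining

    def chat(self, message: str) -> str:
        self.context.append(message)
        self.pending.append(message)
        reply = generate(self.context)
        self.context.append(reply)
        return reply

    def background_retrain(self) -> None:
        # Fold accumulated interactions into the weights; afterwards the
        # context no longer needs to carry them.
        finetune(self.pending)
        self.pending.clear()
```

The user-visible behavior is the same either way, which is the comment's point: whether the "learning" lives in the context window or the weights is an implementation detail.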


Because that re-training is not strong enough to hold, or so it seems. The same dumb factual errors keep coming up on different generations of the same models. I've yet to see proof that something 'stuck' from model to model. They get better in a general sense but not in the specific sense that what was corrected stays put, not from session to session and not from one generation to the next.

My solution is to have this massive 'boot up' prompt but it becomes extremely tedious to maintain.


If anything, Neo signals they will not merge macOS and iOS.

Why would they if they just released a brand new MacBook?

The SoC is just a way to differentiate from the Air and to keep costs low.


Inference is profitable. No one is selling at a loss. It’s training to keep up with competitors that is causing losses.

> Inference is profitable

Eh. We don't really know that, and the people saying that have an interest in the rest of the world believing it's true.


How are we so sure that deep inside the moon isn't made out of cheese?

I remember Enron. Hell, I remember the S&Ls. I've seen this movie too many times to not know how it ends.

I remember Google, Meta, Apple, Eli Lilly, and other companies with meteoric rises.
