> so that when the subsidies end and subscription costs shoot up
Subscription costs are capped by API rates as their ceiling (and, realistically, sit way lower than that - why would you even subscribe if you could just pay per use instead?), and those rates already carry a big margin for Anthropic. What still costs them a fuckton of money by comparison is training, but that's only going to get more efficient with more purpose-built hardware on the way.
Basically, I don’t see much reason to hike subscription prices dramatically. I don’t think they’ll stay at $100/$200, but anyone who’s paying that already knows how much value they’re getting out of it and probably wouldn’t mind paying more.
I'm not sure what you mean - if you max out your subscription, perhaps? If you pay $100 and don't use it, you don't get refunded; the price isn't 'capped to API rates', which in that case would've been $0.
He means that Anthropic cannot increase the price of the sub much, because users could just switch to regular API pricing, which consequently puts a ceiling on what the sub can cost.
Nobody would use a $1k sub if using the API pricing would only cost $500 for comparative service.
For the record, I'm only explaining what he put forward.
I don't agree with the opinion, mainly for two reasons:
The API cost can be increased in conjunction, hence the ceiling is just as variable
The harness is even more important than the model, IME, and Claude Code is getting better every month. Even though the alternatives are improving too, they're currently significantly worse IME - I'd say at least 3-6 months behind (compounded by the model, of course).
And as a third point, unrelated to the original argument: there is no way Anthropic is actually treating the sub as a loss leader. It is not cheap. It's only cheap compared to their API pricing, which they can freely set however they want. Compare their pricing to free models like Kimi k2.5 etc. I sincerely doubt Anthropic's model costs more to run than theirs, and they're profitable at 30% of the price Anthropic charges.
Right now, a huge amount of investment pays for training. That investment expects returns; to both turn a profit and keep funding training, rates would have to be much, much higher.
The point is that if the harness’ workflow gives contradictory and confusing instructions to the model, it’s a harness issue, not necessarily a model issue.
First it was a model issue, then it was a prompting issue, then it was a context issue, then it was an agent issue, now it's a harness issue. AI advocates keep accusing AI skeptics of moving goalposts. But it seems like every 3-6 months another goalpost is added.
Your comment doesn’t make as strong of a point as you think it does; it might make the opposite point.
Because, yes, first, it was a model issue, and then more advanced models started appearing and prompting them correctly became more important. Then models learned through RLHF to deal with vague prompting better, and context management became more important. Then models became better (though not great) at inherent context recollection and attention distribution, so now, you need to be careful what instructions a model receives and at what points because it’s literally better at following them. It’s not so much that the goalposts are being moved, it’s that they’re literally being, like, *cleared*.
This isn’t a tech that’s already fully explored and we just need to make it good now, it’s effectively an entirely new field of computing. When ChatGPT came out years ago no one would have DREAMT of an LLM ever autonomously using CLI tools to write entire projects worth of code off of a single text prompt. We’d only just figured out how to turn them into proper chatbots. The point is that we have no idea where the ceiling is right now, so demanding well-defined goalposts is like saying we need to have a full geological map of Mars before we can set foot on it, when part of the point of going to Mars is to find out about that.
As a side point, the agent is the harness; or, rather, an agent is a model called on a loop, and the harness is where that loop lives (and where it can be influenced/stopped). So what I can say about most - not all, but most, including you, seemingly - AI skeptics is that they tend to not actually be particularly up-to-date and/or engaged with how these systems actually work and how capable they actually are at this point. Which is not supposed to be a dig or shade, because I’m pretty sure we’ve never had any tech move this fast before. But the general public is so woefully underinformed about this. I’ve recently had someone tell me in awe about how ChatGPT was able to read their handwritten note and solve a few math equations.
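To make the "agent is a model called on a loop" framing concrete, here's a minimal sketch of such a loop. The `call_model` function, the message format, and the tool registry are all hypothetical stand-ins for illustration, not any particular vendor's API:

```python
def run_agent(call_model, tools, task, max_steps=10):
    """Minimal agent loop: the model proposes an action, the harness
    executes it and feeds the result back, until the model stops
    calling tools. The loop itself -- step limits, tool dispatch,
    stop conditions -- is the harness; the model only ever sees
    messages in, a message out."""
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(history)                   # model turn
        history.append({"role": "assistant", "content": reply})
        if reply.get("tool") is None:                 # no tool call -> final answer
            return reply["content"]
        result = tools[reply["tool"]](reply["args"])  # harness executes the tool
        history.append({"role": "tool", "content": result})
    return "step limit reached"
```

Everything interesting about a harness - which tools exist, how results are summarized, when to stop - lives outside the model call, which is why harness quality can matter independently of model quality.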
> the headline deliberately tries to blow this up into a big deal
I do not understand how “company that runs half the internet has had major recent outages and now explicitly names lax/non-existent LLM usage guidelines as a major reason” can possibly not be a big deal in the midst of an industry-wide hype wave over how the world’s biggest companies now run agent teams shipping 150 pull requests an hour.
The chain of events is “AWS has been having a pretty awful time as far as outages go”, and now “result of an operational meeting is that the company will cut down on the use of autonomous AI.” You don’t need CoT-level reasoning to come to the natural conclusion here.
If we could, as a species, collectively, stop measuring the relevance of a piece of news proportionally by how much we like hearing it, please?
And too many people have their egos tied to its failure, too.
I'm a massive AI skeptic. If anyone were to be jumping up and down on the corpse of AI and this incessant drive to use it everywhere, it’d be me. But I also work at Amazon. I got the email. I attended the meeting. I can personally attest that there are no new requirements for AI-generated code. The articles about this meeting are extremely misleading, if not outright wrong. But instead of believing the person who was actually there in the room, this thread is full of people dismissing my first-hand account of the situation because it doesn’t align with the “haha AI failed” viewpoint.
Not just their egos, but their paychecks. This place is either going to get very quiet or really weird when the hype train derails and the AI bubble bursts.
The subject of the media coverage is not AWS, it is a peer organization to AWS that runs using significant amounts of non-AWS infrastructure. They are both part of an umbrella called Amazon but are not at all the same thing.
It's hard to take this objection seriously. The publication is literally called the Financial Times. It's not exactly crazy for them to think that their readers might care about the entity that shows up on the stock ticker rather than how the company happens to divide things up internally.
Even if it weren't a finance publication, I have trouble imagining you making this argument if a headline said something like "Google deals with outages in the cloud" because of the idea that it's misleading to refer to it as anything other than GCP. I think you're fundamentally not understanding how people communicate about this sort of thing if you actually think that someone saying "Amazon" is misleading in any meaningful way.
You’re describing reasonable misunderstandings, but they are still misunderstandings.
The cause and effect statements just don’t correspond to reality.
I guess I’m stuck on the idea that the actual facts are relevant. If the question instead is how the dance of optics and PR is going in the minds of people who don’t know enough to doubt what they read, I don’t know what to say about that.
The message and meeting being discussed here have nothing to do with AWS or any outages AWS has faced recently. I think you’re missing the point of the discussion.
I don’t blame you, because this is just bad reporting (and potentially intentionally malicious, to make you think it’s about AWS). But the meeting and discussion were with the Amazon retail teams, about Amazon retail processes and Amazon retail services. The teams and processes that handle this are entirely separate from any AWS outages you are thinking of.
The outages that Amazon retail has faced also have nothing to do with AI, and there was no “explicit call out” about AI causing anything.
> while taking the joyful bits of software development away from you
Quick question: by "joyful bits of software development," do you mean the bit where you design robust architectures, services, and their communication/data concepts to solve specific problems, or the part where you have to assault a keyboard for extended periods of time _after_ all that interesting work so that it all actually does anything?
Because I sure know which of these has been "taken from me," and it's certainly not the joyful one.
I guess I enjoy solving problems, and recognize that the devil is always in the details, so I don't get much satisfaction until I see the whole stack working in concert. I never had much esteem for "architects" who sketch some blobs on the whiteboard and then disappear. I certainly wouldn't want to be "that guy" for anyone else, and I'm not even sure I could do it to an LLM.
It’s perplexing; it’s like the majority of people who insist that using AI coding assistance is guaranteed to rob you of application understanding and business context aren’t considering that not every prompt has to be an instruction to write code. You can, like, ask the agent questions. “What auth stack is in use? Where does the event bus live? Does the project follow SoC or are we dealing with pasta here? Can you trace these call chains and let me know where they’re initiated?”
If anything, I know more about the code I work on than ever before, and at a fraction of the effort, lol.
The project managers and CEOs who are vibe-coding apps on the weekend don't know what an "auth stack" is, much less that they should consider which auth stack is in use. Then when it breaks, they hand their vibe-coded black box to their engineers and say "fix this, no mistakes"
> But the pure output of a generative model cannot be copyrighted, regardless of how complex the prompt is
If that’s how the court interpreted it, then the software industry is hosed, since that’d mean none of the generated code running in production right now is under any sort of copyright or otherwise protection, lol.
I doubt that much software is entirely AI-generated with no human review or testing. It’s probably more like integrating some public-domain snippets you found online into your code (which doesn’t invalidate copyright on the rest of it, or on the way it’s put together), or having some files auto-generated by a script (like a C header containing a lookup table for a simple mathematical function - the table itself maybe isn’t copyrightable, but the software as a whole still is).
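The auto-generated lookup table case is easy to picture. A toy generator like the following (hypothetical, purely for illustration) emits a C header whose numbers are plain mathematical facts, while the program that includes the header remains an authored work:

```python
import math

def emit_sine_table(n=16):
    """Emit a C header declaring a fixed-point sine lookup table.
    The table values are determined entirely by math, so they carry
    no creative expression of their own."""
    values = [round(math.sin(2 * math.pi * i / n) * 32767) for i in range(n)]
    rows = ",\n    ".join(str(v) for v in values)
    return (f"/* auto-generated: do not edit */\n"
            f"static const short SINE_TABLE[{n}] = {{\n    {rows}\n}};\n")

print(emit_sine_table())
```

The generator script is authored code; its output is mechanical, which is exactly the distinction the comment above is drawing.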
If a deterministic machine transformation from a copyrightable prompt results in an uncopyrightable image, what do you think a compiler is doing to source code?
AI is specifically not deterministic from the end user's perspective: randomness is deliberately injected into sampling, which is why the exact same prompt won't produce the exact same result.
A compiler, on the other hand, is generally pretty deterministic. The non-determinism we see in its output is usually non-determinism (such as generated dates) in the code that it consumes.
Because they are just translating code (which everyone agrees is copyrightable) in a deterministic manner into another medium.
I'm not saying AI art should or shouldn't be copyrightable. One can argue the inputs to the AI generator are copyrightable, but if the output isn't a deterministic translation of the input, it's a different argument.
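The contrast being drawn here can be sketched in a few lines. The logits and temperature sampling below are a toy stand-in for a real model's decoder, not any specific implementation:

```python
import math
import random

def greedy_pick(logits):
    """Deterministic, 'compiler-like' transform: the same input
    always yields the same output token index."""
    return max(range(len(logits)), key=lambda i: logits[i])

def sampled_pick(logits, temperature, rng):
    """Stochastic decoding: randomness is injected on purpose, so the
    same input can legitimately produce different outputs."""
    weights = [math.exp(l / temperature) for l in logits]
    return rng.choices(range(len(logits)), weights=weights, k=1)[0]
```

Greedy decoding is repeatable; temperature sampling is not, and that injected randomness is the property the copyright argument above turns on.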
The original argument was that AI works wouldn't be copyrightable because they are deterministic, i.e. are just an algorithmic transformation lacking in creativity.
It sounds like they might be under the impression that having any AI-generated output in the code even if parts are human authored would invalidate the copyright, which isn’t true
>If that’s how the court interpreted it, then the software industry is hosed, since that’d mean none of the generated code running in production right now is under any sort of copyright or otherwise protection, lol.
I'm not sure this is really true, since copyright applies to distribution.
If you have a substantial amount of backend code (as with most SaaS projects) you're never actually distributing the code, and copyright is never at play. Computer generated artifacts are already in this boat and are protected by virtue of being trade secrets not by copyright.
This could maybe be true of shipping JavaScript to the browser, which presumably is not going to qualify as a trade secret, but I don't think that's where most companies derive value.
The idea that copyright applies solely to distribution is a popular myth, but it has no support in the actual copyright law. The core exclusive rights in copyright are (in the US, 17 USC § 106):
---
(1) to reproduce the copyrighted work in copies or phonorecords;
(2) to prepare derivative works based upon the copyrighted work;
(3) to distribute copies or phonorecords of the copyrighted work to the public by sale or other transfer of ownership, or by rental, lease, or lending;
(4) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and motion pictures and other audiovisual works, to perform the copyrighted work publicly;
(5) in the case of literary, musical, dramatic, and choreographic works, pantomimes, and pictorial, graphic, or sculptural works, including the individual images of a motion picture or other audiovisual work, to display the copyrighted work publicly; and
(6) in the case of sound recordings, to perform the copyrighted work publicly by means of a digital audio transmission.
---
OTOH, distributing copies created in violation of copyright is a good way to cause legally-cognizable harms to the copyright holder that will increase the potential damage award when you are found liable for copyright infringement, and it also makes it much more likely that someone will notice the infringement in the first place. But it's not where the law, on its own terms, begins to apply.
Doing any of those without permission (unless it falls into one of the exceptions to copyright protection, like fair use) is a violation of copyright.
The idea of copyright is to prohibit unauthorized use and reproduction, but none of this actually happens with a proprietary software SaaS backend. You don't actually give anybody the code - they connect to the service.
Access to the service is already governed by computer access laws, which don't depend on copyright. And if you never intentionally distributed your code outside of your org, you can call it a trade secret and nobody else has any legitimate right to access it - whether or not it is copyrightable.
There are other things that aren't copyrightable that are trade secrets already. This would be true of any kind of automated data collection for example. You couldn't copyright it but you can call it a trade secret.
And for any of that stuff, if you want to share it and limit distribution, you just have whoever wants access explicitly agree to be bound by contract law.
>The idea of copyright is to prohibit unauthorized use and reproduction, but none of this actually happens with a proprietary software SaaS backend. You don't actually give anybody the code - they connect to the service.
The point isn't that you have to give it to people, but okay?
>Access to the service is already governed by computer access laws, which don't depend on copyright
Yeah, copyright doesn't control everything, and?
>There are other things that aren't copyrightable that are trade secrets already. This would be true of any kind of automated data collection for example. You couldn't copyright it but you can call it a trade secret.
Okay?
>And for any of that stuff, if you want to share it and limit distribution, you just have whoever wants access explicitly agree to be bound by contract law.
Your point being? You're just rambling assumptions about copyright and other things, which don't even track the actual law.
> Your point being? You're just rambling assumptions about copyright and other things, which don't even track the actual law.
I'm replying to the post that claimed:
> If that’s how the court interpreted it, then the software industry is hosed, since that’d mean none of the generated code running in production right now is under any sort of copyright or otherwise protection, lol.
There is in fact "otherwise protection" for the software industry by... not distributing the code. They don't need copyright over the generated code if they vibe code a SaaS backend. Whether there's copyright or not is irrelevant for the business model.
Copyright is the strongest legal protection available. It does not have a state of mind element. Breach of contract is much more complicated and context-dependent.
>There is in fact "otherwise protection" for the software industry by... not distributing the code.
Copyright protects against reverse engineering in some circumstances, for example.
>Whether there's copyright or not is irrelevant for the business model.
Yeah, I'm going to continue to disagree with you as I'm actually a litigator.
> Yeah, I'm going to continue to disagree with you as I'm actually a litigator.
OK, can you explain to me why this is a disaster for a vibe-coded SaaS? Why are computer access and/or contract laws insufficient and why would a vibe-coded backend be a huge risk?
I really don't understand where copyright on the code itself is necessary to protect these business models, and hopefully you can help fill the gaps.
I didn't say it would be a huge risk, I just disagree that any of those features of the law cover what copyright does. They don't. If a trade secret is ever revealed, all protection is lost. Breach of contract is very complex compared to an infringement claim and would have to be negotiated. As a customer, why would I want to indemnify a software supplier? If there's no indemnity, it's not going to get anyone very far. The CFAA basically requires that something get hacked, so it's not going to cover the vast majority of scenarios...
>I really don't understand where copyright on the code itself is necessary to protect these business models, and hopefully you can help fill the gaps.
Well, did you ever try to understand? It's so exhausting coming to these threads where people are just making assumptions about how the law works without any regard for what actually happens, and then suggesting policy changes in response.
Here's a scenario: a disgruntled ex-employee leaks the code. Now it's free for anyone to use, because once the trade secret is broken you have no rights in the code and nothing you can do to stop anyone. You can sue the employee, but they are probably judgment-proof, won't have a lot of money anyway, and suing them still won't stop a competitor from spinning up the same exact thing.
Trade secret was your suggestion, by the way... So, do you actually know how trade secrets work, or are you just making things up?
> How? I'd imagine that most typically means continuing to program by hand.
I think the use of LLMs is assumed by that statement. The point is that even experienced programmers can get poor results if they're not aware of the tech's limitations and best-practices. It doesn't mean you get poor results by default.
There is a lot of hype around the tech right now; plenty of it overblown, but a lot of it also perfectly warranted. It's not going to make you "ten times more productive" outside of maybe laying the very first building blocks on a green field; the infamous first 80% that only take 20% of the time anyway. But it does allow you to spend a lot more time designing and drafting, and a lot less time actually implementing, which, if you were spec-driven to begin with, has always been little more than a formality in the first place.
For me, the actual mental work never happened while writing code; it happened well in advance. My workflow hasn't changed that much; I'm just not the one who writes the code anymore, but I'm still very much the one who designs it.
Yes, I've seen many people become _too_ hands-off after an initial success with LLMs, and get bitten by not understanding the system.
Hirers, above, are more focused on the opposite side, though: engineers who try AI once, see a mess or hallucinations, and decide it's useless. There is some learning to figure out how to wield it.
Depends on what you consider your "skills". You can always relearn syntax, but you're certainly not going to forget your experience building architectures and developing a maintainable codebase. LLMs only do the what for you, not the why (or you're using it wrong).
There are three sides to this depending on when you started working in this field.
For the people who started before the LLM craze, they won't lose their skills if they are just focusing on their original roles. The truth is people are being assigned more than their original roles in most companies. Backend developers being tasked with frontend, devops, qa roles and then letting go of the others. This is happening right now. https://www.reddit.com/r/developersIndia/comments/1rinv3z/ju...
When this happens, they don't care, or don't have the mental capacity to care, about a codebase in a language they've never worked in before. People here talk about guiding the LLMs, but at most places they are too exhausted to carry that context and just let Claude review its own code.
For the people who are starting right now, they're discouraged from all sides from writing code themselves. They'll never understand why an architecture is designed a certain way. Sure, ask the LLM to explain, but it's like learning to swim by reading a book. They have to blindly trust the code and keep pulling the lever like a slot machine, burning tokens, which makes these companies more money.
For the people who are yet to begin, sorry for having to start in a world where a few companies hold everyone's skills hostage.
> For the people who are starting right now, they're discouraged from all sides for writing code themselves. They'll never understand why an architecture is designed a certain way. Sure ask the llm to explain but it's like learning to swim by reading a book.
This! There are several forces that act on how code is written, and getting the software to work is only one. Abstraction is another, which itself reflects two needs: not repeating code, and solving the metaproblem instead of the direct one. Simplicity is another factor (solving only the current problem). Then there’s making the design manifest in how the files are arranged,…
As a developer, you need to guarantee that the code you produced works. But how the computer works is not how we think. We invented a lot of abstractions between the two, knowing the cost in performance for each one. And we also invented a lot of techniques to help us further. But most of them are only learned when you’ve experienced the pain of not knowing them. And then you’ll also start saying things like “code smells”, “technical debt”, “code is liability” even when things do work.
The syntax argument is correct, but from what I am seeing, people _are_ using it wrong, i.e. they have started offloading most of their problem solving to be LLM first, not just using it to maybe refine their ideas, but starting there.
That is a very real concern, I've had to chase engineers to ensure that they are not blindly accepting everything that the LLM is saying, encouraging them to first form some sense of what the solution could be and then use the LLM to refine it further.
As more and more thinking is offloaded to LLMs, people lose their gut instinct about how their systems are designed.