had this happen to me mid-refactor and spent 20 min wondering if I'd gone crazy. honestly the one-hour threshold feels pretty arbitrary; sometimes you just step away to think
had the same realization last year after getting a few obviously AI-generated PRs. reviewing them took longer than just writing the code myself. maybe the right unit of contribution is going back to being the detailed bug report / spec, not the patch
Saw this coming eventually. $20/month for autonomous agents running 24/7 was clearly not sustainable at API pricing. The surprising part is that there's still no official announcement - just a quiet page edit.
The $20/mo plan never supported 24/7 autonomous agents. With Opus 4.5 and 4.6 I would hit resource limits after a reasonable amount of work, which corresponded to a variable amount of wall clock time.
This makes me think either they’re severely resource-constrained and need to focus on “high value” customers, they’re bleeding money on inference, or their sales and marketing team is incompetent.
Regardless, this feels like a pretty big rug pull. Especially without a phase-out period and a real announcement. As someone using Claude Code on a personal hobby project to get a better feel for its capabilities, I’m not sure what to do now. I can’t justify the $100+/mo plans for a hobby project.
My choices are then:
- Code this project by hand, which would be fun but defeats the point of this being my agentic coding project.
- Find another model and use Codex or OpenCode or whatever.
- Put the project on a shelf till this shakes out.
This was never the case though. There's a per-week and a per-5-hour quota. If you exhaust either, you have to wait for the reset. What they're doing makes no sense.
And yet they're very aware that Hacker News etc. exist, so the awareness and backlash would be instant. It's as if they want a lower rating from the community. Maybe that's their solution for the resource issue: make enough people mad that they abandon their subscriptions.
15 years of supply chain excellence and the software running on that hardware quietly got worse every cycle. the m1 transition was so clean it made everyone else look like they were guessing. ternus thinks in tolerances and thermal envelopes - giving the keys to someone who's already pulled off the hardest platform migration in apple's recent history seems right.
The m1 transition was clean, and the hardware is amazing, don't get me wrong (I just bought a neo and I'm very happy with it). But the transition looked even more amazing than it should have because of just how dogshit Intel macs had gotten, especially around thermal throttling. Apple could have built much nicer Intel systems had they just made them slightly thicker and used sensible heatsink and fan designs for the hardware they were putting in them.
(We're seeing echoes of that again now where you can get 20-30% performance bumps in Neos and Airs just by sticking a thermal pad on the CPU - Apple is still allergic to cooling, they've just built amazingly efficient hardware that sidesteps the problem)
To make the M1 transition that clean took a lot of software excellence... one can argue Apple's compiler / virtualization / language teams are the best in the industry (grumbling from SwiftUI developers aside...)
They’ve done a few impressive things too. Remember that time they converted the root filesystem on every iOS device on the fly with a point release? Kudos to whoever clicked that particular button; I’d have bricked it personally ;)
three critical vulns in 12 months is a pattern, not a coincidence. the SRP point is sharp - we interview engineers on isolation principles, then build platforms that are the opposite of that.
ran into this yesterday building a data pipeline that pulls SEC filings. same prompt, same context window, 4.7 chewed through noticeably more of my api budget than 4.6 did. the output wasn't obviously better either, just... more expensive.
what bugs me is the tokenizer change feels like a stealth price hike. if you're charging the same $/token but the same text now costs 35% more tokens, that's just a 35% price increase with extra steps. at least be upfront about it.
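the arithmetic is straightforward: a flat per-token price times 35% more tokens for the same text is a 35% higher bill. a back-of-envelope sketch (the price constant and token counts here are made up for illustration, not actual pricing):

```python
# Sketch of the "same $/token, more tokens" effect.
# PRICE and token counts are illustrative, not real pricing data.

def prompt_cost(tokens: int, price_per_token: float) -> float:
    """Cost of processing a prompt at a flat per-token price."""
    return tokens * price_per_token

PRICE = 0.000015                      # hypothetical $/token, unchanged across versions
old_tokens = 1000                     # tokens the old tokenizer produced for some text
new_tokens = int(old_tokens * 1.35)   # same text, 35% more tokens under the new one

old_cost = prompt_cost(old_tokens, PRICE)
new_cost = prompt_cost(new_tokens, PRICE)

increase = (new_cost - old_cost) / old_cost
print(f"effective price increase: {increase:.0%}")  # prints "effective price increase: 35%"
```

same headline rate, 35% more billed units: the increase shows up on the invoice, not the pricing page.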
This feels like one of those bugs that sounds niche until you put a work Mac through the usual gauntlet of VPN, MDM, chat, calendar, backup, and whatever else corp IT adds. Not catastrophic, but it is kind of wild that macOS still has no first-party overflow affordance for menu bar icons.
The 4B being this capable is honestly surprising. Ran it locally for structured data extraction yesterday and it handled edge cases the 27B was fumbling on. Didn't expect to swap down that fast.
The monitoring and evaluation piece is underrated. In my experience the hardest part isn't building the initial LLM pipeline, it's knowing when the thing quietly broke. Domain expertise matters a lot there because you need to design evals that actually catch the failure modes that matter for your specific data distribution.
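In practice "knowing when the thing quietly broke" usually means replaying a fixed golden set through the pipeline and alerting when the pass rate drops. A minimal sketch, where `run_eval`, the stand-in pipeline, and the exact-match check are all hypothetical placeholders; the golden set and the per-example check are where the domain expertise goes:

```python
# Minimal regression-eval sketch for an LLM pipeline.
# fake_pipeline stands in for whatever LLM step you're monitoring.

from typing import Callable

def run_eval(pipeline: Callable[[str], dict],
             golden_set: list[tuple[str, dict]],
             threshold: float = 0.95) -> bool:
    """Replay a golden set through the pipeline; False means regression."""
    passed = 0
    for prompt, expected in golden_set:
        output = pipeline(prompt)
        # Domain-specific check: here, exact match on required fields.
        if all(output.get(k) == v for k, v in expected.items()):
            passed += 1
    score = passed / len(golden_set)
    return score >= threshold  # wire this to an alert in a real setup

# Toy stand-in pipeline for illustration only
def fake_pipeline(prompt: str) -> dict:
    ticker, form = prompt.split()
    return {"ticker": ticker, "form": form}

golden = [("AAPL 10-K", {"ticker": "AAPL", "form": "10-K"}),
          ("MSFT 10-Q", {"ticker": "MSFT", "form": "10-Q"})]

print(run_eval(fake_pipeline, golden))  # prints True while the pipeline still passes
```

The harness itself is trivial; the hard part is choosing examples and checks that actually cover your failure modes, which is exactly the domain-expertise point above.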
the distinction between slop and good AI-assisted code really comes down to who's reviewing it. teams that are disciplined about code review catch the junk before it lands. teams that let AI output fly straight to prod are gonna have a bad time eventually. it's less about the AI and more about engineering culture around it