Hacker News: encoderer's comments

As a point of reference, I'm a heavy cc user and I've had a few bugs, but I've never had terminal glitches like this. I use iTerm on macOS Sequoia.

To offer an opposing anecdotal data point -- claude scrolls to the top of the chat history capriciously often (more often than not) for me using iTerm on Tahoe

I've had it do it occasionally in all of Ghostty, iTerm2 and Prompt 3 (via SSH, not sure what terminal emulator that uses under the hood)

I thought I was the only one who had this problem - so annoying, along with the frequent UI glitches when it asks you to choose an option.

Wow I thought it was tmux messing up on me, interesting to hear it happens without it too

Not tmux related at all; I've had it happen in all kinds of setups (alacritty/Linux, VS Code terminal on macOS)

Scrolling around when claude is "typing" makes it jump to the top

To be fair, iTerm is likely the single most common terminal emulator used by Claude Code developers, so I'd hope that it would work tolerably well there.

i will note that they really should have used something like ncurses and kept the animations down. TTYs are NOT meant to do the level of crazy modern TUIs are trying to pull off; there are just too many terminal emulators out there that don't like the weird control codes being sent around.
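To illustrate the kind of raw control codes at issue: a minimal sketch of xterm-style escape sequences a modern TUI might emit directly on every redraw. (Illustrative only; a library like ncurses would instead look up the right sequences per-terminal from the terminfo database rather than hard-coding them.)

```javascript
// Common xterm/ANSI control sequences, hard-coded the way many modern
// TUIs do -- exactly what older or partial emulators can mishandle.
const ALT_SCREEN_ON  = "\x1b[?1049h"; // switch to the alternate screen buffer
const ALT_SCREEN_OFF = "\x1b[?1049l"; // restore the main screen buffer
const CURSOR_SAVE    = "\x1b7";       // save cursor position (DECSC)
const CURSOR_RESTORE = "\x1b8";       // restore cursor position (DECRC)
const ERASE_LINE     = "\x1b[2K";     // erase the entire current line
const moveTo = (row, col) => `\x1b[${row};${col}H`; // absolute cursor move (CUP)

// A naive per-frame redraw: jump to a row, clear it, rewrite it, jump
// back. Repeated at animation speed, any emulator quirk in these
// sequences shows up as the scrolling/glitching described above.
function redrawLine(row, text) {
  return CURSOR_SAVE + moveTo(row, 1) + ERASE_LINE + text + CURSOR_RESTORE;
}
```

A curses-style library would batch these updates and diff against the last frame, sending far fewer sequences per redraw.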

Pair programming works best when you are tasked with a problem that’s actually beyond your current abilities. You spend less time in your head because you are exploring a solution space for the first time.

Yes I’ve had a lot of success with this too. I found with prompt tightening I seldom do more than 5 rounds now, but it also does an explicit plan step with plan review.

Currently I’m authoring with codex and reviewing with opus.


Good reminder: don't forget the plan review!

Your most autistic and senior engineer is now named Claude. Point him at nearly any task, pair-program with codex, and review the results.

I wonder if you've ever worked on a web service at scale. JSON serialization and deserialization is notoriously expensive.

It can be, but $500k/year is absurd. It's like they went from the most inefficient system possible to create, to a regular normal system that an average programmer could manage.

I have no idea if they are doing orders of magnitude more processing, but I crunch through 60GB of JSON data in about 3000 files regularly on my local 20-thread machine using nodejs workers to do deep and sometimes complicated queries and data manipulation. It's not exactly lightning fast, but it's free and it crunches through any task in about 3 or 4 minutes or less.

The main cost is downloading the compressed files from S3, but if I really wanted to I could process it all in AWS. It also could go much faster on better hardware. If I have a really big task I want done quickly, I can start up dozens or hundreds of EC2 instances to run the task, and it would take practically no time at all... seconds. Still has to be cheaper than what they were doing.
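The fan-out described above (3000 files across a 20-thread machine) boils down to splitting the file list into per-worker chunks. A minimal sketch of that split, with invented file names; real code would hand each chunk to a node:worker_threads Worker:

```javascript
// Round-robin a file list across N workers so each thread gets an
// even share of the parsing work.
function chunk(files, nWorkers) {
  const chunks = Array.from({ length: nWorkers }, () => []);
  files.forEach((f, i) => chunks[i % nWorkers].push(f));
  return chunks;
}

// Hypothetical file list matching the scale described above.
const files = Array.from({ length: 3000 }, (_, i) => `part-${i}.json`);
const chunks = chunk(files, 20); // one chunk per hardware thread
```

Each worker then parses its ~150 files independently, which is why the job parallelizes so cleanly across cores or EC2 instances.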


Curious about the workload, as I'm trying to make a tool about JSON: what are those files compressed with? What is the size of the average file? What is their structure (ndjson? Dict with some huge data structure a few levels deep?)

In S3 the JSON is stored in plain-old .zip files. While downloading to local the files are unzipped to plain old JSON. It's basically an object containing tons of data about each website I manage including all fragments of HTML and metadata used on the sites. It can get quite large, some sites have thousands of pages. We often need to find things stored many levels deep in the JSON that may be tricky to find, it isn't usually a specific path, and lots of iterable arrays and objects are involved. The files range from ~20MB to ~400MB, depending on how much content each site has. And we have ~9000 total sites.
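The "tricky to find, not a specific path" queries described above amount to a recursive walk over the whole document. A minimal sketch (function name and site shape invented for illustration):

```javascript
// Walk an arbitrarily nested JSON value and collect every (path, value)
// pair matching a predicate -- useful when the data isn't at a fixed path.
function deepFind(node, predicate, path = []) {
  const hits = [];
  if (predicate(node, path)) hits.push({ path: path.join("."), value: node });
  if (node !== null && typeof node === "object") {
    // Object.entries works for both arrays (index keys) and objects.
    for (const [key, child] of Object.entries(node)) {
      hits.push(...deepFind(child, predicate, [...path, key]));
    }
  }
  return hits;
}

// Example: find every string mentioning "contact" anywhere in a site
// object, however deep it's buried.
const site = {
  pages: [
    { slug: "home", fragments: [{ html: "<a href='/contact'>Contact</a>" }] },
    { slug: "about", meta: { html: "<p>About us</p>" } },
  ],
};
const hits = deepFind(site, (v) => typeof v === "string" && v.includes("contact"));
```

The returned paths (e.g. `pages.0.fragments.0.html`) tell you where each match lives, which matters when the structure varies from site to site.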

They got a 1000x speed up just by switching languages.

I highly doubt the issue was serialization latency, unless they were doing something stupid like reserializing the same payload over and over again.


Well, for starters, they replaced the RPC call with an in-process function call. But my point is that nobody who has worked with JSON at scale should be surprised that it's expensive (even though hey, it's just JSON!).

Well, everything is expensive at scale, and any deserialization/serialization step is going to be expensive if you do it enough. However, yes, I would be surprised. JSON parsing is pretty optimized now; I suspect most "JSON parsing at scale is expensive" is really the fault of other parts of the stack

Would it be better or worse if I had that experience and still said it's stupid?

You didn't say it was stupid. If you had, I would have just ignored the comment. But you expressed a level of surprise that led me to believe you're unfamiliar with how much of a pain in the ass JSON parsing is.

I think OP’s point was surprise that a company would spend so much on such inefficient json parsing. I’m agreeing. I get that JSON is not the fastest format to parse, but the overarching point is that you would expect changes to be made well before you’re spending $300k on it. Or in a slightly more ideal world, you wouldn't architect something so inefficient in the first place.

But it's common for engineers to blow insane amounts of money unnecessarily on inefficient solutions for "reasons". Sort of reminds me of SaaS vendors offering 100 concurrent "serverless" WS connections for like $50 / month - some devs buy into this nonsense.


Because of cost basis step up at death, you can just defer forever.
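A worked example of the step-up mechanic being referenced (US-specific; the numbers are invented, and this ignores estate tax and state rules -- not tax advice):

```javascript
// What the original holder paid vs. fair market value when inherited
// (both figures hypothetical).
const costBasis    = 100_000;
const valueAtDeath = 1_000_000;

// If sold the day before death, the taxable capital gain:
const gainIfSold = valueAtDeath - costBasis; // 900,000

// At death the heir's basis "steps up" to fair market value, so selling
// immediately at that price realizes no taxable gain at all:
const heirBasis = valueAtDeath;
const heirGain  = valueAtDeath - heirBasis; // 0
```

Deferring the sale until death thus erases the entire accrued gain for tax purposes, which is the "defer forever" strategy in the comment above.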


This is a self-fulfilling prophecy.

For a long time, it was jobs and the promise of a better future for your family. By killing that all we have is weather.


All we have is the weather? California is the largest agricultural producer of any state, and it's not even close. Plants like growing here for the same reason people do.


Because they get all the water that can possibly be piped in from somewhere else.


Good? If it's the best place for producing a product, but requires an input from somewhere else, that's how businesses work.


That's pretty much true of half the USA.


And if the last several years are indicative of the trend, wildfire season is now a substantial part of the year.


You act as though California is no longer one of the largest populations or one of the largest economies.

The “snowball fallacy” is a fallacy because there is no reason California can’t swing the regulatory pendulum back the other direction if too much economy / freedom is impacted.


When I took a machining course, the instructor sat in the corner and showed us YouTube videos in Mandarin with English subtitles to teach us the equipment.

We are never going to catch up.


You are making lots of projections from a single anecdote and conflating a state’s policies/economy with that of a country of 25x the population.

Detroit was once one of the US’s largest population cities, at nearly 2.5 million residents in the late 1960s, falling to less than 1 million by the 2010s. On this scale, California is still in the peak days of the 1960s, but we aren’t showing any current signs of shrinking. Maybe AI will be the catalyst for massive job losses, but that’s for the future to unfold.

Machining is a low value part of the economic supply chain, like sweat shop clothing. While I don’t want to lose it, it’s being dominated by countries (China, Taiwan) which are willing to throw MASSIVE money at the industry. TSMC was literally a whole-of-country effort to centralize the entire world’s supply chain of cutting edge semiconductors on one island. China is winning because they have cut-throat competition between companies and they don’t slow down for legal concerns such as regulation or intellectual property. That is only going to last for a certain amount of time before people will demand better living environments (which is partly why they have such a terrible fertility rate).


What a myopic attitude.

3 to 4 decades ago anything from China was poor quality and US manufacturing was tight tolerance.

When we outsourced, we did the training to get them where they are today and stopped investing in our skills at home.

There are still skilled people here who can train and the knowledge is not some sort of eldritch incantation.

The main issues with learning is lack of jobs and lack of opportunity to apply skills if you have them.


I had to pay an instructor to show me YouTube videos because the college wouldn't admit to being unable to find domestic talent.

> There are still skilled people here who can train

If you don't acknowledge you're losing the race, you will never catch up.


China probably caught up the same way starting 40 years ago. Watching VHS tapes in English (or German, Japanese, or French) with Mandarin subtitles*. Clearly "never" is untrue because it's been done once already.

IMO this is all cyclical.

* This is metaphorical. Obviously there were also textbooks and research papers and technical manuals and everything else. The point is much of it came from abroad and they learned it all to the point that they're the experts today.


Most of the comp sci videos on YouTube are Indian, but is India the cutting-edge producer of comp sci innovations?


I’m happy this is coming from a real person with skin in the game and not just a veiled PAC with murky intentions.


Is it really the moderators that make a community special? They are vital, no doubt, but I have never come here for the moderation.

For me the magic of a niche community like a subreddit or HN is when a 99th percentile expert in the subject shows up and gives everybody a brilliant lecture on the actual truth of things. These are not 99th percentile in Reddit use or post count or any of those things.


> I have never come here for the moderation.

The moderators (and the algorithm they support and tune) are why the conversation on HN is compelling enough to attract 99th percentile experts on just about every subject.


Moderators are the invisible hand pruning the garden weeds. You might not always see them working, but they allow the space necessary for the good conversations to grow and thrive. Their absence would be felt quickly.


Guys, remember this kind of stuff when you are building side projects. You can just ship; you don't need every feature on day one.


the ability to change email address is not that complicated of a feature to postpone to later.

maybe they should ask CC to fix this...


It’s not a complicated feature and it’s also not required on day 1. At Cronitor we did not have it for nearly 2 years.


I don't know. I actually find it harder and more stressful to write code in a way that does not meet a certain quality level. It requires me to actually think more.

It's kind of weird, but I have tried over the years to develop a do-just-what-is-necessary-now mindset in my software engineering work, and I just can't make my mind work that way.

For me, doing things right is a way to avoid having to hold too much context in my head while working on my projects. I know the idiomatic way to do something, and if I just do it that way, then when I come back to it I know how it should be, and is, architected.


“I don’t have this feature yet” is not really a mental burden. For any successful project that is always true about a lot of features.

