Is it an extraordinary claim that Opus 4.6 or GPT 5.3 works amazingly well on existing code bases, in my experience?
That's funny. I feel like it's the opposite. Claiming that Opus 4.6 or GPT 5.3 fails as soon as you point them to an existing code base, big or small, is a much more extraordinary claim.
Why would they? Github has 28 million public repos; Codeberg only hit 300k last year. Anyway, Codeberg was just a placeholder for 'repo source _less_ likely to be in their training data'. Codeberg was a quick candidate for a place to find a big old codebase with non-sensitive data.
It is indeed hard, but the guys at Codeberg are certainly an order of magnitude better than Github: they opted out of the main AI crawlers, they regularly block IPs known to belong to AI startups, and they let you make your repos accessible only to logged-in users.
You seem to be going off on a tangent here. The main point was about performing a well-documented test anyway.
This one is a lot harder to tell because there are some AI bros who claim similar things and are completely serious. Just look at Show HN now: there used to be ~20-40 posts per day, but now there are 20 per HOUR.
(Please oh please can we have a Show HN AI. I'm not interested in people's weekend vibe-coded app to replace X popular tool. I want to check out cool projects where people invested their passion and time.)
You should wonder whether any of those devs will train themselves to become engineers, and whether the supply of engineers will be lower than the demand for them. Because if either of those comes true, you will likely struggle to keep your employment stats relatively the same (i.e. you will struggle in very specific ways) unless you are the kind of person who doesn't need to interview to land a gig at a top 10 tech company.
> I'm struggling to think of any scenario that doesn't also put most white collar professions out of work alongside me
You don't need to be out of a job to struggle. It's enough for your pay to remain the same (or lower), for your work conditions to degrade (you think jQuery spaghetti was a mess? good luck with AI spaghetti slop), or for competition to increase because now most of the devving involves tedious fixing of AI code, and the actual programming-heavy jobs are as fought over as dev roles at Google/Jane Street/etc.
Devving isn't going anywhere, but just like you don't punch cards anymore, you shouldn't expect your role in the coming decades to be the same as in the '90s-to-2025 period.
> are there modes of thinking that fundamentally require something other than what current LLM architectures do?
Possibly. There are likely also modes of thinking that fundamentally require something other than what current humans do.
Better questions are: are there any kinds of human thinking that cannot be expressed in a "predict the next token" language? Is there any kind of human thinking that maps onto the token-prediction pattern in such a way that training a model for it would not be feasible regardless of training data and compute resources?
At the end of the day, the real-world value is utility, and some of their cognitive handicaps are likely addressable. Think of it like the evolution of flight by natural selection: flight is useful enough to make it worth adapting the whole body so that flight becomes not just possible but useful and efficient. Sleep falls into this category too, imo.
We will likely see similar with AI. To compensate for some of their handicaps, we might adapt our processes or systems so the original problem can be solved automatically by the models.
Waiting until the moment they get good enough is not a smart thing to do either. If you are a farmer and know it is going to snow at some point in the next 5 months, you make plans NOW; you don't wait until the temperatures drop and you see the snow falling. Right now, people are waiting for the snowfall before moving their proverbial chickens indoors.
Top AI researchers like Yann LeCun have said that LLMs are a dead end.
It seems to me that LLM performance is plateauing and not improving exponentially anymore. This recent hubbub about rewriting a worse GCC for $20,000 is another example of overhype and regurgitated training data.
You don't know for sure that it is going to "snow" (AI reaching general intelligence). Snow happens frequently; AI reaching general intelligence has never happened. If it ever happens, 99% of jobs are gone and there is really nothing you can do to prepare for this other than maybe buy guns and ammo, and even that might not do anything against robotic soldiers.
People were worried about AI taking their jobs 60 years ago when perceptrons came out, and anyone who avoided a tech career because of that back then would have lost out majorly.
There is no reason why an AI model capable of pushing a significant chunk of devs into lower-paid and highly competitive dev jobs through automation needs to be an artificial general intelligence. There is a lack of nuance in thinking that either AI is dumb or it has human-level general intelligence. As much as devs hate to admit it, you don't need that much of what we understand as general intelligence to write software. Only a portion of your intelligence is needed, and arguably not all of it at the same time.
While general-purpose models might be plateauing soon (arguably they have been for a while), highly specialised models (especially for programming) haven't necessarily plateaued yet. And anyway, existing functionality seems like a good foundation on which to build systems that remove the need to hire as many devs. It's not "being out of a job" that should worry you. Open up your binary thinking and consider that facing a 2008 job market for the rest of your career is not the same as permanent unemployment, but it is not a market you would want to be in.
You don't need to be a genius or a rocket scientist to write code, but LLMs don't even clear the bar for anything but the simplest things. Take a look at the video I posted earlier for an example.
And specialised models for programming HAVE plateaued.
> Can you imagine not being fired when you can only do 2.5% of all tasks?
You are not competing against LLMs, though. You are competing against people (who in a pre-LLM world wouldn't be in tech) using LLM tools to beat you in terms of value. In the new world, you are either a top 1% dev or you beat everyone in a race to the bottom price-wise. The middle will become vanishingly small. Think of manufacturing in developed countries.
Is your AI PR publicly available on GitHub?