You're either overestimating the capabilities of current AI models or underestimating the complexity of building a web browser. There are tons of tiny edge cases and standards to comply with where implementing one standard will break 3 others if not done carefully. AI can't do that right now.
Even if AI never achieves the ability to perform at this level on its own, it is clearly going to be an enormous force multiplier, allowing highly skilled devs to tackle huge projects more or less on their own.
The "discoverer" of APL tried to express as many problems as he could with his notation. First he found that notation expands and after some more expansion he found that it began shrinking.
The same goes to Forth, which provides means for a Sequitur-compressed [1] representation of a program.
Myself, I always strive to delete some code or replace some code with shorter version. First, to better understand it, second, to return back and read less.
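To make the Sequitur analogy concrete, here is a toy offline sketch in Python, purely my own illustration (real Sequitur is an online algorithm with digram-uniqueness and rule-utility constraints; this is closer to byte-pair encoding): repeatedly factor out the most frequent adjacent pair of tokens into a named rule, the way a Forth programmer factors a repeated phrase into a new word.

    from collections import Counter

    def compress(tokens):
        """Toy grammar-based compression: repeatedly replace the most frequent
        repeated adjacent pair of tokens with a new named rule (BPE/Sequitur style)."""
        rules = {}
        rule_id = 0
        while True:
            pairs = Counter(zip(tokens, tokens[1:]))
            if not pairs:
                break
            pair, count = pairs.most_common(1)[0]
            if count < 2:            # no pair occurs twice: nothing left to factor out
                break
            name = "R%d" % rule_id
            rule_id += 1
            rules[name] = pair
            out, i = [], 0
            while i < len(tokens):   # substitute the new rule for every occurrence
                if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
                    out.append(name)
                    i += 2
                else:
                    out.append(tokens[i])
                    i += 1
            tokens = out
        return tokens, rules

    # A token sequence with a repeated "phrase", RPN/Forth style (hypothetical example).
    program = "dup * swap dup * + sqrt dup * swap dup * + sqrt".split()
    seq, rules = compress(program)
    print(seq)    # ends up as just two rule tokens: the whole program is two uses of one factored word
    print(rules)  # each rule names a repeated sub-sequence, like a small Forth word

The point of the sketch is only the analogy: repeated phrases shrink into named definitions, and the remaining program reads as a short sequence of those names.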
> There are tons of tiny edge cases and standards to comply with where implementing one standard will break 3 others if not done carefully. AI can't do that right now.
Firstly, the CI is completely broken on every commit, all tests have failed, and looking closely at the code, it is exactly what you would expect: unmaintainable slop.
Having more lines of code is not a good measure of robust software, especially if it does not work.
The one nice thing about web browsers is that they have a reasonably formalized specification set and a huge array of tests that can be used, which makes them a fairly unique proposition, ideally suited to AI construction.
As far as I can tell from Ladybird's blog updates, the issue is less the formalised specs and more that other browsers break the specs, so websites adjust, and you then need to take that non-compliance into account in your design.
I would love to make these videos for you if you want to pay for my time. Drop me an email at josh.d.griffith at gmail, tell me what you want to see, and compensate me. I can vibe code at any scale.
That's the thing - I know what 'vibe coding' is because that's pretty much how I use AI, as an exploratory tool or interactive documentation or a search engine for topics I want surface level information about.
It does not make me 10x-100x more efficient. It's a toy and a learning tool. It could be replaced or removed and I wouldn't miss it that much.
Clearly I am missing something. I care about quality software, so if it's making someone 100x more productive but they're producing the same subpar nonsense they would have anyway, then I am not interested. Hence I want to see a really proficient programmer use it, be 10x+ more productive, and have a quality product at the end. That's what I want to see demonstrated.
What would you need to see to change your mind? I can generate at mind-boggling scale. What’s your threshold for realizing you might not have explored every possible vector for AI capabilities?
I promise you that I can show you how to reliably solve any of them using any of the latest OpenAI models. Email me if you want proof: josh.d.griffith at gmail
> the person using the tool (e.g. OpenAI, Claude, DevStral, DeepSeek, etc) must NOT be able to solve problems alone
I think this is a good point, as I find the operator's input is often forgotten when considering the AI's output. If it took me an hour and decades of expertise to get the AI to output the right program, did the AI really do it? Could someone without my expertise get the same result?
If not, then maybe we are wasting our time trying to mash our skills through vector space via a chat interface.