Hacker News | cheevly's comments

2029? I have no idea why you would think this is so far off. More like Q2 2026.

You're either overestimating the capabilities of current AI models or underestimating the complexity of building a web browser. There are tons of tiny edge cases and standards to comply with where implementing one standard will break 3 others if not done carefully. AI can't do that right now.

Even if AI will not achieve the ability to perform at this level on its own, it clearly is going to be an enormous force multiplier, allowing highly skilled devs to tackle huge projects more or less on their own.

Skilled devs compress, not generate (expand).

https://www.youtube.com/watch?v=8kUQWuK1L4w

The "discoverer" of APL, Kenneth Iverson, tried to express as many problems as he could in his notation. At first he found that the notation expanded, but with further use it began to shrink.

The same goes for Forth, which provides the means for a Sequitur-compressed [1] representation of a program.

[1] https://en.wikipedia.org/wiki/Sequitur_algorithm
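To make the "program as compressed grammar" idea concrete, here is a minimal offline sketch of digram replacement. (The real Sequitur algorithm works incrementally, online; this batch variant is essentially RePair, but it illustrates the same principle of factoring repeated pairs into rules — much like factoring repeated code into Forth words. All names here are illustrative, not from any library.)

```python
# Sketch: grammar-based compression by repeated digram replacement.
# Each repeated adjacent pair of symbols becomes a new rule (nonterminal),
# analogous to extracting a repeated code sequence into a named word.
from collections import Counter

def compress(symbols):
    rules = {}            # nonterminal -> the pair of symbols it expands to
    seq = list(symbols)
    next_id = 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        pair, count = max(pairs.items(), key=lambda kv: kv[1]) if pairs else (None, 0)
        if count < 2:     # no digram repeats: nothing left to factor out
            break
        nt = f"R{next_id}"
        next_id += 1
        rules[nt] = pair
        out, i = [], 0    # greedy left-to-right, non-overlapping replacement
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(nt)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

def expand(seq, rules):
    # Recursively inline every rule to recover the original sequence.
    out = []
    for s in seq:
        out.extend(expand(rules[s], rules) if s in rules else [s])
    return out

seq, rules = compress("abcabcabc")
assert expand(seq, rules) == list("abcabcabc")
```

On "abcabcabc" this yields a two-symbol sequence plus three rules (e.g. R0 -> ab, R1 -> R0 c, R2 -> R1 R1) instead of nine symbols; shorter representation, same program.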

Myself, I always strive to delete some code or replace it with a shorter version: first to understand it better, and second so that when I come back there is less to read.


It's most likely both.

> There are tons of tiny edge cases and standards to comply with where implementing one standard will break 3 others if not done carefully. AI can't do that right now.

Firstly, the CI is completely broken on every commit and all the tests have failed; looking closely at the code, it is exactly what you would expect from unmaintainable slop.

More lines of code is not a good measure of robust software, especially if the software does not work.


Not only edge cases and standards, but also tons of performance optimizations.

Web browsers are insanely hard to get right, that’s why there are only ~3 decent implementations out there currently.

The one nice thing about web browsers is that they have a reasonably formalized set of specifications and a huge array of conformance tests to run against. That makes them a fairly unusual proposition, arguably well suited to AI construction.

As far as I've read on Ladybird's blog updates, the issue is less the formalised specs and more that other browsers break them; websites adjust to that, so your design has to account for the non-compliance.

You should make your own predictions, and then we can do a retrospective on who was right.

Yeah if you let them index chromium I'm sure it could do it next week. It just won't be original or interesting.

Because it makes him look smart when, inevitably, he's 'right'.

Please don't cross into personal attack on HN.

https://news.ycombinator.com/showhn.html


HN feels like where you should go for the worst AI takes and head-in-the-sand copium.


I like it. You’ve gone a bit too far though. It might be time to dwell on the bitter lesson and scale back.


Not sure what you mean? Thanks for the comment! I did not think I went far enough!


Frontier AI isn't trained on frontier AI. I wish HN would collectively stop and actually think before posting.


I would love to make these videos for you if you want to pay for my time. Drop me an email at josh.d.griffith at gmail and tell me what you want to see and what you'd pay. I can vibe code at any scale.


I assume this is a reply in jest :)

> I can vibe code at any scale.

That's the thing - I know what 'vibe coding' is because that's pretty much how I use AI, as an exploratory tool or interactive documentation or a search engine for topics I want surface level information about.

It does not make me 10x-100x more efficient. It's a toy and a learning tool; it could be replaced or removed and I wouldn't miss it that much.

Clearly I am missing something. I care about quality software, so if it's making someone 100x more productive but they're producing the same subpar nonsense they would anyway, then I am not interested. Hence I want to see a really proficient programmer use it, be 10x+ more productive, and have a quality product at the end. That's what I want to see demonstrated.


What would you need to see to change your mind? I can generate at mind-boggling scale. What’s your threshold for realizing you might not have explored every possible vector for AI capabilities?


I promise you that I can show you how to reliably solve any of them using any of the latest OpenAI models. Email me if you want proof; josh.d.griffith at gmail


I'd watch that show, though ideally with a few ground rules, e.g.

- the problems to solve must NOT be part of the training set

- the person using the tool (e.g. OpenAI, Claude, DevStral, DeepSeek, etc) must NOT be able to solve problems alone

as I believe otherwise the first is "just" search and the second is basically offloading the actual problem solving onto the user.


> the person using the tool (e.g. OpenAI, Claude, DevStral, DeepSeek, etc) must NOT be able to solve problems alone

I think this is a good point, as I find the operator's input is often forgotten when considering the AI's output. If it took me an hour and decades of expertise to get the AI to output the right program, did the AI really do it? Could someone without my expertise get the same result?

If not, then maybe we are wasting our time trying to mash our skills through vector space via a chat interface.


I'm talking about generalized solutions that solve all of them.


There are like a dozen well-established ways to overcome this. Learn how to use the basic tools and patterns my dude.


You clearly live in a different reality than me entirely. Complete opposite experience.


Would you kindly detail how your experience is different? "Complete opposite experience" without details does not really say much.


Asset generation is hard.

