Hacker News

2029? I have no idea why you would think this is so far off. More like Q2 2026.




You're either overestimating the capabilities of current AI models or underestimating the complexity of building a web browser. There are tons of tiny edge cases and standards to comply with where implementing one standard will break 3 others if not done carefully. AI can't do that right now.

Even though several people seconded the complexity of a browser, I must add one more take and bring up one of my all time favorite blog posts, back from 2000 (I am old), when browsers were already complex, Joel Spolsky's Joel On Software episode "Things You Should Never Do, Part I" https://www.joelonsoftware.com/2000/04/06/things-you-should-... His first example was Netscape browser v6.0, and why there wasn't a v5.0 after v4.0, why it took three years: "They did it by making the single worst strategic mistake that any software company can make: They decided to rewrite the code from scratch." I think this blog post is very relevant here.

Even if AI will not achieve the ability to perform at this level on its own, it clearly is going to be an enormous force multiplier, allowing highly skilled devs to tackle huge projects more or less on their own.

Skilled devs compress, not generate (expand).

https://www.youtube.com/watch?v=8kUQWuK1L4w

The "discoverer" of APL tried to express as many problems as he could with his notation. At first he found that the notation expanded, but after some more use he found that it began shrinking.

The same goes for Forth, which provides the means for a Sequitur-compressed [1] representation of a program.

[1] https://en.wikipedia.org/wiki/Sequitur_algorithm
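To make the compression idea concrete, here is a rough sketch of digram replacement, the core trick behind Sequitur. This is a simplified offline variant (closer to the Re-Pair algorithm than to Sequitur's online version): repeatedly replace the most frequent adjacent pair with a new rule symbol. All names here are illustrative.

```python
from collections import Counter

def compress(seq):
    """Repeatedly replace the most frequent adjacent pair in seq with a
    fresh rule symbol, recording the rule. Stops when no pair repeats."""
    rules = {}
    next_id = 0
    seq = list(seq)
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:  # nothing repeats; grammar is done
            break
        sym = f"R{next_id}"
        next_id += 1
        rules[sym] = pair
        # Rewrite the sequence, greedily replacing non-overlapping
        # occurrences of the chosen pair with the new symbol.
        out, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

def expand(seq, rules):
    """Invert compress(): recursively substitute rule symbols back."""
    out = []
    for s in seq:
        if s in rules:
            out.extend(expand(list(rules[s]), rules))
        else:
            out.append(s)
    return out
```

On repetitive input like `"abcabcabc"` the sequence collapses to a couple of rule symbols plus a small rule table, which is the "shrinking" effect the APL anecdote describes.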

Myself, I always strive to delete some code or replace some code with a shorter version. First, to understand it better; second, so that when I return I have less to read.


It's most likely both.

> There are tons of tiny edge cases and standards to comply with where implementing one standard will break 3 others if not done carefully. AI can't do that right now.

Firstly, the CI is completely broken on every commit and all the tests have failed; looking closely at the code, it is exactly what you'd expect of unmaintainable slop.

Having more lines of code is not a good measure of robust software, especially if it does not work.


Not only edge cases and standards, but also tons of performance optimizations.

Web browsers are insanely hard to get right, that’s why there are only ~3 decent implementations out there currently.

The one nice thing about web browsers is that they have a reasonably formalized specification set and a huge array of tests that can be used. So this makes them a fairly unique proposition ideally suited to AI construction.

As far as I read on Ladybird's blog updates, the issue is less the formalised specs and more that other browsers break the specs, websites adjust to that, and so you need to account for that non-compliance in your design.

You should make your own predictions, and then we can do a retrospective on who was right.

Yeah, if you let them index Chromium I'm sure it could do it next week. It just won't be original or interesting.

because it makes him look smart when inevitably he's 'right'

Please don't cross into personal attack on HN.

https://news.ycombinator.com/showhn.html




