
I'm surprised by the number of bad takes on LLMs in this thread.

LLMs spoon-feed you information about how things are implemented. You are not supposed to know how everything works when you start these projects. You're supposed to try your best, inevitably fail, then research the topic, understand where you went wrong, and adjust your approach. If you already know how everything works and just follow the tutorial, you won't know what makes the other methods fail, and by extension what makes the one you chose work.

Write a language parser with a regex. Find out that it can't parse recursive statements. You've now learnt that regexes can only recognise a limited class of syntaxes. Try to work around this by pattern-matching the most deeply nested statement first. Find out that it blows up performance. You now know more about time complexity and know what to watch out for when you write a real parser.
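
To make that concrete, here's a rough Python sketch of my own (the pattern names are made up): a regex handles flat expressions and even one hard-coded level of parentheses, but it can't recurse, so deeper nesting falls through.

    import re

    # A flat arithmetic expression: digits separated by + or *.
    FLAT = r"\d+(?:\s*[+*]\s*\d+)*"
    # Hard-code exactly one level of parentheses around a flat expression.
    ONE_LEVEL = rf"(?:{FLAT}|\(\s*{FLAT}\s*\))(?:\s*[+*]\s*(?:{FLAT}|\(\s*{FLAT}\s*\)))*"

    def parses(expr: str) -> bool:
        return re.fullmatch(ONE_LEVEL, expr) is not None

    print(parses("1 + 2 * 3"))          # True  -- flat
    print(parses("(1 + 2) * 3"))        # True  -- one level of nesting
    print(parses("((1 + 2) * 3) + 4"))  # False -- the pattern can't recurse

You can keep stacking levels by hand, but each one roughly doubles the pattern, which is the lesson: regular expressions describe regular languages, and arbitrarily deep nesting isn't regular.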

Write a non-optimizing compiler from scratch. Find out that you can't make do with unsound optimizations because you can't keep track of what optimizations are applied where. Find out that implementing sound optimizations is hard because you need to track use-def chains. Then you'll understand why SSA is used. Find out that code motion is a mess. Learn about sea of nodes. Merge every optimization pass into one because you're unable to order passes right. Learn how e-graphs solve this.
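
A toy illustration of the use-def problem (my own Python sketch, nothing standard): with reusable variable names, a constant-folding pass has to work out which definition of x reaches each use; in SSA form every name is defined exactly once, so the lookup is trivial.

    # Toy straight-line IR: (dest, op, args).

    # Non-SSA: 'x' is redefined, so a pass that wants to propagate x = 1
    # into later uses must first compute which definition reaches each use.
    non_ssa = [
        ("x", "const", 1),
        ("y", "add", ("x", "x")),   # uses the first x
        ("x", "const", 5),
        ("z", "add", ("x", "y")),   # uses the second x
    ]

    # SSA: every name is defined exactly once, so "the value of x1"
    # is unambiguous and constant folding is a dictionary lookup.
    ssa = [
        ("x1", "const", 1),
        ("y1", "add", ("x1", "x1")),
        ("x2", "const", 5),
        ("z1", "add", ("x2", "y1")),
    ]

    def const_fold(code):
        """Fold every operation whose arguments are already known constants."""
        env = {}
        for dest, op, args in code:
            if op == "const":
                env[dest] = args
            elif op == "add" and all(a in env for a in args):
                env[dest] = env[args[0]] + env[args[1]]
        return env

    print(const_fold(ssa))  # {'x1': 1, 'y1': 2, 'x2': 5, 'z1': 7}

On the non-SSA version the same pass only happens to give the right answer because the code is straight-line; add branches and you need explicit use-def chains to know which x a use refers to, which is exactly the bookkeeping SSA gives you for free.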

Write a layout engine. Get stuck on being unable to define what a "width" is. Work around this with min/max/natural widths, introduce binary search, etc. Learn how this stuff works in practice (this is something I haven't personally done yet).
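
Purely as an illustration of the min/natural-width idea (a made-up Python sketch, not how any real engine does it): each box reports the narrowest width it can tolerate and the width it would like, and the parent squeezes its children between the two.

    from dataclasses import dataclass

    @dataclass
    class Text:
        words: list[str]

        def min_width(self) -> int:
            # Narrowest usable width: the longest unbreakable word.
            return max(len(w) for w in self.words)

        def natural_width(self) -> int:
            # Width with unlimited space: all words on one line.
            return sum(len(w) for w in self.words) + len(self.words) - 1

    @dataclass
    class Row:
        children: list[Text]

        def layout(self, available: int) -> list[int]:
            """Give each child its natural width if everything fits;
            otherwise shrink proportionally, never below the minimum."""
            naturals = [c.natural_width() for c in self.children]
            mins = [c.min_width() for c in self.children]
            deficit = sum(naturals) - max(available, sum(mins))
            if deficit <= 0:
                return naturals
            slack = [n - m for n, m in zip(naturals, mins)]
            total = sum(slack) or 1
            return [n - deficit * s // total for n, s in zip(naturals, slack)]

    row = Row([Text(["hello", "layout", "world"]), Text(["a", "b"])])
    print(row.layout(100))  # [18, 3] -- everything gets its natural width
    print(row.layout(15))   # [13, 3] -- squeezed, but not below the minimums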

They say we learn from mistakes. Please don't let the smart (or "smart", depending on how you look at it) machine stop you from making them. It's not a teacher and it doesn't know how to educate.



A lot of people say if you don’t use LLMs then you will fall behind. I’m starting to think that not using them will be a significant advantage in the long run.


I think LLMs improve productivity in the present at a significant cost for the future. It's like cutting an R&D department. You might be able to utilize existing approaches better, but you won't make progress, and I think people are way too overconfident in believing everything important has already been developed.

I guess the counterargument here would be that LLMs could improve research as well by automating menial tasks. It's kind of similar to how computing has enabled brute-force proofs in math. But I think the fact that students are still required to prove theorems on paper and that problems with brute-force solutions are still studied analytically should show that tools like computers or LLMs are not at all a replacement for the typical research process.


IMO we are going to see a large class of people who have cognitive deficits brought on by AI tool usage.

I've been wondering lately about how to distinguish between tools that enhance your cognitive ability, and tools that degrade it. Jobs called a computer a "bicycle for the mind," and it seems like LLMs are an easy-chair for the mind. I'm not sure a priori how to distinguish between the two classes of tools though. Maybe there is no other tool like an LLM.


I think there's both. The LLM is an incredible tool you should be able to use well, but it's a complement to your other knowledge and tools, not a replacement. If you don't add the LLM to your toolset, you're not going to be building at the same scale as people who do, and if you don't have the backing knowledge, your LLM outputs are going to be junk, because you won't be able to point it in the right direction soon enough in the context window.


Using LLMs is like moving to management: you lose your edge on detailed execution, but you improve on accountability and long-term impact.


In the past we used to copy code verbatim from magazines. You have to start somewhere, right?


But your brain was the clipboard. That simple process of transcription was something that you couldn't avoid learning from even if you wanted to. You'd notice the connections between the commands you typed and things that happened when you ran the program even if you weren't trying to.

Things would start to click, and then you'd have those moments of curiosity about how the program might behave differently if you adjusted one particular line of code or changed a parameter, and you'd try it, which would usually provoke the next moment of curiosity.

This was how many of us learned how to write code in the first place. Pasting the output from an LLM into your source tree bypasses that process entirely -- it's not the same thing at all.



