
I started my career as a developer in the 1990s and cut my teeth in C++, moving on to Python, Perl, Java, etc. in the early 2000s. Then I did management roles for about 20 years and was no longer working at the "coal face," despite having learned some solid software engineering discipline in my early days.

As an old geezer, I appreciate very much how LLMs enable me to skip the steep part of the learning curve you have to scale to get into any unfamiliar language or framework. For instance, LLMs enabled me to get up to speed on using Pandas for data analysis. Pandas is very tough to get used to unless you emerged from the primordial swamp of data science along with it.

So much of programming is just learning a new API or framework. LLMs absolutely excel at helping you understand how to apply concept X to framework Y. And this is what makes them useful.

Each new LLM release makes things substantially better, which makes me substantially more productive, unearthing software engineering talent that was long ago buried in the accumulating dust pile of language and framework changes. To new devs, I highly encourage focusing on the big picture software engineering skills. Learn how to think about problems and what a good solution looks like. And use the LLM to help you achieve that focus.



> So much of programming is just learning a new API or framework.

Once you're good at it in general. I recently witnessed what happens when a junior developer just uses AI for everything, and I found it worse than if a non-developer used AI: at least they wouldn't confuse the model with their half-understood ideas and wouldn't think they could "just write some glue code", break things in the process, and then confidently state they solved the problem by adding some jargon they've picked up.

It feels more like an excavator: useful in the right hands, dangerous in the wrong hands. (I'd say excavators are super useful and extremely dangerous; I think AI is not as extreme in either direction.)


It used to (pre-'08 or so) be possible to be "good at Google".

Most people were not. Most tech people were not, even.

Using LLMs feels a ton like working with Google back then, to me. I would therefore expect most people to be pretty bad at it.

(it didn't stop being possible to be "good at Google" because Google Search improved and made everyone good at Google, incidentally—it's because they tuned it to make being "bad at Google" somewhat better, but eliminated much of the behavior that made it possible to be "good at Google" in the process)


This is an excellent analogy. I will be borrowing it. Thank you.


Fun fact, I recently started reading Designing LLM Applications[0] (which I'm very much enjoying by the way) and it draws this exact analogy in the intro!

0: https://www.oreilly.com/library/view/designing-large-languag...


I swear I didn't steal mine from there, LOL. Maybe I'm on the right track if others are noticing similar things about the experience of using LLMs, though.


Yes, exactly. I meant it as evidence that there is something to this insight. Also, the fact that the analogy stuck with me from the book enough to make the connection when I saw your comment shows it struck me as a good point.


Thank you, you just sold me on an O'Reilly free trial. Let's see what damage I can do to that book in 10 days.


I've been using it to learn Lean, the proof assistant language, and it's great. The code doesn't always compile, but the general structure and approach is usually correct, and it helps understand different ways of doing things and the pros, cons, and subtleties of each.

From this it has me wondering if AI could increase the adoption of provably correct code. Dependent types have a reputation for being hard to work with, but with AI help, it seems like they could be a lot more tractable. Moreover, it'd be beneficial in the other direction too: the more constraints you can build into the type system of your domain model, the harder it will be for an AI to hallucinate something that breaks it. Anything that doesn't satisfy the constraints will fail to compile.
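The idea that type-level constraints reject hallucinated code at compile time can be sketched in a few lines of Lean 4 (the `safeGet` name and the toy example are mine, just for illustration):

```lean
-- Hypothetical sketch: a dependent type encodes a domain constraint.
-- A value of type `Fin xs.size` is a natural number bundled with a
-- proof that it is less than xs.size, so an out-of-range index cannot
-- even be written down.
def safeGet (xs : Array Nat) (i : Fin xs.size) : Nat :=
  xs.get i

-- A hallucinated call like `safeGet #[1, 2] ⟨5, by decide⟩` fails to
-- type-check, because no proof of 5 < 2 exists; the bad index never
-- reaches runtime.
```

The same principle scales up: the richer the invariants in your domain model's types, the more AI mistakes get caught by the elaborator instead of by users.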

I doubt it, but wishful thinking.


Yep, I absolutely relate to this. ChatGPT happened to come out right when I needed to learn how to use Kubernetes, after having used a different container orchestrator. It made this so much easier.

Ever since, this has been my favorite use case, to cut through the accidental complexity when learning a new implementation of a familiar thing. This not only speeds up my learning process and projects using the new tool, it also gives me a lot more confidence in taking on projects with unfamiliar tools. This is extremely valuable.


Also agree. I've been playing with Godot for some super simple game dev, and it's been surprisingly fantastic at helping me navigate Godot's systems (Nodes, how to structure a game, the Godot API) so I can get to the stuff that I find enjoyable (programming gameplay systems).

No, it's not perfect and I imagine there are some large warts as a result, but it was much, much better than following a bog-standard tutorial on YouTube to get something running, and I'm always able to go refactor my scripts later now that I'm past initial scaffolding and setup.


> primordial swamp of data science

This deeply resonates with me every time I stare at pandas code seeking to understand it.


Yes. I am routinely aghast at its poor legibility compared to either R dataframes or the various idioms you learn in Matlab/numpy for doing the same things.


Yep, same. I'm an old hand at pandas, and writing a 300 line script in pandas and asking Claude to rewrite it to polars taught me polars faster than any other approach I've used to learn a new framework.
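As a hedged illustration of why that kind of translation is so mechanical (the toy DataFrame, column names, and polars sketch here are mine, not from the script above), the two APIs often map nearly one-to-one:

```python
import pandas as pd

# A typical pandas idiom: group, aggregate, rename -- the sort of
# snippet an LLM can translate almost line-for-line into another
# dataframe API.
df = pd.DataFrame({"city": ["NYC", "NYC", "LA"], "sales": [10, 20, 5]})
out = (
    df.groupby("city", as_index=False)["sales"]
      .sum()
      .rename(columns={"sales": "total_sales"})
)

# Hypothetical polars equivalent of the chain above (sketch):
#   pl.DataFrame(...).group_by("city")
#     .agg(pl.col("sales").sum().alias("total_sales"))
```

Because the correspondence is this direct, the model mostly has to know both vocabularies rather than reason about your logic, which is exactly the translation task the comment below points at.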


I guess LLMs are good at mapping one thing to another. Just like translating a real language.


I am pretty much in the same boat, although I was never that advanced a dev to begin with.

It is truly amazing what a superpower these LLM tools are for me. This particular moment in time feels like a perfect fit for my knowledge level. I am building as many MVP ideas as quickly as I can. Hopefully, one of them sticks with users.



