I like this analogy along with the idea that "it's not an autonomous robot, it's a mech suit."
Here's the thing -- I don't care about "getting stronger." I want to make things, and now I can make bigger things WAY faster because I have a mech suit.
edit: and to stretch the analogy, I don't believe much is lost "intellectually" by my use of a mech suit, as long as I observe carefully. Me doing things by hand is probably overrated.
The point of going to school is to learn all the details of what goes into making things, so when you actually make a thing, you understand how it’s supposed to come together, including important details like correct design that can support the goal, etc. That’s the “getting stronger” part that you can’t skip if you expect to be successful. Only after you’ve done the work and understand the details can you be successful using the power tools to make things.
The point of school for me was to get a degree. 99% of the time at school was useless. The internet was a much better learning resource. Even more so now that AI exists.
I graduated about 15 years ago. In that time, I’ve formed the opposite opinion. My degree - the piece of paper - has been mostly useless. But the ways of thinking I learned at university have been invaluable. That and the friends I made along the way.
I’ve worked with plenty of self taught programmers over the years. Lots of smart people. But there’s always blind spots in how they approach problems. Many fixate on tools and approaches without really seeing how those tools fit into a wider ecosystem. Some just have no idea how to make software reliable.
I’m sure this stuff can be learned. But there is a certain kind of deep, slow understanding you just don’t get from watching back-to-back 15 minute YouTube videos on a topic.
>I’ve worked with plenty of self taught programmers over the years. Lots of smart people. But there’s always blind spots in how they approach problems.
I've worked with PhDs on projects (I'm self-taught), and those guys absolutely have blind spots in how they approach problems, plenty of them. Everyone does. What we produce together is better because our blind spots don't typically overlap. I know their weaknesses, and they know mine. I've also worked with college grads that overthink everything to the point they made an over-abstracted mess. YMMV.
>you just don’t get from watching back-to-back 15 minute YouTube videos on a topic.
This is not "self taught". I mean maybe it's one kind of modern-ish concept of "self taught" in an internet comment forum, but it really isn't. I watch a ton of sailing videos all day long, but I've never been on a sailboat, nor do I think I know how to sail. Everyone competent has to pay their dues and learn hard lessons the hard way before they get good at anything, even the PhDs.
I think it depends on how they were self taught. If they just went through a few tutorials on YouTube and learned how to make a CRUD app using the shiny tool of the week, then sure. (I acknowledge this is a reduction in self-teaching — I myself am self-taught).
But if they actually spent time trying to learn architecture and how to build stuff well, either by reading books or via good mentorship on the job, then they can often be better than the folks who went to school. Sometimes even they don't know how to make software reliable.
I'm firmly in the middle. Out of the 6 engineers I work with on a daily basis (including my CTO), only one of us has a degree in CS, and he's not the one in an architecture role.
I do agree that learning how to think and learn is its own valuable skill set, and many folks learn how to do that in different ways.
> But if they actually spent time trying to learn architecture and how to build stuff well, either by reading books or via good mentorship on the job, then they can often be better than the folks who went to school.
Yeah I just haven’t seen this happen. I’ve seen plenty of people graduate who were pretty useless. But … I think every self taught programmer I’ve worked with had meaningful gaps in their knowledge.
They’d spend a week in JavaScript to avoid 5 minutes with C or bash. Or they’d write incredibly slow code because they didn’t know the appropriate algorithms and data structures. They wouldn’t know how to profile their program to learn where the time is being spent (or that that’s even a thing). Some would have terrible intuitions about how the computer actually runs a program, so they couldn’t guess what would be fast or slow. I’ve seen wild abstractions built to work around misunderstandings of the operating system. Hundreds of lines to deal with a case that can’t actually ever happen, or because someone missed the memo on a syscall that solves their exact problem. There are also hairball nests of code because someone doesn’t know what a state machine is, or how to factorise their problem in other ways. One guy I worked with thought the React team invented functional programming. Someone else couldn’t understand how you could write programs without OO inheritance. And I’ve seen so many bugs. Months of bugs that could have been prevented with the right design and tests.
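To make the profiling point concrete: finding where the time goes is usually a few lines with a standard-library profiler. This is a minimal sketch (not from the comment; `slow_concat` is a made-up example of a quadratic hot spot), using Python's `cProfile`:

```python
# Hypothetical example: locate a hot spot with the stdlib profiler.
import cProfile
import io
import pstats

def slow_concat(n):
    # Quadratic on purpose: each += copies the whole string so far.
    s = ""
    for _ in range(n):
        s += "x" * 100
    return s

profiler = cProfile.Profile()
profiler.enable()
slow_concat(2000)
profiler.disable()

# Print the five most expensive calls by cumulative time.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

The stats table names `slow_concat` near the top, which tells you exactly which function to rewrite, rather than guessing.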
I’ve worked with incredibly smart self taught programmers. Some of the smartest people I’ve ever worked with. But the thing about blind spots is you don’t know you have them. You say you’re self taught, and self taught people can be better than people who went to school. In limited domains, yeah. Smart matters a lot. But you don’t know what you don’t know. You don’t know what you missed out on. And you don’t know what problems in the workplace you could have easily solved if you knew how.
Yeah, I agree, but not knowing what you don't know applies to almost everyone in every skill, not just programming. I acknowledge I have gaps in my knowledge. But it's because of those gaps that I am always trying to supplement my knowledge by studying different data structures, different patterns for solving problems, different algorithms. I don't aim for complete mastery. I aim for basically "what can I add to my bag of problem solving tools." I concede that because the barrier to entry is low, stories similar to your anecdotes are probably quite common in most self-taught programmers. I think this just speaks to the necessity of rigor during the interview process. Like, does the candidate just know how to build features, or do they know how to design fail-proof systems?
Also, to clarify, I'm not arguing that self-taught vs CS grad is mutually exclusive to smart/not smart. There are plenty of not-smart self-taught engineers and plenty of smart grads.
> In limited domains
I'd argue that many, if not most, teams operate in limited domains.
> I think this just speaks to the necessity of rigor during the interview process.
That gets expensive, fast. There's just so much to cover already, between communication skills, programming skills, debugging skills, architecture / "whiteboarding problems", data structures and algorithms, general problem solving ("interview problems"). A job interview can never be a fully rigorous test of someone's actual skills. Most don't cover even a fraction of that stuff already.
> I'd argue that many, if not most, teams operate in limited domains.
It depends what you consider yourself responsible for. If you think of your job (or your team's job) as shipping features X, Y and Z within this react based web app, then sure - you operate in a limited domain. But if your job is "solve the user's actual problems" then things can get pretty broad, pretty fast. Sometimes you write code. Sometimes you're debugging a hard problem. Or talking to the users. Or identifying and tracking down a performance regression. Or writing an issue for a bug in 3rd party code. Or trawling through MDN to figure out a workaround to some browser nonsense. Or writing reliable tests, or CI/CD systems. And so on.
It's only really junior engineers who have the luxury of a limited scope.
I haven't heard of self-taught programmers binging 15-minute YT videos. I can't recall the last time I did myself. Aside from conference talks and such, it's probably been at least 5 years since I watched something explaining things in the realm of programming.
For a motivated learner with access to good materials, schools provide two important things besides that very important piece of paper:
1. contacts - these come in the form of peers who are interested in the same things and in the form of experts in their fields of study. Talking to these people and developing relationships will help you learn faster, and teach you how to have professional collegial relationships. These people can open doors for you long after graduation.
2. facilities - ever want to play with an electron microscope or work with dangerous chemicals safely? Different schools have different facilities available for students in different fields. If you want to study nuclear physics, you might want to go to a school with a research reactor; it's not a good idea to build your own.
To extend 2 (facilities): my school had a somewhat older and smaller supercomputer that we got to run some stuff on.
And I'd argue for:
3. Realisation of the scope of computing.
I.e., computers are not just phones/laptops/desktops/servers with networking - all hail the wonders of the web... There are embedded devices, robots, supercomputers. (Recent articles on HN describe the computing power in a disposable vape!)
There are issues at all levels with all of these with algorithms, design, fabrication, security, energy, societal influence, etc etc - what tradeoffs to make where. (Why is there computing power in a disposable vape?!?)
I went in thinking I knew 20% and I would learn the other 80% of IT. I came out knowing 5 times as much but realising I knew a much smaller percentage of IT... It was both enabling and humbling.
But you can also meet experts at a company and get access to a company's machinery. To top it off the company pays you instead of you paying the school.
> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it? — The Elements of Programming Style, 2nd edition, chapter 2
If you weren't even "clever enough" to write the program yourself (or, more precisely, if you never cultivated a sufficiently deep knowledge of the tools & domain you were working with), how do you expect to fix it when things go wrong? Chatbots can do a lot, but they're ultimately just bots, and they get stuck & give up in ways that professionals cannot afford to. You do still need to develop domain knowledge and "get stronger" to keep pace with your product.
Big codebases decay and become difficult to work with very easily. In the hands-off vibe-coded projects I've seen, that rate of decay was extremely accelerated. I think it will prove easy for people to get over their skis with coding agents in the long run.
I think this goes for many different kinds of projects. Take React, for example, or jQuery, or a multitude of other frameworks and libraries. They abstract out a lot of stuff and make it easier to build things! But we've also seen that with ease of building comes ease of slop (I saw plenty of sloppily coded React even before LLMs). Then React introduced hooks to hopefully reduce the slop, and then somehow it got sloppy in other ways.
That's kinda how I see vibe coding. It's extremely easy to get stuff done but also extremely easy to write slop. Except now 10x more code is being generated thus 10x more slop.
Learning how to get quality, robust code out of AI is part of its learning curve. It really is an emergent field, changing every day.
Yeah I think that's an interesting point of comparison. There's definitely a phenomenon where people can take their abstractions for granted and back themselves into corners because they have no deeper understanding of what their framework does under the hood.
The key difference with LLMs is that React was written very intentionally by smart engineers who provided a wealth of documentation to help people who need to peek under the hood of their framework. If your LLM has written something you don't understand, though, chances are nobody does, and there's nowhere to turn.
If (as Peter Naur famously argued) programming is theory building, then an abstraction like a framework lets you borrow someone else's theory. You skip developing an understanding of the underlying code and hope that you'll either never need to touch the underlying code or that, if you do, you can internalize the required theory later, as needed. LLM-generated code has no theory; you either need to supervise it closely enough to impose your own, or treat it as disposable.
> LLM-generated code has no theory; you either need to supervise it closely enough to impose your own, or treat it as disposable.
Agreed! And I think that's what I'm getting at. Adding what they're now calling "skills," or writing your own, is becoming crucial to LLM-assisted development. If the LLM is writing too much slop, then there probably wasn't sufficient guidance to ensure that slop wouldn't be written.
The first step of course is to actually check that the generated code is indeed slop, which is where many people miss the mark.
No, it's not a mech suit. A mech suit doesn't fire its canister rifle at friendly units and then say "You're absolutely right! I should have done an IFF before attacking that unit." (And if it did the engineer responsible should be drawn and quartered.) Mech-suit programming AI would look like something that reads your brainwaves and transduces them into text, letting you think your code into the machine. I'd totally use that if I had it.
This analogy works pretty well. Too much time doing everything in it and your muscles will atrophy. Some edge cases will be better if you jump out and use your hands.
There's also plenty of mech tales where the mech pilots need to spend as much time out of the suits making sure their muscles (and/or mental health) are in good strength precisely because the mechs are a "force multiplier" and are only as strong as their pilot. That's a somewhat common thread in such worlds.
Yes. Also, it's a fairly common trope that if you want to pilot a mech suit, you need to be someone like Tony Stark. He's a tinkerer and an expert. What he does is not a commodity. And when he loses his suit and access to his money? His big plot arc is that he is Iron Man. He built it in a cave out of a box of scraps, etc.
There are other fictional variants: the giant mech with the enormous support team, or Heinlein's "mobile infantry." And virtually every variation on the Heinlein trope has a scene of drop commandos doing extensive pre-drop checks on their armor.
The actual reality is that it isn't too hard for a competent engineer to pair with Claude Code, if they're willing to read the diffs. But if you try to increase the ratio of agents to humans, dealing with their current limitations quickly starts to feel like you need to be Tony Stark.
Funny, because I was thinking of Evangelion's predecessor, Gunbuster, in which cadets are shown undergoing grueling physical training both in and out of their mechs to prepare for space combat.
I like the electric bike as a metaphor. You can go further faster, but you quickly find yourself miles from home and out of juice, and you ain't in shape enough to get that heavy bugger back.
As long as we're beating the metaphor... so don't do that? Make sure you charge the battery and that it has enough range to get you home, and bring the charger with you. Or in the LLM's case, make sure it's not generating a ball of mud (code). Refactor often, into discrete classes and distinct areas of functionality, so that you're never miles from home and out of juice.
> to stretch the analogy, I don't believe much is lost "intellectually" by my use of a mech suit, as long as I observe carefully.
With all respect, that's nonsense.
Absolutely no one gains more than a superficial grasp of a skill just by observing.
And even with a good grasp of skills, human boredom is going to atrophy any ability you have to intervene.
It's why the SDCs (Tesla, I think) that required the driver to stay alert to take control while the car drove itself were such a danger - after 20+ hours of not having to do anything, the very first time a normal reaction time to an emergency is required, the driver is too slow to take over.
If you think you are learning something reviewing the LLM agent's output, try this: choose a new project in a language and framework you have never used, do your usual workflow of reviewing the LLMs PRs, and then the next day try to do a simple project in that new language and framework (that's the test of how much you learned).
Compare that result to doing a small project in a new language, and then the next day doing a different small project in that same language.
If you're at all honest with yourself, or care whether you atrophy or not, you'd actually run that experiment and control and objectively judge the results.
I'd agree, if my goal was "to be a great and complete coder."
I don't. I want just enough to build cool things.
Now, that's just me.
That being said, I'd also venture to say that your attitude here might be a tad dinosaurish. I like it too, but also, know that to a large extent, especially in the market -- this "quality" that you're striving for here may just not happen.
OK, it’s a mech suit. The question under discussion is, do you need to learn to walk first, before you climb into it? My life experience has shown me you can’t learn things by “observing”, only by doing.
Yes, you can learn to walk in the mech suit. Let’s put one leg forward, then the next, good. You are now 100% production ready at walking. Let’s run a load test. You’re running now. Now you’re running into the ocean. “I want to swim now.” You’re absolutely right! You should be swimming. Since we don’t have a full implementation of swimming let me try flailing the arms while increasing leg speed. That doesn’t seem to work. The user is upside down on the ocean floor burrowing themselves into the silt. Task Complete. Summary: the user has learned to walk.
>Here's the thing -- I don't care about "getting stronger."
Let's not mince words here, what you mean is that you don't care to learn about a craft. You just want to get to the end result, and you are using the shiny new tool that promises to take you from 0 to 100% with little to no effort.
In this way, I'd argue what you are doing is not "creating", but engaging in a new form of consumption. It used to be you relied on algorithms to present to you content that you found fun, but the problem was that algorithm required other humans to create that content for you to later consume. Now with LLMs, you remove the other humans from the loop, and you can prompt the AI directly with exactly what you wish to see in that moment, down to the fine grained details of the thing, and after enough prompts, the AI gives you something that might be what you asked for.
This strikes me as extreme cope from the other end. There may be some truth to that, but it also kind of reminds me of "how can you possibly create a new kind of tractor unless you know exactly how to build a combustion engine yourself?"
If all I know is the mech suit, I’ll struggle with tasks that I can’t use it for. Maybe even get stuck completely. Now it’s a skill issue because I never got my 10k hours in and I don’t even know what to observe or how to explain the outcome I want.
In true HN fashion of trading analogies, it’s like starting out full powered in a game and then having it all taken away after the tutorial. You get full powered again at the end but not after being challenged along the way.
This makes the mech suit attractive to newcomers and non-programmers, but only because they see product in massively simplified terms. Because they don’t know what they don’t know.
The mech suit works well until you need to maintain stateful systems. I've found that while initial output is faster, the AI tends to introduce subtle concurrency bugs between Redis and Postgres that are a nightmare to debug later. You get the speed up front but end up paying for it with a fragile architecture.
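The "subtle concurrency bug" being described is, I'd guess, a variant of the classic dual-write race: two writers update the database and the cache as separate, uncoordinated steps, and one ordering leaves the cache permanently stale. A minimal hand-interleaved sketch (not from the comment; plain dicts stand in for Postgres and Redis, and the step functions are hypothetical):

```python
# Hypothetical illustration of a dual-write race between a database
# and a cache. Dicts stand in for Postgres and Redis; the interleaving
# is forced by hand to show one bad ordering deterministically.

postgres = {"balance": 100}
redis_cache = {"balance": 100}

def writer_a_db_step():
    postgres["balance"] = 200       # A commits to the database...

def writer_b_full():
    postgres["balance"] = 300       # B commits to the database
    redis_cache["balance"] = 300    # ...and updates the cache immediately.

def writer_a_cache_step():
    redis_cache["balance"] = 200    # A's delayed cache write lands last
                                    # and clobbers B's fresher value.

# One unlucky scheduling: A writes the DB, B runs to completion,
# then A's cache update arrives late.
writer_a_db_step()
writer_b_full()
writer_a_cache_step()

print(postgres["balance"], redis_cache["balance"])  # DB and cache now disagree
```

After this interleaving the database holds 300 while the cache serves 200 forever (until the key expires or is invalidated), which is exactly the kind of bug that's easy to write and a nightmare to reproduce.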
If observing was as good as doing, experience would mean nothing.
Thinking through the issue, instead of having the solve presented to you, is the part where you exercise your mental muscles. A good parallel is martial arts.
You can watch it all you want, but you'll never be skilled unless you actually do it.