> In summary, I believe there would still be a market for Software Developers in the foreseeable future, though the nature of work will change
This is precisely what I dread. When it comes to software development specifically, the parts that the AI cheerleaders are excited about AI doing are exactly the parts of the job that I find appealing. If I wanted to be a glorified systems integrator, I would have been doing that job already. The parts that the author is saying will still exist are the parts I put up with in order to do the enjoyable and satisfying work.
So this essay, if it's correct, explains the way that AI threatens my career. Perhaps there is no role for me in the software development world anymore. I'm not saying that's bad in the big picture, just that it's bad for me. It increasingly appears that I've chosen the wrong profession.
This resonates strongly with me. I don't want to describe the painting, I want to paint it. If this is indeed where we end up, I don't know that I'll change professions (I'm 30+ years into it), but the joy will be gone. It will truly become "just a job".
I remember back in the 80s I had friends who enjoyed coding in assembly and felt that using higher-level languages was "cheating" - isn't this just a continuation of that?
Yeah, that's a good way of looking at it. We gradually remove technical constraints and move to a higher level of abstraction, much closer to the level of the user and the business rather than the individual machine. But what's the endpoint of this? There will probably always be a need for expert-level troubleshooters and optimizers who understand all the layers, but for the rest of us, I'm wondering if the job wouldn't generally become more product management than engineering.
I'm not sure that there is an endpoint, only a continuation of the transitions we've always been making.
What we've seen as we transitioned to higher and higher level languages (e.g., machine code → macro assembly → C → Java → Python) on unimaginably more powerful machines (and clusters of machines) is that we took on more complex applications and got much more work done faster. The complexity we manage shifts from the language and optimizing for machine constraints (speed, memory, etc.) to the application domain and optimizing for broader constraints (profit, user happiness, etc.).
I think LLMs also revive hope that natural languages (e.g., English) are the future of software development (COBOL's dream finally realized!). But a core problem with that has always been that natural languages are too ambiguous. To the extent we're just writing prompts and the models are the implementers, I suspect we'll come up with more precise "prompt languages". At that point, it's just the next generation of even higher level languages.
So, I think you're right that we'll spend more of our time thinking like product managers. But also more of our time thinking about higher level, hard, technical problems (e.g., how do we use math to build a system that dynamically optimizes itself for whatever metric we care about?). I don't think these are new trends, but continuing (maybe accelerating?) ones.
I don't think COBOL's dream was to generate enormous amounts of assembly code that users would then have to maintain (in assembly!) and producing differently wrong results every time you ran it.
It may not have been the dream, but the reality is many COBOL systems have been binary-patched to fix issues so many times that the original source may not be a useful guide to how the thing actually works.
> But also more of our time thinking about higher level, hard, technical problems (e.g., how do we use math to build a system that dynamically optimizes itself for whatever metric we care about?).
It’s likely that a near-future AI system can suggest suitable math and implement it in an algorithm for the problem the user wants solved. An expert who understands it might be able to critique and ask for a better solution, but many users could be satisfied with it.
Professionals who can deliver added value are those who understand the user better than the user themselves.
This kind of optimization is what I did for the last few years of my career, so I might be biased / limited in my thinking about what AI is capable of. But a lot of this area is still being figured out by humans, and there are a lot of tradeoffs between the math/software/business sides that limit what we can do. I'm not sure many business decision makers would give free rein to AI (they don't give it to engineers today). And I don't think we're close to AI ensuring a principled approach to the application of mathematical concepts.
When these optimization systems (I'm referring to mathematical optimization here) are unleashed, they will crush many metrics that are not a part of their objective function and/or constraints. Want to optimize this quarter's revenue and don't have time to put in a constraint around user happiness? Revenue might be awesome this quarter, but gone in a year because the users are gone.
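To make that concrete, here's a toy sketch (completely made-up numbers, nothing like the real system) of what happens when a metric is left out of the objective: the unconstrained optimizer happily picks a price that maximizes this quarter's revenue while quietly burning churn.

```python
# Toy sketch: maximize revenue with and without a churn constraint.
# All numbers are made up; the point is only the shape of the failure mode.

def demand(price):
    # Simple linear demand curve: the higher the price, the fewer buyers.
    return max(0.0, 1000 - 40 * price)

def revenue(price):
    return price * demand(price)

def churn(price):
    # Fraction of users who leave for good at this price.
    return min(1.0, 0.02 * price)

prices = [p / 10 for p in range(0, 301)]  # candidate prices 0.0 .. 30.0

# Objective only: this quarter's revenue.
best_unconstrained = max(prices, key=revenue)

# Same objective, but with a guardrail: churn capped at 20%.
best_constrained = max((p for p in prices if churn(p) <= 0.20), key=revenue)

for p in (best_unconstrained, best_constrained):
    print(f"price={p:.1f} revenue={revenue(p):.0f} churn={churn(p):.0%}")
```

The unconstrained run squeezes out a bit more revenue now at the cost of losing a quarter of the users; the constraint gives most of it back.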
The system I worked on kept our company in business through the pandemic by automatically adapting to frequently changing market conditions. But we had to quickly add constraints (within hours of the first US stay-at-home orders) to prevent gouging our customers. We had gouging prevention in place before, but the behavior suddenly changed in both shape and magnitude - the system was increasing prices significantly in certain areas and making them free in others.
AI is trained on the past, but there was no precedent for such a system in a pandemic. Or in this decade's wars, or under new regulations, etc. What we call AI today does not use reason. So it's left to humans to figure out how to adapt in new situations. But if AI is creating a black-box optimization system, the human operators will not know what to do or how to do it. And if the system isn't constructed in a mathematically sound way, it won't even be possible to constrain it without significant negative implications.
Gains from such systems are also heavily resistant to measurement, and measuring them is the only way to know whether they are breaking our business. This is because such systems typically involve feedback loops that invalidate the assumption of independence between cohorts in A/B tests. That means advanced experiment designs must be found, often custom for every use case. So, maybe in addition to thinking more like product managers, engineers will need to be thinking more like data scientists.
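A stripped-down sketch of that interference problem (again, toy numbers): both arms draw from the same shared inventory, so the naive A/B comparison can get even the sign of the effect wrong.

```python
import random

random.seed(0)
N, INVENTORY = 4000, 1200  # users, and a shared stock both arms draw from

def simulate(price_for):
    """price_for(user) -> (arm, price). Shared inventory couples the users."""
    stock = INVENTORY
    revenue = {"T": 0.0, "C": 0.0}
    for u in random.sample(range(N), N):  # users arrive in random order
        if stock == 0:
            break
        arm, price = price_for(u)
        if random.random() < max(0.0, 1.0 - 0.06 * price):  # toy buy probability
            stock -= 1
            revenue[arm] += price
    return revenue

# A/B test: even users see the new (higher) price, odd users keep the old one.
ab = simulate(lambda u: ("T", 12.0) if u % 2 == 0 else ("C", 10.0))

# "Ground truth": roll each price out to everyone and compare totals.
all_new = simulate(lambda u: ("T", 12.0))
all_old = simulate(lambda u: ("C", 10.0))

print("A/B estimate of lift:", ab["T"] - ab["C"])            # treatment looks like a loser
print("Full-rollout lift:   ", all_new["T"] - all_old["C"])   # but the rollout wins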
This is all just in the area where I have some expertise. I imagine there are many other such areas. Some of which we haven't even found yet because we've been stuck doing the drudgery that AI can actually help with. [cue the song Code Monkey]
This made me laugh out loud. Python is not a step up from Java in my opinion. Python is more of a step up from BASIC. It's a different evolutionary path. Like LISP.
The increase in productivity, we can all agree on, but a non-negligible portion of HN users would say that each one of those new languages made programming progressively less fun.
I think where people will disagree is how much productivity those steps brought.
For instance I think the step from machine code to macro assembler is bigger than the step from a macro assembler to C (although still substantial), but the step from C to anything higher level is essentially negligible compared to the massive jump from machine code to a 'low level high level' language like C.
So many other things happened at the same time too, so it's sometimes hard to untangle what is what.
For instance, say that C had namespaces, and a solid package system with a global repo of packages like Python, C# and Java have.
Then you'd be able to throw together things pretty easily.
Things easily cobbled together with Python often aren't attributable to Python the language per se, but rather to Python the ecosystem - the language plus its neat packages.
Python is a step backwards in productivity for me compared with typed languages. So no I don't think we all agree on this. You might be more productive in Python but that's you not me.
The endpoint is that being a programmer becomes as obsolete as being a human "calculator" for a career.
Millions, perhaps billions of times more lines of code will be written, and automated programming will be taken for granted as just how computers work.
Painstakingly writing static source code will be seen the same way as we see doing hundreds of pages of tedious calculations using paper, pencil, and a slide rule. Why would you do that, when the computer can design and develop such a program hundreds of times in the blink of an eye to arrive at the optimal human interface for your particular needs at the moment?
It'll be a tremendous boon in every other technical field, such as science and engineering. It'll also make computers so much more useful and accessible for regular people. However, programming as we know it will fade into irrelevance.
This change might take 50 years, but that's where I believe we're headed.
Yet, we still have programmers writing assembly code and hand-optimizing it. I believe that for most software engineers, this will be the future. However, experts and hobbyists will still experiment with different ways of doing things, just like people experiment with different ways of creating chairs.
An AI can only do what it is taught to do. Sure, it can offer unique insights from time to time, but I doubt it will get to the point where it can craft entirely new paradigms and ways of building software.
You might be underestimating the potential of an automated evolutionary programming system at discovering novel and surprising ways to do computation—ways that no human would ever invent. Humans may have a better distribution of entropy generation (i.e. life experience as an embodied human being), but compared to the rate at which a computer can iterate, I don't think that advantage will be maintained.
(Humans will still have to set the goals and objectives, unless we unleash an ASI and render even that moot.)
AI, even in its current form can provide some interesting results. I wouldn’t underestimate an AI, but I think you might be underestimating the ingenuity of a bored human.
Humans aren't bored any more [0]. In the past the US had 250 million people who were bored. Today it has far more than that scrolling through Instagram and TikTok, responding to Reddit and Hacker News, and generally not having time to be bored
Maybe we'll start to evolve as a species to avoid that, but AI will be used to ensure we don't, optimising far faster than we can evolve to keep our attention
Perhaps, but evolutionary results are difficult to test. They tend to fail in bizarre, unpredictable ways in production. That may be good enough for some use cases but I think it will never be very applicable to mission critical or safety critical domains.
Of course, code written by human programmers on the lower end of the skill spectrum sometimes has similar problems...
It doesn't seem like a completely different thing to generate specifications and formally verified programs for those specifications (though I'm not familiar with how those are done today).
I mean, I don’t even like programming with Spring because what all of those annotations are doing is horribly opaque. Let alone mountains of AI generated code doing God knows what.
I mean Ken Thompson put a back door into the C compiler no one ever found. Can you imagine what an AI could be capable of?
I don't believe that's going to happen. If it were, humans would have stopped playing chess. But not only do lots of people still play chess, people make a living playing chess. There are YT channels devoted to chess. The same thing will be true of almost all sports, lots of entertainment, and lots of occupations where people prefer human interaction. Bartenders and servers could be automated away, but plenty of people like to sit at a bar or table and be served by someone they can talk to. I have a hard time seeing nurses being replaced. Are people going to want the majority of their care automated?
I also don't know what it means to completely remove humans from all work. Who is deciding what we want done? What we want to investigate or build? The machines are just going to make all work-related decisions for us? I don't believe that. It would cease being our society at that point.
Which brings up the heart of the matter. Why are we trying to replace ourselves? It's our civilization; automation is just a tool we use to be more productive. It should make our lives better, not remove us from the equation.
My guess is the real answer is it will make some people obscenely rich, and give some governments a significant technical advantage over others.
Ahaha, no. Every time period has its own distinct style. You can tell the difference between Magnus, Kasparov, Capablanca etc. Lots of innovation in chess in fact, almost uninfluenced by machines.
“It will cease being our society” is the most likely outcome. Current politics demonstrates we have lost the ability to collaborate for our common good. So the processes accelerating AI capabilities will be largely unchecked until it’s too late and the AIs will optimize whatever inscrutable function they have evolved to prioritize.
They are pretty close. LLMs can write the code to solve a sudoku, or leverage an existing solver, and execute the code. Agent frameworks are going to push the boundaries here over the next few years.
>There will probably always be a need for expert-level troubleshooters and optimizers who understand all the layers
There are already so many layers that essentially no one knows them all at even a basic level, let alone at an expert one. A few more layers and no one in the field will even know of all the layers.
Seems so. Those friends did have to contend with the enjoyable part of their job disappearing. Whether they called it cheating or not doesn't diminish their loss.
It didn't; there are still many roles for skilled assembly programmers in performance-critical or embedded systems. It's just that their share of the overall programming world has decreased due to high-level programming languages, although better technology has increased the size of the market that has demand for assembly.
I am not skilled in these areas so I am very scared. I am going to go back to school to get a nursing degree because it is guaranteed not to be disrupted by the disrupters - unlike now, where the disrupters are disrupting themselves. Despite the personal risks of a healthcare job, it will bring me so much more peace of mind.
I'm afraid it's naive to think that nursing is not going to get disrupted by AI. Seems like robotics is going to massively impact medical caregiving in the near future.
> robotics is going to massively impact medical caregiving in the near future
Not in the near near future. Do you know anything about nursing? The field will require some hard changes for robots to replace nurses, and the robots will need licenses
Even without robotics, many jobs like nursing (or construction) that require training will be able to be accomplished with much less training + a live computer coach that can give context-specific directions.
I think it's a fundamentally different thing, because AI is a leaky abstraction. I know how to write C code but I actually don't know how to write assembly at all. I don't really need to know about assembly to do my job. On the other hand, if I need to inspect the output of the AI to know that it worked, I still need to have a strong understanding of the underlying thing it's generating. That is fundamentally not true of deterministic tools like compilers.
Boilerplate being eliminated by syntactic sugar or the runtime is not the same thing. Sure, that made diving in easier, but it didn't abstract away the logic design part - the actual programming part. Now the AI spits out code for you without you thinking about the logic.
The difference is that your friend has a negative view of others, which the OP is not presenting. They're just stating their subjective enjoyment of an activity.
Not commenting on the mindset of earlier programmers, rather the analogy you offer: language level abstraction is entirely unlike process specialization.
For example, when moving up to C from assembler, the task of the "programmer" remains invariant and the language tool affords broader accessibility to the profession since not everyone likes to flip bytes. There is no subdivision of overall task of "coding a software product".
With AI coders, there is task specialization, and, as pointed out, what's left on the table is the least appetizing of the software tasks: being a patch monkey.
"Automatic programming always has been a euphemism for programming with a higher level language than was then available to the programmer. Research in automatic programming is simply research in the implementation of higher-level languages.
Of course automatic programming is feasible. We have known for years that we can implement higher-level programming languages. The only real question was the efficiency of the resulting programs. Usually, if the input 'specification' is not a description of an algorithm, the resulting program is woefully inefficient. I do not believe that the use of nonalgorithmic specifications as a programming language will prove practical for systems with limited computer capacity and hard real-time deadlines. When the input specification is a description of an algorithm, writing the specification is really writing a program. There will be no substantial change from our present capacity.
The use of improved languages has led to a reduction in the amount of detail that a programmer must handle and hence to an improvement in reliability. However, extant programming languages, while far from perfect, are not that bad. Unless we move to nonalgorithmic specifications as an input to those systems, I do not expect a drastic improvement to result from this research.
On the other hand, our experience in writing nonalgorithmic specifications has shown that people make mistakes in writing them just as they do in writing algorithms."
Programming with AI, so far, tries to specify something precise, algorithms, in a less precise language than what we have.
If AI programming can find a better way to express the problems we're trying to solve, then yes, it could work. It would become a matter of "how well the compiler works". The current proposal, with AI and prompting, is to use natural language as the notation. That's not better than what we have.
It's the difference between Euclid and modern notation, with AI programming being like Euclidean notation and current programming languages being the modern notation:
"if a first magnitude and a third are equal multiples of a second and a fourth, and a fifth and a sixth are equal multiples of the second and fourth, then the first magnitude and fifth, being added together, and the third and sixth, being added together, will also be equal multiples of the second and the fourth, respectively."
a(x + y) = ax + ay
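Roughly, the quoted proposition (Elements V.2) in modern symbols - my loose translation, with the multipliers made explicit:

```latex
% If a, c are the same multiple m of b, d, and e, f the same multiple n of b, d:
\text{if } a = mb,\; c = md,\; e = nb,\; f = nd,\ \text{then}\quad
a + e = (m+n)\,b \quad\text{and}\quad c + f = (m+n)\,d
```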
You can't make something simpler by making it more complex.
I don't really think it's a continuum. There is a continuum of abstraction among programming languages, from machine code to Java/Python/Haskell or whatever, but natural language is fundamentally different: it's ambiguous, ill-defined. Even if LLMs generate a lot of our code in the future, somebody is going to have to understand it, verify its correctness, and maintain it.
The distance isn’t the same between them, but each one is more abstracted than the last.
Natural language can be ambiguous and ill defined because the compiler is smarter. It's just like not having to manage memory in Python, except it abstracts a lot more.
The fact is that this very instant you can compile from natural language.
LLMs can generate code, but they still need to be prompted correctly, which requires someone who knows how to program beyond toy examples, since the code is going to have to be tested and integrated into running code. The person will need to understand what kind of code they're trying to generate, and whether that meets the business requirements.
Python is closer to C (third generation programming language). Excel is a higher level example. It still takes someone who knows how to use Excel to do anything meaningful.
I think this will weed out the people doing tech purely for the sake of tech and will bring more creative minds who see the technology as a tool to achieve a goal.
Indeed, can't wait for the day when technical people can stop relishing the moments of intimate problem solving between stamping out widgets, and instead spend all day constantly stamping out widgets while thinking about the incredible bullshit they'll be producing for pennies. Thanks boss!
It feels like people commenting on this post are forgetting that tools have evolved since the times of punch cards or writing only in pure assembly.
I personally wouldn't have enjoyed being that kind of programmer as it was a tedious and very slow process, where the creativity of the developer was rather low as the complexities of development would not allow for just anyone to be part of it (my own assumption).
Today we have IDEs, autocomplete, quick visual feedback (inspectors, advanced debuggers, etc.) which allow people who enjoy creating to see the results of their work, as opposed to purely typing code for someone else.
So, I don't get why people jump straight to thinking that adding yet another efficiency tool would destroy everything. To me it seems to make developing simpler applications something which doesn't require a computer science degree, that's all.
> I personally wouldn't have enjoyed being that kind of programmer as it was a tedious and very slow process, where the creativity of the developer was rather low as the complexities of development would not allow for just anyone to be part of it (my own assumption).
I think your assumption is incorrect. I remember programming using punched cards and low-level languages, and the amount of creativity involved was no less than is involved now.
So are you saying that you would rather live in a society where only lucky people could participate in a given field than make it accessible to more people?
From the perspective of the programmer, true. Not necessarily from the perspective of the manager/customer, who can say in broad terms what needs to be done, and the programmer-black-box spits something out.
Agreed. But isn't this what has been happening over time for all manual jobs? I mean, people used to carve wood. Now, machines do that with more precision & speed. The same goes for laying roads, construction & other professions.
All niche jobs will become mundane chores. I don't know if it is good or bad. Because humans always find a way to cultivate something new.
Funny you should phrase it this way. I know you mean prompts as description, but I would currently prefer declaring/describing what I want in a higher-level functional way rather than doing all the stateful nitty-gritty iterations to get it done. Some folks want to do manual memory management, or even work with a borrow checker, I'm good for most purposes with gc.
The question is always what's your 'description' language and what's your 'painting' language? I see the same in music: DJs mix and apply effects to pre-recorded tracks, others resample on the fly, while some produce new music from samples, and others form a collage from generated soundscapes, etc. It's all shades of gray.
Call me a cynic (many have, especially on this topic) but I can't help but think that the majority of what AI will "successfully" replace in terms of craftsmanship is going to be stuff that would've never been produced the "correct" way if you will. It's going to be code created for and to suit the interests of the business major class. Just like AI art isn't really suitable for anything above hobby fun stuff like generating your D&D character's avatar, or product packaging stock photo junk or header images for LinkedIn blog posts. Anything that's actually important is going to still need to be designed, and that goes for creative work like design, and proper code-work for development too, IMO.
Like sure, these AI's can generate code that works. Can they generate replacement code when you need to change how something works? Can they troubleshoot code that isn't doing what it's meant to? And if you can generate the code you want but then need to tweak it after to suit your purpose, is that... really that much faster than just writing the thing in your style, in a way you understand, that you can then change later as required?
I dunno. I've played with these tools and they're neat, and I think they can be good for learning a new language or framework, but once I'm actually ready to build something, I don't see myself starting with AI generation for any substantial part of it.
The question is not about what AI can do today but what we assume AI will be able to do tomorrow.
All of what you wrote in your second paragraph will become something AI will be doing better and faster than you.
We never had technology which can write code like this. I prompted ChatGPT to write a very basic Java tool which renders an image from a URL and makes it bigger on a click. It just did it.
It's not hard to think further, and a lot of technology is already going in this direction. Just last week Devin was shown. Gemini has a context window of 1 million tokens. Groq shows us how it will feel to have instant responses.
Right now it's already good enough that people with Copilot like to keep it when asked. We already pay billions for AI daily. This means the amount of research, business motivation and money flowing into it now is probably staggering in comparison to what moved this field a few years ago.
It's not clear at all how fast we will progress, but I'm pretty sure we will hit a time where every junior is worse than AI, which will force people to rethink what they are going to do. Do I hire a junior and train them? Or do I prefer to invest more into AI? The gap will widen and widen; a generation, or a certain number of people, will stay longer and might be able to stay in development, but a lot of others might just not.
> We never had technology which can write code like this. I prompted ChatGPT to write a very basic Java tool which renders an image from a URL and makes it bigger on a click. It just did it.
It's worth noting that it can do things like that because of the large amount of "how to do simple things in Java" tutorials there are on the internet.
Ask an AI to _make_ java, and it won't (and will continue to not) be able to.
That's the level that AI will fail at, when things aren't easily indexed from the internet and thus much harder / impossible to put into a training set.
I think the technology itself (transformers and other such statistical models) has exhausted most of its low hanging fruit by now.
Sora, for example, isn't a grand innovation in the way latent space models, word2vec, or transformers are, it's just a MUCH larger model than DALLE-3.
Which is great! But it still has the limits inherent to statistical models. They need the training data.
> It's worth noting that it can do things like that because of the large amount of "how to do simple things in Java" tutorials there are on the internet.
Much like the same points made elsewhere with regard to AI art: It cannot invent. It can remix, recombine, etc. but no AI model we have now is anywhere close to where it could create something entirely new that's not been seen before.
> The question is not about what AI can do today but what we assume AI will be able to do tomorrow.
And I think many assumptions on this front are products of magical thinking that are discarding limitations of LLMs in favor of waiting for the intelligence to emerge from the machine, which isn't going to happen. ChatGPT and associated tech is cool, but it is, at the end of the day, pattern recognition and reproduction. That's it. It cannot invent something not before seen, or in our case here, it cannot write code that's never been written.
Now that doesn't make it useless, there's tons of code that's being written all the time that's been written thousands of times before. But it does mean that depending on what you're trying to build, you will run into its limitations pretty quickly and have to start writing it yourself. And that being the case... why not just do that in the first place?
> We never had technology which can write code like this. I prompted ChatGPT to write a very basic Java tool which renders an image from a URL and makes it bigger on a click. It just did it.
Which it did, because as the other comment said, tons of people already have.
> It's not clear at all how fast we will progress, but I'm pretty sure we will hit a time where every junior is worse than AI, which will force people to rethink what they are going to do. Do I hire a junior and train them? Or do I prefer to invest more into AI? The gap will widen and widen; a generation, or a certain number of people, will stay longer and might be able to stay in development, but a lot of others might just not.
I mean, this sounds like an absolute crisis in the making for software dev as a profession, when the entire industry is reliant on a small community of actual programmers overseeing tons of robot junior devs turning out mediocre code. But to each their own I suppose.
Most of the time I'm not 'inventing' anything new either.
I get a requirement, find a solution, and the solution is 99.99999% of the time not a new algorithm. I actually believe I've never invented a new algorithm.
Besides, the next step is reasoning in GPT-5, and Devin shows that GPTs/LLMs can start breaking down tasks.
I don't mind being wrong, tbh; there is no risk in it for me if AI doesn't take my job, but I don't believe that. I do believe the progress will get better and better and AI will do more and more reasoning.
It can easily try and do things 1000x faster than us, including reasoning. It's not hard to see that it will also be able to create its own examples and learn from them.
> I get a requirement, find a solution, and the solution is 99.99999% of the time not a new algorithm. I actually believe I've never invented a new algorithm.
I can think of tons of things I do in my day-to-day programming that, while certainly not new or remarkable advances in technology, are at least new enough that you're not going to find a Stack Overflow thread for it.
Again, you guys are pointing to a code generator that can generate functions or code snippets to accomplish a particular task, and again, that is cool and I think it has a huge usage if nothing else as an assistive learning tool when you're trying to pick up a new language or get better with a library or what have you. But again, my point is, ask it to do something that doesn't appear in a bunch of those threads. Ask it to solve a particular bugbear problem in your codebase. Ask it to invent a new language, even a high level one.
> It can easily try and do things 1000x faster than us, including reasoning
AI is not a reasoning machine, though. I'd be very interested in what you mean by the word "reasoning" in this context.
"I can think of tons of things I do in my day-to-day programming that, while certainly not new or remarkable advances in technology, are at least new enough that you're not going to find a Stack Overflow thread for it."
I don't. I might solve current issues, new error messages from new/other libraries, etc., but nothing genuinely novel.
"reasoning": thinking about a problem, reasoning about potential solutions, estimating the best action, executing it, retrying.
Reasoning in the sense of: if the error message indicates a Hibernate issue, reducing the search space for finding a solution.
In my workplace juniors were replaced years ago by a never-ending rotation of offshore contractors. As soon as we train our offshore team, they are rotated somewhere else.
At least AI will stay put.
But most of us will be getting by on basic income. Or banging on the gates of robo-guarded walls begging for food.
I think the question is whether we're going to plateau at 95% or not. It's possible that we just run into a wall with transformers, or they do iron it out and it does replace us all.
Also if there are fewer humans involved in the code production there is a lot of room for producing code that "works", but is not cohesive or maintainable. Invariably there will be a point at which something is broken and someone will need to wade through the mess to find why it's broken and try to fix it.
This is the future imagined by A Fire Upon the Deep and its sequel. While less focused on the code being generated by ai, it features seemingly endless amounts of code and programs that can do almost anything but the difficulty is finding the program that works for you and is safe to use.
To some extent... This is already the world we live in. A lot of code is unreadable without a lot of effort or expertise. If all code was open sourced there would almost certainly be code written to do just about anything you'd like it to. The difficulty would be finding that code and customizing it for your use.
To piggyback off the sci-fi talk, I imagine in the far future, the programmer will become some sort of literal interface between machines and humans.
I imagine some sort of segregation would happen where the "machine cities" would be somewhat removed from the general human populace.
This would be to ensure the machines could use whatever information transport system they desired, unencumbered by the needs of the human populace, and vice-versa.
At a certain level of compute, I prognosticate that a certain level of logistical optimization would be trivial to advanced intelligences, and could be accomplished with almost-literally no effort using left-over cycles from whatever big calculation they were doing.
This would start to define different roles for humanity and machine. With logistics essentially "solved," a programmer would be a human-machine interpreter, sometimes journeying to the machine cities to disseminate the needs of the people, or define a good way to introduce new technology to the populace.
This could look something like: During a headlining musical act, a "programmer," recently-returned from the machine city, grabs a mic and says "Does anyone want some of this BLUE, GLOWING, NON-RADIOACTIVE SELTZER WATER?" At which point the crowd would go wild. "If you liked that, just wait until you see what's coming next week!"
So essentially the programmer role becomes a hype-man for new, emergent technologies.
Caution - lots of people like to talk about this "code archeology" idea as if it's a central driving point of the book, whereas in fact it's mentioned once in passing in the prologue and is never again relevant to the story.
Don't get me wrong, it's still a decent book on its own merits - but don't go into it expecting that to be the main point of the book (I did, and was disappointed as a result).
I'd argue that while it's not a core driving part of the narrative... It is central to the idea of the book and its sequel. It's a decent-sized book with a lot of ideas, and the idea of code archeology and the repercussions of it are what the book is about as much as any of the other main ideas.
But yes, if you want a book that focused only on that... This is going to disappoint.
> It is central to the idea of the book and its sequel. [..] the idea of code archeology and the repercussions of it are what the book is about as much as any of the other main ideas.
Can't speak to the sequel as I gave up on the series after that, but it's _really_ not relevant to the plot or ideas of the first book at all. All that matters for the plot is that a hostile, powerful, uncontrollable AI arises. In the book, it _happens_ to be because of a code archeologist "delving too greedily and too deep"; but the plot would not be changed one iota if it had simply arisen (and gone off the rails) as a product of general AI development.
As a counterpoint, the main nemesis of the book comes from software that is found in an archaeological expedition. While software archeology doesn't show up after the first chapter, the ramifications of what happens in that world due to so much software are pretty central.
No problem. I've been a sci fi reader my entire life and was shocked I hadn't stumbled across Vinge earlier. The sequel/prequel to Fire Upon the Deep, called A Deepness in the Sky, is arguably even better, and the same idea of tech/code being used and customized far after it's written is even more central to the plot.
Two of my favorite reads of the last few years, so I highly recommend them.
Certainly many of us here already have a good amount of experience debugging giant legacy spaghetti code-bases written by people you can't talk to, or people who can't debug their own code. That job may not change much.
I remember one such occasion back in a previous tech boom (late 90s) and it turned out the reason I couldn't talk to the guy who wrote this particular pile of Italian nutrition was that the Feds had shown up one day and taken him to jail (something to do with pump and dump market manipulation via a faked analyst report [edit: actually a faked press release I now remember. "SmallCapCorp (NASDAQ: SCC$) announces they have received a record breaking order for their next gen product / acquisition offer / something like that from RandomIsraeliCompanyThatMightNotEvenHaveExisted"]).
A lot of software engineers would spend a portion of their day tracking their volatile stock / options etc. in those years.
That's how bads use GPT to code. The right way is to ask GPT to break the problem down into a bunch of small strongly typed helper functions with unit tests, then ask it to compose the solution from those helper functions, also with integration tests. If tests fail at any point you can just feed the failure output along with the test and helper function code back in and it will almost always get it right for reasonably non-trivial things by the second try. It can also be good to provide some example helper functions/tests to give it style guidelines.
It's not really "all this work," once you have good prompts you can use them to crank out a lot of code very quickly. You can use it to crank out thousands of lines of code a day that are somewhat formulaic, but not so formulaic that a simple rules based system could do it.
For example, I took a text document with headers for table names and unordered lists for table columns, and had it produce a database schema which only required minor tuning, which I then used to generate sqlmodel classes and typescript types. Then I created an example component for one entity and it created similar components for the others in the schema. LLMs are exceptionally good at this sort of domain transformation; a decent engineer could easily crank out 2-5k lines/day if they were mostly doing this sort of work.
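For what it's worth, the loop I described is simple enough to script. A rough sketch, assuming the OpenAI Python client and pytest - the prompts, model name, and file names here are just illustrative, and a real version would strip markdown fences from the model's reply before writing it to disk:

```python
# Rough sketch of the generate -> test -> feed-failures-back loop. Not
# production code; assumes the OpenAI Python client and a local pytest.
import subprocess
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def generate_with_feedback(task: str, max_tries: int = 3) -> bool:
    prompt = (
        "Break this task into small, strongly typed helper functions with "
        "pytest unit tests, then compose them into a solution:\n" + task
    )
    for _ in range(max_tries):
        code = ask(prompt)
        with open("generated.py", "w") as f:
            f.write(code)
        run = subprocess.run(
            ["pytest", "generated.py"], capture_output=True, text=True
        )
        if run.returncode == 0:
            return True  # tests pass; still worth a human read before merging
        # Feed the failing output plus the code back in and try again.
        prompt = (
            "These tests failed:\n" + run.stdout +
            "\n\nHere is the code:\n" + code + "\n\nFix it."
        )
    return False
```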
Now your description of "good prompts" to reuse has created an abomination in my mind. I blame you.
The abomination: prompts being reused by way of yaml templating, a Helm chart of sorts but for LLM prompts. The delicious combination of yaml programming and prompt engineering. I hope it never exists.
You know, with GPT you can do these steps in a language you are not familiar with and it will still work. If you don't know some aspect of the language or its environmental specifics you can just chat until you find out enough to continue.
How do I know if a problem needs to be broken down by GPT, and how do I know if it broke the problem down correctly? What if GPT is broken or has a billing error, how do I break down the problem then?
1. Intuition built by trial and error
2. Domain expertise backed by automated checks
3. The old fashioned way, and if your power is out you can even bust out a slide rule
Maybe I'm being overly optimistic but in a future where a model can digest hundreds of thousands of lines of code, write unit tests, and do refactors, will this even be a problem?
I'm the opposite. I enjoy engineering and understanding systems. Manually coding has been necessary to build systems up until now. AWS similarly was great because it provided a functional abstraction over the details of the data center.
On a personal level I feel bad for the people who enjoyed wiring up small data centers or enjoyed writing GitHub comments about which lint rules were the best. But I'm glad those are no longer necessary.
> I realise that I as a developer have put a lot of people out of a job.
For most developers this will not be true. Most apps, websites, compilers, desktop software etc. will not have put anyone out of a job. I certainly never ever put someone out of a job. I made some people's lives easier, but their total working hours didn't shorten, and they certainly did not change profession or get replaced. In fact, the majority of tasks that my software was applied to would simply have been deemed impossible to do and not have been done, and that would have been all there was to it.
Having once worked on a project to improve some customer care software, I know that as a direct result of those improvements, people got fired.
I'm sure they all found new jobs but it did make me think about the consequence of my work.
Other projects involved making freemium games more addictive to suck people into paying. Of course everyone has a choice, but playing on people's addictions to make money is of questionable morality.
I'm just saddened by the prospect that, for me, "adapting to change" would mean "no longer being able to make a living doing what I actually enjoy". That's why, if this is the future, it's a career-killing one for me. Whether or not I stay in the industry, there is no future in my chosen career path, and the alternative paths that people keep bringing up all sound pretty terrible to me.
My only hope is that AI will not achieve the heights that its proponents are trying to reach (I suspect this is the case). I see no other good outcome for me.
> I'm just saddened by the prospect that, for me, "adapting to change" would mean "no longer being able to make a living doing what I actually enjoy". That's why, if this is the future, it's a career-killing one for me.
Ok, and? You don't think any of the others put out of work by other forms of computing might've enjoyed their jobs like you do? You don't think it might have been career-ending for them?
The catch is that those people could, barring the AI advances we seem to be seeing, retrain for an SWE labor market where supply lagged demand; that won't even be possible for devs put out of work in the future.
Those people who did retrain are the same devs being put out of work - which means they got hit with the setback twice and are worse off than people who started off as devs and thus only got hit once.
Like the allied bomber pilots in WWII looking down below at the firestorm knowing that there is a good chance (~45%) that they too will join their fate only later.
I suspect this is the wrong take. AI can only perform integrations when there are systems to integrate. The frontier of interesting work to be done isn't supervising an integration AI, but building out the hard components that will be integrated. Integration work itself already has been moving up the stack to low-code type tools and power-user like people over the past decade even before LLMs become the new thing.
I understand your feelings but I do also wonder if its not similar to complaining about compilers or garbage collection. I'm sure there are people that love fiddling with assembly and memory management by hand. I assume there will be plenty of interesting/novel problems no matter the tooling because, fundamentally, software is about solving such problems.
Software engineering as an occupation grew because of static analysis and GCs (literally why the labor market is the size that it is as we speak); the opposite appears to be the outcome of AI advances.
The same happened with accountants and spreadsheet software, the number of accounting jobs grew. The actual work they performed became different. I think a similar thing is likely to happen in the software world.
Tech has already learned there's not enough real frontier left to reap the bounty of (now that the zero interest rates that incentivized the mere flow of capital are gone). This stuff is being invested in to yield the most productivity at the least cost. There will either be a permanent net decrease in demand or, the work being so high level, most openings will pay no more than 60-70K in an America (likely with reduced benefits) where wages are already largely stagnant.
I think there is definitely merit to your statements. I believe the future of the average software developer job involves a very high level language, API integration, and basic full stack work with a lot of AI assistance. And those roles will mostly be at small to medium businesses who can't afford the salaries or benefits that are standard in the industry in the US.
Almost every small business I know has an accountant or bookkeeper position which is just someone who had no formal education and whose role is just managing QuickBooks. I don't think the need for formally educated accountants who can handle large corporate books decreased significantly, but I don't have any numbers to back that up. Just making the comparison to say I don't think the hard / cool stuff that a lot of software developers love doing is going away. But these are just my thoughts.
It's reasonable to expect that sometime relatively soon, AI will be a clear-cut aid to developer productivity. At the moment, I consider it a wash. Chatbots don't clearly save me time, but they clearly save me effort, which is a more important resource to conserve.
Software is still heavily rate-limited by how much of it developers can write. Making it possible for them to write more will result in more software, rather than fewer developers. I've seen nothing from AI, either in production or on the horizon, that suggests that it will meaningfully lower the barrier to entry for practicing the profession, let alone enable non-developers to do the work developers do. It will make it easier for the inexperienced to do tasks which need a bit of scripting, which is good.
> Software is still heavily rate-limited by how much of it developers can write
Hmm. We have very different experiences here. IME, the vast majority of industry work is understanding, tweaking, and integrating existing software. There is very little "software writing" as a percentage of the total time developers spend doing their jobs across industry. That is the collective myth the industry uses to make the job seem more appealing and creative than it is.
At least, this is my experience in the large FAANG type companies. We already have so much code. Just figuring out what that code does and what else to do with it constitutes the majority of the work. There is a huge legibility issue where relatively simple things are obstructed by the morass of complexity many layers deep. A huge additional fraction of time is spent on deployments and monitoring. A very small fraction of the work is creatively developing new software. For example, one person will creatively develop the interface and overall design for a new cloud service. The vast majority of work after that point is spent on integration, monitoring, testing, releases, and so on.
The largest task of AI here would be understanding what is going on at both the technical layer and the fuzzy human layer on top. If it can only do #1, then knowledge workers will still spend a lot of effort doing #2 and figuring out how to turn insights from #1 into cashflow.
>At least, this is my experience in the large FAANG type companies. We already have so much code. Just figuring out what that code does and what else to do with it constitutes the majority of the work.
That sounds horrible. I've always sought out smaller companies that need stuff built. It certainly doesn't pay as much as SV companies but it's pretty stimulating. Sometimes being a big fish in a small pond is pretty nice.
IMO, maintaining someone else's code is probably the worst type of programming job there is, especially if it's bad/disjointed code. A lot of people can make a good living doing it though. It would be nice if AI could alleviate the pain of learning and figuring out a gnarly codebase.
Yep. It's not very satisfying, but that's the state of things. I think we should be more honest as an industry about that. Most of the content that prospective SWEs look at has a self-marketing slant that makes things look more interesting than they typically are. The reality is far more mundane. Or worse: micromanaging, pressure-driven, and abusive in many places.
> I've seen nothing from AI, either in production or on the horizon, that suggests that it will meaningfully lower the barrier to entry for practicing the profession, let alone enable non developers to do the work developers do.
Good observation. Come to think of it, all examples of AI coding require a competent human to hold the other end, or else it makes subtle errors.
How many humans do you need per project though? The number can only lower as AI tooling improves. And will employers pay the same rates when they’re already paying a sub for their AI tools and the work involved is so much more high level?
I don’t claim to have any particular prescience here, but doesn’t this assume that the scope of “software” remains static? The potential universe of programmatically implementable solutions is vast. Just so happens that many or most of those potential future verticals are not commercially viable in 2024.
Exactly. Custom software is currently very expensive. Making it cheaper to produce will presumably increase demand for it. Whether this results in more or fewer unemployed SWEs, and if I'll be one of them, I don't know.
> Making it possible for them to write more will result in more software, rather than fewer developers.
Goddamnit, software developers are already writing more software than we need. I wish they'd stop. Or redirect all that energy to new problems to solve. Instead we're seeing cloud-deployed microservice architecture CRUD apps that do what systems built for mainframes with kilobytes of RAM do, only worse. We're in a glut of bad software, do you think that AI accelerating production of more of the same will make things better?
If chatbots aren't saving you time you need to refine what you choose to use them for. They're absolutely amazing at refactoring, producing documentation, adding comments, translating structured text files from one format to another, implementing well known algorithms in newer/niche languages where repository versions might not exist, etc. On the other hand, I've mostly stopped asking GPT4 to write quickstart code for libraries that don't have star counts in the high thousands at least, and while I'll let it convert css/style objects/etc into tailwind, it's pretty bad at styling in general, though it is good at suggesting potentially problematic styles when debugging layout.
> you need to refine what you choose to use them for
This is making assumptions about the work I do which don't happen to be valid.
For example:
> libraries that [...] have star counts in the high thousands at least
Play little to no role in my work, and
> I'll let it convert css/style objects/etc into tailwind
Is something I simply don't have a use for.
Clearly your mileage varies, and that's fine. What I've found is that for the sort of task I farm out to the chatbots, the time spent explaining myself clearly, showing it counterexamples when it gets things wrong, and otherwise verifying that the code is fit to purpose, is right around the time I would spend on the task to begin with.
But it's less effort, which is good. I find that at least as valuable if not more so.
I remember watching this really funny video where a writer, by trade, was talking about recent AI products they were exploring.
They saw a "Make longer" button which took some text and made it longer by fluffing it out.
He was saying that it was the antithesis of his entire career.
As a high schooler who really didn't care, I would've loved it, though.
I heard one CEO being asked about gen-AI tools to be used in the company. The answer was vague - they are evaluating the tooling. However, one good example was given: ChatGPT is really good at writing mails, and at summarizing text as well.
He said they don't want a situation where the sender is using ChatGPT to write a fancy mail and the recipient is using ChatGPT to read it. However, I think that is the direction we are going right now.
I was giving examples, in the hopes that you could see the trend I was pointing towards for your own benefit. You can take that and learn from it or get offended and learn nothing, up to you.
Not sure why you are scared of GPT assisted documentation. First drafts are universally garbage, honestly I expect GPT to produce a better and more accurate first draft in a fraction of the time, which should encourage a lot of people who otherwise wouldn't have documented at all to produce passable documentation.
> Yikes. Not looking forward to that in the future.
Instead of documentation, I'm hoping more for "analysis". A helper that can take in a whole project (legacy or not) and tell you what it's supposed to be doing, and maybe point out areas for improvement.
It’s interesting how all of these articles implicitly assume AI keeps getting more intelligent and then at some point just…stops.
There’s no reason to think AI won’t also take over all the parts you don’t find appealing, too. The whole point of the Singularity is that there is no aspect of human work that can't be performed better by superhumanly intelligent machines.
Kurzweil’s predictions from 20, 30 years ago have been disturbingly on target and there is no clear reason why the current rate of progress will suddenly stop.
Umm, no I would not describe Ray Kurzweil's predictions as "disturbingly on target". Dan Luu checked everything and came up with 7% accuracy: https://danluu.com/futurist-predictions/.
The market is different, and so is the supply. The market for artisanal cutlery is basically an art market. The programmer supply today is an approaching-standardization factory worker. There IS an art market for software, in the indie gaming space, so perhaps that will survive (and AI could actually really help individual creators tremendously). But the workaday enterprise developer's days are numbered. The great irony being that all the work we've done to standardize, framework-ize the work makes us more fungible and replaceable by AI.
The result I foresee is a further concentration of power into the hands of those with capital enough to own data-centers with AI-capable hardware; the petite bourgeoisie will shrink to those able to maintain that hardware and (perhaps) act as a finishing interface between the AI's output and the human controlling the capital that placed the order. It definitely harms the value proposition of people whose main talent is understanding computers well enough to make useful software with them. THAT is rapidly commoditizing.
> The great irony being that all the work we've done to standardize, framework-ize the work makes us more fungible and replaceable by AI.
I mean, at some level, this is what frameworks were meant to do: give you a loose outline and do all that messy design stuff for you. In other words: commodify some amount of software design skill. And I’m not saying that’s bad.
Definitely puts a different spin on the people that get mad at you in the comment section when you suggest it’s possible to build something without a framework though!
Since AI has been trained on the generous gifts of the collective (books, code repos, art, ..), it raises the question of why normal societies would not start to regulate it as a collective good. I can foresee two forces that will work against society reclaiming it:
- Dominance of neoliberalism thought, with its strong belief that for any disease markets will be the cure.
- Strong lobbying from big corporates.
You don't want to intervene too early, but you have to make sure you have at least some limits in place before you let the winners do too much damage. The EU has to be applauded for having a critical look at what effects these developments might have, for instance which sectors will face unemployment.
That is in the interest of both people and business, because the winner takes it all
means economic and scientific stagnation. I fear that 90% of the worlds' data is already in the hand of just a few behemots, so there is already no level playing field (which is btw caused by aforementioned dominance of neoliberalism).
The sectors of work that have been largely pushed out of the economy in recent decades have not been defended by serious state policy. In fact, there are whole groups of crucial workers, like teachers or nurses, who are kept barely surviving in many countries. The groups protected by the state tend to be heavily organized and directly tied to the exploitation of strategic natural resources, like farmers or miners.
There is no particular sympathy towards programmers in society, I don't think. Based on what I observe, calling the mood neutral would be fair, and this is mostly because the group has expanded and far more people have someone in their family benefiting from IT. I don't see why there would be a big intervention for programmers. Artists maybe, but they are proverbially poor anyway, and the ones with popular clout tended to somehow get rich despite the business models of culture changing.
I am all for copyright reform etc., but I don't see making culture a public good, in a way that directly leads to more artisanal creators, as anything straightforward. It would have to entail some heavy and non-obvious (even if desirable) changes to the economic system. It's debatable whether code is culture anyway, though I could see an argument for software like Linux and other tools.
> I fear that 90% of the world's data
Don't want to go off on a tangent in this already long post, but I'd dispute whether these data really reflect the whole of the knowledge we have accumulated in books (particularly non-English ones) and otherwise never put into reachable and digestible formats. Meaning, sure, they have these data, they can target individual people with the private stuff they have on them, but this isn't the full accumulation of human knowledge that is objectively useful.
> There is no particular sympathy towards programmers in society, I don't think.
The concern policymakers have is not about programmers, but about boatloads of other people having no time to adapt to the massive wave these policymakers see coming.
There are strong signals that anyone who produces text, speech, pictures or whatever is going to be affected by it. If the value of labor goes down, if a large part of humanity can no longer reach a level where it can meaningfully contribute, if productivity eclipses demand growth, you will simply see lots of people left behind.
Strong societies depend on strong middle classes. If the middle class slips, so will the economy, so there is no good news for blue-collar workers either. AI has the potential to suffocate the organism that created it.
> AI has been trained on the generous gifts of the collective
Will be interesting to see how various copyright lawsuits pan out. In some ways I hope they succeed, as it would mean clawing back those gifts from an amorphous entity that would displace us (all?). In some ways I hope that we can resolve the gift problem by giving every human equity in the products produced by the collective value of the training data they produced.
> winner-takes-all means economic and scientific stagnation
Given the apparent lack of awareness or knowledge of philosophy, history, or current events, it seems like a tough row to hoe getting the general public on board with this (correct) idea. Heck, we can't even pass a law overturning Citizens United, the importance of which is arguably even less abstract.
When the tide of stupidity grows insurmountable, and The People cannot be stopped from self-harm, you get collapse, and the only way to survive it is to live within a pocket of reason, to carry the torch of civilization forward as best you can.
> When the tide of stupidity grows insurmountable, and The People cannot be stopped from self-harm, you get collapse,
Yes, people are unfortunately highly unaware of the societal ecosystem they depend on, and so cannot prioritize what is important. These topics don't sell in media shows.
> Most knives today are mass-produced. But there are still knife craftsmen.
But are there more or less knife craftsmen today than in the old days?
How about more or less knife craftsmen per capita?
Finally, and most importantly, if you are a budding knife craftsman -- is it easier or harder to get a job that pays the bills of a contemporary average lifestyle today than in the old days (i.e., what is the balance of supply and demand)?
At the risk of exposing my pom-poms, it's not the writing of the code or the design of the systems that I find the current batch of AI useful for.
Probably the biggest thing that GPT does for me these days is replace Google (which probably wouldn't be necessary if Google hadn't become such hot garbage). As I say this, I'm made aware of the incoming rug-pull when the LLMs start spitting SEO trash in my face as well, but right now they don't, which is just the best.
A close second is having a rubber duck that can actually have a cogent thought once in a while. It's a lot easier to talk through a problem when you have something that will never get tired of listening - try starting with a prompt like "I don't want advice or recommendations, but instead ask me questions to elaborate on things that aren't completely clear". The results (sometimes) can be really, really good.
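If it helps, here is a minimal sketch of that rubber-duck setup, assuming the OpenAI Python client (v1 style); the prompt wording, model name, and example message are just placeholders, not a recommendation:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Seed the "rubber duck": no advice, only clarifying questions.
    history = [{
        "role": "system",
        "content": ("I don't want advice or recommendations. Instead, ask me "
                    "questions to elaborate on things that aren't completely clear."),
    }]

    def duck(message: str) -> str:
        history.append({"role": "user", "content": message})
        reply = client.chat.completions.create(model="gpt-4o", messages=history)
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer

    print(duck("My test suite passes locally but fails in CI; I suspect the fixtures."))

The point of keeping the whole history is that the follow-up questions stay grounded in what you've already said, which is most of what makes the duck feel cogent.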
For me the principal benefit of ChatGPT is that it helps me maintain focus on a problem I'm solving while I wait for a slow build or test suite or whatever. I can bullshit about it without annoying my coworkers with Slack messages. And sometimes I find joy in reveling in the chatbot's weird errors and hallucinations.
I suppose my lunch is about to be eaten by all these people who will use it to automate the software engineering job away. So it goes.
> Still be a market for Software Developers in the foreseeable future, though the nature of work will change
Back 25 years ago when I graduated, everyone kept saying there wouldn't be a need for software developers. Either the work was going to get sent overseas to the cheapest bidder or it was all going to get automated away anyway. I even recall CEOs making outrageous claims like "we didn't need any new software" as if all the software to run the world had already been built and would magically continue to run without maintenance.
We heard the same thing once upon a time about manufacturing too, but now we see manufacturing we once thought was gone, either to robotics or offshore, shifting back onshore. It's different than before, but it's still manufacturing.
Software developers are still here, their compensation has overall increased dramatically, but of course the nature and demands of the work continue to shift. Will it be different in 20 years? Of course, but it will still be software development.
These are exactly my thoughts. I comfort myself by thinking that it is still a while away and also not certain, but this might just be willful ignorance on my side. Because TBH, no clue yet what else I would like to (or even could) do.
Sorry, can you clarify more? I don't think I understand. The part you enjoy the most is the integrating of systems, right? If that's really your passion, I'm not sure you're in danger of losing your job to AI. AI is not great at nuance, and this is exponentially more challenging than what we've done so far. I'm just assuming, since this is your passion (if I'm understanding correctly), that you see it as the puzzle it is, with all the complexities and uniqueness of each integration. If you're the type of person who is frustrated by low quality, quick shortcuts, and people not understanding the nuances actually involved, I think you're safe.
I don't see AI pushing out deep thinkers and the "annoying" nuance devs anytime soon. I'm that kind of person too and yeah, I'm not as fast as my colleagues. But another friend (who is similar) and I are both surprised how often other people in our lab and the groups we work with (we're researchers) talk about how essential GPT and Copilot are to their workflows, because neither of us thinks this way. I use GPT(4) almost every day, but it's impossible for me to get it to write good quality code. It's great at giving me routines and skeletons, but the real engineering part takes far more time to talk the LLM into than it does to write it myself (including all the time to google or even collaborate with GPT[0]). LLMs can do tough things, but their abilities are clearly proportional to how common those tasks are. So I think it is the coding bootcamp people who are in the most danger.
There are experts who are also at risk, though: people with extremely narrow expertise, because you can target LLMs at specific tasks. But if your skills are the skills that define us as humans, I wouldn't lose too much sleep. I say this as an ML researcher myself. And I highly encourage everyone to get into the mindset of thinking with nuance; it has other benefits too. But I also think we need to think about how to transition into a post-scarcity world, because that is the goal and we don't need AGI for that.
[0] A common workflow for me actually stems from the shittiness of Google, which overfits to certain words and ignores advanced operators like quotes or NOTs. The same goes for broaching a new high-level topic: I can't trust GPT's answer, but it will use keywords and vernacular I don't know, which let me make a much more powerful search. (But Google employees should not take away from this that they should push LLMs into Google Search; rather, that search is mostly good, but that same nuance is important, and being too forceful and repeating 5 pages of essentially the same garbage is not working. The SEO people attacked you and they won. It looks like you let them win too...)
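Roughly, that keyword-harvesting loop looks like the sketch below, again assuming the OpenAI Python client; the topic, prompt, and model name are made up for illustration:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    topic = "NUMA-aware memory allocation"  # hypothetical topic, purely illustrative

    # Step 1: don't trust the model's answer; harvest the vocabulary instead.
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"List 8 precise technical terms an expert would use when "
                       f"discussing {topic}. One term per line, no explanations.",
        }],
    )
    terms = [t.strip() for t in reply.choices[0].message.content.splitlines() if t.strip()]

    # Step 2: fold the unfamiliar jargon back into an ordinary search-engine query.
    query = " OR ".join(f'"{t}"' for t in terms[:4])
    print(query)

The model's explanation may be wrong, but the vocabulary it surfaces is usually enough to turn a vague query into a precise one.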
I've encountered the same bemusing behavior, with Copilot helping more accurately with coding tasks, and I've started to think of it as akin to "personalities."
You don't go to your painter friend and ask them for coding help, much like you don't go to the general-purpose GPT; you'd go to the Copilot, who enjoys programming tasks or whatever.
Can GPT help? Sure. But the skeletons, rough jumping-off points, etc. all scream to me "I'm not going to do your homework for you," which I love.
In the end, both have been immensely helpful, but I use them for different things.
Oh yeah, I have some context prompts that I use for different situations and they significantly help get the right answers. Still, I've never found it successful at coding beyond boilerplating and hinting. I mean, I can get it to give me usable code, that's for sure, but not good code. Definitely not optimized. It'll just give you essentially StackOverflow code.
This is pretty much my experience as well. AI is a fantastic helper, and it will make devs more productive, but it is not going to put all software devs out of work. Probably a tiny fraction of them at best.
However, with recent grads flooding into IT for remote work and high pay, they could be hurting as AI reduces the need for entry-level roles. Entry level was already saturated, and it will only become more so.
Yeah, I remember seeing someone try to measure it: they saw improvements in productivity at all experience levels, but found that it helped novices the most and experts only a little.
But this is actually something I worry about. The best way to become an expert is to do things the hard way. I've taught a lot of people Linux over the years and only 3 have really learned it. Every time I teach people I give them two options: the easy way, which is just how to use it with the GUI and a bit of terminal, or the hard way, where I hand them the Arch wiki and tell them to come back after their third failed attempt to install. Those 3 people came back; they usually did more than 3 installs, but all had succeeded at some point. All 3 mentioned they understood why, and then we could really talk about how to use Linux, write scripts (and why scripts aren't aliases...), and so on. All 3 are still terminally terminal, years later. The thing is that humans (and even machines) learn by struggling, getting things wrong, and learning from mistakes. The struggle is part of the learning process. I've found this both in myself and whenever I teach anything: if I just feed someone the answer (or just look it up myself and do nothing more), they (I) don't end up remembering, don't end up playing, and don't end up learning how to learn.
What if we don’t hit AGI and instead the tools just get pretty good and put lots of people out of work while making the top 0.1% vastly richer? Now you’ve got no prospects, no power, and barely any money.
That's the scenario I'm assuming. Lots of people out of work, then they start working on AI and using it to solve the post-work survival problem.
But this relies on a few assumptions: 1) there will be open-source AI that can solve low-resource survival problems, 2) civilians will be able to run these AIs on whatever computing resources they're able to scrounge together, 3) the solutions the systems come up with will let civilians survive or revolt without access to high levels of capital, 4) the systems will NOT rise to the level of independent, power-gaining AGIs.
Note that I have specifically assumed that we don't have independent AGIs. If we hit AGI, then I don't think we can assume that anyone will be able to use AI to solve the problems of post work survival. The AGI will do what it wants to do. I'm not sure how civilians should position themselves in that situation.
Developers can already deploy code on massive infrastructure today, and what do we see? Huge centralization. Why? Because software is a free-to-copy, winner-takes-most game, where massive economies of scale mean a few players who can afford to spend big money on marginal improvements win the whole market. I don't think AI will change this. Someone will own the physical infrastructure for economies-of-scale style services, and they will capture the market.