> It’s as if we’ve made technology so complex that the only way forward is to double down and try harder with these LLMs and the associated AGI fantasy.
This is the real AI risk we should be worried about IMO, at least in the short term. Information technology has made things vastly more complicated. AI will make them even more incomprehensible: tax code, engineering, car design, whatever.
It's already happening at my work. I work at a big tech company, and we already have a vast array of overly complicated tools and technical debt that no one wants to clean up. There are several initiatives to use AI to prompt an agent, which in turn will find the right tool to use and run the commands.
It's not inconceivable that 10 or 20 years down the road no human will bother trying to understand what's actually going on. Our brains will become weaker and the logic will become vastly more complicated.
Yes, I'm already doing it. But the problem is there's not much incentive from management to keep doing it.
Long-term investment in something that can't easily be quantified is a non-starter for management. People will say "thank you for doing that," but it's the people who ship new features that drive metrics who get promoted.
That's why, if you write software, it's so important to work for an organization run by engineers or former engineers.
And by engineers who have actually shipped and maintained successful products over time.
For software developed in-house by non-software organizations, the incentives are all wrong because management cannot properly assess the value of well-architected systems.