
Call me a cynic (many have, especially on this topic), but I can't help but think that the majority of what AI will "successfully" replace in terms of craftsmanship is going to be stuff that would never have been produced the "correct" way in the first place. It's going to be code created for, and to suit the interests of, the business-major class. Just like AI art isn't really suitable for anything above hobby fun stuff like generating your D&D character's avatar, or product-packaging stock-photo junk, or header images for LinkedIn blog posts. Anything that's actually important is still going to need to be designed, and that goes for creative work like design, and for proper code-work in development too, IMO.

Like sure, these AIs can generate code that works. Can they generate replacement code when you need to change how something works? Can they troubleshoot code that isn't doing what it's meant to? And if you can generate the code you want but then need to tweak it afterwards to suit your purpose, is that... really that much faster than just writing the thing in your style, in a way you understand, that you can then change later as required?

I dunno. I've played with these tools and they're neat, and I think they can be good for learning a new language or framework, but once I'm actually ready to build something, I don't see myself starting with AI generation for any substantial part of it.



The question is not about what AI can do today but what we assume AI will be able to do tomorrow.

Everything you wrote in your second paragraph will become something AI does better and faster than you.

We have never had technology that can write code like this. I prompted ChatGPT to write a very basic Java tool that renders an image from a URL and makes it bigger on a click. It just did it.
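For concreteness, a minimal sketch of the kind of tool it produced, assuming Swing; the URL and class name here are my own placeholders, not ChatGPT's actual output:

    import java.awt.Image;
    import java.awt.event.MouseAdapter;
    import java.awt.event.MouseEvent;
    import java.net.URL;
    import javax.swing.ImageIcon;
    import javax.swing.JFrame;
    import javax.swing.JLabel;
    import javax.swing.JScrollPane;
    import javax.swing.SwingUtilities;

    // Minimal sketch: show an image loaded from a URL, enlarge it on click.
    public class ClickZoomViewer {
        public static void main(String[] args) throws Exception {
            // Placeholder URL; substitute any reachable image.
            ImageIcon original = new ImageIcon(new URL("https://example.com/image.png"));
            JLabel label = new JLabel(original);
            label.addMouseListener(new MouseAdapter() {
                private double scale = 1.0;
                @Override
                public void mouseClicked(MouseEvent e) {
                    scale *= 1.5; // grow by 50% per click
                    Image scaled = original.getImage().getScaledInstance(
                            (int) (original.getIconWidth() * scale),
                            (int) (original.getIconHeight() * scale),
                            Image.SCALE_SMOOTH);
                    label.setIcon(new ImageIcon(scaled));
                }
            });
            SwingUtilities.invokeLater(() -> {
                JFrame frame = new JFrame("Click to zoom");
                frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frame.add(new JScrollPane(label));
                frame.setSize(800, 600);
                frame.setVisible(true);
            });
        }
    }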

It's not hard to think further, and a lot of technology is already going in this direction. Just last week Devin was shown. Gemini has a context window of 1 million tokens. Groq shows us how it will feel to have instant responses.

Right now it's already good enough that people with Copilot want to keep it when asked. We already pay billions for AI daily. This means the amount of research, business motivation, and money flowing into it now is probably staggering in comparison to what moved this field a few years ago.

It's not at all clear how fast we will progress, but I'm pretty sure we will hit a time where every junior is worse than AI, which will force people to rethink what they are going to do. Do I hire a junior and train them? Or do I prefer to invest more into AI? The gap will widen and widen; a generation, or a certain number of people, will stay longer and might be able to stay in development, but a lot of others might just not.


> We have never had technology that can write code like this. I prompted ChatGPT to write a very basic Java tool that renders an image from a URL and makes it bigger on a click. It just did it.

It's worth noting that it can do things like that because of the large number of "how to do simple things in Java" tutorials there are on the internet.

Ask an AI to _make_ Java, and it won't be able to (and will continue not to be).

That's the level AI will fail at: when things aren't easily indexed from the internet, and are thus much harder or impossible to put into a training set.

I think the technology itself (transformers and other such statistical models) has exhausted most of its low-hanging fruit by now.

Sora, for example, isn't a grand innovation in the way latent-space models, word2vec, or transformers are; it's just a MUCH larger model than DALL-E 3. Which is great! But it still has the limits inherent to statistical models: they need the training data.


> It's worth noting that it can do things like that because of the large number of "how to do simple things in Java" tutorials there are on the internet.

Much like the points made elsewhere with regard to AI art: it cannot invent. It can remix, recombine, etc., but no AI model we have now is anywhere close to being able to create something entirely new that has never been seen before.


I have not seen crabs made out of food before.

What level do you think 'invention' has to reach to count as something AI can't do?

The only thing AI needs is a feedback loop and a benchmark/cost function.

If the cost function is page impressions, that's easy. If it's running unit tests derived from business requirements, that's easy too.
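To spell out what I mean by a feedback loop, a toy sketch in Java: the unit tests act as the cost function, `generate` stands in for whatever code-producing model you plug in, and every name here is a made-up placeholder.

    import java.util.List;
    import java.util.function.Function;

    // Toy feedback loop: propose a candidate, score it with a cost function,
    // keep the best, retry until the tests pass or the budget runs out.
    public class FeedbackLoop {

        // Cost function: the fraction of "unit tests" the candidate fails.
        static double cost(String candidate, List<Function<String, Boolean>> tests) {
            long failed = tests.stream().filter(t -> !t.apply(candidate)).count();
            return (double) failed / tests.size();
        }

        // generate stands in for a model call; the string passed to it is the
        // feedback signal (here just a crude description of the last attempt).
        static String optimize(Function<String, String> generate,
                               List<Function<String, Boolean>> tests,
                               int budget) {
            String best = generate.apply("initial requirements");
            double bestCost = cost(best, tests);
            for (int i = 0; i < budget && bestCost > 0.0; i++) {
                String candidate = generate.apply("last attempt failed with cost " + bestCost);
                double c = cost(candidate, tests);
                if (c < bestCost) {
                    best = candidate;
                    bestCost = c;
                }
            }
            return best;
        }

        public static void main(String[] args) {
            // Made-up "test": accept any candidate containing "ok".
            List<Function<String, Boolean>> tests = List.of(c -> c.contains("ok"));
            // Made-up "model": improves once it gets failure feedback.
            Function<String, String> generate =
                    feedback -> feedback.startsWith("initial") ? "draft" : "ok draft";
            System.out.println(optimize(generate, tests, 10)); // prints "ok draft"
        }
    }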


> The only thing AI needs is a feedback loop and a benchmark/cost function.

You’re forgetting about data. AI needs data. It’s arguably the most important thing any statistical model needs.

That data must come from somewhere.

There’s no free lunch.


> The question is not about what AI can do today but what we assume AI will be able to do tomorrow.

And I think many assumptions on this front are products of magical thinking, discarding the limitations of LLMs in favor of waiting for intelligence to emerge from the machine, which isn't going to happen. ChatGPT and associated tech is cool, but it is, at the end of the day, pattern recognition and reproduction. That's it. It cannot invent something not seen before, or in our case here, it cannot write code that has never been written.

Now that doesn't make it useless; there's tons of code being written all the time that has been written thousands of times before. But it does mean that, depending on what you're trying to build, you will run into its limitations pretty quickly and have to start writing it yourself. And that being the case... why not just do that in the first place?

> We have never had technology that can write code like this. I prompted ChatGPT to write a very basic Java tool that renders an image from a URL and makes it bigger on a click. It just did it.

Which it did because, as the other comment said, tons of people already have.

> It's not at all clear how fast we will progress, but I'm pretty sure we will hit a time where every junior is worse than AI, which will force people to rethink what they are going to do. Do I hire a junior and train them? Or do I prefer to invest more into AI? The gap will widen and widen; a generation, or a certain number of people, will stay longer and might be able to stay in development, but a lot of others might just not.

I mean, this sounds like an absolute crisis in the making for software dev as a profession, when the entire industry is reliant on a small community of actual programmers overseeing tons of robot junior devs turning out mediocre code. But to each their own I suppose.


Most of the time I'm not 'inventing' anything new either.

I get a requirement, find a solution, and 99.99999% of the time the solution is not a new algorithm. I actually believe I have never invented a new algorithm.

Besides, the next step is reasoning in GPT-5, and Devin shows that GPTs/LLMs can start breaking down tasks.

I don't mind being wrong, tbh; there is no risk in it for me if AI doesn't take my job, but I don't believe that. I do believe the progress will get better and better, and AI will do more and more reasoning.

It can easily try and do things 1000x faster than us, including reasoning. It's not hard to see that it will also be able to create its own examples and learn from them.


> I get a requirement, find a solution, and 99.99999% of the time the solution is not a new algorithm. I actually believe I have never invented a new algorithm.

I can think of tons of things I do in my day-to-day programming that, while certainly not new or remarkable advances in technology, are at least new enough that you're not going to find a Stack Overflow thread for them.

Again, you guys are pointing to a code generator that can generate functions or code snippets to accomplish a particular task, and again, that is cool, and I think it has huge use, if nothing else, as an assistive learning tool when you're trying to pick up a new language or get better with a library or what have you. But again, my point is: ask it to do something that doesn't appear in a bunch of those threads. Ask it to solve a particular bugbear problem in your codebase. Ask it to invent a new language, even a high-level one.

> It can easily try and do things 1000x faster than us, including reasoning

AI is not a reasoning machine, though. I'd be very interested in what you mean by the word "reasoning" in this context.


"I can think of tons of things I do in my day-to-day programming that, while certainly not new or remarkable advances in technology, are at least new enough that you're not going to find a Stack Overflow thread for it."

I don't. I might solve current issues, new error messages from new or other libraries, etc., but nothing genuinely novel.

"reasoning": thinking about a problem, reasoning about potential solutions, estimating the best action, executing it, retrying.

Reasoning in the sense of: if the error message indicates a Hibernate issue, reducing the search space for finding a solution.
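A deliberately literal sketch of that kind of search-space reduction; the error categories and candidate fixes are entirely made up for illustration:

    import java.util.List;
    import java.util.Map;

    // Toy triage: classify the error message, then only consider fixes that
    // belong to that category. Categories and fixes are invented for the example.
    public class ErrorTriage {
        static final Map<String, List<String>> FIXES_BY_CATEGORY = Map.of(
                "hibernate", List.of("check entity mappings", "check transaction boundaries"),
                "connection", List.of("check connectivity", "check timeouts"));

        static List<String> candidateFixes(String errorMessage) {
            String msg = errorMessage.toLowerCase();
            return FIXES_BY_CATEGORY.entrySet().stream()
                    .filter(e -> msg.contains(e.getKey()))
                    .flatMap(e -> e.getValue().stream())
                    .toList();
        }

        public static void main(String[] args) {
            // A real Hibernate exception name, used here only as sample input.
            System.out.println(candidateFixes(
                    "org.hibernate.LazyInitializationException: could not initialize proxy"));
        }
    }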


In my workplace, juniors were replaced years ago by a never-ending rotation of offshore staff. As soon as we train our offshore colleagues, they are rotated somewhere else.

At least AI will stay put.

But most of us will be getting by on basic income. Or banging on the gates of robot-guarded walls, begging for food.


Why invent a new lang when it's easier to compile down to machine code and grok that?


It could be that AI is smart enough for this. Nonetheless, a language is a compression mechanism. You rarely use single ASM instructions.


I think the question is whether we're going to plateau at 95% or not. It's possible that we just run into a wall with transformers, or they do iron it out and it does replace us all.



