
I assume because I've seen ChatGPT successfully:

- write a Gitlab ci job that tests python code and outputs a junit test report.

- can you add code coverage?

- write a sql table that has users, age, and a biography

- write python that parses the above output from the table

- now use a lexer and parser

- create a unit test to verify it's correct.

- write a perl one liner that parses a list of users and phone numbers
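Tasks like the last one are easy to check mechanically. As a point of reference, here is a minimal Python equivalent of that perl one-liner task (the "name: phone" input format and the function name are my own assumptions, not what ChatGPT actually produced):

```python
import re

# Hypothetical input: one "name: phone" pair per line.
SAMPLE = """\
Alice Smith: 555-0101
Bob Jones: 555-0102
"""

LINE_RE = re.compile(r"^\s*(?P<name>[^:]+?)\s*:\s*(?P<phone>[\d-]+)\s*$")

def parse_users(text):
    """Return a list of (name, phone) tuples, skipping malformed lines."""
    pairs = []
    for line in text.splitlines():
        m = LINE_RE.match(line)
        if m:
            pairs.append((m.group("name"), m.group("phone")))
    return pairs

if __name__ == "__main__":
    for name, phone in parse_users(SAMPLE):
        print(f"{name} -> {phone}")
```

A correct answer is a few lines in either language; the interesting part is whether the model also handles malformed lines, which is the kind of follow-up prompt it tends to need.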

It completed all of them. I was already impressed with OpenAI when it generated an argument for why Britney Spears should become president, and a third-grade quiz about Socrates, which I then answered incorrectly and had it grade.



Have you tried doing this yourself? On Twitter etc. you see some really cool cherry-picked examples that are quite easy to break when replicating them yourself.

It is often confidently incorrect. Which is easy to spot if you know the subject matter, but nearly impossible if you are not familiar with the subject.


I found the same thing! I asked multiple questions and a lot of the answers were “confidently incorrect”; that is the exact term that came to mind reading them.

I found it funny because it made it more human-like as well.

(Example: ask “what’s the difference between timestamp and timestamptz in postgresql?” and it will answer that timestamptz stores the time zone and takes more space, both of which are incorrect: the two types are both 8 bytes, and timestamptz stores UTC and converts to the session time zone on display.)

I also asked it general, abstract programming advice and it gave pretty well-reasoned arguments mentioning maintainability and readability.


> It is often confidently incorrect.

My god, it truly can replace computer programmers.


As a lifelong product manager, I’ve always considered my specs to be prompt engineering for incredibly impressive but sometimes misguided intelligences.


It can barely spell out the correct ingredients of certain dishes; I wouldn't trust it.

Garbage in, Garbage out.

GPT-3 is about as good and reliable as Tesla Autopilot.


I've tried it and it works. It sometimes makes mistakes, which it corrects when they're pointed out, and it's going to make fewer and fewer mistakes as the technology improves. It can also do many things I'm not able to do. It's incredible.


These were all examples I tried myself.


On being able to replace a real programmer, I'd say ChatGPT is about as far from that as lane assist is from FSD.

Why, you ask? Because most of what a programmer does is more complex than what the AI can handle right now. For example, my job involves the following:

a) multiple interacting systems and management of the relationships between them

b) > 90% of my work does not involve writing new code

c) the programming part is mostly about changing existing code

d) or debugging (and sometimes setting up a debugger and running code)

e) reading hundreds of articles to figure out why the trivial example these systems are good at doesn't work (sometimes on someone else's machine or staging only)

f) taking product requirements and making small alterations to allow them to become reality (and knowing when to tell people about the decisions you made and when not to)

g) talking to people who might know more about the specific problem you are facing and when to battle through

h) sharing ideas on how to do things with the wider team

i) fighting the good fight and sneaking in refactoring when people are not looking

I don't think we are anywhere near these things yet.


The most important part, I think, is talking to the customer and understanding what they want... even when they don't know it themselves...


Okay. If there is somebody out there for whom the hardest part of their job is those tasks, then that's not good for them.


There are a lot of people for whom those kinds of tasks are, more or less, the most difficult programming tasks they'll be doing per feature. It's just that their job will often involve a lot more than that (and if it doesn't, goodbye job...)


Right. I don't understand the concern, even for people working on small CRUD apps using widely used frameworks with little specialized business-domain modeling. The act of writing code isn't the hard bit.


I agree. People think coding is the hard bit, when for most jobs it isn't.



