
[flagged]


It’s a nice piece of NLP-generated text, so now: what is scientifically correct about it, given that ChatGPT is not configured for reasoning or for citing sources?

There is a reason why the HN guidelines forbid robot-generated answers.


Just to clarify your position, do you think this specific passage contains mistakes or is misleading in any way (if so, please be precise), or are you generally doubtful about this technology but are fine with the text above?


The issue with ChatGPT is that it produces convincing-sounding texts that more often than not contain factual errors that are obvious to people familiar with the field, but require real effort for laypersons to disprove. Made-up citations, for example. As such, they’re worthless. A human is capable of producing a similar made-up text, but ChatGPT makes it trivial for anyone, flooding the conversation with useless noise and crowding out the signal.

Asking your parent to engage with an essentially unlimited firehose of unfounded claims just plays into that hand.

I really recommend reading this text on ChatGPT’s lack of usefulness for academic conversations: https://acoup.blog/2023/02/17/collections-on-chatgpt/


The problem with ChatGPT is indeed that it is trained to sound like an authoritative source regardless of the input query. What ChatGPT does is transform the original input and fill in the gaps, but everything it fills in would have to be checked against what the original authors actually wrote, and that is hard to impossible for a layman.


It’s even worse. The model contains the information that claims are statistically likely to be followed by a citation, for example. So when the output produces a claim, it follows up with a citation, and it completely makes that one up. It has no concept of what a citation is, what purpose it serves, or that a reader might actually go and validate it. It's just a specific sequence of words that follows a specific sequence of words.
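To make that concrete, here is a toy Python sketch of a pure next-token sampler (a tiny Markov-chain-style model, nothing like ChatGPT's actual transformer architecture; the probability table and tokens are invented for illustration). The point is that a citation-shaped string falls out of token statistics alone, with no lookup against real papers:

    import random

    # Hypothetical next-token probabilities "learned" from text: after a
    # claim, a citation-like token is simply a likely successor.
    NEXT_TOKEN_PROBS = {
        "claim.": {"(Smith": 0.6, "However,": 0.4},
        "(Smith": {"et": 1.0},
        "et": {"al.,": 1.0},
        "al.,": {"2019)": 0.5, "2021)": 0.5},  # the year is sampled, never looked up
    }

    def sample_next(token):
        # Sample a successor from the learned distribution; fall back to <end>.
        dist = NEXT_TOKEN_PROBS.get(token, {"<end>": 1.0})
        tokens, weights = zip(*dist.items())
        return random.choices(tokens, weights=weights)[0]

    def generate(start, max_tokens=6):
        out = [start]
        for _ in range(max_tokens):
            nxt = sample_next(out[-1])
            if nxt == "<end>":
                break
            out.append(nxt)
        return " ".join(out)

    # The "citation" is just a high-probability word sequence; no database
    # of real papers is ever consulted.
    print(generate("claim."))  # e.g. "claim. (Smith et al., 2021)"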


Thank you for the reference. I know about the dangers of ChatGPT, and some skepticism is certainly warranted.

However, dismissing anything produced by ChatGPT simply because it was made by ChatGPT is not right, which is why I was asking for an opinion about that specific passage: if the text is accurate, it should not matter who or what wrote it.


You're missing my point: the text is produced by an agent known to be unreliable. It might be right by chance, but the onus of proving that (by citing references, for example) is on the poster. It's entirely warranted to dismiss the text; otherwise you essentially DoS the conversation.


You know, the most unreliable agents still appear to be people. If someone is right 90% of the time, we don’t require references for everything they say just to guard against the other 10%.


To clarify my position: I believe the generated text is indistinguishable from human-generated reasoning and is most probably true in most cases (probably no factual errors).

However, on average, ChatGPT content will contain more errors than content from humans (who, it can be assumed, want to tell the truth), and therefore it should be put in the same bucket as both “propaganda facts” and “con artist facts”. They’re still facts, just misleading ones.


I don't see this guideline? [0]

[0]: https://news.ycombinator.com/newsguidelines.html



OK, noted, sorry for my post.



