Hacker News
new
|
past
|
comments
|
ask
|
show
|
jobs
|
submit
login
dmitrygr
42 days ago
|
parent
|
context
|
favorite
| on:
X blames users for Grok-generated CSAM; no fixes a...
Goalpost movement alert. The claim was that "AI can be told not to output something". It cannot. It can be told to not output something sometimes, and that might stick, sometimes. This
is
true. Original statement is not.
UncleMeat
42 days ago
[–]
If you insist on maximum pedantry, an AI
can
be told not to output something as this claim says nothing about how the AI responds to this command.
dmitrygr
42 days ago
|
parent
[–]
You are correct and you win. I concede. You outpedanted me. Upvoted
Guidelines
|
FAQ
|
Lists
|
API
|
Security
|
Legal
|
Apply to YC
|
Contact
Search: