Hacker News
MattPalmer1086 on July 18, 2024 | on: Overcoming the limits of current LLMs
It is more true to say that *all* output is bullshitting, not just the outputs we call hallucinations. Some of it is true, some isn't. The model doesn't know or care.