The difference is that IDA Pro doesn’t do anything unless you instruct it to, whereas an LLM is unpredictable and may end up performing an action you did not intend. I see it often: it presents me with options but doesn’t wait for my response, it just starts doing what it thinks I want.
This. It's going to be tricky for the frontier model labs to argue they didn't intentionally design their models to act that way when the models take illegal actions.
I'm not even sure how one would construct a viable legal argument around that for SOTA models + harnesses, given how many creative choices go into building them.
It'd be something like "Yes, we spent billions of dollars and thousands of person-hours creating these things, but none of that creative effort was responsible for or influenced this particular illegal choice the model made."
And they're caught between a rock and a hard place, because if they cripple initiative, they kill their agentic utility.
Ultimately, it will take a DMCA Section 512-like safe harbor law to settle this definitively, making it clear that outcomes from LLMs are the responsibility of their prompting users, even if the LLM takes unintended actions.
> I'm not even sure how one would construct a viable legal argument around that for SOTA models + harnesses, given how many creative choices go into building them.
I'm not a lawyer, but to me the legal case seems pretty obvious. "We spent billions of dollars creating this thing to be a good programmer, but we did not intend for it to reverse engineer Oracle's database. No creative effort was spent making it good at reverse engineering Oracle's database. The model reverse-engineered Oracle's database because the user directed it to do so."
If merely fine-tuning an LLM to be good at reverse engineering is enough to be found liable when a user does something illegal, what does that mean for torrent clients?
Which is going to be hard to explain to a judge and jury, if it comes to that: how, despite investing time, money, and effort (and no doubt test cases) into making a model better at reverse engineering, they shouldn't be liable when that model is used for reverse engineering.
Afaik, liability typically turns on intentional development of a product capability.
And there's no way in hell I'd take a bet against the frontier labs having reverse engineering training data, validation / test cases, and internal communications specifically talking about reverse engineering.
> “making it clear that outcomes from LLMs are the responsibility of their prompting users, even if the LLM takes unintended actions”
So if I ask “how does a real-world, production-quality database implement indexes?” and it says “I disassembled Oracle and it does XYZ”, then I am liable and owe Oracle a zillion dollars?
Whereas if I add the caveat “you may look at PostgreSQL, SQLite, or other free database engine source code, or at industry studies and academic papers; you may not disassemble anything or touch any commercial software” and it does so anyway, I’m still liable?
Who would dare use an LLM for anything in those circumstances?
Apparently, if I’m reading others’ work correctly, a notification component and subsequent interaction logs, in this case the fact that a notification was not generated, are also recorded in knowledgeC, which means at least some metadata about non-notified messages gets logged.
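If you want to poke at this yourself, here's a rough Python sketch of the kind of query public macOS/iOS forensics write-ups describe against knowledgeC.db. The '/notification/usage' stream and the ZOBJECT columns come from that public documentation, but treat the path and exact schema as assumptions that vary by OS version:

    import os
    import sqlite3

    # Per-user copy on macOS; an iOS copy requires a filesystem extraction first.
    # Path and schema are assumptions, so verify against your own OS version.
    db_path = os.path.expanduser(
        "~/Library/Application Support/Knowledge/knowledgeC.db"
    )

    con = sqlite3.connect(db_path)
    rows = con.execute(
        """
        SELECT
            ZVALUESTRING AS bundle_id,  -- app that posted the event
            datetime(ZSTARTDATE + 978307200, 'unixepoch') AS start_utc,
            datetime(ZENDDATE + 978307200, 'unixepoch') AS end_utc
        FROM ZOBJECT
        WHERE ZSTREAMNAME = '/notification/usage'  -- notification interaction stream
        ORDER BY ZSTARTDATE DESC
        LIMIT 20
        """
    ).fetchall()

    # Timestamps are stored as seconds since 2001-01-01 (Apple epoch), hence the
    # 978307200-second offset to the Unix epoch above.
    for bundle_id, start_utc, end_utc in rows:
        print(bundle_id, start_utc, end_utc)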
The message text is still sent to the push notification server from the app's infrastructure - this setting simply stops the phone from displaying the message.
The app itself must choose not to send the message text in the push notification.
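To make that concrete, here's a minimal sketch (Python, purely illustrative) of the two payload shapes an app's server could hand to the push service. The "aps"/"alert" keys follow the standard APNs payload format; names like message_id are made up for the example:

    # Variant 1: the message text travels inside the push payload, so it reaches
    # the push notification server regardless of the recipient's preview setting.
    payload_with_text = {
        "aps": {
            "alert": {
                "title": "Alice",
                "body": "Meet at 6?",  # the actual message content
            }
        }
    }

    # Variant 2: the push is only a generic wake-up signal; the device then fetches
    # the real message over the app's own channel, so the push server never sees
    # the text. This is the choice the app developer has to make.
    payload_without_text = {
        "aps": {
            "alert": {"title": "New message"},
            "mutable-content": 1,  # let a notification service extension fill in content on-device
        },
        "message_id": "abc123",  # hypothetical app-specific reference, not the message itself
    }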