> Note on "tuned": OpenAI shared they trained the o3 we tested on 75% of the Public Training set. They have not shared more details. We have not yet tested the ARC-untrained model to understand how much of the performance is due to ARC-AGI data.
The blog author is talking about long-term trends, so it doesn't matter if early adopters skimp on security; the time horizon is long enough that the consequences will catch up with them.
If we stop and look carefully at our world, security (safety against malicious peers) is an iceberg we take for granted. Start by summing up the militaries of every country on earth. Add the budgets of most police departments and a good chunk of the justice system; the energy, material, and labor poured into most weapons, fences, doors, and locks; the CPU cycles spent on all encryption and most of the hashing.
P.S.: "Investors, friends, I am pleased to announce our bold and powerful new business model, which will completely disrupt the entire retail sector worldwide and change society forever. Behold! TTLMD: Take The Thing and Leave the Money in the Drawer! Existing industry dinosaurs will be unable to compete with our ultra-low-cost alternative, which needs barely any staff."
No, I'm still talking about prompt injection (and other more-normal reliability issues), because I do not believe LLMs are some inevitable stepping stone to an actual AI, one that has "alignment" to principles or goals beyond "what additional token completes this document the best." (Robot characters humans perceive when reading the document are not the author of the document.)
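A toy sketch of that last point, "what additional token completes this document the best": a language model is just a next-token scorer over text. (This bigram counter and its tiny corpus are invented for illustration; a real LLM is a neural network, not a lookup table.)

```python
from collections import Counter

# Toy "training corpus" (made up for illustration).
corpus = "the robot said the robot said the cat".split()

# Bigram counts: which token tends to follow each token.
follows = Counter(zip(corpus, corpus[1:]))

def complete(token):
    # Pick the continuation that "completes the document best" --
    # i.e. the most frequent follower in the corpus.
    candidates = [(count, nxt) for (prev, nxt), count in follows.items()
                  if prev == token]
    return max(candidates)[1]

print(complete("the"))  # -> "robot"
```

The model scores continuations of the document; any "robot character" a reader perceives in the completed text is part of the document, not the thing doing the scoring.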
For any technology or product, there are issues that can be ignored or downplayed in the name of profit today, but they tend to surface eventually. That's why it's very hard to buy leaded gasoline anymore, and why the joke goes that the "S" in "IoT" stands for "Security."
> I then leverage this to achieve arbitrary memory read and write outside of the v8 heap sandbox, and in turn arbitrary code execution in the Chrome renderer process.
So the code is running in a process that runs as the same user as the browser. That's no longer much of a sandbox, and you're now relying on the OS to protect your data, right?
No. There is a reason the author keeps repeating "arbitrary code execution in the Chrome renderer process": the code runs there, not in the browser process. The renderer itself still sits inside Chrome's OS-level sandbox, so escaping the V8 heap sandbox does not by itself reach the user's files or the rest of the session; that would require a separate sandbox escape.