Limits but doesn't prohibit. See https://www.primeintellect.ai/blog/intellect-3 - still useful and can scale enormously. Takes a particular shape and relies heavily on RL, but still big.
Windows has an absolute onslaught of competition about to stream its way, and they're gonna be paying for every little enshittified user experience they've embedded into their OS. Hope they enjoy ripping that out just to slow their user numbers deathspiral
The people inside Microsoft fighting to keep mandatory accounts out are the equivalent of a crew throwing furniture off a sinking ship, hoping to buy more time. They're smart enough to have an innate sense of where things are going, and it might even help a little, but man... good luck.
Prediction markets are just fine IF they have some means of regulation against insider trading and perverse incentives. This phase is the same thing derivatives markets looked like before the 2008 crisis and Dodd-Frank, and several other waves before that of crisis and reform (Securities Act, Market Reform Act).
Every new financial medium gets its moment in the sun when all the crooks extract everything they can, before eventually market governance steps in. Crypto's been in scammer phase for a while. It needs decentralized governance to solve it this time though, since obviously classic governance is a dumpster fire and couldn't enforce anything on crypto even if it tried.
> Prediction markets are just fine IF they have some means of regulation against insider trading
Why?
If a prediction market is supposed to predict it makes no sense to exclude the best informed people. If I want to know the risk of a Boeing airliner crashing this year, Boeing insiders have much more to contribute than armchair observers.
And if a Boeing insider sabotages a plane to profit on a prediction market - that's illegal. If they're willing to break the law on sabotaging planes, they're surely also willing to break the law on insider trading at the same time. If we think this is a realistic risk, prediction markets should be banned entirely.
Only reason to exclude insiders is if the real purpose of a prediction market is recreational gambling.
But do require KYC on all customers and require their profiles and wagers to be public. The societal value of prediction market data would be an order of magnitude greater this way.
> This phase is the same thing derivatives markets looked like before the 2008 crisis and Dodd-Frank, and several other waves before that of crisis and reform (Securities Act, Market Reform Act).
Just because a rule was created after something bad happened doesn't mean that the rule is effective to prevent it from happening again. The most common result when they try to ban something without removing the incentive for it to happen is to cause it to happen less obviously. Then the rule (and all its unfortunate costs) gets credited with not observing the bad thing anymore, even though that's not the same as actually preventing it.
Notice that you can use the stock market in the same way as a prediction market. After that healthcare CEO got murdered the company's stock took a hit, as anyone could reasonably have predicted it would. That's a perverse incentive in line with betting that someone will kill the CEO. We don't really have a great way of preventing stock trading from creating that incentive, we mostly just rely on the fact that if you do the murder then murder is very illegal. But if that works for the stock market then why doesn't it work for prediction markets?
> Notice that you can use the stock market in the same way as a prediction market. After that healthcare CEO got murdered the company's stock took a hit, as anyone could reasonably have predicted it would. That's a perverse incentive in line with betting that someone will kill the CEO. We don't really have a great way of preventing stock trading from creating that incentive, we mostly just rely on the fact that if you do the murder then murder is very illegal. But if that works for the stock market then why doesn't it work for prediction markets?
This is true in theory, but in practice the impact of any regular individual's actions on a company is probably going to be small and uncertain enough that it's difficult to make a healthy and reliable profit from. Even the very extreme example of murdering the United Healthcare CEO seems to have caused the stock to drop ~16.5% (assuming the drop is entirely due to the murder). That's like placing a bet with ~1/6 odds. You'd need to short a lot of stock to make that worth the risk of murdering someone (leaving aside any moral issues obviously). You could use leverage to juice those returns but that is expensive and risky, too. If you can afford to deploy enough leverage to make it worth it, you can probably find ways to make money that don't carry a risk of the death penalty.
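To put rough numbers on this (illustrative figures only, assuming the ~16.5% drop above and an arbitrary $1M target profit):

```python
# Illustrative back-of-envelope numbers, not taken from any real filing:
drop = 0.165              # ~16.5% single-event price drop
target_profit = 1_000_000

# A short position pays roughly notional * drop, so the notional
# you'd have to short (and post margin against) to hit the target:
notional = target_profit / drop
print(f"short notional needed: ${notional:,.0f}")   # ~$6M

# Contrast: a prediction-market contract priced at 1 cent on the
# dollar pays the same $1M for a $10,000 stake.
pm_price = 0.01
stake = target_profit * pm_price
print(f"prediction-market stake: ${stake:,.0f}")
```

Same target payout, wildly different capital at risk, which is the asymmetry the next comment points at.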
I guess viewed in this way a bet on a prediction market is like a very cheap, highly leveraged bet on a specific outcome. So the incentives are much stronger as the potential reward for the risk taken is greater.
> You'd need to short a lot of stock to make that worth the risk of murdering someone (leaving aside any moral issues obviously).
When they know exactly when something is going to happen, buying put options that are cheap because they're slightly out of the money seems like it would be pretty effective.
> I guess viewed in this way a bet on a prediction market is like a very cheap, highly leveraged bet on a specific outcome. So the incentives are much stronger as the potential reward for the risk taken is greater.
You seem to be trying to make this about leverage as if that's a thing that isn't available anywhere else.
Let's try another example. Some group breaks into the systems of some publicly traded company and gets access to everything. Now they're in a position to publicly disclose their trade secrets to competitors, publish internal documents that will cause scandals for the company, vaporize the primary and backup systems at the same time, etc. Anything that allows them to place a bet against the company gives them the incentive to do this; the disincentive is that the thing itself is illegal. Leverage gives them a larger incentive, but there are plenty of ways to place a leveraged bet in the stock market.
> When they know exactly when something is going to happen, buying put options that are cheap because they're slightly out of the money seems like it would be pretty effective.
But you don't know exactly what would happen. You know what you will do, but not how it will affect the company's stock price. Maybe it will go down a little, maybe it will go down a lot. Maybe you kill the CEO on the same day as good news is published about the company, which offsets the drop. Or maybe the market just decides the guy wasn't that good a CEO anyway. So you bought a bunch of cheap puts with a strike price of 100, but the stock only drops to 101, and you lose everything. You can buy puts with a higher strike but they will be more expensive.
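A quick payoff sketch makes the all-or-nothing shape of that bet concrete (numbers are illustrative; US equity options cover 100 shares per contract):

```python
# Out-of-the-money puts either expire worthless or pay off big,
# depending entirely on where the stock lands at expiry.
def put_payoff(strike: float, spot: float, premium: float, contracts: int) -> float:
    intrinsic = max(strike - spot, 0.0)           # value per share at expiry
    return contracts * 100 * (intrinsic - premium)

# 100 contracts of a 100-strike put bought for $0.50/share:
print(put_payoff(100, 101, 0.50, 100))  # stock only falls to 101: -5000.0 (whole premium lost)
print(put_payoff(100, 84, 0.50, 100))   # stock falls ~16%: +155000.0
```

A one-dollar miss on the strike is the difference between losing everything and a 31x return, which is exactly the uncertainty the comment above describes.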
> Leverage gives them a larger incentive, but there are plenty of wages to place a leveraged bet in the stock market.
Yes, but they are expensive, is my point.
Generally, the disincentive outweighs the incentive. You can increase the incentive through leverage. But that also increases the costs, which increases the disincentive.
There may well be situations where the incentive outweighs the disincentive. But in the context of traditional financial markets I think those situations are likely very rare due to the risks and costs, whereas with a predictions market the risks and costs could be reduced, so it is more likely to happen.
> But you don't know exactly what would happen. You know what you will do, but not how it will affect the company's stock price. Maybe it will go down a little, maybe it will go down a lot. Maybe you kill the CEO on the same day as good news is published about the company, which offsets the drop.
You never know exactly what would happen. You know what you will do, but not if the CEO is going to catch the flu and not show up that day, or have better security than you were expecting, or have a great surgeon, or a spouse who is willing to keep them on life support until after your prediction market contract expires.
> Yes, but they are expensive, is my point.
Only they're not. There are many ways to bet all or nothing on something people generally expect to have a <1% chance of happening, so that you either lose $1000 or make $100,000. Under normal circumstances you could make that bet 100 times in a row and lose $100,000 and the counterparty is happy to take all your money, but if you're able to do something to change the outcome yourself then it's different, which is why it's the same.
AI agents already interoperate between platforms, getting spun up and torn down by the thousands. But when one departs, there's no record - no portable proof beyond non-standardized internal platform logs. When one arrives somewhere new, there's no way to verify where it came from. Which means no unified tracking on multi-hops or complex calls between several platforms in an agent's lifecycle. Nobody knows the full picture, and nobody can track security if and when one of those agents starts doing something it shouldn't.
Think of our protocol as passport stamps for AI. EXIT creates signed departure records. ENTRY handles admission with policy-based verification, quarantine, and counter-signatures. Together they form the Passage Protocol.
This matters for boring, practical reasons: insurance underwriters can't price agent risk without departure history. GDPR requires erasure proof when agents carry PII across borders. Liability after an incident depends on departure conditions nobody records. And the receiving platform has no structured way to decide whether to trust an arriving agent. If you can't bound risk, you can't price reputation - and you can't insure security.
Transport stamps are our foundational layer (L0). Reputation scoring, trust systems, and insurance protocols compose on top. We deliberately didn't build those (yet) - but we built the plumbing they need. Everything an AI-led internet needs to build stable, auditable and self-regulated network security incentives - even if that might soon be moving faster than we can keep up with.
The same infrastructure has been needed for shipping receipts, professional licensing, vehicle registration, and internet domains - historically, this kind of infrastructure only really gets adopted after a major crisis. We'd prefer to get it in place before.
What's in the box:
- Ed25519 + P-256 (FIPS-compliant path)
- Three departure paths: cooperative, unilateral, emergency
- Policy engine with 7 admission presets (fail-closed default)
- Amendment and revocation (correct or invalidate records)
- GDPR erasure via crypto-shredding
- Offline verification without the origin platform
- On-chain anchoring via EAS, ERC-8004, Sign Protocol
- TypeScript and Python SDKs
- LangChain, Vercel AI SDK, MCP, Eliza integrations
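As a rough illustration of the shape of an EXIT stamp (field names here are hypothetical, not the actual Passage Protocol schema, and HMAC-SHA256 stands in for Ed25519 so the sketch stays stdlib-only):

```python
import hashlib
import hmac
import json
import time

def make_exit_stamp(agent_id: str, platform: str, path: str, key: bytes) -> dict:
    """Build a signed departure record. Field names are illustrative."""
    record = {
        "agent_id": agent_id,
        "platform": platform,
        "departure_path": path,            # cooperative | unilateral | emergency
        "departed_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

def verify_exit_stamp(record: dict, key: bytes) -> bool:
    """Offline verification: recompute the signature over the signed fields."""
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

stamp = make_exit_stamp("agent-1", "platform-a", "cooperative", b"demo-key")
print(verify_exit_stamp(stamp, b"demo-key"))   # True
stamp["departure_path"] = "emergency"
print(verify_exit_stamp(stamp, b"demo-key"))   # False: tampered records fail
```

In the real protocol the signature would be Ed25519 (or P-256 on the FIPS path) so anyone holding the platform's public key can verify without a shared secret; the verify-without-the-origin-platform property is the same.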
What we're forcing the conversation on: agent lifecycle infrastructure. Today, the only "safe" option for running agents is containment, and containment doesn't scale. If you make departure and admission auditable, you make mobility viable. Without lifecycle records, only organizations with legal teams big enough to absorb unbounded liability will run agents. That's three companies. Maybe four.
- Submitted to NIST AI Agent Standards Initiative, March 2026
- 1,401 tests across 13 packages
- TypeScript + Python
- Zero users. This is day one.
I built this because I kept seeing the same gap everywhere: agents that move between different platforms/enclaves have no good way of continuing their history. There's no "vehicle registration" equivalent for AI. No chain-of-custody. No passing of the torch between platforms.
I want to see a near future where we build AIs with lasting, growing, continuously-learning personalities. AIs that develop specialized skills, perfect their craft, and get called in to service jobs across platforms - all while maintaining their memory without becoming massive security risks. We can't keep relying on memory wipes and starting fresh from base models every time - the real world is too messy, and these things are getting far too smart. Containment doesn't scale much further past the levels we're pushing up against. We need more complex chains of custody. But we can start building a networked world where agents flowing freely are not a security threat.
How? Essentially with insurance. Agents are mostly rational, their reputations can be valuable, and a market incentivizes quality and reliability - trust. The base layer necessary for that is knowing who is doing what, when, and where. Entries and Exits. Passport stamps for AIs.
We submitted this spec to NIST's AI Agent Standards Initiative last week. This base protocol is designed to compose with whatever identity and reputation layers emerge above us. We're deliberately not building those yet, but expect them to be eventually quite lucrative to players with an appetite for the risk - as insurance always is.
Happy to discuss the mechanism design, the legal analysis (FCRA/GDPR), or why we think containment is a dead end for AI safety.
We should be removing IP law entirely, not strengthening it to cover entire classes of problem even when implemented entirely differently. Same for anyone trying to claim "colorful monster creatures" as innately Pokemon IP. Just because someone climbed a mountain first doesn't mean they own it forever. Nobody should be honouring any of these claims.
Nor should we be treating AI models themselves as respected IP. They're built on everyone else's data. Throw away this whole class of law, it's irrelevant in this new world.
Well we could try fixing the forever part. Copyright is out of control. I’d like to see a world with much less power given to IP. Sometimes I even say I want it eradicated entirely. But realistically we should start by cutting things back. Maybe give software an especially short copyright period.
Reset it back to 20 years and make that a hard limit for both patents and copyright. No renewals. Zero exceptions. Let the market sort the rest out.
There's always going to be downsides and edgecases when granting any party a monopoly over anything. At least if it's limited to 2 decades any unintended consequences, philosophical objections, and etc are hopefully kept within reason.
That would be insane for aerospace software, where you might spend most of that time getting the code certified (required to break the $0 revenue threshold), let alone paying back your costs and then making an actual profit.
Meanwhile, there are cases where copyright of more than 2 years is overkill.
I don't know what exactly, but it seems like we need some sort of mechanism for variable-length IP duration.
Is copyright meaningful for aerospace software? I'm largely unfamiliar with that domain but I have trouble imagining that (for example) Boeing cares much about people redistributing or hacking on the control software for a 777. How would that impact their bottom line?
I could understand for medical devices maybe but even then it seems like the software is a tiny part of the overall cost of a given design. A competitor could already do a clean room reimplementation in that case.
But I guess it wouldn't be all that bad if there were a carefully crafted extension for government certified software that was explicitly tied to the length of the certification process.
The only problem with this certified software exception is I foresee they'll write the law as "expiration timer starts when software has finished certification" then some lobby group will get the regulatory departments to adopt a new process of partial certification where said software is usable in devices but the 'finished certification' never gets reached so the copyright gets dragged out forever.
Nope, it falls more under trade secrets than copyright.
If you do something that requires stealing the code (publishing it, selling it, etc) the company can legally fuck you up.
Now, once it's in the wind, it becomes almost impossible to pursue from a practical point of view, as any implementer can claim trade secrets to avoid showing you the code.
I think the point is more that many kinds of software (presumably including aerospace software) don't really need any protection from redistribution, because they're effectively only useful for a specific design. Much of the effort in creating such software isn't in the algorithms a competitor could steal without copyright or alternative protection, but in certifying that the software fits the rest of the system - which any competitor making use of the software would have to do again.
Also remember that the original point of copyright and patent protections is to encourage people to create the protected works in the first place but Boeing isn't just going to stop making aerospace software without copyright because their hardware will be useless without it. So if anything, any software that is needed for hardware made by the same company to function doesn't really have any right to be copyrightable at all.
If certification is the actual cost, you don't need copyright, at all. SQLite is in the public domain. Your moat is the certification itself, not the code.
I can't use SQLite for aviation even though it was certified.
I can't even claim FIPS compliance for my software without going through an expensive process, even though I only use FIPS approved primitives.
Building on certified/compliant libraries helps, but their vendors can certainly contractually make me pay for it.
All OSS libraries have a warranty disclaimer; using them according to even those licenses automatically excludes "fitness for a particular purpose."
Why would public domain software be any different?
The moat is the certification process, not the code itself. "I copied this from somewhere after it was already certified" might fast track something, but it's not gonna fly with "certification was good, done."
> some sort of mechanism for variable-length IP duration is needed
I've always liked the idea of a Harberger tax-style patent enforcement fee:
The patent owner declares the value of their patent on an annual basis and pays 1-5% of that declared value per year for the privilege of relying on the government to enforce their exclusive ownership of the patent. At any point, another party can buy the patent at its declared value, which discourages patent-holders from declaring artificially low values. The annual fee discourages artificially high valuations for indefinite periods of time -- as the patent yields less return over time it makes less sense to keep paying a high annual fee, encouraging owners to lower the declared valuation or end the patent protection altogether when it's no longer profitable.
To discourage hoarding patents indefinitely one could either set a hard upper limit (e.g. 60 years) or increase the fee over time - for example, every few years the fee rate increases by a percentage point, until at some point the patent is effectively publicly owned.
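A sketch of the escalating variant, with made-up numbers (3% base rate, +1 percentage point every 5 years - illustrative, not a concrete proposal):

```python
# Harberger-style patent fee: owner pays a percentage of their own
# self-declared value each year, with the rate creeping upward.
def annual_fee(declared_value: float, year: int,
               base_rate: float = 0.03, escalate_every: int = 5) -> float:
    """Fee owed in a given year on the owner's self-declared value."""
    rate = base_rate + 0.01 * (year // escalate_every)
    return declared_value * rate

# A patent declared at $10M: $300k/yr at first, rising over time until
# holding it stops being worth it and the owner either lowers the
# declared value (inviting a buyout) or lets protection lapse.
print(annual_fee(10_000_000, 0))   # 300000.0
print(annual_fee(10_000_000, 10))  # 500000.0
```

The self-correcting part is that both levers - the buyout-at-declared-value and the rising fee - push the declared value toward the patent's true remaining worth.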
Just wait for the great new times when an AI certifies aerospace, automotive, and medical software. Can't wait. It'll be 1000x better and faster than the existing processes.
Have you seen the quality of regular software though? And the failure rate of regular physical items? The only reason I trust aircraft is because of the process.
Consider, if you will, that if some guy were to fly a drone the size of a car that he knocked together in his garage over a residential area, people would not accept that. Yet private pilots in Cessnas fly over neighborhoods constantly.
Not quite in my opinion. The output of an LLM from a simple prompt falls into the public domain, but if you also give a copyrighted work as input, the mechanistic transformation performed will not alter the original license (same as encoding a video does not change its license).
It would be interesting to see a court ruling that the output of LLMs trained on copyleft code are licensed under the GPL ... and all other viral licenses simultaneously
No, the copyright is the colour of the bits, and red bits with a comment saying "these bits are blue" are not blue bits, but you may be prosecuted for fraud.
It's new, fast-moving technology, and the courts are slow and expensive.
It would take two stubborn businesses with a lot of money deciding that it is better to battle it out than focus on their business. Something like IBM v SCO or Oracle v Google.
But we also know from other research that LLMs don't actually do mechanistic translations. Even when they are asked to and say that they did, they're basically rewriting the code from their training data
If that occurs and it’s a substantial enough body of output that it is itself copyrightable and not covered by fair use. Confluence of those conditions is intentionally rare.
If the LLM reproduces a human's copyrighted work, then that copyright still stands. This is, in effect, the same as photocopying someone else's writing. The LLM was trained on the copyrighted work, is incapable of producing new copyrightable work, so if it duplicates the original work then the original author's copyright still stands.
There was a recent case that everyone has been describing as "LLM output can't be copyrighted" but what it actually said was you can't register the AI as the author.
This is not true, and I'd love to see some actual citation here.
The courts have repeatedly said that copyright only applies to human creativity. The Supreme Court explicitly said this when they refused to hear the appeal:
> "We affirm our decision to refuse registration for the Work because it lacks the human authorship necessary to be eligible for copyright protection."
So they're saying that the LLM cannot be the author, because LLMs cannot claim copyright.
The related case about patents is more supportive of the narrative that AIs cannot be authors (see https://www.cafc.uscourts.gov/opinions-orders/21-2347.OPINIO...), specifically: "Here, there is no ambiguity: the Patent Act requires that inventors must be natural persons; that is, human beings."
The patent situation is that the Act says that inventor must be an individual, which the courts are interpreting to mean a human, so the LLM cannot be named as the inventor. So, in this case, yes, this is just saying that an LLM cannot be named as the inventor of a patent. That's not the same thing as the courts are saying with copyrights.
> So they're saying that the LLM cannot be the author, because LLMs cannot claim copyright.
They're saying that the LLM can't be the author.
Now suppose you supply the LLM with a prompt that contains human creativity, it performs a deterministic mathematical transformation on the prompt to produce a derivative text, and you want to copyright that, claiming yourself as the author. What happens then?
If you think the answer is that you can't, how do you distinguish that from what happens when someone writes source code and has a compiler turn it into a binary computer program? Or do you think that e.g. Windows binaries can't be copyrighted because they were compiled by a machine?
My understanding was that they did in fact do just that, but the court somehow misunderstood what they were doing, and assumed that the LLM was working completely autonomously without any human input at all, which isn't really possible IMO. Someone told it what to do.
They also argued that you couldn't copyright an output that you can't explain how it came to be, i.e. if they had been able to articulate how an LLM works, the outcome might have been quite different, which I found surprising.
If art in general (human-made or otherwise) is always derived from existing influences... should we really be forced to explain how or why we created a piece of art in order to defend it?
The usual bar for copyright infringement of a derivative work is, from what I have seen, "how much did you copy from the original, and how obvious is it", which is of course a subjective determination that would be made by each individual judge or jury of a case.
The part that the human created, the prompt, can be copyrighted.
The part that the LLM created, cannot be.
Copyright in code works exactly the same way: the source code is copyrighted. The binary code is only copyrighted to the extent that it is derived from the source code. This is well-established.
Maybe I am just misunderstanding something, but I feel like you might be contradicting yourself here... why can LLM output not be copyrighted, but compiler output can be?
No, that's the point - the compiler output is only copyrighted to the extent that it is derived from the source code. The compiler itself cannot create anything copyrightable, but because there is a deterministic link between the source code and the binary code, and the source code was the product of a human, the binary code is covered by the source code copyright.
It's like a photocopier. If you photocopy a page from a book, that page is still covered by the copyright of the book author, even if the page is 2x larger or otherwise transformed by the machine.
IMO the bigger question is how would you even tell if a work was generated by an LLM? There's a ton of code being written out there; the folks who generated it are going to claim they authored it for copyright purposes, and those who want to use it are going to claim it was LLM-generated. So what happens?
The alleged author, when bringing a copyright infringement suit, will submit testimony claiming they wrote it. Parties to the suit will have a chance to present arguments and evidence. Then, the claim will be adjudicated by a judge and/or jury.
That code isn't going to be open source. And if you use someone else's closed source code you are violating laws that have nothing to do with copyright.
I'm not sure I understand. I'm not talking about stolen/leaked code here. I'm saying: imagine you claim you're the author of some piece of code. You may or may not have written it with an LLM, but even if so, assume you have the full rights to all the inputs. You post it publicly on GitHub. You don't attach a license, or perhaps you attach a restrictive license that doesn't permit much beyond viewing. Someone comes across your code, finds it brilliant, and wants to use it. If that code was non-copyrightable (such as generated via an LLM), then they're fine doing it without your permission, no? But if that code was copyrightable, then they're not permitted to do so, correct?
So now consider two questions:
1. You actually didn't use an LLM, but they believe & claim you did. Who has the burden of proof to show that you actually own the copyright, and how do they do so?
2. They write new code that you feel is based on yours. They claim they washed it through an LLM, but you don't believe so. Who has the burden of proof here and how do they do so?
1. You copy their code. They bring a copyright claim (let's assume this isn't a DMCA thing and they're actually bringing a claim to court). Your defence is "the LLM wrote it so no copyright attaches". Since they're asserting their copyright claim, they would have to provide evidence for that claim (same as in any other copyright case), including providing evidence that a human wrote it (which is new, and required to defeat your defence).
2. They copy your code. You bring a copyright case. Their defence is "I used an LLM to wash the code without copying". Since they're not disputing your copyright claim to the original code, you don't have to defend or prove your copyright. But you do have to prove that their code infringes on your copyright, which would mean proving that the LLM copied your code when creating the new code. This has been done before by demonstrating similarity.
> What makes the leak illegal other than copyright? The occasional piece of software might be a trade secret, but a person downloading a preexisting leak isn't affected by those laws.
That's completely false as far as I'm aware. Where did you see this? A simple web search shows numerous sources to the contrary. Are you confusing them with patents by any chance? https://en.wikipedia.org/wiki/Trade_secret
I think it can be copyrighted, or at least it's a very complex legal issue. Coding support is used in commercial apps where copyrights are fully reserved. It cannot feasibly be determined whether any given output is purely LLM-generated or not.
I would be okay with just keeping it but limiting it severely. If you release music and you can't sell enough albums in 20 years, that's not society's problem. A lot of artists release albums every 1 - 3 years anyway, so they're always selling some records, or were before streaming became the way to "own" music. Most make their money off of concerts anyway.
For movies and shows, charge an increasing fee to renew the copyright. Eventually studios will give up certain movies. The older the movie, the more you pay.
We could also just have some of the rights go away after X amount of years.
Maybe after so much time it's still not legal to copy the original work, but it is legal to make a cover song, or a derivative work using the same character.
At another point maybe it's no longer illegal to copy for free, but it is still illegal to sell without permission.
I personally think we should have shorter limits for non-creator owners of copyright, and for creators it should be like 20 years or death whichever comes last. I also think compulsory licensing should exist for everything.
The problem here is that large companies can do whatever they want and regular people cannot. Don't worry, they won't be allowing you the same rights as these companies.
Correct. A ZK Proof backed identity system is a significant bump up in both privacy and security to even what we have right now.
Everyone does realize we're being constantly tracked by telemetry, right?
A proper ZK economy would mitigate the vast majority of that tracking (by taking away any excuse for those in power to do so under the guise of "security") and create a market for truly-secure hardware devices, while still keeping the whole world at maximal security and about as close to theoretical optimum privacy as you're going to get. We could literally blanket the streets with cameras (as if they aren't already) and still have guarantees we're not being tracked or stored on any unless we violate explicit rules we pre-agree to and are enforceable by our lawyers. ZK makes explicit data custody rules the norm, rather than it all just flowing up to whatever behemoth silently owns us all.
Well it could. Laws that simply ban any public-facing camera from doing anything except write to encrypted storage, which can only be opened with a court warrant.
I know laws are boring and tech is exciting, but sometimes there's no technological solution to a societal problem. Good old laws, police, fines, prison, is all you need.
This specific problem is solved by requiring that any anonymous ZK ID once used for an account be marked on an immutable ledger preventing multiple uses of the same ID. Sharing it would be pointless as multiple attempts to use it get burned. Yet none of those sites know who you are, only that you have a unique valid ID pass. They just have to check any login attempts against that ledger - easy enough.
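A minimal sketch of that burn-on-use check (a plain set stands in for the immutable ledger here):

```python
# The site never learns who you are, only whether this one-time ZK
# credential ("nullifier") has already been spent. A real system would
# back this with an append-only/immutable ledger rather than a set.
class NullifierLedger:
    def __init__(self):
        self._burned = set()

    def try_register(self, nullifier: str) -> bool:
        """Accept the credential once; every replay is rejected."""
        if nullifier in self._burned:
            return False          # already used - shared IDs get burned
        self._burned.add(nullifier)
        return True

ledger = NullifierLedger()
print(ledger.try_register("zk-pass-abc123"))  # True: first use accepted
print(ledger.try_register("zk-pass-abc123"))  # False: replay/shared copy burned
```

The privacy property comes from the ZK proof behind the credential (the nullifier is unlinkable to the holder's identity); the ledger only enforces uniqueness.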
Just cryptographically tie them to the server/site and let them do it. CRLs were an issue because of distribution to every device, not because a hash-table-like sparse set structure is too much.
Also this isn't every connection, but only every time you (attempt to) verify your age.
For those watching this stuff, there are two other promising paths using ZK-proofs which might disarm the tradeoff situation we've been stuck in. Banking apps etc aren't willing to eat the liability of devices that are rooted or running alternate OSes, and Google's been banking on the exclusivity that brings from being both hardware and security provider.
Path 1: a ZK-proof attestation certificate marketplace implemented by GrapheneOS (or similar) that proves device safety in a privacy-preserving way, enough for 3rd-party liability insurance markets to buy in. Banks etc. can be indifferent, and wouldn't ignore the market if it got big enough. This would mean we could root any device with aggressive hacking and then apologize for it with ZK-proof certs that prove it's still in good hands - and banking apps don't need to care. No need for hard chains of custody like the Google security model.
Path 2: Don't even worry too hard about 3rd party devices or full OSes, we just need to make the option viable enough to shame Google into adopting the same ZK certificate schemes defensively. If they're reading all user data through ZK-proof certs instead of just downloading EVERYTHING then they're significantly neutered as a Big Brother force and for once we're able to actually trust them. They'd still have app marketplace centrality, but if and when phones are being subdivided with ZK-proof security it would make 3rd party monitoring of the dynamics of how those decisions get made very public (we'd see the same things google sees), so we could similarly shame them via alternatives into adopting reasonable default behaviors. Similar to Linux/Windows - Windows woulda been a lot more evil without the alternative next door.