> Alice is strongly incentivized to help her students whenever they need it, because she has equity in their long-term success.

You are assuming that the student will be interested in that help, but you can't force help onto someone. How do you prevent the student from just taking the money and doing whatever they want?

How many students will you have to mentor to get a guaranteed return?

Also, teachers typically don't have that kind of investment money, so they would have to be funded themselves for that purpose. How would risk management work across that two-tiered, decades-long funding structure? It seems like a good way to burn a lot of money.



Oh absolutely, these students would be actively applying to be trained by the teacher. And you’re right, the student retains full agency; the teacher can’t (and shouldn’t) control what they do.

As for returns: there are no guarantees, just like in venture capital. The model assumes a power-law distribution — you might mentor many students, but only a few will generate outsized successes. As AI makes outcomes more extreme, this dynamic will likely intensify, which is why equity (rather than debt) is the only model that works.
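A minimal sketch of that power-law assumption: sampling hypothetical student outcomes from a heavy-tailed Pareto distribution and checking how much of the total return comes from the top handful. All numbers here (100 students, shape parameter 1.16, roughly the "80/20" regime) are illustrative assumptions, not figures from the thread.

```python
import random

random.seed(42)

def pareto(alpha: float) -> float:
    # Inverse-CDF sampling of a Pareto(alpha) distribution with x_min = 1.
    return (1.0 - random.random()) ** (-1.0 / alpha)

# Simulate 100 mentored students' (hypothetical) outcome multiples.
outcomes = sorted((pareto(1.16) for _ in range(100)), reverse=True)
total = sum(outcomes)
top_5_share = sum(outcomes[:5]) / total
print(f"Top 5 of 100 students account for {top_5_share:.0%} of total returns")
```

Under a distribution like this, a few outliers dominate the portfolio, which is exactly why a fixed-repayment (debt) model would cap the upside that makes the whole scheme pencil out.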


It is a long-term investment, but it doesn’t have to be decades before there’s liquidity. Teachers could sell portions of their equity along the way through secondary sales.

For example, if a student shows strong potential - say they ship a prototype that gains traction online - new investors may want to back them. At that point, the teacher can sell some of her shares to those investors (with the student’s approval), realizing value earlier while still staying aligned with the student’s long-term success.



