> I disagree, it's entirely about safety. Homomorphic Encryption allows a future for us to control our data.
It's true that homomorphic encryption techniques can be used in the ways you describe, but this specific application is not about safety, and it's somewhat absurd that it's pitched as a way to shield the world from the Terminator.
It's also pretty dubious to me that this actually protects the principal value of an ML system:
1. This approach doesn't really conceal the structure of the underlying ML system, which is where a lot of the recent advances have been. While it conceals some aspects of the model (e.g., the exact weights), I don't think it conceals all of it; the shape of the computation itself can give a lot away.
2. The most expensive part of building an ML system is getting and wrangling great data to train it on, and if you deployed an ML agent in an untrusted environment, an adversary could harvest its inputs and outputs and recover something that resembles that data (see the sketch after this list).
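To make point 2 concrete, here's a minimal, hypothetical sketch of the extraction problem: even if the served model's weights are homomorphically encrypted, a client in an untrusted environment still sees plaintext queries and decrypted answers, which is enough to distill a surrogate. `query_encrypted_model` is a stand-in for the HE-protected service, not a real API.

```python
# Assumption: the attacker can only query the service as a black box;
# the encrypted weights are never exposed.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def query_encrypted_model(x):
    # Placeholder for the HE-protected model: the attacker never sees
    # the weights, only the decrypted predictions it returns.
    return np.sin(x).ravel()

# The attacker harvests (input, output) pairs just by using the service.
X = rng.uniform(-3, 3, size=(2000, 1))
y = query_encrypted_model(X)

# ...then trains a surrogate that approximates the "protected" model,
# capturing much of what the expensive training data taught it.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000)
surrogate.fit(X, y)
print("surrogate R^2 on harvested queries:", surrogate.score(X, y))
```

The encryption never has to be broken: ordinary use of the service leaks the input/output behavior, which is the part that was expensive to produce.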
I think this is really cool math sold in the wrong way.