Hacker News

This is basically DRM for deep learning. While that may be useful, it's not about safety.

The big problems involved in building safe AI are about predicting consequences of actions. (The deep learning automatic driving systems which go directly from vision to steering commands don't do that at all. They're just mimicking a human driver. There's no explicit world model. That's scary.)



This is not DRM. This is homomorphic encryption. There is a difference.

In a system with DRM, the data is kept secret from users of the system by managing the rights to what data those users can access. Example: when you play a DVD, the key to decrypt the contents does exist on the system, but rules are in place to make accessing the key outside of accepted practices, like decoding the frames of the video, hard. The key still exists on the local system, it can be extracted, and once you extract it you have full access to the data regardless of the DRM's restrictions.

In a system performing homomorphic encryption, the data is kept secret from other users by never decrypting the data. Homomorphic Encryption would add two encrypted numbers together and the result would be a third encrypted number. If you don't have the key you cannot decrypt any of the three values. The key does not exist on the local system.
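To make the "add two encrypted numbers" property concrete, here is a toy sketch of the Paillier cryptosystem, which is additively homomorphic: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts. The primes here are deliberately tiny and completely insecure; a real deployment needs large random primes.

```python
# Toy Paillier sketch (additively homomorphic). Illustrative only:
# the hardcoded primes are far too small for any real security.
import math
import random

p, q = 293, 433            # toy primes
n = p * q
n2 = n * n
g = n + 1                  # standard simple choice of generator
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

# mu = (L(g^lam mod n^2))^-1 mod n, precomputed for decryption
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
c1, c2 = encrypt(20), encrypt(22)
c3 = (c1 * c2) % n2
assert decrypt(c3) == 42
```

Note that whoever computes `c3` never sees 20, 22, or 42: without `p` and `q` (and hence `lam` and `mu`), none of the three ciphertexts can be decrypted, which is exactly the property the comment above describes.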

Homomorphic Encryption is not DRM. DRM is invasive and requires you to surrender control of parts of your system to another party, while Homomorphic Encryption is just a computation and can be performed on an unmodified system.

>While that may be useful, it's not about safety.

I disagree; it's entirely about safety. Homomorphic Encryption makes possible a future in which we control our own data. I could submit my encrypted health information to a third party. They could perform homomorphic calculations on my encrypted data and then return the encrypted results to me. The third party is never privy to my unencrypted health information, and only the people I have given the key to can decrypt and view the results.


> I disagree, it's entirely about safety. Homomorphic Encryption allows a future for us to control our data.

It's true that homomorphic encryption techniques can be used in the ways you describe, but this specific application is not about safety, and it's somewhat absurd that it's proposed as some sort of way to shield the world from the Terminator.

It's even pretty dubious to me that this actually protects the principal value of an ML system:

1. This approach doesn't really conceal the structure of the underlying ML system very well, and that structure is where a lot of the recent advances have been. While this conceals some aspects of the model, I don't think it conceals all of it.

2. The most expensive part of building ML systems is getting and wrangling great data to train them on, and if you were to run an ML agent in an untrusted environment, observers would still get something that resembles that data.

I think this is really cool math sold in the wrong way.


It's a way for someone to run a trained network on their own machine without being able to extract the parameters of the network. That's DRM.


Deep learning automatic driving systems that want to work have to have world models.

It's Automatic Driving 101.

These models don't have to be as explicit as formulas; they can be approximations of reality built from beam search (holding multiple steering hypotheses at once and then picking the most likely one), model ensembles, some Bayesian state exploration, or anything that isn't random search.
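The beam-search idea above can be sketched as follows. Everything here is a made-up illustration: `cost_of` is a stand-in for whatever the real system uses to score a hypothesis, and the candidate angles are arbitrary.

```python
# Hypothetical beam search over steering hypotheses: at each step, keep
# the k lowest-cost partial trajectories instead of committing to one.

def cost_of(angle, step):
    # Toy cost model (an assumption, not from any real driving stack):
    # prefer angles near a target that drifts over time.
    target = 0.1 * step
    return (angle - target) ** 2

def beam_search(candidate_angles, horizon, k=3):
    beam = [([], 0.0)]                 # (trajectory so far, cumulative cost)
    for step in range(horizon):
        expanded = [
            (traj + [a], cost + cost_of(a, step))
            for traj, cost in beam
            for a in candidate_angles
        ]
        # Keep only the k best hypotheses for the next step.
        beam = sorted(expanded, key=lambda t: t[1])[:k]
    return beam[0][0]                  # lowest-cost steering plan

plan = beam_search([-0.2, -0.1, 0.0, 0.1, 0.2], horizon=4)
```

The point of the beam is that several competing hypotheses survive each step, so an option that looks slightly worse now can still win later; a purely greedy controller would throw that information away.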


If that model is inaccessible to the people who are trying to assure safety, then whether it exists or not isn't particularly material when it comes to safety.



I am specifically talking about the situation of trying to verify that a self driving car has a reasonable model of the world, i.e. it won't fail in a spectacular way in a situation a human would handle properly. Right now I don't know how to show this even without complicating it with ZKPs, verifiable computing or homomorphic encryption.

Really cool work, I just don't see how it has anything to do with the safety of the AI.


Are you saying that from experience or speculation?


The same way that crypto is DRM for personal communiqués. Safety as in "what information will we let be stolen" means safety for opsec.

Safety as in "only does what you want it to do" - correctness - is a wholly different discussion.


Opsec safety is not the biggest problem with autonomous driving. It is a secondary problem at best and one that can be addressed (though certainly there is room for improvement) using normal security techniques.

Correctness is the biggest problem with AI safety. Note that "adversarial ML attacks" fall under correctness.
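To illustrate why adversarial examples are a correctness problem rather than an opsec one, here is a hypothetical FGSM-style perturbation against a toy linear classifier. The weights, input, and step size are all made up for illustration.

```python
# Toy FGSM-style adversarial example. The model and numbers are invented:
# a linear classifier that predicts the sign of w . x.
import numpy as np

w = np.array([1.0, -2.0, 0.5])      # toy model weights
x = np.array([0.3, 0.1, 0.4])       # input, correctly classified positive

eps = 0.3
# FGSM: step each coordinate against the sign of the score's gradient.
# For a linear score w . x, that gradient is simply w.
x_adv = x - eps * np.sign(w)

print(np.dot(w, x))      # positive: original class
print(np.dot(w, x_adv))  # sign flips: same-looking input, wrong answer
```

The perturbed input is close to the original, yet the model's answer changes; that is an error of the model on a valid input, i.e. a correctness failure, not a leak of secret information.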


Is the assertion that adversarial ML is a subset of correctness widely considered canonical?

That appears counterintuitive because so many ML techniques seem (for lack of a technically defined term) tautological. For example, a big hairy random-forest classifier can maybe be gamed in certain cases, but in what sense is it not "correct"? After all, it is its own definition.


Yes. I'm not sure what point you're trying to make about tautology. Adversarial ML examples are clearly errors if they were part of the test set, and part of "correctness" is reducing the error rate (correct model, correct parameters, etc.).


Well, if you want to run an AI on someone's computer, and be sure they don't know what it's doing - that's safety.



