Not OP, but I too am confused. I understood the sketch artist analogy but that didn't seem related to this point:
>They also trained the model with non-black-hole images. Since the output of the model was approximately the same, this indicates that the resulting output picture doesn't look like what we think a black hole looks like just because it was trained with black hole images. It likely really looks like that.
If you are feeding non-black-hole images in and getting black-hole results out, wouldn't that be indicative of an overtrained model? Her other analogy was that we can't rule out that there is an elephant at the center of the galaxy, but it sounds like if you feed a picture of an elephant in you'll get a picture of a black hole out?
This is exactly the check that ensures the model is not overtrained.
They also showed that when they fed in simulated sparse measurements based on real full images of generic things, they got back fuzzy versions of the real image. [1] So if you put in a sparsely captured elephant (if for instance there was one at the center of the galaxy) you'd get an image of the elephant out, not this black hole.
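Not their actual pipeline, and the ring image and 3% sampling rate below are made up purely for illustration, but a toy Python sketch of that kind of test could look like this: sample a known image sparsely, then see whether a fuzzy version of it comes back out.

```python
# Toy sketch (not the EHT pipeline) of the test described above:
# take a full image, keep only a sparse subset of "measurements",
# and reconstruct a fuzzy version from those samples alone.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)

# Synthetic "ground truth" image: a bright ring on a dark background.
n = 128
y, x = np.mgrid[0:n, 0:n]
r = np.hypot(x - n / 2, y - n / 2)
truth = np.exp(-((r - 30) ** 2) / 40.0)

# Keep only ~3% of the pixels, mimicking sparse sensor coverage.
mask = rng.random((n, n)) < 0.03
points = np.argwhere(mask)               # (row, col) of sampled pixels
values = truth[mask]                     # measured values at those pixels

# Reconstruct by interpolating between the sparse samples.
grid = np.argwhere(np.ones((n, n), dtype=bool))
recon = griddata(points, values, grid, method="linear", fill_value=0.0)
recon = recon.reshape(n, n)

# The reconstruction is a blurrier version of the truth, but the ring
# (or an elephant, if that had been the truth) is still recognizable.
print("correlation with truth:", np.corrcoef(truth.ravel(), recon.ravel())[0, 1])
```

If you swap the ring for any other ground-truth image, the reconstruction follows it; nothing in the reconstruction step knows or cares what the subject is.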
To complete the artist analogy, imagine that the suspect being drawn by each artist is some stereotypical American. The description given to the artists doesn't say that, it just describes how the person looks. One of the three sketch artists is American and the others are Chinese and Ethiopian.
If the American draws a stereotypical American, how can you be sure that the drawing is accurate and that's not just what he assumed the person would look like because everyone he has ever seen looks like that?
You look at what the other two draw. If they both draw the same stereotypical American, even though they have no knowledge of what a stereotypical American looks like, you can be pretty sure that they determined that based on the description provided to them. The actual data.
They did still likely utilize some of their knowledge about what humans in general look like, though. This is analogous to how the model uses its training on what a generic image looks like. For instance, several sparse pixels of the same value are likely to have pixels of that same value between them. The model puts rules like this together and spits out a picture of what we think a black hole looks like even though it's never seen a black hole before.
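A toy version of that "generic image prior", assuming nothing fancier than "nearby pixels tend to share values": repeatedly replace every unmeasured pixel with the average of its neighbors while pinning the measured pixels to their data.

```python
# Toy "generic image prior": unknown pixels are repeatedly replaced by the
# average of their neighbors, while measured pixels stay pinned to their
# values. No black-hole-specific knowledge anywhere, just the assumption
# that nearby pixels tend to be similar.
import numpy as np

def fill_by_neighbor_average(measured, mask, iters=500):
    """measured: 2D array (values only valid where mask is True)."""
    img = np.where(mask, measured, measured[mask].mean())  # crude init
    for _ in range(iters):
        # Average of the four neighbors (np.roll wraps at the edges,
        # which is fine for a toy demo).
        avg = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
        img = np.where(mask, measured, avg)  # keep the real data fixed
    return img
```

Run on sparse samples like the ones above, this converges to a smooth interpolation that honors the measurements exactly; it knows nothing about black holes, only that images tend to vary gradually.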
From what I understand, the training input images are just to establish the relationship between sparse data points and the full image, regardless of subject matter. Since they were getting the black hole picture out of the model regardless of what it was trained on, it's likely that the model was producing an accurate picture of whatever the "camera" was pointed at. If they had pointed it at an elephant, the model would have produced a picture of something elephant-like, because it was reconstructing a reasonably accurate full image from sparse data points.
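As a hedged illustration of that point (the fixed sampling mask, blobby training images, and ridge-regression "model" here are stand-ins I made up, not the real method): train a sparse-to-full mapping on generic smooth images only, then feed it sparse measurements of a ring it has never seen.

```python
# Toy sketch: learn a linear map from sparse measurements to full images
# using only generic blobby images, then hand it sparse measurements of a
# ring it has never seen. The output should be a fuzzy ring, not a blob.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
n = 32                                   # small image so the math stays tiny
mask = rng.random(n * n) < 0.15          # fixed sparse sampling pattern

# Training set: smoothed random noise, i.e. "generic" images with the usual
# nearby-pixels-are-similar structure, nothing black-hole shaped.
def random_blob():
    return gaussian_filter(rng.standard_normal((n, n)), sigma=3).ravel()

X = np.stack([random_blob() for _ in range(2000)])   # full images
S = X[:, mask]                                       # their sparse versions

# Ridge regression: full_image ~= sparse @ W, learned from generic images only.
lam = 1e-2
W = np.linalg.solve(S.T @ S + lam * np.eye(S.shape[1]), S.T @ X)

# Test image the model never saw during training: a ring.
y, x = np.mgrid[0:n, 0:n]
ring = np.exp(-((np.hypot(x - n / 2, y - n / 2) - 8) ** 2) / 6.0).ravel()
recon = ring[mask] @ W

print("correlation with the unseen ring:", np.corrcoef(ring, recon)[0, 1])
```

The output is a fuzzy ring, because the model only learned generic relationships between sparse samples and full images; the ring shape comes from the data, not from the training subjects.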
Probably not with random noise. With random noise there is literally no connection between pixels. With any actual picture there are connections: for instance, a pixel is more likely to be close to the value of its neighbor than to take some arbitrary value. This follows from the fact that the pictures are of actual objects with physical properties that determine the values of the pixels that map to them. Most of an image can be characterized by continuous gradients with occasional edges.
I think if you trained with random noise you would get random noise output.
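That claim is easy to check numerically; here's a quick sketch (the smoothed-noise image is just a stand-in for a photo of a real object):

```python
# Quick numerical check: neighboring pixels in a "real-ish" image are
# strongly correlated, while in random noise they are not.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
noise = rng.standard_normal((256, 256))
smooth = gaussian_filter(noise, sigma=4)   # stand-in for a photo of a real object

def neighbor_corr(img):
    a = img[:, :-1].ravel()               # each pixel...
    b = img[:, 1:].ravel()                # ...and its right-hand neighbor
    return np.corrcoef(a, b)[0, 1]

print("noise :", round(neighbor_corr(noise), 3))   # roughly 0
print("smooth:", round(neighbor_corr(smooth), 3))  # close to 1
```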
They're not just training the model to make pictures from nothing. They're training the model to make pictures from an input.
So I assume they're simulating what the input would look like for, say, a planet or asteroid or elephant or whatever, as viewed through the relevant type of sensor system. Then when they feed in the black hole sensor data, they get pictures that look like the black holes we imagined, even though we never told the model what a black hole looks like.
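For flavor (and glossing over a lot): an interferometer like the EHT effectively measures sparse samples of the image's 2-D Fourier transform, so a crude toy of that sensor model, plus the dumbest possible reconstruction (zero-filling the unmeasured frequencies), looks like this. The real algorithms do far better by bringing in the generic-image prior discussed above.

```python
# Toy "viewed through the relevant type of sensor system" sketch:
# keep a sparse random subset of the image's Fourier components, then
# naively invert with the missing frequencies set to zero.
import numpy as np

rng = np.random.default_rng(0)
n = 128
y, x = np.mgrid[0:n, 0:n]

# Pretend the true sky is a bright ring (swap in an elephant if you like).
truth = np.exp(-((np.hypot(x - n / 2, y - n / 2) - 25) ** 2) / 30.0)

# "Sensor": keep only a sparse random set of Fourier components.
F = np.fft.fft2(truth)
keep = rng.random((n, n)) < 0.05
measured = np.where(keep, F, 0)

# Naive reconstruction: inverse transform of whatever we measured.
recon = np.real(np.fft.ifft2(measured))

# Fuzzy and artifact-ridden, but it follows the data, not our expectations.
print("correlation with truth:", np.corrcoef(truth.ravel(), recon.ravel())[0, 1])
```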