The question doesn't quite make sense to me, because it (meaning a neural network or any other ML computer-vision classifier) has no mechanism for being trustworthy. It's looking for shortcuts with predictive power; it isn't reasoning, it has no world model, it's an equation that mostly works, and all the other things we know about ML. And it's not just about validation-set performance: change the lighting, swap the camera, show it an unusual mole shape, and you can suddenly get completely different results. It can't be "trusted" the way a person can, even if that person is less accurate.
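To make that concrete, here's a minimal sketch of the brittleness I mean, using scikit-learn's toy digits dataset as a stand-in (not medical data, and not anyone's actual system): a classifier that looks great on a held-out set from the same pipeline typically does much worse when the inputs get a simple global brightness shift.

```python
# Toy illustration of distribution shift: a model that scores well on
# held-out data from the same pipeline can degrade under a simple
# "different lighting / different camera" style change.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # 8x8 grayscale images, pixel values 0-16
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

clf = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("clean test accuracy:", clf.score(X_test, y_test))

# Simulate a brightness change: add a constant offset and clip to the
# valid range. The digits themselves haven't changed at all.
X_shifted = np.clip(X_test + 6.0, 0, 16)
print("brightness-shifted accuracy:", clf.score(X_shifted, y_test))
```

The point isn't the exact numbers; it's that nothing in the validation score warned you this would happen, because the model only knows the statistics of the images it was trained on.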
These limitations are often acceptable, but as long as it works the way it does, denying someone an actual person looking at them in favor of a statistical stereotype should be the last thing we do.
I could see it if this were in a third-world country and the alternative were nothing, but in the developed world the alternative is less profit or fewer administrators. We should strongly reject outsourcing the actual medical-care part of healthcare to AI as an efficiency measure.
I understood that you don't believe it can be made reliable. But my question was: what if it were?
Let me put it differently. Suppose I don't tell you it's ML. It's a machine whose inner workings you don't know, but I let you run all the tests you want, and the results turn out to be great. Would you still be against it?