I agreed with most points in this article, but you lost me at the security rating part and at advocating for Counterfit. Can a universally accepted "security score" really capture the nuances of vulnerabilities in PyTorch models?
And as AI models become increasingly complex, will our security measures be able to keep pace, or will they always be one step behind? Counterfit is Microsoft-managed, and given their track record, I think any security measure or rating that's widely accepted should instead come from an open-source, widely maintained security library.
I'm not sure that exists at all currently. If it does, though, I'd love to donate, and I'm sure others would too.