from the "cybersecurity implications" conclusion section at the end of the Anthropic report:
> This campaign demonstrates that the barriers to performing sophisticated cyberattacks have dropped substantially—and we can predict that they’ll continue to do so.
this is the point. maybe it's not some novel threat, but if it makes it easier for greater numbers of people to actually carry out sophisticated attacks without the discipline that comes from having worked for that knowledge, then maybe it's a real problem. i think this is true of ai (when it works!) in general, though.
Every time this argument is brought up, it reminds me of "cancel culture".
Argument: X is good for Z but makes it easier to commit Y, so we must ban/limit X.
What happens in reality: X is banned, and those who actually want to use it to do Y still find a way to use X. Meanwhile, society is deprived of all the Z.
In this case, though, banning X takes away a lot of the funding that makes X possible or lets it keep improving. Sure, X-1 will continue to exist in perpetuity, but it will be frozen, which allows society to catch up and mitigate Y more effectively.
EDIT: never mind the fact that being able to do Z is not at all a fair trade for getting X. But that’s just me.
in this case, a company that develops X is actively investing in understanding the Y problem and sharing its findings with the general public, toward developing an X that doesn't have a Y problem?
Anthropic fairly consistently advocates for the same broad approach to such problems: have the government tightly regulate AI. It is, of course, a pure coincidence that this is exactly the approach that would kill off any open competition and consolidate the market around a few established companies that can afford to deal with such regulatory frameworks, Anthropic being one of them.