I don't see this event as particularly alarming or surprising, since it's rather similar to using a search engine for "most vulnerable US infrastructure" and finding documents where lots of people have already written words saying it's very vulnerable.
What does worry me is the rosy, hand-wavy promise from OpenAI and others that LLMs can really be made "ethical" or "safe" in a way that reliably prevents this kind of thing without a loss in overall utility.