Given that OpenAI is doing business with the US military, it makes perfect sense that they would try to normalize militaristic uses of their technology. Everybody already knows they're doing it, so now they just need to keep talking about it as something increasingly normal. Promoting uses that are only vaguely military is a way of soft-pedaling this shift.
If something is banal enough to be used as an ordinary example in a press release, then obviously anybody opposed to it must be an out-of-touch weirdo, right?
No, it wasn't chosen at random -- it had to be a question that any reasonable person would immediately recognize as harmless, but one where the old model would inject a bunch of safety caveats and the new model would not.
What better example would you suggest for a demonstration of an actually-harmless question which sits close enough to the guardrails that the previous model would have stuttered over it?