Yes. It is not incredibly rare, it's incredibly common. A huge percentage of queries to retail LLMs are things like "hello" and "what can you do", with static system prompts that make the total context identical.
It's worth maybe a 3% reduction in GPU usage. For a medium to large service, call it half a billion dollars a year or so — which implies a service spending on the order of $15–20 billion a year on inference GPUs.
Even if that were the case you wouldn't be wrong. Adding caching and deduplication (and clever routing and sharding, and ...) on top of timesharing doesn't somehow make it not timesharing anymore. The core observation about the raw numbers still applies.
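The exact-match case being described — identical static system prompt plus an identical short query — can be sketched as a thin deduplicating cache in front of the model. This is a minimal illustration, not any real serving stack: `backend` is a stand-in for the actual inference call, and `ResponseCache` is a hypothetical name.

```python
import hashlib

def cache_key(system_prompt: str, user_message: str) -> str:
    # Identical system prompt + identical query => identical total context,
    # so a hash of the two is enough to deduplicate the request.
    h = hashlib.sha256()
    h.update(system_prompt.encode("utf-8"))
    h.update(b"\x00")  # separator so ("ab", "c") hashes differently from ("a", "bc")
    h.update(user_message.encode("utf-8"))
    return h.hexdigest()

class ResponseCache:
    """Exact-match response cache in front of a (stand-in) LLM backend."""

    def __init__(self, backend):
        self.backend = backend  # callable: (system_prompt, user_message) -> str
        self.store = {}
        self.hits = 0
        self.misses = 0

    def complete(self, system_prompt: str, user_message: str) -> str:
        key = cache_key(system_prompt, user_message)
        if key in self.store:
            self.hits += 1          # "hello" for the millionth time: no GPU touched
            return self.store[key]
        self.misses += 1
        out = self.backend(system_prompt, user_message)
        self.store[key] = out
        return out
```

In practice the win is usually taken lower down — caching the KV state of the shared prompt prefix rather than whole responses — but the effect on the raw numbers is the same: repeated identical contexts cost almost nothing the second time.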
I mean, I get the idea, but it sounds so incredibly rare that it would mean absolutely nothing optimisation-wise.