I called it; it was, indeed, bad behavior from miners. Didn't expect poor storage conditions to be the trigger, though. Probably a combination of that and the extreme stress of overclocking.
Miners do overclock, but they also undervolt when they can. Mining presents an atypical load: it limits what you can do with undervolting, but it also sometimes allows ridiculous overclocks that would instantly crash under a 3D workload, without increasing power usage much. Most often, a card will not be stable when undervolted while mining (or under any similar 100%-load compute work that doesn't depend on VRAM), even though it would be stable playing a game.
That said, anyone who makes this argument doesn't understand TCO. You pay, say, $1000 per GPU and fit maybe 8 GPUs onto a motherboard. You spend as little as possible to populate the board (possibly a used mobo + CPU + RAM; getting this under $300 is a winner), then add PSUs (don't skimp here; mining with several GPUs in a system destroys garbage PSUs, so you could be spending $200+).
That means a rig could be costing you $8.5-9k. It would be drawing about 3000 watts and could live about 25,000 hours before parts begin to break down. If you're mining, you've found somewhere to get power for 10 cents a kWh or cheaper, and 3 kW over 25,000 hours is 75,000 kWh, which works out to about $7.5k.
So, let's say this ends up being a $16-17k TCO over those 25,000 hours. Overclocking does not increase the TCO as much as the raw power numbers suggest: say you get 15% more coins for 20% more power; since power is only about half your TCO, that's roughly 10% more TCO. Undervolting does not have the same sort of flexibility overclocking does; you may only achieve a 5% power reduction (again, about half of that hits TCO, so roughly 2.5% in this example), and it may deny you an overclock at all.
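To make that arithmetic concrete, here's a rough back-of-the-envelope sketch in Python. The part prices, 3 kW draw, 25,000-hour lifetime, and $0.10/kWh rate are just the illustrative numbers from above, not measurements:

    # Back-of-the-envelope rig TCO, using the illustrative numbers above.
    # Every figure here is an assumption for the example, not measured data.

    GPU_PRICE = 1000          # $ per GPU
    GPU_COUNT = 8
    BOARD_CPU_RAM = 300       # cheap used mobo + CPU + RAM
    PSU_COST = 200            # don't skimp on PSUs

    POWER_DRAW_KW = 3.0       # whole-rig draw
    LIFETIME_HOURS = 25_000   # before parts start breaking down
    POWER_PRICE = 0.10        # $ per kWh

    hardware = GPU_PRICE * GPU_COUNT + BOARD_CPU_RAM + PSU_COST  # ~$8,500
    power_cost = POWER_DRAW_KW * LIFETIME_HOURS * POWER_PRICE    # 75,000 kWh -> ~$7,500
    tco = hardware + power_cost                                  # ~$16,000
    power_share = power_cost / tco                               # roughly half

    # Overclock: +15% coins for +20% power.
    oc_extra_tco = power_cost * 0.20 / tco                       # ~9-10% more TCO

    # Undervolt: -5% power, but maybe no overclock.
    uv_saved_tco = power_cost * 0.05 / tco                       # ~2-3% off TCO

    print(f"TCO ${tco:,.0f}, power is {power_share:.0%} of it")
    print(f"Overclock: +{oc_extra_tco:.1%} TCO for +15% coins")
    print(f"Undervolt: -{uv_saved_tco:.1%} TCO, possibly no overclock")

Run it and you get roughly $16k TCO with power at about half of it, around 9-10% more TCO for the overclock and 2-3% less for the undervolt, which is where the percentages above come from.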
Also, every card is different. Some may overclock better, some may undervolt better, some may allow a good combination of both. Miners will generally adapt to the situation as needed.