Yes, like renewable energy infrastructure (which China is building out, and which would be highly useful anyway in case generative AI does live up to its promise).
Even if generative AI lives up to its hype, with the current US administration there's no way America is going to lead the race for long. There's just not enough energy available when those in power oppose developing many of the energy projects that make the most economic sense.
You can already buy cheap but powerful old servers. But newer hardware tends to be more power efficient, so depending on the time horizon you consider, it might work out cheaper to buy newer hardware once electricity costs are factored in.
Assuming that GPU power efficiency keeps increasing, the same will be true of them.
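As a rough illustration of that trade-off, here's a minimal total-cost-of-ownership sketch. All the prices, wattages, and the electricity rate are made-up placeholders; plug in real figures for your own comparison:

    # Compare an older, cheaper server against a newer, more
    # power-efficient one. Every number here is a hypothetical
    # placeholder, not a real market price.

    def tco(purchase_price: float, watts: float, years: float,
            kwh_price: float = 0.15) -> float:
        """Purchase price plus electricity cost over the given period."""
        hours = years * 24 * 365
        return purchase_price + (watts / 1000) * hours * kwh_price

    # Hypothetical figures: a $500 used server drawing 400 W vs. a
    # $2000 newer one drawing 150 W, both running 24/7.
    for years in (1, 3, 5, 8):
        old = tco(500, 400, years)
        new = tco(2000, 150, years)
        print(f"{years} yr: old ${old:,.0f}  new ${new:,.0f}  "
              f"({'new wins' if new < old else 'old wins'})")

With these particular placeholder numbers the old box wins for short horizons and the newer one pulls ahead around the five-year mark; the real break-even point depends entirely on local electricity prices and duty cycle.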
The one truly hopeful aspect of a bubble-burst scenario is that extra capacity always finds a use, and in this case practically any other use would be both less harmful and more real.
Before the AI craze, all the GPUs were being bought up by cryptocurrency miners, and I'm not sure that's better. Even as an AI skeptic I think AI is a better use of all this hardware than cryptocurrency.
You're right, of course; I'm taking for granted that we've been there and done that. The question is, would we do it again, and would that usage really be allowed to expand to soak up whole data centers? We might, since I don't think a cynic has made a bad prediction in years.
IMO the most likely way to soak up the extra capacity is actually yet another iteration of AI, rather than, say, doing productive but boring work applying other techniques to curing disease or something. Still, a crash and a next iteration might be more likely to involve fresh ideas on architecture, or a focus on smaller expert models that produce fewer fake results and actually empower users. Right now I think there's a clear bias in research and execution. OFANG does want results, but also wants results that favor tech giants. Are subsymbolic techniques really the best techniques, or are they just the best at preserving the moat of big data and big compute?