NVIDIA's crash is not about OpenAI's moat. It's about DeepSeek cutting training costs from billions to millions (and with even better performance). That means cheaper training, but it also proves that OpenAI was not at the cutting edge of what is possible in training algorithms, and that there are still huge gaps and potential disruptions in this area. So there is far less need to scale by pumping in more and more GPUs; the smarter play is to invest in research that cuts costs. More gaps mean more room to cut costs, and less need to buy GPUs to improve model quality.
For NVIDIA, that means today's GPUs are good enough for a long time, and people will invest a lot less in them and a lot more in research like this to cut costs. (But I am sure they will be fine.)