Perhaps they mean there's less wealth to be extracted from the closed-source training side of the equation, which requires huge capital investment and promises even bigger returns by gatekeeping the technology.
Many of the aspects being discussed are disconnected: cost of training, cost of hardware (and the margins there), cost of operation, possible use cases, and then finally demand.
Cheaper training still presumes there is some use case for the trained models. There might or might not be. It may well be that the cost of training was never what limited the number of usable models.