Good for training, but definitely a bad idea for inference. If you're spending that much money, though, why not just buy the equivalent in GPUs? You could get ten 12 GB 3060s for that price.
For LLM developers, is there really no advantage to having a big block of unified memory, rather than a bunch of devices with a small amount of memory each?
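There is a real tradeoff here: ten cards give you more aggregate VRAM, but that memory isn't pooled, so any model bigger than one card has to be sharded and its activations cross PCIe every token. A rough back-of-the-envelope sketch (assuming fp16 weights at 2 bytes/param, a hypothetical 34B model, and ignoring KV cache, activations, and framework overhead, so real requirements are higher):

```python
GIB = 1024**3

def weight_bytes(n_params: int, bytes_per_param: int = 2) -> int:
    """Memory needed just to hold the weights (fp16 assumed)."""
    return n_params * bytes_per_param

model = weight_bytes(34_000_000_000)   # ~63 GiB of weights

one_3060  = 12 * GIB        # a single 12 GB card
ten_3060s = 10 * one_3060   # ten cards -- aggregate only, not pooled:
                            # the model must be split across them, and
                            # every token's activations cross PCIe at
                            # each split boundary
unified   = 128 * GIB       # one unified-memory block

print(model <= one_3060)    # doesn't fit on any single card
print(model <= ten_3060s)   # fits in aggregate, but only sharded
print(model <= unified)     # fits whole, no sharding, no interconnect hop
```

So the unified block runs the model in one address space at full memory bandwidth, while the multi-GPU setup needs tensor or pipeline parallelism and pays an interconnect cost per token, which is why the question isn't settled by raw GB-per-dollar alone.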