Are there cost savings over traditional GPU workloads?
When will GKE support TPUs? And will there be preemptible instances?
Whether there are cost savings depends on your workload. If you are training very large models, a TPU may be cost-effective, but honestly most tasks I have seen would have been fine on a CPU.
The pricing page lists preemptible instances: https://cloud.google.com/tpu/pricing#single-device-pricing
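As a rough way to compare the options, you can estimate total run cost from an hourly rate and expected wall-clock time, padding preemptible runs for time lost to preemptions. A minimal sketch — the rates and overhead below are placeholders for illustration, not current prices; check the pricing page:

```python
def training_cost(hourly_rate, hours, restart_overhead=0.0):
    """Estimated cost of a training run.

    restart_overhead: extra fraction of wall-clock time assumed lost to
    preemptions and checkpoint restarts (preemptible instances can be
    reclaimed at any time, so runs must tolerate restarts).
    """
    return hourly_rate * hours * (1.0 + restart_overhead)

# Hypothetical rates for illustration only -- see the pricing page.
on_demand = training_cost(hourly_rate=4.50, hours=100)
preemptible = training_cost(hourly_rate=1.35, hours=100, restart_overhead=0.2)

print(f"on-demand:   ${on_demand:.2f}")    # $450.00
print(f"preemptible: ${preemptible:.2f}")  # $162.00
```

Even with a generous restart penalty, preemptible capacity usually wins on price; the trade-off is that your training job must checkpoint and resume cleanly.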