
Do existing PyTorch models work out of the box with cloud TPU, or does it require some tweaking?

Are there cost savings over traditional GPU workloads?

When will GKE support TPU? And will there be preemptible instances?



Last time I checked, you had to change a few lines to use torch_xla.

Whether there are cost savings depends on your workload. If you are training huge models, it might be cost-effective, but honestly most tasks I have seen would have been fine on a CPU.

The pricing page lists preemptible instances: https://cloud.google.com/tpu/pricing#single-device-pricing
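For reference, the "few lines" are typically the device selection and the optimizer step. This is a hedged sketch, not official guidance: it assumes a recent torch_xla release where `xm.xla_device()` and `xm.optimizer_step()` exist, and it falls back to plain CPU strings when torch_xla is not installed so the pattern is visible either way.

```python
# Minimal sketch of the torch_xla changes to a PyTorch training loop.
# Assumption: torch_xla is installed on a TPU VM; otherwise we fall
# back to a plain "cpu" device string purely for illustration.
try:
    import torch_xla.core.xla_model as xm

    device = xm.xla_device()           # replaces torch.device("cuda")

    def step(optimizer):
        # xm.optimizer_step also triggers the XLA graph execution
        xm.optimizer_step(optimizer)
except ImportError:
    device = "cpu"                     # no torch_xla available

    def step(optimizer):
        optimizer.step()               # the usual eager-mode step
```

The rest of the training loop (model definition, loss, backward pass) stays unchanged; you move the model and batches to `device` as you would with CUDA.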



