
The MLPerf 1.0 results provided an apples-to-apples comparison of large-scale TPU and GPU systems across several ML workloads: https://cloud.google.com/blog/products/ai-machine-learning/g...

In MLPerf 1.1, we showcased model training at larger scale: https://cloud.google.com/blog/topics/tpus/google-showcases-c...

The deep learning workloads that people find most interesting, and the underlying hardware and software systems, are all changing very rapidly. In addition to following MLPerf, we generally recommend running rigorous performance and cost comparisons on the actual workloads you care about accelerating.
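A minimal sketch of such a comparison: time the same training step on each system and convert the result into dollars per step using the instance's hourly price. The `dummy_step` workload and the $8/hr rate below are placeholders, not real measurements; swap in your actual training step and the on-demand price of the accelerator you are evaluating.

```python
import time

def benchmark(step_fn, warmup=3, iters=10):
    """Average seconds per call of step_fn, after a few warmup runs
    (warmup absorbs one-time costs like compilation and cache fills)."""
    for _ in range(warmup):
        step_fn()
    start = time.perf_counter()
    for _ in range(iters):
        step_fn()
    return (time.perf_counter() - start) / iters

def cost_per_step(seconds_per_step, hourly_price_usd):
    """Dollar cost of a single step at a given hourly instance price."""
    return seconds_per_step * hourly_price_usd / 3600.0

# Placeholder workload; replace with one real training step per system.
def dummy_step():
    sum(i * i for i in range(10_000))

sec = benchmark(dummy_step)
print(f"{sec:.6f} s/step; ${cost_per_step(sec, 8.0):.8f}/step at $8/hr")
```

Comparing both throughput and cost per step (rather than raw speed alone) is what makes the TPU-vs-GPU decision concrete for a specific workload.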
