The deep learning workloads that people find most interesting and the underlying hardware and software systems are all changing very rapidly. In addition to following MLPerf, we generally recommend that people run rigorous performance and cost comparisons on the actual workloads that they care about accelerating.
In MLPerf 1.1, we showcased model training at larger scale: https://cloud.google.com/blog/topics/tpus/google-showcases-c...
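A minimal sketch of what such a comparison harness might look like, using only the standard library. The workload, warmup/trial counts, and hourly instance rate below are all placeholders, not anything specific to MLPerf or TPUs; you would substitute your own training step and your provider's pricing:

```python
import statistics
import time

def benchmark(fn, *, warmup=3, trials=10):
    """Time a workload: warm up first, then return the median of several trials."""
    for _ in range(warmup):
        fn()  # warmup runs excluded from timing (caches, JIT, etc.)
    times = []
    for _ in range(trials):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return statistics.median(times)

def cost_per_run(median_seconds, hourly_rate_usd):
    """Convert wall-clock time into a dollar cost at a given hourly instance rate."""
    return median_seconds / 3600 * hourly_rate_usd

# Stand-in workload; replace with the actual step you care about accelerating.
workload = lambda: sum(i * i for i in range(100_000))

t = benchmark(workload)
print(f"median: {t:.4f}s, cost/run at $10/hr: ${cost_per_run(t, 10.0):.6f}")
```

Using the median over several trials (rather than a single run) keeps one-off interference from skewing the comparison.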