No, it doesn't draw 120W at the wall. The 120W figure is the difference between wall draw under load and wall draw at idle, which works out to roughly 1.1x the actual consumption. Apple under-reports power usage for some reason.
Sure, Apple runs at half the wattage of the 3080, with less than half of the performance.
All of the video workloads are specifically using ProRes, which is the Apple-accelerated codec. If you do not use ProRes, you will get a fraction of the performance. The MacBook only reaches that performance in a fraction of video editing workloads.
Yes, all games are running on Rosetta 2, except for Dolphin, which shows a similar performance ratio. This doesn't matter because GPUs don't have to deal with Rosetta, and the tests are done in GPU-limited circumstances. The performance impact of Rosetta is negligible in a GPU-limited scenario, where the CPU isn't even running at 100% and so can't slow the GPU down.
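To make the GPU-limited argument concrete, here's a toy frame-time model. The numbers are made up purely for illustration (including the assumed Rosetta overhead), not measurements:

```python
# Toy model: per-frame time is bounded by whichever of the CPU and GPU
# takes longer to finish its share of the frame's work.
def frame_ms(cpu_ms, gpu_ms):
    return max(cpu_ms, gpu_ms)

native_cpu_ms = 6.0
rosetta_cpu_ms = native_cpu_ms * 1.3  # assume ~30% translation overhead
gpu_ms = 16.0                         # GPU-limited: GPU is the bottleneck

print(frame_ms(native_cpu_ms, gpu_ms))   # 16.0
print(frame_ms(rosetta_cpu_ms, gpu_ms))  # 16.0 -> identical frame rate
```

As long as the (slowed) CPU time stays below the GPU time, the Rosetta penalty never shows up in the frame rate.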
The M1 Max is around the speed of the slowest 3060 in all compute workloads that aren't bottlenecked by CPU-GPU communication.
The fact remains, Apple straight up lied by saying that their GPU is comparable to a 3080. It is comparable to a 3060 in TDP and slower in all except specific, hardware-optimized workloads. There are other workloads that are accelerated in the 3060 where the inverse is true.
Also, a Quadro RTX does very well at Tomb Raider. Which isn't surprising because it has a lot of compute power, unlike the M1 Max's GPU, and games nowadays care a lot about compute power.
The benchmark between a 3060 and the M1 Max was not by Anandtech. It was by LTT, and used a laptop whose processor and GPU are in the $1000 class. And unlike Anandtech, they ran all of their GPU benchmarks in GPU-limited conditions, whereas the Anandtech benchmarks are CPU-limited too, artificially boosting the M1 Max's results by measuring its CPU as much as its GPU.
I don't understand why Apple gets to call their GPU as fast as a 3080 when it's slower than a 3060 unless you are running very specific hardware-optimized software that requires a hardware-specific workflow.
All of the benchmarks you linked are CPU benchmarks, and the Takua render benchmark is ridiculous because in the real world, artists use GPU rendering on their laptops, where the M1 Max gets absolutely destroyed by an RTX 3060 because of hardware-accelerated ray tracing.
> Apple under-reports power usage for some reason.
Again, citation needed. Mostly because these numbers aren't coming from Apple, but Anandtech.
> Sure, Apple runs at half the wattage of the 3080, with less than half of the performance.
But it is not half of the performance, that is a lie and the numbers are out there. Again, in the same article, the only downside is gaming, which runs under Rosetta 2.
> It is comparable to a 3060 in TDP and slower in all except specific, hardware-optimized workloads. There are other workloads that are accelerated in the 3060 where the inverse is true.
This is straight up false. Again, read the article you yourself linked.
> All of the benchmarks you linked are CPU benchmarks
Wow. So, rendering and manipulating video and photo are purely CPU benchmarks. Ridiculous.
I'm not going to continue with this. I find it insane that you would willingly ignore simple arithmetic just to, I guess, bash Apple. I'm not the greatest Apple fan myself, but I am certainly willing to admit that they did extremely well with the M1 family, and that they have created a very good laptop for professionals.
The Anandtech article shows the difference between idle wall power and wall power under load, not absolute wall power. Read the article, please.
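The measurement methodology amounts to a simple subtraction; a minimal sketch with hypothetical wall readings (not Anandtech's actual figures):

```python
# Wall-power delta methodology: subtract the machine's idle wall draw
# from its wall draw under load to estimate what the workload itself
# adds. The readings below are hypothetical, for illustration only.
idle_wall_w = 12.0   # hypothetical wall reading at idle
load_wall_w = 132.0  # hypothetical wall reading during the benchmark
workload_w = load_wall_w - idle_wall_w
print(workload_w)  # 120.0
```

Note that this delta excludes the baseline draw of the display, RAM, SSD, and so on, which is exactly why it differs from total consumption at the wall.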
>Again, citation needed. Mostly because these numbers aren't coming from Apple, but Anandtech.
The grey numbers are reported by the CPU using Apple code, but do not reflect actual power consumption.
>This is straight up false. Again, read the article you yourself linked.
Incorrect. The workflows are the specific Apple-approved ProRes workflows as well as games, tested in CPU-limited fashion by the author's own admission. I linked the article for its power consumption figures; you can watch the video linked above for proper GPU performance testing, where Rosetta 2 has negligible impact because of low CPU usage.
>Wow. So, rendering and manipulating video and photo are purely CPU benchmarks. Ridiculous.
Yes. CPU rendering is a CPU benchmark. Lightroom is a CPU benchmark for all but a few very specific tasks (AI upscaling, for example), and cannot properly utilize a high-performance GPU for anything else.
They created a good laptop GPU-wise for editing ProRes footage and using very large datasets, nothing else. If you need to do rendering, or game design, or have to work with complex CAD files, or have to do 90% of GPU compute workloads, or have to do 3D modelling/sculpting, or literally anything else a professional (or not) would want to do, it's embarrassingly deficient.