That doesn't have any relevance to the efficiency and cost improvements of having the same very fast RAM connected to both CPU and GPU cores.
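Concretely, here's the round trip a discrete GPU forces on every buffer today. A minimal CUDA sketch of the copy-in/copy-out pattern (the kernel and sizes are made up, purely illustrative):

    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    // Trivial kernel: double every element in place.
    __global__ void double_all(float *buf, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) buf[i] *= 2.0f;
    }

    int main(void) {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        // Host allocation lives in CPU DRAM...
        float *host = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) host[i] = 1.0f;

        // ...device allocation lives in the GPU's separate VRAM.
        float *dev;
        cudaMalloc(&dev, bytes);

        // The round trip a shared memory pool eliminates:
        // copy across PCIe, compute, copy back across PCIe.
        cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);
        double_all<<<(n + 255) / 256, 256>>>(dev, n);
        cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);

        printf("host[0] = %f\n", host[0]);  // prints 2.0
        cudaFree(dev);
        free(host);
        return 0;
    }

With one pool of fast RAM wired to both CPU and GPU cores, both cudaMemcpy calls and the duplicate allocation simply disappear.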
I can't believe anyone is arguing that bifurcated memory systems are no big deal. Are you like an x86 arch enthusiast? I'm sure Intel is frantically working on UMA for x86/x64, if that makes it more palatable. Though they'll need an on-die GPU, which might get interesting.
I'm a computer enthusiast. I've got my M1 in a drawer in my kitchen; it's just not very useful for much unless I'm being paid to fix something on it. macOS is a miserable mockery of itself nowadays, and Apple Silicon is more trouble than it's worth, at least in my experience.
As I'm working on AI stuff right now, I have to be a realist. I'm not going to dig up my Mac Mini so my AI inference can run slower and take longer to set up. Nothing I do feels that much faster on my M1 Mini. It feels faster than my 2018 MacBook Pro, but so did my 2014 MBP... and my 2009 X201. Being told to install Colima just to run Docker at reasonable system temps was the last straw. It's just not worth the hoop-jumping, at least from where I stand.
So... when the day comes that I need UMA for something, please let me know. As it is, though, I'm not missing out on any performance uplift.
> I'm sure Intel is frantically working on UMA for x86/x64
Everyone has been working on it. AMD was heavily considering it in the original Ryzen spec, iirc. x86 does have an impetus to put more of the system on a chip, but there's no good reason for UMA to be forced on it yet. Especially at scale, the idea of consolidating address space does not work out. It works for home users, but so does PCIe (as it has for the past two decades).
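And NVIDIA has been shipping a software flavor of it on plain x86 + PCIe for about a decade: Unified Memory (cudaMallocManaged) gives you one pointer valid on both CPU and GPU, with the driver migrating pages over the bus behind the scenes. A minimal sketch (kernel name and sizes are mine, purely illustrative):

    #include <cuda_runtime.h>
    #include <stdio.h>

    __global__ void double_all(float *buf, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) buf[i] *= 2.0f;
    }

    int main(void) {
        const int n = 1 << 20;

        // One pointer, valid on both CPU and GPU. The driver migrates
        // pages over PCIe on demand -- a unified address space without
        // any shared physical DRAM.
        float *buf;
        cudaMallocManaged(&buf, n * sizeof(float));
        for (int i = 0; i < n; i++) buf[i] = 1.0f;    // written by CPU

        double_all<<<(n + 255) / 256, 256>>>(buf, n); // read/written by GPU
        cudaDeviceSynchronize();                      // wait before CPU reads

        printf("buf[0] = %f\n", buf[0]);  // prints 2.0
        cudaFree(buf);
        return 0;
    }

Same single address space, no shared DRAM required - the programming model already exists where people actually want it.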
It's just marketing. It's a cool feature (they even gave it a Proper Apple Name) but I'm not hearing anybody clamor for unified memory to hit the datacenter or upend the gaming industry. It's another T2 Security Chip feature, a nicely-worded marketing blurb they can toss in a gradient bubble for their next WWDC keynote.