TBH This response is way too aggressive for the original comment.
The facts stated are simply that today it is more efficient to encode video for remote graphical sessions, because X11 applications already changed long ago to adapt to the modern world of GPUs and accelerated compositing. Bandwidth, latency, efficiency: everything got better, because a GPU with thousands of cores can handle the encoding and lighten the load on the CPU.
It doesn't say it doesn't work...
It doesn't prevent you from running X11, or even from booting up a PDP-11 if that's your favorite workflow!
The problem is that the Wayland fans have a tendency (at least I feel) to significantly misrepresent things in their favor. Statements like "oh that's actually more efficient on Wayland". No, Wayland is incapable of the sorts of optimizations X can do by design. It is only more efficient if the X app is doing things in a specific way that doesn't take advantage of huge parts of the X protocol. Granted, most modern apps do exactly that at this point. That doesn't make such statements any less of a misrepresentation though.
A much more reasonable claim would be that the network transparency afforded by the X protocol adds significant complexity which is no longer utilized by the majority of mainstream apps today. As such there's a reasonable case for dropping all that complexity from the core system and leaving it to peripheral libraries to handle on a case by case basis for the apps that want to make use of it.
And the idea of lossy compression while using an image editing program being a desirable thing (as suggested elsewhere in this comment chain) is laughable. It's already bad enough reading text that's gone through lossy compression. I would never want compression artifacts while manipulating an image.
My impression of Wayland so far is that I like the technicals but absolutely detest the people I encounter pushing it as a solution (it's quite similar to Rust in that regard I suppose). They would probably meet less resistance if they took more care not to misrepresent the overall state of things. I'll leave the link to KDE Wayland "showstoppers" for reference. Certainly that list is far shorter today than it used to be and many (not all) of the items are now solely on KDE's end. Nonetheless, fanboys have been claiming that Wayland is "production ready" the entire time. https://community.kde.org/Plasma/Wayland_Showstoppers
I'll switch to Wayland once it "just works" out of the box in terms of app integration on stable distributions including things like screen capture, fractional scaling factors, color management, all the stuff that works on X.
Look at this very statement: it's inaccurate in a trivial fashion that ought not require analysis, but here we are.
The people who actually develop X or Wayland are a tiny number. The people expressing opinions about tech on the internet are 1000x more numerous. The implication that the proponents' analysis is correct because they develop it is fatally flawed, if for no other reason than that the subject is obviously not limited to the tiny number of actual devs. Furthermore, the arguments even of devs need to stand on their own feet.
Look at the prior comments where someone complains that random crashes result in the entire session going down.
Who cares what anyone says about the theoretical design decisions behind manifestly unsuitable tools?
You can make all the arguments you want but it won't do anything meaningful. If those 1000x people expressing opinions have the necessary domain expertise, and aren't just tossing out their feelings on what they think might be cool or might be nice in a perfect world, then they should start contributing to these projects and fixing the bugs.
I mean, I think it would be cool if my PC never crashed. Isn't it easy and fun to say things like that?
You are saying that as if the part of the X11 protocol that's reasonable to run over a network were the better API, and application developers were simply too lazy to use it.
While the reality is that toolkits (and applications) used to use those APIs, and were revamped to use the DRI APIs and general bitmap-based windowing.
The old APIs don't support double buffering, or access to GPUs through modern APIs (both OpenGL, where indirect rendering is painful, and Vulkan).
They don't provide modern font rendering, or any kind of graphical effect (distortions) that UI people might want to play with.
They would be significantly worse for anything displaying animations, and one of the most common client libs (libx11) is serial and thus horribly latency sensitive.
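To make the latency point concrete, here's a toy back-of-the-envelope model (plain Python, not real X11 code; the function names and numbers are illustrative assumptions) comparing a serial request/reply protocol against a pipelined one over a link with a fixed round-trip time:

```python
# Toy model: time to issue N small requests over a link with a fixed
# round-trip time (RTT), ignoring bandwidth and processing costs.

def serial_protocol_time(n_requests: int, rtt_ms: float) -> float:
    """Each request waits for its reply before the next is sent
    (the synchronous, libx11-style pattern): N full round trips."""
    return n_requests * rtt_ms

def pipelined_protocol_time(n_requests: int, rtt_ms: float) -> float:
    """All requests are sent back-to-back without waiting
    (the asynchronous, xcb-style pattern): roughly one round trip."""
    return rtt_ms

# 500 small requests over a 20 ms WAN link:
serial = serial_protocol_time(500, 20.0)        # 10000 ms
pipelined = pipelined_protocol_time(500, 20.0)  # 20 ms
print(f"serial: {serial:.0f} ms, pipelined: {pipelined:.0f} ms")
```

On a LAN with sub-millisecond RTT the difference is invisible, which is why the serial pattern survived so long; over a WAN it dominates everything else.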
The advantage of VNC isn't lossy compression, which it doesn't force anyway. It handles networking better with bitmaps, and has improvements like acting as a screen/tmux-style reconnectable session for graphical applications.
Both VNC and RDP can also show the server's desktop, if it has one, not just run applications separately from whatever is currently open elsewhere.
The only advantage of old-style X11 was its ubiquity. But since it hasn't been used in ages, because it doesn't work well with modern computers and UI frameworks, that advantage is gone.
And there's zero reason to try to reimplement that in a new windowing protocol when objectively better choices already exist. Locally optimized windowing is a different beast from network-capable windowing.
You might also want to note that the fantasy of using the same protocol to drive the local display and to operate over a network is long dead. It seems like a clever idea, but it doesn't actually work; it ceased to be a thing the moment wide area networks became popular. The local and remote cases are two completely different situations that need their own individual attention. Even when developing against the X11 protocol you still have to consider this in modern times, because the DRI extension is not available over the network.
Compare to a protocol like RDP which is extremely optimized for efficient and secure network operation, but is also way more complicated as a result, and it would be foolish to use it on a local display server.
It does just work for some workflows. I think that's really what it boils down to. It is production ready for some people, but it's clearly not for you yet. Maybe that will change.