I recently tried switching to Linux with a few different distros (Ubuntu, Elementary OS, etc.) that use Wayland.
The thing that sticks out like a sore thumb to me, and which I've been unable to solve, is that apparently it's not possible to configure trackpad scroll speed. At all.
From what I was able to gather, Wayland/libinput say they shouldn't be responsible for handling it and that window managers should[0][1]; meanwhile GNOME says Wayland/libinput should handle it[2], and ultimately, several years later, it's still not possible in pretty much any Linux distro that uses Wayland(?)
When I switch to my Linux laptop to test things, my trackpad is bonkers and I have to move my finger in like 1mm increments because otherwise I'd scroll like 10 pages in Firefox. It's infuriatingly frustrating.
Wayland is not GNOME; what you describe is not a Wayland problem but a GNOME problem.
It's possible in both KDE and Sway.
GNOME is probably the worst DE imaginable. After 20 years, they still don't have thumbnails in the file picker, and they even dropped the preview side pane in GTK4. I can't even.
With X11, I can change scroll speed using the `imwheel` tool.
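For reference, a minimal ~/.imwheelrc I've used looks roughly like this (a sketch from memory; the trailing number is the multiplier applied to each wheel event, so raise or lower it to taste):

    # ~/.imwheelrc - the ".*" pattern applies to all windows
    ".*"
    None, Up,   Button4, 2
    None, Down, Button5, 2

and start it with `imwheel -b "4 5"` so it only grabs the scroll buttons.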
Years ago with Ubuntu and ElementaryOS, I could change it in the system settings GUI.
Today, with Wayland now default, I can't change it at all when running Ubuntu or ElementaryOS.
You can say this is a GNOME problem, or a distro problem, or whatever. But at the end of the day, I installed two of the most popular Linux distros out there and my scrolling is literally unusable - and I can't fix it without apparently installing an entirely different desktop environment or a patched version of libinput.
>You can say this is a GNOME problem, or a distro problem, or whatever. But at the end of the day, I installed two of the most popular Linux distros out there and my scrolling is literally unusable
You chose GNOME; the answer is that you are not the target of the GNOME project. They target people with weak minds who would instantly faint if they saw a configuration screen with options. So you are using it wrong: with GNOME you bend your mind and body to work the way the self-proclaimed GNOME UX experts think things should work.
If you decide you want options, try KDE. You are free to complain about GNOME, but it will be a waste of time; reality and rationality do not affect the people with giant EGOs in power at GNOME.
> You chose GNOME; the answer is that you are not the target of the GNOME project. They target people with weak minds who would instantly faint if they saw a configuration screen with options.
GNOME's problem is not the lack of configuration options, but lack of empathy. They stubbornly refuse to address (sometimes, even acknowledge) their own users' problems.
> If you decide you want options try KDE [...]
You don't solve this problem with more options. You solve it by understanding your users' needs.
I want a desktop that just works. KDE's promise is I can make it work if I can keep tweaking it; GNOME's promise is it should work if I'm willing to accept the compromises. Both of these options suck.
Clearly desktop is too large a target to be solved effectively by a herd of cats. Lots of good intentions and half-standards, but no synergy or force to in darkness bind them.
I have been using Linux/Unix on desktop since the nineties, and while it has come a long way, it feels like that last 10-20 percent will never happen. It's that part that needs a manager with a large stick/carrot willing to make (informed) decisions and enforce them.
>GNOME's problem is not the lack of configuration options, but lack of empathy. They stubbornly refuse to address (sometimes, even acknowledge) their own users' problems.
I would agree that this is also a problem, so GNOME has many problems. But IMO it is the big EGOs in charge that cause this lack of empathy.
I will disagree that you can get a desktop that is perfect for everyone. Kubuntu or whatever KDE distro could do a survey and come up with the most popular configuration by default, but for sure it would be something like:
What value do you prefer for X?
- 45% prefer A
- 25% prefer B
- 15% prefer C
- 5% prefer D
- the rest prefer E, F, G or H
So at best you have 45% happy users and 55% of users who need to change an option, but if you are a no-options extremist then you screw the 55%. And this is what GNOME did: they told the users that don't like the feature removals or changes to leave.
Spot on. Gnome is designed for a non-existent user, all the while (worse than ignoring) denying the actually existing users who have since moved on. Very sad story.
But "suck" is some subjective thing, you will find a giant number of GNOME fans that will tell you how great GNOME is, how intuitive it is, and how you just need to give up your own preferences and embrace their UX guru ideas and it will not suck.
It seems that many people are in the "I don't want to think for myself camp" . unfortunately people in my family are also in that camp, never question if maybe there is a way to customize things to make the software more comfortable, they bend their body and minds, causing even distress to follow the defaults.
Something that obviously suck for us is normal for a GNOME user, in GNOME land there is only 1 way of working and if you don't like it you are asked to move on. many tried but you can't change those guys minds this is why there are tons of GNOME forks and probably no KDE fork.
You shouldn't need to "think for yourself" as a prerequisite to have usable tools. Your thinking energy is much better spent solving original problems or channeling your creativity.
Personally - I think KDE 3.5 (yes I'm that old) was near perfect. As of 4.x, they were prioritising adding the ability to freely hand-rotate desktop widgets (who the actual f8ck wants a skewed widget!?) over performance and stability. I could live with a useless feature, but I couldn't continue to run it on a computer with 256MB of memory, which was all I could afford back then. I can't remember what I switched to at the time, but I never came back.
>You shouldn't need to "think for yourself" as a prerequisite to have usable tools
This is not true; we are not all the same. For example, my eyes are bad, and one is worse, so I favor one side of the screen: I want to put important stuff like notifications on my good side, and I do not need a pretend UX guru to tell me how to have my notifications and hard-code them. I also prefer to rearrange the keyboard layout so I can hit my most-used keyboard shortcuts comfortably with just my left hand. I don't care what the majority thinks the keyboard layout should look like; I am more comfortable my way, whereas a GNOME user will not even consider that it could be possible to customize the keyboard layout or change keyboard shortcuts - some big-ego dude decided for them.
Let's assume you are right and defaults could be good for 99%, though IMO they are great only for 40%. I want great, not good or OK, but I don't want to force my preference on others; I just wish others wouldn't remove functionality because it is used by less than 50%.
EDIT: I think I might have misunderstood your point and my reply was off-topic.
I assume that some big-EGO GNOME designer checks the touchpad speed setting on his laptop, decides what is best for him, and hardcodes it in. In their minds GNOME is usable, and if you somehow disagree they will tell you to move on; they will never admit anything. They want to push out all the people that disagree with their vision and keep only the ones on the exact same page with them on every possible point.
GNOME has kind of always been on the opposite end of the configurability spectrum from KDE, IME.
But also I'm not super clear on how libinput fits into the picture; I think there used to be some Synaptics-specific integration in certain places that I never went back to in order to confirm the differences (I switched to libinput several years before, and I've completely forgotten what I was trying to solve back then).
Anyway, I ended up writing a mostly-personal anecdote below that would likely not help, so you can probably skip it (if you do want to try anything, Kubuntu and https://neon.kde.org are both on top of Ubuntu, and there are other distros, but I've not kept up enough to recommend one).
---
I tried Ubuntu (w/ the GNOME default) once (back in KDE 4 times IIRC) and the lack of almost any flexibility felt like I was stuck in a sandbox (a literal one, like from when we were kids, not that it's easy to remember that far back).
The only other time I felt like that was when a friend gave me their old iPhone (6 IIRC?) "to see what it's like" and I had to give up trying it out because both the OS, and the few apps I could find (for its max supported iOS version), had random chunks of features missing (tbh I should've jailbroken it, then it'd just be one more piece of hardware I have no immediate use for - but I digress)
KDE is far from perfect (switched to Wayland recently, and been tracking a few QoL leaps in the next couple releases), but I can at least try to tweak it - same goes for using NixOS or ZFS tbh (not going to even defend those, but I have done both Gentoo-like shenanigans, and randomly RAIDed a dual-SSD laptop, respectively, quite painlessly despite not preparing for it from the start, and the weirdness budget is personally paid for a thousandfold).
Meanwhile I run into e.g. GNOME apps using libadwaita nowadays needing environment variables (that appear deprecated?) to apply the KDE gtk integration theme so they don't stick out like a desaturated winamp skin.
I've never felt like "power user" applied well to myself, like I'm not doing the equivalent of weight-lifting for computers, never want to be doing sysadmin if I can avoid it, I just want to have enough control to make things seamlessly neat for myself.
Opinionated defaults are great in the same way you'd set up a house for a marketing reel, but if I actually buy it, why would I have to deal with a landlord telling me I can't paint the walls or move/replace some furniture to maximize my comfort?
The "personal" in "personal computing" is supposed to be the same as the one in "personal property", and so I'd have similar expectations of "can screw with it without asking for permision" for both (modulo real estate being seen as an investment, and building codes, etc. - should've picked a smaller example than a house, oh well).
To stretch the house analogy further, just like I have leased (some corporation's) private property as office spaces, I would be fine to doing the same (also for business reasons) with e.g. a cloud platform (most likely through GHA, whenever they announce the paid tiers) - tho be fair to everyone, this applies far more to walled gardens than opensource software like GNOME.
Also, to be clear about the "sandbox" thing: sandboxes (and/or ideally more objcap systems) are great, and I think that the XDG Portal work is incredible for what it allows: the apps are getting sandboxed, the user getting more power over them.
Even without Flatpak (which I keep meaning to try out), I was happy to see e.g. the KDE Wayland screenshare dialog outright has an option for "create virtual screen" (which can further be configured in the KDE settings, and e.g. partially overlapped with a physical monitor, etc.).
If the app was in charge, they would barely enumerate some of the windows correctly, let alone provide new virtual screens.
So what are you using or recommending? I recently switched to Kubuntu from GNOME Ubuntu, and while some pain points have gone away (the lack of a global menu can be partially kind of mitigated, and I didn't have to fiddle with touch acceleration like on 20.04), I'm not impressed: middle-click gets in the way a lot and can't be switched off, the power management tray lags badly behind power events, weird dock previews and not-so-great app switching, somewhat fiddly touch targeting at times, FF crashing, updates not smooth, it insists on Chromium as the default browser, point-/tasteless Windows-y sounds and looks, ...
All the while there are ZERO GUI apps or other capabilities I'm using that I didn't already use 15 or 20 years ago.
Eyeing a return to Mac OS, which at least doesn't feel like you're treated as a guinea pig by dicks with attitudes. Linux notebooks seem barely good enough for uninspired enterprise work on bloated IDEs and Docker/other container crap that's only there because said dicks couldn't agree on a set of (really old) lib versions and GUI toolkits.
> middle-click gets in the way a lot and can't be switched off,
What do you mean by this? What happens when you middle click, that you don't want or expect to happen?
> not-so-great app switching
How do you like your app switching? I agree that the default isn't great, but in System Settings you can tweak it beyond all reason. I can help if you tell me what you want.
> FF crashing
I haven't had that happen. What addons do you have in Firefox?
> insists on chromium as default browser
Again, System Settings. Or, from within the Firefox settings, there might be a way to set it as default.
> apps or other capabilities I'm using that I didn't already use 15 or 20 years ago
What apps or capabilities do you feel are missing?
Half the time I want to press right-click or even left-click on the touchpad, it actually registers as a middle-click, which is very annoying as it means a window gets closed rather than focused; I don't want middle-click at all. Undesired middle-click paste also happens a lot.
When I click a link from Thunderbird (weird as it sounds, given that Thunderbird is a Mozilla app) it opens Chromium; the dialog to set FF as default doesn't change this.
I need to retest and maybe reconfigure app switching from a (vertical) dock as you say; just doesn't feel fluent as it is.
Occasional FF crashing should probably be addressed at Moz. Maybe it's because FF on GNOME gets more usage and testing. Btw, does SuSE (or Manjaro) still have a global-menu patch for FF? Kubuntu (understandably) doesn't maintain such patches.
I don't use KDE or Wayland - I use awesome-wm with X11. I don't get a lot of Firefox crashes, but one odd behavior I have noticed is that sometimes, after closing all Firefox windows, the firefox process continues to run (unresponsive) in the background.
When this occurs, clicking links results in no browser windows opening. At this point I run `killall -9 firefox` and oddly, in that very moment, the link I clicked will open up in Chromium.
I guess I can remap middle-click to left-click using the Wayland equivalent of xinput from a shell script or something. The apparent-FF-crash story described by johnmaguire and eddyb sounds very much like another thing I believe is happening on my notebook. Between these fuckups and the general alienation going on in Linux land (Wayland, snaps/Flatpak, systemd), I've got to say I'm not enthusiastic about going through those chores and teething problems just to get the same old GUI apps like Inkscape and GIMP (plus FF/Thunderbird) running, so right now I'm leaning toward going back to Mac OS more than ever (I used Mac OS on and off from 2003 to 2016).
How much shm is enough depends on both the browser's rendering engine and the display engine, but apparently 1.5GB shm use isn't out of the ordinary. You won't find any references to shared memory (or even general out of memory messages) if Firefox crashes in this way, so it's just another thing to keep an eye on.
I am using KDE (aka Plasma5) in Wayland mode, on NixOS unstable.
I would not recommend NixOS, just like I mentioned, and I didn't really want to get into the weeds of why, but while Nix is something that more people should try out if they're already familiar with unfortunate asymmetries (e.g. "git's data model is really nice" vs "git's CLI has sharp edges and some workflows the data model implies are entirely unserviced") and/or like to play around with experimental unpolished software, I would maybe avoid it until they actually come up with a "more declarative"/"less computational" flavor for 99% of usecases.
I've used openSUSE in the past, and while YaST2 might be less relevant now, it was shocking how much similar things were outright lacking back then (a lot of this was pre-NetworkManager to be quite fair).
A lot of people like Arch, and if Debian/Ubuntu package management doesn't get in the way I suppose KDE Neon might be nice?
(I keep forgetting KDE's Discover exists, it might also help with not dealing with package management directly)
---
> FF crashing
Quite ironically, if it is https://bugzilla.mozilla.org/show_bug.cgi?id=1743144 that's technically a gtk limitation (not only does it lead to the FF main thread having to poll gtk often enough to keep the Wayland connection from breaking, but when it does break it calls `_exit` so Firefox can't even do crash reporting, and they refused a patch to address this), and it can also happen for Chrome (which also uses Wayland through gtk AIUI).
If you want to check if it is the case, you can look in the logs (e.g. through `journalctl -o with-unit -r`) for "error in client communication".
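For example, something along these lines (the same command as above, just filtered to the current boot):

    journalctl -b -o with-unit -r | grep -i "error in client communication"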
> middle-click gets in the way a lot and can't be switched off
Are you talking about the feature controlled by System Settings -> Input Devices -> Mouse -> "Press left and right buttons for middle-click"?
(that is, if you intentionally press both buttons, does it trigger middle-click?)
AFAIK that's off by default, but I am on a different distro and running KDE/Plasma 5.25.4 and maybe it changed at some point, or maybe it's specific to touchpads? (which I sadly can't test because I only have an older Nvidia laptop, that can't use the Wayland-compatible drivers, or rather I would have to switch to nouveau first and deal with that etc.)
> insists on chromium as default browser
I've had issues with this in the past, some apps provide their own configuration instead of going through XDG mechanisms, or at least have suboptimal defaults.
I would check the settings of the apps which cause chromium to start, and maybe play around with Flatpak/forcing the use of XDG Portal, but that might be too much to ask.
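For the XDG side specifically, the default can also be checked and set from a shell; a sketch, assuming the desktop file is named firefox.desktop (snap or Flatpak installs use different names):

    xdg-settings get default-web-browser
    xdg-settings set default-web-browser firefox.desktop
    # some apps go by MIME/scheme handlers instead:
    xdg-mime default firefox.desktop x-scheme-handler/http x-scheme-handler/https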
> GNOME has kind of always been on the opposite end of the configurability spectrum from KDE, IME.
which gets weird when they push things like this on you:
- opening WiFi captive portals in whatever crap GTK browser your distro ships, executing random JS code in the process.
- making XWayland startup absolutely unconfigurable and hardcoded in C (or was it Vala), and for whatever reason MAKING IT LISTEN ON ABSTRACT SOCKETS (which no X config anywhere else does)! (The problem here: if any user-namespace container shares your network namespace and you do a naive xhost +SI:localuser:user, it works for any container, because abstract sockets are not stowed away in the filesystem - see the quick check below.)
I'd like to report at least the first issue, but I don't know how and where...
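Re the abstract-socket point: you can see it on your own machine by listing listening unix sockets with iproute2's ss, since abstract sockets show up with a leading "@" (a rough sketch, assuming ss is installed):

    ss -xl | grep X11-unix
    # a filesystem socket shows as /tmp/.X11-unix/X0,
    # an abstract one as @/tmp/.X11-unix/X0 (it has no path in the filesystem)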
> Wayland is not GNOME, what you describe is not Wayland problem, but GNOME problem.
Which in turn is the biggest problem with wayland. We are fragmenting the ecosystem with every window manager dealing with shared problems in their own custom way, because wayland only takes responsibility for a tiny fraction of the common things that every window manager does.
Come on, it’s not like the linux user space was ever not fragmented. That’s sort of the deal with bazaar-style development.
After enough time some best practices do solidify into standards, but it wasn’t overnight in X either, and first and foremost it wasn’t made with that in mind. Wayland has protocols which are queryable and provide a very good way of implementing more and more standard ones from the experimental “fragmented” ones, and I don’t see much problem with this model in practice either.
> After enough time some best practices do solidify into standards, but it wasn’t overnight in X either, and first and foremost it wasn’t made with that in mind.
Part of what drives me nuts about Wayland is they could have learned these lessons from X's history and actually had it in mind. Instead, they threw it all out without learning a thing.
Or chances are, the more features they would have demanded by spec from the start, the smaller the number of people actually interested in implementing it would have gotten, and it would turn into a single implementation at most, or just hype at the worst.
They absolutely chose to build the protocol out of enhanceable sub-protocols, something greatly missing from X, so I don’t think your criticism is fair here.
This is a result of them learning lessons from X history. The lesson from X is that mouse configuration absolutely does not belong in the display protocol and should not require futzing around with editing a root-owned xorg.conf. Input configuration is an entirely separate concern.
And? Just because the linux ecosystem is fragmented doesn't mean that new software has to be, too.
The fact that developing a new compositor with Wayland basically required wlroots to be even feasible for a one-man project is concerning enough. The amount of stuff you have to handle is insane compared to writing your own X WM.
It is a good comparison because you don't need to write your own X server to implement your own window manager, which is the part that most people care about.
XWayland already exists and is the name for X.org running on Wayland, to enable X11 apps to run on Wayland desktops.
You may be interested to learn about wlroots though. It's a project that I think originated from sway, and it aims to provide most of the low-level plumbing for writing your own Wayland window manager/desktop environment.
I love GNOME (and currently still use it), but the project really has been taking a turn for the worse.
If you want to do something even slightly off the happy path, GNOME is the worst Wayland experience. Because of ideology (and in the name of security), they refuse to implement features and extensions that users and app devs need.
This is just an issue that I have run into; there are more. Even ignoring wlroots and KWin, users coming from Windows and macOS expect features like this.
But GNOME has zero reason to implement server side decorations in Wayland. All GNOME apps use client side decorations and have done so for years. The other major toolkits (Qt and Electron) have also added support for client side decorations. Legacy X apps are still able to use the X decorations. The only thing left is apps that were broken in Wayland in the first place.
Client side decorations are fine, but removing the option of server side decorations from app developers is stupid IMO. KDE and wlroots have this functionality.
Practically, for small apps that don't really have their own style (or games that are just one big opengl window) it's much better to rely on the window manager to provide a titlebar that somewhat fit in with the rest of the system. This is what users expect when coming from Windows or macOS.
Instead, apps like kitty just provide a godawful stopgap titlebar for GNOME users, because implementing good CSD is outside the scope of the project.
I'd be reluctant to attribute Gnome development to malice when it can quite easily be explained by incompetence.
Fortunately, Linux has desktop alternatives that are better than Gnome. In fact, you'd be hard pressed to find an alternative that isn't better than Gnome.
Whoever wants to sabotage the Linux Desktop, or more precisely a community-developed Linux Desktop. The biggest sponsors of GNOME are IBM and Google; who knows what their agenda is.
You’re right, IBM & Google have teamed up to brainwash and secretly hire all the OSS developers for GNOME. Their goal is to stealthily undermine the Linux Desktop by implementing other half baked OSS tech in place of currently broken OSS tech, all while the public is bamboozled.
As somebody who witnessed the whole Microsoft "Halloween Documents" leak at the time, I can assure you all kinds of psyops are being pulled inside the more or less completely naive FOSS community.
The practices outlined in the Halloween Documents were a "crazy conspiracy theory" at first as well - until it turned out to be a very real conspiracy.
Same here, only with Dash to Panel. But man - I have so many workarounds for standard functionality GNOME either breaks or refuses to implement: a symlink in place of /usr/bin/gnome-terminal, patch sets for GTK 3, custom scripts to work around the fact that you can't configure the screenshot folder...
I’m sure this goes for a lot of people, me as well for some time; but in the end the problems simply became too much for me. So I’m curious: how long have you been using Gnome?
No, it's not possible in a newer Ubuntu KDE LTS (> 20.04; I don't have the laptop on hand). There's no scroll speed configuration, as sibling comments discuss. Switching the driver to libinput or whatever made it configurable, but other bugs popped up, and suddenly every accidental palm touch sent my cursor clicking somewhere.
There may be GNOME-specific issues as well, but I also noticed the other week that FF scrolling was bonkers in Sway (just a light swipe can "smooth scroll" across several screenfuls, way too fast and long, with several seconds of slow trailing scrolling) but not under i3 (IIRC it might even have been fine in Sway when running under XWayland). I don't recall the details, but I got back to somewhat sane behavior with some non-obvious about:config changes.
It forces every single DE that uses it to reinvent the same thing. There is no good reason to do that, just a lot of duplicated effort. You do not want different mouse behaviour when you use a different WM. Having it pluggable is fine, even desirable, but pushing it onto every DE is just a waste of effort and raises the bar required for any DE to move to Wayland.
> GNOME is probably the worst DE imaginable. After 20 years, they still don't have thumbnails in the file picker, and they even dropped the preview side pane in GTK4. I can't even.
For me it peaked somewhere in 1.x; 2.x was a prettier version of it, and after that they completely lost me. Their constant dumb arrogance in assuming what their users want is just the cherry on the cake of shit. And then there's the "fuck compatibility, just change APIs willy-nilly" attitude they got after 2.x.
That is exactly what libraries are for, not protocols.
Should Web Standards mandate the use of a given JS implementation because it hinders the creation of new ones?
Also, the million-and-one “DEs” under X are more like skins; there is absolutely no reason for them to exist in that form, they could just build on top of another Wayland compositor.
Okay? It can be a lib. Just make Wayland use it, rather than something every DE needs to implement. A DE shouldn't care where the mouse movement data comes from and shouldn't need to compensate based on the device.
That is a correct conclusion, in the same way that many issues with X features are not actually issues with the X server. Over the years a lot of things have been moved out of the X server into client libraries, or into Mesa, or into D-Bus, or into Wayland...
>GNOME is probably the worst DE imaginable. After 20 years, they still don't have thumbnails in the file picker and even dropped the preview side pane in GTK4.
Really disappointing to see this very lazy and trite criticism upvoted. If you look in KDE Plasma or Sway, you can also find plenty of missing features and bugs. You can find those anywhere.
Gnome on Ubuntu 20.04 does have thumbnails in file picker. I checked that right now before clicking Reply: Slack, upload from your computer, click on PNG, the thumbnail appears to the right of the file list.
What they meant is having thumbnails instead of icons, so that you can quickly see a preview of all the files instead of having to select them one by one. This article was shared somewhat recently and it explains the whole ordeal: https://jayfax.neocities.org/mediocrity/gnome-has-no-thumbna... (contains snarky language).
It’s a small(-ish) issue, but people often share it as a good example of their frustration with the way Gnome is managed - the bug report has been open for years and several patches have been proposed, but they’ve always been either rejected or ignored with no discussion. My personal (and similar) pet peeve is that typing in the window triggers a search instead of selecting the file matching the keys, like in every other OS.
Do you also know a way to stop inertial/kinetic scrolling on two-finger tap / right-click? (Left-click works, but since I scroll with two fingers, it would feel much more natural on right-click.)
No worries. That one is a little harder. My understanding is that libinput doesn't handle kinetic scrolling at all[0], it is now implemented at the toolkit level. For Firefox, the "hold" gesture to stop kinetic scrolling is blocked on either the required event being backported to GTK3 or Firefox being ported to GTK4[1].
My only suggestion for the time being is to try scrolling back a tiny bit in the opposite direction rather than tapping. Works for me, lol.
I would think that's a toolkit (GTK, Qt, etc.) issue, not a window manager (well, in Wayland parlance, compositor) issue. The apps just get the scroll events from libinput, and it's up to them to decide how much/how fast to scroll based on what it sees.
It's the same argument that the libinput author makes about why libinput doesn't implement kinetic scrolling, and that it's the job of the toolkit. Only the toolkit knows what's appropriate there.
For reference, the kinetic scrolling rationale goes like this: say you start a scroll movement in one app, and then it continues scrolling after you lift your finger off the touchpad. Then you alt-tab to another app. If libinput implements kinetic scrolling, then scrolling will start happening in the newly-focused app, until the kinetic scroll decays down to zero. That's definitely not what you want, and libinput can't fix that problem, because it doesn't know anything about windows or focus.
Granted, scroll speed isn't exactly the same thing, but I could imagine that a toolkit/application could want different scrolling speed in different contexts, which libinput would have no understanding of.
> because it doesn't know anything about windows or focus
But that's exactly the problem with the Wayland architecture, isn't it? Input handling, window management and composition are so closely entangled that it shouldn't be spread over several unrelated libraries.
And yet, in every other OS, scroll speed is a global setting, potentially with per-app overrides. No user in their right mind will configure a different scrolling speed in each and every app they use - they will have a preferred scrolling speed, and at most a handful of apps where they want custom behavior (e.g. scrolling in a game will probably require completely different precision than day-to-day, or perhaps a CAD tool).
To me the interesting/sad bit is that individually, I can see all of their points; the system is set up in such a way that there is nobody whose job it is to cut through the nonsense and keep the user in mind.
I think “not interesting” is an understatement. Making changes in an OS project can be downright abysmal - lack of coherent structure, little to no guidance, or even hostility toward contributions…
It’s certainly not entirely because of the attractiveness of the problem itself.
I'm able to set the trackpad speed in both GNOME and KDE on Wayland with libinput. In GNOME, it is under Settings > Mouse & Touchpad > Touchpad Speed. In KDE, it is under Settings > Input Devices > Touchpad > Pointer speed. Do these settings not show up for you?
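For what it's worth, that GNOME setting is also reachable from the CLI via gsettings; a sketch (note this is pointer speed, a value in the range -1 to 1, not scroll speed):

    gsettings get org.gnome.desktop.peripherals.touchpad speed
    gsettings set org.gnome.desktop.peripherals.touchpad speed 0.3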
They're talking about scroll speed, not cursor speed. When using those high precision touchpads where you can scroll pixel by pixel (so not line by line scroll wheel emulation), it's impossible to configure the speed of that scrolling in GNOME, and on a whole lot of systems, it's insanely sensitive.
I have a slider for touchpad scroll speed on my KDE Plasma Wayland laptop.
Also, in my experience XWayland programs seem to exhibit overly sensitive scrolling, while native Wayland is fine (noticed this mainly on Firefox). Perhaps X scrolls by line, while Wayland natively supports pixel-based scrolling?
Ok, I see the problem. I had expected the scroll and cursor speeds to be linked, but they apparently are not. For users who prefer slower scroll speeds, I understand how this can be frustrating.
- "When I switch to my Linux laptop to test things, my trackpad is bonkers and I have to move my finger in like 1mm increments because otherwise I'd scroll like 10 pages in Firefox. It's infuriatingly frustrating."
The scroll behavior is highly configurable on Firefox's side; check out some of these about:config flags (if this is still of interest):
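A few I remember off-hand (verify the exact names and defaults in your Firefox version; they mainly affect wheel-style scrolling and may not tame precision-touchpad deltas):

    mousewheel.default.delta_multiplier_y   (a percentage; 100 is the default, lower it to slow scrolling)
    mousewheel.min_line_scroll_amount
    general.smoothScroll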
The same driver is used for Wayland, but I don't know how you configure it there; it will no doubt be possible to do something similar with a single text-file config.
I think the problem is that most UI tools like gnome settings do not expose the full range of possibilities because it's too much effort.
[EDIT]
Ouch, so I'm wrong, there is no equivalent in Wayland. I know Wayland has no concept of a display server, but this really sucks for configurability [0]:
> For Wayland, there is no libinput configuration file. The configurable options depend on the progress of your desktop environment's support for them; see #Graphical tools.
This means there is no built in way to configure libinput without your DE/WM supporting it or other tools like libinput-config as the sibling comment points out.
I’m puzzled by your comment because I observe precisely the opposite: in Sway (Wayland) I have `input * scroll_factor 0.35`, but in i3 (X) I can’t reduce the scroll speed.
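For completeness, the rest of the touchpad knobs live in the same place; a sketch of a fuller input block (option names are from sway-input(5), the values are just what I happen to use):

    input type:touchpad {
        scroll_factor 0.35
        natural_scroll enabled
        pointer_accel -0.2
        accel_profile adaptive
        dwt enabled
    }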
I noticed the same thing and it was hard to believe that such a basic thing is actually unsupported in 2022. It's really just not supported by Gnome or KDE when using Wayland.
I just don't bother with all of that, since it has been years without any improvement, just a confusing pile of alternatives to alternative system components that don't work together and decide to fight against the OS - the inconsistency you've already explained.
At that point, I just use macOS to get work done without fiddling or googling cryptic errors just to use the trackpad, or hunting down why GNOME, Wayland, LightDM, D-Bus, libinput, or the video driver(s) decided to have a fight and crash the desktop.
From https://wiki.ubuntu.com/Wayland
"The X11 protocol was designed around running graphical apps across the network. While some people use this feature, it's far from common. Wayland drops this requirement as a way to greatly simplify its architecture."
X client and server are usually the same machine, but they don't have to be. While on the road, you can use your notebook to open a Gimp session on your home machine and edit an image stored there. That is, the Gimp runs on your home machine while your notebook has the GUI.
This is a lot to give up for cool compositing effects...
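For concreteness, the workflow being given up is a one-liner (assuming the remote sshd allows X forwarding; the hostname is a placeholder):

    # use -Y ("trusted" forwarding) if -X turns out too restrictive
    ssh -X home-machine gimp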
You can also use Gimp via a RDP or VNC session, which will give much better performance on low-bandwidth connections, since those protocols do damage detection, only sending updates of what's changed, and (lossy) compression.
I'm not an expert, but my understanding is that the whole X11 is network transparent thing worked great when apps used the X11 drawing primitives. These days a significant number of X11 apps just render _everything_ (often including window decorations) "server-side" as bitmaps and then send them over the wire to the client to composite them. Essentially the X11 wire protocol has become a bitmap pipe.
Wayland does the "bitmap pipe" thing more efficiently than X11.
Very much this. Running the X11 protocol over the network is usually not a good idea anymore.
By the way, does anyone know of a VNC-like solution that can use MPEG compression?
Also, VNC could be better if it could increase the quality of parts of the screen once they stop updating. E.g. in TigerVNC setting a low bitrate doesn't improve the quality of text once the text stops changing.
Nomachine's server and client are closed-source but work great.
For open-source solution for quickly remoting into an existing display, I use freerdp-shadow-cli which is way more stable than x11vnc and uses the RDP protocol instead.
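In case it saves someone a search, the basic shape is roughly this (flags from memory, so check the tools' help output; hostname and username are placeholders):

    # on the machine whose session you want to share
    freerdp-shadow-cli /port:3389
    # from the viewing machine
    xfreerdp /v:that-machine:3389 /u:youruser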
> Running the X11 protocol over the network is usually not a good idea anymore.
It works well, though, and I'm not really interested in hearing that my own experience with it working well is a lie or some trick. Wayland gives up some things, and pretending that those things have no value isn't going to convince those of us who know they do.
Also, those of us who use non-mainstream window managers (such as Window Maker) are not interested in hearing that we should switch to some UI which doesn't support our workflow just because it's more fashionable these days.
There's a debate to be had here, and it needs to be had on the basis of facts, not lies.
TBH This response is way too aggressive for the original comment.
The facts stated are simply that today it is more efficient to encode video for remote graphical sessions, because X11 applications already changed a long, long time ago to adapt to the modern world of GPUs and accelerated compositing. Bandwidth, latency, efficiency: everything gets better, because a supercomputer with thousands of cores (the GPU) can do the encoding and lighten the load on the CPU.
It doesn't say it doesn't work...
It doesn't prevent you from running X11 either, or even from booting up a PDP-11 if that's your favorite workflow!
The problem is that the Wayland fans have a tendency (at least I feel) to significantly misrepresent things in their favor. Statements like "oh that's actually more efficient on Wayland". No, Wayland is incapable of the sorts of optimizations X can do by design. It is only more efficient if the X app is doing things in a specific way that doesn't take advantage of huge parts of the X protocol. Granted, most modern apps do exactly that at this point. That doesn't make such statements any less of a misrepresentation though.
A much more reasonable claim would be that the network transparency afforded by the X protocol adds significant complexity which is no longer utilized by the majority of mainstream apps today. As such there's a reasonable case for dropping all that complexity from the core system and leaving it to peripheral libraries to handle on a case by case basis for the apps that want to make use of it.
And the idea of lossy compression while using an image editing program being a desirable thing (as suggested elsewhere in this comment chain) is laughable. It's already bad enough reading text that's gone through lossy compression. I would never want compression artifacts while manipulating an image.
My impression of Wayland so far is that I like the technicals but absolutely detest the people I encounter pushing it as a solution (it's quite similar to Rust in that regard I suppose). They would probably meet less resistance if they took more care not to misrepresent the overall state of things. I'll leave the link to KDE Wayland "showstoppers" for reference. Certainly that list is far shorter today than it used to be and many (not all) of the items are now solely on KDE's end. Nonetheless, fanboys have been claiming that Wayland is "production ready" the entire time. https://community.kde.org/Plasma/Wayland_Showstoppers
I'll switch to Wayland once it "just works" out of the box in terms of app integration on stable distributions including things like screen capture, fractional scaling factors, color management, all the stuff that works on X.
Look at this very statement: it's completely inaccurate in a trivial fashion that ought not to require analysis, but here we are.
The people who actually develop X or Wayland are a tiny number. The set of people expressing opinions about tech on the internet is 1000x larger. The implication that the proponents are correct in their analysis because they develop it is fatally flawed, if for no other reason than that the group in question is obviously not just the tiny number of actual devs. Furthermore, the arguments even of devs need to stand on their own feet.
Look at the prior comments where someone complains that random crashes result in the entire session going down.
Who cares what anyone says about the theoretical design decisions regarding manifestly unsuitable tools.
You can make all the arguments you want but it won't do anything meaningful. If those 1000x people expressing opinions have the necessary domain expertise, and aren't just tossing out their feelings on what they think might be cool or might be nice in a perfect world, then they should start contributing to these projects and fixing the bugs.
I mean, I think it would be cool if my PC never crashed. Isn't it easy and fun to say things like that?
You are saying that as if the part of the X11 protocol that's reasonable to run over the network was the better API that application developers are simply too lazy to use.
While the reality is that toolkits (and applications) used to use those APIs and were revamped to use the DRI APIs and general bitmap-based windowing.
The old APIs don't support double buffering or access to GPUs with modern APIs (both OpenGL [indirect sucks] and Vulkan).
They don't provide modern font rendering, or any kind of graphical effect (distortions) that UI people might want to play with.
They would be significantly worse for anything displaying animations, and one of the most common client libs (libx11) is serial and thus horribly latency-sensitive.
The advantage of VNC isn't lossy compression, which it doesn't force either. But it handles networking with bitmaps better, and has improvements like acting as a screen/tmux-style reconnectable session for graphical applications.
Both VNC and RDP can also support showing the server's desktop if it has one, not just running applications other than the ones currently open elsewhere.
The only advantage of old-style X11 was that it was ubiquitous. But since it hasn't been used in ages (it doesn't work well with modern computers/UI frameworks), that advantage is gone.
And there's zero reason to try and reimplement that in a new windowing protocol when there are objectively better choices out there already. Locally optimized windowing is a different beast from network-capable windowing.
You might also want to note that the fantasy of being able to use the same protocol to drive the local display and also operate over a network, is long dead. It seems like a clever idea but it doesn't actually work. It ceased to be a thing entirely the moment wide area networks became popular. The local and remote cases are two completely different situations that need their own individual attention. Even when developing against X11 protocol you still have to consider this in modern times because the DRI extension is not available over the network.
Compare to a protocol like RDP which is extremely optimized for efficient and secure network operation, but is also way more complicated as a result, and it would be foolish to use it on a local display server.
It does just work for some workflows. I think that's really what it boils down to. It is production ready for some people, but it's clearly not for you yet. Maybe that will change.
Yes, I see this come up again and again. Meanwhile, I use X11 over the network every day with Firefox, GIMP, various custom apps, and a zillion other things. And it works fine, and great, and VNC isn't an option for most of it for me.
>and I'm not really interested in hearing that my own experience with it working well is a lie or some trick.
Your own experiences with it aren't a lie, but they're probably based around ignoring the last 20-30 years of advancements, and ignoring many actual statements from the X11 developers saying that it's bad for remoting.
>Wayland gives up some things
Well in the case of X11 fowarding, no it doesn't. That still works.
>Also, those of us who use non-mainstream window managers (such as Window Maker) are not interested in hearing that we should switch to some UI which doesn't support our workflow just because it's more fashionable these days.
By insisting on using these obscure, non-mainstream and under-maintained environments you are setting yourself up for an extraordinary amount of pain for very little benefit. The more time you spend avoiding the issue the harder it gets to figure out how to adapt your workflow to something else. I'm sure you understand that more deeply than anyone else here. So why keep up with the charade? It is not doing you any favors. At some point we have to let bad habits die.
If you're not afraid of big brother, I can recommend Chrome Remote desktop. You just install a server on the shared machine, and you can access your stuff using a regular browser thanks to the wonders of WebRTC.
I recently had to uninstall Chrome Remote Desktop, because it was breaking my (Ubuntu) system's ability to handle some basic permissions, such as mounting USB disks.
It turns out that CRD is well known for breaking other seemingly random things, such as accessing your printer:
> my understanding is that the whole X11 is network transparent thing worked great when apps used the X11 drawing primitives.
It did. It was awesome to be able to run one of the expensive academic apps from any X terminal on campus. Network security was largely non-existent then, and you could run X apps not just across campus but across the Internet. I remember a really early web page that had a text field (for your host name) and a button that would start a particular app on their side and would display on your local X server. It wasn't very fast.
Anyway, all that died as Athena and Motif began to look dated as newer applications in the mid-90s started using more and more bitmaps.
Completely off-topic, and I wish I could DM You without extra spam here, but great handle. Commodore Business Machines...
Ever use a PET? Loved those things.
I was around back in the days that X-servers were still very much a thing, and even then many thought they were an expensive and inefficient way of doing things.
However, there were many who genuinely believed that networks would get arbitrarily faster and compute would not. Of course, the opposite happened, but in that alternative universe X’s design would have made sense.
This isn't quite right. Both have gotten arbitrarily faster. That some folks have gigabit fiber is proof of that. Graphics have just outpaced networking. Easily.
The problem with X over the network is not bandwidth, it is latency. The X11 protocol is very chatty, so even very simple things will cause several roundtrips, which makes X over the network so painful.
> The X11 protocol is very chatty, so even very simple things will cause several roundtrips, which makes X over the network so painful.
This simply isn't true. Some operations require roundtrips (drag and drop comes to mind), but very simple things absolutely don't. Events come one way, draw commands go the other.
Essentially xlib is latency-limited because it's a synchronous protocol. Xcb can help if you redesign your applications for it, but we are talking about old applications (Motif...).
You know xcb and xlib are just different client libraries for accessing the exact same wire protocol, right?
Moreover, notice the example they gave there is atom interning, which is a roundtrip on the wire (though you can batch them even in xlib...), and they say "most real applications will see less benefit than this" since that's the worst case - most applications do atom work at startup, not in the main use cycle, which is famously both async and buffered (it used to be a FAQ on xlib tutorials reminding people to run the event loop, and error handling is complicated a bit by it).
Latency hasn't really improved. Dial-up or ISDN to nearby cities was 25-50ms, the same as I get with HFC or 5G. Cable was slightly better where it was available. International pings from Australia got much worse in the 2010s but have since improved roughly back to where they were.
Bandwidth has increased by 3 orders of magnitude though.
Dial up latency was never that low. ISDN, yes, but traditional dialup modems were pretty bad. The analog-to-digital conversion introduced a couple hundred milliseconds of latency.
I remember having latency under 100 ms within the metro area, but I could easily be misremembering, and I don't have a PPP account anymore to test. (And I'd have to visit my MIL, who has a real POTS line with its low latency)... But I did find a reference mentioning 150 ms round trips regionally, which I think was US west/east/central, so I think this 150ms includes some intercity latency as well.
I think one thing that sort of disappeared was the vision of thin clients everywhere. X11 is an obvious thin-client solution.
Thin client for home use seems like a winning proposition. Rather than giving the kids a Raspberry Pi 400 or garage-sale P4 when they want their own computer, you could hook up another X terminal to a large shared Ryzen box. The hardware gets more efficiently used, files can easily be shared and centrally backed up, and nobody ends up being the guy with the cast-off or otherwise inferior PC.
I think quite the opposite happened, just not on X11 level. The web browser is essentially a thin client nowadays. The fact that you have full client-side scripting is one thing that was IMHO a bit underdeveloped in classical thin-client systems because with that you can mask network latency much easier (though, admitted, I don't know a lot about how X11 handles this).
Don't agree at all. The web browser is essentially a standardized computer on which to run your fat client systems like SPAs. The browser can run arbitrary computation locally.
The problem being that X thin clients back then were more powerful than desktop PCs at the time. Even now they're more expensive than a Raspberry Pi, so it's more economically efficient to buy one of those.
Also, network latency has a hard physical limit: the speed of light. Serial computation also has a limit, and we are close to it with current technology, but parallel execution doesn't really have one, and GPUs take massive advantage of that. We can continue to increase resolution/frame rate basically arbitrarily (of course with very diminishing returns after a point).
There is some strange irony in watching people use quite expensive IBM X Windows thin-client terminals to manage xterms, each being a different talk session.
> These days a significant number of X11 apps just render _everything_ "server-side" as bitmaps and then send them over the wire to the client to composite them.
Inferring from the quotation, I take it you mean application server-side (instead of display server-side)? Just confirming.
> since those protocols do damage detection, only sending updates of what's changed
So does X.
> and (lossy) compression.
This is legitimate though: the X protocol has no support for any kind of bitmap compression, and it can sometimes be pretty painful. Though internet connections are getting faster and faster too, so it isn't necessarily a big deal, especially since X does have bitmap caching and some display-side rendering capabilities; but it sure would be nice if you could send other formats on the wire.
If I understand you correctly, yes. FreeRDP has an option to render only a single window instead of the whole desktop. This works amazingly well and feels native. This is the trick WinApps uses.
You could either use WinApps or create your own script and add it to a desktop entry. Users can then open RDP Gimp like any other application without even noticing.
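A sketch of what such a desktop entry could look like - the host, user, and application alias are placeholders, and the /app: RemoteApp syntax depends on your FreeRDP version and on the server side actually supporting RemoteApp (which is what WinApps relies on):

    [Desktop Entry]
    Type=Application
    Name=GIMP (remote)
    Exec=xfreerdp /v:remote-host /u:me "/app:||gimp"
    Icon=gimp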
Editing my photos over a layer of video compression artifacts doesn't sound fun. Web apps are a more reasonable replacement for X11, with the browser client much more capable of doing low-latency local processing. The only problem is the lack of easy support for local apps and LAN-distributed apps. Probably still easier to add than supporting a low-input-latency, eventually consistent Canvas over X11 or VNC.
> While on the road, you can use your notebook to open a Gimp session on your home machine and edit an image stored there. That is, the Gimp runs on your home machine while your notebook has the GUI.
But that's never where I want the split to be when working across the network.
Remote storage? Sure, sign me up. (The POSIX APIs are horribly unsuitable for network filesystems, but I'm speaking about the concept more than the current implementations.)
Remote heavy computations, like AI workloads and compilation? Definitely.
Remote GUI code? No ugh. Compare the experience of VS Code Remote vs just running VS Code remotely with ssh X11 forwarding, using a high latency connection.
You are fighting the speed of light. It won't go well.
I never had to do it myself, but I have a feeling it will work better with Wayland. The delay of 50-80ms introduced by the network is not terribly much (everything under 110ms is fine for playing Dota), so if your software doesn't lag itself, like X11 did, maybe it will work.
You could also use something like Zerotier-one to recover from network being shortly disrupted. This service connects your machines into a virtual network with static IPs, so the IP addresses through which your machines talk will stay the same and TCP connections will recover and ssh will keep working if a device briefly goes offline.
> The delay of 50-80ms introduced by the network is not terribly much (everything under 110ms is fine for playing Dota)
I don't think you're giving the Dota client enough credit there. Game clients don't wait for network to pretend to respond to user input, because it absolutely is noticeable at 100ms latency. Instead, they predict as much as they can client side and display it as fast as possible. For this reason, the client is not at all thin. It knows how to run the whole program by itself. Not so with remote X11. With remote X programs, the client doesn't know what happens on click for anything, so the user really does get hit with a 100ms delay for every interaction with the program.
You're right. I just tested it connecting from laptop in Sweden to a VPS in Sweden through a German VPN server. The total ping was 30-40ms, and running `waypipe ssh remote firefox` was right on the edge of being unusable. Navigating and clicking on links was fine with a bit of delay, but typing when the feedback is delayed is annoying.
Although if I connect to something that's 20ms away (within the same country), the experience is really good.
50-80ms of non-predicted input lag is not terribly much? Surely you must be joking.
Dota is fine because it has network prediction and a finite and well defined set of inputs.
And that's also ignoring the video compression artefacts, which are what you introduce if you send a modern GFX pipeline across the network. Computer screens can no longer be described with just filled rectangles and text.
It's certainly possible to do better than raw X11 across the network. Other folks mentioned VNC, RDP, and NX too. I assume these protocols reduce bandwidth usage by compression. Sometimes you can even do prediction (as with mosh for ttys or Stadia for games).
The best is always going to be to use a local copy of the logic and data needed most frequently/immediately. E.g. in the VS Code example, they keep the buffer you're editing locally, the logic for cursor movements, the errors you're flipping through, and maybe even some precached autocompletion/highlight stuff. You don't have to wait for a round trip to see your keystrokes show up. That's the advantage of splitting the system in the right place.
I don't know whether you've used Remote X, but I used it in a project, and it's extremely inefficient, especially in thin-client applications; if you try to carry anything like video, a 100Mbps connection can easily be saturated by a single X server/client pair. In other words, it doesn't really work unless you have a gigabit LAN.
NoMachine developed a set of libraries called NX back in the day, which transferred images and image deltas with high compression. We used this instead in the same project (via X2Go), and I set up the same stack at my university, where 20-something users connected remotely to a single "terminal" server to do remote research, and it worked like a charm.
While I like Remote X, it's still very inefficient even today. So, unless it's made extremely efficient over normal internet, over residential connections, it won't be missed.
Moreover, the rarity of projects using Remote X, or abstracting it with libraries like NoMachine kinda validates the idea is the feature is considered a novelty and not used much.
While I like the feature, I have feeling that it won't be missed or sought after much.
On the other hand, what killed X is the state of its code rather than the complex architecture. It's the haphazard development over the years which made the code unmaintainable.
Addendum: Remote X made sense back in the day: carrying minimal data, mostly terminal windows, between terminals and a central mainframe/time-sharing system over relatively short distances. I guess it was never designed with long distances like today's internet in mind, hence it's been left by the wayside.
Remote X has been a godsend for me. I'm not always able to lug around a worthy machine, so being able to use lesser hardware as a 'thinclient' to my wireguard'd assets is very handy. Obviously I'm not going to expect high definition video or realtime gaming, but that's not what it's billed as either, so within the scope of what it's for I'm wholly satisfied.
People on both sides sound somewhat fanatic, and I've never understood that. I just want to use what serves me best with the fewest interruptions.
I ran wayland for 3 years in various forms, and could not reconcile the bumps and stumbles in the process. Maybe next year I'll give it a shot again. There has to be something that it's doing for everyone for me to be hearing this much about it still.
I'm so glad modern GNOME has xrdp support built in. It's just a toggle in the settings.
My previous attempts to make it work have ruined the standard configuration so I'll probably need to do some serious purging/reinstalling to get it to work, but for new installs I'm sure remoting into GNOME must be a LOT easier.
Can xrdp or VNC display individual remote windows on my local desktop, side by side with local windows? I don't want to import an entire remote desktop. That's what I've always used X for, with seamless X mouse selection and middle-button paste.
> more comfortable to use than X forwarding. A PITA to set up though.
X port forwarding is painless, not sure what aspect you're referring to?
Yes, it is possible to display individual windows with rdp. X forwarding works ok with a fast LAN, it's very slow to draw over a slower link. SSH compression can help a little. If it works well for you then that's awesome, keep using it.
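For the curious, FreeRDP calls this RemoteApp; from memory the invocation looks something like the following (the /app syntax has changed between FreeRDP versions, so treat this as a sketch, and the host, user, and published program name are placeholders):

    # show a single remote program as its own window instead of a full desktop
    xfreerdp /v:server.example.com /u:me /app:"||notepad"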
Tried freerdp-shadow-cli, didn't work very well. Vinagre RDP client was unable to authenticate at all and just showed a black screen. I only knew that it was an authentication issue due to the server log. FreeRDP client didn't work with my multimonitor setup, it only showed the secondary screen. It doesn't scale the display properly either. The client must be set to the same resolution as the host. I'll be sticking with VNC for screen sharing, and continue using RDP for headless sessions.
> Moreover, the rarity of projects using Remote X, or abstracting it with libraries like NoMachine, kinda validates the idea that the feature is considered a novelty and not used much.
> While I like the feature, I have a feeling that it won't be missed or sought after much.
It’s not used or won’t be missed _by you_. It’s extensively used today by science and HPC facilities and clusters, where it’s at best complementary to things like NX (with the problems of basically having a second desktop instead of integrating into the existing system), but often the only option for remote GUI to heavier duty systems available, not to mention the remote OpenGL capabilities which although out-of-date and clunky, allows more efficient local rendering than just streaming bitmap deltas.
As an HPC cluster administrator, I (and my colleagues) don't see its use on our clusters, either.
Instead, researchers either post process their data in the user interface and download it to visualize, or get the raw results and post process and visualize in their systems.
Once in a blue moon, a user generates some output window which requires very few interactions to look at their preliminary data, that's all.
Yeah, I’m an HPC user (and small cluster admin by necessity). Remote X11 windows haven’t really been common for years. Occasionally you’ll see an Rstudio server instance, but this is tunneled through an SSH port forward and uses HTTPS.
The last time I needed an X11 remote session on a cluster, it involved Matlab and some kind of toolkit.
The performance of using a GUI remotely is nothing compared to just copying the final figure through an SSH tunnel and viewing it locally. I'd say the low performance of remote X11 was the biggest factor in this shift.
Also, I suspect that the bandwidth used by a typical X app was far lower back when it was invented. I remember seeing some greyscale X terminals, and probably not many levels of grey either.
It's interesting how our networks have gotten relatively much slower. When X11 was designed and for a long time afterwards, networks were fast enough. But now end user networking performance has plateaued much earlier and faster than computing power and app bandwidth requirements.
100M became the norm in the mid 90s, 1G in the mid 00s, but there was no 10G norm in 2010s or 100G norm in 2020s, we're still at almost the same spot of the curve 20 years later. Save for some tragicomic attempts at 2.5G etc end user networks.
Part of the reason of course is wireless, apps adapted to flakiness and bandwidth unpredictably alternating between slow and very slow, and also last mile consumer uplinks adapted to this.
I think there are two reasons: first, the price of the hardware required to reach these speeds (plus the cable requirements and range limits), and second, WiFi.
This slowdown is offset by improvements in video and audio codecs, transparent compression, and better static compression formats.
So having a 100 Mbps WAN connection at home arguably provides more utility than having 100 Mbps did 5 years ago.
Unless you do professional photo/video work, a 10G connection at home makes no sense. A good 802.11AC network plus some gigabit nodes is more than enough for a nice smooth experience at home, for most people.
P.S.: As a sysadmin, I know the delights of having a lightning-fast network wherever you live/work, but no, I'd rather spend time with my family than maintain that.
The price of hardware is a red herring. All the other ethernet hardware was also pricey at the start and then became cheap with volume. Cost and volume normally develop hand in hand (in opposite directions).
And yes, our apps adapted to working with less bandwidth. We don't know what kind of apps and computing we would have in the alternative universe where the norm would have been 10G connections 10 years ago.
For example lots of P2P stuff has been badly hampered by performance problems resulting in the centralization of the net in hands of big players.
There are lots of hard problems resulting from bad connectivity that let big-$$ tech funnel traditional net uses onto their services, by investing engineering effort (and complexity) into heroic feats that work around the bad connectivity. The complexity means a high barrier to entry for anyone competing with them. Codecs, yes, but also clever distributed caching, cloud storage, UIs that predict what the user does next, video calls that have to go through corporate encoding/multiplexing proxies instead of p2p, file sharing that has to be of the hard "sync the most likely needed files" variety with proprietary server-side smarts, instead of just "mount remote share", etc.
On the other hand, this works increasingly poorly as most desktop environments and software assumes there is no network involved (for example, last time I tried running Firefox over a forwarded X session I got a bunch of errors about a broken GL context). And you can do the same thing, with usually better results, by using VNC instead (and there are indeed VNC servers for Wayland, for example wayvnc).
Just had to use Firefox remotely (to use WeTransfer to ship a giant file, they dropped support for their cli, grr), and had exactly this issue - it works incredibly slowly over X over SSH. In the end had to use vnc over SSH.
(Note that tightvncserver seems to silently ignore the -localhost option now, which means it's completely insecure to run on an internet connected machine. Tigervnc still has the option)
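In case it saves someone the same dance, the VNC-over-SSH setup that ended up working for me looks roughly like this (option spelling varies a bit between VNC servers, and the display/port numbers are whatever your server hands out):

    # on the remote machine: start a desktop on display :1, listening on localhost only
    vncserver :1 -localhost

    # on the laptop: tunnel the VNC port (5900 + display number) through ssh
    ssh -L 5901:localhost:5901 user@remote

    # point the viewer at the local end of the tunnel
    vncviewer localhost:5901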
The issue is the new renderer, WebRender, or whatever it's called. There are options in about:config to disable it and force the old renderer.
Then it runs rather fast on X over SSH on a LAN, and somewhat okayish over the internet.
Ten years ago I was using it all the time and it was very fast over the internet and perhaps even through Tor. I do not know how they could mess it up so badly.
I'm not sure what experience you're describing; for me, the real crime when trying to run a remote Firefox over a forwarded X session is that it figures out you're doing that and instead opens another window of your local Firefox instance. It's been like that for a few years now. I can't imagine what they're doing under the hood, haven't looked into it, but it sure is irritating.
It's not detecting that you are trying to run remote. That's just the default behavior of the firefox command, to use the running instance if there is one. IIRC, there is an option you can pass on the command line to get around it.
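If memory serves, the flag is --no-remote (there's also --new-instance on newer builds), which makes the forwarded session start its own copy instead of latching onto the locally running one:

    # run the remote machine's firefox as a fresh instance over forwarded X
    ssh -X remote firefox --no-remote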
Without x2go most apps aren't usable over the internet now because x11 was designed around assumptions that are no longer correct and you end up with multiple round-trips during rendering.
Even X2go provides an experience that's strictly worse than using windows via rdp.
It just seems completely pointless to even bother with something like x11's network transparency if you're designing an x11 replacement nowadays.
It would be much better to focus on a new vnc replacement for Linux that can use low latency video codecs when needed for games, etc.
> x11 was designed around assumptions that are no longer correct
Nothing could be further from the truth. If you use Xrender properly you can make very sophisticated drawing operations that are extremely efficient over the wire and that are GPU accelerated even when the process does not run on the local machine.
It is Gtk and Qt that for whatever reason decided to ditch their Xrender backends.
> multiple round-trips
This is also a problem introduced by badly designed tool-kits like Gtk and Qt. It is trivial to design tool-kits in X11 that require no round-trips.
> It just seems completely pointless to even bother with something like x11's network transparency
This is mostly true because desktop software on Linux is mostly written by highly incompetent developers that produced abominations like Gtk3+.
> It is Gtk and Qt that for whatever reason decided to ditch their Xrender backends.
It was often slower and more likely to run in to driver issues than just doing software rendering and sending pixmaps. They're now moving to GL and Vulkan because they can be faster and while they have the same risk of driver issues at least that's the same stack used for video games and CAD and such so more people care about it working well.
Any data on this? I can remember that your typical Cairo backend ran much faster on a PC from 2004 than today's GNOME/Gtk runs on current machines. If performance is your only argument, you clearly lose with the modern GNOME/KDE + wayland stack. The only way you could make GLAMOR-accelerated Xrender faster is by putting the spline tessellation step into a shader. (This was not possible when Xrender was introduced but can be done today without breaking any APIs.)
This. I remember KDE3 being fast as hell doing 2D with a GeForce2 MX, much faster than a Geforce 8200 trying to run a similar rendered desktop but with OpenGL doing Xrender's job.
I saw a breakdown of how the X11 protocol over a network works (IIRC it was by one of the wayland proponents who was also an xorg maintainer, so he knew what he was talking about).
The protocol was very poorly designed: essentially everything needed several trips back and forth between the client and the server. That applies even if you're using Athena or Motif widgets; it just becomes much worse with Qt or GTK because they also transmit bitmaps, since otherwise they can't guarantee the look.
So it's latency, not bandwidth, which kills the performance. If you need 3 or 4 roundtrips just to move a window around, that's fine on your home network where you have a ping of 10ms; over the Internet with 100ms latency, 3 roundtrips become 300ms, so even a cursor move becomes painful.
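The arithmetic is easy to check for yourself: every synchronous roundtrip costs one full RTT, so

    # measure your RTT first (host name is a placeholder)
    ping -c 5 remote-host

    # 3 roundtrips x  10 ms RTT =  30 ms  -> barely noticeable on a LAN
    # 3 roundtrips x 100 ms RTT = 300 ms  -> every interaction visibly drags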
You cannot say a developer is highly incompetent if you yourself do not have the ability to be at least as competent as they are.
I agree there are problems with various stacks and widgets, but it isn't going to change anytime soon. At the end of the day, it requires someone who can create something from scratch, and those people are few and far between due to the amount of vision and work required. Especially if it's a non-paying task.
Most of your comment is pretty bad misinformation. X11 was not designed to use Xrender. That was an optional extension that came later. And on many drivers, the drawing of XRender is still not actually GPU accelerated either. Remember that XRender was designed long before GPU drivers were in the state they are now, back in the era when people still thought 2D acceleration was going to be a thing. And, XRender is extremely limited to a small handful of primitives that were mainly used to implement the postscript drawing model used by cairo.
It is also not trivial to reduce the number of roundtrips in an X11 program. If the program uses Xlib, then it probably needs significant portions rewritten to use xcb, which is a lot more complicated than it seems because Xlib handles a lot of caching and logic that xcb simply does not. In some situations X11 roundtrips are simply unavoidable. Complaining about GTK or Qt will not change this. Even if it were possible to fix those, there are fewer and fewer contributors to those projects who are willing to spend time maintaining X11 support.
I doubt it will ever be fixed, because it offers no benefits when the application itself uses actual hardware rendering with GL or Vulkan. XCB can't speed that up at all.
Sorry, I didn't clarify. The bug was fixed with some use of pixmaps, in other situations it's still broken.
It sounds like the vast majority of apps you use are very old. Even plain productivity apps benefit from GPU rendering. If they don't, it's because those apps are far behind; compare to things like Electron apps, where everything has been GPU-accelerated for quite some time because of Skia.
I find electron apps absolutely horrendous to use. The apps I use most are Zim, QtCreator, Strawberry, and the KDE apps: Okular, etc. - most of the time when I remote ssh it's for pavucontrol-qt, dolphin (the file manager) or mainly the app I develop, https://ossia.io (for instance for working on a show that is taking place over a raspberry pi). None of those force any kind of GPU rendering.
Also, apps that do GPU rendering make my laptop really heat up and lose battery quickly compared to when the GPU is not in use - I don't use a compositor partly for this (plus the occasional frame lag).
That's not really relevant. You can personally choose to not use Electron apps, but many people cannot or do not want to choose to do that. Like it or not, it's a thing now. I've noticed a lot of developers seem to have this confusion that anyone else can avoid Electron. I guess you can if you spend all day in the terminal and the IDE but most other people cannot.
And that sounds like something is seriously wrong with your machine or your drivers. GPU rendering has massively improved performance and battery life on every machine I've ever tried. And especially on embedded devices with a low power mobile GPU, see for example here: https://social.librem.one/@dos/104984930233748319
Compositing should do so as well by avoiding unnecessary redraws. That sequencer would probably benefit greatly on a Raspberry by using GPU rendering, the screenshot even shows it rendering video and shaders...
> Compositing should do so as well by avoiding unnecessary redraws.
I use a tiling WM so I doubt it would help much. If anything, if I run a whole system profile with perf, rendering does not even show up when compared to just running a git status here.
> That sequencer would probably benefit greatly on a Raspberry by using GPU rendering, the screenshot even shows it rendering video and shaders...
Sure, the video & shader parts use the GPU of course. But the main GUI can be rendered with OpenGL or with Qt's software renderer; from my tests (and lord knows I've spent entire weeks profiling and doing everything I could to improve rendering performance), the OpenGL backend for the main GUI only becomes worth it at 4k resolution - and GL is absolutely full of bugs on e.g. Windows; yes, there are still people with GMA500 GPUs. Qt's software renderer has no issue rendering at 1080p on a Pi 3.
Unless you are writing applications using motif and/or contributing to a better remote X11 experience, the why and how do not really matter if most popular applications are using GTK/QT and electron.
It was really nice and convenient at the time but it is going the way of the dodo the same way cars with manual gearbox are because almost nobody cares and there are other ways to connect remotely.
> It just seems completely pointless to even bother with something like x11's network transparency
this kind of statement astounds me. The internet is essentially based on network transparency, it's the gold standard for local area networking too. If it hadn't been invented yet and somebody just proposed a viable scheme to create network transparency, your eyes should light up with the possibilities... imagine a Beowulf cluster of these.
network transparency is what networks should provide, not for graphics, but across the board. It's the Apples and Microsofts of the world who don't give it to you both because they never understood it, and because it disrupts their walled gardens. You are thinking of a few network features that you want, and instead of conceiving it as properly architected orthogonal software features, you just want special purpose code for your two features (video and video games) in the current context.
This is a big misunderstanding. Network handling is stripped from the core design, but is very much available on all compositors.
The equivalent to X11 forwarding is waypipe[1], which is far superior to X11 forwarding. Rendering happens entirely on the host and clients can therefore use accelerated resources as they wish, and the (accelerated) h264-encoded buffer feeds mean much lower network utilization.
Low network utilization, yes, and perhaps superior to the X11 core protocol, but it doesn't come for free, for example in terms of latency. And what if said app is itself a video player? Then you get to decode the video on the server and re-encode it for transport, whereas a primitive-centric protocol could allow decoding the original video directly on the client.
The larger the window, the higher the requirements for the encoder (though Waypipe does say "This way, Waypipe can send only the regions of the buffer that have changed relative to the remote copy." - or is it talking about the video encoder?), whereas with primitive-based systems the requirements are only correlated with the amount of change on the display, while still allowing an image or video encoder to be used for tasks better suited to it.
I just fondly remember the times when (possibly two decades ago) I ran
xlockmore -mode ifs
at work accidentally from my home computer and it was running fine over the 100Mbit network, so I didn't realize my mistake until coming back from lunch. Basically just a bunch of pixels running around smoothly, but I think it would be quite a quality test for a video encoder..
I understand, though, that coming up with a great protocol for user-interface primitives would be a research project in its own right. Perhaps something based on JavaScript, WASM, or eBPF fragments sent to the client would be a realistic option. Time has certainly moved past the primitives provided by X11.
But I also think that just "forget about it, we'll video stream it" is just giving up on the problem altogether.
> Low network utilization yes and perhaps superior to X11 core protocol, but it doesn't come for free, for example in terms of latency.
You can have your cake and eat it too: You can disable the compression if it's a problem. It's highly configurable if you want to play with it.
The compression means trading a little bit of hardware resources at either end for a better UX (lower latency, higher throughput).
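From memory (check waypipe(1), the exact spellings may have drifted), the knobs look roughly like this, with "remote" and "some-app" as placeholders:

    # no extra compression: lowest CPU cost, highest bandwidth
    waypipe --compress=none ssh remote some-app

    # lossy video encoding of window buffers: lowest bandwidth
    waypipe --video ssh remote some-app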
> The larger the window, the higher the requirements for the encoder (though Waypipe does say "This way, Waypipe can send only the regions of the buffer that have changed relative to the remote copy." - or is it talking about the video encoder?).
The core wayland protocol mandates communicating which parts of a "surface" (read: window) have changed when a new buffer is submitted in a "surface commit". Neither a compositor nor waypipe will do anything if nothing has changed.
> I also think that just "forget about it, we'll video stream it" is just giving up on the problem altogether.
Each surface has its own stream, and is updated independently. E.g., a video player on a webpage will generally be a subsurface, a context menu or plugin is a popup surface. They're all processed independently, with each their own damage tracking (and if applicable, video compression). If content is stretched or scaled up, only the original source buffer will be transmitted, allowing the display server to take care of this.
This is not giving up, this is the maximum effort, optimal implementation.
> I just fondly remember the times when (possibly two decades ago) I run ... over a 100Mbit network
For reference, a single 4k 60Hz display takes ~15Gb/s to keep fed with bitmaps. Even a quarter of the screen takes 3.7Gb/s. Not even cinematic refresh rates would be able to fit within a 1Gb/s line.
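For anyone who wants to sanity-check that figure:

    # 4k @ 60 Hz at 32 bits per pixel, uncompressed:
    # 3840 x 2160 x 32 x 60 ≈ 15.9 Gbit/s
    echo $((3840 * 2160 * 32 * 60))    # 15925248000 bits per second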
> You can have your cake and eat it too: You can disable the compression if it's a problem. It's highly configurable if you want to play with it.
Well, you've of course lost the game at that point unless your surface is very tiny, given you have no primitives to express "write text hello at x, y", e.g. for a terminal; without compression, the bitmap representation of a terminal (or a text document) is very large. Scrolling will damage the whole screen.
> Each surface has its own stream, and is updated independently. E.g., a video player on a webpage will generally be a subsurface, a context menu or plugin is a popup surface. They're all processed independently, with each their own damage tracking (and if applicable, video compression). If content is stretched or scaled up, only the original source buffer will be transmitted, allowing the display server to take care of this.
That's pretty nice and better than I assumed! Some hardware has a limited number of hardware video encoder sessions, though; for example, on an NVIDIA GTX 1080 it's four for the _complete system_. I think for this reason waypipe reuses the contexts for different surfaces, resulting in lost video quality? Without reuse I'd expect one to run out of contexts quite fast, and then you spill to the CPU.
Video sending can be a nice way to provide remote frames because it naturally batches all drawing operations into one compressed frame, and it really works if your task is to send 4k 60Hz video. But can it really beat display-server-side composition, or even things such as text rendering? As a tool in the toolkit it's a very nice thing to have, but it shouldn't be the complete toolkit. For X11, Someone(TM) could implement XPutVideo.
I think the bad performance of many X11 apps is not inherent to the protocol but inherent to the programs written in synchronous style (libX11 included; libxcb fixes this), assuming immediate access to the display server instead of putting in requests while waiting for the answers to previous ones. Worst offender: VirtualBox. Rendering to a local surface and sending that surface as a whole can be a good solution for such code. And this kind of code is possibly so common because doing everything asynchronously can be tedious, and even more tedious in some environments such as C.
The X11 draw APIs are not really used outside stuff written in Motif or hand-written X11 clients like st. Instead, modern X11 clients sidestep all this for performance reasons and render on their own and post buffers (e.g. GLX, cairo, vulkan, whatever they may like). This means copying bitmaps when forwarding, over a protocol that is not made for doing so efficiently.
Sure, if an application is rendering nothing but text through X (and not using e.g. pango rendering to a cairo context), and you do not prefer using the resources of the machine running the application rather than the one displaying it, and you don't care about performance when using the application locally, then X11 might be more efficient. But for a purely text application, SSH puts both X11 and Wayland to shame.
Re: encoder limits, the Intel Quick Sync (built into Intel CPUs since 2011) documentation suggests the only limitation on parallel encoding is whether or not you can keep up with frame-rate requirements - an old example being 10 streams at full HD 30fps. I believe waypipe currently only applies video encoding to video-memory buffers, as a simple heuristic, since CPU memory (shm buffers) is only used for "low performance" content on Wayland.
> I think the bad performance of many X11 apps is not inherent to the protocol but inherent to the programs written in synchronous style ... And this kind of code is possibly so common because doing everything asynchronously can be tedious, and even more tedious in some environments such as C.
I believe even libxcb has forced synchronous parts, which are annoying Wayland compositor developers a lot as Xwayland-supporting compositors need a bit of X11 WM code.
The Wayland protocol is asynchronous by nature, and the primary client and server library exposes this with no synchronous pretenses in idiomatic C. Every function you call only queues a request that will be sent when your event loop dispatches next time, and when you receive events your event loop will fire callbacks in bulk.
Nothing is synchronous, and updating your window requires no wait. In the simple case, the only message you'll get back at some point is one informing you that the previous buffer can now be reused (buffer release).
Each time I tried running Firefox over ssh with X11 it either lagged or crashed. With Wayland's Waypipe it works flawlessly, I could even watch a Youtube video through it.
I'm double-checking it right now using a remote machine a few blocks away from my home. Running `ssh -X remote firefox` lags a lot worse than `waypipe ssh remote firefox`. The latter feels almost native, very responsive.
So no, I don't think we are giving up a lot. X11 was designed to work over the network, but it never really did.
The problem is horrible lag, and I mean the program just working badly, not latency. I tried doing it with X11 now, and I gave up before I could even type "youtube" into the address bar. Then I tried doing
and it wasn't better. I saw the new firefox window, but it stayed unresponsive.
And I bet part of the problem is that Youtube (and browsers generally) involve some non-trivial drawing technologies; rendering a video is not like showing a GTK interface.
Streaming is most sensitive to how much data you can push through the pipe in a given amount of time, not to how long it takes for a question/answer pair to travel between the server and the client.
Latency (how long it takes for messages to travel back and forth), which is where X11 over the network has major issues, doesn't matter in the specific case of watching a video.
Essentially, once you press 'play', there's just a torrent of data flowing down from the server to the client with very little need for two-way banter between the machines.
Yeah, no - X11 forwarding a video is at best "eh" when you're network local, and horrible otherwise. X11 forwarding burns a lot of bandwidth, and even when that bandwidth is available, going over TCP and SSH makes it implicitly quite latency sensitive.
It's not even fair to compare it to waypipe's h264 compressed buffer feeds.
And any program not made with goddamn XMotif will be slow as fuck on anything non-ethernet.
It is a strictly worse solution to remote desktoping than what followed it — we are no longer drawing things with CPU and the things we draw are not 3 rectangles. A bitmap crossing the wire for an icon/image/whatever is insanely inefficient through the X protocol, you want to compress it with a modern compression algorithm for much better outcome.
> Have you ever tried this? I've never had X11 network transparency be a good experience.
I did, nearly 20 years ago. The computers were over 2500 km apart, though it was over a good university to university connection. The Gimp was certainly usable, though I will not go as far as saying it was good since 20 year old memories can be hazy and the standards of the day are different from the standards of today.
I also experimented with X over a 14.4 kbps modem a few years earlier. It was mostly while writing papers (plotting graphs and viewing dvi files, not the actual writing). It was slow, but it got the job done and was better than juggling documents between software on two computers.
The one really nice thing about X was the ability to use individual applications across a network connection, rather than dealing with an entire remote desktop. All of this remote desktop stuff strikes me as being better suited for accessing a remote system, rather than for running remote software.
> The one really nice thing about X was the ability to use individual applications across a network connection, rather than dealing with an entire remote desktop
That sounds like a UX problem that's quite trivial to fix, not a technical one, and absolutely not inherent to Wayland.
If you're talking about shipping a singular rectangular region across the network, possibly. Yet not all windows are rectangular, and many applications use more than one window. Then there are issues with how the application interacts with the desktop environment, where the application thinks it is on computer A when it is actually being displayed on computer B. It is not a trivial UX problem.
I've used it in the past month to impromptu demonstrate something that I had set up on a GUI app at home (with a lot of dependencies) from my laptop.
Yes, it's laggy, but it's nice to have the option, and it's really nice that it's already integrated with ssh and doesn't need extra setup (past passing `-Y` to ssh). If this was the only thing I'd lose by switching window managers it wouldn't be a dealbreaker.
This is what I keep coming back to. A backend that targets Javascript can take advantage of the desktop/laptop GPU. A backend that targets Wayland has to render everything locally in software (no GPU), and nobody I’ve ever worked for wants to build a system that runs this way.
Xaw and Motif worked okay, and it’s getting hard to imagine a heavy user who has access to only one computer, yet newer GUI toolkits have lost interest in X11 as a platform for remote rendering.
I was excited about Waypipe until I read that it has to ship compressed pre-rendered video frames over the wire, and all the desktop can do is decompress and composite them. I believe VNC and RDP are much the same, sending bitmaps rather than structured GPU commands from the app.
We could expect everyone to start investing in datacenter GPUs, but architecturally “one GPU per display, plugged directly into the display” always made a lot more sense to me.
I actually still use this feature a lot. I run some linux software on a server and can see the window on my main desktop without any need for rdp, vnc or similar. It’s really great!
I used to use X11's network transparency features a lot in college[0], but it's been a good 20 years and I haven't used it since -- even once.
I, too, am not comfortable with throwing away such a potentially useful feature (though I believe there's a wayland "protocol" or something that allows for network transparency now), but I personally don't have a need for it.
[0] One of our VLSI design labs was a FreeBSD lab, and the machines there had a bunch of proprietary/paid simulation software on them. It was amazing to be able to ssh in from my dorm room, and run those apps "locally", with access to all my files on the network share. A few of my classmates wondered why I never pulled all-nighters in the lab with them... I never needed to! They all ran Windows at home; back then it was pretty difficult (or at least just kinda unknown) to run an X server on Windows.
I use it quite often. And I am dreading the day the office switches to wayland. I like having terminals open to several machines on the premises and being able to fire up individual applications from each. I don't really like having a workspace devoted to a vnc for each machine. It doesn't work as well. It makes me use the mouse more. I don't want windows into those machines. I want a gui on my machine hooked to the heavy lifting on another. I'm sure I'll adapt. But so far, it's a productivity killer.
It's in the "critical path" for certain niche applications. I do HPC operations, and every now and then, I run across some (usually commercial) scientific software application whose installer and/or configuration manager is a GUI. I am not sitting in front of the cluster head-node, I am ssh'd in to it.
With X, I can just go ahead and fire up the GUI, and it will appear. Maybe a bit laggy, but as long as it's only one application, it's usually good enough to complete the task I need to complete.
With remote work and thin pipes, it's laggier, but "xpra" also allows per-application X forwarding, and removes a lot of the lag.
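For reference, the xpra flow goes something like this (older versions want the explicit display-number form shown here, newer ones also accept ssh:// URLs; the installer name is obviously made up):

    # on the head node: start a persistent xpra display and launch the vendor's GUI in it
    xpra start :100 --start-child=./vendor_installer_gui

    # from the workstation: attach over ssh; the window appears like a local one
    xpra attach ssh:user@headnode:100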
Of course I could set up full VNC and set up a whole desktop environment on the cluster head node, and remote into it, and maybe in the glorious Wayland future, I will have to do that.
Sometimes when I talk about this, someone in the comments says that Wayland is working on some kind of per-application forwardability -- that would be nice if it's true.
I get that my use-case is niche, and so I don't expect it to be a high priority for the devs, but it's one of those things that, once in a while, is a critical link in my administrative workflow, and I'm not looking forward to the eventual kludging of workarounds.
Edited to add: I have now seen the comment below about "waypipe", this sounds very promising!
I used that feature very rarely, but the times I did it was insanely useful. Mostly running some debug software so the debugging session could run locally while I controlled it remotely, without installing a whole graphical environment and somehow setting up a VNC session just to run a single app.
> While on the road, you can use your notebook to open a Gimp session on your home machine and edit an image stored there.
How practical is that? I have horrible upload speed at home which ruins all these "just remote into your home machine" workflows for anything more than a terminal.
Not at all practical. Remote X11 isn’t really practical on a LAN, much less over a slower internet link. It’s cool in theory but not especially useful.
Realistically speaking, what matters is that openssh comes with X11 forwarding built-in.
Somebody should go and provide equally seamless "Wayland forwarding". That might end up looking more like VNC under the hood, but there's nothing inherently wrong with that.
I think GP was talking about how ssh has the -X and -Y flags for forwarding X, but you need to do something like -L XXXX:localhost:YYYY to ssh forward.
TigerVNC (and some others) will do e.g. "-via foo@bar :1" which GP might not know, but is rather convenient.
Xpra is still a better replacement for -X when launching a single application though.
Yes, that's what I meant, and thanks, those are interesting points.
My main use case is launching one-off visualization tools on a remote system that I'm ssh'd to anyway. ssh -X is hard to beat in terms of convenience for that use case, and e.g. a persistence setup would be overkill.
That will call ssh with the correct parameters on both sides and start the app. Seems just about perfect for what people want out of a remote solution that's not a full desktop.
It's a lot more than just cool compositing effects. If you are just trying to get some work done reliably with a dynamic multimonitor setup, Wayland is vastly superior.
I use remote display of apps daily as part of work. I've a Linux VM, and all compute- or memory-intensive programs run on LSF with X11 forwarding or via direct login. Sure, desktop users don't need the remote part, but enterprise / high-end compute definitely does.
I use X11 remotely daily (and did so for 25 years). It comes for free with a simple ssh login and doesn't require setting up any server on the remote machine. It just works (even if slowly at times). It beats setting up x2go/VNC/etc hands down.
I'd be a big fan if that worked, but I haven't found an open remote desktop solution on Linux that works out of the box and acceptably fast¹, to the point that I have to use proprietary software to perform remote desktop sessions - there is a huge gap between the open solutions and the proprietary ones².
¹ = they're all either incredibly slow or clunky to configure/use. Interestingly, X2Go, which I think did work out of the box but was too slow, is based on an old version of the NoMachine (NX) protocol.
waypipe[1] is the wayland equivalent people are looking for. It gives you the UX and ease of use of X11 forwarding, but with the performance and efficiency of things like RDP (which is gross, but can run quite well on Windows Server if you throw enough money at Microsoft).
Wayland applications will render entirely on the host using host resources and acceleration as needed, buffer content gets transmitted using (hardware accelerated) h264 encoding, while messages in general get passed as-is to give a fully local experience.
This. It takes hundreds of megabits per second to work smoothly.
If you're doing this while on the move, your mobile data is going to be gone in a matter of seconds (assuming the network were even fast enough to support it, which it isn't), so in reality you're going to be frustrated and wondering why you didn't use a different protocol. From VM to host, or on a gigabit LAN, it works okay, which is where I've used it before.
I used X forwarding a lot until last year, so I can find various arguments for it, but "while on the move" makes absolutely no sense.
What percent of Linux users do you think would ever use software this way? What percent of all "desktop" OS users would ever use software this way? It seems like an extremely niche and rare use case and probably not the right thing to target for the main desktop environment.
I've been using Linux as my main OS for ~12 years now. I have personally never wanted to do anything like that. For remote software, I would prefer a web UI or a VNC/remote-desktop tool.
X-forwarding (over ssh, or directly like this person is doing) is extremely useful and powerful. I use it this way, and I know plenty of people that do too. On top of that, wayland doesn't solve any problems that I have, so there's your data point.
Sure, but my guess is that <1% of Linux users currently use X forwarding, and probably <0.01% of desktop OS users would ever want something like this. It's a cool feature that should exist, but rare enough that it doesn't need to be fundamental to desktop architecture.
Using a wayland desktop, remote-login to a Linux server and start a graphical text editor like gvim: it fails. This is a very common way to dev on Linux; I too have been doing this daily, and with ssh+X11 it works out of the box. I wonder what the best way to do this with Wayland is. Should I ask an admin to reconfigure the remote server to run a VNC or RDP server, and then use a specific tool to connect? I wonder if VNC/RDP can integrate apps as windows instead of one all-in-one window.
I do something similar with vim, no gui. I use vim over ssh with tmux. Others I know use vs code with an ssh backend. These two make up the workflow of roughly 60000 Googlers. None of them need X forwarding.
Some people may benefit from X forwarding, but the vast majority of Linux users do not.
Your colleagues don't need X forwarding, so requirements from people that are not in the vast majority don't need to be addressed, they'll just have to use RDP/VNC even if it's a step backwards for them. Fortunately X11 is open source, so life can continue for those of us who need it.
Same at FB - vscode with ssh backend is the recommendation, ssh+vim works perfectly fine; last time I checked X forwarding did work, but even with a multi-gigabit connection and 20ms latency it was too painful so nobody did that...
>> X client and server are usually the same machine, but they don't have to be.
When I was in college there were rooms full of X-terminals which would be used to remote into various bigger workstations. You could also sit locally at one of those machines, but plenty could be done over the network and by more people on lower cost hardware.
Nobody is arguing it's not a great and powerful feature, nor that it's not "a lot to give up". The only argument posited is that not a lot of people will actually have to make that sacrifice (as not many actually use that feature). Is that not true?
X11 is way too slow to run something like gimp remotely, unless perhaps you are sitting near the other machine connected via gigabit ethernet. Giving that up is an easy win for better everything else.
> This is a lot to give up for cool compositing effects...
I've used networked X windows and it's always felt like a kludgy, crash-prone hack. The better solution is to render the entire desktop remotely and stream it over wholesale, such as with VNC and other protocols.
If companies like YC's Mighty have their way, the future will be thin-client based. And it won't be built on X Windows, because that's the wrong layer of abstraction.
The thing I dislike about Wayland is: Sometimes the server will crash. Video drivers or something. Hasn't happened much with 22.04, but used to happen more frequently before. Both with Intel and AMD.
When this happens in X11, the server restarts, I see the screen go black for an instant, that's it.
When this happens in Wayland, all my open programs are killed. No save prompts or anything, just insta-killed. This makes Wayland unusable unless they can guarantee no crashes, ever.
Same here. Only, Wayland just hangs the entire screen, no recovery possible. I could reproduce this across distros and hardware (Intel, AMD and Nvidia). This was using stock Fedora 34, and I haven't tried since, because those hangs would occur within minutes of boot, and the closest I could come to a diagnosis was "must be a driver bug". Exactly the same bug for all three GPU makers...
Unfortunately, nobody seems interested in such user reports or quality control (apparently there are many like us who keep getting "upgrade your hardware", "works for me", "user error", etc.). I retreated to Debian stable, which is indeed very stable.
A display server crash is fatal under X. A window manager crash is not. A window manager is not particularly affected by GPU drivers.
What you see might be a GPU hang or reset that recovered.
Some Wayland compositors have a window manager model by using an external process for window positioning logic (e.g. river). This is mostly just for convenience of development though, making it easy to experiment with layout engines.
> A window manager is not particularly affected by GPU drivers.
Since the window manager is also the compositor in many X setups (e.g. GNOME and KDE), the WM is very much affected by GPU drivers. It doesn't have to be the GPU drivers though, plenty of complexity in the WM/compositor itself.
The core complaint is absolutely valid - the protocol and libraries should have been designed so that applications can recover from a server crash/restart, even if the legacy X server does not provide that guarantee.
To be nitpicky, if the X server crashes, you're just as hosed with X as with Wayland.
The difference is that X survives a window manager crash, whereas Wayland typically does not, because on Wayland it's usually the same process as the server.
I really want to support Wayland. X has served us well, but it's about time a protocol designed around more modern situations supplanted it.
Wayland has a myriad of these sharp-edge cases, which I also find each time I try it out. What worries me a little is that some of them are clearly design decisions with no simple remediation; your crashing example (which I have also witnessed first-hand) is one of them.
Though I'm not sure what the current status of that is, or how actively it's progressing. Hopefully it will turn into something usable; it's important.
I talked to an X11 dev once about this. I remember him saying that X already supports recovering. The issue was that most compositors and applications did not.
GPU recovery did use to work on radeonsi with compositing disabled at some point. Of course you are usually better off rebooting anyways because who knows what state all the million bits of the GPU were left in after a crash that is by definition an unanticipated event.
I think currently we have compositors that are monolithic and do "it all". Sounds like we'd need a separate process to maintain window state that the compositor interfaces with. Is this recreating some of the problems of X11?
Perhaps a better approach would be to make sure that clients can wait for the server to restart and reconnect. Then you don't need any crash-proof server part.
As a fellow X11 apologist, this was pretty cool to read. It's nice to see that things are shaping up, even if there are still quite a few rough edges.
Having said that, I'm still a die-hard Xfce user, and until someone makes an xfwm4-like Wayland compositor, and ports (at least) xfce4-panel and xfdesktop to Wayland, I'm not gonna switch.
Every 2 years I take Xfce (Xubuntu, Mint Xfce) for a test drive. It seems light and fast, I want to like it. Then every 2 years, I rediscover that the resize border on each window is 1px wide, which is completely unusable. When I google for a solution, the answer seems to be use Alt-Left-Mouse. I don't understand how that is a solution. I don't want to use 2 hands to resize a window.
Used Xfce for a decade. I have never had trouble using the mouse to resize windows. I use 1080p screen, maybe you use a higher resolution and 1px is too small on that.
If you right-click the title bar of any window, a menu opens from which you can select resize.
It has been 7 years since the initial report and the issue is still very much active, but the assigned dev says it's a low priority for him. If you know how some FOSS projects work, there's very little hope for a change anytime soon.
Wat. I'm writing this from under Xfce. Just targeted my mouse pointer at a window border, moved it around. No, the resize-sensitive width is definitely wider than 1px.
Have you tried changing the window title / border theme in Settings / Window Manager? There are a few themes with resize corners only provided on top and bottom, and with left/right borders 1px thick; you might have switched to that and had the frustration. But there are definitely a number of themes with comfortably thick border. (My preferred one is Smoothwall.)
> When I google for a solution, the answer seems to be use Alt-Left-Mouse. I don't understand how that is a solution. I don't want to use 2 hands to resize a window.
For what it's worth, it works very well one-handed on laptops, this is how I've used it for years and I greatly prefer it. The click target, that is the area you have to aim for with the mouse cursor, is much bigger than even the most absurdly thick window borders, so I find it much easier to use.
Yeah, that's frustrating sometimes, and I don't even have a super hi-res screen.
But I just live with it. Honestly I don't resize windows that often. I either use the window's natural size, or maximize. In the 18 years I've used Xfce it hasn't been a big issue for me.
I'll be holding out for that too, Xfce has been my DE since the day that Gnome 2 was forcibly retired. A simple, configurable, responsive, unopinionated, low fuss DE is essential.
There are lots of stubs of news out there on the web that xfce development may head in that direction, potentially moving from xfwm4 to mutter, so fingers crossed.
(I am definitely an xorg apologist but ... that's because it does what I need, painlessly. As/when wayland can, I'm good to move, especially if the majority of development is taking place there. We just never quite seem to reach that point.)
I've tried to use Sway a number of times, and the biggest reason I keep bouncing off it is environment variables.
There's a bunch of environment variables I want set for every process in my desktop session, from basic things like $EDITOR and $LD_LIBRARY_PATH to more complex things like $MOZ_USE_XINPUT2 or $SSH_AUTH_SOCK. When I use i3 under X11, gdm runs my ~/.profile script so I get all my standard login environment variables, then it runs ~/.xsession so I get all my GUI-specific settings. There's also some integration with systemd so that the SSH agent can set $SSH_AUTH_SOCK and have it show up in my terminals.
With Wayland/Sway, none of this works. gdm does not execute ~/.profile, and there's no equivalent to ~/.xsession. Something starts a user-level systemd instance, but its environment variables aren't synced to the compositor, and the compositor doesn't use it to start apps, so $SSH_AUTH_SOCK isn't available in anything I care about.
Having an SSH agent seems like such a basic feature, but it doesn't work for me and I can't even imagine where I'd begin to look for the problem, or for documentation to learn more about it.
This is not really a wayland or sway problem, but a systemd problem. It's really annoying because the documentation around this is very scattered and contradictory.
The advice used to be that you should use the pam user environment (~/.pam_environment; you need to edit the pam configuration on some distributions to allow user environments). However, IIRC there are some possible security implications, so this has been deprecated. I think you are now supposed to use systemd user variables (or some such). I'm not at my computer right now so can't look it up, but hopefully that gives you enough Google keywords to set things up.
It seems like the main developers behind systemd and several of the other fundamental layers of the system stack only use GNOME/KDE and assume everyone else does too. So some things become quite frustrating to set up for regular users.
Yeah, I can create files in ~/.config/environment.d/ to set some environment variables. My problems are that (a) that syntax only allows very simple variable assignments, not complex things like "run ssh-agent and add the environment variables it prints to stdout" and (b) those environment variables aren't available to the Wayland compositor or applications it launches.
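The workaround I've seen for the ssh-agent half of this is to flip it around: run ssh-agent as a systemd user service on a fixed socket, so the environment.d entry only has to point at a known path (the file names here are my own invention, and this doesn't fix the compositor half):

    # ~/.config/systemd/user/ssh-agent.service
    [Unit]
    Description=ssh-agent on a fixed socket

    [Service]
    ExecStart=/usr/bin/ssh-agent -D -a %t/ssh-agent.socket

    [Install]
    WantedBy=default.target

    # ~/.config/environment.d/ssh-agent.conf
    SSH_AUTH_SOCK=${XDG_RUNTIME_DIR}/ssh-agent.socket

Then `systemctl --user enable --now ssh-agent.service`.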
Definitely agree. It took several days to figure out the exact environment variables needed to get basic applications working and scaling properly. For example, if you want “click link and open it in default browser” to work in, say, Slack, you need to set environment variables for it.
And since (like you say) basic profiles are not executed on startup, it’s very difficult to fix anything.
I completely disagree with the Sway author’s approach to go batteries excluded for the project. It means that next to nothing works out of the box, and that you either have to put in hours of work to fix undocumented things, or rely on a distro to package everything exactly right. And there really aren’t any distros packaging Sway right now.
Sway would benefit from integrating more of these things out of the box and it should rely much less on end-user configuration and other software to accomplish basic desktop behavior.
One excellent example is that you need a separate Lock Screen app like swaylock to lock your computer. And since you’re in charge of configuring all its settings, it’s difficult to know if you’ve done something insecure.
It seems to me that sway is intended for power users, people who know exactly what is going on with their linux from boot to window manager, and thus have no difficulty configuring whatever they desire.
If sway adopts opinionated defaults it will mess with people who don't care for those defaults. Perhaps there is a way to do it such that opinionated defaults immediately turn off in the presence of configuration but as far as I can see there is no interest in this.
Under GDM you can use systemd environment.d(5)[1] to configure those variables. I have some examples in my dotfiles[2]. Your Sway configuration also has to inject its own environment variables into the systemd session, as documented here[3]. Arch Linux does that in `/etc/sway/config.d/50-systemd-user.conf`[4].
If you really want to run some shell scripts before sway, then you have to create a new desktop file under `/usr/share/wayland-sessions/` that calls a script which sets up the environment and then does `exec sway`[5].
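As a concrete sketch of that last part (paths and names are just examples):

    # /usr/local/bin/sway-run  (mark it executable)
    #!/bin/sh
    . "$HOME/.profile"        # pull in EDITOR, SSH_AUTH_SOCK, and friends
    exec sway

    # /usr/share/wayland-sessions/sway-custom.desktop
    [Desktop Entry]
    Name=Sway (custom env)
    Comment=Sway started through a wrapper that sources ~/.profile
    Exec=/usr/local/bin/sway-run
    Type=Application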
When you 'exec sway' it inherits the tty's env vars... So just stuff your vars into whatever .profile the tty loads when you log in after boot. This has never been a problem I've spent any amount of time on.
If you are doing something magical like auto-starting sway, then you need to make sure a .profile is loaded by whatever is starting sway, if it doesn't already inherit one.
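The usual shape of that, for people who log in on tty1 and want sway to come up automatically (the variable values are just examples):

    # at the end of ~/.profile (or whichever profile your login shell reads)
    export EDITOR=vim
    export MOZ_USE_XINPUT2=1

    if [ -z "$WAYLAND_DISPLAY" ] && [ "$(tty)" = "/dev/tty1" ]; then
        exec sway
    fi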
In addition to the other reply: he mentions gdm, so he likely doesn't log in from the console. This is a bit extreme, but one day they might (for security reasons) remove that too.
> With Wayland/Sway, none of this works. gdm does not execute ~/.profile, and there's no equivalent to ~/.xsession. Something starts a user-level systemd instance, but its environment variables aren't synced to the compositor, and the compositor doesn't use it to start apps, so $SSH_AUTH_SOCK isn't available in anything I care about.
This probably won't help you, since you say the compositor isn't using the user-level systemd instance to start apps, but in case it helps someone else: I recently faced a similar problem at work (I needed to set the DOCKER environment variable so that unit tests running directly from the IDE could find the podman socket from the user-level podman.socket systemd unit), and solved it by creating a file under ~/.config/environment.d (https://www.freedesktop.org/software/systemd/man/environment...).
Same here. I also use i3 in a non-standard way: inside KDE, with i3 instead of KWin, and it has been the dream setup for me navigation-wise. I wish I could somehow do the same with Sway under Wayland, but that doesn't seem to be supported.
Yeah, I've got a lovely GNOME+i3 setup that I really like. There doesn't seem to be an equivalent for Sway, but the GNOME services get more memory-hungry over time so I'm always looking to see about Sway becoming more usable.
I settled on pam_env and ~/.pam_environment for handling these. Required a small change to pam config to load it by default, but it felt like a sane solution at the time.
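For reference, the file is just pam_env.conf-style lines, and the "small change" is usually enabling user_readenv for pam_env (values are examples):

    # ~/.pam_environment
    EDITOR           DEFAULT=vim
    MOZ_USE_XINPUT2  DEFAULT=1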
Over the last decade and a half I've written dozens of small tools, applets, etc. to hone the desktop environment I use. Half of those have required patching, and some don't work at all (like a nifty one-liner I used to kill off hung ssh connections by targeting the ssh process name, which no longer includes the IP or domain).
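(The one-liner in question was, roughly, the obvious pkill pattern; the host name here is a placeholder:

    # kill a hung ssh session to a given host by matching its command line
    pkill -f 'ssh .*buildbox.example'

which stops working once the process name no longer carries the host.)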
I will use X11 until I can't - simply because I do not want to rewrite my tooling.
Agreed. I use xdotool heavily, ssh -YC quite a lot. underpowered devices...
I'm going to stay on X11 as long as there are options. Thankfully, there are still plenty of options, FOSS seems good at that long tail. Just like MATE still offers me a nice simple desktop that doesn't even force compositing for the particularly wimpy laptop.
This is a good article, one of the best in this vein that I've seen. The author clearly had some features that he really needs from X and has been tracking their progress under Wayland. If you find yourself in a similar circumstance, I think it's a good read.
My own experience was different. I came back to Linux as a daily driver after years away. I bought a machine expressly to be a good Linux workstation and set it up from scratch. So for me the new stuff (Wayland (via Sway), Pipewire, etc..) has simply been excellent. I just didn't have the legacy issues to deal with that other people have.
A pet peeve of mine is the lack of subpixel rendering: if you move from X to Wayland, it's hard to get used to fuzzy fonts.
Then again, recent macOS versions have also dropped it, and I couldn't stand the fuzzy fonts there either (so much for their screens being "retina" screens).
In order to do subpixel font rendering correctly, the application needs to know about the subpixel arrangement of the display hardware. So it falls under the same category as things like color management, which Wayland also doesn't support; they just didn't standardize an API to provide that information.
Thanks: I haven't dived deep enough to know exactly where the issue was, but booting into a GNOME Wayland session on Ubuntu 22.04 resulted in blurry fonts on my external 32" 4K screen, whereas just switching the session to GNOME on X worked fine.
My search at the time (a few months ago) suggested that subpixel rendering didn't work (changing settings from gnome-tweaks didn't do anything either).
There are still odd issues with HiDPI, but I think that's mainly down to XWayland support in apps that don't support Wayland natively yet.
e.g. if I'm running VLC and I move it from a 1:1 screen to a HiDPI screen, my mouse pointer goes tiny as it passes over it. It's a similar situation for some older Electron-based apps too.
Support for XWayland is OKish but not seamless.
Hopefully time will solve that; for VLC I think that should happen in version 4.
Mixed DPI is basically unusable on Wayland due to X11 apps but last I checked, X doesn't even support this at all.
I gave up waiting because, years later, almost every non-GNOME app is Electron-based or otherwise running in XWayland. I just changed my monitors to have the same DPI scale.
> Mixed DPI is basically unusable on Wayland due to X11 apps but last I checked, X doesn't even support this at all.
Xorg/RandR provides the necessary info, but it is up to the apps (really, the toolkits) to work with it, and doing it properly needs the cooperation of both window managers and toolkits to define some messages/events for scaling, etc. It shouldn't be any different from the other common stuff between X clients, like ICCCM and EWMH (if anything, it should be part of a new version of EWMH), but nobody has bothered so far.
Also, there is the -separate- issue where, under XWayland, Wayland lies to X about DPI, so apps cannot scale themselves if needed. E.g. Lazarus/LCL has its own scaling logic which works even with the Gtk2 backend (Gtk2 has no scaling logic itself) under Xorg, but under XWayland the compositor lies to the program that it is running at 96 (or whatever "non-HiDPI") DPI and does bitmap scaling instead (so applications appear blurry). Or at least this is how it is under KDE (which is the only DE I have installed on my PC as a secondary environment for testing stuff out - I mainly use Window Maker as my main environment).
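For what it's worth, the RandR side of this is visible with plain xrandr: each connected output reports its physical size, from which a toolkit could compute per-monitor DPI (the output shown below is illustrative, not from a real machine):

    $ xrandr --query | grep ' connected'
    DP-1 connected primary 2560x1440+0+0 (normal left inverted right x axis y axis) 597mm x 336mm
    # 2560 px / (597 mm / 25.4 mm per inch) ~= 109 DPI for this output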
I've been away from Linux GUI systems for some time, but isn't that resolved now with Ozone in Chromium, which brings Wayland support? I remember using an Electron beta with Ozone and building VS Code with it for proper Wayland support. It worked quite well then.
I saw that it was supposedly fixed a while ago, but as of this year I can confirm that Steam, Spotify, Telegram, Discord, and VS Code do not DPI-scale properly.
The most offensive one is that Telegram _used_ to scale properly, and somehow they broke it about a year ago. GTK also still does not support fractional scaling, even though the majority of monitors being sold right now are optimal at 125% or 150%.
I have had a set up with a “retina” iMac (5120 × 2880) surrounded by two Thunderbolt displays (2560 × 1440) for close to a decade. No Linux distribution I’ve tried has managed to handle the different DPIs properly during this entire time period. There may be a way to cobble it together but I gave up many times. Neither the display layer, the window manager, nor the apps seem to be able to take full responsibility for figuring it out.
Windows (through Bootcamp) barely understands the setup and even still, dragging windows across the DPI borders results in horrible jank and artifacts.
Only macOS has this “multiple DPI monitors wizardry” figured out, and it worked from day one.
These days the distros themselves all support it fine and the built-in programs all work correctly. But any program which relies on the XWayland compatibility layer does not, as X11 has no support for switching DPI live without killing the window and recreating it.
Apple has the advantage of forcing programs to use new APIs, while on the Linux side you have boomers insisting on using obsolete tech until they die because it lets them use GIMP over telnet.
Once upon a time (or long-forgotten concepts?) there did exist a window system with full client-side scriptability, i.e. it implemented what nowadays is "the browser as a thin client" with JavaScript for client-side rendering:
Gosling, James; Rosenthal, David S.H.; Arden, Michelle J.: The NeWS Book: An Introduction to the Network/extensible Window System
NeWS used PostScript as the language for client-side tasks. It seems this good idea was just way too early for its time.
> So what you do is, you keep stepping this number up 1 millisecond at a time while playing a smooth animation, until you’ve eliminated any stuttering in the animation. And now you have done something X11 cannot do- eliminated screen tearing with the absolute minimum latency cost possible.
This should be a debugging tool, not something end users should have to do. The system should have a software PLL control loop to make sure rendering starts at the last possible moment and not later.
Also, it should be using the current mouse position + velocity to predict where your mouse will be at display time, although that may be harder on a generic x86 architecture - I'm not sure there's any software knowledge of what the display scan-out time is, as it can depend on the physical properties of the display.
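For context, the knob the article describes stepping up is sway's per-output max_render_time setting; a sketch of what tuning it looks like (the output name and the millisecond value are just examples):

    # ~/.config/sway/config
    # Give sway N ms to composite before the next presentation time;
    # raise or lower the value while watching for stutter.
    output eDP-1 max_render_time 5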
> If you’re into Gnome, Wayland is probably a good experience today out of the box, even if you aren’t a power user.
I'm not sure this is true yet. I recently tried Wayland together with GNOME on an Arch installation, and seemingly applications need specific fixes, otherwise they are kind of buggy. Firefox had a bunch of issues out of the box that required some incantation of the right environment variables to work properly with Wayland, and it seemed the same applied to other software as well. If I recall correctly, some other application was also buggy/showing artifacts; I think it was maybe DaVinci Resolve, but I'm not 100% sure...
For now, I'll just stick with Xorg that Just Works(TM), but I'll be back to try Wayland every now and then until it works perfectly.
I probably won't care about Wayland till it has full parity with X11.
And that includes global keylogging and screenshotting. Obviously some kind of security mechanism would be needed so not just any program can do it, since that seems to be the point of Wayland, but I'm not a fan of just killing use cases and creating a mess of incompatible servers and optional features.
I'll ask here in case anyone knows: I actually needed to move from Wayland to X11 on the new Framework laptop because of problems with the touchpad, which I believe uses libinput in Wayland. It appeared as though small motions of the touchpad weren't registering and seemingly no amount of fiddling with the libinput parameters helped. I moved to X11 and the synaptics driver and have had zero issues. I'm not really looking for tech support, but I am curious if anyone else has had similar experiences. This was the final issue that pushed me off of Wayland and back onto X11.
Yes, that's the only reason I can't use Wayland as well. The touchpad becomes unusable, and as I'm only using that for my work, it's extremely frustrating. I can't seem to aim at anything; I've tried every configuration possible. Wheel momentum is also gone, and it doesn't pick up micro-motions the way synaptics + X11 does. The screen looks great though, graphics and motions feel much better, it's amazing to look at, but it's unusable.
Odd, I was just playing around with Wayland today on my new (12th-gen) Framework laptop, and I didn't see the issue you describe. I also use the libinput driver on X11 without issue.
Can someone address the various criticisms, possibly slander, of Wayland along the lines of "well, it works now, but it is already outdated"?
I can't remember where I read it, but among the claims were that it is too closely coupled to GNOME, and that various things like fractional scaling, apps having to provide min/max/close buttons/decorations, or window shading are not in the core protocol because of that. Also - and more seriously - the rendering was claimed to be based on an outdated/wrong model, meaning GUI applications can't detect and optimize for things like when to render or not, e.g. make decisions based on refresh rates or not being focused, resulting in performance or battery life problems.
The other point about missing native network support I can dismiss myself, but for the others I am not knowledgeable enough.
Same here. I need to restart GNOME Shell every time I update an extension for some stupid reason, but it's easy and I don't lose any work. This is a sucky design decision that is just a showstopper for me. I wish it the best, I guess...
I also noticed this immediately and it was infuriating. My display runs at 240 Hz, so there must be at least 2, maybe even 3, frames of latency for it to be noticeable. It's made more perplexing by the fact that Wayland was supposed to represent a reduction in latency.
> Out of the box, there’s a bit of that in wayland too, but sway has a way out: max_render_time.
This is possibly the worst way you could reduce input latency - mostly because the tail end of render times will be the most noticeable and the worst performing - please never do this unless you're trying to make your users inexplicably angry :)
> when really, other X servers exist, and have varying degrees of support for the extensions Xorg supports
A non-point here from me, but I only know of Xorg and XFree86, and I think the last time I tried the latter, Linux kernel versions started with a two... :)
You’ve got ones like VcXsrv on Windows, which was particularly handy for running GUI stuff in WSL a few years back (before Microsoft provided a first-party solution in WSLg).
Is there a single Wayland supporter who finds the user experience to be better than X11? Because my understanding of Wayland is that the only people who want it are gui devs.
>Is there a single Wayland supporter who finds the user experience to be better than X11?
The immediately noticeable improvement with Wayland is with respect to screen tearing, especially in multi-monitor rotated configurations. Screen tearing has always been an issue on X11, and while the Intel/AMD drivers do a decent job in basic cases, the things that don't work will likely never work given that X11 is mostly deprecated.
Screen tearing has not been an issue on X11 for like... a decade? Granted, that's after development on Wayland started.
Also, I just don't get this obsession with screen tearing. I've experienced it on occasion in other contexts and I... just kinda don't care about it? Certainly if you're a video editing/creating/whatever professional, it actually matters for your job, but otherwise it's just cosmetic, and often hardly noticeable at all.
On X when in a Zoom call, if I drag any window around the whole DE starts tearing wildly to the point of it being very distracting. On Wayland, it's smooth as butter (or Windows or MacOS for that matter).
This is just one very easy to replicate example. There are a bunch of other seemingly simple tasks that lead to some rather hideous tearing in X.
Ah yes. Issues you don't have don't exist. Especially with super consistent, trivial, uniform system components like graphics drivers, display servers, and compositors on Linux.
No bloody idea; nouveau is broken. Sorry to break it to you, but you either use the NVIDIA binary drivers or buy a different GPU.
The issue does not exist. Period. If you have trouble enabling these options, I'm sorry for you. Linux in 2022 still requires you to choose your HW wisely. If you don't like it or have a different PoV, it's not my problem; it is the status quo.
I have noticeable screen tearing on vertical monitors right now when scrolling and dragging windows on Fedora 36/Gnome 42/Xorg/nouveau. Glad to know this issue doesn't exist so I can stop paying attention to it I guess.
I don't know whether you would call the author a supporter yet, but from the article, with respect to an issue that matters for gamers:
"And now you have done something X11 cannot do - eliminated screen tearing with the absolute minimum latency cost possible.
"This [is] fantastic.... And like, this is the first time I’ve ever seen the vsync setting in a game actually sync the game up with the vblank interval in a way that matters. It works for games in wine. It’s amazing. I have never experienced gaming on Linux that looked this smooth in my life."
On a previous laptop I used i3. After a few years on Windows, I returned to Linux on my current laptop and decided to try Sway, which I've now been using for almost a year and a half. I set up i3 somewhere along the way too, and have used it when I needed screen sharing in Zoom.
I much prefer Sway. It handles output management much better than i3, because it's integrated (and integrated well) rather than being entirely up to you with xrandr; this probably wouldn't apply to full desktop environments like GNOME or KDE. It supports mixed-DPI environments and properly supports high DPI (though I've also been using patches for fractional scaling, since I want 1.5×). It avoids all tearing, which was what really surprised me when I ran i3 again: I'd forgotten what the tearing was like. And it supports my XF86AudioMicMute key (key code 256; it took a little effort to get it to work, involving dumping the xkb keymap and adding a suitable entry, but I think it's literally impossible to support under X, though you may be able to remap it to a different key like F20 at a lower level; my attempts at that failed).
It’s not been without its troubles. Screen sharing is only possible at the screen granularity rather than individual windows, and I think Zoom is still broken because they did things stupidly in the past (using a GNOME screenshot API many times per second instead of the compositor-neutral screen sharing API that did exist when they implemented their thing) and are still unravelling them. I’ve also had a couple of apps require tweaks to unbreak, e.g. https://github.com/CadQuery/CQ-editor/issues/266, if you build it with a version of Qt that supports Wayland (the default, though their first-party distribution doesn’t), you have to explicitly tell it to use xcb instead of wayland or it crashes on startup. But honestly that’s all I can think of.
Oh, one more thing, I guess. Cursor sizing is comically broken in Sway. With `seat seat0 xcursor_theme Adwaita 96`, I get cursors at at least five different sizes when hovering over different windows. Some ignore it and use the default size multiplied by the scaling factor. Some use it but ignore the scaling factor. Some use it and multiply by the scaling factor. Some use it and multiply by the scaling factor rounded up to the next integer. Some do different things altogether. I haven’t diagnosed it all yet.
Yes, the fact that mixed DPI display actually works properly on my laptop and desktop is a massive user experience improvement that overshadows all the downsides.
> Because my understanding of Wayland is that the only people who want it are gui devs.
That might be, but by definition the devs decide what everyone uses.
I mean, if you're willing to pay them to work on something else, fine, but as long as this stuff is run mostly by volunteers, that's kind of the score.
Hey, I recognise you! You're the bemenu guy! I must admit I recently swapped `bemenu` out for `tofi` but I did use it for several years and it worked well, so thanks for that.
I am one of those people who has been watching Valve's support of Proton and the Steam Deck, and I'm already familiar with Ubuntu for development... and now I'm considering leaving Windows behind for good.
So far my experience with both development and gaming is good; however, I find that in order to control the NVIDIA GPU (a GTX 1080) I need software like GWE to set the power limit and fan curve... and for whatever reason these do not work in Wayland.
I wish I could use Wayland though - I did notice the desktop feels smoother.
Can we switch now? Are there forks of GWE for Wayland? Or do we still need to wait months/years until these tools show up?
PS: if I upgraded my GTX 1080 to an AMD card, would I have apps to control power limit / fans on Wayland?
I left Windows behind when my preview program instance was upgraded to Windows 11. I loathed the advertising that was encroaching into what felt like every aspect of the GUI.
I moved to Fedora and haven't looked back. This coincided with me no longer playing a lot of PC/Windows games, so I haven't missed many of them, and there's a great many that work great in Steam/Proton.
Using sway on Arch Linux, the only problem I have with Wayland is Java/JetBrains products and their lacking support for native Wayland. The rest works the same as on X11 for me (or better).
> Wayland wants every frame to be perfect. That means no screen tearing
That idea was quite central, but it still ignored some use cases where tearing is acceptable in exchange for the lowest possible perceived latency (competitive gaming and such).
I've been using Wayland with sway for about a year as my daily driver. I have XWayland too and a few applications use it; specifically, I'm using dmenu because all the wlroots-native menus are missing some features I need (why is there no text-based menu in the world that can read from bash aliases and PATH at the same time?), and I've got to say, it runs perfectly for me. Granted, my machine is mainly a workstation and I don't really game; the most I do is watch a video with VLC. But I have run into problems with game emulation - specifically, running RetroArch/libretro just doesn't work for me; I haven't dug into it yet. I have run Minetest and several Quake 3 engine based games on it with no trouble.
I'm probably going to build an Alpine or NixOS based workstation from scratch using Wayland and Sway at some point in the near future, this was an experiment with Debian I did that worked out so well I just kept using it.
I'm using it right now on Nvidia. It 'works' for most Wayland/Nvidia things, but there are still a few things to note.
The Good:
- No weird xWayland flickering like GNOME
- Desktop transitions never drop a frame, even with 1:1 trackpad gestures on my Magic Trackpad
- The kwin implementation seems to be less picky/buggy than Mutter? Might just be my hardware config, but Wayland/Nvidia/GNOME would crash constantly for a number of reasons. It all came down to random gnome-shell segfaults that I couldn't debug.
The Bad:
- Compositing/alpha effects are somewhat broken (on GTK and Qt)
- Sometimes a panel will stop responding or fail to render (or both)
- krunner seems entirely broken
Overall, I'd say it's running pretty well, and is probably better than GNOME for Nvidia users. Their work is cut out for them, and the Wayland-specific bugfixes are starting to roll in on a weekly basis. You'll probably have an even better experience if you aren't using Nvidia hardware.
I'd say that for a smooth desktop experience, Nvidia is a bad option in general (at least not yet). They only very recently started caring about Wayland support and a lot of things are rough because of that. Plus it will be a long time before their kernel driver is upstreamed, and a lot of the above depends on that.
So I strongly recommend AMD for good modern Linux desktop experience.
I agree, but that doesn't make it any less valuable of a metric. Nvidia cards are extremely widespread, reporting the performance on Nvidia hardware does a good job of representing what a lot of people will experience.
Furthermore, Nvidia made inroads towards Wayland support years ago; GNOME just refused to adopt it. Nvidia's terms were always that Wayland implementations could adopt EGLStreams whenever they wanted, and that GBM would not be considered an acceptable alternative until it was faster. Their so-called hostility towards the Linux desktop mostly amounts to not contributing to GNOME and keeping their drivers proprietary for so long. In that sense, they're about as evil as WebKit contributors who don't fix x86-linux bugs.
That whole GBM debacle was simply masking a deeper problem. They can't interoperate with the kernel properly because their driver module is not GPL-compliant, and proper Wayland support relies on a lot of kernel (DRM) functionality. They have to do convoluted workaround dances to address that.
Basically, Nvidia will never work really well on Wayland until their kernel driver is upstreamed. This year they finally decided to open source their kernel module, but there's still some road ahead before it gets upstreamed.
A bunch of applications are without titlebars, like Electron apps or MPV, because GNOME refused to support any kind of server-side (or compositor-side) decorations.
Some apps don't have proper icons (because they don't set app_id properly, but KDE works around that).
There are also some weird GNOME-specific glitches I didn't encounter in KDE or Sway (like Slack crashing when you try to bring it to the foreground).
The only problem I have with KDE is that it rearranges windows when it comes back from sleep.
> Some apps don't have proper icons (because they don't set app_id properly, but KDE works around that).
How exactly? I experienced such an issue in KDE with Firefox, for instance (since I'm using a beta version), and had to create a couple of custom .desktop files to address it.
Most of my desktops run KDE on Wayland - it runs pretty well, especially the latest version. All Intel or AMD GPUs - no Nvidia, though, so I can't speak to that part - however, I have heard it is very usable.
X11 apps like VLC run with fuzzy fonts though - Plasma 5.26 is going to address that to some extent. Then I would have no reason to run anything X11.
VS Code needs Electron switches to use Wayland, but works fine once you set them. Firefox, on Arch at least, needed MOZ_ENABLE_WAYLAND=1, but works really well after you do that.
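If anyone needs the actual incantations, these are the commonly used ones (Electron/Chromium flag names have changed across versions, so treat them as a starting point rather than gospel):

    # Firefox: opt in to native Wayland
    MOZ_ENABLE_WAYLAND=1 firefox

    # VS Code / Electron apps: ask Chromium's Ozone layer for the Wayland backend
    code --enable-features=UseOzonePlatform --ozone-platform=wayland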
I switch between KDE Wayland and Awesome, my main pain point with Wayland specifically would be that my preferred video player (SMPlayer) has issues showing the video on top of its viewing area.
Unfortunately KWin kept the silly Xorg "screens are views of part of a single display" convention, so multi-screen workspace switching remains the same. I had really hoped to be able to switch workspaces on each screen independently.
If you have mixed resolution, mix refresh rate, and mixed DPI setups, anything Wayland native will be just fine. XWayland can still have some quirks, but I'm not sure if that's an easily solvable problem (or at all).
Though I don't have HiDPI displays, I have a triple monitor setup with NVIDIA (2x 1920x1200@75Hz~96ppi and 1x 2560x1440@60Hz~108ppi) and though I use Xorg due to some specific applications, the Wayland session is completely usable and fine otherwise for 99% of my use cases. I actually wish I could use it full time.
By "have some quirks" you mean windows will either show as 200% or 50% size on one monitor which is usually unusable. But you're right, its not anything the Wayland side can control, but as a user, you are better off just getting your monitors to have the same DPI than to wait for every single electron app to update it's base electron version.
If you have neat monitors where the difference in pixel DPI is simply an integer multiple (1×, 2×, etc.), it's really good - no issues at all.
If you need a fractional difference in pixel DPI (e.g. you want one screen at 1× and the other at 1.5×), it can kind of work, but it's dodgy - the 1.5× screen will have its windows rendered at 2× and then downscaled, so they won't be pixel-crisp (and the unnecessary 2× render can push the limits of old integrated graphics).
If you need a difference in text DPI, e.g. setting Xrdb.dpi or GNOME's Scaling Factor, then you're up shit creek (but with X11 to keep you company).
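As a concrete illustration, per-output scaling in sway looks roughly like this (output names and factors are examples; `swaymsg -t get_outputs` lists yours):

    # ~/.config/sway/config
    output eDP-1 scale 2      # integer scale: crisp
    output DP-1  scale 1.5    # fractional scale: rendered at 2x then downscaled, as described above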
I have two 1920x1080 60fps screens and one 2560x1440 120fps screen and they work perfectly in sway. The DPI between them is different but it's minor enough that I haven't bothered fixing it
What tool could you use in Wayland to move and resize windows around, but custom? Sort of like a simple autohotkey script?
I don't like tiling apps. But I'd like something where I can press a key to have my current app window moved to exactly where I want it (x1, y1, x2, y2), so that I can stack all of them in the center of my screen. It helps me focus.
From a programming point of view: do not make one of the biggest mistakes of X11. Keep Wayland code statically linked; do not create zillions of client libraries and then depend on their ABI. Wayland is so simple anyway, that would be kind of ...
The only "fake" exception is xkb, since it is not part of wayland, client-side keyboard key symbol resolution has to go thru libxbkcommon ABI because you don't know where the data files are, unless a standard environment variable is defined, and you cannot be sure the format of those data files would be working with your code, unless the format and file hierarchy of those data files is simplified and posix-ized/iso-ified for forever namely _without_ versioning.
Since I play games on ELF/Linux, my main issue is the Steam client lacking Wayland support - it doesn't even have a Wayland->X11 fallback (they don't even have a Vulkan->GL->CPU fallback, not to mention the 32-bit code...).
For me, this further confirms how badly planned Wayland was/is. I keep hearing that there was no way forward other than "scrapping X and starting over", but given the extent to which X still works just as well, if not better - I just don't believe it.
Someone forgot "never break backwards compatibility" and it CONTINUES to show.
For starters, I do agree that the X11 model is fundamentally broken on the modern computing platform.
However, sadly, I'm beginning to think that Wayland has made some fundamental architectural choices that are not fixable and are also broken on the modern computing platform.
The biggest problem is that Wayland takes the "compositor" route and turns it up to 11. Unfortunately, nobody does multi-window compositor to application integration well--Apple is the best, but by no means good. Even Windows drops to software rendering on resizes and actually integrating with the compositor needs a whole host of new Windows calls that nobody ever uses.
The whole "You need link a library and draw your own decorations" is, put simply, ludicrous. You don't draw them on Windows, macOS, iOS, or Android. Yes, that means you can't change your window manager, so be it. Look at the contortions and dependency chain that the winit package on Rust has to keep in order to communicate with Wayland.
The fact that I need 50,000 lines of code called wlroots is nothing short of a travesty. Even if I use a language which is much better for security, that 50,000 lines of code is a shambling security disaster that will bite me in the ass.
And that's before we start talking about some of the architectural design decisions made in wlroots. The design decisions are so anathema to something like Rust that people have given up and been forced back to C:
http://way-cooler.org/blog/2019/04/29/rewriting-way-cooler-i...
(Side note: this is one of my perennial hot buttons, and Wayland isn't the only guilty party. "Everybody Wants To Rule the Event Loop" by "Tears for Engineers" has become ingrained in programmers, and it's a BAD thing. Event loops don't compose, and you wind up in callback hell trying to put two of them together. Polling does compose (i.e. "readiness" rather than "completion"), but it's a lot more work on both sides of the programming equation--the application and the library--to get it right.)
There's also some strangeness about hiDPI and scaling in Wayland that just seems to be wrong. But maybe my brain's just too small, and I don't have enough context--it wouldn't be the first time.
Of course, it's easy to armchair quarterback and difficult to construct. This is hard, grungy work and the Wayland guys are putting in the elbow grease. Until I'm willing to pick up an editor, my opinions aren't worth much.
Overall, though, I see myself staying on X11 until I simply can't anymore.
If the GNOME project were a corporation with non-technical management and a PR department, some higher-up would have ordered the devs to just add server-side decorations to Mutter already, regardless of how it complicates the code or compromises their design vision, to fix the bad PR. Of course, at least one such corporation, Red Hat, is already funding a lot of the development on GNOME. Maybe the Red Hat higher-ups are giving their desktop developers too much autonomy.
I've read the thread. I think the GNOME developers have logical reasons for their position. But it still makes them look bad, particularly since every other Wayland compositor supports SSD.
I'm an obligate kwayland user because X11 apparently cannot keep up with both gaming and having two monitors. When playing TF2, a game that came out in 2007, I used to get very hard stutters on X11 that I never managed to exactly track down (i.e. not immediately visible in whatever monitoring tool you can think of), but on Wayland I have no such issue, with everything else the same. Apparently this is an X11 problem some users encounter with multiple screens; I just happen to have a GPU powerful enough to not see the stutters until a game is using all the bandwidth.
> The effort you need to go through to actually use these depends on how your distribution handles the file permissions of /dev/uinput. Some of them have it as root:input, in which case you just need to usermod -a -G input <yourusername> and then relog to get it working. Others have it as root:root so you either need to go do some reconfigurations to change its permissions or live with running the software using it as root.
There's a trick to that. The TL;DR is "install the steam-devices package or similar" (https://github.com/ValveSoftware/steam-devices/), which adds the following udev rule (and others, but this is the relevant one):
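    # From the steam-devices rules (reproduced from memory; check the repo for the exact current line):
    KERNEL=="uinput", MODE="0660", GROUP="input", OPTIONS+="static_node=uinput"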
The reason Steam has that udev rule is for something really cool called Steam Link, which allows you to use your Android phone, through your local network, as if it were a joystick for your computer. For some games (and its own user interface, which switches to a simplified "big picture" mode when you connect that emulated joystick) it does it directly without going through /dev/uinput, but for other games (which read the joystick input directly, without going through the Steam libraries) it sends the emulated input through /dev/uinput.
Perhaps a minor nitpick for most people, but for me a big problem with X11 was that hotkeys fire on key-down, not on key-up. Because of this, you cannot really use Ctrl+Shift to change input languages - it interferes with other hotkeys which start with Ctrl+Shift+...
The response on the X11 bug tracker from ~2007 was "xkb is old, we don't want to touch it, let's see if any alternatives fix it in the future". I hoped so much that Wayland would fix that... but no.
The problem with Wayland is that it doesn't have a killer feature and has a lot of anti-features (performance). As a user, I couldn't care less about this X11/Wayland debate.
I would love to see a minimalist Linux distro for ARM SBCs that shipped Wayland, river (or Sway) and foot as its default environment. I guess the interest just isn’t there.
I just tried using Wayland/Sway an hour ago. I guess you still cannot really use proprietary drivers? It wouldn't work for me, anyway; not even with the command line option to ignore my heresy.
I'd like to use Wayland. I have used it with Gnome on Fedora on my wife's Surface Pro 8, and it was really nice.
I haven't had issues with Wayland in a while... my last issue was with my screenshot app, Flameshot... but it has been fixed... my only two problems now are that I wish Flameshot would support some other service besides Imgur, and I also wish that it could record animations/videos ;)
Not sure how people use X11 anymore. Every time I try it, it seems to be horribly broken in some small ways, whereas Wayland just works. Could be that my hardware is too new or weird.
Manjaro uses Wayland with GNOME out of the box and I've been using it for nearly a year, on and off. I have a cheap Samsung laptop that I bought to run it, which replaced a broken MacBook that I've since replaced. I used Manjaro as my daily driver until recently, and mostly I'm pretty happy with it. GNOME isn't perfect but it is widely used. With at least some distributions now defaulting to Wayland (usually with GNOME), things are getting usable. So I just went with the path of least resistance and ended up with a usable laptop.
The touchpad configuration is indeed a bit sucky in GNOME. I actually connected an Apple Magic Trackpad at some point (version 2). Aside from Bluetooth support in Linux (which is pretty terrible and doesn't seem to work reliably), it worked really nicely with this setup once I plugged it in with the USB cable. That made me realize an important thing: most non-Apple touchpads are pretty terrible and need extensive software hackery to compensate for just how terrible they are, and the quality of that hackery varies widely between drivers and hardware suppliers. The Apple touchpads are great on Linux, on par with the experience on macOS (aside from Bluetooth, which is nowhere close). In the end, I bought a wireless mouse and problem solved (after jumping through some hoops to get that working). But technically these are not Wayland issues.
And things like screen sharing in various video call tools needed a bit of work before they started working. This too is technically not a Wayland problem, but it is bloody inconvenient if you are trying to attend meetings for work. I eventually managed to sort these issues out and have used Meet, MS Teams, Slack, Cisco Webex, Zoom, and probably a few others. I struggled with Discord.
Generally, the biggest issue in this space is just the wide variety of independent and poorly integrated stuff that all needs to come together for things to work flawlessly. You need a login thingy, a window-management thingy, and a compositor. And then you need various cruft for making sound work, Bluetooth, and all the rest. And they all point fingers at each other when things don't work. Getting recent versions of these things definitely helps, though, so use a Linux distribution with rolling updates, and use something more mainstream.
With X Windows this is actually the same deal, except the dust has kind of settled after a few decades, to the point where it mostly just works or fails in ways that can be fixed with a few text-file tweaks. So Wayland has probably gotten to the point where, with the right hardware, things mostly just work. But your mileage may vary - then again, it has always been that way with Linux.
I stopped reading at the suggestion that ydotool "provides xdotool functionality in a more generic way". It seems the author uses the "X11 apologist" epithet just for the sake of a catchy article title, without recognizing any areas where Wayland lacks basic functionality.
Can someone explain to me why Linux users seem to like arguing about purely technical pieces of their system like the init manager or the compositor as if it significantly affected them?
Wayland destroys backward compatibility, destroys everything we knew about the graphics subsystem, and offers to work with graphics in a totally new way, which results in a ton of concessions.
A lot of Linux users do not use either Gnome or KDE and outside these two DEs, Wayland is a total PITA to use. I'd say it's just broken.
This is the extremist take without any nuance that causes these flamewars without any good reason. Nothing factual, just hearsay dramatized to the max.
Wayland is backwards compatible: it literally has XWayland, which is a completely transparent way of running any X application (with the caveat that these X apps can see each other, so not all of the positives of Wayland apply).
Wtf, graphics subsystem? Wayland implementations use the standard kernel DRM interface - if anything, the X server was always the red herring here, with the proprietary Nvidia blobs being part of the program itself instead of being a separate driver.
Because the draw of the system is the technical setup even more so than the end-user experience. For a lot of Linux users, liking how the system is constructed technically is more important than actually doing stuff on it.
Plus, it affects them significantly. The switch to PulseAudio meant the support forums were filled with people who had no sound. Because the newest Ubuntu activated Wayland, Teams (the web app) could no longer share the desktop. So it is necessary to know about the technical details to fix the problem (switch back to ALSA and X, for example).
Same reason petrolheads like to argue about purely technical pieces of their cars like the engine or transmission, as if it significantly affected them.
[0] https://gitlab.freedesktop.org/libinput/libinput/-/issues/18...
[1] https://gitlab.freedesktop.org/wayland/wayland/-/issues/87
[2] https://gitlab.gnome.org/GNOME/gnome-control-center/-/issues...