
As a Windows system and user-mode dev, I absolutely never understand these sorts of drive-by posts with nearly zero technical depth.

If there is one thing about Windows that is really good, it is its kernel and driver architecture, and an absolute plethora of user-mode libraries that come with the OS and can be programmed against with a variety of languages from ancient to brand-new, all maintained by the vendor. Doing the same thing on a given distro of Linux is a headache at best, and impossible at worst (which is partly why game developers don't target native Linux).

The problems with Windows have always been in user mode (with the notable exception of Vista, and I still maintain that Vista was OK; its problems were due to Intel strong-arming MS into certifying a broken version of Vista for its sub-par integrated GPUs of the time). Windows 11 control panel sort-of gone? There's still the god-mode menu introduced in Vista. Right-click menu gone, or too much Copilot? Go to the Group Policy editor, switch off what you don't need; revert what you can. People complain you 'cannot create local user accounts any more'. Also not true: that feature is a fundamental part of Windows and probably won't ever be removed. There are workarounds. Any Windows user or sysadmin worth their salt will have a fleet-wide GPE policy and registry settings in place.

Everything one sees on Windows can be stripped out and reverted to Windows 2000 mode. That grey boxy UI is literally still there. Compile a program for 32-bit, set the compatibility mode to Windows 2000, and bam, there you go. If you add in the manifests for UTF-8 and high pixel density, the UI is scaled pixel-perfect by the system.
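
To make that concrete, the DPI part of the manifest can also be done in code; a minimal sketch under stated assumptions (Windows 10 1703+ APIs, not a full program; the UTF-8 active code page, by contrast, really does need the manifest entry):

  #include <windows.h>

  int APIENTRY wWinMain(HINSTANCE, HINSTANCE, PWSTR, int) {
      // Roughly what the dpiAwareness manifest entry opts you into,
      // done programmatically instead of declaratively.
      SetProcessDpiAwarenessContext(DPI_AWARENESS_CONTEXT_PER_MONITOR_AWARE_V2);

      // From here on the system reports real per-monitor DPI values instead
      // of a virtualised 96, and stops bitmap-stretching this process's windows.
      UINT dpi = GetDpiForSystem();
      (void)dpi; // create windows, run the message loop, etc.
      return 0;
  }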

Speaking of high pixel density, Windows is the only OS that does scaling properly. macOS just pretends non-'retina' displays don't exist, Linux distros are a minefield of Xorg, Wayland, a million different conf.d files, command-line arguments, and env variables.

Why would anyone want to replace their core product with something that a) they cannot control, and b) does not satisfy their business and customer needs?





>Windows is the only OS that does scaling properly. macOS just pretends non-'retina' displays don't exist

Not true. I use a high-DPI (~250) MacBook with a non-high-DPI (~100) external monitor [0] and the transition between the two is seamless. Windows are identically sized when dragging from one screen to another. The same holds true when I use the laptop with a mid-DPI (~150) monitor.

I could not say the same was true a few years ago when I tried a high-DPI Windows 10 laptop with a non-high-DPI external monitor; it looked something like this [1]. Perhaps this has since been fixed.

macOS is able to achieve consistent sizing across displays irrespective of pixel density because it uses a compositor to render the whole screen at a high resolution and, if necessary, downsamples it proportionally for each screen. (Wayland on Linux can do the same, though it's certainly a much bigger headache to get consistently working than macOS.) When I tried using Windows 10 at two DPIs simultaneously, it just let me scale the font size and other UI elements on a per-screen basis, but not the screen as a whole, since I assume it does not use a compositor.

[0] Not my setup, but here is someone doing just that with a 30" 2560x1600 (~100 PPI) display and a ~250 PPI MacBook: https://www.reddit.com/r/macsetups/comments/tfbpid/my_macboo...

[1] Again, not my setup, but the Windows UI is rendered at different sizes on displays of different resolutions: https://www.reddit.com/r/computers/comments/16y1dux/how_do_i...


I have a long blog post stewing here. I'll give you the gist.

Moving windows between monitors of different pixel densities is a rather difficult problem. Windows handles pixel density per-application, not globally, and it uses something called device-independent pixels (DIPs) for scaling. macOS and every desktop environment I've tried on Linux do scaling globally, or at least globally per-display.

On Windows, when a window is moved across two displays with different scaling factors, a simple algorithm is used. It chooses the display that the greater fraction of the window is in, uses that display's DIP scale to render, compose and rasterise, and hence one part of the window may appear too small or too large on the other display.
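
To make that switch concrete, here is a minimal sketch (not anyone's production code) of how a per-monitor-DPI-aware Win32 window experiences it:

  #include <windows.h>

  LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam) {
      switch (msg) {
      case WM_DPICHANGED: {
          // Sent once the greater fraction of the window sits on a monitor
          // with a different DPI. wParam carries the new DPI (HIWORD == LOWORD);
          // lParam points to a suggested rect already scaled to it.
          const RECT* suggested = reinterpret_cast<const RECT*>(lParam);
          SetWindowPos(hwnd, nullptr,
                       suggested->left, suggested->top,
                       suggested->right - suggested->left,
                       suggested->bottom - suggested->top,
                       SWP_NOZORDER | SWP_NOACTIVATE);
          // Everything drawn before this message arrives was rasterised for
          // the old DPI, which is why the overhanging part looks wrong.
          return 0;
      }
      default:
          return DefWindowProc(hwnd, msg, wParam, lParam);
      }
  }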

On the other hand, macOS, GNOME, and KDE take the easy (but IMO very lazy) way out by rasterising the entire application window to the pixel density of whichever display the greater fraction of the window is in, copying that framebuffer to the viewport of the other displays, scaling with some filtering algorithm, and then composing, leading to blurring on at least one display. I am happy to bet that you're just not noticing the early rasterisation and filtered scaling going on. Having used all three OSes across a variety of monitors, I am extremely particular about blurry text; enough that I will stop using a certain setup if it doesn't satisfy me (it's why I stopped using Linux on my personal system).

I'll concede neither is good enough. The real solution here (sketched in code after the list) is:

  1. Render the application to as many viewports as there are displays that the application window is in, with the appropriate DIP for each display's scale factor
  2. Compose the application viewports into each display's viewport depending on the apparent window position
  3. The above will automatically clip away the fraction of the window that is outside each display
  4. Rasterise the composed viewport for each display
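
In compositor terms, that amounts to planning one draw pass per display instead of one per window. A rough sketch of the four steps above; every type here is hypothetical, not any real compositor API:

  #include <algorithm>
  #include <vector>

  struct Rect     { int x, y, w, h; };                // in DIP coordinates
  struct Display  { Rect bounds; float scale; };      // scale: DIPs -> device pixels
  struct DrawPass { int display; Rect clipDips; float scale; };

  static Rect intersect(const Rect& a, const Rect& b) {
      int x1 = std::max(a.x, b.x), y1 = std::max(a.y, b.y);
      int x2 = std::min(a.x + a.w, b.x + b.w), y2 = std::min(a.y + a.h, b.y + b.h);
      return { x1, y1, x2 - x1, y2 - y1 };
  }

  // One vector-level pass per display the window touches, each clipped to that
  // display (step 3) and rasterised at that display's density (steps 1 and 4),
  // so no display ever receives a filtered copy of another display's pixels.
  std::vector<DrawPass> planWindowDraw(const Rect& windowDips,
                                       const std::vector<Display>& displays) {
      std::vector<DrawPass> passes;
      for (int i = 0; i < (int)displays.size(); ++i) {
          Rect clip = intersect(windowDips, displays[i].bounds);
          if (clip.w <= 0 || clip.h <= 0) continue;   // window not on this display
          passes.push_back({ i, clip, displays[i].scale });
      }
      return passes;
  }
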
Another concession: I personally prefer pixel-perfect rendering rather than having the same visual size, and hardly ever use windows spanning multiple displays (especially of different pixel density), so Windows' behaviour is less of a problem to me.

My bigger issue is other desktop environments not supporting subpixel anti-aliasing, not supporting 'fractional' scaling (macOS is by far the biggest offender), and edge artifacts that result from bad clipping. I have a few photos I took of KDE, where random pixels are lit up at the bottom of my secondary display, with my laptop below it.


You may be right about the correct solution, but nobody actually uses any app long term with it straddling two displays, so the actual impact of this is not huge.

And DIPs have their own problems that I first encountered with WPF - rendering an application on a DPI that's not a neat multiple of what it was designed for means that lines and features don't necessarily line up with the pixel grid.

Depending how the app chose to handle this, it either causes blurriness or uneven and changing line widths as you move the window.
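
A toy illustration of that mismatch (not WPF code; IIRC WPF exposes UseLayoutRounding for exactly this reason):

  #include <cmath>
  #include <cstdio>

  // A 1-DIP line at 125% scaling wants to be 1.25 physical pixels wide: the
  // toolkit either anti-aliases it (blur) or rounds it (widths and gaps jump
  // between 1 and 2 px as the window moves).
  int main() {
      const double scale = 1.25;                // 120 DPI / 96 DPI
      for (int dip = 0; dip < 4; ++dip) {
          double exact   = dip * scale;         // where the edge "should" land
          double rounded = std::round(exact);   // what a layout-rounding toolkit draws
          std::printf("edge at %d dip -> %.2f px exact, %.0f px rounded (off by %.2f)\n",
                      dip, exact, rounded, rounded - exact);
      }
      return 0;
  }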


>I am happy to bet that you're just not noticing the early rasterisation and filtered scaling going on

macOS renders content on my 100 PPI monitor at exactly 100 DPI; 1:1, no scaling, so everything looks crisp at the pixel level. The scaling only happens on high-DPI displays (I think the cutoff is around 150-200), and for me at least, ~250 PPI is more than dense enough to not see any individual pixels and thus no aliasing artifacts. Since you like pixel-perfect rendering even at very high resolutions, perhaps you have superhuman vision. My eyes are decidedly average. :-)

>I hardly ever use windows spanning multiple displays

Me neither. My issue is that the windows are rendered at different sizes even when they're not spanning both displays: if I dragged the window in the example photo upwards to sit entirely on the top display, it would stay huge, whereas if I dragged it downwards to sit entirely on the bottom display, it would stay small.


> Since you like pixel-perfect rendering even at very high resolutions, perhaps you have superhuman vision.

I'm just annoyingly particular about this. It's why I accept a framerate hit in video games and don't use upscalers like DLSS, and why I intend to swap my 3840 × 2160, 600 × 340 mm monitor for a 5120 × 2880 one of the same physical size. Some really nice ones were demonstrated at CES a fortnight ago.

> if I dragged the window in the example photo upwards to sit entirely on the top display, it would stay huge, whereas if I dragged it downwards to sit entirely on the bottom display, it would stay small.

This is not the behaviour I see. The window, upon occupying the larger percentage of a display, 'snaps' to the DIP of that display.


You get that that’s worse though, right?

Windows renders the window once at a single DIP resolution. The other side of the window appears either too big or too small.

MacOS renders the window once at a single DIP resolution. Then, the other half of the window is upscaled or downscaled for the other screen. It’s going out of its way to make it consistent; Windows doesn’t bother.

Your worries about blurry text go away when you use nearest neighbor upscaling (this is configurable in the MacOS zoom settings). Nice crisp text, at the right size.


> Your worries about blurry text go away when you use nearest neighbor upscaling (this is configurable in the MacOS zoom settings). Nice crisp text, at the right size.

macOS does not do nearest-neighbour when doing pixel density scaling. It especially does not do nearest-neighbour when the 'looks like' resolution of any display is not a nice divisor of the physical resolution. As the grandparent commenter said, macOS renders to a fixed framebuffer. The size of this framebuffer depends on the pixel density of the physical display; at Apple-blessed densities of ≥ 79 px/cm, this framebuffer is four times the 'looks like' resolution (twice in each dimension); below this, it is the same resolution.

After this rendering macOS applies filtered scaling to fit the framebuffer to the physical resolution. If upscaled, this leads to blurry text and UI; when downscaled, this causes ringing artifacts[1].

[1]: https://www.reddit.com/r/mac/comments/12j14ud/macos_vs_windo...
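
To put numbers on it (the display mode here is an example made up for illustration, not a measured setup):

  #include <cstdio>

  // Following the description above: render into a backing store at 1x or 2x
  // the 'looks like' resolution (2x past the ~79 px/cm threshold), then
  // filter-scale that buffer to the panel's physical pixels.
  int main() {
      const int    physW = 5120, physH = 2880;           // e.g. a 27" 5K panel
      const int    looksLikeW = 2880, looksLikeH = 1620; // a non-default 'looks like' mode
      const double pxPerCm = 85.0;

      const int backing  = (pxPerCm >= 79.0) ? 2 : 1;
      const int backingW = looksLikeW * backing;          // 5760
      const int backingH = looksLikeH * backing;          // 3240

      const double blit = double(physW) / backingW;       // ~0.889: a filtered downscale
      std::printf("backing %dx%d -> panel %dx%d at %.3fx\n",
                  backingW, backingH, physW, physH, blit);
      // Anything other than exactly 1.000 here is where the blur or ringing
      // comes from.
      return 0;
  }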

I concede that Windows' implementation is simpler, but I will argue that practically it doesn't matter because basically no one I know uses an application window across multiple displays.

My very strong opinion: text/vector UI should never be raster-scaled to fit varying pixel densities.


They might be referring to when Apple removed subpixel antialiasing around 2018. It caused some consternation at the time because there were still plenty of non-retina MacBook Airs in service. While it technically works, macOS really is not meant for non-high-DPI displays.

>macOS is able to achieve consistent sizing across displays irrespective of pixel density because it . . . downsamples it proportionally for each screen

For me, the most important consideration is to completely avoid downsampling -- because it makes everything blurry.

On both macOS and Linux, the way I do that is to choose to scale the UI by an integral factor (usually 200% in my case) and then (since 200% makes things a little bigger than I prefer) fine tune the apps in which I spend most of my time (namely, Emacs and my browser).

Specifically, on Linux, Emacs relies on GTK to draw its window, which IIUC cannot do fractional scaling, so if I were to set a fractional scaling factor in Gnome Settings, then Emacs would be blurry. There is no blurriness when I set an integral scaling factor in Gnome and use something like (set-face-attribute 'default nil :height 90) to adjust the size of text in Emacs.


You "never understand" these posts and then list off a ton of crap I shouldn't have to do to an OS to make it usable.

The default experience of using windows is downright user-hostile and it reveals the thinking of the corporation behind it. Yeah, you _can_ do all that to make it somewhat usable, but when alternatives exist that are much less of a pain, I'll be taking those.


My point was that the article is logically flawed. The user mode of the OS sucks, so let's run the same user mode with a different kernel? What?

I don't care about configuration. I've had to do plenty of configuration on Linux as well; it's just different (text files instead of GPO/registry). I'm not sure I can list all the Arch Linux wiki articles I've read trying to get one driver or another feature working.

I am not here to convince anyone to stop using one platform or another. They're different tools that solve different problems, and I run all of them. I have a Linux laptop for work, a Windows laptop/desktop for personal use, and a Proxmox hypervisor in my homelab running a variety of LXC containers plus Linux and Windows Server guests.


>The user mode of the OS sucks,

Not from the perspective of Microsoft. It sells OneDrive and Office 365. It makes money from ads.

>so let's run the same user mode with a different kernel?

The kernel is a piece of legacy cruft that isn't necessary for selling OneDrive and Office 365. It's only a cost. Throw that out and replace with an off the shelf Linux kernel. With some minor tweaks, it can sell OneDrive too. Then you can fire a lot of kernel developers. The line goes up.


> The kernel is a piece of legacy cruft that isn't necessary for selling OneDrive and Office 365.

The kernel is running OneDrive and Office 365. It's making money hand-over-fist.


It also makes non-zero money from selling and maintaining Windows.

My experience of Linux (and Mac OS) has been the opposite; they are extremely painful to make usable.

Yes, I have to disable a lot of stuff to get Windows the way I like it. But that's still exponentially easier than having to add, install, or perhaps even buy a lot of stuff to maybe get Linux/Mac to behave kind of how I want it to.


Having been a longtime Windows user, an on/off Linux desktop user, and now primarily a Mac user, I really think it's just what you're used to. Each desktop environment has its own strengths and weaknesses, and trying to bend one to be like the other is going to end in frustration. The userland of each OS is sufficiently different that different desktop metaphors break in different ways when you try to port them. MacOS will never have a taskbar, Windows will never have a functional dock and system menubar, and Linux will never have a cohesive toolkit because it's too fragmented. But each has its strengths and the key to productivity is to work with the desktop as designed rather than against it.

My experience with paid independent Mac desktop apps (e.g. Little Snitch, Al Dente, Daisy Disk, Crossover, anything from Rogue Amoeba etc.) is that they try a lot harder to integrate well with the system than equivalent freeware apps on Windows. MacOS is definitely "missing" some features out of the box (per-app volume control?) but makes up for it with certain things largely being more seamless, especially with regard to drivers (in my experience).

I also miss Linux DEs some days for their extreme customization potential and low resource usage. But it's hard to achieve compatibility between the "best" applications of each DE, and GTK and Qt have their own warts.

Just go with the flow, and if Windows jives with you then more power to you. I can't stand it anymore though.


> Having been a longtime Windows user, an on/off Linux desktop user, and now primarily a Mac user, I really think it's just what you're used to

I've also used all three OS's in anger and largely agree.

I like to call that sort of attitude YOSPOS, named after one of the technology-oriented subforums on Something Awful. It stands for "Your Operating System is a Piece Of Shit."

Which OS? Your OS, whichever one (the royal) You happen to be using at the time. They all stink for different reasons, and it's just a matter of which OS's annoyances you decide to put up with.

That said, good lord, Windows 11 has been rough. I actually don't mind most of the UI changes, but the AI psychosis and the general lack of stability has made Windows 11 one of the only versions of Windows I can remember that started mediocre and kept getting worse with updates instead of better.


Every OS sucks. Pick the one that you feel sucks the least for you at the time.

https://youtu.be/CPRvc2UMeMI

It's really really not a new sentiment.

From the description on this 14-year-old video:

  An older song, from back in the days of XP and OS X.3.

> You "never understand" these posts and then list off a ton of crap I shouldn't have to do to an OS to make it usable.

In the context of changes Microsoft could make, that list of instructions is there for demonstration purposes. It's about how if Microsoft wanted to clean up their mess, they have a far far easier method than what's suggested in the article.

> when alternatives exist that are much less of a pain, I'll be taking those

That's a different topic from the article and the comment you replied to.


By way of example — I can (and did) remove the ads from the Start Menu on Windows 10 Professional. But there's literally no reason they should've been there to begin with.

A large majority of the population considers Windows usable as is; that is why you still see Windows at Best Buy and not Linux-powered desktops, other than castrated Chromebooks.

Thing is, there is no reason Microsoft’s Linux distro wouldn’t be just as horrible to use “out of the box”.

UX problems != low-level engineering issues.

I think most people agree that current Windows sucks due to a combination of engineering neglect and deliberate enshittification.

But how the OS is put together and some of the debug tooling (WinDbg, ProcDump, Windows Performance Analyzer, ttd, graphics debuggers, etc.) mean that it's much easier to debug complex apps like games on Windows. Windows has had this stuff forever.

And the stability of the system architecture and the QA MS does mean that, while Windows might be shitty in some ways, institutional knowledge has built up over the decades.

Linux in contrast is like the ship of Theseus.

A lot of the work Valve has done on Linux was to plug these gaps; they had to develop similar tooling on Linux, otherwise it's impossible to fix full-stack problems where the user clicks in a videogame and something does or does not happen on the screen.

I'm not glazing Windows, I'm just infuriated by the persistent feeling of technical superiority among Linux people, who don't even bother to understand the problems, and who explain away the lead Microsoft has as some sort of shadowy anti-user conspiracy rather than acknowledging that Windows does a lot of things that users care about better than Linux.


> Everything one sees on Windows can be stripped out and reverted to Windows 2000 mode. That grey boxy UI is literally still there.

Can the horrendous W11 taskbar be reverted to the classic taskbar, with full support for changing its size and screen position etc?

Can classic Explorer, without any OneDrive/Copilot nonsense, be restored?

Can the new "Settings" (*excuse me while I vomit) layouts be junked in favour of the Control Panel, along with all the associated modals such as the WiFi selection sidebar etc.?


We are now at the point where the average PC user can configure Linux to their liking, but it takes a Windows sysadmin with strong domain knowledge to configure Windows to their liking. I'm a Windows sysadmin (a pretty shit one, but still) and I struggle to remove a lot of the W11 features, and I have been unable to get it working how it used to.

The average PC user doesn't even know what an OS is. All they know is clicking certain icons. Newer GUIs are designed to be anti-intellectual; the trend gained speed with iPhones and has been getting consistently worse.

What you remove/configure also depends on what you expect. Windows and its ecosystem are GUI-first, so I can do most of my customizations using a GUI app like Winaero Tweaker. I can use PowerShell to remove certain Windows components too. It usually takes an hour or two and it stays as it is even through the semiannual updates.

With Linux systems I spend much more time bending them to my wishes, but the whole design philosophy is off. Most of the configuration doesn't give proper feedback. It sometimes half works. The API churn rate in the Linux world is higher (a lot lower with KDE, to give them credit). Package management is great. However, you don't actually get to choose. Browsers use GTK APIs and Cairo; I dislike those libraries (especially the font rendering) but have no choice unless I want to port the browsers. I dislike CSDs; again, no choice, especially with how Wayland turned out (basically CSDs are the default, and apps opt in to SSDs). Many things that could have good GUIs are terminal-based. The existing GUIs break often. So it quickly turns into me fighting the basics.

I learned a lot from trying to make Linux my desktop and debugging driver issues, from ATI cards (anyone remember fglrx and editing the Xorg config?) to Nvidia ones. I used Linux as my primary desktop between 2008 and 2020. I developed a lot of software on it and still earn my living from embedded Linux stuff (I use WSL2 nowadays). However, the more I look into Linux's "engineering", the more I hate it.

If I really want it, I need to spend some serious development time creating a more Windows-like OS out of Linux, starting from libc and going up. I dislike almost every library from the Linux world whose source I've read, especially the GNU and GNOME ones. I like Qt and KDE's software architecture, but anything below that (except maybe systemd) is off. Maybe Redox is a better target for this effort, but I need a working system for my desktop now.


The start menu sometimes glitches out for a few seconds, so it makes total sense to replace the whole OS from the kernel up.

Signed, an npm jockey who lives in the world of churn


> If there is one thing about Windows that is really good, it is its kernel and driver architecture,

Woah, back up a bit. In the article, it looks like the blue screen is a 0x0 (iopr) exception, likely a wild jump into the weeds. But back in the day, the majority of blue screens were 0xE exceptions -- page fault in the kernel. Why? A buggy driver that didn't wire down a page, and it got swapped out from under the driver. Not under Microsoft's direct control... BUT... they had a great example in OS/2. In WinNT, there are 2 security rings, kernel and user space. But x86 supports 4 rings. OS/2 used ring 1 for drivers, so that the kernel could both blame the correct driver and also stay alive. So simple. (Of course, it means it is hard to port to hardware with only 2 security rings.) WinNT drivers are not things of beauty. The dev experience is cranky, and validation is a nightmare -- and the lowest-bidding Asian contractor that is writing your driver for your el-cheapo peripheral rarely signs up for that nightmare.


> WinNT drivers are not things of beauty. The dev experience is cranky, and validation is a nightmare

I think one could say the same for any platform; in general, developing drivers is just difficult, full stop. That driver quality for peripherals can be bad is not the fault of the platform. I'm sure I could find dodgy drivers in the Linux tree that were merged in only because 'shrug it makes PineappleCorp's device work, who cares if it is littered with UB'.


Well, these days it seems Linux drivers get enough eye-balls on them that anything meaningful is going to get looked at. Sure, I expect there are some low usage drivers in the repo that just haven't had enough mileage. At least with Linux I can see the driver code. (The second day at my current job, somebody pointed me at a bug with an obscure symptom. A quick check of a log file showed a 0xE exception. A couple hours later I posted a link to the bug in source. Somehow, the universe decided to give me a bug I had seen many times before to get my reputation off to a good start -- it's better to be lucky than smart.)

At the day job we have lots of sporadic problems with Linux's USB drivers in our fleet of embedded devices (RPi). I have had a lot of problems with USB-C and Thunderbolt docks in the last 5 years. If USB doesn't get enough eyes to stop it crashing/freezing systems entirely, I don't know what else would. Monolithic kernel design should belong to the past, but we don't get nice things.

There are performance penalties for moving drivers out of the kernel/ring 0. For some things, that matters (network, graphics), for others it doesn't, like printers.

And Microsoft has made the least stable of the drivers a recoverable fault, at least.


The Intel thing is extremely true, but also equally true is the god awful Nvidia drivers that existed in the early days of Windows Vista. I don't even think Nvidia had a non-beta driver until 6 months after Windows Vista went gold. I think I recall seeing that Nvidia was responsible for something like 30% of crashes on Windows Vista.

Now, we could split hairs over where the failure was with that one -- whether it was Microsoft not working enough with Nvidia, or something else; but the point still stands.

Windows Vista walked so Windows 7 could run, essentially.


> Windows Vista walked so Windows 7 could run, essentially.

Good point. Although I personally have a soft spot for all the Longhorn castles in the sky that MS were building, and for Vista in general.


> Speaking of high pixel density, Windows is the only OS that does scaling properly.

Haven't tried Windows 11, but a bunch of Microsoft applications in Windows 10 render text using sub-pixel rendering in 2x high dpi mode, resulting in every character having a two-pixel coloured border around it. That's about as far from "scaling properly" as it gets.


Would it be possible to ship a high performance, lean "modern" Windows, with legacy apps etc run seamlessly in VMs or containers?

This is probably the hypervisor that Azure VMs run on, or perhaps Windows Server. Unlike on Linux, (most) legacy apps (if they target NT) don't need VMs or containers; they run natively. You can compile for Windows 2000 on Windows 11 25H2 and run the result natively on the host to test.

Why do you need VMs or containers? Windows's architecture allows much more granular control of applications, so it should be possible to limit what applications can access to a great extent already. Unlike Linux, the system ABI is stable, so you can run older applications without shipping the entire userspace.

Also, Windows 10 LTSC exists (not 11 with all the rounded modern UI BS). It shows how good Windows could be.


> with legacy apps etc run seamlessly in VMs or containers?

We kind of have examples of that already in DOSBox. Even where Windows OOTB compatibility fails, getting some ancient piece of software running in DOSBox is often not an issue.


> If there is one thing about Windows that is really good, it is its kernel and driver architecture

Sure, but alternatively, you could just lay those guys off and bank the savings of outsourcing to Linus and co.


This could be bait. I was going to comment on how wrong you are; how much better and more advanced the Windows model is compared to Linux, and how making products that co-operate with other companies (read: people with products they want to get paid for; when did that start being a bad thing?) instead of being "gatekeepy" over ideologies will always have an edge.

But seeing how companies have worked in the past, you might be right, some middle manager there might just axe the most valuable part of their product.


That's a great point about DPI scaling. I remember being extremely surprised that I couldn't get a sharp image on a 4K monitor with macOS because of this limitation.

> and I still maintain that Vista was OK; its problems were due to Intel strong-arming MS into certifying a broken version of Vista for its sub-par integrated GPUs of the time

Nah, it was "Vista Ready" bullshit to "certify" the utter bullshit of 512Mb RAM and 5000RPM (for the notebooks) machines already built and mostly shipped by the late 2006. Of course it ran like shit if it needed to be in the swap 95% of the time. It's even more drastic if you look at the DRAM market at the time - DDR was dead, DDR2 provided the solution to finally bring to the consumer market a cheap 4/8GB RAM machines from the OTS consumer components and DDR3 was right around the corner.

> People complain you 'cannot create local user accounts any more'.

Also, people forget how you needed a computer with iTunes for the first use of an iPad -- otherwise it wouldn't work. Or how the only way to use some Android phones without a Google account was to literally take out the SIM card; otherwise you had no way to skip "enter your Google account or register one" (remind you of anything?), and this was years before the current situation.

Sure, MS, or more likely some brain-dead manager with the only KPI in his empty head, would push for a total block of local accounts without some enterprise (Entra?) shitfuck workaround, but that would still take some time.


>If there is one thing about Windows that is really good, it is its kernel and driver architecture

The article did hang a lantern on that. The big issue is that it doesn't matter how good the kernel is if you can't use it. I think dropping Windows is more likely than fixing Windows. Windows 11 is more than just the usual Headache Edition of Windows like ME, Vista, 8 or what have you. It's definitely a new strategy.

>with the notable exception of Vista, and I still maintain that Vista was OK; its problems were due to Intel strong-arming MS into certifying a broken version of Vista for its sub-par integrated GPUs of the time

Agreed, honestly. The reason it bricked my wife's computer was that HP dragged its feet adopting the new driver model. The reason it stuffed my laptop was that Asus refused to release a supported laptop lid driver for my hardware. Games for Windows Live was comorbid with Vista, which pissed off gamers.

>Windows 11 control panel sort-of gone? There's still the god-mode menu introduced in Vista. Right-click menu gone, or too much Copilot? Go to Group Policy editor, switch off what you don't need; revert what you can. People complain you 'cannot create local user accounts any more'. Also not true, that feature is a fundamental part of Windows and probably won't ever be removed. There are workarounds. Any Windows user or sysadmin worth their salt will have a GPE fleet-wide policy, and registry settings.

I mean, we have the AI stuff blocked at a policy level; they just started ignoring that policy and it's everywhere. They have done the same with a few other feature deployments. Group Policy has really turned into "Do you want to enable the grace period for the new thing we are pushing?". The Windows App, hilariously, just got boned by a Windows 11 update, except the older Remote Desktop app (support ending in March) still works, and the Mac version of the app still works fine too.

>Why would anyone want to replace their core product with something that a) they cannot control, and b) does not satisfy their business and customer needs?

Control is a deep topic. But the biggest issue is business needs. Linux is currently only 50% of the way to being anywhere near decent in a Microsoft shop. Microsoft Defender for Endpoint, however, is growing like a cancer and is starting to look like a testbed for bringing a lot of Microsoft command and control into a Linux environment. This guy's making a prediction now, but really there's nothing in Windows that cannot be ported officially by Microsoft to Linux given enough time. Honestly, I think the bigger question is "When Microsoft inevitably does this, will the FLOSS community get anything out of it?"



