It feels like everything is falling apart and getting worse. Yet somehow people are racing to produce AI slop faster. If software eventually collapses under its own weight, things might be so borked we have to bootstrap everything from scratch, starting with assembly.
I know it's not fashionable to be positive about macOS, but since ditching Windows a decade ago it's pretty much just worked, and I can run Excel and the like. I'm sticking with Sequoia and avoiding 26 for now, though.
Or: Steve Ballmer oversaw the decline of Microsoft's flagship product, but left before he could be blamed for it.
A lot of Windows' current problems can be traced back to the Ballmer era, including the framework schizophrenia, as Microsoft shifted between Win32, UWP, WPF, and god knows what else. This has led to the current chaotic and disjointed UI experience, and served to confuse and drive away developers. Repeatedly sacrificing reliable and consistent UX while chasing shiny new technologies is no way to run an OS.
I think MS's biggest mistake was not properly maintaining and developing the Foundation Classes, basically a thin C++ wrapper library on top of the C API that retained most of the benefits of the Win32 API while eliminating a lot of the boilerplate code. Instead they went after Java with the .NET managed stuff, which was bloated and slow compared to the native API.
Qt is now the best "old school" UI framework by far.
>including the framework schizophrenia, as Microsoft shifted between Win32, UWP, WPF
Ah yes, and the solution being presented is Linux, with Xlib, Motif, Qt, GTK, and your choice of 167 different desktop environments. Don't forget the whole Wayland schism.
Mac is no better, shifting SDKs every few years, except Apple goes one step further by breaking all legacy applications so you are forced to upgrade. Can't be schizo when you salt the earth and light a match to everything that came before the Current Thing.
PowerPC stuff? Anything more than a few years old?
Forget it.
You can't even run versions of iPhoto or iTunes after they deliberately broke them and replaced them with objectively shittier equivalents. Their own apps!
Windows can still run programs from the 90s unmodified. There are VB6 apps from 1998 talking to Access databases still running small businesses today.
Can't say the same for either Mac or Linux.
It's not really a problem for Apple because their userbase is content to re-buy all their software every 5 years.
Well, that's true. It's an interesting point actually. Windows certainly wins in terms of binary compatibility.
I was thinking more about the developer perspective, i.e. churn in terms of frameworks. Yes, PowerPC is gone. Intel will be gone soon.
But both the transitions from PowerPC to Intel as well as from Intel to ARM were pretty straightforward for developers if you were using Cocoa and not doing any assembly stuff.
Carbon only ever was a bandaid to give devs some time for the transition to Cocoa.
Maybe I am a bit jaded, but with Apple's yearly OS release cycle, and things breaking nearly every time, I grew sick and tired of software I spent good money on or relied on suddenly not working anymore.
Imagine taking your car in for an oil change annually and the radio stopped working when you got it back. It's incompatible with the new oil, they say. You'd be furious.
With the Windows of yore this wasn't so much of an issue: with 5-10 years between upgrade cycles, and service packs in between, you could space it out.
When you work in the computer industry, there tends to be a disconnect from how computers are used in the real world by real people, as tools. People grow accustomed to their tools and expect them to be reliable, as opposed to some ephemeral service.
Apple's change for the sake of change is extremely annoying, especially since the changes have been regressions lately.
They always push their commercial interest at the cost of their users, refusing to maintain stuff properly to save money.
At some point I had to replace a Mac because the GPU wasn't compatible with some apps after they pushed their Metal framework. It was working just fine for me, and I didn't really need to change it at that moment; Apple just decided so.
And if you use their software on different hardware and make the mistake of upgrading just one, it is very likely that you will have to upgrade the other because the newer software version won't be compatible with the older hardware (had the problem with Notes/Reminders database needing an OS upgrade to be able to sync).
Microsoft is all over the place, but at least it is very likely that you can get away with changing your hardware only once every 10 years if you buy high-end stuff.
Is that a good or a bad thing? Yes, Mac chops off legacy after a decade or so, but I don't see not being able to run apps from the 90s as a problem (or if I did, I'd probably be running Windows or Linux instead of macOS).
From my own experience things tend to keep working on Linux if you package your own userland libraries instead of depending on the ever changing system libraries. More or less how you would do it on Windows.
Except Windows isn't perfect either: I had to deal with countless programs that required an ancient version of the C runtime, some weird database libraries that weren't installed by default, and countless other Microsoft dependencies that somehow weren't part of the ever-growing bloat.
Although it's rare for me, I have used some old software that was built for Windows 9X or old versions of NT. So far, the track record is perfect - native programs have worked just fine, though I obviously can't vouch for all of them.
Old, complex games are the worst-case scenario, and they are the exception, not the rule. Since hardware-accelerated 3D gaming was only beginning to be figured out in the 90s, we were left with lots of janky implementations and outdated graphics APIs that were quickly forgotten about. MDK doesn't seem to suffer from this, though; it should be capable of running on newer systems directly [1]. One big issue it does have is that it uses a 16-bit installer, and 16-bit support was explicitly retired during the transition to 64-bit because it was so archaic by that point, only really relevant to Windows 1-3. But you can still install the game using the method described in the article, and it should hopefully run fine from there on. Since it has options to use a software renderer and old DirectX, at least one of these should work.
I use WinAmp 2.0 sometimes which was released in 1996. I prefer to use v5 but I like to show friends that such old software still works fine (even Shoutcast streaming works fine).
> Try running windows 11 on old CPUs, or machines without secure boot / TPM 2.0.
The more relevant test is the reverse: running Windows XP and apps of that era on modern hardware. It will work perfectly. The same cannot be said of 2000-era Mac software.
That's because TPM 2.0 module allows M$ to uniquely identify you and sell your info to advertisers - it's not an actual technical limitation, it's just because M$ is greedy, and it's a shame they aren't punished by governments for creating all this unnecessary eWaste just to make even more cash.
With GNU/Linux and BSD I just recompile. I can run old C stuff from the 90s with a few flags.
Under GNU/Linux, the VB6 counterpart would be Tcl/Tk + SQLite, which would run nearly the same over almost 25-30 years.
As a plus, I can run my code with any editor, and the Tcl/Tk dependencies will run straight away on XP, Mac, BSD, and GNU/Linux, with no proprietary chains ever, or worse, that Visual Studio monstrosity. A simple editor will suffice, and IronTCL weighs less than 100MB, and that's even bundled with some tooling, such as BFG:
Carbon is long deprecated and as mentioned was only ever meant as a transitional framework.
Cocoa still exists and is usable.
UITouch is not a framework, but a class in UIKit.
UIKit still exists and is usable.
Same for Catalyst. Same for SwiftUI.
As said, I'm not pretending everything is sunshine and roses in Apple-Land. But at least Apple seems to mostly dogfood their own frameworks, which unfortunately doesn't seem to be the case anymore with Microsoft. WinUI 3 and WPF are supposed to be the "official" frameworks to use, but it seems Microsoft themselves are not using them consistently and they also don't seem to put a lot of resources behind them.
Win32, MFC, Windows Forms, and WPF also exist and are quite usable.
Apple also doesn't always use their stuff as they're supposed to: WebViews are used in a few "native" apps, some macOS apps are actually iOS ones ported via Catalyst (which is why they feel strange), and there's plenty of other stuff I could list.
> Ah yes, and the solution being presented is Linux, with Xlib, Motif, Qt, GTK
I'm not going to descend into a "my OS's API is worse than yours" pissing match with you, because it's pointless and tangential. The issue is not "is the Windows framework situation worse than Linux" but rather "is the Windows framework situation worse than it used to be", and the answer is emphatically yes, due mostly to Ballmer's obsession with chasing shiny things, such as that brief period when he decided that all of Windows must look like a phone.
July 2014: Microsoft lays off 14k people, a large portion of which are SDET (Software Development Engineer in Test)/QA/test people.
The idea was that regular developers themselves would be writing and owning tests rather than relying on separate testers.
I'm sure there were multiple instances of insane empire building and lots of unproductive work, but it's also hard not to think this was where the downfall began.
Ultimately it still comes down to someone in the chain giving a damn. There are obvious, surface level bugs across most technologies. Yet, developers, PMs, VPs all sign off and say, "Close enough".
The first Go proverb Rob Pike listed in his talk "Go Proverbs" was, "Don't communicate by sharing memory, share memory by communicating."
Go was designed from the beginning to use Tony Hoare's idea of communicating sequential processes for designing concurrent programs.
However, like any professional tool, Go allows you to do the dangerous thing when you absolutely need to, but it's disappointing when people insist on using the dangerous way and then blame it on the language.
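To make that concrete, here's a minimal sketch of the proverb in practice (the names are mine, not from any real codebase): the total is owned by a single goroutine, and the other goroutines never touch it directly, they just send on a channel.

```go
package main

import (
	"fmt"
	"sync"
)

// counter owns the total outright; no other goroutine ever touches it.
// Everyone else "shares memory by communicating": increments arrive over a channel.
func counter(inc <-chan int, done chan<- int) {
	total := 0
	for n := range inc {
		total += n
	}
	done <- total
}

func main() {
	inc := make(chan int)
	done := make(chan int)
	go counter(inc, done)

	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			inc <- 1 // no locks, no shared variable, just a send
		}()
	}
	wg.Wait()
	close(inc)          // tells counter there is nothing more to add
	fmt.Println(<-done) // prints 10
}
```

No mutex, no atomic, and no data race, because only one goroutine ever reads or writes the total.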
> people insist on using the dangerous way and then blame it on the language
Can you blame them when the dangerous way uses zero syntax while the safe way uses non-zero syntax? I think it's fine to criticize unsafe defaults, though of course it would not be fair to treat it like it's the only option.
They're not using the dangerous way because of syntax, they're using it because they think they're "optimizing" their code. They should write correct code first, measure, and then optimize if necessary.
This is all very nice as an idea or a mythical background story ("Go was designed entirely around CSP"), but Go is not a language that encourages "sharing by communicating". Yes, Go has channels, but many other languages also have channels, and they are less error prone than Go[1]. For many concurrent use cases (e.g. caching), sharing memory is far simpler and less error-prone than using channels.
If you're looking for a language that makes "sharing by communicating" the default for almost every kind of use case, that's Erlang. Yes, it's built around the actor model rather than CSP, but the end result is the same, and with Erlang it's the real deal. Go, on the other hand, is not "built around CSP" and does not "encourage sharing by communicating" any more than Rust or Kotlin are. In fact, Rust and Kotlin are probably a little bit more "CSP-centric", since their channel interface is far less error-prone.
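To illustrate the caching point, here's a rough sketch of the shared-memory version (the type and names are invented for the example): a single RWMutex around a map is about as simple as concurrent code gets, and routing every lookup through a channel-owning goroutine would only add machinery here.

```go
package main

import (
	"fmt"
	"sync"
)

// Cache is the plain shared-memory version: one RWMutex around a map.
// Concurrent readers don't block each other; writers take the exclusive lock.
type Cache struct {
	mu sync.RWMutex
	m  map[string]string
}

func NewCache() *Cache {
	return &Cache{m: make(map[string]string)}
}

func (c *Cache) Get(key string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.m[key]
	return v, ok
}

func (c *Cache) Set(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key] = value
}

func main() {
	c := NewCache()
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			c.Set(fmt.Sprintf("key%d", i), "cached value")
		}(i)
	}
	wg.Wait()
	fmt.Println(c.Get("key2")) // "cached value" true
}
```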
Not quite. Erlang uses the Actor model, which delivers messages asynchronously to named processes. In Go, messages are passed between goroutines via channels, which provide a synchronization mechanism (when unbuffered). The ability to synchronize allows one to set up a "rhythm" to the computation, something the Actor model is explicitly not designed to do. Also, note that a process must know its consumer in the Actor model, but goroutines do not need to know their consumer in the CSP model. Channels can even be passed around to other goroutines!
There's also a nice talk Rob Pike gave that illustrated some very useful concurrency patterns that can be built using the CSP model:
https://www.youtube.com/watch?v=f6kdp27TYZs
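For example, the fan-in pattern from that talk shows both things at once: unbuffered sends acting as a synchronization point, and channels being passed around (and returned) as ordinary values. A rough sketch, with names of my own choosing:

```go
package main

import "fmt"

// producer doesn't know who consumes its values; it just sends on whatever
// channel it's handed. On an unbuffered channel every send is a rendezvous:
// it only completes once a receiver is ready, which is the "rhythm" above.
func producer(out chan<- int, start int) {
	for i := 0; i < 3; i++ {
		out <- start + i
	}
	close(out)
}

// fanIn treats channels as ordinary values: it takes two as arguments and
// returns a third that carries everything from both until they're closed.
func fanIn(a, b <-chan int) <-chan int {
	out := make(chan int)
	go func() {
		defer close(out)
		for a != nil || b != nil {
			select {
			case v, ok := <-a:
				if !ok {
					a = nil // a closed channel becomes nil so select ignores it
					continue
				}
				out <- v
			case v, ok := <-b:
				if !ok {
					b = nil
					continue
				}
				out <- v
			}
		}
	}()
	return out
}

func main() {
	c1 := make(chan int)
	c2 := make(chan int)
	go producer(c1, 1)
	go producer(c2, 100)
	for v := range fanIn(c1, c2) {
		fmt.Println(v)
	}
}
```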
It's true that message sends with Erlang processes do not perform rendezvous synchronization (i.e., sends are nonblocking), but they can be used in a similar way by having process A send a message to process B and then blocking on a reply from process B. This is not the same as unbuffered channel blocking in Go or Clojure, but it's somewhat similar.
For example, in Erlang, `receive` _is_ a blocking operation that you have to attach a timeout to if you want to unblock it.
You're correct about identity/names: the "queue" part of processes (the part that is most analogous to a channel) is their mailbox, which cannot be interacted with except via message sends to a known pid. However, you can again mimic some of the channel-like functionality by sending around pids, as they are first class values, and can be sent, stored, etc.
I agree with all of your points, just adding a little additional color.
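For anyone who wants to see the Go-side analog of that send-then-block-on-a-reply pattern, here's a rough sketch (all the names are invented): the request carries its own reply channel, which plays roughly the role that including self() in the message does in Erlang.

```go
package main

import "fmt"

// request carries its own reply channel, the rough Go analog of an Erlang
// message that includes the sender's pid so the server knows where to answer.
type request struct {
	n     int
	reply chan int
}

// server is "process B": it drains its mailbox (a channel here) and answers
// each request on the reply channel that arrived with it.
func server(mailbox <-chan request) {
	for req := range mailbox {
		req.reply <- req.n * req.n
	}
}

func main() {
	mailbox := make(chan request, 8) // buffered, so sends don't rendezvous
	go server(mailbox)

	// "Process A": send, then block waiting for the answer, similar to an
	// Erlang send followed by a selective receive for the reply.
	reply := make(chan int)
	mailbox <- request{n: 7, reply: reply}
	fmt.Println(<-reply) // 49
}
```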
That's not what the comment said. It said, "How about a Rust to C converter?..." The idea was that using a converter could eliminate the problem of not having a Rust compiler for certain platforms.
Haha! You know, I think this is a perfect illustration of the difference between something being mathematically beautiful versus pragmatically beautiful. The beauty of one often looks ugly by the standards of the other.
What's cool about this is that computers are so fast that you could probably make a decent 2D game using only this software-rendered OpenGL 1.1 library.
If you keep resolution low, manage your scene complexity carefully and commit to the art style, you can make reasonable 3D games that run even on 20 year old hardware. I did so as an experiment on my game [1] [2] and now that I can see that it works, I am working on a second, more "serious" and complete one. Computers are fast.
Edit: I missed that this was software rendered. I'm one gen-iteration ahead. It would probably still be possible to render my game CPU-side, provided I use the most efficient sprite depth-ordering algorithm possible (my game is isometric pixel art like RollerCoaster Tycoon).
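For what it's worth, the back-to-front sort itself is cheap; here's a rough Go sketch of the usual painter's-algorithm ordering for an isometric scene (the Sprite type and fields are made up for illustration):

```go
package main

import (
	"fmt"
	"sort"
)

// Sprite stands in for whatever the game actually uses; only the tile
// coordinates matter for draw order (Z is stacking height on a tile).
type Sprite struct {
	Name    string
	X, Y, Z int
}

// sortBackToFront orders sprites for the painter's algorithm in an
// RCT-style isometric view: tiles further "back" (smaller X+Y) are drawn
// first, and Z breaks ties so objects stacked on one tile layer correctly.
func sortBackToFront(sprites []Sprite) {
	sort.SliceStable(sprites, func(i, j int) bool {
		di, dj := sprites[i].X+sprites[i].Y, sprites[j].X+sprites[j].Y
		if di != dj {
			return di < dj
		}
		return sprites[i].Z < sprites[j].Z
	})
}

func main() {
	scene := []Sprite{
		{"tree", 3, 2, 0},
		{"path", 1, 1, 0},
		{"lamp", 1, 1, 1},
		{"kiosk", 2, 3, 0},
	}
	sortBackToFront(scene)
	for _, s := range scene {
		fmt.Println(s.Name) // path, lamp, tree, kiosk
	}
}
```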
Ha! That's what I'm stuck with for Metropolis 1998. I have to use the ancient OpenGL fixed-function pipeline (thankfully I discovered an ARB extension function in the gl.h file that allows additional fields to be passed to the GPU).
I'm using SFML for the graphics framework, which I think is OpenGL 1.x.
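For anyone who hasn't touched the fixed-function pipeline, this is roughly what it looks like; a sketch using the go-gl/glfw bindings rather than SFML (so the stack is my assumption, not what the parent is actually using): no shaders, just matrix state and immediate-mode vertex calls.

```go
package main

import (
	"runtime"

	"github.com/go-gl/gl/v2.1/gl"
	"github.com/go-gl/glfw/v3.3/glfw"
)

func init() {
	// GLFW event handling must run on the main OS thread.
	runtime.LockOSThread()
}

func main() {
	if err := glfw.Init(); err != nil {
		panic(err)
	}
	defer glfw.Terminate()

	window, err := glfw.CreateWindow(640, 480, "fixed-function demo", nil, nil)
	if err != nil {
		panic(err)
	}
	window.MakeContextCurrent()
	if err := gl.Init(); err != nil {
		panic(err)
	}

	// Classic GL 1.x setup: an orthographic 2D projection via matrix state.
	gl.MatrixMode(gl.PROJECTION)
	gl.LoadIdentity()
	gl.Ortho(0, 640, 480, 0, -1, 1)
	gl.MatrixMode(gl.MODELVIEW)

	for !window.ShouldClose() {
		gl.Clear(gl.COLOR_BUFFER_BIT)

		// Immediate-mode drawing: one colored quad, no shaders involved.
		gl.Begin(gl.QUADS)
		gl.Color3f(0.2, 0.6, 1.0)
		gl.Vertex2f(100, 100)
		gl.Vertex2f(300, 100)
		gl.Vertex2f(300, 300)
		gl.Vertex2f(100, 300)
		gl.End()

		window.SwapBuffers()
		glfw.PollEvents()
	}
}
```

The whole "renderer" is the gl.Begin/gl.End block; everything else is window plumbing.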
Basically the idea is to once again throw away the current set of execution pipelines and have only two kinds of shaders: mesh shaders, which are general-purpose compute units that can be combined together, and task shaders, whose role is to orchestrate the execution of mesh shaders.
So basically you can write graphics algorithms like in the software rendering days, with whatever approach one decides on, without having to fit them into the traditional GPU pipeline, yet still running on the graphics card instead of the CPU.
This is how approaches like Nanite came to be as well.
Straight facts, thank you:) but one small nitpick: mesh/task shading only replaces the vertex pipeline (VS/TS/GS), the pixel shader is still a thing afterwards...
The Principle of Least Privilege is one of the foundational aspects of security. Governments should be enforcing that, not requiring companies to collect very sensitive information like they are currently doing. Things like "prove your age", digital ID, and Chat Control are actively malicious when it comes to safety, security, and privacy.
I know right now there are some privacy-focused distros of Android, but it might be time to just have a fork that moves off in a different direction. I think the only way to have success is to create a distro that is very friendly to developers. If you get enough devs creating software for the fork, you can start to get users. I imagine the fork would only be popular with enthusiasts and devs at first.
> If you get enough devs creating software for the fork, you can start to get users.
Why would devs invest effort into developing for a new platform? You've hit the classic problem of bootstrapping a two-sided market.
Most users don't know or care about this side-loading issue, and when they are informed about what it means, they definitely like the idea that the app can be traced to a real human who has been validated by their phone provider. Not having those things sounds like malware and hacking to them.
The people who decide platform support for major apps are usually business people, not developers. Businesses want control, which is toxic to the user. So the only way for a new OS to get enough software support to get off the ground is to take away even more control from the user and allow more abuses by app owners. If an OS has no user base and has a pro-user, anti-app-owner security design, then very few app owners will provide apps. This problem applies both to Android forks and novel systems.