I think of it more in reverse: the choice being removed is the hardware you can use. It has been the case from the dawn of computing that you start from a use case (which correlates to software, which maps to an operating system) and then look at your options for hardware. The more specific your use case, the more specific your software, which correlates to a specific choice of hardware. There is no, and can be no, "have it all". It's a fundamental principle of mathematics: the postulates you choose radically change the set of proofs you have access to, and the set of proofs you want entails the axioms and structures you can take.
Now it can be better or worse, and right now it's never been better. There was a time when your language, your shell and your operating system were specific to the exact model of computer (not just a patched kernel, everything was fully bespoke) and you had a very limited set of peripherals. That we suffer from more esoteric operating systems lagging behind the bleeding edge of extremely complicated peripherals is a very good place to be in. That there's always room for improvement shouldn't be cause for sadness.
> Now it can be better or worse, and right now it's never been better. There was a time when your language, your shell and your operating system were specific to the exact model of computer [...]
No, it is not. There was a small period of time between the 90s and the 2010s where you could grab almost every 386 OS and have it run decently on your hardware, and if not, drivers could easily be written from manufacturer specifications. That period was definitely better than what we have today, or what we had before it. I am writing this as someone who has written serial port controller drivers for the BeOS.
> That we suffer from more esoteric operating systems lagging behind the bleeding edge of extremely complicated peripherals is a very good place to be in.
This is the wrong logic, because operating systems become esoteric since they can't support the hardware, and hardware becomes complicated and under-specified because there's basically only one operating system to take care of. You may _think_ you have no reason to be sad if you're a user of Windows or Linux, but you have plenty anyway.
> There was a small period of time between the 90s and the 2010s where you could grab almost every 386 OS and have your hardware mostly decently run for it
And prior to that, you could grab every OS running on IBM clones and not have to worry about graphics drivers at all, because graphics acceleration wasn't a thing. The era you refer to had already introduced software's dependence on specific hardware within x86. That disparity was only compounded further in the mid-2010s, as GPUs exploded in complexity and their drivers ballooned to tens of millions of lines of code, eclipsing the kernels themselves. This is not meaningfully distinguishable from the original introduction of graphics drivers; both were driven by the same process.
An important thing I want to point out as well: you're doing a lot of heavy lifting by limiting the pool to x86 computers, which is already giving up and admitting to a very strong restriction of hardware choice. Don't take that as pedantry; it's a very well hidden assumption that you've accidentally overlooked, and in case you think it's irrelevant, I'm letting you know that I don't consider it irrelevant in the slightest. When I think of computers, I'm not just thinking of x86 PCs. In the 90s I'm thinking of SGI workstations, Acorns, Amigas, Macs. I'm thinking of mainframes and supercomputers and everything else.
> This is the wrong logic, because operating systems become esoteric since they can't support the hardware
On the contrary, I assure you that this logic rests on faulty premises. As a general principle it's clearly false, since most operating systems (now long forgotten) predate that dynamic by decades, and in the specific context of Linux winning over FreeBSD, it's still not applicable, as that happened smack dab in the era you describe.
> You may _think_ you have no reason to be sad if you're a user of Windows or Linux, but you have plenty anyway.
I'm a user of Linux, FreeBSD and 9Front. I just don't buy (and never have bought) hardware at random. You can reason your way into sadness any which way, but rationalization isn't always meaningfully justified. I just don't find it sad that my second desktop can't have an RX 9000 whatever in it. Where's the cut off line for that? Why not be sad that I can't jam a Fujitsu ARM processor into a PCIE slot as another type of satellite processor? The incompatibility has the same effect, but I don't see you lamenting or even considering the latter, as though mounting a processor to a PCB is somehow fundamentally less possible than writing a modern graphics driver.
> And prior to that, you could grab every OS running on IBM clones and not have to worry about graphics drivers at all, because graphics acceleration wasn't a thing. The era you refer to had already introduced software's dependence on specific hardware within x86. That disparity was only compounded further in the mid-2010s, as GPUs exploded in complexity and their drivers ballooned to tens of millions of lines of code, eclipsing the kernels themselves.
Not at all; I excluded this early era because you could _not_ be sure to find an OS that would support your graphics card at all, other than maybe what the BIOS supported. I am talking about the 90s because GPUs already had plenty of non-BIOS-supported features, like multiple CRTCs, weird fixed acceleration pipelines, weird coprocessors with random ISAs, and yet you could still find operating systems with 3rd party drivers supporting them.
It is a _perfectly_ distinguishable era. See how many OSes support 3D cards from that era, like the Intel i9xx series. Heck, FreeBSD itself qualifies, but so do BeOS and many others.
In addition, I am talking about the _kernel_ part, which by any logic should be ridiculously simple. E.g. this is not a compiler for a random ISA or anything like that. It is what in Linux you would call a DRM driver, and the only reason they are complex and millions of LoC long is that the hardware is under-specified, by AMD and the rest. Most of the lines of AMD driver code in Linux are the register indices for each and every card submodel (yes, really, header files!), when clearly, in the past, they would just have standardized on one set and abstracted from it. Compare AtomBIOS support in cards from a decade ago with cards from today. It is literally easier today for a 3rd party to implement support for the more complicated parts of the GPU (e.g. running compute code!), which AMD more or less documents, than it is to support basic modesetting as it was done in the 00s. This has happened!
Hardware may be more complicated, but interfaces needn't be more complicated. This, I believe, is a symptom, not the cause.
> I just don't find it sad that my second desktop can't have an RX 9000 whatever in it. Where's the cut off line for that? Why not be sad that I can't jam a Fujitsu ARM processor into a PCIE slot as another type of satellite processor?
You do not find it sad that there is no longer any operating system other than Linux supporting any amount of hardware, simple or not?
Also, you call every non-Linux OS "esoteric" as a counter-argument to my point, yet you try to use support for definitely esoteric hardware (which would even be hard to acquire!) as an argument for your point, whatever it is? When I'm complaining that I can no longer rely on FreeBSD, literally the open OS with the 2nd most hardware support, to support basic hardware (!) from this decade, when in the past I could more or less rely on _all_ BSDs supporting it, as well as a myriad of other OSes, the argument that "oh well, it never supported hardware that is impossible to find in stores anyway, so I don't care" sounds pretty hollow.
Certainly even slightly deviating from the popular hardware has always resulted in diminishing returns, but today it is much worse, _except_ for Linux.
It's like this. You eventually got Starcraft 2 to work. That means Linux can run Starcraft 2; it's in the "Runs" category. Games like League of Legends, which have kernel-level anti-cheat, are in the "Won't Run" category.
But you don't want to sacrifice comfort or other things. The game should work just right on Linux.
I have an Nvidia card and mostly use Ubuntu MATE, also for gaming. It's even a problem now, because I would benefit from a hard divide between the gaming system and the working/studying system (setting up a separate gaming user is in my backlog).
On Linux it's mostly KSP and Factorio, but sometimes Deep Rock Galactic, Valheim, Euro Truck Sim or Total War: Warhammer 1/2/3. These games work flawlessly or with a <10% FPS hit.
There are games that kind of work - Ancestors: Humankind Odyssey, Cyberpunk, Hunt: Showdown. But you lose comfort, and I'd rather just play them on Windows than suffer decreased functionality on Linux. I know that some of it (definitely Cyberpunk) is only because of NVIDIA.
When buying games I usually don't buy Windows-only games unless there is a very good reason. And I quit League of Legends and WRC rally because of the anti-cheat scam. I feel scammed after putting a lot of money into a game and suddenly losing the ability to play it.
This shifting of goalposts just to cater to Linux explains it all.
Come on. If a customer bought a game that says it runs on Linux, they should be able to play it on Linux well, not just launch it and quit within 5 minutes.
I get that you have the ideology up in your head, but don't lie and embellish Linux to this degree. The attitude just turns people off.
> If a customer bought a game that says it runs on Linux, they should be able to play it on Linux well
None of those games say they run on Linux.
- Starcraft 2 is available for windows/mac: https://starcraft2.blizzard.com/en-us/
- Anno 1800 is available for windows: https://store.steampowered.com/app/916440/Anno_1800/
- Hogwarts Legacy is available for windows: https://www.hogwartslegacy.com/en-us/pc-specs
The fact that you can play most games on Linux these days is due to the Wine developers, Valve, and CodeWeavers. But those efforts are completely unrelated to the developers of those three games. Buying Starcraft 2 is not, in any way, purchasing a Linux game or transferring money to anyone working on Linux support.
Every game I've purchased that actually says it runs on Linux has worked beautifully on Linux (Stellaris and Factorio come to mind). Most Windows games work beautifully on Linux too, but Blizzard isn't lifting a finger to make it that way.
Yeah I hope I'm clear in that I'm not "against Linux" or "against people choosing to use Linux." I think Linux is awesome.
And I choose to use Windows for most of my personal computing, due to my gaming preferences, some needs (concussions plus poor eyesight mean things like scaling, brightness controls and refresh rate matter a lot to me), and my preference for DxO PhotoLab (which isn't Linux compatible).
"Linux" is really a family of operating systems, so people need to be more specific. It might run perfectly out of the box on consumer/gamer focused operating systems like Bazzite or SteamOS while perhaps requiring more work on something like Red Hat or NixOS. Those different operating systems all have wildly different approaches to how the OS actually works despite generally being able to run a largely overlapping set of programs.
It's like saying something works on "laptop" without specifying whether it's a Thinkpad or a Chromebook or a Macbook.
I can't comment generally, but I use NixOS and have had no issues playing games on Steam. The setup was laughably simple, just `programs.steam.enable = true;`, and Steam handles compatibility so well that I buy games without thinking "will this run?"
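In case it helps anyone, here's a minimal sketch of what that looks like in `configuration.nix`. `programs.steam.enable` is the option mentioned above; the commented-out extra is an assumption on my part, and exact option names can vary between NixOS releases, so check your release's documentation before relying on it.

```nix
{ config, pkgs, ... }:
{
  # Enabling the Steam module pulls in the Steam client and the 32-bit
  # graphics libraries it needs; Proton then handles most Windows games.
  programs.steam.enable = true;

  # Optional (assumed, verify against your NixOS release's options):
  # programs.steam.remotePlay.openFirewall = true;
}
```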
Actually there was one thing I couldn't do but this isn't unique to NixOS. I tried to install a GTAV mod that allows you to ride your smart bike trainer in game: GTBikeV. The mod can be installed, but the Bluetooth doesn't work. This is a WINE limitation.
I think it will happen once we get off LLMs and find something that more closely maps to how humans think, which is still not known AFAIK. So either never, or once the brain is figured out.
I'd agree that LLMs are a dead end to AGI, but I don't think that AI needs to mirror our own brains very closely to work. It'd be really helpful to know how our brains work if we wanted to replicate them, but it's possible that we could find a solution for AI that is entirely different from human brains while still having the ability to truly think/learn for itself.
> ... I don't think that AI needs to mirror our own brains very closely to work.
Mostly agree, with the caveat that I haven't thought this through in much depth. But the brain uses many different neurotransmitter chemicals (dopamine, serotonin, and so on) as part of its processing; it's not just binary on/off signals traveling through the "wires" made of neurons. Neural networks as an AI system reproduce only a tiny fraction of how the brain works, and I suspect that's a big part of why, even though people have been playing around with neural networks since the 1960s, they haven't had much success in replicating how the human mind works: those neurotransmitters are key to how we feel emotion, and even to how we learn and remember things. Since neural networks lack a system to replicate how the brain feels emotion, I strongly suspect that they'll never be able to replicate even a fraction of what the human brain can do.
For example, the "simple" act of reaching up to catch a ball doesn't involve doing the math in one's head. Rather, it's strongly involved with muscle memory, which is strongly connected with neurotransmitters such as acetylcholine and others. The eye sees the image of the ball changing in direction and subtly changing in size, the brain rapidly predicts where it's going to be when it reaches you, and the muscles trigger to raise the hands into the ball's path. All this happens without any conscious thought beyond "I want to catch that ball": you're not calculating the parabolic arc, you're just moving your hands to where you already know the ball will be, because your brain trained for this since you were a small child playing catch in the yard. Any attempt to replicate this without the neurotransmitters that were deeply involved in training your brain and your muscles to work together is, I strongly suspect, doomed to failure because it has left out a vital part of the system, without which the system does not work.
Of course, there are many other things AIs are being trained for, many of which (as you said, and I agree) do not require mimicking the way the human brain works. I just want to point out that the human brain is way more complex than most people realize (it's not merely a network of neurons, there's so much more going on than that) and we just don't have the ability to replicate it with current computer tech.
Nobody can know, but I think it is fairly clearly possible without signs of sentience that we would consider obvious and indisputable. The definition of 'intelligence' is bearing a lot of weight here, though, and some people seem to favour a definition that makes 'non-sentient intelligence' a contradiction.
As far as I know, and I'm no expert in the field, there is no known example of intelligence without sentience. Actual AI is basically algorithms and statistics simulating intelligence.
Definitely a definition / semantics thing. If I ask an LLM to sketch the requirements for life support for 46 people, mixed ages, for a 28 month space journey… it does pretty good, “simulated” or not.
If I ask a human to do that and they produce a similar response, does it mean the human is merely simulating intelligence? Or that their reasoning and outputs were similar but the human was aware of their surroundings and worrying about going to the dentist at the same time, so genuinely intelligent?
There is no formal definition to snap to, but I’d argue “intelligence” is the ability to synthesize information to draw valid conclusions. So, to me, LLMs can be intelligent. Though they certainly aren’t sentient.
Can you spell out your definition of 'intelligence'? (I'm not looking to be ultra pedantic and pick holes in it -- just to understand where you're coming from in a bit more detail.) The way I think of it, there's not really a hard line between true intelligence and a sufficiently good simulation of intelligence.
I would say that "true" intelligence will allow someone/something to build a tool that never existed before, while simulated intelligence will only allow someone/something to reproduce tools that are already known. I would draw a distinction between someone able to use all his knowledge to find a solution to a problem using tools he knows of, and someone able to discover a new tool while solving the same problem.
I'm not sure the latter exists without sentience.
I honestly don't think humans fit your definition of intelligent. Or at least not that much better than LLMs.
Look at human technology history...it is all people doing minor tweaks on what other people did. Innovation isn't the result of individual humans so much as it is the result of the collective of humanity over history.
If humans were truly innovative, should we not have invented, for instance, at least a way of organizing society and economics that was stable, by now? If anything surprises me about humans, it is how "stuck" we are in the mold of what other humans do.
Circulate all the knowledge we have over and over, throw in some chance, some reasoning skills of the kind LLMs demonstrate every day in coding, have millions of instances most of whom never innovate anything but some do, and a feedback mechanism -- that seems like human innovation history to me, and does not seem like demonstrating anything LLMs clearly do not possess. Except of course not being plugged into history and the world the way humans are.
One of my uncles was asked to stop bringing his rifle to high school because he and one of the teachers kept talking about hunting in the parking lot and getting to class late. The principal felt they were likely to at least make it into the building on time if they weren't chatting in the parking lot about their rifles/hunts/etc.
People used to have an insane amount of freedom and things generally went better.
I was in Cub Scouts in the early 90s and got a Swiss Army knife. I thought it would be cool to show it off to the kids on the bus. It got confiscated by the principal and I was suspended for one day. I think I got off light. I can’t imagine what would happen these days.
Absolutely. I would also walk down the public roads to get from one field to another, and nobody said anything. It was quite normal in the rural Midwest. You'll probably find lots of true stories online as well about kids arriving at school, checking their rifle with the principal at the beginning of class, and then getting it back at the end of the day.
Check the gun with the principal?! No, you leave it on the gun rack in the back of the pickup, and lock the truck door like normal people at my high school. :-)
Dang, seems like a completely different world from the one I live in. Honestly I would prefer it if we were able to teach our kids personal responsibility to this level. I actually believe people can be that mature by age 7, and you know whether a kid is a rule breaker or not by that point.
You know what OS doesn't handle the notch? OSX. It happily throws the system tray icons right back there, with only an obscure workaround to bring them back. Software quality at Apple these days…