An operating system whose main goal is not performance is forever going to be a toy of professors. Linux is a mess compared to the beautiful code that would comprise this OS, but it's a mess that runs fast on real hardware and lets people get stuff done. That matters.
An operating system whose main goal is not performance is forever going to be a toy of professors.
Be careful! That statement was once true of programming languages.
Over time, larger and larger numbers of people relaxed their expectations for performance in exchange for benefits like greater safety. Simultaneously, implementors invested in optimizing the performance of languages previously considered "slow."
Operating systems may well wind up with the same dynamic.
But only because hardware got much faster, with larger resources (memory, disk). I just did a Stupid Benchmark, compiling this code:
/*
* Standard 'C' demo... Say hello to the world
*
* Compile command: cc hello
*/
#include <stdio.h>
main()
{
    printf("Hello world\n");
}
(I know, not completely valid ANSI C code.) On my home system (32-bit 2.6GHz Intel, gobs of RAM, yada yada) it took GCC 0.2 seconds to compile and link the program statically (with the default dynamic linking it took 0.05 seconds). On the system I grabbed the code from (an operating system for the 8-bit MC6809) it took 35 seconds. And that was on a simulated MC6809 running on said 2.6GHz system, which, at approximately 50MHz, is still faster than a real 1MHz MC6809 system (I think you could get a 2MHz MC6809, but not much faster than that).
Even a state-of-the-art computer in 1980 (say, a Cray supercomputer) would have a hard time keeping up with a <a href="http://www.chrisfenton.com/homebrew-cray-1a/">hobbyist system of today</a>, and it's because of this increase in computing power that the "toy languages" of yesterday have become useful languages today.
But given that CPU speeds have leveled off, the easy performance gains are pretty much gone these days.
On the surface, you wouldn't think immutable data structures would offer any significant performance gains. But there are a lot of cool techniques and unforeseen consequences that emerge.
The Clojure library "om" is able to use reference equality (instead of a deep comparison) to check for diffs. This results in a 2x or 3x performance gain over mutable data structures.
So I agree with you: this will likely just be a toy OS. But if his design demonstrates huge advantages, the ideas could be adopted by mainstream operating systems.
As I mentioned elsewhere, such an OS could have important performance benefits. GC could be made to be incremental at a fundamentally new level. In fact, incremental GC could be nothing more than a series of defrag copies that proceed at a rate just faster than new object creation. Such a system would rock for writing real-time systems.
Important performance benefits compared to what? Systems that are very performance sensitive typically don't use garbage collectors to begin with. They instead rely on techniques to avoid generating much garbage in the first place.
You're making my point for me here. This kind of technology might make it possible to write even more performant soft real-time systems but still have the productivity benefits of GC.
I don't think I am. Trading performance for productivity is a completely different discussion. It sounded like the claim here was that immutability in the OS (whatever that means) might result in a system with better performance.
We have programming languages where everything is immutable (Haskell). This results in programming techniques that generate a lot of garbage. We have not seen that this results in programs with better performance than those that do not generate garbage to begin with. Why would it be different in an operating system?
Trading performance for productivity is a completely different discussion.
The point is that you could possibly buy more for less in this trade off. It's not just GC. It's a close to pauseless incremental GC that might become available.
It's not so much that it has to be in an OS, though it does seem it would work better in an OS.
I think this is becoming less true: people are willing to trade off performance for safety and flexibility, given the right circumstances. For example, running everything inside of virtualized Linux instances is less performant than running in bare-metal Linux, but virtualization is nonetheless taking off like crazy.
We're running everything on our own fully owned colocated servers, yet we're running everything in virtualised environments. Private clouds are attractive for many of us because of the management benefits of being able to use a VM as the unit of deployment. It also makes sense to increase isolation: colo space is expensive enough to justify servers that are much more powerful than many of the apps we run need (our typical new setups start at 16-24 cores these days, and I expect that to be 32-48 cores by year end), so we want to co-locate many apps on the same servers but don't want them to be able to interfere too much with each other.
In fact, we have some systems where things are even deployed in containers inside VMs for various reasons...
Oh I'd love to see him do it, because it would be a great research platform. Don't get me wrong. But that didn't seem to be the goal. Maybe I just interpreted the article wrong.
Yes, but his comment was about an OS needing to be pragmatic and designed with performance in mind (which Scala was), not about it not being immutable.
The goal of Linux was not performance, it was to create a free OS. The fact that it has become a high quality performant OS is a side-benefit (driven by external commercial factors, much later in its development timeline).
I'm not sure that was even the goal. Seems Linus just wanted to create it for the hell of it, didn't think it was going anywhere, and released it as free software just because... From everything I've read of the history of Linux it seems it was mostly an accident (a pleasant one) of history. (I use Linux BTW)
Finding new structures is important; see the issues with X vs Wayland. By re-ordering things you get far better performance even on limited systems (a Raspberry Pi, for instance). Sure, you lose other properties, but that's part of the compromise. I'd love to see a similar effect on the whole system by going immutable first. Add to that, it could give a smaller system that's easier to understand, secure, and change.
That's as silly as the claim that no one will use relational databases, because only humans can properly optimize queries and transactions are too slow to be useful.