
I've only ever used QNX in the form of Blackberry products (mostly the Playbook), so I'm afraid I don't know what the advantages of it would be compared to Linux or something.

I know it's a microkernel which is inherently cool to me, but I don't know what else it buys you.

Can anyone here give me a high-level overview of why QNX is cool?


QNX is hard realtime. At one point, its kernel had O(1) guarantees for message passing and process switching. It could have been rewritten without any loops. I'm not sure if that's still true.
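
The core of it is a tiny set of synchronous message-passing primitives. A rough sketch of what that looks like from the application side (from memory, so treat names and details as approximate; the server's pid and channel ID are assumed to be discovered out of band, e.g. via a name service):

  /* Sketch of QNX Neutrino-style synchronous message passing.
     The server owns a channel; a client attaches a connection to it and
     MsgSend() blocks until the server replies. */
  #include <sys/neutrino.h>   /* ChannelCreate, MsgReceive, MsgReply, MsgSend */
  #include <sys/types.h>
  #include <stdio.h>

  /* Server process: receive requests on a channel and reply to each one. */
  void server(void) {
      int chid = ChannelCreate(0);
      char msg[64];
      for (;;) {
          int rcvid = MsgReceive(chid, msg, sizeof(msg), NULL);
          /* ...handle the request... */
          MsgReply(rcvid, 0, "ok", 3);   /* unblocks the sender */
      }
  }

  /* Client process: attach to the server's channel and send a request.
     MsgSend() stays blocked until the server's MsgReply() runs. */
  void client(pid_t server_pid, int server_chid) {
      int coid = ConnectAttach(0 /* local node */, server_pid, server_chid,
                               _NTO_SIDE_CHANNEL, 0);
      char reply[64];
      MsgSend(coid, "hello", 6, reply, sizeof(reply));
      printf("server said: %s\n", reply);
  }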

It's also really compact. This used to be a great selling point for underpowered car infotainment systems. Some cars had around 1 MB of RAM for their infotainment, yet they were able to run fairly complex media systems.

QNX is also used for non-UI components, just as a good realtime OS.


I think it is mostly used for non-UI stuff. I could be wrong, but outside of car infotainment I've never seen it used for UI work. Mostly it just sits headless, quietly running some branch of industry that we all depend on. The joke used to be that if QNX had a missed Y2K bug, civilization would end, and never mind Windows, because you wouldn't have any water, food, energy, or transportation anymore.

Yep. QNX was better than anything else around 2000. VxWorks was technically slimmer and more reliable, but QNX had a real mostly-POSIX-compatible environment. You could develop/debug the code on QNX itself and deploy it on the devices.

They were also early adopters of Eclipse, which was the "default IDE" before the advent of VS Code.


I've used VxWorks as well. Yes, it was slimmer (a lot slimmer, actually), but I would disagree that it was more reliable. QNX supported a ton of hardware out of the box, and if there ever was unreliability, as far as I've seen it was always comms-layer related, never the core OS or any other bits that you could put next to VxWorks and compare on a functional level. You just required a much bigger SBC to run it, and that's why we used VxWorks in the first place. But I would have been much happier with QNX. I'm imagining the modern-day equivalent of QNX running on a Raspberry Pi Pico, one of the larger Arduinos, or a Teensy. That would be an absolute game-changer.

Hard real time (so latency guarantees), microkernel (and they actually mean it, so your device drivers can't hose your system), standardized networked IPC including network transparency for all services, ISRs at the application level.
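
The network transparency bit is the part that always impressed me: with Qnet, other machines just show up in the pathname space under /net, so plain POSIX calls reach services on remote nodes with no special networking code. A rough sketch (the node name here is made up):

  /* Sketch: talking to a serial port on another Qnet node as if it were local.
     "othernode" is a hypothetical machine name on the Qnet network. */
  #include <fcntl.h>
  #include <unistd.h>

  int main(void) {
      int fd = open("/net/othernode/dev/ser1", O_RDWR);  /* remote resource manager */
      if (fd != -1) {
          write(fd, "AT\r\n", 4);   /* same read/write API as a local device */
          close(fd);
      }
      return 0;
  }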

>IPC including network transparency

Sadly not anymore; Qnet was removed in 8.0.


Oh! I only worked with it commercially prior to that so I never got the memo. What an insanely stupid move. That was one of their USPs.

In general QNX was commercially mismanaged and technically excellent. I'm imagining a world where they clued in early on that an open-source real-time OS would have run circles around the rest of the offerings, and they'd have cleaned up on commercial licensing. Since the '80s they've steadily lost mindshare and market share, though I suspect they'll always be around in some form.


There's been talk about this on Reddit too, where our chief architect of QNX 8 broke down the decision. He mentioned it was ultimately a tough decision, but that in the end the cons outweighed the pros.

Hey, could you please post a link to the thread you're referring to? I'm guessing it had to do with the io-pkt to io-sock transition, but I couldn't find any information about that.

I've also noticed that all of the message passing system calls still accept the node ID. Are there plans to open up this interface to allow for implementation of custom network managers, maybe? I'd be very interested in exploring that.


Such decisions should always involve the customers. A chief architect that knocks one of the foundation stones out from under a building isn't doing the bureau they work for any favors.

People make fun of it, but I think the reason Unixey stuff can still use tools that have existed since the '70s [1] is that they're text-based. Every OS has its own philosophy on how to do GUI stuff, and as such GUI programs have to do a lot of bullshit to migrate, but every OS can handle text in one form or another.

When I first started using Linux I used to make fun of people who were stuck on the command line, but now pretty much everything I do is a command line program (using NeoVim and tmux).

[1] Yes, obviously with updates but the point more or less still stands.


And when everything is a text file you have (optimally) a human-readable single source of truth on things... Very important when things get complicated and layered. In GUI stuff your only option is often to start anew, make the same movements as the first time, and hope you end up with what you want.

Hey! I actually wrote a thing to make the Swaybar a little more "complete" (e.g. battery status, currently selected program, clock, inspirational quote from ChatGPT, etc): https://git.sr.ht/~tombert/swaybar3

Not going to claim it will change the world or anything, but this runs perpetually with Sway and according to System Monitor it hovers at a little less than a megabyte of RAM. You can set how often you want things to update, and add as many sections as you'd like, and it's easy to create extra modules if you are so inclined (though not as easy as the Clojure version since I haven't found an implementation of multimethods for Rust that I like as much).
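
For anyone wondering what a thing like this actually does under the hood: swaybar's status command speaks the swaybar/i3bar JSON protocol (if I'm remembering it right), i.e. a header object followed by an endless array of "status lines", each of which is an array of blocks. The linked project is in Rust, but a minimal sketch of just the protocol, independent of that project, looks roughly like this (full_text is the only required field, as far as I recall):

  /* Minimal sketch of a swaybar/i3bar status generator: emit a header,
     then one JSON array of blocks per update, forever. */
  #include <stdio.h>
  #include <time.h>
  #include <unistd.h>

  int main(void) {
      printf("{\"version\":1}\n[\n");          /* protocol header + open the stream */
      for (;;) {
          char buf[64];
          time_t now = time(NULL);
          strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", localtime(&now));
          /* One "status line": an array of blocks; each block needs full_text. */
          printf("[{\"full_text\":\"%s\"}],\n", buf);
          fflush(stdout);
          sleep(5);                             /* update interval */
      }
  }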


> It's very much based on reason and law.

I have no interest in the rest of this argument, but I think I take a bit of issue with this particular point. I don't think the law is fully settled on this in any jurisdiction, and it certainly isn't in the United States.

"Reason" is a more nebulous term; I don't think that training data is inherently "theft", any more than inspiration would be even before generative AI. There's probably not an animator alive that wasn't at least partially inspired by the works of Disney, but I don't think that implies that somehow all animations are "stolen" from Disney just because of that fact.

Where you draw the line on this is obviously subjective, and I've gone back and forth, but I find it really annoying that everyone is acting like this is so clear-cut. Evil corporations like Disney have been trying to use this logic for decades to abuse copyright and outlaw being inspired by anything.


It can be based on reason and law without being clear cut - that situation applies to most of reason and law.

> I don't think that training data is inherently "theft", any more than inspiration would be even before generative AI. There's probably not an animator alive that wasn't at least partially inspired by the works of Disney ...

Sure, but you can reason about it, such as by using analogies.


Producing something interesting has never been an issue for a junior engineer. I built lots of stuff that I still think is interesting back when I was a junior, and I was neither unique nor special. Any idiot could always go to a book store, buy a book on C++ or JavaScript, and write software to build something interesting. High-school me was one such idiot.

"Senior" is much more about making sure what you're working on is polished and works as expected and understanding edge cases. Getting the first 80% of a project was always the easy part; the last 20% is the part that ends up mattering the most, and also the part that AI tends to be especially bad at.

It will certainly get better, and I'm all for it honestly, but I do find it a little annoying that people will see a quick demo of AI doing something interesting really quickly and then conclude that that is the hard part; even before GenAI, we had hackathons where people would make cool demos in a day or two, but there's a reason that most of those demos weren't immediately put onto store shelves without revision.


This is very true. And similarly for the recently passed era of googling, copying and pasting, and gluing together something that works. The easy 80% of turning specs into code.

Beyond this issue of translating product specs to actual features, there is the fundamental limit that most companies don't have a lot of good ideas. The delay and cost incurred by "old style" development was in a lot of cases a helpful limiter -- it gave more time to change course, and dumb and expensive ideas were killed or not prioritized.

With LLMs, the speed of development is increasing but the good ideas remain pretty limited. So we grind out the backlog of loudest-customer requests faster, while trying to keep the tech debt from growing out of control -- all while dealing with shrinking staff, caused by layoffs prompted either by the 2020-22 overhiring or simply by peacocking from CEOs who want to demonstrate their company's AI prowess by reducing headcount.

At least in my company, none of this has actually increased revenue.

So part of me thinks this will mean a durable role for the best product designers -- those with a clear vision -- and the kinds of engineers that can keep the whole system working sanely. But maybe even that will not really be a niche since anything made public can be copied so much faster.


Honestly I think a lot of companies have been grossly overhiring engineers, even well before generative AI; I think a lot of companies cannot actually justify having engineering teams as large as they do, but they keep all these engineers because OtherBigCo has a lot of engineers, and if they have that many then it must be important.

Intentionally or not, generative AI might be an excuse to cut staff down to something that's actually more sustainable for the company.


I'm not entirely convinced it's going to lead to programmers losing the power to command high salaries. Now that nearly anyone can generate thousands upon thousands of lines of mediocre-to-bad code, they will likely be doing exactly that without really understanding what they're doing, and as such there will always be a need for humans who can actually read and understand code when a billion unforeseen consequences pop up from deploying code without oversight.

I recently witnessed one such potential fuckup. The AI had written functioning code, except one of the business rules was misinterpreted. It would have broken in a few months' time and caused a massive outage. I imagine many such time bombs are being deployed in many companies as we speak.

Yeah; I saw a 29,000 line pull request across seventy files recently. I think that realistically 29,000 lines of new code all at once is beyond what a human could understand within the timeframe typically allotted for a code review.

Prior to generative AI I was (correctly) criticized once for making a 2,000 line PR, and I was told to break it up, which I did, but I think thousand-line PRs are going to be the new normal soon enough.


That’s the fault of the human who used the LLM to write the code and didn’t test it properly.

Exhaustive testing is hard, to be fair, especially if you don’t actually understand the code you’re writing. Tools like TLA+ and static analyzers exist precisely for this reason.

An example I use to talk about hidden edge cases:

Imagine we have this (pseudo)code

  fn doSomething(num: int) {
    if num % 2 == 0 {
      return Math.sqrt(num)
    } else {
      return Math.pow(num, 2)
    }
  }
Someone might see this function and unit test it based on the if statement, like:

    assert(doSomething(4) == 2)
    assert(doSomething(3) == 9)
These tests pass, it’s merged.

Except there’s a bug in this; what if you pass in a negative even number?

Depending on the language, you will either get an exception or maybe a complex answer (which is not usually something you want). The solution in this particular case would be to add a conditional, or more simply just make the type an unsigned integer.
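
To make the "add a conditional" option concrete, here's a sketch (in C rather than the pseudocode above; returning NAN and setting errno is just one arbitrary way to handle the bad input):

  /* Guarded version: reject negative input before it reaches sqrt(). */
  #include <math.h>
  #include <errno.h>

  double doSomething(int num) {
      if (num < 0) {
          errno = EDOM;   /* signal a domain error instead of silently */
          return NAN;     /* taking sqrt() of a negative number */
      }
      if (num % 2 == 0) {
          return sqrt((double)num);
      }
      return pow((double)num, 2);
  }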

Obviously this is just a dumb example, and most people here could pick this up pretty quick, but my point is that sometimes bugs can hide even when you do (what feels like) thorough testing.


I feel like Christmas is so secularized and commercialized that it doesn't have a lot to do with Christianity anymore.

I'm not saying this as a bad thing; I'm not and have never been religious but I still love Christmas.


I'm not sure; sometimes being an experienced dev makes you gravitate towards the lazy solutions that are "good enough". Senior engineers are often expected to work at a rate that precludes solving interesting problems, and so the dumber solution will often win; at least that's been my experience, and it's what I tell myself to go to sleep at night when I get told for the millionth time that the company can't justify formal verification.

I understand what you're saying and certainly I've come up against that myself. I didn't intend my comment to be super pejorative.

Fabrice is an absolute legend. Most people would be content with just making QEMU, but this guy makes TinyC and FFmpeg and QuickJS and MicroQuickJS and a bunch of other huge projects.

I am envious; I will never be anywhere near his level of productivity.


Not to detract from his status as a legend, but I think the kind of person that singlehandedly makes one of these projects is exactly the kind of person that would make the others.

I forgot about FFmpeg (thanks for the reminder), but my first thought was "yup that makes perfect sense".


Sure, they're not unrelated or anything, but at the same time, they're all really important, huge projects.

Not just programming, either; he came up with a faster formula for calculating the nth hex digit of pi (a refinement of the BBP digit-extraction approach).
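
For reference, I believe the formula in question (Bellard's formula, a BBP-type series that converges noticeably faster than the original BBP one) is:

  \pi = \frac{1}{2^6} \sum_{n=0}^{\infty} \frac{(-1)^n}{2^{10n}}
        \left( -\frac{2^5}{4n+1} - \frac{1}{4n+3} + \frac{2^8}{10n+1}
               - \frac{2^6}{10n+3} - \frac{2^2}{10n+5} - \frac{2^2}{10n+7}
               + \frac{1}{10n+9} \right)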

I know it's not true, but it would be funny if Bellard had access to AI for 15 years (time-traveler, independent invention, classified researcher) and that was the cause of his superhuman productivity.

AI will let 10,000 Bellards bloom - or more.


I think Haiku is in that "last 5%" phase. They have something that is 95% of the way there and 95% cool, but frustratingly, that last 5% is really important; there's a lot of boring, thankless work with any software that has broad reach.

Most people don't like doing it, but in order for the operating system to be "good", you really need most of this unsexy stuff to work; you need to be able to easily install WiFi drivers, you need to support most modern video cards, you need to suss out the minutiae of the graphics APIs, you need to test every possible edge case in the filesystem, you need to ensure that file associations are consistent, etc.

I've mentioned this before, but this is part of what I respect so much about the Wine project. It's been going on for decades, each release gets a little better, and a lot of that work is almost certainly the thankless boring stuff that is absolutely necessary to get Wine to be "production ready".

I ran Haiku a bit on an old laptop, and I do actually like it. It's ridiculously fast and snappy (even beating Linux in some cases), and I really do wish them the best, but as of right now I don't think it's viable quite yet. I'm not 100% sure how they're going to tackle GPU drivers (since GPU drivers are almost an entire OS in their own right), but I would love to have something FOSS that takes us out of the codified mediocrity of POSIX.

