Hacker News | new | past | comments | ask | show | jobs | submit | addaon's comments

Also, not every airline has a 3-digit code. e.g. Aero Republica has the two-alphanum designator P5, but doesn't have a 3-digit code.

> typically you can just write the rest of the code with the roles of the values swapped

Today, yes; most modern instruction sets are pretty orthogonal, and you can use a value in any register symmetrically -- although even today, division instructions (if they exist!) are among the most likely to violate that expectation, along with instructions that work with the stack pointer. But in the XOR heyday, this was less true -- instruction sets were less orthogonal, and registers were more scarce. It's not unreasonable for an OS scheduler tick to do some work to figure out the newly-scheduled task's stack pointer in one register, and then need to swap it into an SPR or similar so that the return from interrupt returns to the new location, for example; and this is the exact type of place where the XOR trick occasionally has value.


It would be interesting to do a write-up like this on "modern microcontrollers." Some of the content is similar (some µc cores look relatively similar to µp cores with a 10-20 year lag), but there are differences too. Things that would come to mind for me:

1) Strategic pipeline lengths -- long pipelines drive throughput, short pipelines drive interrupt responsiveness. 5-stage pipelines are still popular for realtime cores.

2) Heterogeneous cores -- a mix of short- and long-pipeline cores on a single chip, with some optimized for responsiveness and some optimized for throughput. (This could actually be added to the µp article as well, discussing big.LITTLE style heterogeneity with some cores optimized for total throughput and some optimized for power efficiency.) Unlike in the µp case, this is paired with a general assumption that cores are usually developer-managed (asymmetric multiprocessing) rather than magically managed by a scheduler (symmetric multiprocessing). (Dedicated cores for low power come up in µcs too.)

3) Fast memories; some very fast memories. Everything fits in SRAM on chip. Some SRAM is tightly coupled to a specific core (tightly coupled memory), which gives as-fast-as-single-cycle access; some hangs off an AXI bus to allow sharing between cores, but adds a few cycles (and possible collisions) per access, making caches still relevant (which has not always been true for µcs). The µp developer's "memory accesses rule everything" approach to performance is not nearly as true on µcs.

4) Peripherals and accelerators dominate silicon area, and dominate system performance. (This can also be said of µps these days.) Proper use of DMA engines can completely change the solution to problems. Smart peripherals offload huge amounts of work from the core, making the core less important -- in some cases, it's really just there to configure the DMA engine and the peripherals. (This sounds an awful lot like cores on a µp just feeding GPUs these days.)

5) Topology awareness. Multiple AXI busses and peripheral busses; software needs to be aware of what peripheral or SRAM chunk hangs off what bus to maximize performance, minimize collisions, or even just to allow the peripheral to be used at all from a given core in a given power state. This has some similarities to NUMA awareness in µp development, but as with AMP vs SMP it's generally more visible to developers.

I could keep going... there's an article here.


The return value of foo(n), converted to bool, acts as the condition variable…

A truly excellent novel.

And the call-out to the Folio Society (editions at the intersection of books and works of art) is well earned. For those into such things, I'd suggest Centipede Press as another press that, at its best, goes even beyond Folio Society in quality and artistic merit, and has similarly great taste in selection -- this [1] stands out as two novellas that have been formative to me, and as an edition that both lives up to the original (including the back-to-back printing of the original paperback) and that I'll be happy to pass on.

[1] https://www.centipedepress.com/sf/babel17.html


I just recently got a Suntup edition of A Scanner Darkly. Another publisher whose bindery is itself a work of art.

Since you've clearly looked at this a bit... would you give a sentence or two comparing Idris, F*, and the other lesser-known players in this space (languages for both writing and formally verifying programs)? I find it a wide space to explore, and while ecosystem maturity seems like a huge deciding factor right now, I assume there are real and meaningful differences between the languages as well.

Idris is rather unique in that the development flow involves writing out your program as a trellis with "holes", and you work through your codebase interactively filling those holes. It feels like Haskell with an even more powerful type system and a companion expert system to alleviate the burden that the type-system's power entails. You're still basically always in program-land, mentally.

F* (and most other dependently-typed languages, or adjacent ones like Liquid Haskell) has a whole external SMT solver layer that lives outside of the language. Think of it as if SML modules were even less unified with the core language, and most of your time was spent in that layer. They're really not fun to try to make complex software with, just because the context-switching required at scale is borderline inhuman.

Lean has a unified proof-system in the language like Idris, but it has much the same grain as the languages with external SMT solvers. You're spending most of your mental time in proofs-land, thinking primarily about how to prove what you want to do. That's because of how Lean as a language is set up: you're basically centering all your reasoning around the goal. If there's a problem, you're adjusting the structure of your reasoning, changing your proof strategy, identifying missing lemmas, etc.
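A tiny illustration of that goal-centered style, in Lean 4 using only a core lemma:

```lean
-- The proof is organized around transforming the goal itself:
-- it starts as a + b = b + a, and `rw` rewrites it until it
-- closes. The reasoning lives in proofs-land, not program-land.
example (a b : Nat) : a + b = b + a := by
  rw [Nat.add_comm]
```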

You can kind of think of it as though Idris is "inside out" compared to most of the other dependently typed languages.


This feels like a knee-jerk reaction. While it may be a relevant critique of some news releases about academic research… this one literally contains a thumbnail with a link to a sufficiently-high-resolution image of the document. You can read it by clicking on the only image in the article.

This addresses the “short long tail” (known bounded variance due to the multiple physical operations underlying a single logical memory op), but for hard real time applications the “long long tail” of correctable-ECC-error-and-scrub may be the critical case.

Isn’t this a bit like spell-checking a Nigerian Prince email? The valuable eyeballs are the ones that don’t notice or don’t care.

Sure, if ranking is done purely based on clicks and not quality. I'm just thinking of it as a meta "loss" function in the AI context. So I'd say it's the passionate enthusiasts who care enough to provide feedback on such topics.

A remote doctor can prescribe.
