Hacker News | somat's comments

"From the dawn of the Space Age through the present, NASA has relied on resilient software running on redundant hardware to make up for physical defects, wear and tear, sudden failures, or even the effects of cosmic rays on equipment."

An interesting case study in this domain is to compare the Saturn V Launch Vehicle Digital Computer with the Apollo Guidance Computer.

Now the LVDC, that was a real flight computer: triple-redundant, every stage in the processing pipeline vote-confirmed, the works.

https://en.wikipedia.org/wiki/Launch_Vehicle_Digital_Compute...

Compare the AGC, with no redundancy: a toy by comparison. But the AGC was much faster and lighter, so they just shipped two of them (three if you count the one in the lunar module) and made sure it was really good at restarting fast.

There is a lesson to be learned here but I am not sure what it is. Worse is better? Can not fail vs fail gracefully?


I think the lesson is that redundancy can exist at different layers.

> There is a lesson to be learned here but I am not sure what it is.

Restart your Claude Code sessions as often as possible


> Worse is better?

Maybe, if you know what the tradeoffs are and are ready to deal with the deficiencies (by rebooting fast). And didn't they have issues with the lunar module's guidance computer on the first moon landing?


I think it is part of a more general problem. Nobody intends to make a terrible DSL; it is just a natural progression:

    1. We have a command line program.
    2. Command line args are traditionally parsed by getopt (or a close relative), so we use that; it's expected.
    3. Our command line program has grown tremendously in complexity, and our args are now effectively a domain-specific language.
    4. Congratulations: we are now shipping a language using a woefully inadequate parsing engine, with some of the worst syntax in existence.
See also: iptables, find.

I think it would behoove many of these programs to take a good look at what they are doing when they reach step 3 and invest in a real syntax and parser. It is fine to keep a command line interface, but you don't have to use getopt.


Not just modern machines: the Nintendo 64 was memory-bound under most circumstances, and as such many traditional optimizations (lookup tables, loop unrolling) can be slower on the N64. The loop-unrolling case is interesting: because the CPU has to fetch more instructions, unrolling puts more strain on the memory bus.

If curious: on the N64 the graphics chip is also the memory controller, so everything the CPU can do to stay off the memory bus frees up bandwidth for the graphics to do more graphics. This is also why the N64 has weird 9-bit RAM: it let them use an 18-bit pixel format while still taking only two bytes per pixel. For CPU requests the memory controller ignored the 9th bit, presenting a normal 8-bit byte.

They were hoping that by having high-speed memory (250 MHz, while the CPU ran at about 90 MHz) it could provide for everyone, and it did OK; there are some very impressive games on the N64. But on most of them the CPU is kept fairly lightly loaded: gotta stay off that memory bus.

https://www.youtube.com/watch?v=xFKFoGiGlXQ (Kaze Emanuar: Finding the BEST sine function for Nintendo 64)


The N64 was a particularly unbalanced design for its era, so nobody was used to writing code like that yet. Memory bandwidth hadn't been a limitation on previous consoles, so it's as if nobody had thought of it.

> This is also why the N64 has weird 9-bit RAM: it let them use an 18-bit pixel format while still taking only two bytes per pixel. For CPU requests the memory controller ignored the 9th bit, presenting a normal 8-bit byte.

The Ensoniq EPS sampler (the first version) used 13-bit RAM for sample memory. Why 13 and not 12? Who knows? Possibly because they wanted it "one louder", possibly because its big rival, the E-Mu Emulator series, used μ-law codecs, which have the same effective dynamic range as 13-bit linear.

Anyway, you read a normal 16-bit word using the 68000's normal 16-bit instructions, but only the upper 13 bits were actually backed by RAM; the rest were tied low. Haha, no code space for you!


The funny thing is that X11 can actually do heterogeneous DPI and Wayland can't.

Unfortunately you will never find yourself in a situation to actually use a mixed-DPI X11 setup (you lose your homogeneous desktop), and Wayland is better at spoofing it (for whatever reason, fractional scaling works better in Wayland).

http://wok.oblomov.eu/tecnologia/mixed-dpi-x11/

My favorite quote from that writeup:

"If you think this idea is a bit stupid, shed a tear for the future of the display servers: this same mechanism is essentially how Wayland compositors —Wayland being the purported future replacement for X— cope with mixed-DPI setups."


Yeah, I've done that: I used my Linux box for years with a 24" 1920x1200 screen and a 32" 4K screen next to each other.

Doing some basic mathematics and xrandr command-line wizardry to apply scaling factors to each display, I was able to get Xorg to render to a virtual framebuffer and treat the monitors as appropriately scaled windows onto it, so that dragging applications from one screen to the other didn't result in any noticeable change in size.

Worked pretty well.


It is a stupid law, but I feel people are overthinking this.

For compliance the OS has to provide an age category to applications and an interface for the user to enter this data. We already have an API for providing information to applications: it's called the filesystem. And an interface for entering the data: that's called the shell. So everything is already there. If the user lives in California and wants to be compliant (wait a minute, let me stop laughing), all they have to do is put a file somewhere with an age category in it. If the application can't find it, well, it's not their fault the law is stupid.


Yes, people are overthinking this.

Actually having a cross-distro way to specify an age group for parental control purposes would be very useful.

If the law changes to be about surveillance (which it isn't, _right now_), then distro maintainers will just not implement that.


You are underthinking this.

You described a technical solution to comply with this law. Yes, that's easy. The problem is the legal implications.


What are the legal implications?

I'm not a lawyer. But it seems to be the same as accepting the terms of service of some product you use or clicking on a "Yes I'm 18+" button to gain access, isn't it? If you somehow suffer from a negative outcome, it puts the blame clearly on you, from a legal standpoint, if you lied about your age or ignored the TOS.

It says nobody is liable for the signal being wrong, i.e. the parent is allowed to give the child an over-18 account or full access if they think that's appropriate.


How should maintainers think about it?

Without that file, I hope the age category defaults to 0. Also, I suppose the file's ctime should be subtracted from the current time and the difference added to the age, but maybe not if the special value 0 was entered.


Attributes exist due to XML's origin as a markup language. XML is actually (big surprise) a pretty good markup language, where the tags are sort of like function calls and the attributes are args, with little to no information needing to be gleaned from the text. The big sin was to say "hey, the tooling is getting pretty good for these SGML-like markup languages, let's use one as a structured data interchange format, it's almost the same thing." Now all the data is in the text, and the attributes are not just superfluous but actively harmful, as there is a weird extra data axis that people will aggressively use.

See also: PostScript. The Document Structuring Conventions being comments always bothered me. I mean surely, surely in a Turing-complete language there is somewhere to fit document structure information. Adobe: nah, we will jam it in the comments.

https://dn790008.ca.archive.org/0/items/ps-doc-struc-conv-3/...


Not sure it's a fair comparison. The spec says:

"Use of the document structuring conventions... allows PostScript language programs to communicate their document structure and printing requirements to document managers in a way that does not affect the PostScript language page description"

The idea being that those document managers did not themselves have to be PostScript interpreters in order to do useful things with PostScript documents given to them. Much simpler.

For example, a page imposition program, which extracts pages from a document and places them effectively on a much larger sheet, arranged in the way they need to be for printing 8- or 16- or 32-up on a commercial printing press, can operate strictly on the basis of the DSC comments.

To it, each page of PostScript is essentially an opaque blob that it does not need to interpret or understand in the least. It is just a chunk of text between one %%Page: comment and the next.

This is tremendously useful. A smaller scale of two-up printing is explicitly mentioned as an example on p. 9 of the spec.


XML makes for a pretty good markup language and an OK data interchange format (not a great fit, but the tooling is pretty good). But every single time I have seen it used as a programming language, I have found it deeply regrettable.

For comparison, JSON is a terrible markup language, a pretty good data interchange format, and, again, a deeply regrettable programming language. I don't know if anyone has put a programming language in straight JSON (I suspect they have, *shudders*), but Ansible has quite a few programming structures and is in YAML, which is JSON dressed in a config language's clothes.

However, as a counterpoint to my JSON indictment, it may be possible to make a decent language out of it. Look to Lisp: its S-expressions are a sort of data interchange format (roughly equivalent to JSON), and it is a pretty good language.


I still enjoy building my PCs, but I put them in 4U server chassis; they are built better and have sane airflow. I have not been 13 for a long time, and it is tricky to find non-RGB parts anymore. No windows on my case, but it still looks like someone is holding a rave through the gaps. Sigh.

For free, my main rant about desktop vs. server-grade motherboards: for a desktop system you really want a desktop-grade motherboard. Server-grade is expensive, takes forever to POST, the compute tends toward slow-and-wide vs. desktop's fast-and-tall, and the parts (RAM, CPU) compatibility tends to be much pickier. My gripe is: why is desktop motherboard airflow so bad? On a server board everything is aligned front to back: PCIe, RAM, and CPU cooler all the same way. On a desktop board the PCIe goes front to back, the RAM goes top to bottom, and it's a coin toss which way a CPU cooler will fit.


The requirement for unique user names is a little strange. I was putting together a small internal tool recently, and after a bit of thought decided to use an opaque internal ID for users and let the users pick and change their name and secret at will.

I think for a larger public service it would make sense to expose some sort of internal ID (or a hash of it: which Bob am I talking to?). People share the same name all the time; it is strange that we can't in our online communities.

