Hacker News: bcrl's comments

There were more ~26+ years ago. gcc and egcs had some subtle register allocator bugs that would get tripped up under heavy register pressure on i386 that were the bane of my existence as a kernel developer at the time.

Please name a computer science program that has an ethics component.

Yes, I wish software developers were more like actual engineers in this regard.


All Computer Engineering & Systems Engineering programs in Canada require two ethics components (one before graduation, one at P.Eng licensure).

Sadly, in the USA, I believe most engineering ethics classes are optional electives, and it shows when you look at the graduating student body today.

Software was already far down the bloat path by the time the Core 2 Duo came out, so the upgrade didn't make all that much of a difference in feel given how much latency was caused by software performing random reads off a disk. That's why SSDs made such a huge difference.

Back in the MS-DOS days, the amount of data that needed to be read off disk while the OS booted was negligible, so a second or two on a fast 486 felt amazing compared to the incredibly slow grind of watching code execute on an 8086 or slow 80286. Software still had to run tolerably on an 8086, so the added resources of a newer, faster machine actually did improve the feel of the system.


Many of the merge lanes in California are insanely short compared to those in the rest of the world. The worst are the ones where the merge begins immediately before an overpass and the exit comes immediately after, leaving roughly the width of the overpass in which to change lanes. I found those infuriating when I used to visit friends in the Bay Area. The pattern where I live is the opposite (a long exit lane before the overpass and a long merge lane after) and provides far better margins of safety.


It's plausible that the AI companies have given up storing data for training runs and just stream it off the Internet directly now. It's probably cheaper to stream than to buy more SSDs and HDDs from a supply-constrained market at this point.


That this is a plausible explanation is... beyond horrifying to me.


Given the tectonic shift in priorities for Linux kernel development over the past decade, I'm willing to bet that many key developers would be more open to a microkernel architecture now than they were ~25 years ago. CPUs now have hardware features that reduce the overhead of MMU context changes, eliminating a significant part of the cost of using isolated address spaces to contain code. The Meltdown and Spectre attacks really forced the security issue, to the point where major performance costs to improve security became acceptable in a way that was not the case in the '90s or '00s.


To make things even more confusing, the high-density floppy introduced on the Amiga 3000 stored 1760 KiB.


At least there it stored exactly 3,520 512-byte sectors, or 1,760 KB. They didn't describe them as 1.76MB floppies.
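The capacity arithmetic can be checked directly (a small sketch; the "1.76MB" marketing figure comes from taking 1,760 binary kilobytes and dividing by 1,000, mixing binary and decimal units):

```python
# Amiga 3000 high-density floppy: 3,520 sectors of 512 bytes each.
sectors = 3520
sector_size = 512

total_bytes = sectors * sector_size       # 1,802,240 bytes
kib = total_bytes // 1024                 # 1,760 KiB exactly
mb_decimal = total_bytes / 1_000_000      # ~1.80 MB in strict decimal units
mb_marketing = kib / 1000                 # 1.76 "MB" by the mixed convention

print(total_bytes, kib, round(mb_decimal, 2), mb_marketing)
```

Note that the honest decimal figure would be ~1.80 MB; "1.76MB" only falls out of the mixed binary/decimal convention.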


Please read the article in full. The GPU die, where all the computations occur and the majority of the power is spent, will remain with TSMC.

TSMC plans to have its A14 process in high-volume production in 2028. It will include the backside power delivery introduced in their A16 process (expected to reach high-volume production in 2026/2027), which means it will be quite competitive with Intel.

https://semiwiki.com/wikis/industry-wikis/tsmc-a14-process-t... https://semiwiki.com/wikis/industry-wikis/%F0%9F%A7%A0-tsmc-...

There's an older article at https://www.igorslab.de/en/350-watts-for-nvidias-new-top-of-... which shows the breakdown of power consumption for GPUs. The GPU die itself accounts for only 230 W of the entire power budget.


The entire sentence is even less enthusiastic:

"The GPU die will remain with TSMC, but portions of the I/O die are expected to leverage Intel's 18A or the planned 14A process slated for 2028, contingent on yield improvements."

Reading between the lines: Nvidia will most likely design a TSMC version of those I/O die portions in case Intel fails.

Intel has a decades-long reputation of failing its would-be foundry customers. Whether or not Nvidia's ownership stake is sufficient to overcome the inertia within Intel that has produced those failures remains to be seen.


Thanks for publishing your blog! The articles are quite enlightening, and it's interesting to see how semiconductors evolved in the '70s, '80s and '90s. Having grown up in that era, I feel it was a great time to learn, as one could understand an entire computer; but details like this were completely inaccessible back then. Keep up the good work, knowing that it is appreciated!

A more personal question: is your reverse engineering work just a hobby or is it tied in with your day to day work?


Thanks! The reverse engineering is just for fun. I have a software background, so I'm entirely unqualified to be doing this :-) But I figure that if I'm a programmer, I should know how computers really work.


