jfindley's comments

I tend to look at the grinder and also the choice of the beans (roast level, consistency, chips). As another commenter pointed out you do occasionally get places that will buy a super fancy machine but have no idea what to do with it. It's rarer to spend loads on a fancy grinder if you don't know what you're doing.

100%, I just don’t have an eye yet for commercial grinders :)

io_uring is in a curious place. Yes, it does offer significant performance advantages, but it continues to be such a consistent source of bugs - many with serious security implications - that it's questionable whether it's really worth using.

I do agree that it's a bit dated and today you'd do other things (notably SO_REUSEPORT); I just feel that io_uring is a questionable example.
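
(For reference, the SO_REUSEPORT route needs very little code - roughly something like this untested, Linux-specific sketch, where each worker process just opens its own listening socket on the same port:)

    import socket

    # Each worker process creates its own listening socket on the same port.
    # With SO_REUSEPORT the kernel load-balances incoming connections across
    # all sockets bound to that address, so no shared accept queue is needed.
    def make_listener(port: int) -> socket.socket:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
        s.bind(("0.0.0.0", port))
        s.listen(128)
        return s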


> continues to be such a consistent source of bugs - many with serious security implications... just feel that io_uring is a questionable example.

Are you saying this as someone with experience, or is it just a feeling? Please give examples of recent bugs in io_uring that have security implications.


There are a couple of notable examples of projects[0] and companies[1] that have grown tired of it and no longer use it.

It's genuinely difficult these days to pick out "real" vulnerabilities from kernel CVEs, as the kernel team quite reasonably feel that basically any bug can be a vulnerability in the right situation, but the list of vulnerabilities in io_uring over the past 12 months[2] is pretty staggering to me.

0: https://github.com/containerd/containerd/pull/9320
1: https://security.googleblog.com/2023/06/learnings-from-kctf-...
2: https://nvd.nist.gov/vuln/search#/nvd/home?offset=0&rowCount...


Not OP, and I'm no expert in the area at all, but I _do_ have a feeling that there have been quite a few such issues posted here and elsewhere that I read in the last year.

https://www.cve.org/CVERecord/SearchResults?query=io_uring seems to back that up. Only one relevant CVE listed there for 2026 so far, versus more than two per month on average in 2025. Caveat: I've not looked into the severity and ease of exploit for any of the issues listed.


Did you read the CVEs? Half of these aren't vulnerabilities. One allows the root user to create a kernel thread and then block its shutdown for several minutes. One is that if you do something that's obviously stupid, you don't get an event notification for it.

Remember that the Linux kernel has a policy of assigning a CVE to every single bug, in protest at the stupid way CVEs were being assigned before that.


> Did you read the CVEs?

You obviously didn't read to the end of my little post, yet feel righteous enough to throw that out…

> One allows the root user to create a kernel thread and then block its shutdown for several minutes.

Which, as part of a compromise chain, could cause a DoS issue that might be able to bypass common protections like cgroup-imposed limits.


If we apply risk/reward analysis, how probable is such a chain of exploits? If you've already got local root, you might as well do a little more than a simple DoS.

Depending on how much performance would be gained by using io_uring in a particular case, and how many layers of protection exist around your server, it might be a risk worth taking.


Is there any way to have something like a distance blur? e.g. as rays travel further you reduce their number, subsample, then apply a Gaussian (or algorithm of choice) blur across those that return, increasing in intensity as the angular spacing between rays gets coarser?

It'd be really neat to have some way of enabling really long-distance raytraced voxels so you can make planet-scale worlds look good, but as far as I'm aware no one's really nailed the technical implementation yet. A few companies and engines seem to have come up with pieces of what might end up being the final puzzle, but I've not seen anything close to a complete solution yet.
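
Something like this toy sketch is the kind of thing I mean - the distance-to-sample-count and blur-sigma mapping is entirely made up, purely to illustrate the idea:

    import math

    # Toy mapping from hit distance to (rays per pixel, blur sigma).
    # The constants are arbitrary illustrations, not tuned values.
    def lod_for_distance(distance_m: float) -> tuple[int, float]:
        # Halve the ray count every time the distance doubles past 100 m.
        doublings = max(0.0, math.log2(max(distance_m, 1.0) / 100.0))
        rays = max(1, int(16 / (2 ** doublings)))
        # Blur more aggressively as the angular spacing between rays grows.
        sigma = 0.5 * doublings
        return rays, sigma

    for d in (50, 200, 800, 3200):
        print(d, lod_for_distance(d))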


Yup, you could blur, but it's not cheap, and it doesn't feel very satisfying to look at blurry stuff in the distance.

We have a "depth of field" implementation for when you're in dialog with an NPC. There it looks nice, because you're focused on one thing. But when looking around its not that great.

Ideally you want it close to native res in the distance, but without any wobble produced by noise as you move. This is really hard.


Do note though that, AIUI, these are all E-cores: they have poor single-threaded performance and won't support things like AVX-512. That is going to skew your performance testing a lot. Some workloads will be fine, but for many users that are actually USING the hardware they buy this is likely to be a problem.

If that's you, then the Granite Rapids-AP platform that launched prior to this can hit similar numbers of threads (256 for the 6980P). There are a couple of caveats to this though - firstly that there are "only" 128 physical cores, and if you're using VMs you probably don't want to share a physical core across VMs; secondly that it has a 500W TDP and retails north of $17000, if you can even find one for sale.

Overall once you're really comparing like to like, especially when you start trying to have 100+GbE networking and so on, it gets a lot harder to beat cloud providers - yes they have a nice fat markup but they're also paying a lot less for the hardware than you will be.

Most of the time when I see takes like this it's because the org has all these fast, modern CPUs for applications that get barely any real load, and the machines are mostly sitting idle on networks that can never handle 1/100th of the traffic the machine is capable of delivering. Solving that is largely a non-technical problem not a "cloud is bad" problem.


These Intel Darkmont cores are in a different performance class than the (Crestmont) E-cores used in the previous generation of Sierra Forest Xeon CPUs. For certain workloads they may have close to double the performance per core.

Darkmont is a slightly improved variant of the Skymont cores used in Arrow Lake/Lunar Lake, and its performance is very similar to that of the Arm Neoverse V3 cores used in Graviton5, the latest generation of custom AWS CPUs.

However, a Clearwater Forest Xeon CPU has many more cores per socket than Graviton5, and it also supports dual-socket motherboards.

Darkmont also has higher performance than the older big Intel cores, like all the Skylake derivatives, including for AVX-using programs, so it is no longer comparable with the Atom series of cores from which it evolved.

Darkmont is not competitive in absolute performance with AMD Zen 5, but for the programs that do not use AVX-512 it has better performance per watt.

However, since AMD has started to offer AVX-512 for the masses, the number of programs that have been updated to benefit from AVX-512 is increasing steadily, and among them are applications where it was not obvious that using array operations would enhance performance.

Because of this pressure from AMD, it seems that this Clearwater Forest Xeon is the final product from Intel that does not support AVX-512. Both of the next two Intel CPUs support AVX-512: the Diamond Rapids Xeon, which might be launched before the end of the year, and the desktop and laptop CPU Nova Lake, whose launch has been delayed to next year (together with desktop Zen 6, presumably due to the memory shortage and production allocations at TSMC).
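
As an aside, if you want to check whether a particular Linux box actually exposes AVX-512 to userspace, a glance at /proc/cpuinfo is enough. This quick sketch only tests the base avx512f foundation flag; the individual extensions (VL, BW, VNNI, ...) have their own flags:

    # Rough check for the AVX-512 foundation flag on Linux.
    def has_avx512f() -> bool:
        try:
            with open("/proc/cpuinfo") as f:
                for line in f:
                    if line.startswith("flags"):
                        return "avx512f" in line.split()
        except OSError:
            pass
        return False

    print("AVX-512F available:", has_avx512f())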


E-cores aren't that slow; yesteryear's were already around Skylake levels of performance (clock for clock). Now one might say that's a 10+ year old uarch, true, but those ten years were the slowest ten years in computing since the beginning of computing, at least as far as sequential programs are concerned.


All the good commercial parametric CAD apps have an API that allows you to define models programmatically to avoid repetition, or do more complicated things like ensuring gear ratios are exactly correct. I'm not sure I entirely understand what you're getting at with the "stays in sync" part though.
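
As a trivial illustration of the gear-ratio case, the exact-ratio arithmetic can be done up front and the resulting tooth counts fed into whatever parametric model you're driving - this sketch isn't tied to any particular CAD package's API:

    from fractions import Fraction

    # Pick integer tooth counts that hit a target ratio exactly.
    # target_ratio = driven_teeth / driving_teeth
    def tooth_counts(target_ratio: str, min_pinion_teeth: int = 12) -> tuple[int, int]:
        ratio = Fraction(target_ratio)  # e.g. "3.6" -> 18/5
        driving, driven = ratio.denominator, ratio.numerator
        # Scale both counts up until the pinion is manufacturable.
        k = -(-min_pinion_teeth // driving)  # ceiling division
        return driving * k, driven * k

    pinion, gear = tooth_counts("3.6")
    print(pinion, gear, Fraction(gear, pinion))  # 15 54 18/5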


Unfortunately the index is the easy part. Transforming user input into a series of tokens that get used to rank possible matches and return the top N, based on likely relevance, is the hard part, and I'm afraid this doesn't appear to do an acceptable job with any of the queries I tested.
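
Even the most naive version of that pipeline ends up looking something like the toy TF-IDF-style scorer below - which is still nowhere near what a production engine has to do:

    import math
    import re
    from collections import Counter

    def tokenize(text: str) -> list[str]:
        return re.findall(r"[a-z0-9]+", text.lower())

    def top_n(query: str, docs: dict[str, str], n: int = 10) -> list[tuple[float, str]]:
        doc_tokens = {doc_id: Counter(tokenize(body)) for doc_id, body in docs.items()}
        num_docs = len(docs)
        scored = []
        for doc_id, counts in doc_tokens.items():
            score = 0.0
            for term in tokenize(query):
                df = sum(1 for c in doc_tokens.values() if term in c)
                if df:
                    # term frequency weighted by inverse document frequency
                    score += counts[term] * math.log(1 + num_docs / df)
            if score > 0:
                scored.append((score, doc_id))
        return sorted(scored, reverse=True)[:n]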

There's a reason Google became so popular as quickly as it did. It's even harder to compete in this space nowadays, as the volume of junk and SEO spam is many orders of magnitude worse as a percentage of the corpus than it was back then.


I am definitely not trying to compete with Google; instead I am offering an old-school "just search" engine with no tracking, personalization filtering, or AI.

It's driven by my own personal nostalgia for the early Internet, and to find interesting hidden corners of the Internet that are becoming increasingly hard to find on Google after you wade through all of the sponsored results and spam in the first few pages...


There may be a free CS course out there that teaches how to implement a simplified version of Google's PageRank. It's essentially just the recursive idea that a page is important if important pages link to it. The original paper for it is a good read, too. Curiously, it took me forever to find the unaltered version of the paper that includes Appendix A: Advertising and Mixed Motives, explaining how any search engine with an ad-based business model will inherently be biased against the needs of its users.[0]

[0] https://www.site.uottawa.ca/~stan/csi5389/readings/google.pd...
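
For anyone curious, the core of the algorithm really is just a few lines of power iteration. Here's a rough sketch over a tiny made-up link graph, using the paper's damping factor of 0.85 (it ignores dangling pages, so it's simplified even relative to the paper):

    # links: page -> list of pages it links to
    links = {
        "a": ["b", "c"],
        "b": ["c"],
        "c": ["a"],
        "d": ["c"],
    }

    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1 - damping) / len(pages) for p in pages}
            for page, outgoing in links.items():
                for target in outgoing:
                    new_rank[target] += damping * rank[page] / len(outgoing)
            rank = new_rank
        return rank

    print(pagerank(links))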


Nice find, will review!


It's not quite as simple as that though - in most places, especially California, water shortages are not a simple natural imbalance between the amount of rain that falls and how much flows out in rivers and streams.

If demand is far higher than supply due to overuse by industry that's definitely a water shortage - there isn't enough of it, and something is probably suffering as a result. I don't think that's a useful definition of drought though. If someone builds a massive factory consuming 100s of millions of gallons of water per day that's definitely going to cause a problem but I'm not sure it's reasonable to say that there's suddenly a drought.

I think the definition of drought is instead current rainfall compared to the historical average - which then leads to the question of whether the change is just that rainfall has been low for so long that the historical average has shifted, or whether rainfall has actually improved. I don't think the article addressed this, but I only skimmed it so maybe I missed it.
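
Put differently, the interesting number is something like "percent of normal", and the answer depends heavily on which baseline you pick - a toy illustration with made-up rainfall figures:

    from statistics import mean

    def percent_of_normal(recent_years, baseline_years):
        return 100 * mean(recent_years) / mean(baseline_years)

    # Made-up annual rainfall figures (inches), purely to show the baseline effect.
    rain = [22, 20, 18, 15, 14, 13, 12, 14, 13, 12]  # oldest -> newest
    fixed_baseline = rain[:5]     # long-ago "normal"
    rolling_baseline = rain[-5:]  # recent "normal"
    print(percent_of_normal(rain[-3:], fixed_baseline))    # looks like drought (~73%)
    print(percent_of_normal(rain[-3:], rolling_baseline))  # looks roughly normal (~102%)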


> If someone builds a massive factory consuming 100s of millions of gallons of water per day that's definitely going to cause a problem

Lots of factories in Washington, seemingly no problem.


Are you implying 800 miles worth of latitude, along with North Pacific weather in general, is irrelevant?


It's very relevant, and the next 200 years of settlers should probably take note, instead of just continuing to barrel into a place that was unreasonable to live in when it started and hasn't changed much in that regard.


If you want people to take your benchmark seriously, you need to provide a great deal more information on how those numbers are generated. "It's complicated, just trust me" isn't a good enough explanation.

If you want people to listen, you need to have a link where you explain what hardware you're using, what settings you're using, what apps/games you're running, what metrics you're using and how you compute your Magical Number.

My already high level of scepticism is compounded by some scarcely believable results, such as that, according to your testing, the i9-14900K and i9-13900K have essentially identical performance. Other, more reputable and established sources do not agree with you (to put it mildly).


Hey, I do try to make the site as transparent as possible - but I admit that the site does not make it obvious. For a doubt like this, go into the comparison of the two (https://www.pc-kombo.com/us/benchmark/games/cpu/compare?ids%...), where all the benchmarks that the two processors share are listed. The benchmark bars are clickable and go to the source.

It does get really complicated to address something like that when all comparisons are indirect. Thankfully, that's not the case here.

The 13900K and 14900K in games really have been that close, see https://www.computerbase.de/artikel/prozessoren/intel-core-i... for an example, where the two have a 2% FPS difference.
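
Conceptually the aggregation boils down to averaging relative performance over the benchmarks two parts share, along these lines (an illustrative sketch with made-up FPS numbers, not the site's actual code):

    from math import prod

    # fps[benchmark][cpu] -> average FPS in that benchmark
    def relative_score(fps: dict[str, dict[str, float]], cpu_a: str, cpu_b: str) -> float:
        shared = [b for b, r in fps.items() if cpu_a in r and cpu_b in r]
        ratios = [fps[b][cpu_a] / fps[b][cpu_b] for b in shared]
        # Geometric mean avoids one outlier benchmark dominating the result.
        return prod(ratios) ** (1 / len(ratios))

    fps = {
        "game1": {"14900K": 210, "13900K": 205},
        "game2": {"14900K": 144, "13900K": 142},
    }
    print(relative_score(fps, "14900K", "13900K"))  # ~1.02, i.e. about 2% apart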


The standard is... well, it IS indeed a standard, I guess you can't really argue with that, but it's a great deal more permissive than many people might hope or expect. https://ijmacd.github.io/rfc3339-iso8601/ is a wonderful illustration of some of the deeply silly time formats permitted by ISO 8601.
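
Two of the sillier-but-valid corners are ordinal dates and week dates - both legal ISO 8601, neither legal RFC 3339 - and Python's strptime happens to be able to parse them:

    from datetime import datetime

    # "2024-123"   -> the 123rd day of 2024 (ISO 8601 ordinal date)
    # "2024-W15-3" -> Wednesday of ISO week 15, 2024 (ISO 8601 week date)
    print(datetime.strptime("2024-123", "%Y-%j").date())
    print(datetime.strptime("2024-W15-3", "%G-W%V-%u").date())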


I don't seem to be able to select "hard" AI - is this just not implemented yet? It'd be nice to have a stronger AI, but I do realise that this is a lot easier said than done.


Yeah. Medium seems too easy. But I also can’t select hard mode.

