Hacker News | BearOso's comments

The cable length is only for the spec. You can get longer cables that achieve the higher bandwidth, they're just not certified for that.

Right, so per spec it is a downgrade.

And? The question stands, why is the USB 4 spec a downgrade?

Probably because with USB 3.2 2x2 they were reviewing too many longer cables that didn't meet the requirements, so they lowered the length so companies didn't submit them only to fail to get certified. It's worth noting that 1.2m is now in the USB4 spec.

It's mostly explained if you go to the project page. For me, the hardest thing about something like this is gleaning the Microsoft driver APIs. In the 9x days, Microsoft documentation was not very thorough and was difficult to access. It's still not pleasant.

It's a good use for AI.

Have the model spit out example programs to study the API.


This was using an exploit already fixed in a recent version and publicly known. It's worthless on the black market or as a bug bounty.

It is not worthless, unfortunately! The whole point of the blog is the patch gaps in the Chromium ecosystem.

> To reduce us to anything less is to deny the awesomeness of the cosmos itself.

Teacher: "Photosynthesis makes energy from water, CO2 and light. The mitochondria are the power centers of the cell."

Grade-schooler: "How do they work?"

Teacher: "Um. Um..."

Modern scientist: "Quantum entanglement and tunneling. We don't really understand any of it."


It's also far more expensive to run than we thought. That doesn't bode well when their loss-leader period is over and they need to start making money.


You have to maintain a completely separate implementation of AI-generated code that's translated from C, so it's not even idiomatic Zig.

Edit: And then I went to their repository and read commits like this https://github.com/hdresearch/ziggit/commit/31adc1da1693e402... which confirms it wasn't even looked over by a human.


I don't know zig well. What's raising flags for you?


The commit I linked shows that it didn't even read the user name and email from git's config file, but used a test name, which means it's woefully incomplete.
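For context, git reads the committer identity from an INI-style config file (`user.name` and `user.email` under a `[user]` section). A minimal sketch of what that parsing involves, using Python's `configparser` as a stand-in for git's own config parser (sample values are illustrative):

```python
import configparser

def read_git_identity(config_text: str) -> tuple[str, str]:
    """Parse user.name and user.email from INI-style git config text."""
    parser = configparser.ConfigParser()
    parser.read_string(config_text)
    return parser["user"]["name"], parser["user"]["email"]

# Illustrative contents of a ~/.gitconfig
sample = """
[user]
    name = Jane Doe
    email = jane@example.com
"""
name, email = read_git_identity(sample)
print(name, email)  # Jane Doe jane@example.com
```

Reading this file is table stakes for a git reimplementation; hardcoding a test identity instead means the porcelain was never exercised end to end.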

Then there's stuff like this: https://github.com/hdresearch/ziggit/blob/master/src/cmd_bra...

It's just one giant function. Sometimes big functions are necessary. This one is clearly AI generated and not very readable for a human. This is just from a quick glance.


This was orchestrated and developed by agents with verifications like the codebase compiling or git's CLI test suite passing.

That was so the commit authors don't all appear like blank accounts on GitHub


Surely "the commits are attributed to the user who creates them" is a pretty basic feature of the git CLI, and not something that you can add in as a fix later after posting your project to Github and writing a blog post about how much faster than git it is.

It's very easy to be faster than git's CLI if you don't have to do any of the things that git's CLI does!


They mention Anthropic, so I assumed something similar. At $5 per million tokens, 13 billion would cost $65,000. However, the image in the article shows over 17 billion used, which is $85,000. That's an entry-level programmer's yearly salary. It doesn't quite pass their code tests, and it's automatic code translation, so it's going to be a pretty direct transcription. There's still probably a lot of messy code to clean up. I'm not sure it's worth it.
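The arithmetic above, as a quick check (the per-token price is the comment's assumption, not a confirmed Anthropic rate):

```python
PRICE_PER_MILLION_TOKENS = 5.00  # assumed rate in USD, from the comment

def token_cost(tokens: int, price_per_million: float = PRICE_PER_MILLION_TOKENS) -> float:
    """Cost of a token count at a flat per-million-token price."""
    return tokens / 1_000_000 * price_per_million

print(token_cost(13_000_000_000))  # 65000.0
print(token_cost(17_000_000_000))  # 85000.0
```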


This is effectively what Fossilize for Steam can do. It records the shaders and pipeline structures at an intermediate level, then rebuilds them for the specific hardware. It also does distribution, so you always have ridiculously low shader compile times. I like it because it makes Proton better than running under Windows, since it eliminates shader stutter.
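The core idea behind a pipeline cache like that can be sketched as: key each shader/pipeline description by a stable hash per hardware target, compile once, and reuse the compiled blob on every later request. A toy illustration (not Fossilize's actual format or API):

```python
import hashlib

class PipelineCache:
    """Toy shader cache: compile each (source, hardware) pair once, reuse after."""

    def __init__(self) -> None:
        self._cache: dict[str, str] = {}
        self.compiles = 0  # counts how many real compiles happened

    def _key(self, source: str, hardware: str) -> str:
        return hashlib.sha256(f"{hardware}:{source}".encode()).hexdigest()

    def _compile(self, source: str, hardware: str) -> str:
        # Stand-in for an expensive driver compile; real caches store binary blobs.
        self.compiles += 1
        return f"compiled[{hardware}]:{source}"

    def get(self, source: str, hardware: str) -> str:
        key = self._key(source, hardware)
        if key not in self._cache:
            self._cache[key] = self._compile(source, hardware)
        return self._cache[key]

cache = PipelineCache()
cache.get("frag_main", "gpu-a")
cache.get("frag_main", "gpu-a")  # cache hit: no recompile
cache.get("frag_main", "gpu-b")  # different hardware target: recompile
print(cache.compiles)  # 2
```

Distributing the pre-recorded intermediate representation is what lets the cache be warm before the game ever runs on your machine.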


It was SimCity Classic.


That first bullet is a bit sketchy. Benchmarks, particularly Geekbench, may have increased 6x, but that figure is being manipulated.

The GPUs have become much larger, so 6.8x is believable there, as is the inclusion of a matmul unit boosting AI.

The 2.x numbers are the most realistic, especially because they represent actual workloads.


Even the Geekbench numbers from the link only roughly doubled, for both single- and multi-core CPU and the Metal GPU.

