
Working in open source, I've now heard about a wide variety of disabilities people have that mean they need to be aided by an LLM even to write the descriptions of their PRs.

I’m not sure what you’re trying to say? How did these people write PRs before?

There are many possibilities. There are assistive technologies of various levels of quality, there is human assistance, and then there is simply being unable to participate.

Chris McCausland says he relies a lot less on others due to AI.

It has been quite effective in showing how diversity can influence opinions when he has been on radio programs and offered his perspective. Conversations that start with the usual circular crapping on AI, which I'm sure everyone here has witnessed, become much more nuanced when he describes how his life has changed.

That's why diversity is important. Don't do it like Star Trek Discovery, which goes 'I know! Let's use diversity to solve this problem. Great! That was super effective! Now everybody go back to your minor roles.'


That's a recurring thing throughout Star Trek. Geordi's visor saved a whole civilization once. I'm not actually sure what you're referring to in Discovery, despite watching the whole thing.

Geordi's thing was mostly superpower-but-you-look-idiotic.

Discovery was just so ham-fisted when it tried to make points, and it missed massive opportunities where they could have been made organically because of the situation.

I really thought the first far-future season was going to be a comment on colonialism, but no, they just turned up, said they were the more civilised ones, and y'all should join their new improved Federation. The opportunity was just sitting there to show people figuring out their own culture and not appreciating an interloper dictating how their lives should be simply because they have a fancy starship.

Not to mention declaring their ship sentient because it dreams. It just screams 'conform to our expectations of what sentience should be and we will accept you as a person'. They portrayed the exact opposite of what they intended.

(sorry for the rant, I was mauled by a Federation as a child)


> Geordi's thing was mostly superpower-but-you-look-idiotic.

I always thought it was one of those banana hair clips, spray painted gold.

As an admirer of low budget creativity, it’s very inspiring. But it still looks ridiculous.


That's okay. I see what you mean, though. Starfleet Academy has done a decent job addressing that, with some issues I hope they'll deal with in season 2.

They meant that people like to find valid-sounding excuses to use AI for their writing.

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

https://news.ycombinator.com/newsguidelines.html


That I know. If I hear “I’ve only used it to clean it up” one more time, my head is going to explode.

You can still run larger MoE models by off-loading the expert weights to the CPU for token generation. They are by and large usable: I get ~50 tok/s with a Kimi Linear 48B (3B active) model on a potato PC plus a 3090.
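
In llama.cpp this is the --override-tensor / --n-cpu-moe family of flags on recent builds; through the C API it looks roughly like the sketch below. Treat it as a sketch only: the override struct has moved around between versions, and the GGUF filename is made up.

  // Keep the MoE expert tensors (named "...exps" in the GGUF) in system RAM
  // while attention and shared weights go to the GPU; the CLI equivalent is
  // roughly `-ngl 99 -ot "exps=CPU"`. Names assume a recent llama.cpp.
  #include "llama.h"
  #include "ggml-backend.h"

  int main(void) {
      llama_backend_init();

      // NULL-terminated list of (substring pattern, buffer type) overrides.
      struct llama_model_tensor_buft_override overrides[] = {
          { "exps", ggml_backend_cpu_buffer_type() },
          { NULL,   NULL },
      };

      struct llama_model_params mp = llama_model_default_params();
      mp.n_gpu_layers          = 99;         // everything else on the GPU
      mp.tensor_buft_overrides = overrides;  // experts stay on the CPU

      struct llama_model * model = llama_model_load_from_file(
          "kimi-linear-48b-a3b-q4_k_m.gguf", mp);  // hypothetical quant file
      if (!model) return 1;

      // ... create a context, tokenize, decode as usual ...

      llama_model_free(model);
      llama_backend_free();
      return 0;
  }

Token generation is memory-bandwidth bound and only the ~3B active parameters are touched per token, which is why this stays usable even with the experts in system RAM.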

Sure. “Tell me a joke”

Damn I’m jealous that they figured out how to pay their contributors. I’ve been toiling away for free


They already have, with Qwen3.5.


I agree with the previous post that there's hope for a convergence point in the not-too-distant future where consumer hardware can run powerful models.

At the moment, the 397Bn Qwen3.5 model (which I assume is what you're referring to) is still out of reach for most consumers to run locally: the only relatively straightforward path (i.e. discounting custom Threadripper builds) to running it would be a 512GB Mac Studio.

However, in a generation or two (of hardware and models) maybe we'll see convergence, with more hardware available with 300-400GB of memory for more approachable money (a tough sell right now, I accept, with memory prices as they are) and models offering great performance in this size range.
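
Back-of-envelope arithmetic (mine, assuming a ~4-bit quant, so treat it as a rough floor):

  397e9 params × ~0.5 bytes/param ≈ 200 GB of weights

plus KV cache and OS overhead, which is why 512GB is the comfortable tier today and 300-400GB is about the minimum for this class of model.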


I was referring to the 35B version. It is surprisingly good for its size. You can use it for implementation tasks without it going off the rails.


What do you use for sub-50ms inference?


Could be bank statement line-item classification.

Honestly you can run this on a 16GB VRAM GPU with llama.cpp. Just try it!


One often-overlooked fact is that ggml, the tensor library that runs llama.cpp, is not based on PyTorch; it's just plain C/C++. In a world where PyTorch dominates, it shows that alternatives are possible and worth pursuing.
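
To give a flavour of how minimal it is, here's a sketch against the classic ggml C API: build a tiny graph that adds two vectors and compute it on the CPU. The API shifts between releases, so exact names may differ in current ggml.

  // No Python, no autograd framework: allocate an arena, declare tensors,
  // build a compute graph lazily, then evaluate it with a thread pool.
  #include "ggml.h"
  #include <stdio.h>

  int main(void) {
      struct ggml_init_params params = {
          .mem_size   = 16 * 1024 * 1024,  // one arena for tensors + graph
          .mem_buffer = NULL,
          .no_alloc   = false,
      };
      struct ggml_context * ctx = ggml_init(params);

      struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
      struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
      for (int i = 0; i < 4; i++) {
          ggml_set_f32_1d(a, i, (float) i);  // a = {0,1,2,3}
          ggml_set_f32_1d(b, i, 10.0f);      // b = {10,10,10,10}
      }

      struct ggml_tensor * c  = ggml_add(ctx, a, b);  // lazy: just adds a node
      struct ggml_cgraph * gf = ggml_new_graph(ctx);
      ggml_build_forward_expand(gf, c);
      ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/4);

      for (int i = 0; i < 4; i++)
          printf("%.1f ", ggml_get_f32_1d(c, i));     // 10.0 11.0 12.0 13.0
      printf("\n");

      ggml_free(ctx);
      return 0;
  }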


Holy smokes we're cooked.


I immediately flagged it. But it doesn't matter much. No one has skin in the game of commenting on HN anyway.


Yeah, that’s an LLM, isn’t it? Commenting on outsourcing judgement. The dead internet is real.


Maintainers' time is a scarcer resource than free tokens. I would much rather get back the time I spent reading those PRs.

