Working in open source, I've now heard from people with a wide variety of disabilities who rely on an LLM even to write the descriptions of their PRs.
The alternatives vary: assistive technologies of varying quality, human assistance, or simply being unable to participate.
Chris McCausland says he relies a lot less on others due to AI.
When he's appeared on radio programmes and offered his perspective, it has been quite effective at showing how diversity can influence opinions. Conversations that start with the usual circular crapping on AI (the kind I'm sure everyone here has witnessed) become much more nuanced once he describes how his life has changed.
That's why diversity is important. Don't do it like Star Trek Discovery, which goes: 'I know! Let's use diversity to solve this problem. Great! That was super effective! Now everybody go back to your minor roles.'
That's a recurring thing through Star Trek. Geordi's visor saved a whole civilization once. I'm not actually sure what you're referring to in Discovery despite watching the whole thing.
Geordi's thing was mostly superpower-but-you-look-idiotic.
Discovery was just so ham-fisted when it tried to make points, and it missed massive opportunities where those points could have been made organically through the situation.
I really thought the first extra-plus-future season was going to be a comment on colonialism, but no: they just turned up, said they were the more civilised ones, and that y'all should join their new improved Federation. The opportunity was just sitting there to show people figuring out their own culture and not appreciating an interloper dictating how their lives should be, simply because it has a fancy starship.
Not to mention declaring their ship sentient because it dreams. It just screams 'conform to our expectations of what sentience should be and we will accept you as a person.' They portrayed the exact opposite of what they intended.
(sorry for the rant, I was mauled by a Federation as a child)
That's okay. I see what you mean though. Starfleet Academy has done a decent job addressing that with some issues I hope they'll deal with in season 2.
You can still run larger MoE models by off-loading the expert weights to the CPU for token generation. They are by and large usable: I get ~50 tok/s on a Kimi Linear 48B (3B active) model on a potato PC plus a 3090.
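For llama.cpp specifically, this kind of expert off-loading can be expressed with tensor-placement overrides. A hedged sketch, not a recipe: the flag names and regex below match recent llama.cpp builds but may differ in yours, and the model path is a placeholder:

```shell
# Keep attention and shared weights on the GPU, but push the MoE expert
# tensors (the ffn_*_exps weights) into system RAM. The experts are
# sparsely activated, so the CPU-side matmuls stay cheap per token.
llama-server \
  -m ./kimi-linear-48b-a3b-Q4_K_M.gguf \  # placeholder path/quant
  -ngl 99 \                               # off-load all layers that fit
  -ot ".ffn_.*_exps.=CPU" \               # override-tensor: experts -> CPU
  -c 8192
```

Recent builds also ship shorthand flags for the same idea (e.g. keeping the experts of the first N layers on CPU), so check `llama-server --help` on your version before copying the regex form.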
I agree with the previous post that there's hope that there's a convergence point in the not too distant future where consumer hardware can run powerful models.
At the moment, the 397B Qwen3.5 model (which I assume is what you're referring to) is still out of reach of most consumers to run locally: the only relatively straightforward path (i.e. discounting custom Threadripper builds) to running it would be a 512GB Mac Studio.
However, in a generation or two (of hardware and models), maybe we'll see convergence: hardware with 300-400GB of memory at more approachable prices (a tough sell right now, I accept, with memory prices as they are) meeting models that offer great performance in that size range.
One often overlooked fact is that ggml, the tensor library underpinning llama.cpp, is not based on PyTorch; it's just plain C/C++. In a world where PyTorch dominates, it shows that alternatives are possible and worth pursuing.