
How far are we from having a consumer box that can run "AI"?

A computer like device that can "generate text", "generate audio", "generate video" and also "train".



I got my Mac Studio with M1 Ultra and 128GB of RAM specifically for this reason, in case I needed it. Definitely not using its full capacity for any other purpose, though it's nice to be able to run 10 instances of a big data-grinding program at once, overnight.

Still waiting on 'consumer' AI training tools I can practically use: training is still a bit arcane, and I'm not up to speed on it. I can generate text with models up to Llama 70B if I'm patient, and generate images and sort-of-video quite easily.


The MacBook Pro is this, if you don't mind its relatively slow speed. It's costly, but a 92GB MacBook Pro is still by far the cheapest way to get that much VRAM.
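As a rough back-of-envelope for why that much unified memory matters: a model's weights alone take roughly (parameters × bits per weight ÷ 8) bytes. This is a sketch that ignores KV cache and activations, not any vendor's spec:

```python
def model_memory_gb(params_billions, bits_per_weight):
    """Weights-only estimate; real usage adds KV cache and activations."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 70B model at common precision / quantization levels
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{model_memory_gb(70, bits):.0f} GB")
# 16-bit: ~140 GB, 8-bit: ~70 GB, 4-bit: ~35 GB
```

So a 70B model only fits in consumer hardware at all because of 4-to-8-bit quantization, and ~90GB of addressable VRAM comfortably holds the quantized weights plus working memory.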


A device like the Rabbit r1 looks promising. You don't technically need to run inference on the same device. We're still a few years away from making this efficient and small enough to run locally, but there is a lot of potential for the next generation of AI assistants to revolutionize how computing is done, and have mass market appeal.

I'm surprised that Apple hasn't jumped on this yet. All the building blocks are in place to make a revolutionary device possible. It just needs to be very polished, and have great UX, which is what Apple typically excels at.


Good point. When you talk about being a few years away, it makes me think of the ENIAC. A modern household typically consumes far less electricity than the ENIAC did.

Now our smartphones are 1000 times more powerful than the ENIAC and use less power.

Do you think Apple likes to jump on things? Apple usually tries not to be first, but it definitely likes to polish.


You can do it with pretty much any processor, 64GB of RAM (RAM is really cheap at the moment), and a 4080.


Really! I've always imagined a system composed of five specialized computers.

One for each category: audio, text, video, image.

One analyzer to coordinate everything.

This would be my API, which I could access from mobile devices.

Here's a scenario:

I could talk to my phone about ideas; in the background it would create app prototypes, create posters, make music based on something I whistle, and teach me as I ask questions about a topic.

We could delegate the mundane stuff to it.


For the currently used architectures, it doesn't make sense to have five specialized, dedicated computers: "AI" text processing, "AI" video processing, and the rest use very similar architectures, so there's no benefit from specialization. The "video-specialized" hardware would be just as good at processing text, and vice versa.
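To illustrate the point: once audio, text, and video are all embedded as token sequences, a single transformer-style attention layer with one set of weights handles any of them; only the sequence length differs. A minimal NumPy sketch (illustrative dimensions, not any real model):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # embedding dimension shared by all modalities

# One set of attention weights -- nothing modality-specific
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

def attend(x):
    """Single-head self-attention over a token sequence of shape (n, d)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

text_tokens = rng.standard_normal((12, d))    # e.g. embedded words
video_tokens = rng.standard_normal((300, d))  # e.g. embedded image patches

# The same layer processes both; modality lives in the embeddings.
print(attend(text_tokens).shape)   # (12, 64)
print(attend(video_tokens).shape)  # (300, 64)
```

Since the compute is identical either way, hardware "specialized" for one modality would just be the same matrix-multiply hardware with a different label.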


It could be a new architecture.

A processor has different cores; computers may have several hard drives and four sticks of RAM.

Each component could run in parallel. For example, if a long video-processing task is underway and your text-generation component is idle, it could assist.

Should the audio component fail, only that specific part would be affected.
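The work-sharing idea above is essentially a shared task pool: instead of pinning one worker per modality, any idle worker pulls the next queued job regardless of its kind. A minimal sketch with Python's standard library (the task names here are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for the specialized components' real work
def process(task):
    kind, payload = task
    return f"{kind} done: {payload}"

tasks = [("video", "clip1"), ("text", "prompt1"),
         ("audio", "whistle"), ("video", "clip2")]

# A shared pool: an idle "text" worker can pick up video backlog,
# and if one worker dies, only its in-flight task is affected.
with ThreadPoolExecutor(max_workers=2) as pool:
    results = list(pool.map(process, tasks))
print(results)
```

This is the standard argument against hard-partitioned hardware: a common pool keeps everything busy, while partitions leave capacity idle whenever the workload mix shifts.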


There's no point in an AI-generating consumer device; the compute can just be in the cloud.


“There’s no point in a personal computing consumer device, the compute can just be on a mainframe.”

Apologies for the allergic reaction to running something on somebody else’s computer. As much as I appreciate our connected world, I prefer my data and compute local and private.

There are few things in this world that infuriate me on a daily basis more than a slow or lost internet connection.

Edit: grammar


While I appreciate the sentiment, the market of people like you who are ALSO unwilling to just purchase a GPU is probably minuscule.


The grandparent asked for 'a consumer box that can run "AI"', there's no reason to think they wouldn't accept a box that contains a GPU.


Oh, I don’t mind getting a GPU. I also don’t believe it’s the right moment to build a machine just for AI activities.

I was commenting more on the idea to run AI in the cloud instead of on device.


Sounds a bit like "there is a world market for maybe five computers".

The main reasons for me would be not needing to pay, and no one censoring the content.


If you are serious enough about gen AI to want to run it locally, you can just get a good video card. Otherwise, you'll save time and money by just using a service or Colab; I promise you there will be ones that offer privacy and uncensored generation.



