
To put things in perspective, the dataset it's trained on is ~240 TB, and Stability has over 4,000 Nvidia A100s (each much faster than a 1080 Ti). Without those ingredients, you're highly unlikely to get a model that's worth using; it'll produce mostly useless outputs.

That argument also makes little sense when you consider that the model itself is only a couple of gigabytes. It can't memorize 240 TB of data, so it "learned".

But if you want to create custom versions of SD, you can always try Dreambooth: https://github.com/XavierXiao/Dreambooth-Stable-Diffusion. That one is actually feasible without spending millions of dollars on GPUs.
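If you go that route, the result is just another SD checkpoint you can run locally. A minimal sketch of loading one for inference (not from the linked repo; this assumes the HuggingFace diffusers library with weights exported to diffusers format, and "./my-dreambooth-model" plus the "sks" token are placeholder examples):

    import torch
    from diffusers import StableDiffusionPipeline

    # Load the fine-tuned checkpoint like any other SD model.
    # "./my-dreambooth-model" is a hypothetical local path.
    pipe = StableDiffusionPipeline.from_pretrained(
        "./my-dreambooth-model", torch_dtype=torch.float16
    ).to("cuda")

    # Dreambooth binds your subject to a rare token ("sks" by convention).
    image = pipe("a photo of sks dog in a bucket").images[0]
    image.save("out.png")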



  >> it can't memorize 240TB of data, so it "learned"
Learning is a form of memorization, but yeah.


The 240 TB of training data gets compressed into a model of a few gigabytes, roughly a 60,000:1 ratio. That's how much it's allowed to "memorise" from each input on average: about one byte per image, less than a single pixel.
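A rough back-of-the-envelope check of those numbers (the ~4 GB checkpoint size and the ~2.3B image count are assumptions, roughly the LAION-2B-en scale; only the 240 TB figure comes from upthread):

    dataset_bytes = 240e12   # ~240 TB of training data (from upthread)
    model_bytes = 4e9        # checkpoint is a few GB (assumption)
    num_images = 2.3e9       # LAION-2B-en scale image count (assumption)

    print(dataset_bytes / model_bytes)  # ~60,000:1 dataset-to-model ratio
    print(model_bytes / num_images)     # ~1.7 bytes retained per image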


Hey, this is my first time dipping my toes into this; what graphics card do you suggest?


Depends on your wallet, but the RTX 3080 or RTX 3060 are good graphics cards for generating these images. If you just want to dip your toes in without spending much, you can use Google Colab and rent Google's GPUs, either for free or for about $10 a month. Here's a widely used Colab notebook you can run for free: https://colab.research.google.com/github/TheLastBen/fast-sta...

P.S. If you want to buy a graphics card, make sure it has at least 12 GB of VRAM.
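For reference, generating an image once you have a GPU (or a Colab instance) is only a few lines. A minimal sketch assuming the HuggingFace diffusers library; half precision plus attention slicing is what lets it fit comfortably in ~12 GB or less:

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "CompVis/stable-diffusion-v1-4",
        torch_dtype=torch.float16,  # half precision roughly halves VRAM use
    ).to("cuda")
    pipe.enable_attention_slicing()  # trades a little speed for less VRAM

    image = pipe("an astronaut riding a horse on mars").images[0]
    image.save("astronaut.png")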



