For the record I wouldn't argue "blockchain" is the only possible means of decentralization/distribution for offloading graphics compute.
With that said, an example would be a solo artist making CGI in C4D for a client/hobby (this is a small subset of potential use cases, but most relatable today).
Typical workflow costs would look like:
- Local Workstation ($4K with two high-end NVIDIA GPUs)
- Render 2000 frames at 4K resolution for final render (need a better figure here but let's call it $500)
- Time spent on workflow rendering - 5 hours
- Time spent on final rendering - 10 hours
In a decentralized render situation:
- Local computer ($500 - in theory no discrete graphics necessary)
- Render 2000 frames at 4K resolution ($100)
- Time spent on workflow rendering - 1 hour
- Time spent on final rendering - 2 hours
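The two lists above can be reduced to rough ratios. A minimal sketch, using only the illustrative estimates from the lists (none of these are measured figures):

```python
# Rough cost/time comparison using the illustrative estimates above.
# All numbers are assumptions from the discussion, not measurements.

local = {"hardware": 4000, "final_render": 500, "workflow_h": 5, "final_h": 10}
decentralized = {"hardware": 500, "final_render": 100, "workflow_h": 1, "final_h": 2}

hardware_savings = local["hardware"] / decentralized["hardware"]        # 8x
render_savings = local["final_render"] / decentralized["final_render"]  # 5x
time_savings = (local["workflow_h"] + local["final_h"]) / (
    decentralized["workflow_h"] + decentralized["final_h"]
)  # 5x

print(f"hardware: {hardware_savings:.0f}x, render cost: {render_savings:.0f}x, "
      f"time: {time_savings:.0f}x")
```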
I'm not trying to handwave at the data here btw, I just have a day job and want to answer promptly. RNDR is far and away cheaper at the moment than local or render farm compute: https://twitter.com/rsquires/status/1466599124451487744
Here are the main reasons why it's cheaper:
- Most decentralized GPU operators are effectively "running at a loss" -- people who already own GPUs don't expect a return, so any dollars earned while the hardware is idle are a bonus.
- No "datacenter tax". Server GPUs are objectively more expensive than the network's GPUs, which skew toward consumer cards.
- In terms of time, rendering is one of the most parallelizable workloads. 2000 frames rendered sequentially on a cloud instance is just slower than 2000 frames rendered in parallel on 50 cloud instances.
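The parallelism point can be sketched directly: if frames are independent, wall-clock time scales roughly with frame count divided by worker count. This is an idealized toy model (the per-frame time is hypothetical, and it ignores scheduling, transfer overhead, and stragglers):

```python
import math

FRAMES = 2000
SECONDS_PER_FRAME = 18  # hypothetical per-frame render time, not a measured figure

def wall_clock_hours(workers: int) -> float:
    # Idealized model: frames are independent ("embarrassingly parallel"),
    # so each worker renders ceil(FRAMES / workers) frames back to back.
    frames_per_worker = math.ceil(FRAMES / workers)
    return frames_per_worker * SECONDS_PER_FRAME / 3600

print(wall_clock_hours(1))   # one cloud instance: 10.0 hours
print(wall_clock_hours(50))  # fifty instances in parallel: 0.2 hours (~12 min)
```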
For QoS, because it's so much cheaper, redundancy is much more justifiable to add to the system.
On compatibility, an SDK would allow people to build their own products that leverage the network's compute. Centralized render farms have little incentive to offer that.
Why is a blockchain useful here though? For example, here is how it could work without a blockchain:
I could create a "Distributed Render Service" application: you register your machine with my service, which runs on a pretty standard setup on AWS, with the details kept in a SQL database. There is a client you download to your machine that detects when the GPU isn't in use and calls my service for a "job". The client pulls down the relevant data, completes the job on your GPU, and sends the data back. My service credits your account. On the flip side, I have a place for people who need rendering done: they upload "jobs", and my central service decides which jobs to send where.
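That centralized design can be sketched as a toy coordinator, with an in-memory dict standing in for the SQL database (all names here are hypothetical, not a real service):

```python
import uuid

class DistributedRenderService:
    """Toy central coordinator: one trusted party owns all state."""

    def __init__(self):
        self.nodes = {}  # node_id -> credits earned
        self.jobs = {}   # job_id -> {"data": ..., "status": ..., "result": ...}

    def register_node(self, node_id: str) -> None:
        self.nodes.setdefault(node_id, 0)

    def submit_job(self, data) -> str:
        job_id = str(uuid.uuid4())
        self.jobs[job_id] = {"data": data, "status": "queued", "result": None}
        return job_id

    def request_job(self, node_id: str):
        # Idle client polls for work; the service decides which job goes where.
        for job_id, job in self.jobs.items():
            if job["status"] == "queued":
                job["status"] = "assigned"
                return job_id, job["data"]
        return None

    def complete_job(self, node_id: str, job_id: str, result, credit: int = 1):
        job = self.jobs[job_id]
        job["status"], job["result"] = "done", result
        self.nodes[node_id] += credit  # the operator is trusted to pay out
```

Every trust decision (who gets paid, which results are accepted, who can register) lives with the operator, which is exactly the property the central-vs-trustless question is probing.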
Where does a blockchain make it cheaper, more compatible or improve service? I understand that you state that a blockchain isn't the only possible means - but the key difference in question is central, trusted control v not. My service has a central, trusted control. Blockchains are created to avoid a central, trusted control. So even if we avoid blockchain - what would be the scenario where you can concretely avoid central, trusted control and it would be beneficial on the dimensions stated?
Fwiw, I believe you are making these statements in good faith, and they aren't argumentative for their own sake. Like the original poster, I just struggle to see the value in 99% of the situations people propose involving blockchains (or avoiding central/trusted control).
Totally, appreciate the discourse btw! Here are my thoughts for where the blockchain buzzword is useful:
- Consensus around asset ownership -- every render to the network automatically ascribes authorship of the final render via a minting process a la NFT.
- Authorship/ownership graph -- similar to the above, in my mind the end game is an asset graph that tracks the combinations of assets (and their owners) used to create final renders. Piracy in digital assets is rampant; a distributed source graph of "what assets are made of" isn't tractable for a central body to take on, but it would reward impactful creators more meritocratically.
- From the above, a blockchain setup could bake in the direct monetization of IP, copyright, permissions, etc.
- Supply side incentives -- for all the misplaced hype, blockchain can actually nail financializing "sharing economy" primitives in a trustless way. I'm sure there are many people who don't want to trust running VMs on a closed network. Adding "liquidity" to the compute market here may be the biggest benefit IMO. Technically speaking it's doable without blockchain but I'm not aware of for-profit success stories, though happy to be proven wrong.
- Open source data -- The market doesn't incentivize open source data for a highly technical (and niche atm) segment like this. Intellectual property management and law could actually be much simpler if open was the standard.
- Licensing -- I have to think the service you suggest would be cease and desisted by a number of large incumbents on legal grounds of running a business without each node contributor having the right licensing permissions as well as the central body. IANAL but there's a sharing economy argument (similar to Uber & Airbnb) somewhere in here that brings the blockchain approach more into the legislative grey area in a way that makes the market more efficient.
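The asset-graph idea above can be sketched as content-addressed provenance records: each render references the hashes of the source assets it was built from, so attribution (and royalty splits) can be computed by walking the graph. This is a hypothetical data model for illustration, not how RNDR actually works:

```python
import hashlib

def asset_id(content: bytes) -> str:
    # Content-address each asset so provenance records are tamper-evident.
    return hashlib.sha256(content).hexdigest()

# Provenance graph: render_id -> list of (source_asset_id, owner)
graph: dict[str, list[tuple[str, str]]] = {}

def mint_render(content: bytes, sources: list[tuple[str, str]]) -> str:
    # "Minting" here just means recording the render and its sources.
    render = asset_id(content)
    graph[render] = sources
    return render

def royalty_split(render: str) -> dict[str, float]:
    # Naive equal split among direct source owners; a real scheme
    # would weight contributions and walk the graph recursively.
    sources = graph[render]
    share = 1.0 / len(sources)
    split: dict[str, float] = {}
    for _, owner in sources:
        split[owner] = split.get(owner, 0.0) + share
    return split

tex = asset_id(b"brick texture")
model = asset_id(b"building model")
final = mint_render(b"final frame", [(tex, "alice"), (model, "bob")])
print(royalty_split(final))  # {'alice': 0.5, 'bob': 0.5}
```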
Btw, it's not lost on me that these useful aspects present new classes of integrity, security, and privacy challenges. Just trying to discuss in good faith as you mentioned. I think the blockchain aspect isn't super interesting in its own right but brings the barriers to entry for a system like this way down.
If you want to go deep, I highly recommend a recent podcast with the CEO of OTOY and the two hosts of the Mograph podcast: https://www.youtube.com/watch?v=SRtOMF4JKd4