Hacker News | transformi's comments

Now create agents framework :)


Looks like a competitor to the AP2 that Google released...


No, they're working together on it

> AP2 is designed as a universal protocol, providing security and trust for a variety of payments like stablecoins and cryptocurrencies. To accelerate support for the web3 ecosystem, in collaboration with Coinbase, Ethereum Foundation, MetaMask and other leading organizations, we have extended the core constructs of AP2 and launched the A2A x402 extension, a production-ready solution for agent-based crypto payments. Extensions like these will help shape the evolution of cryptocurrency integrations within the core AP2 protocol.

[0] https://cloud.google.com/blog/products/ai-machine-learning/a...


So currently, what are the best OSS reasoning models? (And how much compute do they need?)


Interesting approach! Reminds me of the early insight that neurons in DNNs capture similar concepts.


and by simply asking it... :)


Ironically, the first time I asked, it said it was Claude Sonnet 3.5; they probably trained on that corpus as well.


Cool - do you train a model that acts as a proxy for people's votes?


we're not training models or proxying human votes with models


I see they mostly offer an API to run the sandbox on their infra... is there a way to self-host the sandboxes? (How much memory/compute is needed?)


Looks like an ad... BTW there are other code sandboxes; here is an OSS one: Daytona https://github.com/daytonaio/daytona


The only Beam-specific part is the sandboxes, but those can easily be swapped out for the vendor of your choice. The architecture we described isn't exclusive to our product.

Beam is fully OSS BTW: https://github.com/beam-cloud/beta9


Fighting what you perceive to be an ad... with another ad?


Create an alternative, self-made feed of videos using Veo 3 based on my interactions on social media.


Interesting to see if Groq hardware can run this diffusion architecture... it would be two orders of magnitude faster than currently known speeds :O


(Disclosure: Googler, but I don't have any specific knowledge of this architecture)

My understanding is that Groq is fast because all the weights are kept in SRAM, and since SRAM <-> compute bandwidth is much higher than HBM <-> compute bandwidth, you can generate tokens faster (during generation the main bottleneck is just bringing the weights + KV caches into compute).

If diffusion models just do multiple unmasked forward passes through a transformer, then the activation * weights computation (+ attention computation) will be the bottleneck, which makes each denoising step compute-bound. In that case there's no advantage to storing the weights in SRAM, since you can overlap the HBM -> compute transfer with the compute itself.
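The bandwidth-vs-compute argument can be sketched with a back-of-envelope roofline calculation. All the numbers below (model size, bandwidths, peak FLOPs, sequence length) are illustrative assumptions, not measurements of any real chip or model:

```python
# Roofline sketch: autoregressive decode vs. diffusion denoising step.
# Decode reads every weight per token -> time ~ bytes / memory_bandwidth.
# A denoising pass scores all positions at once -> time ~ flops / compute.

def decode_time_per_token(params_b, bytes_per_param, mem_bw_tbs):
    """Seconds per token when bound by streaming weights from memory."""
    bytes_moved = params_b * 1e9 * bytes_per_param
    return bytes_moved / (mem_bw_tbs * 1e12)

def diffusion_step_time(params_b, seq_len, peak_tflops):
    """Seconds per denoising step when compute-bound (~2*P*L flops)."""
    flops = 2 * params_b * 1e9 * seq_len
    return flops / (peak_tflops * 1e12)

# Hypothetical 7B-parameter model in fp16 (2 bytes/param).
hbm  = decode_time_per_token(7, 2, mem_bw_tbs=3.0)   # HBM-class bandwidth
sram = decode_time_per_token(7, 2, mem_bw_tbs=80.0)  # SRAM-class bandwidth
step = diffusion_step_time(7, seq_len=1024, peak_tflops=900)

print(f"decode/token, HBM:  {hbm * 1e3:.2f} ms")   # SRAM gives a big speedup here
print(f"decode/token, SRAM: {sram * 1e3:.3f} ms")
print(f"diffusion step:     {step * 1e3:.2f} ms")  # compute-bound; SRAM helps much less
```

The point of the sketch: per denoising step, a diffusion pass does roughly seq_len times more arithmetic over the same weights, so the flops-to-bytes ratio is high and the step stops being bandwidth-bound, which is exactly why SRAM-resident weights buy less for this architecture.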

But my knowledge of diffusion is non-existent, so take this with a truck of salt.

