This is an amazing idea, and congrats on getting so far through it.
I personally would be wary about fire. Custom electronics built without experience, plus (I'm assuming here) high energy density batteries inside a soft toy handled by little kids: any accident would be absolutely disastrous. And once you scale up, those very low-probability failures are bound to happen.
How do you think about this? Is it handled already?
This is very interesting and exciting, but IMHO the comparisons read as a bit disingenuous with the other models at 16-bit weights.
The 16-bit releases of the other models are not optimized for size, which makes it difficult to take the comparison seriously.
It would be interesting to see a comparison against quantized versions of the other models. If this model also beats the others in a fair comparison, that would give it much more credibility.
Instead of anchoring the sun (and thus noon) at the top, it would be interesting to have the sun move around the clock face as the year progresses, with noon moving along with it.
"Up" could be said to point towards the center of the galaxy instead.
What they're saying is that the error for a vector increases with r, which is true.
Trivially, with r=0, the error is 0, regardless of how heavily the direction is quantized. Larger r means larger absolute error in the reconstructed vector.
Yes, the important part is that the normalized error does not increase with the dimension of the vector (which does happen when using biased quantizers).
It is expected that bigger vectors have proportionally bigger absolute error; the quantizer can't do anything about that.
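For intuition, here's a minimal sketch with a toy scheme of my own (not necessarily what the article does): quantize only the unit direction, keep r exact, and watch the absolute error grow linearly with r while the normalized error stays flat.

    # Toy illustration: quantize the direction, store the magnitude r exactly.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 128
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)

    def quantize_direction(u, bits=4):
        # naive per-component uniform quantizer on [-1, 1], then re-normalize
        levels = 2 ** bits - 1
        q = np.round((u + 1) / 2 * levels) / levels * 2 - 1
        return q / np.linalg.norm(q)

    q_dir = quantize_direction(direction)
    for r in [0.0, 1.0, 10.0, 100.0]:
        v, v_hat = r * direction, r * q_dir
        abs_err = np.linalg.norm(v - v_hat)
        rel_err = abs_err / r if r > 0 else 0.0
        print(f"r={r:6.1f}  abs_err={abs_err:.4f}  rel_err={rel_err:.4f}")
    # abs_err grows linearly with r; rel_err is constant (and exactly 0 at r=0).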
This is cool. It makes storage of the KV cache much smaller, making it possible to keep more of it in fast memory.
Bandwidth-wise it is worse (more bytes accessed) for generation and random recall than the vanilla approach, and significantly worse than a quantized approach, because the reference also needs to be accessed.
I guess the implied argument is that since the KV cache is smaller, the parts that are needed are more likely to already be in fast memory, the bandwidth demand on slow links is reduced, and performance goes up.
It would be interesting to see a discussion of the benefits and drawbacks of the approach, ideally backed by data. A rough sketch of the trade-off is below.
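To make the bandwidth point concrete, here's a back-of-envelope sketch under my own assumptions (head size, head count, and the "small delta plus shared fp16 reference" layout are guesses, not numbers from the article):

    # Bytes touched per cached token per layer, under assumed sizes.
    d_head, n_heads = 128, 32
    kv_elems = 2 * d_head * n_heads            # K and V per token

    vanilla_fp16   = kv_elems * 2              # 2 bytes per element
    quantized_int4 = kv_elems // 2             # 0.5 bytes per element
    # delta-vs-reference: 4-bit delta per token, plus the fp16 reference it points at
    delta_plus_ref = kv_elems // 2 + kv_elems * 2

    print(vanilla_fp16, quantized_int4, delta_plus_ref)
    # Storage per token is small for the delta scheme, but a cold read that also
    # has to pull the reference touches more bytes than vanilla -- unless the
    # reference is shared across many tokens and stays resident in fast memory.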
> Instead of expecting it to understand my requests, I almost always build tooling first to give us a shared language to discuss the project.
This is probably the key. I’ve found this to be true in general: building simple tools that the model can use helps frame the problem in a very useful way.
Tbh shrinking the image is probably the cheapest operation you can do that still lets every pixel influence the result. It’s just the average of all pixels, after suitable color conversion.
The author of the article seems to assume there is no color conversion (e.g., the resizing is done on sRGB-encoded values rather than converting them to linear first). That's a stupid way to do it, but I'd believe most handwritten routines do exactly that.
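For reference, a minimal sketch of what "suitable color conversion" means here, assuming an 8-bit sRGB image loaded with Pillow ("photo.jpg" is just a placeholder):

    # Average in linear light, not on sRGB-encoded values.
    import numpy as np
    from PIL import Image

    def srgb_to_linear(x):
        return np.where(x <= 0.04045, x / 12.92, ((x + 0.055) / 1.055) ** 2.4)

    def linear_to_srgb(x):
        return np.where(x <= 0.0031308, x * 12.92, 1.055 * x ** (1 / 2.4) - 0.055)

    img = np.asarray(Image.open("photo.jpg"), dtype=np.float64) / 255.0
    avg_correct = linear_to_srgb(srgb_to_linear(img).mean(axis=(0, 1)))  # convert first
    avg_naive   = img.mean(axis=(0, 1))                                  # what many routines do
    print((avg_correct * 255).round(), (avg_naive * 255).round())

The naive average systematically skews darker than the linear-light average, which is the kind of error the "do it on sRGB values" shortcut introduces.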