It's because they're natively trained at 1 bit, so nothing is being lost. The real question is how they manage to get decent predictive performance with so little precision. That I don't know.
I always remind myself and everyone else that human DNA is "only" 1.6 GB of data, and yet it encodes all of the complex systems of the human body including the brain, and can replicate itself. Our intuitive feel for how much stuff can be packed into how many bits is probably way off from the true limits of physics.
It encodes the data on top of locally optimal trajectories in the physical world that were learned in millions of years of evolution. Treat this as context, not weights.
For now, DNA replication and the synthesis of RNA and proteins from the information stored in DNA are the best-understood parts of how a cell grows and divides; how other complex cellular structures, e.g. membranes or non-ribosomal peptides, are assembled and replicated is much less understood.
We probably need another decade or two of research before we can quantify the full amount of information needed to describe a simple bacterial cell, and perhaps longer for a much more complex eukaryotic cell.
Human DNA has 3.2 billion base pairs, and at twice the information density of a binary system (a 4-letter alphabet instead of 2, so 2 bits per base), that's roughly 800 MB of data.
Second, what's even crazier is that roughly 98% of that DNA is non-coding... just junk.
So we're talking about encoding the entire logic to construct a human body in just around 16 MB of data!
That's some crazy level of recursive compression... maybe it's embedding "varying" parsing logic, mixed with data, along the chain.
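A quick sanity check of those figures, using the 3.2e9 base pairs, 2 bits per base, and ~2% coding fraction from above:

```javascript
// Back-of-envelope check of the genome size figures above.
const basePairs = 3.2e9;          // human genome length in base pairs
const bitsPerBase = 2;            // 4-letter alphabet => log2(4) = 2 bits
const totalMB = basePairs * bitsPerBase / 8 / 1e6;
const codingMB = totalMB * 0.02;  // ~2% of the genome is protein-coding
console.log(totalMB, codingMB);   // 800 MB total, ~16 MB coding
```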
As another poster has said, much of the "junk" is not junk.
The parts of the DNA with known functions encode either proteins or RNA molecules, serving as templates for their synthesis.
The parts with unknown functions include some amount of true junk caused by various historical accidents that have been replicated continuously until now, but they also include a lot of DNA that seems to have a role in controlling how the protein or RNA genes are expressed (i.e. turning off or on the synthesis of specific proteins or RNAs), by mechanisms not well understood yet.
Just vanilla JS, unless you've got prior experience, because any engine you use is going to have a setup process, bootstrapping code, and a learning curve that will eat into your time. Across the weekend you might only really have a few hours to dedicate to this project and to hold their attention.
Using the "memory" game as an example: do you want the problem you solve to be how to shuffle the cards into a random order? Or do you want to be debugging why the cards are all positioned weirdly, because PhaserJS gives display objects an anchor "origin" that defaults to x 0.5 / y 0.5, meaning 50% width / 50% height, i.e. the center of the object, so you need to either set the origin to x 0 / y 0 or compensate by subtracting half the width and height from each position; and width and height have scaled and unscaled variants too (width vs displayWidth)... and of course if you're using a group for the cards' display objects, that class doesn't support setting an origin at all.
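For contrast, the shuffle itself is the kind of small, self-contained problem a vanilla JS version lets you focus on; a standard Fisher-Yates sketch:

```javascript
// Fisher-Yates shuffle: return a randomly ordered copy of the cards.
function shuffle(cards) {
  const deck = cards.slice();                // don't mutate the input
  for (let i = deck.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1)); // pick from 0..i
    [deck[i], deck[j]] = [deck[j], deck[i]];       // swap into place
  }
  return deck;
}
```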
It seems like most breakthroughs I see are for efficiency? What are the most important breakthroughs from the past two or three years for intelligence?
If you think of it from the point of view of the universal approximation theorem, it's all efficiency optimisation. We know that it works if we do it incredibly inefficiently.
Every architecture improvement is essentially a way to achieve the capability of a single fully connected hidden layer of width n, with fewer parameters.
Given these architectures usually still contain fully connected layers, unless they've done something really wrong, they should still be able to do anything if you make the entire thing large enough.
That means a large enough [insert model architecture] will be able to approximate any function to arbitrary precision. As long as the efficiency gains with the architecture are retained as the scale increases they should be able to get there quicker.
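As a toy illustration of the theorem (my own sketch, not from the thread): a single hidden ReLU layer can exactly represent any piecewise-linear function, so widening it lets you interpolate any continuous function on an interval to arbitrary precision:

```javascript
const relu = x => Math.max(0, x);

// Build a width-(n-1) single-hidden-layer ReLU net that piecewise-linearly
// interpolates the points (xs[i], ys[i]). Hidden unit i activates past knot
// xs[i]; its output weight is the slope *change* at that knot, so the sum
// reproduces the piecewise-linear interpolant exactly.
function fitReluNet(xs, ys) {
  const slopes = xs.slice(0, -1)
    .map((x, i) => (ys[i + 1] - ys[i]) / (xs[i + 1] - x));
  const weights = slopes.map((s, i) => (i === 0 ? s : s - slopes[i - 1]));
  return x => ys[0] + weights.reduce((sum, w, i) => sum + w * relu(x - xs[i]), 0);
}

// Approximate sin on [0, pi] with 21 sample points (20 hidden units);
// doubling the width roughly quarters the worst-case error.
const xs = Array.from({ length: 21 }, (_, i) => (i / 20) * Math.PI);
const net = fitReluNet(xs, xs.map(x => Math.sin(x)));
```

Nothing here is clever; the point is that brute width already suffices, and architecture research is about reaching the same approximation power with far fewer parameters.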
Most breakthroughs that are published are for efficiency, because most breakthroughs that are published are for open source.
All the foundation model breakthroughs are hoarded by the labs doing the pretraining. That being said, RL reasoning training is the obvious and largest breakthrough for intelligence in recent years.
With all the floating around of AI researchers though, I kind of wonder how "secret" all these secrets are. I'm sure they have internal siloing, but even still, big players seem to regularly defect to other labs. On top of this, all the labs seem to be pretty neck and neck, with no one clearly pulling ahead across the board.
> What are the most important breakthroughs from the past two or three years for intelligence?
The most important one in that timeframe was clearly reasoning/RLVR (reinforcement learning with verifiable rewards), which was pioneered by OpenAI's Q* aka Strawberry aka o1.
I’m confused why the hype and the investment got so high, and why everyone treats it like a race. Why can’t we gradually develop it like DNA sequencing?
To be fair, DNA sequencing was very hyped up (although not nearly as much as AI). The HGP finished two years ahead of schedule, which is sort of unheard of for something in its domain, and that was mainly a result of massive public interest in personalized medicine and the like. I will admit that a ton of foundational DNA sequencing work evolved over decades, but the massive leap forward in the early 2000s is comparable to the LLM hype now.
I assumed it was obvious: being first is all that matters. Investors don't want to invest in second place. And obviously "first" means achieving AGI, not some GPT bot. That's why so many people keep saying AGI is _____ weeks away, with some even preposterously claiming AGI might have already happened. They need to keep attracting investors. Same as Musk constantly saying FSD is ____ weeks away.