You're thinking too conservatively. The camera could capture information usable for refocusing, denoising, depth inference, and more. They could also have added extra sensors, as cell phones do, to power some of these features.
More sensors aren't software; that's hardware. What should these sensors do?
What do you mean by refocus and infer depth?
Denoising is done in post-processing and can (depending on quality) take a lot of CPU. It's not something to do on-camera.
I use DxO for processing RAW images. It pegs every core on my desktop at 100%; I haven't measured the power draw while it runs, but it's not something a small battery can deliver. Running something DxO-equivalent on-camera is unreasonable, and why would I want something inferior? I don't see a use case for on-camera post-processing.
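To give a feel for why serious denoising is so CPU-hungry, here's a minimal sketch of a naive non-local means denoiser in Python. The algorithm choice and all parameters here are my own illustration (not anything DxO actually does): for every pixel it compares a patch around that pixel against every patch in a search window, which is roughly O(N · window² · patch²) work before any optimization.

```python
import numpy as np

def nlm_denoise(img, patch=3, window=7, h=0.1):
    """Naive non-local means denoiser (illustrative sketch only).

    For each pixel, compare a patch x patch neighborhood against every
    candidate patch in a window x window search area, and average pixel
    values weighted by patch similarity. The nested loops make the cost
    roughly O(N * window^2 * patch^2), which hints at why production
    denoisers peg every core on a desktop CPU.
    """
    pad = window // 2 + patch // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    pr, wr = patch // 2, window // 2
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            # Reference patch centered on the pixel being denoised.
            ref = padded[ci - pr:ci + pr + 1, cj - pr:cj + pr + 1]
            weights, acc = 0.0, 0.0
            for di in range(-wr, wr + 1):
                for dj in range(-wr, wr + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - pr:ni + pr + 1, nj - pr:nj + pr + 1]
                    # Similarity weight: more alike patches count more.
                    d2 = np.mean((ref - cand) ** 2)
                    w = np.exp(-d2 / (h * h))
                    weights += w
                    acc += w * padded[ni, nj]
            out[i, j] = acc / weights
    return out
```

Even this toy version does window² patch comparisons per pixel; a 24-megapixel RAW frame at these settings would be billions of patch comparisons, which is exactly the kind of workload you don't want running off a camera battery.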