
I guess deep world models are still riddled with all sorts of problems: vanishing gradients, BPTT being O(T) in sequence length, poor generalization of NNs (likely due to the lack of attractor-state associative recall and of concept composability), no probabilistic message passing to deal with uncertainty, and perhaps the need for some priors about the world to make learning tractable (such as spatial maps and tuning to the time scales that carry interesting information).
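For concreteness, here is a minimal sketch (my own illustration, not anything from the article) of the usual workaround for the O(T) cost of full backprop through time: truncated BPTT, where the hidden state is detached between windows so gradients only flow a bounded number of steps back. All sizes and hyperparameters below are arbitrary toy values.

    import torch
    import torch.nn as nn

    # Full BPTT stores activations for all T steps (O(T) memory/compute per update),
    # so long sequences are usually split into windows and the hidden state is
    # detached between them, trading exact gradients for tractability.
    rnn = nn.RNN(input_size=8, hidden_size=32, batch_first=True)
    readout = nn.Linear(32, 1)
    opt = torch.optim.Adam(list(rnn.parameters()) + list(readout.parameters()), lr=1e-3)

    T, window = 1024, 64                  # full sequence length vs. BPTT window
    x = torch.randn(1, T, 8)              # toy input sequence
    y = torch.randn(1, T, 1)              # toy targets
    h = torch.zeros(1, 1, 32)             # initial hidden state (layers, batch, hidden)

    for t0 in range(0, T, window):
        xw, yw = x[:, t0:t0 + window], y[:, t0:t0 + window]
        out, h = rnn(xw, h)
        loss = nn.functional.mse_loss(readout(out), yw)
        opt.zero_grad()
        loss.backward()                   # gradients flow at most `window` steps back
        opt.step()
        h = h.detach()                    # cut the graph: no gradient beyond this window

The truncation bounds cost per update, but it also means long-range credit assignment is lost, which feeds directly into the vanishing-gradient and generalization complaints above.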


