What a weird paper. They compare 3D rendering along a synthetic camera path against a stock 2D image stabilisation algorithm. Of course the true 3D approach will win.
And their main takeaway seems to be that one should do global bundle adjustment when recovering the camera poses ... which I thought had been common knowledge for years, and is what pretty much every SfM tool implements.
My TLDR would be: stuff that works well continues to work well even if you use a neural radiance field instead of a point cloud for representing geometry.
Those results look eerily similar to Microsoft's 2016 Hyperlapse paper and software.