Self-improving Multiplane-to-layer Images for Novel View Synthesis
IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)
Novel view synthesis methods have grown increasingly popular in recent years. Nonetheless, state-of-the-art approaches remain highly computationally expensive. We propose a new method that requires no finetuning when a new scene is processed and its proxy geometry is built. Our model employs a feed-forward refinement procedure that corrects the estimated scene representation by aggregating information from the input views. Initially, we represent the scene with a set of fronto-parallel semitransparent planes, which are subsequently converted into deformable layers in an end-to-end manner. The use of a single representation shared between all viewpoints allows us to perform real-time rendering with common graphics primitives. Experimental results demonstrate that our method is on par with recent models, with the notable advantages of faster inference and a more compact inferred layered geometry.
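The abstract describes rendering from a stack of fronto-parallel semitransparent planes. The standard way to render such a multiplane representation is back-to-front alpha compositing with the "over" operator; the sketch below illustrates this general technique (it is not the authors' implementation, and the array shapes and function name are assumptions for illustration):

```python
import numpy as np

def composite_planes(colors, alphas):
    """Back-to-front alpha compositing ("over" operator) of D
    fronto-parallel semitransparent planes.

    colors: (D, H, W, 3) RGB per plane; plane 0 is nearest the camera.
    alphas: (D, H, W, 1) per-plane opacity in [0, 1].
    Returns an (H, W, 3) rendered image.
    """
    out = np.zeros(colors.shape[1:], dtype=np.float64)
    # Walk from the farthest plane to the nearest, blending each
    # plane over the accumulated image behind it.
    for c, a in zip(colors[::-1], alphas[::-1]):
        out = a * c + (1.0 - a) * out
    return out

# Tiny example: a half-transparent red plane in front of an opaque
# blue plane, at a single pixel.
colors = np.array([[[[1.0, 0.0, 0.0]]],   # near plane: red
                   [[[0.0, 0.0, 1.0]]]])  # far plane: blue
alphas = np.array([[[[0.5]]],
                   [[[1.0]]]])
img = composite_planes(colors, alphas)
print(img)  # [[[0.5 0.  0.5]]]
```

Because the compositing is a fixed sequence of textured-quad blends, the same operation maps directly onto standard graphics primitives, which is what enables real-time rendering from a single shared representation.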