Transformation-Grounded Image Generation Network for Novel 3D View Synthesis
Top 1% of 2017 papers
Abstract
We present a transformation-grounded image generation network for novel 3D view synthesis from a single image. Our approach first explicitly infers the parts of the geometry visible in both the input and novel views, and then casts the remaining synthesis problem as image completion. Specifically, we predict a flow that moves pixels from the input to the novel view, together with a novel visibility map that handles occlusion/disocclusion. Next, conditioned on these intermediate results, we hallucinate (infer) the parts of the object invisible in the input image. Beyond the new network structure, training with a combination of adversarial and perceptual losses reduces common artifacts of novel view synthesis, such as distortions and holes, while generating high-frequency details and preserving visual aspects of the input image. We evaluate our approach on a wide range of synthetic and real examples. Both qualitative and quantitative results show that our method significantly outperforms existing methods.
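The warp-then-mask step the abstract describes can be sketched in NumPy. This is an illustrative toy, not the paper's implementation: the function name and tensor conventions are assumptions, the flow and visibility map would come from the learned network, and nearest-neighbor sampling stands in for the differentiable bilinear sampling a real network would use.

```python
import numpy as np

def warp_with_visibility(image, flow, visibility):
    """Move input pixels to the novel view with a predicted flow,
    then mask out regions invisible in the input view.

    image:      (H, W, 3) input view
    flow:       (H, W, 2) per-pixel (dx, dy) sampling offsets
    visibility: (H, W) in [0, 1]; 0 marks disoccluded pixels

    All names/conventions here are illustrative assumptions.
    """
    H, W = image.shape[:2]
    ys, xs = np.mgrid[0:H, 0:W]
    # Backward warping: each novel-view pixel samples from the input
    # at a flow-displaced location (nearest-neighbor for brevity).
    src_x = np.clip(np.round(xs + flow[..., 0]), 0, W - 1).astype(int)
    src_y = np.clip(np.round(ys + flow[..., 1]), 0, H - 1).astype(int)
    warped = image[src_y, src_x]
    # Zero out what the input cannot explain; a completion network
    # would then hallucinate these masked regions.
    return warped * visibility[..., None]
```

With zero flow and a fully visible map, the output is just the input image; where visibility is zero, the output is blank, marking the regions left for the completion stage.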