
Single Image to Novel Views with Semantic-Preserving Generative Warping (GenWarp)

๐ŸŒŸ A few pointers from the paper

  • ๐ŸŽฏ Generating novel views from a single image remains a challenging task due to the complexity of 3D scenes and the limited diversity of existing multi-view datasets available for training.

  • ๐ŸŽฏ Recent research combining large-scale text-to-image (T2I) models with monocular depth estimation (MDE) has shown promise in handling in-the-wild images.

  • ๐ŸŽฏ In these methods, an input view is geometrically warped to novel views using estimated depth maps, and the warped image is then inpainted by T2I models. However, these methods struggle with noisy depth maps and lose semantic details when warping an input view to novel viewpoints.

  • ๐ŸŽฏ In this paper, the authors propose a novel approach to single-shot novel view synthesis: a semantic-preserving generative warping framework that enables T2I generative models to learn where to warp and where to generate, by augmenting cross-view attention with self-attention.

  • ๐ŸŽฏTheir approach addresses the limitations of existing methods by conditioning the generative model on source view images and incorporating geometric warping signals.
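The warp-then-inpaint pipeline described above can be sketched with a minimal depth-based forward warp (an illustrative sketch only; the function name and details are assumptions, not the paper's actual code):

```python
import numpy as np

def warp_to_novel_view(image, depth, K, R, t):
    """Forward-warp a source image to a novel view using an estimated
    depth map and a relative camera pose (hypothetical helper; the
    paper's warping pipeline may differ in detail).

    image: (H, W, 3) float array, depth: (H, W), K: (3, 3) intrinsics,
    R: (3, 3) rotation, t: (3,) translation of the novel view.
    """
    H, W = depth.shape
    # Pixel grid in homogeneous coordinates, shape (3, H*W).
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    # Unproject to 3D with the depth map, then move into the novel view.
    pts = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)
    pts = R @ pts + t[:, None]
    # Project back to target pixel coordinates.
    proj = K @ pts
    z = np.maximum(proj[2], 1e-6)
    uu = np.round(proj[0] / z).astype(int)
    vv = np.round(proj[1] / z).astype(int)
    inside = (proj[2] > 1e-6) & (uu >= 0) & (uu < W) & (vv >= 0) & (vv < H)
    # Scatter source colours; holes (disocclusions) stay zero and are
    # what the T2I inpainting/generative stage would fill in.
    out = np.zeros_like(image)
    src = image.reshape(-1, 3)
    out[vv[inside], uu[inside]] = src[inside]
    return out
```

With an identity pose the warp reproduces the input, while a non-trivial pose leaves holes that motivate the generative stage.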

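The key idea of augmenting cross-view attention with self-attention can be illustrated schematically: queries from the novel-view branch attend jointly over their own tokens and the source-view tokens, letting the model softly decide, per location, whether to copy from the source (warp) or to generate. A minimal sketch with assumed shapes and names (not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def augmented_attention(q, k_self, v_self, k_src, v_src):
    """Self-attention augmented with cross-view attention.

    q, k_self, v_self: novel-view tokens, shape (batch, tokens, dim).
    k_src, v_src: source-view tokens, shape (batch, tokens, dim).
    Keys/values from both views are concatenated so one softmax
    distributes attention between 'warp' (source) and 'generate' (self).
    """
    k = np.concatenate([k_self, k_src], axis=1)  # keys from both views
    v = np.concatenate([v_self, v_src], axis=1)  # values from both views
    scale = q.shape[-1] ** -0.5
    attn = softmax(np.einsum('btd,bsd->bts', q, k) * scale)
    return np.einsum('bts,bsd->btd', attn, v)
```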
๐ŸขOrganization: SonyAI, Sony Group Corporation, ๊ณ ๋ ค๋Œ€ํ•™๊ต

๐Ÿง™Paper Authors: Junyoung Seo, Kazumi Fukuda, Takashi Shibuya, Takuya Narihira, Naoki Murata, Shoukang Hu, Chieh-Hsin (Jesse) Lai, Seungryong Kim, Yuki Mitsufuji, PhD

 GenWarp Novel Views

This post is licensed under CC BY 4.0 by the author.