Texture Memory-Augmented Deep Patch-Based Image Inpainting

Rui Xu, Minghao Guo, Jiaqi Wang, Xiaoxiao Li, Bolei Zhou, Chen Change Loy*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

38 Citations (Scopus)

Abstract

Patch-based methods and deep networks have both been employed to tackle the image inpainting problem, each with its own strengths and weaknesses. Patch-based methods are capable of restoring a missing region with high-quality texture by searching for nearest-neighbor patches in the unmasked regions. However, these methods produce problematic content when recovering large missing regions. Deep networks, on the other hand, show promising results in completing large regions. Nonetheless, their results often lack faithful and sharp details that resemble the surrounding area. By bringing together the best of both paradigms, we propose a new deep inpainting framework in which texture generation is guided by a texture memory of patch samples extracted from unmasked regions. The framework has a novel design that allows texture memory retrieval to be trained end-to-end with the deep inpainting network. In addition, we introduce a patch distribution loss to encourage high-quality patch synthesis. The proposed method shows superior performance both qualitatively and quantitatively on three challenging image benchmarks, i.e., the Places, CelebA-HQ, and Paris Street-View datasets (code will be made publicly available at https://github.com/open-mmlab/mmediting).
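The central mechanism the abstract describes, retrieving texture patches from unmasked regions and blending them to guide generation, with the retrieval itself trainable end-to-end, can be illustrated with a small sketch. The PyTorch code below is a hypothetical illustration, not the authors' implementation: the function names, the 8x8 patch size, and the softmax temperature are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def build_texture_memory(image, mask, patch_size=8, stride=8):
    """Collect candidate texture patches from fully unmasked regions.

    image: (B, C, H, W) masked input; mask: (B, 1, H, W), 1 = missing pixel.
    Returns memory of shape (B, N, C*p*p) and a (B, N) validity flag.
    """
    patches = F.unfold(image, patch_size, stride=stride)       # (B, C*p*p, N)
    holes = F.unfold(mask, patch_size, stride=stride)          # (B, p*p, N)
    valid = holes.sum(dim=1) == 0                              # patch contains no hole
    return patches.transpose(1, 2), valid

def soft_patch_retrieval(queries, memory, valid, temperature=0.1):
    """Differentiable retrieval: soft attention over the texture memory.

    queries: (B, M, D) patch features inside the hole (D = C*p*p).
    Assumes each sample has at least one valid memory patch.
    """
    q = F.normalize(queries, dim=-1)
    m = F.normalize(memory, dim=-1)
    sim = torch.einsum('bmd,bnd->bmn', q, m)                   # cosine similarities
    sim = sim.masked_fill(~valid.unsqueeze(1), float('-inf'))  # ignore masked patches
    attn = F.softmax(sim / temperature, dim=-1)                # soft nearest neighbors
    return torch.einsum('bmn,bnd->bmd', attn, memory)          # blended texture guidance
```

A hard nearest-neighbor lookup, as used in classical patch-based methods, is non-differentiable; softening the argmax into a temperature-scaled softmax is one standard way to let gradients from the inpainting losses flow back into the retrieval features, which is what training the memory retrieval end-to-end with the network requires.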

Original language: English
Pages (from-to): 9112-9124
Number of pages: 13
Journal: IEEE Transactions on Image Processing
Volume: 30
Publication status: Published - 2021
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 1992-2012 IEEE.

ASJC Scopus Subject Areas

  • Software
  • Computer Graphics and Computer-Aided Design

Keywords

  • generative adversarial network
  • image completion
  • texture synthesis
