Computer Graphics

University of California - Berkeley

Error-tolerant Image Compositing


Abstract

Gradient-domain compositing is an essential tool in computer vision and its applications, e.g., seamless cloning, panorama stitching, shadow removal, scene completion and reshuffling. While easy to implement, these gradient-domain techniques often generate bleeding artifacts where the composited image regions do not match. One option is to modify the region boundary to minimize such mismatches. However, this option may not always be sufficient or applicable, e.g., the user or algorithm may not allow the selection to be altered. We propose a new approach to gradient-domain compositing that is robust to inaccuracies and prevents color bleeding without changing the boundary location. Our approach improves standard gradient-domain compositing in two ways. First, we define the boundary gradients such that the produced gradient field is nearly integrable. Second, we control the integration process to concentrate residuals where they are less conspicuous. We show that our approach can be formulated as a standard least-squares problem that can be solved with a sparse linear system akin to the classical Poisson equation. We demonstrate results on a variety of scenes. The visual quality and run-time complexity compare favorably to those of other approaches.
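For context, the baseline this paper improves on is standard Poisson compositing: paste the source's gradients inside a selected region and solve a sparse linear system with the target's pixels as Dirichlet boundary conditions. The sketch below is an illustrative Python/NumPy implementation of that baseline (not the paper's error-tolerant method or its released MATLAB code); it assumes small grayscale images and a mask that does not touch the image border.

```python
# Minimal sketch of standard gradient-domain (Poisson) compositing,
# the baseline technique the paper builds on. Illustrative only:
# function and variable names are not from the released MATLAB code.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def poisson_composite(target, source, mask):
    """Paste `source` into `target` over `mask` by solving the discrete
    Poisson equation with Dirichlet boundary conditions from `target`.
    `mask` must not touch the image border."""
    h, w = target.shape
    # Map each masked pixel to an unknown index in the linear system.
    idx = -np.ones((h, w), dtype=int)
    ys, xs = np.nonzero(mask)
    idx[ys, xs] = np.arange(len(ys))
    n = len(ys)
    A = lil_matrix((n, n))
    b = np.zeros(n)
    for k, (y, x) in enumerate(zip(ys, xs)):
        A[k, k] = 4.0
        # Guidance field: the discrete Laplacian of the source.
        b[k] = 4.0 * source[y, x]
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            b[k] -= source[ny, nx]
            if mask[ny, nx]:
                A[k, idx[ny, nx]] = -1.0
            else:
                # Boundary neighbor: clamped to the target's value.
                b[k] += target[ny, nx]
    out = target.astype(float).copy()
    out[ys, xs] = spsolve(A.tocsr(), b)
    return out
```

When source and target disagree along the mask boundary, this baseline produces exactly the color-bleeding artifacts described above, because the mismatched boundary gradients make the pasted gradient field non-integrable; the paper's contribution is to redefine those boundary gradients and steer where the integration residual lands.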

Citation

Michael W. Tao, Micah K. Johnson, and Sylvain Paris. "Error-tolerant Image Compositing". In European Conference on Computer Vision (ECCV), 2010.

Supplemental Material

Comparisons and Methodology

Here is an HTML webpage that shows the effect of parameter changes and complete comparisons against other algorithms.

Released Code and Image Data

Here is our MATLAB code, which includes several examples, together with the image data released with our paper.

Oral Presentation

Monday, September 6, 2010; 9:00AM - 10:40AM at European Conference on Computer Vision 2010.

Acknowledgments

The authors thank Todor Georgiev for the link with the Poisson equation, Kavita Bala and George Drettakis for their discussion about visual masking, Aseem Agarwala and Bill Freeman for their help with the paper, Tim Cho and Biliana Kaneva for helping with the validation, Medhat H. Ibrahim for the image of the Egyptian pyramids, Adobe Systems, Inc. for supporting Micah K. Johnson's research, and Ravi Ramamoorthi for supporting Michael Tao's work. This material is based upon work supported by the National Science Foundation under Grant No. 0739255 and No. 0924968.