[mosaicing.gif]


[quad3light.jpg]

Abstract

As a project for a digital image processing course at Stanford University (EE368), we examined the problem of image mosaicing, i.e., building a large field-of-view image from a sequence of smaller snapshots. We studied a selection of proposed techniques for the problems involved in image stitching: correcting geometric deformations using a camera model, registering images using image data, and eliminating seams from image mosaics. Using this selection of techniques, we designed a program capable of handling all of these tasks automatically. In particular, we focused on two registration techniques, namely phase correlation and feature-based registration, and implemented both in a complete image mosaicing framework. In this report we present a comparison of the different approaches and show some of the high-resolution mosaics we were able to create with this program.

Introduction

Image mosaics are collections of overlapping images that are transformed and combined into a single image of a wide-angle scene. The transformations can be viewed as simple relations between coordinate systems. By applying the appropriate transformation to each image via a warping operation and merging the overlapping regions of the warped images, it is possible to construct a single image covering the entire visible area of the scene. However, these coordinate transformations are not known beforehand unless the camera parameters are tracked precisely. The central problem of image mosaicing is thus to compute these parameters solely from the image data, a problem commonly called image registration.
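To make the coordinate-transform view concrete, here is a minimal sketch (not the project's code) of warping one grayscale snapshot into the coordinate frame of a reference image, given a 3x3 planar homography H; estimating such transforms from the image data is exactly the registration problem discussed below. Python with NumPy is assumed, and the function name is chosen only for illustration.

```python
import numpy as np

def warp_to_reference(img, H, out_shape):
    """Inverse-warp a grayscale image into the reference frame.

    H is a 3x3 homography mapping homogeneous reference-frame
    coordinates (x, y, 1) to coordinates in `img`.  Pixels that fall
    outside `img` are left at zero.
    """
    h_out, w_out = out_shape
    ys, xs = np.mgrid[0:h_out, 0:w_out]                  # reference pixel grid
    ref = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])

    src = H @ ref                                        # project into source image
    src_x = src[0] / src[2]                              # dehomogenize
    src_y = src[1] / src[2]

    # Nearest-neighbour sampling keeps the sketch short; bilinear
    # interpolation would give smoother results.
    xi = np.round(src_x).astype(int)
    yi = np.round(src_y).astype(int)
    valid = (xi >= 0) & (xi < img.shape[1]) & (yi >= 0) & (yi < img.shape[0])

    out = np.zeros(h_out * w_out, dtype=img.dtype)
    out[valid] = img[yi[valid], xi[valid]]
    return out.reshape(out_shape)
```

A mosaic is then obtained by warping each snapshot with its own transform relative to a common reference and merging the overlapping regions.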

To keep the problem tractable, we decided to restrict ourselves to a "single point-of-view" constraint, which means that all snapshots are taken from roughly the same point, with changes only in the orientation of the camera and no zooming. In our implementation, image registration is decomposed into two stages: we first find the relative transform between each pair of overlapping images, and then compute the absolute coordinate transform for each image by an error minimization technique. Finally, we seamlessly blend the images to obtain a single color panorama. The general data flow graph is shown in the following figure:

[flowChart.GIF]
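As an illustrative sketch of the first stage (pairwise registration), the function below estimates the integer translation between two equally sized, overlapping grayscale images using phase correlation, one of the two methods mentioned in the abstract. It is a simplified Python/NumPy stand-in, not the project's implementation, and it ignores the rotations that the camera model allows.

```python
import numpy as np

def phase_correlation_shift(im1, im2, eps=1e-8):
    """Estimate the integer (dy, dx) shift that maps im1 onto im2.

    Both images must have the same shape.  The normalized cross-power
    spectrum has a delta-like peak at the translation offset, i.e.
    np.roll(im1, (dy, dx), axis=(0, 1)) roughly aligns with im2.
    """
    F1 = np.fft.fft2(im1)
    F2 = np.fft.fft2(im2)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + eps            # keep only the phase difference
    corr = np.real(np.fft.ifft2(cross))     # correlation surface
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts (FFT wrap-around).
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

In the actual pipeline, such pairwise estimates (or their feature-based equivalents) feed the global error minimization that produces each image's absolute transform.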

We will give a brief introduction to projective geometry, with special attention to the implications of our simplifying assumptions for the camera model. Then we will present the two registration methods we examined, together with the global registration optimization. The next section features the projection onto the final panorama and the compositing step, as well as a method to compensate for exposure variations. Lastly, we will present results from sample images and the conclusions of our work, followed by references and a breakdown of the time spent on the project.
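As a small illustration of the kind of exposure compensation mentioned above (not necessarily the method used in this project), one can scale each new image by a gain chosen so that its mean intensity matches the mosaic over their overlap region; the function below is a hypothetical sketch in Python/NumPy.

```python
import numpy as np

def exposure_gain(mosaic, new_img, overlap_mask, eps=1e-8):
    """Multiplicative gain that matches the mean intensity of `new_img`
    to `mosaic` over the overlapping pixels (boolean `overlap_mask`)."""
    return float(mosaic[overlap_mask].mean() / (new_img[overlap_mask].mean() + eps))

# Usage sketch: compensate before blending the new image into the mosaic.
# corrected = exposure_gain(mosaic, new_img, mask) * new_img.astype(float)
```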

Presentation

You can download the PowerPoint slides or an Acrobat PDF document of the presentation we gave in class on May 24, 2000.

© 2000 Laurent Meunier and Moritz Borgmann