Research Description

[Figure: from these uncalibrated 2-D views to this textured 3-D scene]

Research

My doctoral research focused on extracting view-invariant information about a static scene from one or more of its views, given preselected sets of parallel and coplanar edges. View-invariant scene information includes the 3-D geometry of objects in the scene, the surface texture/reflectance of those objects, the scene lighting conditions, and a description of the cameras used to create the views. So far I have been able to reconstruct camera parameters, planar 3-D geometry, and surface texture from one or more views of a scene with pre-selected parallel and coplanar edges. The resulting scene description is a VRML file.

This technique has been used to recover 3-D textured polygonal descriptions of scenes from uncalibrated views. See the Results section below for examples.

Results

3-D Model from a single uncalibrated 2-D View
3-D Models from Uncalibrated 2-D Views
Single view scene modeling
Lens calibration
Three view scene modeling
Two view scene modeling

Software and Tutorials

Two modeling programs have been written: sceneBuild, a primitive X11-based system that I developed, with a tutorial written by my noble associates Matt Lennon and Bill Butera; and ModelMaker, a more user-friendly Motif/OpenInventor-based system developed by Eugene Lin, with its own online tutorial.

Both modeling systems let the user create what is called an "origami scene" (i.e. a scene made up of polyhedral surfaces in which the hinge lines between intersecting surfaces are visible) in which certain geometric relationships (such as parallelism, orthogonality, and planarity) among key lines and points are specified.
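To make the idea of an annotated origami scene concrete, here is a minimal sketch of one possible data structure for such a 2-D image model. The class name, field names, and the example labels are hypothetical illustrations, not the actual file format used by sceneBuild or ModelMaker:

```python
from dataclasses import dataclass, field

@dataclass
class OrigamiScene:
    """A 2-D image model: named points and edges, plus the user-specified
    geometric relationships that the 3-D analysis will exploit."""
    points: dict                                  # name -> (x, y) image coordinates
    edges: dict                                   # name -> (point name, point name)
    parallel_groups: list = field(default_factory=list)   # edge-name sets sharing one 3-D direction
    coplanar_groups: list = field(default_factory=list)   # edge-name sets lying on one surface
    orthogonal_pairs: list = field(default_factory=list)  # pairs of parallel groups at right angles

# Example: the visible face of a box, with its two edge directions
# marked parallel within each pair and orthogonal across the pairs.
scene = OrigamiScene(
    points={"a": (10, 10), "b": (110, 20), "c": (105, 120), "d": (8, 112)},
    edges={"ab": ("a", "b"), "dc": ("d", "c"), "ad": ("a", "d"), "bc": ("b", "c")},
    parallel_groups=[{"ab", "dc"}, {"ad", "bc"}],
    coplanar_groups=[{"ab", "bc", "dc", "ad"}],
    orthogonal_pairs=[(0, 1)],  # the two parallel groups meet at 90 degrees in 3-D
)
```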

This 2-D origami image model is given to a 3-D analysis program, called sceneAnalyze, which uses the geometric relationships among 2-D point and line image elements to estimate the original camera's internal imaging geometry, to estimate the position and orientation of each camera, and to determine the 3-D geometry of scene surfaces. Under test conditions, we have found that we can recover polygonal scene geometry to better than 0.5% dimensional accuracy from uncalibrated views of piecewise rectangular objects.
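One standard way such an analysis can exploit parallel and orthogonal edge sets is through vanishing points: image lines marked parallel meet at a vanishing point, and two vanishing points of orthogonal 3-D directions constrain the focal length. The sketch below illustrates that general technique (it is not the actual sceneAnalyze code), assuming square pixels and a known principal point:

```python
import numpy as np

def vanishing_point(lines):
    """Least-squares intersection of image lines, each given as a pair of
    homogeneous endpoints. Returns the vanishing point (x, y, 1)."""
    # Each line is the cross product of its two endpoints (homogeneous coords).
    L = np.array([np.cross(p, q) for p, q in lines])
    # The common intersection is the null vector of the stacked line matrix.
    _, _, Vt = np.linalg.svd(L)
    v = Vt[-1]
    return v / v[2]

def focal_from_orthogonal_vps(v1, v2, cx, cy):
    """Focal length (pixels) from vanishing points of two orthogonal 3-D
    directions, with principal point (cx, cy):
        (v1 - c) . (v2 - c) + f^2 = 0
    """
    d = (v1[0] - cx) * (v2[0] - cx) + (v1[1] - cy) * (v2[1] - cy)
    return np.sqrt(-d)

# Two image lines that both pass through (100, 50): their 3-D originals
# are parallel, so (100, 50) is the shared vanishing point.
vp = vanishing_point([
    (np.array([0.0, 0.0, 1.0]), np.array([100.0, 50.0, 1.0])),
    (np.array([10.0, 80.0, 1.0]), np.array([100.0, 50.0, 1.0])),
])

# Synthetic orthogonal vanishing points consistent with f = 800 px
# and principal point (320, 240).
f = focal_from_orthogonal_vps((1120.0, 240.0), (-480.0, 240.0), 320.0, 240.0)
```

Given the focal length, each vanishing point fixes a 3-D direction in the camera frame, from which surface orientations and camera rotation follow.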

Once camera geometry and a rough polygonal description of the scene have been determined, we can focus on recovering the geometry and texture of objects that initially can only be approximated as polyhedra. In particular, Eugene Lin and I have studied the problem of recovering generalized cylinders and free-form surfaces under these conditions. Another useful and interesting application that I'm investigating for my thesis is to treat the 3-D textured model recovered from one or more pre-recorded frames as the basis for model-based representations of live video.

Selected Publications

Vision-assisted modeling for model-based video representations,
Shawn Becker, Ph.D. Thesis Proposal, MIT, March 1996.

Semiautomatic 3-D Model Extraction From Uncalibrated 2-D Camera Views,
Shawn Becker and V. Michael Bove, Jr., presented at the SPIE Symposium on Electronic Imaging: Science & Technology, San Jose, February 5-10, 1995.

Computation of some projective-chirplet-transform and metaplectic-chirplet-transform subspaces, with applications in image processing
Steve Mann and Shawn Becker, DSP World Symposium, Boston, Massachusetts, November, 1992.

Interactive Measurement of Three-Dimensional Objects Using a Depth Buffer and Linear Probe
Shawn Becker, William A. Barrett and Dan R. Olsen, ACM Transactions on Graphics, Vol. 10, No. 2, April, 1991.

More online publications

sbecker (at) alum (dot) mit (dot) edu