Dissertation

Light Field Capture and Display
This dissertation describes light-efficient methods for capturing and displaying 3D images using thin, optically-attenuating masks. Light transport is modeled, under geometrical optics, as a 4D light field. Four motivating applications are presented: digital photography, single-shot visual hull reconstruction, depth-sensing LCDs, and 3D display using dual-stacked LCDs.
Mask-based Light Field Capture and Display (pdf)
Douglas Lanman
Ph.D. Dissertation, Brown University, School of Engineering, defended on 6 July 2010
Committee: Gabriel Taubin, Joseph Mundy, and Ramesh Raskar
Additional Material: video of presentation at NIPS 2010
Industrial Research

Pinlight Displays
A design for an optical see-through near-eye display that provides a wide field of view (110 degrees) and a compact form factor approaching eyeglasses. The approach uses tiled, defocused point light sources coded by a transmissive spatial light modulator to project imagery directly into the eye.
Pinlight Displays: Wide Field of View Augmented-Reality Eyeglasses using
Defocused Point Light Sources (pdf)
A. Maimone, D. Lanman, K. Rathinavel, K. Keller, D. Luebke, and H. Fuchs
ACM Transactions on Graphics (Proceedings of SIGGRAPH 2014)

Cascaded Displays
We introduce cascaded displays: layered spatial light modulators (SLMs), offset laterally and refreshed at staggered intervals, that synthesize images with greater spatiotemporal resolution than any of the constituent SLMs. Benefits are demonstrated with a dual-layer LCD, showcasing head-mounted display (HMD) applications.
Cascaded Displays: Spatiotemporal Superresolution using Offset Pixel Layers (pdf)
Felix Heide, Douglas Lanman, Dikpal Reddy, Jan Kautz, Kari Pulli, and David Luebke
ACM Transactions on Graphics (Proceedings of SIGGRAPH 2014)
Additional Material: project website, video
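As a rough illustration of the image-formation model behind cascaded displays (not the optimization used in the paper), the Python sketch below simulates a single subframe of two stacked panels whose transmittances multiply optically: each layer is upsampled onto a finer grid and the rear layer is shifted by a fraction of a pixel, so the product lives on a grid finer than either panel alone. Function names, the upsampling factor, and the offset are illustrative; the full model additionally averages such products over staggered refresh subframes.

    import numpy as np

    def cascade_subframe(front, rear, upsample=2, offset=1):
        """Product image of two stacked panels on a grid `upsample`x finer than
        either panel, with the rear panel shifted by `offset` fine-grid samples."""
        f = np.repeat(np.repeat(front, upsample, axis=0), upsample, axis=1)
        r = np.repeat(np.repeat(rear, upsample, axis=0), upsample, axis=1)
        r = np.roll(r, shift=(offset, offset), axis=(0, 1))
        return f * r  # transmittances multiply optically

    # Hypothetical usage with two 540x960 panels yielding a 1080x1920 subframe:
    # sub = cascade_subframe(np.random.rand(540, 960), np.random.rand(540, 960))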

Near-Eye Light Field Displays
Near-eye light field displays depict sharp images by synthesizing light fields corresponding to virtual scenes located within a viewer's natural accommodation range. We establish fundamental trade-offs between resolution, field of view, and form factor and demonstrate a thin, lightweight HMD prototype containing a pair of microlens-covered OLEDs.
Near-Eye Light Field Displays (pdf)
Douglas Lanman and David Luebke
ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2013)
Additional Material: project website, video, interview

Focus 3D
We present a glasses-free 3D display design with the potential to provide viewers with nearly correct accommodation cues, as well as motion parallax and binocular disparity. The design achieves high angular resolution by positioning spatial light modulators about a large lens: one conjugate to the viewer's eye, and one or more near the plane of the lens.
Focus 3D: Compressive Accommodation Display (pdf)
A. Maimone, G. Wetzstein, M. Hirsch, D. Lanman, R. Raskar, and H. Fuchs
ACM Transactions on Graphics (Volume 32 Issue 5, September 2013)
Additional Material: project website, video

Correcting for Optical Aberrations
Optical aberrations of the human eye are currently corrected using eyeglasses, contact lenses, or surgery. We describe a fourth option: modifying the composition of displayed content such that the perceived image appears in focus, after passing through an eye with known optical defects.
Correcting for Optical Aberrations using Multilayer Displays (pdf)
Fu-Chung Huang, Douglas Lanman, Brian Barsky, and Ramesh Raskar
ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2012)
Additional Material: project website
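For context, the conventional single-display alternative is to prefilter the target image by the eye's known point spread function, which is the baseline this multilayer approach improves upon. A minimal Python sketch of that baseline (not the paper's multilayer algorithm) follows; the Wiener noise-to-signal parameter is an illustrative choice. Roughly speaking, the final clamp to the displayable range is what limits the contrast of single-layer prefiltering and motivates the multilayer formulation.

    import numpy as np

    def wiener_prefilter(target, psf, nsr=0.01):
        """Prefilter `target` so that blurring the displayed result by the eye's
        known PSF approximately reproduces the target. `psf` must have the same
        shape as `target`, with its peak at index (0, 0)."""
        H = np.fft.fft2(psf)
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)        # Wiener inverse filter
        pre = np.real(np.fft.ifft2(np.fft.fft2(target) * W))
        return np.clip(pre, 0.0, 1.0)                  # displayable range only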
Postdoctoral and Graduate Research

Tensor Displays
We introduce tensor displays: a family of compressive light field displays comprising all architectures employing stacks of time-multiplexed, light-attenuating layers illuminated by uniform or directional backlighting (i.e., any low-resolution light field emitter). We construct a prototype and show interactive display using a GPU-based implementation.
Tensor Displays: Compressive Light Field Synthesis using
Multilayer Displays with Directional Backlighting (pdf)
Gordon Wetzstein, Douglas Lanman, Matthew Hirsch, and Ramesh Raskar
ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2012)
Additional Material: project website
Depth of Field Analysis for Multilayer Automultiscopic Displays (pdf)
Douglas Lanman, Gordon Wetzstein, Matthew Hirsch, and Ramesh Raskar
OSA International Symposium on Display Holography (ISDH 2012)
Real-Time Image Generation for Compressive Light Field Displays
Gordon Wetzstein, Douglas Lanman, Matthew Hirsch, and Ramesh Raskar
OSA International Symposium on Display Holography (ISDH 2012)
Construction and Calibration of LCD-based Multi-Layer Light Field Displays
Matthew Hirsch, Douglas Lanman, Gordon Wetzstein, and Ramesh Raskar
OSA International Symposium on Display Holography (ISDH 2012)

Polarization Field Displays
We introduce polarization fields as an optically-efficient construction for light field display with layered LCDs. Such displays contain a stack of liquid crystal panels with a single pair of crossed polarizers, with layers acting as spatially-controllable polarization rotators. We construct a prototype and show interactive display using a GPU-based implementation.
Polarization Fields: Dynamic Light Field Display using Multi-Layer LCDs (pdf)
Douglas Lanman, Gordon Wetzstein, Matthew Hirsch, Wolfgang Heidrich, and Ramesh Raskar
ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2011)
Additional Material: project website
Applying Formal Optimization Methods to Multi-Layer Automultiscopic Displays (pdf)
Douglas Lanman, Gordon Wetzstein, Matthew Hirsch, Wolfgang Heidrich, and Ramesh Raskar
SPIE Stereoscopic Displays and Applications XXIII
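A minimal Python sketch of the forward model above: between crossed polarizers, a ray's normalized exit intensity depends only on the total polarization rotation accumulated across the layers it crosses, so the per-ray synthesis constraint is linear in the rotation angles. The equal split over three layers is purely illustrative; the paper solves the coupled constraints over all rays with an iterative constrained solver.

    import numpy as np

    def emitted_intensity(rotations):
        """Normalized intensity of a ray exiting a stack of polarization-rotating
        layers placed between crossed linear polarizers; `rotations` holds the
        per-layer rotation angles (radians) along the ray."""
        return np.sin(np.sum(rotations)) ** 2

    # To show intensity I on a ray, the rotations need only sum to arcsin(sqrt(I)).
    I_target = 0.6
    per_layer = np.full(3, np.arcsin(np.sqrt(I_target)) / 3)
    print(emitted_intensity(per_layer), "vs target", I_target)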

Tomographic Image Synthesis
We optimize automultiscopic displays composed of compact volumes of light-attenuating material. Inexpensively fabricated by stacking transparencies, the attenuators recreate a light field when back-illuminated. Tomographic optimization resolves inconsistent views, leading to brighter, higher-resolution 3D displays with greater depth of field and higher dynamic range.
Layered 3D: Tomographic Image Synthesis for
Attenuation-based Light Field and High Dynamic Range Displays (pdf)
Gordon Wetzstein, Douglas Lanman, Wolfgang Heidrich, and Ramesh Raskar
ACM Transactions on Graphics (Proceedings of SIGGRAPH 2011)
Additional Material: project website
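Because a ray's transmittance through stacked attenuating layers is the product of the per-layer attenuations, taking logarithms turns light field synthesis into a linear, tomography-like problem. The flatland Python sketch below solves a toy instance with projected gradient descent; the geometry, target light field, and solver are illustrative stand-ins for the paper's full 4D formulation.

    import numpy as np

    # Flatland toy: K attenuating layers of M pixels each, spaced dz apart.
    K, M, dz = 3, 32, 8.0
    slopes = np.linspace(-0.3, 0.3, 7)            # ray directions (dx per unit depth)

    # Ray matrix A: one row per (pixel, slope) ray, one column per layer pixel;
    # an entry of 1 marks the layer pixel that the ray passes through.
    rows, cols, targets = [], [], []
    ray = 0
    for s in slopes:
        for x in range(M):
            hits = [int(round(x + s * k * dz)) for k in range(K)]
            if all(0 <= h < M for h in hits):
                for k, h in enumerate(hits):
                    rows.append(ray)
                    cols.append(k * M + h)
                # Toy target light field: bright in the center, dim at the edges.
                targets.append(0.2 + 0.8 * np.exp(-((x - M / 2) ** 2) / 50.0))
                ray += 1
    A = np.zeros((ray, K * M))
    A[rows, cols] = 1.0
    b = -np.log(np.array(targets))                # per-ray absorbance (log domain)

    # Projected gradient descent for min ||A x - b||^2 subject to x >= 0,
    # where x holds the absorbance of every layer pixel.
    x = np.zeros(K * M)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(500):
        x -= step * (A.T @ (A @ x - b))
        np.clip(x, 0.0, None, out=x)

    layers = np.exp(-x).reshape(K, M)             # per-layer transmittance patterns
    print("max ray error:", np.abs(np.exp(-(A @ x)) - np.array(targets)).max())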

Content-Adaptive Parallax Barriers
We optimize the performance of automultiscopic displays constructed by stacking a pair of modified LCD panels. Rather than using heuristically-defined parallax barriers, we demonstrate that both layers can be simultaneously optimized for the multi-view content. This process leads to 3D displays with increased brightness and refresh rate.
Content-Adaptive Parallax Barriers for Automultiscopic 3D Display (pdf)
Douglas Lanman, Matthew Hirsch, Yunhee Kim, and Ramesh Raskar
ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2010)
Additional Material: project website
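The optimization treats the target multi-view imagery as a nonnegative matrix and jointly solves for the two layer patterns as a rank-constrained factorization, with each rank-1 term shown in a separate time-multiplexed frame. The Python sketch below uses standard multiplicative (Lee-Seung) NMF updates as a stand-in for the paper's weighted formulation; matrix shapes and iteration counts are illustrative.

    import numpy as np

    def factor_light_field(L, T=4, iters=300, eps=1e-9):
        """Approximate a nonnegative light field matrix L as F @ G.T, where the
        T columns of F and G hold the two layers' mask patterns for successive
        time-multiplexed frames. The physical display additionally normalizes by
        the frame count and clamps values to the panel's transmittance range."""
        m, n = L.shape
        rng = np.random.default_rng(0)
        F, G = rng.random((m, T)), rng.random((n, T))
        for _ in range(iters):
            F *= (L @ G) / (F @ (G.T @ G) + eps)      # multiplicative updates keep
            G *= (L.T @ F) / (G @ (F.T @ F) + eps)    # both factors nonnegative
        return F, G

    # Toy usage: rows and columns of L index the pixels of the two stacked layers.
    L = np.random.rand(64, 48)
    F, G = factor_light_field(L)
    print("relative error:", np.linalg.norm(L - F @ G.T) / np.linalg.norm(L))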

BiDi Screen:
A Thin, Depth-Sensing LCD
We transform an LCD into a BiDirectional (BiDi) screen allowing 2D multi-touch and 3D gestures. An image sensor is placed behind the liquid crystal layer, forming a mask-based light field camera and allowing passive depth estimation. Applications include hybrid 2D/3D gestures and image-based relighting.
BiDi Screen: A Thin, Depth-Sensing LCD for 3D Interaction using Light Fields (pdf)
Matthew Hirsch, Douglas Lanman, Henry Holtzman, and Ramesh Raskar
ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2009)
Additional Material: project website, supplementary appendix

Shield Fields:
Modeling and Capturing 3D Occluders
In this paper we decouple 3D occluders from 4D illumination using shield fields: the 4D attenuation function that an occluder applies to any incident light field. We then analyze occluder reconstruction from cast shadows, leading to a single-shot light field camera for visual hull reconstruction.
Shield Fields: Modeling and Capturing 3D Occluders (pdf)
Douglas Lanman, Ramesh Raskar, Amit Agrawal, and Gabriel Taubin
ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2008)
Additional Material: project website

Computational Plenoptic Imaging
The plenoptic function is a ray-based model for light that includes the color spectrum as well as spatial, temporal, and directional variation. In this state-of-the-art report, we review approaches that optically encode dimensions of the plenoptic function beyond those captured by traditional photography and reconstruct the recorded information computationally.
Computational Plenoptic Imaging (pdf)
Gordon Wetzstein, Ivo Ihrke, Douglas Lanman, and Wolfgang Heidrich
Computer Graphics Forum, 2011
Additional Material: project website
State of the Art in Computational Plenoptic Imaging (pdf)
Gordon Wetzstein, Ivo Ihrke, Douglas Lanman, and Wolfgang Heidrich
European Association for Computer Graphics (Eurographics 2011 STAR)
Additional Material: project website

Descattering Transmission
via Angular Filtering
We describe a single-shot method to separate the unscattered and scattered components of light transmitted through a heterogeneous translucent material. By capturing the angularly-varying statistics of scattered light, we compute an unscattered, direct-only image.
Descattering Transmission via Angular Filtering (pdf)
Jaewon Kim, Douglas Lanman, Yasuhiro Mukaigawa, and Ramesh Raskar
European Conference on Computer Vision (ECCV 2010)

Image Destabilization:
Defocus using Lens and Sensor Motion
We propose an imaging system in which the lens and sensor are perturbed during the exposure. We analyze the defocus effects, demonstrating approximately depth-independent defocus and exaggerated, programmable, and pleasing bokeh with small apertures (such as those found in mobile phones).
Image Destabilization: Programmable Defocus using Lens and Sensor Motion (pdf)
Ankit Mohan, Douglas Lanman, Shinsaku Hiura, and Ramesh Raskar
IEEE International Conference on Computational Photography (ICCP 2009)
Additional Material: project website, presentation (ppt), supplementary material (zip)

Modeling and Synthesis of
Aperture Effects in Cameras
In this paper we describe the capture, analysis, and synthesis of vignetting and depth-of-field effects in conventional cameras. We also consider calibration using point sources and introduce the Bokeh Brush: a novel, post-capture method for full-resolution control of the shape of out-of-focus points.
Modeling and Synthesis of Aperture Effects in Cameras (ps, pdf)
Douglas Lanman, Ramesh Raskar, and Gabriel Taubin
Int'l Symposium on Computational Aesthetics in Graphics, Visualization, and Imaging (CAe 2008)

Surround Structured Lighting
This paper presents a new system for rapidly acquiring complete 3D models using a single structured light projector, a pair of planar mirrors, and one or more cameras. Using an orthographic projector composed of a Fresnel lens and a DLP projector, we display a single Gray code sequence to encode all the illumination planes within the scanning volume.
Surround Structured Lighting for Full Object Scanning (ps, pdf)
Douglas Lanman, Daniel Crispell, and Gabriel Taubin
6th International Conference on 3D Digital Imaging and Modeling (3DIM 2007)
Additional Material: presentation (pdf, ppt)
Surround Structured Lighting: 3-D Scanning with Orthographic Illumination (pdf)
Douglas Lanman, Daniel Crispell, and Gabriel Taubin
Elsevier Journal of Computer Vision and Image Understanding (CVIU)
Special Issue on New Advances in 3D Imaging and Modeling, Spring 2009
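The Gray code sequence is decoded independently at each camera pixel by converting the observed bit pattern back into an illumination-plane index. A minimal Python sketch of that decoding step follows, assuming the captured pattern images have already been binarized.

    import numpy as np

    def decode_gray(bits):
        """Decode per-pixel Gray-code bit sequences into stripe (plane) indices.
        `bits` has shape (num_patterns, H, W), most significant bit first."""
        binary = bits[0].astype(np.uint32)
        index = binary.copy()
        for b in bits[1:]:
            binary ^= b.astype(np.uint32)       # Gray -> binary: cumulative XOR
            index = (index << 1) | binary
        return index                            # integer plane index per pixel

    # Toy check: plane 5 = 0b101 is projected as Gray code 0b111.
    bits = np.array([[[1]], [[1]], [[1]]], dtype=np.uint8)
    print(decode_gray(bits))                    # -> [[5]]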

Shape from Depth Discontinuities
under Orthographic Projection
We recover the 3-D surface of opaque objects, viewed under orthographic projection while rotated on a turntable. Gaps are filled using a novel shape completion scheme. For verification, we construct a large-format orthographic multi-flash camera and analyze limitations for point and directional illumination.
Shape from Depth Discontinuities under Orthographic Projection (ps, pdf)
Douglas Lanman, Daniel Cabrini Hauagge, and Gabriel Taubin
IEEE International Workshop on 3-D Digital Imaging and Modeling (3DIM 2009)
Additional Material: poster, supplementary material

Multi-Flash 3D Photography
We describe a new 3D scanning system which exploits the depth discontinuity information captured by the multi-flash camera proposed by Raskar et al. in 2004. In contrast to existing differential and global shape-from-silhouette algorithms, our method can reconstruct the position and orientation of points located deep inside concavities.
Beyond Silhouettes: Surface Reconstruction using Multi-Flash Photography (ps, pdf)
Daniel Crispell, Douglas Lanman, Peter G. Sibley, Yong Zhao, and Gabriel Taubin
Third International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT 2006)
Multi-Flash 3D Photography: Capturing Shape and Appearance (ps, pdf)
Douglas Lanman, Peter G. Sibley, Daniel Crispell, Yong Zhao, and Gabriel Taubin
33rd International Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 2006)
Additional Material: poster, supplementary material
Shape from Depth Discontinuities (pdf)
Daniel Crispell, Douglas Lanman, Peter G. Sibley, Yong Zhao, and Gabriel Taubin
Emerging Trends in Visual Computing, Lecture Notes in Computer Science Series (LNCS)
Springer-Verlag, Vol. 5416, 2009
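The depth-discontinuity input comes from the multi-flash camera of Raskar et al. (2004): each flash casts a thin shadow adjacent to depth edges, and dividing each image by the per-pixel maximum across all flashes isolates those shadows. The Python sketch below shows that detection step in simplified form; the threshold and the one-pixel shift test are illustrative stand-ins for the method's full epipolar traversal.

    import numpy as np

    def depth_edges(images, steps, shadow_thresh=0.6):
        """Mark pixels where a flash-cast shadow begins, adjacent to a depth
        discontinuity. `images` is a list of (H, W) float images, one per flash;
        `steps` gives an integer (dy, dx) image-plane step per flash, pointing
        away from that flash (the direction its shadows are cast)."""
        imax = np.maximum.reduce(images) + 1e-6
        edges = np.zeros(imax.shape, dtype=bool)
        for img, (dy, dx) in zip(images, steps):
            shadow = (img / imax) < shadow_thresh     # ratio image: shadows are dark
            lit_before = ~np.roll(shadow, shift=(dy, dx), axis=(0, 1))
            edges |= shadow & lit_before              # shadow starts at this pixel
        return edges

    # Hypothetical usage with four flashes placed around the lens; each step
    # points away from its flash in image coordinates:
    # edges = depth_edges([i_left, i_right, i_top, i_bottom],
    #                     steps=[(0, 1), (0, -1), (1, 0), (-1, 0)])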

Catadioptric Systems
Catadioptric imaging systems are composed of both lenses and mirrors. In my graduate research at Brown, I worked with Prof. Gabriel Taubin to develop novel computer vision algorithms using these systems, focusing on system design, calibration techniques, and 3D reconstruction. Two papers were published in this area.
Reconstructing a 3D Line from a Single Catadioptric Image (ps, pdf)
Douglas Lanman, Megan Wachs, Gabriel Taubin, and Fernando Cukierman
Third International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT 2006)
Additional Material: poster
Spherical Catadioptric Arrays: Construction, Multi-View Geometry, and Calibration (ps, pdf)
Douglas Lanman, Daniel Crispell, Megan Wachs, and Gabriel Taubin
Third International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT 2006)
Additional Material: poster

Visual Sensor Networks
We developed methods for utilizing distributed ad-hoc networks of "smart" camera-processor nodes, focusing on robust autocalibration procedures. A more detailed description is available in this paper summary.
Undergraduate Research

Model-based Face Capture
A novel method was developed for creating photo-realistic 3D face models from 2D images of a subject. A scattered data interpolation algorithm was used to deform an exemplar mesh to fit individual face geometries.
Results: report, presentation, website
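As a generic illustration of the scattered-data interpolation step described above (the original report's exact kernel and formulation may differ), the Python sketch below warps mesh vertices so that a handful of control points land on measured face landmarks, smoothly interpolating in between with Gaussian radial basis functions. All names and values are illustrative.

    import numpy as np

    def rbf_warp(src, dst, points, sigma=0.5):
        """Displace `points` so each control point in `src` maps to the matching
        point in `dst`, interpolating elsewhere with Gaussian radial basis
        functions (src, dst: (n, 3); points: (m, 3))."""
        phi = lambda r: np.exp(-(r / sigma) ** 2)
        A = phi(np.linalg.norm(src[:, None] - src[None, :], axis=-1))
        weights = np.linalg.solve(A, dst - src)       # one weight row per control point
        R = phi(np.linalg.norm(points[:, None] - src[None, :], axis=-1))
        return points + R @ weights

    # Toy usage: pull one landmark of a flat patch upward and warp nearby vertices.
    src = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
    dst = np.array([[0.0, 0.0, 0.3], [1.0, 0.0, 0.0]])
    verts = np.array([[0.5, 0.0, 0.0], [0.9, 0.0, 0.0]])
    print(rbf_warp(src, dst, verts))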

Sensor Network Simulation
As an intern at Los Alamos National Laboratory, I created a comprehensive sensor network simulation package. Working with Dr. Anders M. Jorgensen, I also developed a novel algorithm for distributed audio source localization.
Results: presentation

Distributed Task Allocation
For a final course project in EE 141: Swarm Intelligence, I contributed to the development of novel division-of-labor algorithms for multi-robot systems. Studies were conducted using the Webots simulator and Khepera robots.
Results: report, website, animation, publication
Last Updated: July 28, 2014