Image-Based Rendering for Non-Diffuse Synthetic Scenes

Dani Lischinski and Ari Rappoport


ABSTRACT

    Most current image-based rendering methods operate under the assumption that all of the visible surfaces in the scene are opaque ideal diffuse (Lambertian) reflectors. This paper is concerned with image-based rendering of non-diffuse synthetic scenes. We introduce a new family of image-based scene representations and describe corresponding image-based rendering algorithms that can handle general synthetic scenes containing not only diffuse reflectors, but also specular and glossy objects. Our image-based representation is based on layered depth images. It represents view-independent scene information and view-dependent appearance information simultaneously but separately. The view-dependent information can either be extracted directly from our data structures or evaluated procedurally using an image-based analogue of ray tracing. We describe image-based rendering algorithms that recombine the two components in a manner that produces a good approximation to the correct image from any viewing position. In addition to extending image-based rendering to non-diffuse synthetic scenes, our paper makes an important methodological contribution: it places image-based rendering, light-field rendering, and volume graphics in a common framework of discrete raster-based scene representations.
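
    The C++ sketch below, with hypothetical types and names throughout (it is not the authors' code), illustrates one way such a layered representation might be organized: each pixel of a layered depth image holds a list of depth samples, and each sample stores its view-independent diffuse radiance separately from the information needed to recompute a view-dependent term for a new viewing direction.

        // A minimal sketch, assuming hypothetical names throughout (not the
        // authors' code): a layered depth image whose samples keep the
        // view-independent diffuse radiance separate from the data needed
        // to evaluate a view-dependent term at render time.

        #include <cstddef>
        #include <cstdio>
        #include <vector>

        struct Vec3  { float x, y, z; };
        struct Color { float r, g, b; };

        // One depth sample along a pixel's line of sight in the LDI camera.
        struct LdiSample {
            float depth;     // distance from the LDI camera
            Vec3  normal;    // surface normal, used for view-dependent shading
            Color diffuse;   // view-independent (Lambertian) component
            int   material;  // index used to evaluate the specular/glossy term
        };

        // Layered depth image: a raster whose pixels each hold zero or more
        // samples, kept sorted front to back.
        class LayeredDepthImage {
        public:
            LayeredDepthImage(int w, int h)
                : w_(w), h_(h), pix_(std::size_t(w) * h) {}
            std::vector<LdiSample>& at(int x, int y) {
                return pix_[std::size_t(y) * w_ + x];
            }
        private:
            int w_, h_;
            std::vector<std::vector<LdiSample>> pix_;
        };

        // Recombination sketch: radiance seen from a new viewpoint is the
        // stored diffuse term plus a view-dependent term recomputed for the
        // new view direction. The "specular" term here is a trivial
        // placeholder; the paper's renderers instead extract it from the
        // data structures or evaluate it with image-based ray tracing.
        Color shade(const LdiSample& s, const Vec3& viewDir) {
            float spec = (s.normal.z * viewDir.z > 0.0f) ? 0.1f : 0.0f;
            return { s.diffuse.r + spec, s.diffuse.g + spec, s.diffuse.b + spec };
        }

        int main() {
            LayeredDepthImage ldi(2, 2);
            ldi.at(0, 0).push_back({ 1.5f, {0, 0, 1}, {0.4f, 0.2f, 0.1f}, 0 });
            Color c = shade(ldi.at(0, 0).front(), {0, 0, 1});
            std::printf("shaded sample: %.2f %.2f %.2f\n", c.r, c.g, c.b);
            return 0;
        }

    Keeping the two terms separate is what would allow a renderer to reuse the warped diffuse component across views while recomputing only the view-dependent contribution.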

In Rendering Techniques '98 (Proc. Ninth Eurographics Workshop on Rendering), pp. 301-314, 1998. Download the paper (PDF, 238 KB).

Animations (QuickTime format, about 47 Mbytes each!)

  1. Glossy teapot
  2. Scene with LLF reflections
  3. Scene with image-based ray tracing