A tele-immersive system based on binocular view interpolation

https://doi.org/10.2312/EGVE/EGVE04/137-146

Abstract

The main idea behind tele-immersive environments is to create an immersive virtual environment that connects people across networks and enables them to interact not only with each other, but also with various forms of shared digital data (video, 3D models, images, text, etc.). Tele-immersive environments may eventually replace current video and telephone conferencing and enable a better, more intuitive way for people and computer systems to communicate. To accomplish this, participants in a meeting have to be represented digitally with a high degree of accuracy in order to preserve a sense of immersion. Tele-immersive environments should have the same "feel" as a real meeting, and interactions among people should be natural. To create such a system, we need to solve the key problem of synthesizing, in real time, new views corresponding to arbitrary viewpoints from a fixed network of cameras. We also need to do this for two virtual cameras separated by the inter-ocular distance of each participant. In this paper, we describe a new binocular view interpolation method based on a re-projection technique using calibrated cameras. We discuss the various aspects of this new algorithm and of the hardware systems necessary to perform these operations in real time. We also present early experimental results illustrating the advantages of this algorithm.
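The abstract does not spell out the re-projection step itself, so the following is only a minimal sketch of the geometry such a method relies on: a pixel from a calibrated source camera is back-projected using an estimated depth, and the recovered 3D point is re-projected into two virtual cameras separated by the participant's inter-ocular distance. The function names, the pinhole-camera convention (x_cam = R·x_world + t), and the 6.5 cm default baseline are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def backproject(K, R, t, u, v, depth):
    # Back-project pixel (u, v), with known depth along the optical axis,
    # into world coordinates. Convention: x_cam = R @ x_world + t.
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction in the camera frame
    x_cam = ray * depth                             # 3D point in the camera frame
    return R.T @ (x_cam - t)                        # 3D point in the world frame

def project(K, R, t, x_world):
    # Project a world point into a calibrated (possibly virtual) camera.
    uvw = K @ (R @ x_world + t)
    return uvw[:2] / uvw[2]                         # pixel coordinates

def binocular_reprojection(K, R, t, u, v, depth, ipd=0.065):
    # Re-project one source pixel into a virtual stereo pair whose optical
    # centres straddle the source camera's centre, ipd metres apart along the
    # camera's x-axis (ipd = 0.065 m is an assumed inter-ocular distance).
    x_world = backproject(K, R, t, u, v, depth)
    shift = np.array([ipd / 2.0, 0.0, 0.0])
    # Translating a camera by d (expressed in its own frame) turns t into t - d.
    left_px  = project(K, R, t + shift, x_world)    # eye displaced by -ipd/2 along x
    right_px = project(K, R, t - shift, x_world)    # eye displaced by +ipd/2 along x
    return left_px, right_px

if __name__ == "__main__":
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])
    R, t = np.eye(3), np.zeros(3)                   # source camera at the world origin
    print(binocular_reprojection(K, R, t, u=320.0, v=240.0, depth=2.0))
```

In a full system this per-pixel re-projection would have to run over every pixel of both virtual views at interactive rates, which is why the paper also discusses the hardware needed for real-time operation.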
