Lensless photography with only an image sensor

2017, Applied Optics

https://doi.org/10.1364/AO.56.006450

Abstract

Photography usually requires optics in conjunction with a recording device (an image sensor). Eliminating the optics could lead to new form factors for cameras. Here, we report a simple demonstration of imaging using a bare CMOS sensor together with computation. The technique relies on the space-variant point-spread functions that result from the interaction of a point source in the field of view with the image sensor. These space-variant point-spread functions are combined with a reconstruction algorithm to image simple objects displayed on a discrete LED array as well as on an LCD screen. We extended the approach to video imaging at the native frame rate of the sensor. Finally, we performed experiments to analyze the parametric impact of the object distance. Improved sensor designs and reconstruction algorithms could lead to useful cameras without optics.

The optical systems of cameras in mobile devices typically constrain the overall thickness of the devices [1,2]. By eliminating the optics, it is possible to create ultra-thin cameras with interesting new form factors. Previous work in computational photography has eliminated the need for lenses by placing apertures in front of the image sensor or by coherently illuminating the sample [8]. In the former case, the apertures create shadow patterns on the sensor from which the image can be computationally recovered by solving a linear inverse problem [9]. The latter case requires coherent illumination, which is not generally applicable to imaging. In most instances, coded apertures have replaced the lenses. Microfabricated coded apertures have recently shown potential for thinner systems [4], with thicknesses on the order of millimeters. However, these apertures are absorbing and hence exhibit relatively low transmission efficiencies. Another method utilizes holographic phase masks integrated onto the image sensor, in conjunction with computation, to enable simple imaging.
In this case, precise microfabrication of the mask onto the sensor is required. Yet another computational camera utilizes a microlens array to form a large number of partial images of the scene, which are then numerically combined into a single image with computational refocusing. Here, we report on a computational camera comprised of only a conventional image sensor and no other elements. Our motivation is the recognition that all cameras rely on the same fact: information about the object enters the aperture (of the lens, the coded aperture, or the microlens array) and is recorded by the image sensor. In the case of the coded aperture and the microlens array, numerical processing is performed to render the image for human consumption. If all optical elements are eliminated, information from the object is still recorded by the image sensor. Given appropriate reconstruction algorithms, the image recorded by the sensor can subsequently be recovered for human consumption. This is analogous to multi-sensor compressive sensing [6], where spatial light
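The reconstruction described above can be sketched as a linear inverse problem. The following Python fragment is a minimal illustration, not the paper's actual pipeline: the dimensions, the random calibration matrix standing in for measured space-variant point-spread functions, and the Tikhonov regularization weight are all assumptions made for the sketch. Each column of the calibration matrix A represents the sensor response to a single lit LED; a measurement is then a superposition of these responses, and the scene is recovered by regularized least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: a 16-element LED "object" and a 64-pixel sensor.
n_object, n_sensor = 16, 64

# Calibration stand-in: in the experiment, each column would be the measured
# space-variant point-spread function of one LED; here it is random.
A = rng.random((n_sensor, n_object))

# Scene: a simple sparse pattern on the LED array.
x_true = np.zeros(n_object)
x_true[[2, 5, 11]] = 1.0

# Measurement: the bare sensor records a superposition of PSFs plus noise.
b = A @ x_true + 0.01 * rng.standard_normal(n_sensor)

# Reconstruction: Tikhonov-regularized least squares,
#   x_hat = argmin_x ||A x - b||^2 + lam ||x||^2
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_object), A.T @ b)

print(np.round(x_hat, 2))
```

In practice the calibration matrix is far larger and poorly conditioned, so the regularization weight (and the choice of solver) matters much more than in this toy example.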

References (19)

  1. J. Bareau and P. P. Clark, in Proceedings of International Optical Design Conference (SPIE 2006), 63421F
  2. A. Bruckner and M. Schoberl, in Proceedings of MOEMS and Miniaturized Systems XII (SPIE 2013), 861617
  3. T. M. Cannon and E. E. Fenimore, Opt. Eng. 19, 193283 (1980)
  4. M. S. Asif, A. Ayremlou, A. Sankaranarayanan, A. Veeraraghavan, and R. Baraniuk, arXiv:1509.00116v2 (2016).
  5. X. Yuan, H. Jiang, G. Huang, and P. Wilford, arXiv:1508.03498 (2015).
  6. H. Jiang, G. Huang, and P. Wilford, APSIPA Transactions on Signal and Information Processing 3, e15 (2014).
  7. G. Huang, H. Jiang, K. Matthews, and P. Wilford, in Proceedings of IEEE International Conference on Image Processing (IEEE, 2013), pp. 2101
  8. W. Bishara, S. Mavandadi, F. W. Yu, S. Feng, R. Lau, and A. Ozcan, Proc. Natl. Acad. Sci. USA 108, 7296-7301 (2011).
  9. B. Adcock, A. C. Hansen, C. Poon, and B. Roman, arXiv:1302.0561 (2014).
  10. P. R. Gill and D. G. Stork, in Proceedings of Computational Optical Sensing and Imaging, Alexandria, VA, 2013
  11. D. G. Stork and P. R. Gill, International Journal of Advances in Systems and Measurements, 7(3,4), 201-208 (2014)
  12. J. Tanida, O. Kumagai, K. Yamada, S. Miyatake, K. Ishida, T. Morimoto, N. Kondou, D. Miyazaki, and Y. Ichioka, Appl. Opt. 40, 1806-1813 (2001)
  13. K. Venkataraman, D. Lelescu, J. Duparre, A. McMahon, G. Molina, P. Chatterjee, R. Mullis, and S. Nayar, ACM Trans. Graph. 32, 166 (2013).
  14. G. Kim and R. Menon, Appl. Phys. Lett. 105, 061114 (2014)
  15. A. K. Cline, C. B. Moler, G. W. Stewart, and J. H. Wilkinson, SIAM J. Numer. Anal. 16, 368-375 (1979).
  16. A. L. Cohen, Optica Acta 29(1), 63-67 (1982).
  17. Supplementary Information
  18. K. Kulkarni and P. Turaga, IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(4), 772-784 (2016)
  19. H. Chen, S. Jayasuriya, J. Yang, J. Stephen, S. Sivaramakrishnan, A. Veeraraghavan, and A. Molnar, arXiv:1605.03621 (2016).