

End-to-end Optics Design for Computational Cameras

2021

https://doi.org/10.25781/KAUST-23EL6

Abstract

Imaging systems have long been designed in two separate steps: experience-driven optical design followed by sophisticated image processing. This general-purpose approach was successful in the past, but for any specific task it leaves open the question of the best compromise between optics and post-processing, as well as of minimizing cost. Motivated by this, a series of works is proposed that brings imaging system design into an end-to-end fashion step by step: from joint optics design, through point spread function (PSF) optimization and phase map optimization, to a general end-to-end complex lens camera. To demonstrate joint optics design with image recovery, we apply it to flat-lens imaging with a large field of view (LFOV). For a super-resolution single-photon avalanche diode (SPAD) camera, the PSF encoded by a diffractive optical element (DOE) is optimized together with the post-processing, bringing the optics design into the end-to-end stage. Expanding to color imaging, opt...
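The works summarized above share one ingredient: a differentiable image formation model that maps an optical design (here, a DOE phase profile) to a PSF and then to a sensor image, so that a reconstruction loss can be backpropagated into the optics. As a minimal illustration only — not the thesis implementation; the function names and the Fraunhofer far-field approximation are simplifying assumptions — the forward model can be sketched in NumPy:

```python
import numpy as np

def psf_from_phase(phase, aperture):
    # Far-field (Fraunhofer) approximation: the PSF is the squared
    # magnitude of the Fourier transform of the pupil function.
    pupil = aperture * np.exp(1j * phase)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()          # normalize to unit energy

def sensor_image(scene, psf):
    # Shift-invariant image formation: sensor = scene (*) PSF,
    # computed via FFT (circular convolution for simplicity).
    S = np.fft.fft2(scene)
    H = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(S * H))

# Toy example: a flat phase over a circular aperture gives the
# diffraction-limited PSF; an optimized DOE would modulate `phase`.
n = 64
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = (xx ** 2 + yy ** 2 <= (n // 4) ** 2).astype(float)
psf = psf_from_phase(np.zeros((n, n)), aperture)
scene = np.random.rand(n, n)
measured = sensor_image(scene, psf)
```

In the actual end-to-end pipelines this forward model is written in an automatic-differentiation framework, so the gradient of the reconstruction loss with respect to `phase` directly drives the DOE design alongside the post-processing network.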

Contents

2.1 Aberrations and Traditional Lens Design
2.2 Computational Optics
2.3 Manufacturing Planar Optics
2.4 Image Quality
2.5 Learned Image Reconstruction
3 Designing Optics for Learned Recovery
4 Lens Design
4.1 Ideal Phase Profile
4.2 Aperture Partitioning
4.3 Fresnel Depth Profile Optimization
4.4 Aberration Analysis
5 Learned Image Reconstruction
5.1 Image Formation Model
5.2 Generative Image Recovery
6 Datasets
7 Prototype
8 Analysis
8.1 Field of View Analysis
8.2 Generalization Analysis
8.3 Fine-tuning for Alternative Lens Designs
8.4 Hallucination Analysis
9 Experimental Assessment
9.1 Imaging over Large Depth Ranges and in Low Light
10 Discussion and Conclusion

End-to-End Encoding Through Optimizing PSF: Super-resolution SPAD Camera

1 Introduction
2 Related Work
2.1 Image Super-resolution (SR)
2.2 PSF Engineering for Computational Imaging
2.3 Imaging with SPAD Sensors
2.4 End-to-End Computational Cameras
3 End-to-end Diffractive Optics Design and Image Reconstruction
3.1 Image Formation
3.2 Image Reconstruction
  35. E. R. Dowski and W. T. Cathey, "Extended depth of field through wave-front coding," Applied optics, vol. 34, no. 11, pp. 1859-1866, 1995.
  36. S. C. Tucker, W. T. Cathey, and E. R. Dowski, "Extended depth of field and aberration control for inexpensive digital microscope systems," Optics Express, vol. 4, no. 11, pp. 467-474, 1999.
  37. W. T. Cathey and E. R. Dowski, "New paradigm for imaging systems," Applied Optics, vol. 41, no. 29, pp. 6080-6092, 2002.
  38. A. Levin, S. W. Hasinoff, P. Green, F. Durand, and W. T. Freeman, "4d fre- quency analysis of computational cameras for depth of field extension," in ACM Trans. Graph. (TOG), vol. 28, no. 3. ACM, 2009, p. 97.
  39. P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," in Proceedings of the 24th Annual Conference on Com- puter Graphics and Interactive Techniques, ser. SIGGRAPH '97. USA: ACM Press/Addison-Wesley Publishing Co., 1997, p. 369-378.
  40. S. Mann and R. W. Picard, "Being 'undigital' with digital cameras: extending dynamic range by combining differently exposed pictures," 1994.
  41. E. Reinhard and K. Devlin, "Dynamic range reduction inspired by photorecep- tor physiology," IEEE Transactions on Visualization and Computer Graphics, vol. 11, no. 1, pp. 13-24, 2005.
  42. M. Rouf, R. Mantiuk, W. Heidrich, M. Trentacoste, and C. Lau, "Glare encoding of high dynamic range images," ser. CVPR '11. USA: IEEE Computer Society, 2011.
  43. S. Nayar, V. Branzoi, and T. Boult, "Programmable Imaging using a Digi- tal Micromirror Array," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. I, Jun 2004, pp. 436-443.
  44. D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. Vera, and S. D. Feller, "Multiscale gigapixel photography," Nature, vol. 486, no. 7403, p. 386, 2012.
  45. O. S. Cossairt, D. Miau, and S. K. Nayar, "Gigapixel computational imaging," in 2011 IEEE International Conference on Computational Photography (ICCP), 2011, pp. 1-8.
  46. Q. Sun, X. Dun, Y. Peng, and W. Heidrich, "Depth and transient imaging with compressive spad array cameras," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
  47. Y. Peng, Q. Sun, X. Dun, G. Wetzstein, and W. Heidrich, "Learned large field-of-view imaging with thin-plate optics," in ACM Transactions on Graphics (Proc. SIGGRAPH Asia), vol. 38, no. 6. ACM, 2019.
  48. V. Sitzmann, S. Diamond, Y. Peng, X. Dun, S. Boyd, W. Heidrich, F. Heide, and G. Wetzstein, "End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging," ACM Transactions on Graphics (TOG), vol. 37, no. 4, p. 114, 2018.
  49. J. Chang and G. Wetzstein, "Deep optics for monocular depth estimation and 3d object detection," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.
  50. --, "Deep optics for monocular depth estimation and 3d object detection," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.
  51. Q. Sun, J. Zhang, X. Dun, B. Ghanem, Y. Peng, and W. Heidrich, "End-to-end learned, optically coded super-resolution spad camera," ACM Trans. Graph., vol. 39, no. 2, Mar. 2020.
  52. Q. Sun, E. Tseng, Q. Fu, W. Heidrich, and F. Heide, "Learning rank-1 diffrac- tive optics for single-shot high dynamic range imaging," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
  53. C. A. Metzler, H. Ikoma, Y. Peng, and G. Wetzstein, "Deep optics for single-shot high-dynamic-range imaging," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 1375-1385.
  54. M. Nimier-David, D. Vicini, T. Zeltner, and W. Jakob, "Mitsuba 2: A retar- getable forward and inverse renderer," Transactions on Graphics (Proceedings of SIGGRAPH Asia), vol. 38, no. 6, Dec. 2019.
  55. E. Tseng, A. Mosleh, F. Mannan, K. St-Arnaud, A. Sharma, Y. Peng, A. Braun, D. Nowrouzezahrai, J.-F. Lalonde, and F. Heide, "Differentiable compound op- tics and processing pipeline optimization for end-to-end camera design," ACM Transactions on Graphics (TOG), vol. 40, no. 4, 2021.
  56. Q. Sun, C. Wang, F. Qiang, D. Xiong, and H. Wolfgang, "End-to-end complex lens design with differentiable ray tracing," ACM Transactions on Graphics (TOG), vol. 40, no. 4, 2021.
  57. T. O. Aydin, R. Mantiuk, K. Myszkowski, and H.-P. Seidel, "Dynamic range independent image quality assessment," ACM Trans. Graph., vol. 27, no. 3, p. 1-10, Aug. 2008.
  58. A. R. Robertson, "Recent cie work on color difference evaluation," in Review and Evaluation of Appearance: Methods and Techniques. ASTM International, 1986.
  59. H. Haim, S. Elmalem, R. Giryes, A. Bronstein, and E. Marom, "Depth es- timation from a single image using deep learned phase coded mask," IEEE Transactions on Computational Imaging, vol. 4, pp. 298-310, 2018.
  60. X. Zhang, R. Ng, and Q. Chen, "Single image reflection separation with percep- tual losses," in IEEE Conference on Computer Vision and Pattern Recognition, 2018.
  61. L. He, G. Wang, and Z. Hu, "Learning depth from single images with deep neu- ral network embedding focal length," IEEE Transactions on Image Processing, vol. 27, pp. 4676-4689, 2018.
  62. G. R. Fowles, Introduction to modern optics. Courier Corporation, 1975.
  63. C. F. Gauss, Dioptrische Untersuchungen von CF Gauss. in der Dieterichschen Buchhandlung, 1843.
  64. R. Kingslake and R. B. Johnson, Lens design fundamentals. Academic Press, 2009.
  65. L. Seidel, "Ueber die theorie der fehler," mit welchen die durch optische In- strumente gesehenen Bilder behaftet sind, und über die mathematischen Bedin- gungen ihrer Aufhebung. Abhandlungen der Naturwissenschaftlich-Technischen Commission bei der Königl. Bayerischen Akademie der Wissenschaften in München. Cotta, vol. 2, p. 4, 1857.
  66. G. G. Sliusarev, "Aberration and optical design theory," Bristol, England, Adam Hilger, Ltd., 1984, 672 p. Translation., 1984.
  67. J. M. Geary, Introduction to lens design: with practical ZEMAX examples. Willmann-Bell Richmond, 2002.
  68. D. Malacara-Hernández and Z. Malacara-Hernández, Handbook of optical de- sign. CRC Press, 2016.
  69. S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, "Burst photography for high dynamic range and low-light imaging on mobile cameras," ACM Transactions on Graphics (TOG), vol. 35, no. 6, p. 192, 2016.
  70. C. Chen, Q. Chen, J. Xu, and V. Koltun, "Learning to see in the dark," 2018.
  71. X. Yuan, L. Fang, Q. Dai, D. J. Brady, and Y. Liu, "Multiscale gigapixel video: A cross resolution image matching and warping approach," in 2017 IEEE In- ternational Conference on Computational Photography (ICCP). IEEE, 2017, pp. 1-9.
  72. Light.co, "Light l16 camera," 2018.
  73. MobilEye, "Mobileeye tricam," 2018.
  74. K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatter- jee, R. Mullis, and S. Nayar, "Picam: An ultra-thin high performance monolithic camera array," ACM Transactions on Graphics (TOG), vol. 32, no. 6, p. 166, 2013.
  75. Y. Peng, Q. Fu, H. Amata, S. Su, F. Heide, and W. Heidrich, "Computational imaging using lightweight diffractive-refractive optics," Optics express, vol. 23, no. 24, pp. 31 393-31 407, 2015.
  76. F. Heide, M. Rouf, M. B. Hullin, B. Labitzke, W. Heidrich, and A. Kolb, "High- quality computational imaging through simple lenses," ACM Transactions on Graphics (TOG), vol. 32, no. 5, p. 149, 2013.
  77. F. Heide, Q. Fu, Y. Peng, and W. Heidrich, "Encoded diffractive optics for full-spectrum computational imaging," Scientific Reports, vol. 6, 2016.
  78. G. R. Fowles, Introduction to modern optics. Courier Dover Publications, 2012.
  79. W. J. Smith, Modern lens design. McGraw-Hill, 2005.
  80. Y. Shih, B. Guenter, and N. Joshi, "Image enhancement using calibrated lens simulations," in European Conference on Computer Vision. Springer, 2012, pp. 42-56.
  81. D. G. Stork and P. R. Gill, "Lensless ultra-miniature cmos computational im- agers and sensors," Proc. Sensorcomm, pp. 186-190, 2013.
  82. --, "Optical, mathematical, and computational foundations of lensless ultra- miniature diffractive imagers and sensors," International Journal on Advances in Systems and Measurements, vol. 7, no. 3, p. 4, 2014.
  83. M. Monjur, L. Spinoulas, P. R. Gill, and D. G. Stork, "Ultra-miniature, compu- tationally efficient diffractive visual-bar-position sensor," in Proc. SensorComm. IEIFSA, 2015.
  84. N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, "Diffusercam: lensless single-exposure 3d imaging," Optica, vol. 5, no. 1, pp. 1-9, 2018.
  85. Y. Peng, Q. Fu, F. Heide, and W. Heidrich, "The diffractive achromat full spectrum computational imaging with diffractive optics," ACM Trans. Graph. (SIGGRAPH), vol. 35, no. 4, p. 31, 2016.
  86. M. Papas, T. Houit, D. Nowrouzezahrai, M. H. Gross, and W. Jarosz, "The magic lens: refractive steganography." ACM Trans. Graph., vol. 31, no. 6, pp. 186-1, 2012.
  87. Y. Schwartzburg, R. Testuz, A. Tagliasacchi, and M. Pauly, "High-contrast computational caustic design," ACM Transactions on Graphics (TOG), vol. 33, no. 4, p. 74, 2014.
  88. Y. Peng, X. Dun, Q. Sun, and W. Heidrich, "Mix-and-match holography," ACM Trans. Graph. (SIGGRAPH Aisa), vol. 36, no. 6, p. 191, 2017.
  89. Y. Wu, V. Boominathan, H. Chen, A. Sankaranarayanan, and A. Veeraragha- van, "Phasecam3d -learning phase masks for passive single view depth estima- tion," in Proc. ICCP, 2019.
  90. J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, "Hybrid optical-electronic convolutional neural networks with optimized diffractive op- tics for image classification," Scientific reports, vol. 8, no. 1, p. 12324, 2018.
  91. S. Oliver, R. Lake, S. Hegde, J. Viens, and J. Duparre, "Imaging module with symmetrical lens system and method of manufacture," May 4 2010, uS Patent 7,710,667.
  92. W. Duoshu, C. Luo, Y. Xiong, T. Chen, H. Liu, and J. Wang, "Fabrication tech- nology of the centrosymmetric continuous relief diffractive optical elements," Physics Procedia, vol. 18, pp. 95-99, 2011.
  93. P. Genevet, F. Capasso, F. Aieta, M. Khorasaninejad, and R. Devlin, "Recent advances in planar optics: from plasmonic to dielectric metasurfaces," Optica, vol. 4, no. 1, pp. 139-152, 2017.
  94. S. H. Ahn and L. J. Guo, "Large-area roll-to-roll and roll-to-plate nanoimprint lithography: a step toward high-throughput application of continuous nanoim- printing," ACS Nano, vol. 3, no. 8, pp. 2304-2310, 2009.
  95. S. Y. Chou, P. R. Krauss, and P. J. Renstrom, "Nanoimprint lithography," Journal of Vacuum Science & Technology B: Microelectronics and Nanometer Structures Processing, Measurement, and Phenomena, vol. 14, no. 6, pp. 4129- 4133, 1996.
  96. M. Zoberbier, S. Hansen, M. Hennemeyer, D. Tönnies, R. Zoberbier, M. Brehm, A. Kraft, M. Eisner, and R. Völkel, "Wafer level cameras-novel fabrication and packaging technologies," in Int. Image Sens. Workshop, 2009.
  97. F. Fang, X. Zhang, A. Weckenmann, G. Zhang, and C. Evans, "Manufacturing and measurement of freeform optics," CIRP Annals, vol. 62, no. 2, pp. 823-846, 2013.
  98. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE transactions on image processing, vol. 13, no. 4, pp. 600-612, 2004.
  99. K. Mitra, O. Cossairt, and A. Veeraraghavan, "To denoise or deblur: param- eter optimization for imaging systems," in Digital Photography X, vol. 9023. International Society for Optics and Photonics, 2014, p. 90230G.
  100. A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assess- ment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695-4708, 2012.
  101. M. Estribeau and P. Magnan, "Fast mtf measurement of cmos imagers using iso 12333 slanted-edge methodology," in Detectors and Associated Signal Pro- cessing, vol. 5251. International Society for Optics and Photonics, 2004, pp. 243-253.
  102. EMVA Standard, "1288: Standard for characterization and presentation of spec- ification data for image sensors and cameras," European Machine Vision Asso- ciation, 2005.
  103. J. R. Parker, Algorithms for image processing and computer vision. John Wiley & Sons, 2010.
  104. G. D. Boreman, Modulation transfer function in optical and electro-optical sys- tems. SPIE press Bellingham, WA, 2001, vol. 21.
  105. J. Johnson, A. Alahi, and L. Fei-Fei, "Perceptual losses for real-time style transfer and super-resolution," in European Conference on Computer Vision. Springer, 2016, pp. 694-711.
  106. M. Geese, U. Seger, and A. Paolillo, "Detection probabilities: Performance prediction for sensors of autonomous vehicles," Electronic Imaging, vol. 2018, no. 17, pp. 148-1-148-14, 2018.
  107. T. S. Cho, C. L. Zitnick, N. Joshi, S. B. Kang, R. Szeliski, and W. T. Freeman, "Image restoration by matching gradient distributions," IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 34, no. 4, pp. 683- 694, 2012.
  108. D. Krishnan and R. Fergus, "Fast image deconvolution using hyper-laplacian priors," in Advances in Neural Information Processing Systems. NIPS, 2009, pp. 1033-1041.
  109. E. Gilad and J. Von Hardenberg, "A fast algorithm for convolution integrals with space and time variant kernels," Journal of Computational Physics, vol. 216, no. 1, pp. 326-336, 2006.
  110. C. J. Schuler, H. Christopher Burger, S. Harmeling, and B. Scholkopf, "A ma- chine learning approach for non-blind image deconvolution," in Proc. Computer Vision and Pattern Recognition, 2013.
  111. L. Xu, J. S. Ren, C. Liu, and J. Jia, "Deep convolutional neural network for image deconvolution," in Advances in Neural Information Processing Systems, 2014, pp. 1790-1798.
  112. J. Zhang, J. Pan, W.-S. Lai, R. W. Lau, and M.-H. Yang, "Learning fully convolutional networks for iterative non-blind deconvolution," 2017.
  113. S. Nah, T. H. Kim, and K. M. Lee, "Deep multi-scale convolutional neural network for dynamic scene deblurring," in CVPR, vol. 1, no. 2, 2017, p. 3.
  114. O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin, and J. Matas, "Deblurgan: Blind motion deblurring using conditional adversarial networks," arXiv preprint arXiv:1711.07064, 2017.
  115. O. Cossairt and S. Nayar, "Spectral focal sweep: Extended depth of field from chromatic aberrations," in IEEE International Conference on Computational Photography (ICCP). IEEE, 2010, pp. 1-8.
  116. P. Wang, N. Mohammad, and R. Menon, "Chromatic-aberration-corrected diffractive lenses for ultra-broadband focusing," Scientific Reports, vol. 6, 2016.
  117. M. Huh, P. Agrawal, and A. A. Efros, "What makes imagenet good for transfer learning?" arXiv preprint arXiv:1608.08614, 2016.
  118. Y. Bengio, "Deep learning of representations for unsupervised and transfer learn- ing," in Proceedings of ICML Workshop on Unsupervised and Transfer Learning, 2012, pp. 17-36.
  119. E. Hecht, "Hecht optics," Addison Wesley, vol. 997, pp. 213-214, 1998.
  120. J. W. Goodman, Introduction to Fourier optics. Roberts and Company Pub- lishers, 2005.
  121. A. Kalvach and Z. Szabó, "Aberration-free flat lens design for a wide range of incident angles," Journal of the Optical Society of America B, vol. 33, no. 2, p. A66, 2016.
  122. J. Zhu, T. Yang, and G. Jin, "Design method of surface contour for a freeform lens with wide linear field-of-view," Optics express, vol. 21, no. 22, pp. 26 080- 26 092, 2013.
  123. R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, P. Hanrahan et al., "Light field photography with a hand-held plenoptic camera," 2005.
  124. Y.-R. Ng, P. M. Hanrahan, M. A. Horowitz, and M. S. Levoy, "Correction of optical aberrations," Aug. 14 2012, uS Patent 8,243,157.
  125. R. Ramanath, W. Snyder, Y. Yoo, and M. Drew, "Color image processing pipeline in digital still cameras," IEEE Signal Processing Magazine, vol. 22, no. 1, pp. 34-43, 2005.
  126. T. Brooks, B. Mildenhall, T. Xue, J. Chen, D. Sharlet, and J. T. Barron, "Un- processing images for learned raw denoising," arXiv preprint arXiv:1811.11127, 2018.
  127. F. Heide, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pajak, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian et al., "Flexisp: A flexible camera image processing framework," ACM Transactions on Graphics (TOG), vol. 33, no. 6, p. 231, 2014.
  128. L. Sun, S. Cho, J. Wang, and J. Hays, "Edge-based blur kernel estimation using patch priors," in Proc. International Conference on Computational Photography (ICCP), 2013, pp. 1-8.
  129. O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in International Conference on Medical image computing and computer-assisted intervention. Springer, 2015, pp. 234-241.
  130. A. Odena, V. Dumoulin, and C. Olah, "Deconvolution and checkerboard arti- facts," Distill, vol. 1, no. 10, p. e3, 2016.
  131. C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, Z. Wang et al., "Photo-realistic single image super- resolution using a generative adversarial network," in CVPR, vol. 2, no. 3, 2017, p. 4.
  132. I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in neu- ral information processing systems, 2014, pp. 2672-2680.
  133. M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein gan," arXiv preprint arXiv:1701.07875, 2017.
  134. L.-W. Chang, Y. Chen, W. Bao, A. Agarwal, E. Akchurin, K. Deng, and E. Bar- soum, "Accelerating recurrent neural networks through compiler techniques and quantization," 2018.
  135. P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," CVPR, 2017.
  136. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, "Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging," Nature communications, vol. 3, p. 745, 2012.
  137. A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, "Femto-photography: capturing and visualizing the propagation of light," ACM Trans. Graph. (ToG), vol. 32, no. 4, p. 44, 2013.
  138. D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. Wong, and J. H. Shapiro, "Photon-efficient imaging with a single-photon camera," Nature Communications, vol. 7, 2016.
  139. G. Gariepy, N. Krstajić, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, "Single-photon sensitive light- in-fight imaging," Nature Communications, vol. 6, 2015.
  140. M. O'Toole, F. Heide, D. B. Lindell, K. Zang, S. Diamond, and G. Wetzstein, "Reconstructing transient images from single-photon sensors," in Proc. Com- puter Vision and Pattern Recognization (CVPR). IEEE, 2017, pp. 2289-2297.
  141. D. E. Schwartz, E. Charbon, and K. L. Shepard, "A single-photon avalanche diode array for fluorescence lifetime imaging microscopy," IEEE journal of solid- state circuits, vol. 43, no. 11, pp. 2546-2557, 2008.
  142. D.-U. Li, J. Arlt, J. Richardson, R. Walker, A. Buts, D. Stoppa, E. Charbon, and R. Henderson, "Real-time fluorescence lifetime imaging system with a 32× 32 0.13 µm cmos low dark-count single-photon avalanche diode array," Optics Express, vol. 18, no. 10, pp. 10 257-10 269, 2010.
  143. M. V. Nemallapudi, S. Gundacker, P. Lecoq, E. Auffray, A. Ferri, A. Gola, and C. Piemonte, "Sub-100 ps coincidence time resolution for positron emission tomography with lso: Ce codoped with ca," Physics in Medicine & Biology, vol. 60, no. 12, p. 4635, 2015.
  144. A. C. Ulku, C. Bruschini, I. M. Antolović, Y. Kuo, R. Ankri, S. Weiss, X. Michalet, and E. Charbon, "A 512× 512 spad image sensor with integrated gating for widefield flim," IEEE Journal of Selected Topics in Quantum Elec- tronics, vol. 25, no. 1, pp. 1-12, 2018.
  145. J. M. Pavia, M. Wolf, and E. Charbon, "Measurement and modeling of mi- crolenses fabricated on single-photon avalanche diode arrays for fill factor re- covery," Optics express, vol. 22, no. 4, pp. 4202-4213, 2014.
  146. G. Intermite, A. McCarthy, R. E. Warburton, X. Ren, F. Villa, R. Lussana, A. J. Waddie, M. R. Taghizadeh, A. Tosi, F. Zappa et al., "Fill-factor improve- ment of si cmos single-photon avalanche diode detector arrays by integration of diffractive microlens arrays," Optics Express, vol. 23, no. 26, pp. 33 777-33 791, 2015.
  147. H. Chen, M. S. Asif, A. C. Sankaranarayanan, and A. Veeraraghavan, "Fpa- cs: Focal plane array-based compressive imaging in short-wave infrared," in Proc. Computer Vision and Pattern Recognization (CVPR). IEEE, 2015, pp. 2358-2366.
  148. L. Xiao, F. Heide, M. O'Toole, A. Kolb, M. B. Hullin, K. Kutulakos, and W. Heidrich, "Defocus deblurring and superresolution for time-of-flight depth cameras," in Proceedings of the IEEE Conference on Computer Vision and Pat- tern Recognition. IEEE, 2015, pp. 2376-2384.
  149. S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. Moerner, "Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread func- tion," Proceedings of the National Academy of Sciences, vol. 106, no. 9, pp. 2995-2999, 2009.
  150. Y. Shechtman, S. J. Sahl, A. S. Backer, and W. Moerner, "Optimal point spread function design for 3d imaging," Physical review letters, vol. 113, no. 13, p. 133902, 2014.
  151. A. Levin, R. Fergus, F. Durand, and W. T. Freeman, "Image and depth from a conventional camera with a coded aperture," ACM Trans. Graph. (TOG), vol. 26, no. 3, p. 70, 2007.
  152. B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, "Dehazenet: An end-to-end system for single image haze removal," IEEE Transactions on Image Processing, vol. 25, no. 11, pp. 5187-5198, 2016.
  153. D. Gong, J. Yang, L. Liu, Y. Zhang, I. D. Reid, C. Shen, A. Van Den Hengel, and Q. Shi, "From motion blur to motion flow: A deep learning solution for removing heterogeneous motion blur." in Proc. Computer Vision and Pattern Recognization (CVPR), vol. 1, no. 2. IEEE, 2017, p. 5.
  154. S. Su, F. Heide, G. Wetzstein, and W. Heidrich, "Deep end-to-end time-of- flight imaging," in Proc. Computer Vision and Pattern Recognization (CVPR). IEEE, 2018, pp. 6383-6392.
  155. J. Yang, J. Wright, T. Huang, and Y. Ma, "Image super-resolution as sparse representation of raw image patches," in Proc. Computer Vision and Pattern Recognization (CVPR). IEEE, 2008, pp. 1-8.
  156. J. Yang, J. Wright, T. S. Huang, and Y. Ma, "Image super-resolution via sparse representation," IEEE transactions on image processing, vol. 19, no. 11, pp. 2861-2873, 2010.
  157. C.-Y. Yang and M.-H. Yang, "Fast direct super-resolution by simple functions." IEEE, 2013, pp. 561-568.
  158. R. Timofte, V. De Smet, and L. Van Gool, "A+: Adjusted anchored neighbor- hood regression for fast super-resolution." Springer, 2014, pp. 111-126.
  159. S. Schulter, C. Leistner, and H. Bischof, "Fast and accurate image upscaling with super-resolution forests," in Proc. Computer Vision and Pattern Recognization (CVPR). IEEE, 2015, pp. 3791-3799.
  160. W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueck- ert, and Z. Wang, "Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2016, pp. 1874-1883.
  161. C. Dong, C. C. Loy, and X. Tang, "Accelerating the super-resolution convolu- tional neural network," in European Conference on Computer Vision. Springer, 2016, pp. 391-407.
  162. C. Dong, C. C. Loy, K. He, and X. Tang, "Image super-resolution using deep convolutional networks," IEEE transactions on pattern analysis and machine intelligence, vol. 38, no. 2, pp. 295-307, 2016.
  163. Z. Wang, D. Liu, J. Yang, W. Han, and T. Huang, "Deep networks for image super-resolution with sparse prior." IEEE, 2015, pp. 370-378.
  164. J. Kim, J. Kwon Lee, and K. Mu Lee, "Accurate image super-resolution us- ing very deep convolutional networks," in Proc. Computer Vision and Pattern Recognization (CVPR). IEEE, 2016, pp. 1646-1654.
  165. --, "Deeply-recursive convolutional network for image super-resolution," in Proc. Computer Vision and Pattern Recognization (CVPR). IEEE, 2016, pp. 1637-1645.
  166. W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang, "Deep laplacian pyramid networks for fast and accurate super-resolution," in Proc. Computer Vision and Pattern Recognization (CVPR). IEEE, 2017, pp. 624-632.
  167. B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, "Enhanced deep residual net- works for single image super-resolution," in Proc. Computer Vision and Pattern Recognization (CVPR)Workshops, vol. 1, no. 2. IEEE, 2017, p. 3.
  168. K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recog- nition," in Proc. Computer Vision and Pattern Recognization (CVPR). IEEE, 2016, pp. 770-778.
  169. M. Haris, G. Shakhnarovich, and N. Ukita, "Deep back-projection networks for super-resolution," arXiv, 2018.
  170. N. George and W. Chi, "Extended depth of field using a logarithmic asphere," Journal of Optics A: Pure and Applied Optics, vol. 5, no. 5, p. S157, 2003.
  171. L.-H. Yeh and L. Waller, "3d super-resolution optical fluctuation imaging (3d- sofi) with speckle illumination," in Computational Optical Sensing and Imaging. Optical Society of America, 2016, pp. CW5D-2.
  172. C. Zhou, S. Lin, and S. K. Nayar, "Coded aperture pairs for depth from defocus and defocus deblurring," International journal of computer vision, vol. 93, no. 1, pp. 53-72, 2011.
  173. R. F. Marcia, Z. T. Harmany, and R. M. Willett, "Compressive coded aperture imaging," in Computational Imaging VII, vol. 7246. International Society for Optics and Photonics, 2009, p. 72460G.
  174. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, "Coded aperture compressive temporal imaging," Optics express, vol. 21, no. 9, pp. 10 526-10 545, 2013.
  175. G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, "Compressive coded aperture spectral imaging: An introduction," IEEE Signal Processing Magazine, vol. 31, no. 1, pp. 105-115, 2014.
  176. G. Kim, J. A. Domínguez-Caballero, and R. Menon, "Design and analysis of multi-wavelength diffractive optics," Optics Express, vol. 20, no. 3, pp. 2814- 2823, 2012.
  177. W. Qu, H. Gu, H. Zhang, and Q. Tan, "Image magnification in lensless holo- graphic projection using double-sampling fresnel diffraction," Applied Optics, vol. 54, no. 34, pp. 10 018-10 021, 2015.
  178. M. Petrov, S. Bibikov, Y. Yuzifovich, R. Skidanov, and A. Nikonorov, "Color correction with 3d lookup tables in diffractive optical imaging systems," Proce- dia Engineering, vol. 201, pp. 73-82, 2017.
  179. Y. Peng, X. Dun, Q. Sun, F. Heide, and W. Heidrich, "Focal sweep imaging with multi-focal diffractive optics," in International Conference on Computational Photography (ICCP). IEEE, 2018, pp. 1-8.
  180. C. Zhao, A. Carass, B. E. Dewey, J. Woo, J. Oh, P. A. Calabresi, D. S. Reich, P. Sati, D. L. Pham, and J. L. Prince, "A deep learning based anti-aliasing self super-resolution algorithm for mri," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2018, pp. 100-108.
  181. S. Datta, N. Chaki, and K. Saeed, "Minimizing aliasing effects using faster super resolution technique on text images," in Transactions on Computational Science XXXI. Springer, 2018, pp. 136-153.
  182. D. O'Connor, Time-correlated single photon counting. Academic Press, 2012.
  183. D. D.-U. Li, S. Ameer-Beg, J. Arlt, D. Tyndall, R. Walker, D. R. Matthews, V. Visitkul, J. Richardson, and R. K. Henderson, "Time-domain fluorescence lifetime imaging techniques suitable for solid-state imaging sensor arrays," Sen- sors, vol. 12, no. 5, pp. 5650-5669, 2012.
  184. A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. Wong, J. H. Shapiro, and V. K. Goyal, "First-photon imaging," Science, vol. 343, no. 6166, pp. 58-61, 2014.
185. A. K. Pediredla, A. C. Sankaranarayanan, M. Buttafava, A. Tosi, and A. Veeraraghavan, "Signal processing based pile-up compensation for gated single-photon avalanche diodes," arXiv preprint arXiv:1806.07437, 2018.
186. F. Heide, S. Diamond, D. B. Lindell, and G. Wetzstein, "Sub-picosecond photon-efficient 3d imaging using single-photon sensors," arXiv, 2018.
  187. F. Heide, M. O'Toole, K. Zang, D. B. Lindell, S. Diamond, and G. Wetzstein, "Non-line-of-sight imaging with partial occluders and surface normals," ACM Trans. Graph., 2019.
  188. D. B. Lindell, G. Wetzstein, and M. O'Toole, "Wave-based non-line-of-sight imaging using fast f-k migration," ACM Trans. Graph. (SIGGRAPH), vol. 38, no. 4, p. 116, 2019.
  189. D. B. Lindell, M. O'Toole, and G. Wetzstein, "Single-photon 3d imaging with deep sensor fusion," ACM Trans. Graph. (SIGGRAPH), no. 4, 2018.
  190. M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, "Deepbinarymask: Learning a binary mask for video compressive sensing," arXiv, 2016.
191. A. Chakrabarti, "Learning sensor multiplexing design through back-propagation," in Advances in Neural Information Processing Systems, 2016, pp. 3081-3089.
192. Y. Wu, V. Boominathan, H. Chen, A. Sankaranarayanan, and A. Veeraraghavan, "Phasecam3d – learning phase masks for passive single view depth estimation," in Computational Photography (ICCP), 2019 IEEE International Conference on. IEEE, 2019, pp. 1-8.
193. J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, "Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification," Scientific Reports, 2018.
  194. M. Parker, Digital Signal Processing 101, Second Edition: Everything You Need to Know to Get Started, 2nd ed. Newton, MA, USA: Newnes, 2017.
195. R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of phase from image and diffraction plane pictures," Optik, vol. 35, p. 237, 1972.
196. B. Morgan, C. M. Waits, J. Krizmanic, and R. Ghodssi, "Development of a deep silicon phase fresnel lens using gray-scale lithography and deep reactive ion etching," Journal of Microelectromechanical Systems, vol. 13, no. 1, pp. 113-120, 2004.
197. F. Heide, L. Xiao, A. Kolb, M. B. Hullin, and W. Heidrich, "Imaging in scattering media using correlation image sensors and sparse convolutional coding," Optics Express, vol. 22, no. 21, pp. 26338-26350, 2014.
198. M. Bevilacqua, A. Roumy, C. Guillemot, and M. L. Alberi-Morel, "Low-complexity single-image super-resolution based on nonnegative neighbor embedding," 2012.
199. R. Zeyde, M. Elad, and M. Protter, "On single image scale-up using sparse-representations," in International Conference on Curves and Surfaces. Springer, 2010, pp. 711-730.
200. P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik, "Contour detection and hierarchical image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 5, pp. 898-916, 2011.
201. P. D. Burns and D. Williams, "Refined slanted-edge measurement for practical camera and scanner testing," in IS&T PICS Conference. Society for Imaging Science and Technology, 2002, pp. 191-195.
202. D. Qin, Y. Xia, and G. M. Whitesides, "Soft lithography for micro- and nanoscale patterning," Nature Protocols, vol. 5, no. 3, pp. 491-502, 2010.
203. S. Donati, G. Martini, and M. Norgia, "Microconcentrators to recover fill-factor in image photodetectors with pixel on-board processing circuits," Optics Express, vol. 15, no. 26, pp. 18066-18075, 2007.
204. B. Mildenhall, J. T. Barron, J. Chen, D. Sharlet, R. Ng, and R. Carroll, "Burst denoising with kernel prediction networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2502-2510.
205. A. Darmont and Society of Photo-optical Instrumentation Engineers, "High dynamic range imaging: sensors and architectures." SPIE, Washington, 2012.
  206. U. Seger, "Hdr imaging in automotive applications," in High Dynamic Range Video. Elsevier, 2016, pp. 477-498.
  207. T. Mertens, J. Kautz, and F. Van Reeth, "Exposure fusion: A simple and practical alternative to high dynamic range photography," in Computer graphics forum, vol. 28, no. 1. Wiley Online Library, 2009, pp. 161-171.
  208. S. K. Nayar and T. Mitsunaga, "High dynamic range imaging: Spatially varying pixel exposures," in Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No. PR00662), vol. 1. IEEE, 2000, pp. 472-479.
209. T. Willassen, J. Solhusvik, R. Johansson, S. Yaghmai, H. Rhodes, S. Manabe, D. Mao, Z. Lin, D. Yang, O. Cellek et al., "A 1280×1080 4.2 µm split-diode pixel hdr sensor in 110 nm bsi cmos process," in Proceedings of the International Image Sensor Workshop, Vaals, The Netherlands, 2015, pp. 8-11.
210. A. Morimitsu, I. Hirota, S. Yokogawa, I. Ohdaira, M. Matsumura, H. Takahashi, T. Yamazaki, H. Oyaizu, Y. Incesu, M. Atif et al., "A 4m pixel full-pdaf cmos image sensor with 1.58 µm 2×1 on-chip micro-split-lens technology," in ITE Technical Report 39.35. The Institute of Image Information and Television Engineers, 2015, pp. 5-8.
  211. M. D. Tocci, C. Kiser, N. Tocci, and P. Sen, "A versatile hdr video production system," in ACM Transactions on Graphics (TOG), vol. 30, no. 4. ACM, 2011, p. 41.
  212. G. Eilertsen, J. Kronander, G. Denes, R. K. Mantiuk, and J. Unger, "Hdr image reconstruction from a single exposure using deep cnns," ACM Transactions on Graphics (TOG), vol. 36, no. 6, p. 178, 2017.
  213. K. Fotiadou, G. Tsagkatakis, and P. Tsakalides, "Snapshot high dynamic range imaging via sparse representations and feature learning," IEEE Transactions on Multimedia, 2019.
  214. M. Rouf, R. Mantiuk, W. Heidrich, M. Trentacoste, and C. Lau, "Glare encoding of high dynamic range images," CVPR 2011, pp. 289-296, 2011.
  215. C. A. Metzler, H. Ikoma, Y. Peng, and G. Wetzstein, "Deep optics for single-shot high-dynamic-range imaging," arXiv preprint arXiv:1908.00620, 2019.
  216. P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," in SIGGRAPH '08, 1997.
217. E. Reinhard, G. Ward, S. Pattanaik, P. E. Debevec, W. Heidrich, and K. Myszkowski, "High dynamic range imaging: Acquisition, display, and image-based lighting," 2010.
  218. M. D. Grossberg and S. K. Nayar, "High dynamic range from multiple images: Which exposures to combine?" 2003.
219. T. Mertens, J. Kautz, and F. V. Reeth, "Exposure fusion: A simple and practical alternative to high dynamic range photography," Comput. Graph. Forum, vol. 28, pp. 161-171, 2009.
  220. S. W. Hasinoff, F. Durand, and W. T. Freeman, "Noise-optimal capture for high dynamic range photography," 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 553-560, 2010.
  221. S. B. Kang, M. Uyttendaele, S. A. J. Winder, and R. Szeliski, "High dynamic range video," ACM Trans. Graph., vol. 22, pp. 319-325, 2003.
222. E. A. Khan, A. O. Akyüz, and E. Reinhard, "Ghost removal in high dynamic range images," 2006 International Conference on Image Processing, pp. 2005-2008, 2006.
  223. C. Liu, "Exploring new representations and applications for motion analysis," 2009.
224. O. Gallo, N. Gelfandz, W.-C. Chen, M. Tico, and K. Pulli, "Artifact-free high dynamic range imaging," 2009 IEEE International Conference on Computational Photography (ICCP), pp. 1-7, 2009.
225. M. Granados, K. I. Kim, J. Tompkin, and C. Theobalt, "Automatic noise modeling for ghost-free hdr reconstruction," ACM Trans. Graph., vol. 32, pp. 201:1-201:10, 2013.
226. J. Hu, O. Gallo, K. Pulli, and X. Sun, "Hdr deghosting: How to deal with saturation?" 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1163-1170, 2013.
  227. N. K. Kalantari, E. Shechtman, C. Barnes, S. Darabi, D. B. Goldman, and P. Sen, "Patch-based high dynamic range video," ACM Trans. Graph., vol. 32, pp. 202:1-202:8, 2013.
228. P. Sen, N. K. Kalantari, M. Yaesoubi, S. Darabi, D. B. Goldman, and E. Shechtman, "Robust patch-based hdr reconstruction of dynamic scenes," ACM Trans. Graph., vol. 31, pp. 203:1-203:11, 2012.
  229. N. K. Kalantari and R. Ramamoorthi, "Deep high dynamic range imaging of dynamic scenes," ACM Trans. Graph., vol. 36, pp. 144:1-144:12, 2017.
  230. --, "Deep hdr video from sequences with alternating exposures," Comput. Graph. Forum, vol. 38, pp. 193-205, 2019.
  231. F. Banterle, P. Ledda, K. Debattista, and A. Chalmers, "Inverse tone mapping," in GRAPHITE, 2006.
  232. P. Didyk, R. Mantiuk, M. Hein, and H.-P. Seidel, "Enhancement of bright video features for hdr displays," Comput. Graph. Forum, vol. 27, pp. 1265-1274, 2008.
  233. L. Meylan, S. J. Daly, and S. Süsstrunk, "The reproduction of specular highlights on high dynamic range displays," in Color Imaging Conference, 2006.
  234. A. G. Rempel, M. Trentacoste, H. Seetzen, H. D. Young, W. Heidrich, L. A. Whitehead, and G. Ward, "Ldr2hdr: on-the-fly reverse tone mapping of legacy video and photographs," in SIGGRAPH 2007, 2007.
  235. A. O. Akyüz, R. W. Fleming, B. E. Riecke, E. Reinhard, and H. H. Bülthoff, "Do hdr displays support ldr content?: a psychophysical evaluation," in SIGGRAPH 2007, 2007.
236. B. Masiá, S. Agustin, R. W. Fleming, O. Sorkine-Hornung, and D. Gutierrez, "Evaluation of reverse tone mapping through varying exposure conditions," ACM Trans. Graph., vol. 28, p. 160, 2009.
237. K. Moriwaki, R. Yoshihashi, R. Kawakami, S. You, and T. Naemura, "Hybrid loss for learning single-image-based HDR reconstruction," arXiv preprint arXiv:1812.07134, 2018.
  238. Y. Endo, Y. Kanamori, and J. Mitani, "Deep reverse tone mapping," ACM Transactions on Graphics (Proc. of SIGGRAPH Asia), vol. 36, no. 6, p. 177, 2017.
239. J. Zhang and J. Lalonde, "Learning high dynamic range from outdoor panoramas," CoRR, vol. abs/1703.10200, 2017. [Online]. Available: http://arxiv.org/abs/1703.10200
240. S. Lee, G. H. An, and S.-J. Kang, "Deep chain hdri: Reconstructing a high dynamic range image from a single low dynamic range image," IEEE Access, vol. 6, pp. 49913-49924, 2018.
241. S. Lee, G. Hwan An, and S.-J. Kang, "Deep recursive hdri: Inverse tone mapping using generative adversarial networks," in The European Conference on Computer Vision (ECCV), September 2018.
242. C. Wang, Y. Zhao, and R. Wang, "Deep inverse tone mapping for compressed images," IEEE Access, vol. 7, pp. 74558-74569, 2019.
  243. D. Marnerides, T. Bashford-Rogers, J. Hatchett, and K. Debattista, "Expandnet: A deep convolutional neural network for high dynamic range expansion from low dynamic range content," CoRR, vol. abs/1803.02266, 2018. [Online]. Available: http://arxiv.org/abs/1803.02266
244. S. Ning, H. Xu, L. Song, R. Xie, and W. Zhang, "Learning an inverse tone mapping network with a generative adversarial regularizer," 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1383-1387, 2018.
  245. H. Jang, K. Bang, J. Jang, and D. Hwang, "Inverse tone mapping operator using sequential deep neural networks based on the human visual system," IEEE Access, vol. 6, pp. 52 058-52 072, 2018.
246. S. Hajisharif, J. Kronander, and J. Unger, "Adaptive dualiso hdr reconstruction," EURASIP Journal on Image and Video Processing, vol. 2015, pp. 1-13, 2015.
  247. A. Serrano, F. Heide, D. Gutierrez, G. Wetzstein, and B. Masiá, "Convolutional sparse coding for high dynamic range imaging," Comput. Graph. Forum, vol. 35, pp. 153-163, 2016.
248. W. Guicquero, A. Dupret, and P. Vandergheynst, "An algorithm architecture co-design for cmos compressive high dynamic range imaging," IEEE Transactions on Computational Imaging, vol. 2, pp. 190-203, 2016.
249. H. Zhao, B. Shi, C. Fernandez-Cull, S.-K. Yeung, and R. Raskar, "Unbounded high dynamic range photography using a modulo camera," 2015 IEEE International Conference on Computational Photography (ICCP), pp. 1-10, 2015.
  250. K. Hirakawa and P. M. Simon, "Single-shot high dynamic range imaging with conventional camera hardware," 2011 International Conference on Computer Vision, pp. 1339-1346, 2011.
251. A. Chakrabarti, "Learning sensor multiplexing design through back-propagation," ArXiv, vol. abs/1605.07078, 2016.
252. R. Horstmeyer, R. Y. Chen, B. Kappes, and B. Judkewitz, "Convolutional neural networks that teach microscopes how to image," ArXiv, vol. abs/1709.07223, 2017.
253. M. Kellman, E. Bostan, M. Chen, and L. Waller, "Data-driven design for fourier ptychographic microscopy," in 2019 IEEE International Conference on Computational Photography (ICCP). IEEE, 2019, pp. 1-8.
254. E. Nehme, D. Freedman, R. Gordon, B. Ferdman, T. Michaeli, and Y. Shechtman, "Dense three dimensional localization microscopy by deep learning," 2019.
255. Y. Shechtman, L. E. Weiss, A. S. Backer, M. Y. Lee, and W. E. Moerner, "Multicolour localization microscopy by point-spread-function engineering." Nature Photonics, vol. 10, pp. 590-594, 2016.
256. Y. Wu, V. Boominathan, H. Chen, A. Sankaranarayanan, and A. Veeraraghavan, "Phasecam3d – learning phase masks for passive single view depth estimation," 2019 IEEE International Conference on Computational Photography (ICCP), pp. 1-12, 2019.
257. J. Marco, Q. Hernandez, A. Muñoz, Y. Dong, A. Jarabo, M. H. Kim, X. Tong, and D. Gutierrez, "Deeptof: off-the-shelf real-time correction of multipath interference in time-of-flight imaging," ACM Trans. Graph., vol. 36, pp. 219:1-219:12, 2017.
258. R. Mantiuk, K. J. Kim, A. G. Rempel, and W. Heidrich, "Hdr-vdp-2: A calibrated visual metric for visibility and quality predictions in all luminance conditions," ACM Transactions on Graphics (TOG), vol. 30, no. 4, pp. 1-14, 2011.
  259. R. E. Fischer, B. Tadic-Galeb, P. R. Yoder, and R. Galeb, Optical system design. McGraw Hill New York, 2000.
  260. M. J. Allen, "Automobile windshields, surface deterioration," SAE Technical Paper, Tech. Rep., 1970.
261. A. Flores, M. R. Wang, and J. J. Yang, "Achromatic hybrid refractive-diffractive lens with extended depth of focus," Appl. Opt., vol. 43, no. 30, pp. 5618-5630, Oct 2004. [Online]. Available: http://ao.osa.org/abstract.cfm?URI=ao-43-30-5618
262. Z. Liu, A. Flores, M. R. Wang, and J. J. Yang, "Diffractive infrared lens with extended depth of focus," Optical Engineering, vol. 46, no. 1, pp. 1-9, 2007.
  263. A. Levin, R. Fergus, F. Durand, and W. T. Freeman, "Image and depth from a conventional camera with a coded aperture," ACM Trans. Graph., vol. 26, no. 3, p. 70-es, Jul. 2007.
264. A. Levin, "Analyzing depth from coded aperture sets," in Computer Vision - ECCV 2010, K. Daniilidis, P. Maragos, and N. Paragios, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010, pp. 214-227.
  265. X. Dun, H. Ikoma, G. Wetzstein, Z. Wang, X. Cheng, and Y. Peng, "Learned rotationally symmetric diffractive achromat for full-spectrum computational imaging," Optica, vol. 7, no. 8, pp. 913-922, Aug 2020.
266. S. Colburn, A. Zhan, and A. Majumdar, "Metasurface optics for full-color computational imaging," Science Advances, vol. 4, no. 2, 2018.
267. S. S. Khan, A. V. R., V. Boominathan, J. Tan, A. Veeraraghavan, and K. Mitra, "Towards photorealistic reconstruction of highly multiplexed lensless images," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.
268. Y. Wu, V. Boominathan, H. Chen, A. Sankaranarayanan, and A. Veeraraghavan, "Phasecam3d – learning phase masks for passive single view depth estimation," in 2019 IEEE International Conference on Computational Photography (ICCP). Los Alamitos, CA, USA: IEEE Computer Society, May 2019, pp. 1-12.
  269. A. Kotwal, A. Levin, and I. Gkioulekas, "Interferometric transmission probing with coded mutual intensity," vol. 39, no. 4, Jul. 2020.
  270. Y. Wu, F. Li, F. Willomitzer, A. Veeraraghavan, and O. Cossairt, "Wished: Wavefront imaging sensor with high resolution and depth ranging," in 2020 IEEE International Conference on Computational Photography (ICCP), 2020, pp. 1-10.
271. V. Boominathan, J. K. Adams, J. T. Robinson, and A. Veeraraghavan, "Phlatcam: Designed phase-mask based thin lensless camera," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 7, pp. 1618-1629, 2020.
  272. O. Kupyn, T. Martyniuk, J. Wu, and Z. Wang, "Deblurgan-v2: Deblurring (orders-of-magnitude) faster and better," in The IEEE International Conference on Computer Vision (ICCV), Oct 2019.
273. G. Côté, J.-F. Lalonde, and S. Thibault, "Extrapolating from lens design databases using deep learning," Opt. Express, vol. 27, no. 20, pp. 28279-28292, Sep 2019.
  274. --, "Deep learning-enabled framework for automatic lens design starting point generation," Opt. Express, vol. 29, no. 3, pp. 3841-3854, Feb 2021.
275. D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, W. Heidrich, and M. H. Kim, "Compact snapshot hyperspectral imaging with diffracted rotation," ACM Transactions on Graphics (Proc. SIGGRAPH 2019), vol. 38, no. 4, pp. 117:1-13, 2019.
  276. S.-H. Baek, H. Ikoma, D. S. Jeon, Y. Li, W. Heidrich, G. Wetzstein, and M. H. Kim, "End-to-end hyperspectral-depth imaging with learned diffractive optics," arXiv preprint arXiv:2009.00463, 2020.
277. C. Zhang, B. Miller, K. Yan, I. Gkioulekas, and S. Zhao, "Path-space differentiable rendering," ACM Trans. Graph., vol. 39, no. 4, pp. 143:1-143:19, 2020.
  278. C. Zhang, L. Wu, C. Zheng, I. Gkioulekas, R. Ramamoorthi, and S. Zhao, "A differential theory of radiative transfer," ACM Trans. Graph., vol. 38, no. 6, pp. 227:1-227:16, 2019.
  279. S. Bangaru, T.-M. Li, and F. Durand, "Unbiased warped-area sampling for differentiable rendering," ACM Trans. Graph., vol. 39, no. 6, pp. 245:1-245:18, 2020.
  280. C. Kolb, D. Mitchell, and P. Hanrahan, "A realistic camera model for computer graphics," in Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, 1995, pp. 317-324.
  281. F. A. Jenkins and H. E. White, Fundamentals of optics. Tata McGraw-Hill Education, 2018.
282. Q. Guo, I. Frosio, O. Gallo, T. Zickler, and J. Kautz, "Tackling 3d tof artifacts through learning and the flat dataset," in The European Conference on Computer Vision (ECCV), September 2018.
283. E. Agustsson and R. Timofte, "Ntire 2017 challenge on single image super-resolution: Dataset and study," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, July 2017.
284. S. W. Hasinoff and K. N. Kutulakos, "Light-efficient photography," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 11, pp. 2203-2214, 2011.
285. O. Cossairt, C. Zhou, and S. Nayar, "Diffusion Coding Photography for Extended Depth of Field," ACM Transactions on Graphics (TOG), Aug 2010.