Abstract
Imaging systems have long been designed in separate steps: experience-driven optical design followed by sophisticated image processing. While this general-purpose approach has been successful in the past, it leaves open the question of how, for a specific task, to find the best compromise between optics and post-processing while minimizing cost. Driven by this question, a series of works is proposed to bring imaging system design into an end-to-end fashion step by step: from joint optics design, point spread function (PSF) optimization, and phase map optimization to a general end-to-end complex lens camera. To demonstrate joint optics design with image recovery, we apply it to flat-lens imaging with a large field of view (LFOV). For a super-resolution single-photon avalanche diode (SPAD) camera, the PSF encoded by a diffractive optical element (DOE) is optimized together with the post-processing, which brings the optics design into the end-to-end stage. Expanding to color imaging, opt...
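The end-to-end principle described above — optimizing an optical encoding parameter jointly with the reconstruction, against the final recovery loss — can be sketched in miniature. The toy below is not the actual pipeline of these works: the Gaussian-PSF forward model, the Wiener-style reconstruction, and the finite-difference gradients are all illustrative assumptions. It jointly descends a PSF width `sigma` and a regularization weight `lam` to minimize reconstruction error on a simulated 1D scene.

```python
import numpy as np

# Toy end-to-end camera design: jointly optimize an optical parameter
# (Gaussian PSF width `sigma`) and a reconstruction parameter (Wiener
# regularization `lam`) against the final image-recovery loss.
rng = np.random.default_rng(0)
x = rng.standard_normal(256)                  # ground-truth 1D "scene"
noise = 0.05 * rng.standard_normal(256)       # fixed sensor-noise realization
freqs = np.fft.fftfreq(256)

def otf(sigma):
    # Optical transfer function of a Gaussian PSF of width `sigma`
    return np.exp(-2.0 * (np.pi * freqs * sigma) ** 2)

def measure(sigma):
    # Optical encoding: blur the scene with the PSF, add sensor noise
    return np.real(np.fft.ifft(np.fft.fft(x) * otf(sigma))) + noise

def reconstruct(y, sigma, lam):
    # Wiener-style deconvolution: the differentiable "post-processing"
    H = otf(sigma)
    return np.real(np.fft.ifft(np.fft.fft(y) * H / (H ** 2 + lam)))

def loss(params):
    sigma, lam = params
    return np.mean((reconstruct(measure(sigma), sigma, lam) - x) ** 2)

params = np.array([2.0, 1e-3])                # initial sigma, lam
initial_loss = loss(params)
lr = np.array([0.2, 1e-4])                    # per-parameter step sizes
eps = 1e-5
for _ in range(100):
    # central finite-difference gradient (autodiff in a real pipeline)
    grad = np.array([(loss(params + eps * e) - loss(params - eps * e)) / (2 * eps)
                     for e in np.eye(2)])
    params = np.maximum(params - lr * grad, 1e-6)   # keep sigma, lam positive
final_loss = loss(params)
```

In the actual works, the finite-difference step is replaced by automatic differentiation through a wave-optics PSF simulation and a learned reconstruction network, but the loop structure — measure, reconstruct, backpropagate the recovery loss into the optics — is the same.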
- 2.1 Aberrations and Traditional Lens Design
- 2.2 Computational Optics
- 2.3 Manufacturing Planar Optics
- 2.4 Image Quality
- 2.5 Learned Image Reconstruction
- 3 Designing Optics for Learned Recovery
- 4 Lens Design
- 4.1 Ideal Phase Profile
- 4.2 Aperture Partitioning
- 4.3 Fresnel Depth Profile Optimization
- 4.4 Aberration Analysis
- 5 Learned Image Reconstruction
- 5.1 Image Formation Model
- 5.2 Generative Image Recovery
- 6 Datasets
- 7 Prototype
- 8 Analysis
- 8.1 Field of View Analysis
- 8.2 Generalization Analysis
- 8.3 Fine-tuning for Alternative Lens Designs
- 8.4 Hallucination Analysis
- 9 Experimental Assessment
- 9.1 Imaging over Large Depth Ranges and in Low Light
- 10 Discussion and Conclusion
- End-to-End Encoding Through Optimizing PSF: Super-resolution SPAD Camera
- 1 Introduction
- 2 Related work
- 2.1 Image super-resolution (SR)
- 2.2 PSF Engineering for Computational Imaging
- 2.3 Imaging with SPAD Sensors
- 2.4 End-to-End Computational Cameras
- 3 End-to-end Diffractive Optics Design and Image Reconstruction
- 3.1 Image Formation
- 3.2 Image Reconstruction
- E. R. Dowski and W. T. Cathey, "Extended depth of field through wave-front coding," Applied optics, vol. 34, no. 11, pp. 1859-1866, 1995.
- S. C. Tucker, W. T. Cathey, and E. R. Dowski, "Extended depth of field and aberration control for inexpensive digital microscope systems," Optics Express, vol. 4, no. 11, pp. 467-474, 1999.
- W. T. Cathey and E. R. Dowski, "New paradigm for imaging systems," Applied Optics, vol. 41, no. 29, pp. 6080-6092, 2002.
- A. Levin, S. W. Hasinoff, P. Green, F. Durand, and W. T. Freeman, "4d fre- quency analysis of computational cameras for depth of field extension," in ACM Trans. Graph. (TOG), vol. 28, no. 3. ACM, 2009, p. 97.
- P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," in Proceedings of the 24th Annual Conference on Com- puter Graphics and Interactive Techniques, ser. SIGGRAPH '97. USA: ACM Press/Addison-Wesley Publishing Co., 1997, p. 369-378.
- S. Mann and R. W. Picard, "Being 'undigital' with digital cameras: extending dynamic range by combining differently exposed pictures," 1994.
- E. Reinhard and K. Devlin, "Dynamic range reduction inspired by photorecep- tor physiology," IEEE Transactions on Visualization and Computer Graphics, vol. 11, no. 1, pp. 13-24, 2005.
- M. Rouf, R. Mantiuk, W. Heidrich, M. Trentacoste, and C. Lau, "Glare encoding of high dynamic range images," ser. CVPR '11. USA: IEEE Computer Society, 2011.
- S. Nayar, V. Branzoi, and T. Boult, "Programmable Imaging using a Digi- tal Micromirror Array," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. I, Jun 2004, pp. 436-443.
- D. J. Brady, M. E. Gehm, R. A. Stack, D. L. Marks, D. S. Kittle, D. R. Golish, E. Vera, and S. D. Feller, "Multiscale gigapixel photography," Nature, vol. 486, no. 7403, p. 386, 2012.
- O. S. Cossairt, D. Miau, and S. K. Nayar, "Gigapixel computational imaging," in 2011 IEEE International Conference on Computational Photography (ICCP), 2011, pp. 1-8.
- Q. Sun, X. Dun, Y. Peng, and W. Heidrich, "Depth and transient imaging with compressive spad array cameras," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
- Y. Peng, Q. Sun, X. Dun, G. Wetzstein, and W. Heidrich, "Learned large field-of-view imaging with thin-plate optics," in ACM Transactions on Graphics (Proc. SIGGRAPH Asia), vol. 38, no. 6. ACM, 2019.
- V. Sitzmann, S. Diamond, Y. Peng, X. Dun, S. Boyd, W. Heidrich, F. Heide, and G. Wetzstein, "End-to-end optimization of optics and image processing for achromatic extended depth of field and super-resolution imaging," ACM Transactions on Graphics (TOG), vol. 37, no. 4, p. 114, 2018.
- J. Chang and G. Wetzstein, "Deep optics for monocular depth estimation and 3d object detection," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.
- --, "Deep optics for monocular depth estimation and 3d object detection," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.
- Q. Sun, J. Zhang, X. Dun, B. Ghanem, Y. Peng, and W. Heidrich, "End-to-end learned, optically coded super-resolution spad camera," ACM Trans. Graph., vol. 39, no. 2, Mar. 2020.
- Q. Sun, E. Tseng, Q. Fu, W. Heidrich, and F. Heide, "Learning rank-1 diffrac- tive optics for single-shot high dynamic range imaging," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
- C. A. Metzler, H. Ikoma, Y. Peng, and G. Wetzstein, "Deep optics for single-shot high-dynamic-range imaging," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 1375-1385.
- M. Nimier-David, D. Vicini, T. Zeltner, and W. Jakob, "Mitsuba 2: A retar- getable forward and inverse renderer," Transactions on Graphics (Proceedings of SIGGRAPH Asia), vol. 38, no. 6, Dec. 2019.
- E. Tseng, A. Mosleh, F. Mannan, K. St-Arnaud, A. Sharma, Y. Peng, A. Braun, D. Nowrouzezahrai, J.-F. Lalonde, and F. Heide, "Differentiable compound op- tics and processing pipeline optimization for end-to-end camera design," ACM Transactions on Graphics (TOG), vol. 40, no. 4, 2021.
- Q. Sun, C. Wang, F. Qiang, D. Xiong, and H. Wolfgang, "End-to-end complex lens design with differentiable ray tracing," ACM Transactions on Graphics (TOG), vol. 40, no. 4, 2021.
- T. O. Aydin, R. Mantiuk, K. Myszkowski, and H.-P. Seidel, "Dynamic range independent image quality assessment," ACM Trans. Graph., vol. 27, no. 3, p. 1-10, Aug. 2008.
- A. R. Robertson, "Recent cie work on color difference evaluation," in Review and Evaluation of Appearance: Methods and Techniques. ASTM International, 1986.
- H. Haim, S. Elmalem, R. Giryes, A. Bronstein, and E. Marom, "Depth es- timation from a single image using deep learned phase coded mask," IEEE Transactions on Computational Imaging, vol. 4, pp. 298-310, 2018.
- X. Zhang, R. Ng, and Q. Chen, "Single image reflection separation with percep- tual losses," in IEEE Conference on Computer Vision and Pattern Recognition, 2018.
- L. He, G. Wang, and Z. Hu, "Learning depth from single images with deep neu- ral network embedding focal length," IEEE Transactions on Image Processing, vol. 27, pp. 4676-4689, 2018.
- G. R. Fowles, Introduction to modern optics. Courier Corporation, 1975.
- C. F. Gauss, Dioptrische Untersuchungen von CF Gauss. in der Dieterichschen Buchhandlung, 1843.
- R. Kingslake and R. B. Johnson, Lens design fundamentals. Academic Press, 2009.
- L. Seidel, "Ueber die theorie der fehler," mit welchen die durch optische In- strumente gesehenen Bilder behaftet sind, und über die mathematischen Bedin- gungen ihrer Aufhebung. Abhandlungen der Naturwissenschaftlich-Technischen Commission bei der Königl. Bayerischen Akademie der Wissenschaften in München. Cotta, vol. 2, p. 4, 1857.
- G. G. Sliusarev, "Aberration and optical design theory," Bristol, England, Adam Hilger, Ltd., 1984, 672 p. Translation., 1984.
- J. M. Geary, Introduction to lens design: with practical ZEMAX examples. Willmann-Bell Richmond, 2002.
- D. Malacara-Hernández and Z. Malacara-Hernández, Handbook of optical de- sign. CRC Press, 2016.
- S. W. Hasinoff, D. Sharlet, R. Geiss, A. Adams, J. T. Barron, F. Kainz, J. Chen, and M. Levoy, "Burst photography for high dynamic range and low-light imaging on mobile cameras," ACM Transactions on Graphics (TOG), vol. 35, no. 6, p. 192, 2016.
- C. Chen, Q. Chen, J. Xu, and V. Koltun, "Learning to see in the dark," 2018.
- X. Yuan, L. Fang, Q. Dai, D. J. Brady, and Y. Liu, "Multiscale gigapixel video: A cross resolution image matching and warping approach," in 2017 IEEE In- ternational Conference on Computational Photography (ICCP). IEEE, 2017, pp. 1-9.
- Light.co, "Light l16 camera," 2018.
- MobilEye, "Mobileeye tricam," 2018.
- K. Venkataraman, D. Lelescu, J. Duparré, A. McMahon, G. Molina, P. Chatter- jee, R. Mullis, and S. Nayar, "Picam: An ultra-thin high performance monolithic camera array," ACM Transactions on Graphics (TOG), vol. 32, no. 6, p. 166, 2013.
- Y. Peng, Q. Fu, H. Amata, S. Su, F. Heide, and W. Heidrich, "Computational imaging using lightweight diffractive-refractive optics," Optics express, vol. 23, no. 24, pp. 31 393-31 407, 2015.
- F. Heide, M. Rouf, M. B. Hullin, B. Labitzke, W. Heidrich, and A. Kolb, "High- quality computational imaging through simple lenses," ACM Transactions on Graphics (TOG), vol. 32, no. 5, p. 149, 2013.
- F. Heide, Q. Fu, Y. Peng, and W. Heidrich, "Encoded diffractive optics for full-spectrum computational imaging," Scientific Reports, vol. 6, 2016.
- G. R. Fowles, Introduction to modern optics. Courier Dover Publications, 2012.
- W. J. Smith, Modern lens design. McGraw-Hill, 2005.
- Y. Shih, B. Guenter, and N. Joshi, "Image enhancement using calibrated lens simulations," in European Conference on Computer Vision. Springer, 2012, pp. 42-56.
- D. G. Stork and P. R. Gill, "Lensless ultra-miniature cmos computational im- agers and sensors," Proc. Sensorcomm, pp. 186-190, 2013.
- --, "Optical, mathematical, and computational foundations of lensless ultra- miniature diffractive imagers and sensors," International Journal on Advances in Systems and Measurements, vol. 7, no. 3, p. 4, 2014.
- M. Monjur, L. Spinoulas, P. R. Gill, and D. G. Stork, "Ultra-miniature, compu- tationally efficient diffractive visual-bar-position sensor," in Proc. SensorComm. IEIFSA, 2015.
- N. Antipa, G. Kuo, R. Heckel, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, "Diffusercam: lensless single-exposure 3d imaging," Optica, vol. 5, no. 1, pp. 1-9, 2018.
- Y. Peng, Q. Fu, F. Heide, and W. Heidrich, "The diffractive achromat full spectrum computational imaging with diffractive optics," ACM Trans. Graph. (SIGGRAPH), vol. 35, no. 4, p. 31, 2016.
- M. Papas, T. Houit, D. Nowrouzezahrai, M. H. Gross, and W. Jarosz, "The magic lens: refractive steganography." ACM Trans. Graph., vol. 31, no. 6, pp. 186-1, 2012.
- Y. Schwartzburg, R. Testuz, A. Tagliasacchi, and M. Pauly, "High-contrast computational caustic design," ACM Transactions on Graphics (TOG), vol. 33, no. 4, p. 74, 2014.
- Y. Peng, X. Dun, Q. Sun, and W. Heidrich, "Mix-and-match holography," ACM Trans. Graph. (SIGGRAPH Aisa), vol. 36, no. 6, p. 191, 2017.
- Y. Wu, V. Boominathan, H. Chen, A. Sankaranarayanan, and A. Veeraragha- van, "Phasecam3d -learning phase masks for passive single view depth estima- tion," in Proc. ICCP, 2019.
- J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, "Hybrid optical-electronic convolutional neural networks with optimized diffractive op- tics for image classification," Scientific reports, vol. 8, no. 1, p. 12324, 2018.
- S. Oliver, R. Lake, S. Hegde, J. Viens, and J. Duparre, "Imaging module with symmetrical lens system and method of manufacture," May 4 2010, uS Patent 7,710,667.
- W. Duoshu, C. Luo, Y. Xiong, T. Chen, H. Liu, and J. Wang, "Fabrication tech- nology of the centrosymmetric continuous relief diffractive optical elements," Physics Procedia, vol. 18, pp. 95-99, 2011.
- P. Genevet, F. Capasso, F. Aieta, M. Khorasaninejad, and R. Devlin, "Recent advances in planar optics: from plasmonic to dielectric metasurfaces," Optica, vol. 4, no. 1, pp. 139-152, 2017.
- S. H. Ahn and L. J. Guo, "Large-area roll-to-roll and roll-to-plate nanoimprint lithography: a step toward high-throughput application of continuous nanoim- printing," ACS Nano, vol. 3, no. 8, pp. 2304-2310, 2009.
- S. Y. Chou, P. R. Krauss, and P. J. Renstrom, "Nanoimprint lithography," Journal of Vacuum Science & Technology B: Microelectronics and Nanometer Structures Processing, Measurement, and Phenomena, vol. 14, no. 6, pp. 4129- 4133, 1996.
- M. Zoberbier, S. Hansen, M. Hennemeyer, D. Tönnies, R. Zoberbier, M. Brehm, A. Kraft, M. Eisner, and R. Völkel, "Wafer level cameras-novel fabrication and packaging technologies," in Int. Image Sens. Workshop, 2009.
- F. Fang, X. Zhang, A. Weckenmann, G. Zhang, and C. Evans, "Manufacturing and measurement of freeform optics," CIRP Annals, vol. 62, no. 2, pp. 823-846, 2013.
- Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE transactions on image processing, vol. 13, no. 4, pp. 600-612, 2004.
- K. Mitra, O. Cossairt, and A. Veeraraghavan, "To denoise or deblur: param- eter optimization for imaging systems," in Digital Photography X, vol. 9023. International Society for Optics and Photonics, 2014, p. 90230G.
- A. Mittal, A. K. Moorthy, and A. C. Bovik, "No-reference image quality assess- ment in the spatial domain," IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695-4708, 2012.
- M. Estribeau and P. Magnan, "Fast mtf measurement of cmos imagers using iso 12333 slanted-edge methodology," in Detectors and Associated Signal Pro- cessing, vol. 5251. International Society for Optics and Photonics, 2004, pp. 243-253.
- EMVA Standard, "1288: Standard for characterization and presentation of spec- ification data for image sensors and cameras," European Machine Vision Asso- ciation, 2005.
- J. R. Parker, Algorithms for image processing and computer vision. John Wiley & Sons, 2010.
- G. D. Boreman, Modulation transfer function in optical and electro-optical sys- tems. SPIE press Bellingham, WA, 2001, vol. 21.
- J. Johnson, A. Alahi, and L. Fei-Fei, "Perceptual losses for real-time style transfer and super-resolution," in European Conference on Computer Vision. Springer, 2016, pp. 694-711.
- M. Geese, U. Seger, and A. Paolillo, "Detection probabilities: Performance prediction for sensors of autonomous vehicles," Electronic Imaging, vol. 2018, no. 17, pp. 148-1-148-14, 2018.
- T. S. Cho, C. L. Zitnick, N. Joshi, S. B. Kang, R. Szeliski, and W. T. Freeman, "Image restoration by matching gradient distributions," IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), vol. 34, no. 4, pp. 683- 694, 2012.
- D. Krishnan and R. Fergus, "Fast image deconvolution using hyper-laplacian priors," in Advances in Neural Information Processing Systems. NIPS, 2009, pp. 1033-1041.
- E. Gilad and J. Von Hardenberg, "A fast algorithm for convolution integrals with space and time variant kernels," Journal of Computational Physics, vol. 216, no. 1, pp. 326-336, 2006.
- C. J. Schuler, H. Christopher Burger, S. Harmeling, and B. Scholkopf, "A ma- chine learning approach for non-blind image deconvolution," in Proc. Computer Vision and Pattern Recognition, 2013.
- L. Xu, J. S. Ren, C. Liu, and J. Jia, "Deep convolutional neural network for image deconvolution," in Advances in Neural Information Processing Systems, 2014, pp. 1790-1798.
- J. Zhang, J. Pan, W.-S. Lai, R. W. Lau, and M.-H. Yang, "Learning fully convolutional networks for iterative non-blind deconvolution," 2017.
- S. Nah, T. H. Kim, and K. M. Lee, "Deep multi-scale convolutional neural network for dynamic scene deblurring," in CVPR, vol. 1, no. 2, 2017, p. 3.
- O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin, and J. Matas, "Deblurgan: Blind motion deblurring using conditional adversarial networks," arXiv preprint arXiv:1711.07064, 2017.
- O. Cossairt and S. Nayar, "Spectral focal sweep: Extended depth of field from chromatic aberrations," in IEEE International Conference on Computational Photography (ICCP). IEEE, 2010, pp. 1-8.
- P. Wang, N. Mohammad, and R. Menon, "Chromatic-aberration-corrected diffractive lenses for ultra-broadband focusing," Scientific Reports, vol. 6, 2016.
- M. Huh, P. Agrawal, and A. A. Efros, "What makes imagenet good for transfer learning?" arXiv preprint arXiv:1608.08614, 2016.
- Y. Bengio, "Deep learning of representations for unsupervised and transfer learn- ing," in Proceedings of ICML Workshop on Unsupervised and Transfer Learning, 2012, pp. 17-36.
- E. Hecht, "Hecht optics," Addison Wesley, vol. 997, pp. 213-214, 1998.
- J. W. Goodman, Introduction to Fourier optics. Roberts and Company Pub- lishers, 2005.
- A. Kalvach and Z. Szabó, "Aberration-free flat lens design for a wide range of incident angles," Journal of the Optical Society of America B, vol. 33, no. 2, p. A66, 2016.
- J. Zhu, T. Yang, and G. Jin, "Design method of surface contour for a freeform lens with wide linear field-of-view," Optics express, vol. 21, no. 22, pp. 26 080- 26 092, 2013.
- R. Ng, M. Levoy, M. Brédif, G. Duval, M. Horowitz, P. Hanrahan et al., "Light field photography with a hand-held plenoptic camera," 2005.
- Y.-R. Ng, P. M. Hanrahan, M. A. Horowitz, and M. S. Levoy, "Correction of optical aberrations," Aug. 14 2012, uS Patent 8,243,157.
- R. Ramanath, W. Snyder, Y. Yoo, and M. Drew, "Color image processing pipeline in digital still cameras," IEEE Signal Processing Magazine, vol. 22, no. 1, pp. 34-43, 2005.
- T. Brooks, B. Mildenhall, T. Xue, J. Chen, D. Sharlet, and J. T. Barron, "Un- processing images for learned raw denoising," arXiv preprint arXiv:1811.11127, 2018.
- F. Heide, M. Steinberger, Y.-T. Tsai, M. Rouf, D. Pajak, D. Reddy, O. Gallo, J. Liu, W. Heidrich, K. Egiazarian et al., "Flexisp: A flexible camera image processing framework," ACM Transactions on Graphics (TOG), vol. 33, no. 6, p. 231, 2014.
- L. Sun, S. Cho, J. Wang, and J. Hays, "Edge-based blur kernel estimation using patch priors," in Proc. International Conference on Computational Photography (ICCP), 2013, pp. 1-8.
- O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in International Conference on Medical image computing and computer-assisted intervention. Springer, 2015, pp. 234-241.
- A. Odena, V. Dumoulin, and C. Olah, "Deconvolution and checkerboard arti- facts," Distill, vol. 1, no. 10, p. e3, 2016.
- C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. P. Aitken, A. Tejani, J. Totz, Z. Wang et al., "Photo-realistic single image super- resolution using a generative adversarial network," in CVPR, vol. 2, no. 3, 2017, p. 4.
- I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in neu- ral information processing systems, 2014, pp. 2672-2680.
- M. Arjovsky, S. Chintala, and L. Bottou, "Wasserstein gan," arXiv preprint arXiv:1701.07875, 2017.
- L.-W. Chang, Y. Chen, W. Bao, A. Agarwal, E. Akchurin, K. Deng, and E. Bar- soum, "Accelerating recurrent neural networks through compiler techniques and quantization," 2018.
- P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," CVPR, 2017.
- A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, "Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging," Nature communications, vol. 3, p. 745, 2012.
- A. Velten, D. Wu, A. Jarabo, B. Masia, C. Barsi, C. Joshi, E. Lawson, M. Bawendi, D. Gutierrez, and R. Raskar, "Femto-photography: capturing and visualizing the propagation of light," ACM Trans. Graph. (ToG), vol. 32, no. 4, p. 44, 2013.
- D. Shin, F. Xu, D. Venkatraman, R. Lussana, F. Villa, F. Zappa, V. K. Goyal, F. N. Wong, and J. H. Shapiro, "Photon-efficient imaging with a single-photon camera," Nature Communications, vol. 7, 2016.
- G. Gariepy, N. Krstajić, R. Henderson, C. Li, R. R. Thomson, G. S. Buller, B. Heshmat, R. Raskar, J. Leach, and D. Faccio, "Single-photon sensitive light- in-fight imaging," Nature Communications, vol. 6, 2015.
- M. O'Toole, F. Heide, D. B. Lindell, K. Zang, S. Diamond, and G. Wetzstein, "Reconstructing transient images from single-photon sensors," in Proc. Com- puter Vision and Pattern Recognization (CVPR). IEEE, 2017, pp. 2289-2297.
- D. E. Schwartz, E. Charbon, and K. L. Shepard, "A single-photon avalanche diode array for fluorescence lifetime imaging microscopy," IEEE journal of solid- state circuits, vol. 43, no. 11, pp. 2546-2557, 2008.
- D.-U. Li, J. Arlt, J. Richardson, R. Walker, A. Buts, D. Stoppa, E. Charbon, and R. Henderson, "Real-time fluorescence lifetime imaging system with a 32× 32 0.13 µm cmos low dark-count single-photon avalanche diode array," Optics Express, vol. 18, no. 10, pp. 10 257-10 269, 2010.
- M. V. Nemallapudi, S. Gundacker, P. Lecoq, E. Auffray, A. Ferri, A. Gola, and C. Piemonte, "Sub-100 ps coincidence time resolution for positron emission tomography with lso: Ce codoped with ca," Physics in Medicine & Biology, vol. 60, no. 12, p. 4635, 2015.
- A. C. Ulku, C. Bruschini, I. M. Antolović, Y. Kuo, R. Ankri, S. Weiss, X. Michalet, and E. Charbon, "A 512× 512 spad image sensor with integrated gating for widefield flim," IEEE Journal of Selected Topics in Quantum Elec- tronics, vol. 25, no. 1, pp. 1-12, 2018.
- J. M. Pavia, M. Wolf, and E. Charbon, "Measurement and modeling of mi- crolenses fabricated on single-photon avalanche diode arrays for fill factor re- covery," Optics express, vol. 22, no. 4, pp. 4202-4213, 2014.
- G. Intermite, A. McCarthy, R. E. Warburton, X. Ren, F. Villa, R. Lussana, A. J. Waddie, M. R. Taghizadeh, A. Tosi, F. Zappa et al., "Fill-factor improve- ment of si cmos single-photon avalanche diode detector arrays by integration of diffractive microlens arrays," Optics Express, vol. 23, no. 26, pp. 33 777-33 791, 2015.
- H. Chen, M. S. Asif, A. C. Sankaranarayanan, and A. Veeraraghavan, "Fpa- cs: Focal plane array-based compressive imaging in short-wave infrared," in Proc. Computer Vision and Pattern Recognization (CVPR). IEEE, 2015, pp. 2358-2366.
- L. Xiao, F. Heide, M. O'Toole, A. Kolb, M. B. Hullin, K. Kutulakos, and W. Heidrich, "Defocus deblurring and superresolution for time-of-flight depth cameras," in Proceedings of the IEEE Conference on Computer Vision and Pat- tern Recognition. IEEE, 2015, pp. 2376-2384.
- S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. Moerner, "Three-dimensional, single-molecule fluorescence imaging beyond the diffraction limit by using a double-helix point spread func- tion," Proceedings of the National Academy of Sciences, vol. 106, no. 9, pp. 2995-2999, 2009.
- Y. Shechtman, S. J. Sahl, A. S. Backer, and W. Moerner, "Optimal point spread function design for 3d imaging," Physical review letters, vol. 113, no. 13, p. 133902, 2014.
- A. Levin, R. Fergus, F. Durand, and W. T. Freeman, "Image and depth from a conventional camera with a coded aperture," ACM Trans. Graph. (TOG), vol. 26, no. 3, p. 70, 2007.
- B. Cai, X. Xu, K. Jia, C. Qing, and D. Tao, "Dehazenet: An end-to-end system for single image haze removal," IEEE Transactions on Image Processing, vol. 25, no. 11, pp. 5187-5198, 2016.
- D. Gong, J. Yang, L. Liu, Y. Zhang, I. D. Reid, C. Shen, A. Van Den Hengel, and Q. Shi, "From motion blur to motion flow: A deep learning solution for removing heterogeneous motion blur." in Proc. Computer Vision and Pattern Recognization (CVPR), vol. 1, no. 2. IEEE, 2017, p. 5.
- S. Su, F. Heide, G. Wetzstein, and W. Heidrich, "Deep end-to-end time-of- flight imaging," in Proc. Computer Vision and Pattern Recognization (CVPR). IEEE, 2018, pp. 6383-6392.
- J. Yang, J. Wright, T. Huang, and Y. Ma, "Image super-resolution as sparse representation of raw image patches," in Proc. Computer Vision and Pattern Recognization (CVPR). IEEE, 2008, pp. 1-8.
- J. Yang, J. Wright, T. S. Huang, and Y. Ma, "Image super-resolution via sparse representation," IEEE transactions on image processing, vol. 19, no. 11, pp. 2861-2873, 2010.
- C.-Y. Yang and M.-H. Yang, "Fast direct super-resolution by simple functions." IEEE, 2013, pp. 561-568.
- R. Timofte, V. De Smet, and L. Van Gool, "A+: Adjusted anchored neighbor- hood regression for fast super-resolution." Springer, 2014, pp. 111-126.
- S. Schulter, C. Leistner, and H. Bischof, "Fast and accurate image upscaling with super-resolution forests," in Proc. Computer Vision and Pattern Recognization (CVPR). IEEE, 2015, pp. 3791-3799.
- W. Shi, J. Caballero, F. Huszár, J. Totz, A. P. Aitken, R. Bishop, D. Rueck- ert, and Z. Wang, "Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2016, pp. 1874-1883.
- C. Dong, C. C. Loy, and X. Tang, "Accelerating the super-resolution convolu- tional neural network," in European Conference on Computer Vision. Springer, 2016, pp. 391-407.
- C. Dong, C. C. Loy, K. He, and X. Tang, "Image super-resolution using deep convolutional networks," IEEE transactions on pattern analysis and machine intelligence, vol. 38, no. 2, pp. 295-307, 2016.
- Z. Wang, D. Liu, J. Yang, W. Han, and T. Huang, "Deep networks for image super-resolution with sparse prior." IEEE, 2015, pp. 370-378.
- J. Kim, J. Kwon Lee, and K. Mu Lee, "Accurate image super-resolution us- ing very deep convolutional networks," in Proc. Computer Vision and Pattern Recognization (CVPR). IEEE, 2016, pp. 1646-1654.
- --, "Deeply-recursive convolutional network for image super-resolution," in Proc. Computer Vision and Pattern Recognization (CVPR). IEEE, 2016, pp. 1637-1645.
- W.-S. Lai, J.-B. Huang, N. Ahuja, and M.-H. Yang, "Deep laplacian pyramid networks for fast and accurate super-resolution," in Proc. Computer Vision and Pattern Recognization (CVPR). IEEE, 2017, pp. 624-632.
- B. Lim, S. Son, H. Kim, S. Nah, and K. M. Lee, "Enhanced deep residual net- works for single image super-resolution," in Proc. Computer Vision and Pattern Recognization (CVPR)Workshops, vol. 1, no. 2. IEEE, 2017, p. 3.
- K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recog- nition," in Proc. Computer Vision and Pattern Recognization (CVPR). IEEE, 2016, pp. 770-778.
- M. Haris, G. Shakhnarovich, and N. Ukita, "Deep back-projection networks for super-resolution," arXiv, 2018.
- N. George and W. Chi, "Extended depth of field using a logarithmic asphere," Journal of Optics A: Pure and Applied Optics, vol. 5, no. 5, p. S157, 2003.
- L.-H. Yeh and L. Waller, "3d super-resolution optical fluctuation imaging (3d- sofi) with speckle illumination," in Computational Optical Sensing and Imaging. Optical Society of America, 2016, pp. CW5D-2.
- C. Zhou, S. Lin, and S. K. Nayar, "Coded aperture pairs for depth from defocus and defocus deblurring," International journal of computer vision, vol. 93, no. 1, pp. 53-72, 2011.
- R. F. Marcia, Z. T. Harmany, and R. M. Willett, "Compressive coded aperture imaging," in Computational Imaging VII, vol. 7246. International Society for Optics and Photonics, 2009, p. 72460G.
- P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, "Coded aperture compressive temporal imaging," Optics express, vol. 21, no. 9, pp. 10 526-10 545, 2013.
- G. R. Arce, D. J. Brady, L. Carin, H. Arguello, and D. S. Kittle, "Compressive coded aperture spectral imaging: An introduction," IEEE Signal Processing Magazine, vol. 31, no. 1, pp. 105-115, 2014.
- G. Kim, J. A. Domínguez-Caballero, and R. Menon, "Design and analysis of multi-wavelength diffractive optics," Optics Express, vol. 20, no. 3, pp. 2814- 2823, 2012.
- W. Qu, H. Gu, H. Zhang, and Q. Tan, "Image magnification in lensless holo- graphic projection using double-sampling fresnel diffraction," Applied Optics, vol. 54, no. 34, pp. 10 018-10 021, 2015.
- M. Petrov, S. Bibikov, Y. Yuzifovich, R. Skidanov, and A. Nikonorov, "Color correction with 3d lookup tables in diffractive optical imaging systems," Proce- dia Engineering, vol. 201, pp. 73-82, 2017.
- Y. Peng, X. Dun, Q. Sun, F. Heide, and W. Heidrich, "Focal sweep imaging with multi-focal diffractive optics," in International Conference on Computational Photography (ICCP). IEEE, 2018, pp. 1-8.
- C. Zhao, A. Carass, B. E. Dewey, J. Woo, J. Oh, P. A. Calabresi, D. S. Reich, P. Sati, D. L. Pham, and J. L. Prince, "A deep learning based anti-aliasing self super-resolution algorithm for mri," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2018, pp. 100-108.
- S. Datta, N. Chaki, and K. Saeed, "Minimizing aliasing effects using faster super resolution technique on text images," in Transactions on Computational Science XXXI. Springer, 2018, pp. 136-153.
- D. O'Connor, Time-correlated single photon counting. Academic Press, 2012.
- D. D.-U. Li, S. Ameer-Beg, J. Arlt, D. Tyndall, R. Walker, D. R. Matthews, V. Visitkul, J. Richardson, and R. K. Henderson, "Time-domain fluorescence lifetime imaging techniques suitable for solid-state imaging sensor arrays," Sen- sors, vol. 12, no. 5, pp. 5650-5669, 2012.
- A. Kirmani, D. Venkatraman, D. Shin, A. Colaço, F. N. Wong, J. H. Shapiro, and V. K. Goyal, "First-photon imaging," Science, vol. 343, no. 6166, pp. 58-61, 2014.
- A. K. Pediredla, A. C. Sankaranarayanan, M. Buttafava, A. Tosi, and A. Veer- araghavan, "Signal processing based pile-up compensation for gated single- photon avalanche diodes," arXiv preprint arXiv:1806.07437, 2018.
- F. Heide, S. Diamond, D. B. Lindell, and G. Wetzstein, "Sub-picosecond photon- efficient 3d imaging using single-photon sensors," arXiv, 2018.
- F. Heide, M. O'Toole, K. Zang, D. B. Lindell, S. Diamond, and G. Wetzstein, "Non-line-of-sight imaging with partial occluders and surface normals," ACM Trans. Graph., 2019.
- D. B. Lindell, G. Wetzstein, and M. O'Toole, "Wave-based non-line-of-sight imaging using fast f-k migration," ACM Trans. Graph. (SIGGRAPH), vol. 38, no. 4, p. 116, 2019.
- D. B. Lindell, M. O'Toole, and G. Wetzstein, "Single-photon 3d imaging with deep sensor fusion," ACM Trans. Graph. (SIGGRAPH), no. 4, 2018.
- M. Iliadis, L. Spinoulas, and A. K. Katsaggelos, "Deepbinarymask: Learning a binary mask for video compressive sensing," arXiv, 2016.
- A. Chakrabarti, "Learning sensor multiplexing design through backpropagation," in Advances in Neural Information Processing Systems, 2016, pp. 3081-3089.
- Y. Wu, V. Boominathan, H. Chen, A. Sankaranarayanan, and A. Veeraraghavan, "Phasecam3d - learning phase masks for passive single view depth estimation," in Computational Photography (ICCP), 2019 IEEE International Conference on. IEEE, 2019, pp. 1-8.
- J. Chang, V. Sitzmann, X. Dun, W. Heidrich, and G. Wetzstein, "Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification," Scientific Reports, 2018.
- M. Parker, Digital Signal Processing 101, Second Edition: Everything You Need to Know to Get Started, 2nd ed. Newton, MA, USA: Newnes, 2017.
- R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of phase from image and diffraction plane pictures," Optik, vol. 35, p. 237, 1972.
- B. Morgan, C. M. Waits, J. Krizmanic, and R. Ghodssi, "Development of a deep silicon phase fresnel lens using gray-scale lithography and deep reactive ion etching," Journal of Microelectromechanical Systems, vol. 13, no. 1, pp. 113-120, 2004.
- F. Heide, L. Xiao, A. Kolb, M. B. Hullin, and W. Heidrich, "Imaging in scattering media using correlation image sensors and sparse convolutional coding," Optics Express, vol. 22, no. 21, pp. 26 338-26 350, 2014.
- M. Bevilacqua, A. Roumy, C. Guillemot, and M. L. Alberi-Morel, "Low-complexity single-image super-resolution based on nonnegative neighbor embedding," 2012.
- R. Zeyde, M. Elad, and M. Protter, "On single image scale-up using sparse-representations," in International Conference on Curves and Surfaces. Springer, 2010, pp. 711-730.
- P. Arbelaez, M. Maire, C. Fowlkes, and J. Malik, "Contour detection and hierarchical image segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 5, pp. 898-916, 2011.
- P. D. Burns and D. Williams, "Refined slanted-edge measurement for practical camera and scanner testing," in IS&T PICS Conference. Society for Imaging Science and Technology, 2002, pp. 191-195.
- D. Qin, Y. Xia, and G. M. Whitesides, "Soft lithography for micro-and nanoscale patterning," Nature protocols, vol. 5, no. 3, pp. 491-502, 2010.
- S. Donati, G. Martini, and M. Norgia, "Microconcentrators to recover fill-factor in image photodetectors with pixel on-board processing circuits," Optics express, vol. 15, no. 26, pp. 18 066-18 075, 2007.
- B. Mildenhall, J. T. Barron, J. Chen, D. Sharlet, R. Ng, and R. Carroll, "Burst denoising with kernel prediction networks," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2502-2510.
- A. Darmont, High dynamic range imaging: sensors and architectures. SPIE Press, Bellingham, WA, 2012.
- U. Seger, "Hdr imaging in automotive applications," in High Dynamic Range Video. Elsevier, 2016, pp. 477-498.
- T. Mertens, J. Kautz, and F. Van Reeth, "Exposure fusion: A simple and practical alternative to high dynamic range photography," in Computer graphics forum, vol. 28, no. 1. Wiley Online Library, 2009, pp. 161-171.
- S. K. Nayar and T. Mitsunaga, "High dynamic range imaging: Spatially varying pixel exposures," in Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No. PR00662), vol. 1. IEEE, 2000, pp. 472-479.
- T. Willassen, J. Solhusvik, R. Johansson, S. Yaghmai, H. Rhodes, S. Manabe, D. Mao, Z. Lin, D. Yang, O. Cellek et al., "A 1280×1080 4.2 µm split-diode pixel hdr sensor in 110 nm bsi cmos process," in Proceedings of the International Image Sensor Workshop, Vaals, The Netherlands, 2015, pp. 8-11.
- A. Morimitsu, I. Hirota, S. Yokogawa, I. Ohdaira, M. Matsumura, H. Takahashi, T. Yamazaki, H. Oyaizu, Y. Incesu, M. Atif et al., "A 4m pixel full-pdaf cmos image sensor with 1.58 µm 2×1 on-chip micro-split-lens technology," in ITE Technical Report 39.35. The Institute of Image Information and Television Engineers, 2015, pp. 5-8.
- M. D. Tocci, C. Kiser, N. Tocci, and P. Sen, "A versatile hdr video production system," in ACM Transactions on Graphics (TOG), vol. 30, no. 4. ACM, 2011, p. 41.
- G. Eilertsen, J. Kronander, G. Denes, R. K. Mantiuk, and J. Unger, "Hdr image reconstruction from a single exposure using deep cnns," ACM Transactions on Graphics (TOG), vol. 36, no. 6, p. 178, 2017.
- K. Fotiadou, G. Tsagkatakis, and P. Tsakalides, "Snapshot high dynamic range imaging via sparse representations and feature learning," IEEE Transactions on Multimedia, 2019.
- M. Rouf, R. Mantiuk, W. Heidrich, M. Trentacoste, and C. Lau, "Glare encoding of high dynamic range images," CVPR 2011, pp. 289-296, 2011.
- C. A. Metzler, H. Ikoma, Y. Peng, and G. Wetzstein, "Deep optics for single-shot high-dynamic-range imaging," arXiv preprint arXiv:1908.00620, 2019.
- P. E. Debevec and J. Malik, "Recovering high dynamic range radiance maps from photographs," in SIGGRAPH '97, 1997.
- E. Reinhard, G. Ward, S. Pattanaik, P. E. Debevec, W. Heidrich, and K. Myszkowski, "High dynamic range imaging: Acquisition, display, and image-based lighting," 2010.
- M. D. Grossberg and S. K. Nayar, "High dynamic range from multiple images: Which exposures to combine?" 2003.
- S. W. Hasinoff, F. Durand, and W. T. Freeman, "Noise-optimal capture for high dynamic range photography," 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 553-560, 2010.
- S. B. Kang, M. Uyttendaele, S. A. J. Winder, and R. Szeliski, "High dynamic range video," ACM Trans. Graph., vol. 22, pp. 319-325, 2003.
- E. A. Khan, A. O. Akyüz, and E. Reinhard, "Ghost removal in high dynamic range images," 2006 International Conference on Image Processing, pp. 2005-2008, 2006.
- C. Liu, "Exploring new representations and applications for motion analysis," 2009.
- O. Gallo, N. Gelfandz, W.-C. Chen, M. Tico, and K. Pulli, "Artifact-free high dynamic range imaging," 2009 IEEE International Conference on Computational Photography (ICCP), pp. 1-7, 2009.
- M. Granados, K. I. Kim, J. Tompkin, and C. Theobalt, "Automatic noise modeling for ghost-free hdr reconstruction," ACM Trans. Graph., vol. 32, pp. 201:1-201:10, 2013.
- J. Hu, O. Gallo, K. Pulli, and X. Sun, "Hdr deghosting: How to deal with saturation?" 2013 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1163-1170, 2013.
- N. K. Kalantari, E. Shechtman, C. Barnes, S. Darabi, D. B. Goldman, and P. Sen, "Patch-based high dynamic range video," ACM Trans. Graph., vol. 32, pp. 202:1-202:8, 2013.
- P. Sen, N. K. Kalantari, M. Yaesoubi, S. Darabi, D. B. Goldman, and E. Shechtman, "Robust patch-based hdr reconstruction of dynamic scenes," ACM Trans. Graph., vol. 31, pp. 203:1-203:11, 2012.
- N. K. Kalantari and R. Ramamoorthi, "Deep high dynamic range imaging of dynamic scenes," ACM Trans. Graph., vol. 36, pp. 144:1-144:12, 2017.
- --, "Deep hdr video from sequences with alternating exposures," Comput. Graph. Forum, vol. 38, pp. 193-205, 2019.
- F. Banterle, P. Ledda, K. Debattista, and A. Chalmers, "Inverse tone mapping," in GRAPHITE, 2006.
- P. Didyk, R. Mantiuk, M. Hein, and H.-P. Seidel, "Enhancement of bright video features for hdr displays," Comput. Graph. Forum, vol. 27, pp. 1265-1274, 2008.
- L. Meylan, S. J. Daly, and S. Süsstrunk, "The reproduction of specular highlights on high dynamic range displays," in Color Imaging Conference, 2006.
- A. G. Rempel, M. Trentacoste, H. Seetzen, H. D. Young, W. Heidrich, L. A. Whitehead, and G. Ward, "Ldr2hdr: on-the-fly reverse tone mapping of legacy video and photographs," in SIGGRAPH 2007, 2007.
- A. O. Akyüz, R. W. Fleming, B. E. Riecke, E. Reinhard, and H. H. Bülthoff, "Do hdr displays support ldr content?: a psychophysical evaluation," in SIGGRAPH 2007, 2007.
- B. Masiá, S. Agustin, R. W. Fleming, O. Sorkine-Hornung, and D. Gutierrez, "Evaluation of reverse tone mapping through varying exposure conditions," ACM Trans. Graph., vol. 28, p. 160, 2009.
- K. Moriwaki, R. Yoshihashi, R. Kawakami, S. You, and T. Naemura, "Hybrid loss for learning single-image-based HDR reconstruction," arXiv preprint arXiv:1812.07134, 2018.
- Y. Endo, Y. Kanamori, and J. Mitani, "Deep reverse tone mapping," ACM Transactions on Graphics (Proc. of SIGGRAPH Asia), vol. 36, no. 6, p. 177, 2017.
- J. Zhang and J. Lalonde, "Learning high dynamic range from outdoor panoramas," CoRR, vol. abs/1703.10200, 2017. [Online]. Available: http://arxiv.org/abs/1703.10200
- S. Lee, G. H. An, and S.-J. Kang, "Deep chain hdri: Reconstructing a high dynamic range image from a single low dynamic range image," IEEE Access, vol. 6, pp. 49 913-49 924, 2018.
- S. Lee, G. H. An, and S.-J. Kang, "Deep recursive hdri: Inverse tone mapping using generative adversarial networks," in The European Conference on Computer Vision (ECCV), September 2018.
- C. Wang, Y. Zhao, and R. Wang, "Deep inverse tone mapping for compressed images," IEEE Access, vol. 7, pp. 74 558-74 569, 2019.
- D. Marnerides, T. Bashford-Rogers, J. Hatchett, and K. Debattista, "Expandnet: A deep convolutional neural network for high dynamic range expansion from low dynamic range content," CoRR, vol. abs/1803.02266, 2018. [Online]. Available: http://arxiv.org/abs/1803.02266
- S. Ning, H. Xu, L. Song, R. Xie, and W. Zhang, "Learning an inverse tone mapping network with a generative adversarial regularizer," 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1383-1387, 2018.
- H. Jang, K. Bang, J. Jang, and D. Hwang, "Inverse tone mapping operator using sequential deep neural networks based on the human visual system," IEEE Access, vol. 6, pp. 52 058-52 072, 2018.
- S. Hajisharif, J. Kronander, and J. Unger, "Adaptive dualiso hdr reconstruction," EURASIP Journal on Image and Video Processing, vol. 2015, pp. 1-13, 2015.
- A. Serrano, F. Heide, D. Gutierrez, G. Wetzstein, and B. Masiá, "Convolutional sparse coding for high dynamic range imaging," Comput. Graph. Forum, vol. 35, pp. 153-163, 2016.
- W. Guicquero, A. Dupret, and P. Vandergheynst, "An algorithm architecture co-design for cmos compressive high dynamic range imaging," IEEE Transactions on Computational Imaging, vol. 2, pp. 190-203, 2016.
- H. Zhao, B. Shi, C. Fernandez-Cull, S.-K. Yeung, and R. Raskar, "Unbounded high dynamic range photography using a modulo camera," 2015 IEEE International Conference on Computational Photography (ICCP), pp. 1-10, 2015.
- K. Hirakawa and P. M. Simon, "Single-shot high dynamic range imaging with conventional camera hardware," 2011 International Conference on Computer Vision, pp. 1339-1346, 2011.
- R. Horstmeyer, R. Y. Chen, B. Kappes, and B. Judkewitz, "Convolutional neural networks that teach microscopes how to image," ArXiv, vol. abs/1709.07223, 2017.
- M. Kellman, E. Bostan, M. Chen, and L. Waller, "Data-driven design for Fourier ptychographic microscopy," in 2019 IEEE International Conference on Computational Photography (ICCP). IEEE, 2019, pp. 1-8.
- E. Nehme, D. Freedman, R. Gordon, B. Ferdman, T. Michaeli, and Y. Shechtman, "Dense three dimensional localization microscopy by deep learning," 2019.
- Y. Shechtman, L. E. Weiss, A. S. Backer, M. Y. Lee, and W. E. Moerner, "Multicolour localization microscopy by point-spread-function engineering," Nature Photonics, vol. 10, pp. 590-594, 2016.
- J. Marco, Q. Hernandez, A. Muñoz, Y. Dong, A. Jarabo, M. H. Kim, X. Tong, and D. Gutierrez, "Deeptof: off-the-shelf real-time correction of multipath interference in time-of-flight imaging," ACM Trans. Graph., vol. 36, pp. 219:1-219:12, 2017.
- R. Mantiuk, K. J. Kim, A. G. Rempel, and W. Heidrich, "Hdr-vdp-2: A calibrated visual metric for visibility and quality predictions in all luminance conditions," ACM Transactions on Graphics (TOG), vol. 30, no. 4, pp. 1-14, 2011.
- R. E. Fischer, B. Tadic-Galeb, P. R. Yoder, and R. Galeb, Optical system design. McGraw Hill New York, 2000.
- M. J. Allen, "Automobile windshields, surface deterioration," SAE Technical Paper, Tech. Rep., 1970.
- A. Flores, M. R. Wang, and J. J. Yang, "Achromatic hybrid refractive-diffractive lens with extended depth of focus," Appl. Opt., vol. 43, no. 30, pp. 5618-5630, Oct 2004. [Online]. Available: http://ao.osa.org/abstract.cfm?URI=ao-43-30-5618
- Z. Liu, A. Flores, M. R. Wang, and J. J. Yang, "Diffractive infrared lens with extended depth of focus," Optical Engineering, vol. 46, no. 1, pp. 1-9, 2007.
- A. Levin, R. Fergus, F. Durand, and W. T. Freeman, "Image and depth from a conventional camera with a coded aperture," ACM Trans. Graph., vol. 26, no. 3, p. 70-es, Jul. 2007.
- A. Levin, "Analyzing depth from coded aperture sets," in Computer Vision - ECCV 2010, K. Daniilidis, P. Maragos, and N. Paragios, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2010, pp. 214-227.
- X. Dun, H. Ikoma, G. Wetzstein, Z. Wang, X. Cheng, and Y. Peng, "Learned rotationally symmetric diffractive achromat for full-spectrum computational imaging," Optica, vol. 7, no. 8, pp. 913-922, Aug 2020.
- S. Colburn, A. Zhan, and A. Majumdar, "Metasurface optics for full-color com- putational imaging," Science Advances, vol. 4, no. 2, 2018.
- S. S. Khan, A. V. R., V. Boominathan, J. Tan, A. Veeraraghavan, and K. Mitra, "Towards photorealistic reconstruction of highly multiplexed lensless images," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.
- A. Kotwal, A. Levin, and I. Gkioulekas, "Interferometric transmission probing with coded mutual intensity," ACM Trans. Graph., vol. 39, no. 4, Jul. 2020.
- Y. Wu, F. Li, F. Willomitzer, A. Veeraraghavan, and O. Cossairt, "Wished: Wavefront imaging sensor with high resolution and depth ranging," in 2020 IEEE International Conference on Computational Photography (ICCP), 2020, pp. 1-10.
- V. Boominathan, J. K. Adams, J. T. Robinson, and A. Veeraraghavan, "Phlatcam: Designed phase-mask based thin lensless camera," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 7, pp. 1618-1629, 2020.
- O. Kupyn, T. Martyniuk, J. Wu, and Z. Wang, "Deblurgan-v2: Deblurring (orders-of-magnitude) faster and better," in The IEEE International Conference on Computer Vision (ICCV), Oct 2019.
- G. Côté, J.-F. Lalonde, and S. Thibault, "Extrapolating from lens design databases using deep learning," Opt. Express, vol. 27, no. 20, pp. 28 279-28 292, Sep 2019.
- --, "Deep learning-enabled framework for automatic lens design starting point generation," Opt. Express, vol. 29, no. 3, pp. 3841-3854, Feb 2021.
- D. S. Jeon, S.-H. Baek, S. Yi, Q. Fu, X. Dun, W. Heidrich, and M. H. Kim, "Compact snapshot hyperspectral imaging with diffracted rotation," ACM Transactions on Graphics (Proc. SIGGRAPH 2019), vol. 38, no. 4, pp. 117:1-13, 2019.
- S.-H. Baek, H. Ikoma, D. S. Jeon, Y. Li, W. Heidrich, G. Wetzstein, and M. H. Kim, "End-to-end hyperspectral-depth imaging with learned diffractive optics," arXiv preprint arXiv:2009.00463, 2020.
- C. Zhang, B. Miller, K. Yan, I. Gkioulekas, and S. Zhao, "Path-space differentiable rendering," ACM Trans. Graph., vol. 39, no. 4, pp. 143:1-143:19, 2020.
- C. Zhang, L. Wu, C. Zheng, I. Gkioulekas, R. Ramamoorthi, and S. Zhao, "A differential theory of radiative transfer," ACM Trans. Graph., vol. 38, no. 6, pp. 227:1-227:16, 2019.
- S. Bangaru, T.-M. Li, and F. Durand, "Unbiased warped-area sampling for differentiable rendering," ACM Trans. Graph., vol. 39, no. 6, pp. 245:1-245:18, 2020.
- C. Kolb, D. Mitchell, and P. Hanrahan, "A realistic camera model for computer graphics," in Proceedings of the 22nd annual conference on Computer graphics and interactive techniques, 1995, pp. 317-324.
- F. A. Jenkins and H. E. White, Fundamentals of optics. Tata McGraw-Hill Education, 2018.
- Q. Guo, I. Frosio, O. Gallo, T. Zickler, and J. Kautz, "Tackling 3d tof artifacts through learning and the flat dataset," in The European Conference on Computer Vision (ECCV), September 2018.
- E. Agustsson and R. Timofte, "Ntire 2017 challenge on single image super-resolution: Dataset and study," in The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, July 2017.
- S. W. Hasinoff and K. N. Kutulakos, "Light-efficient photography," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 11, pp. 2203-2214, 2011.
- O. Cossairt, C. Zhou, and S. Nayar, "Diffusion Coding Photography for Extended Depth of Field," ACM Transactions on Graphics (TOG), Aug 2010.