Imaging system design for improved information capacity
Applied Optics, 1984
Abstract
Shannon's theory of information for communication channels is used to assess the performance of line-scan and sensor-array imaging systems and to optimize the design trade-offs involving sensitivity, spatial response, and sampling intervals. Formulations and computational evaluations account for spatial responses typical of line-scan and sensor-array mechanisms, lens diffraction and transmittance shading, defocus blur, and square and hexagonal sampling lattices.
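The capacity criterion the abstract describes can be illustrated with a minimal numerical sketch: integrate a Shannon-type information density, log2(1 + SNR·|OTF|²), over the sampling passband. The Gaussian spatial response, flat scene and noise spectra, and square passband below are illustrative assumptions, not the paper's exact line-scan and sensor-array formulations:

```python
import numpy as np

def information_capacity(snr, sigma, nu_s, n=128):
    """Bits per sample: integral of log2(1 + SNR * |OTF|^2) over the
    square sampling passband |nu_x|, |nu_y| <= nu_s / 2.  The Gaussian
    spatial response and flat scene/noise spectra are illustrative
    assumptions only."""
    nu = np.linspace(-nu_s / 2, nu_s / 2, n)
    nx, ny = np.meshgrid(nu, nu)
    otf = np.exp(-2 * np.pi**2 * sigma**2 * (nx**2 + ny**2))
    d = nu_s / (n - 1)
    return np.log2(1.0 + snr * otf**2).sum() * d * d

# Trade-off: a wider spatial response (larger sigma) lowers capacity,
# while finer sampling (larger nu_s) raises it, at fixed SNR.
for sigma in (0.3, 0.6, 1.2):
    print(f"sigma={sigma}: {information_capacity(100.0, sigma, 1.0):.2f} bits")
```

Sweeping the blur parameter against the sampling passband width reproduces, qualitatively, the design trade-off among sensitivity, spatial response, and sampling interval that the paper optimizes.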
Related papers
Journal of the Optical Society of America A, 1986
... Parallel to these developments, several researchers began applying the concepts of information theory, especially that of channel capacity as developed by Shannon, to optics, many attempting to estimate the resolution limit of an imaging system for a given signal-to ...
Information on the physical image quality of medical images is important for imaging system assessment, in order to promote and stimulate the development of state-of-the-art imaging systems. In this paper, we present a method for measuring the physical performance of medical imaging systems. In this method, mutual information (MI), a concept from information theory, is used to measure the combined noise and resolution properties of an imaging system. In our study, the MI was used as a measure to express the amount of information that an output image contains about an input object. The larger the MI value, the better the image quality. To validate the proposed method, computer simulations were first performed to investigate the effects of noise and resolution degradation on the MI. Then experiments were conducted to measure the physical performance of an imaging plate which was used as an image detector. Our simulation and experimental results confirmed that the combine...
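The MI measure described in this abstract can be sketched with a joint-histogram estimator. The step-wedge object and noise levels below are hypothetical stand-ins for the paper's experimental setup, not its protocol:

```python
import numpy as np

def mutual_information(obj, img, bins=32):
    """MI (bits) between an input object and an output image, estimated
    from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(obj.ravel(), img.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal of the object
    py = p.sum(axis=0, keepdims=True)   # marginal of the image
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
obj = np.repeat(np.linspace(0, 1, 8), 128)        # 8-level step-wedge object
clean = obj + 0.05 * rng.normal(size=obj.size)    # low-noise "image"
noisy = obj + 0.50 * rng.normal(size=obj.size)    # high-noise "image"
print(mutual_information(obj, clean), mutual_information(obj, noisy))
```

As the abstract reports, degrading the image (here with additive noise) lowers the estimated MI.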
Advanced Maui Optical and Space Surveillance Technologies Conference, 2009
Recent advances in optics and instrumentation have dramatically increased the amount of data, both spatial and spectral, that can be obtained about a target scene. The volume of the acquired data can and, in fact, often does far exceed the amount of intrinsic information present in the scene. In such cases, the large volume of data alone can impede the analysis and extraction of relevant information about the scene. One approach to overcoming this impedance mismatch between the volume of the data and the intrinsic information the data are supposed to convey is compressive sensing. Compressive sensing exploits the fact that most signals of interest, such as image scenes, possess natural correlations in their physical structure. These correlations, which can occur spatially as well as spectrally, can suggest a more natural sparse basis for compressing and representing the scene than standard pixels or voxels. A compressive sensing system attempts to acquire and encode the scene in this sparse basis, while preserving all relevant information in the scene. One criterion for assessing the content, acquisition, and processing of information in the image scene is Shannon information. This metric describes fundamental limits on encoding and reliably transmitting information about a source, such as an image scene. In this framework, successful encoding of the image requires an optimal choice of a sparse basis, while losses of information during transmission occur due to a finite system response and measurement noise. An information source can be represented by a certain class of image scenes, e.g., those that have a common morphology. The ability to associate the recorded image with the correct member of the class that produced the image depends on the amount of Shannon information in the acquired data. In this manner, one can analyze the performance of a compressive imaging system for a specific class or ensemble of image scenes.
We present such an information-based analysis of a compressive imaging system based on a new highly efficient and robust method that enables us to evaluate statistical entropies. Our method is based on the notion of density of states (DOS), which plays a major role in statistical mechanics by allowing one to express macroscopic thermal averages in terms of the number of configuration states of a system for a certain energy level. Instead of computing the number of microstates associated with a macroscopic energy of the system, however, we compute here the number of possible configurations (states) in the space of variables characteristic of an image scene and its observations that correspond to a certain probability value. This allows us to compute the statistical entropy of many correlated variables as an essentially one-dimensional (1D) probability integral, as we shall see presently. We assess the performance of a single pixel compressive sensing (CS) system based on the amount of information encoded and transmitted in parameters that characterize the information in the scene. Specifically, we shall study two applications of the CS approach, namely the problem of faint companion detection and the problem of satellite material disambiguation. Here, we compute the amount of statistical information the single-pixel data convey about the essential parameters of the scene, as a function of the choice of the projective measurement basis and the amount of measurement noise. The noise creates confusion when associating the recorded data with the correct member of the ensemble that produced the image. We show that multiple measurements enable one to mitigate this confusion noise.
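The closing claim, that multiple measurements mitigate confusion noise, can be illustrated with a toy single-pixel model. The two-scene ensemble (star with or without a faint companion), the ±1 projection masks, the noise level, and the maximum-likelihood decision rule below are all illustrative assumptions, not the authors' DOS-based machinery:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
scene_a = np.zeros(n); scene_a[20] = 1.0       # star only
scene_b = scene_a.copy(); scene_b[40] = 0.1    # star plus faint companion
sigma = 0.2                                    # measurement noise std

def classify_error(m, trials=2000):
    """Monte-Carlo error rate when distinguishing the two scenes from m
    noisy single-pixel projections (random +/-1 masks) by choosing the
    scene whose noiseless projections are nearest to the data."""
    errs = 0
    for _ in range(trials):
        phi = rng.choice([-1.0, 1.0], size=(m, n))
        truth = rng.integers(2)
        x = scene_b if truth else scene_a
        y = phi @ x + sigma * rng.normal(size=m)
        guess = int(np.sum((y - phi @ scene_b)**2) < np.sum((y - phi @ scene_a)**2))
        errs += guess != truth
    return errs / trials

print(classify_error(1), classify_error(32))
```

With a single projection the companion's contribution is buried in noise; accumulating many projections sharpens the association between the data and the correct member of the ensemble.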
Medical Imaging 2008: Physics of Medical Imaging, 2008
This paper presents an information-entropy-based metric for the combined evaluation of the resolution and noise properties of radiological images. The metric is expressed by the amount of transmitted information (TI). It is a measure of how much information an image contains about an object or input. Merits of the proposed method are its simplicity of computation and of the experimental setup. A computer-simulated step wedge was used for a simulation study on the relationship between TI and the degree of blur as well as the noise. Three acrylic step wedges were also manufactured and used as test sample objects for experiments. Two imaging plates for computed radiography were employed as information detectors to record X-ray intensities. We investigated the effects of noise and resolution degradation on the amount of TI by varying exposure levels. Simulation and experimental results show that the TI value varies when the noise level or the degree of blur is changed. To validate the reasoning and usefulness of the proposed metric, we also calculated and compared the modulation transfer functions and noise power spectra for the employed imaging plates. Results show that the TI correlates closely with both image noise and image blurring, and it may offer the potential to become a simple and generally applicable measure for the quality evaluation of medical images.
Multidimensional Systems and Signal Processing, 1992
Multiresponse imaging is a process that acquires A images, each with a different optical response, and reassembles them into a single image with an improved resolution that can approach 1/√A times the photodetector-array sampling lattice. Our goals are to optimize the performance of this process in terms of the resolution and fidelity of the restored image and to assess the amount of information required to do so. The theoretical approach is based on the extension of both image restoration and rate-distortion theories from their traditional realm of signal processing to image processing, which includes image gathering and display.
1985
In this paper we formulate and use information and fidelity criteria to assess image gathering and processing, combining optical design with image-forming and edge-detection algorithms. The optical design of the image-gathering system revolves around the relationship among sampling passband, spatial response, and signal-to-noise ratio (SNR). Our formulations of information, fidelity, and optimal (Wiener) restoration account for the insufficient sampling (i.e., aliasing) common in image gathering as well as for the blurring and noise that conventional formulations account for. Performance analyses and simulations for ordinary optical-design constraints and random scenes indicate that (1) different image-forming algorithms prefer different optical designs; (2) informationally optimized designs maximize the robustness of optimal image restorations and lead to the highest-spatial-frequency channel (relative to the sampling passband) for which edge detection is reliable (if the SNR is sufficiently high); and (3) combining the informationally optimized design with a 3 by 3 lateral-inhibitory image-plane-processing algorithm leads to a spatial-response shape that approximates the optimal edge-detection response of (Marr's model of) human vision and thus reduces the data preprocessing and transmission required for machine vision.
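The optimal (Wiener) restoration discussed above can be sketched in its textbook frequency-domain form. Note two simplifications relative to the paper: aliasing from insufficient sampling is omitted, and a per-realization "oracle" scene PSD stands in for the random-scene statistics; the 1-D scene, Gaussian OTF, and noise level are illustrative assumptions:

```python
import numpy as np

def wiener_restore(img_f, otf, scene_psd, noise_psd):
    """Frequency-domain Wiener (minimum-mean-square) restoration given
    the blur OTF and the scene/noise power spectra."""
    w = np.conj(otf) * scene_psd / (np.abs(otf)**2 * scene_psd + noise_psd)
    return w * img_f

rng = np.random.default_rng(2)
n = 256
scene = np.cumsum(rng.normal(size=n))        # random scene (1-D walk)
scene -= scene.mean()
freq = np.fft.fftfreq(n)
otf = np.exp(-(freq / 0.1)**2)               # Gaussian blur OTF
img = np.fft.ifft(otf * np.fft.fft(scene)).real + rng.normal(size=n)

est_f = wiener_restore(np.fft.fft(img), otf,
                       scene_psd=np.abs(np.fft.fft(scene))**2 / n,
                       noise_psd=np.ones(n))
est = np.fft.ifft(est_f).real
print(np.mean((img - scene)**2), np.mean((est - scene)**2))
```

The restored image has lower mean-square error than the degraded one because the filter boosts frequencies where the scene dominates the noise and suppresses those where it does not.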
Medical Imaging, 2011
Journal of the Optical Society of America, 1986
Suitability of the average mutual information (AMI) as a quality criterion in digital printing reproduction is analyzed. The sensitivity of the AMI to variation in the basic bandwidth measures of quality is discussed on the basis of expressions derived. By defining the AMI as a function of the spatial frequency, comparisons with other quality criteria such as the modulation transfer function (or the coherent transfer function) and signal-to-noise ratio are made. Computed results for digital halftoning and impact printing confirm the applicability of the AMI.
Journal of Electronic Imaging, 2009
We describe an information-theoretic method for quantifying overall image quality in terms of mutual information (MI). MI is used to express the amount of information that an output image contains about an input object. The larger the MI value, the better the image quality. Therefore, the overall quality of an image can be quantitatively evaluated by measuring MI. We demonstrated by way of image simulation that MI increases with increasing contrast and decreases with increasing noise and blur. We investigated the utility of this method by applying it to evaluate the performance of four imaging plate detectors. We also compared the evaluation results in terms of MI against those in terms of the detective quantum efficiency conventionally used for characterizing the efficiency performance of imaging systems. Our results demonstrate that the proposed method is simple to implement and has potential usefulness for the evaluation of overall image quality.
Journal of Software Engineering and Applications, 2010
In digital radiographic systems, a tradeoff exists between image resolution (or blur) and noise characteristics. An imaging system may be superior in one image quality characteristic while being inferior in another. In this work, a computer simulation model is presented that uses a mutual-information (MI) metric to examine the tradeoff behavior between resolution and noise. MI is used to express the amount of information that an output image contains about an input object. The basic idea is that imaging reduces the uncertainty associated with an object, and this reduction in uncertainty equals the value of MI. The larger the MI value, the better the image quality. The simulation model calculated MI as a function of signal-to-noise ratio and as a function of resolution for two image contrast levels. Our simulation results demonstrated that the MI associated with overall image quality is much more sensitive to noise than to blur, although a tradeoff relationship between noise and blur exists. However, we found that overall image quality is primarily determined by image blur at very low noise levels.
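The noise-versus-blur tradeoff can be probed with a small MI sweep. The step-wedge object, moving-average blur, noise levels, and histogram estimator below are illustrative assumptions, not the paper's simulation model:

```python
import numpy as np

def mi_bits(obj, img, bins=32):
    """Joint-histogram estimate of mutual information in bits (sketch)."""
    p, _, _ = np.histogram2d(obj.ravel(), img.ravel(), bins=bins)
    p = p / p.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(3)
obj = np.repeat(np.linspace(0, 1, 8), 256)   # 8-level step-wedge object

def degrade(blur, noise_std):
    """Moving-average blur of width `blur` plus white Gaussian noise."""
    k = np.ones(blur) / blur
    return np.convolve(obj, k, mode="same") + noise_std * rng.normal(size=obj.size)

# MI falls as noise grows at fixed blur, and as blur grows at fixed noise.
print([round(mi_bits(obj, degrade(3, s)), 2) for s in (0.01, 0.1, 0.4)])
print([round(mi_bits(obj, degrade(b, 0.1)), 2) for b in (1, 9, 41)])
```

Sweeping both axes in this way reproduces the qualitative tradeoff behavior the abstract reports.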

References (19)
- P. B. Fellgett and E. H. Linfoot, Philos. Trans. R. Soc. London 247, 269 (1955).
- E. H. Linfoot, J. Opt. Soc. Am. 45, 808 (1955).
- C. Shannon, Bell Syst. Tech. J. 27, 379 (1948); or C. Shannon and W. Weaver, The Mathematical Theory of Communication (U. Illinois Press, Urbana, 1964).
- F. O. Huck and S. K. Park, Appl. Opt. 14, 2508 (1975).
- F. O. Huck, N. Halyo, and S. K. Park, Appl. Opt. 20, 1990 (1981).
- F. O. Huck et al., Opt. Laser Technol. 15, 21 (1982).
- O. H. Schade, Sr., J. Soc. Motion Pict. Telev. Eng. 56, 131 (1951).
- L. M. Biberman, Ed., Perception of Displayed Information (Plenum, New York, 1973).
- D. P. Petersen and D. Middleton, Inf. Control 5, 279 (1962).
- R. M. Mersereau, Proc. IEEE 67, 930 (1979).
- F. O. Huck, N. Halyo, and S. K. Park, Appl. Opt. 19, 2174 (1980).
- S. K. Park and R. A. Schowengerdt, Appl. Opt. 21, 3142 (1982).
- Y. Itakura, et al., Infrared Phys. 14, 17 (1974).
- H. H. Hopkins, Proc. R. Soc. London 231, 91 (1955).
- M. Born and E. Wolf, Principles of Optics (Pergamon, New York, 1965).
- M. Mino and Y. Okano, Appl. Opt. 10, 2219 (1971). Expressions for P2 and q2 in Eq. (9) contain a typographical error.
- Y. W. Lee, Statistical Theory of Communication (John Wiley and Sons, New York, 1964).
- J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, New York, 1968).
- A. V. Oppenheim and R. W. Schafer, Digital Signal Processing (Prentice-Hall, Englewood Cliffs, N.J., 1975).
Richard Samms