Camera Image Quality Metrics (Sharpness)
Revision as of 00:53, 20 March 2014
Introduction
Image system quality evaluation has long been researched in industry and academia. The International Organization for Standardization (ISO) is developing a set of camera image quality metrics to quantify the spatial resolution, noise, and color accuracy of digital cameras. Several metrics and methods have been implemented to measure and calculate the spatial frequency response, which quantifies the sharpness of the image system. More specifically, the MTF50 and acutance metrics in the ISO 12233 standard are the most appropriate for estimating the blurriness of the system. By varying camera settings such as the F-number of the lens and the pixel size of the sensor, an optimized image system can be designed by plotting the MTF50 and acutance metrics. In this project, we also study the correlation between these objective metrics and the human viewing experience, estimated as the mean opinion score (MOS), which is computed by averaging all team members' subjective quality ratings of a given image.
This paper is organized as follows. The background section gives an overview of past work in the image quality metrics area. The methods section describes our methodology for metrics data generation and the design-compare flow. The results section presents our project results, including sample figures showing photos simulated under different conditions with varying viewing experiences. We draw our conclusions in the conclusions section and provide our references and source code in the references and appendix sections, respectively.
Background
In a modern camera system, the only widely accepted and marketed camera quality metric is the megapixel count. This metric, however, does not correlate well with actual image quality. Many image quality metrics have been proposed in both the literature and industry, such as ISO 12233, MTF, and image acutance. These are all objective metrics that try to measure color rendition, sharpness, signal-to-noise ratio, and so on, hoping to achieve a strong correlation with how good an image actually looks. Even well-studied quality metrics may correspond poorly with experienced quality, even when the important aspects of the human visual system have been taken into account. This is because image quality to the human eye is an aesthetic standard that varies from person to person. Various projects (such as Camera Phone Image Quality (CPIQ)) have been proposed to find image quality metrics that model and correct for the human visual system. Nonetheless, they fail to address the factor of actual human aesthetics.
On the other hand, the University of Texas has compiled a subjective image quality assessment (SIQA) database that collects image quality data based on human subjective feedback. Some research has been done to correlate specific metrics, such as PSNR, SSIM, and VDP, with the mean opinion score (MOS). Yet these efforts mostly focus on the difference between a distorted image and the original image, not on the camera and environmental settings. We aim to estimate the correlation between device-based metrics, such as MTF50, image acutance, and pixel size, and the consolidated image viewing experience by computing the metric data and the MOS.
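The MOS computation and the metric-to-MOS correlation described above can be illustrated with a short sketch. The ratings and MTF50 values below are made-up placeholder numbers, not our actual data, and the sketch is in Python rather than the MATLAB environment our simulations use:

```python
import numpy as np

# Hypothetical data: MTF50 values (cycles/mm) for five simulated camera
# settings, and one row of subjective ratings (1-5 scale) per viewer.
mtf50 = np.array([18.0, 25.0, 33.0, 41.0, 52.0])
ratings = np.array([
    [2, 3, 3, 4, 5],
    [2, 2, 4, 4, 4],
    [1, 3, 3, 5, 5],
], dtype=float)

mos = ratings.mean(axis=0)          # MOS = average rating per image
r = np.corrcoef(mtf50, mos)[0, 1]   # Pearson correlation coefficient
print(f"MOS per setting: {mos}")
print(f"Pearson r between MTF50 and MOS: {r:.3f}")
```

With real data, a high Pearson r would indicate that the objective metric tracks the consolidated viewing experience well.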
Methods
The modulation transfer function (MTF) is defined as the normalized magnitude of the Fourier transform of the imaging system's point spread function. Practically, it is a metric quantifying the sharpness of the image over the range of spatial frequencies below the Nyquist frequency of the system. We measured the system MTF using the ISO 12233 standard. This method uses a black-and-white slanted-edge image as the test scene and then measures the response in a rectangular area at the edge to estimate the system MTF. MTF50 is the specific spatial frequency at which the amplitude of the MTF curve falls to 50% of its maximum.
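The edge-based MTF50 computation can be sketched as follows. This is a simplified illustration, not the full ISO 12233 slanted-edge procedure (which uses the edge tilt to build an oversampled edge profile): it starts from a synthetic edge spread function produced by an assumed Gaussian PSF of width 1.5 pixels.

```python
import numpy as np
from math import erf

# Synthetic edge spread function (ESF): an ideal step edge blurred by
# an assumed Gaussian PSF with sigma = 1.5 pixels.
sigma = 1.5
x = np.arange(-32, 32)
esf = np.array([0.5 * (1 + erf(v / (sigma * np.sqrt(2)))) for v in x])

lsf = np.gradient(esf)             # line spread function = d(ESF)/dx
mtf = np.abs(np.fft.rfft(lsf))     # MTF = |Fourier transform of LSF|
mtf /= mtf[0]                      # normalize so MTF(0) = 1
freqs = np.fft.rfftfreq(len(lsf))  # spatial frequency in cycles/pixel

# MTF50: interpolate the first crossing below 50% of the maximum.
idx = int(np.argmax(mtf < 0.5))
f_lo, f_hi = freqs[idx - 1], freqs[idx]
m_lo, m_hi = mtf[idx - 1], mtf[idx]
mtf50 = f_lo + (0.5 - m_lo) * (f_hi - f_lo) / (m_hi - m_lo)
print(f"MTF50 = {mtf50:.3f} cycles/pixel")
```

The chain is the one the standard relies on: differentiating the ESF gives the line spread function, whose Fourier magnitude is the system MTF.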
We use the Image Systems Evaluation Toolkit (ISET) to simulate the image system response in MATLAB. This software consists of four modules: Scene, Optics, Sensor, and Processor. The Scene module represents the input scene as a multidimensional spectral radiance array at each pixel. The Optics module converts the scene radiance data into an irradiance image through the optics. The Sensor module then transforms the irradiance into a sensor signal using a model that accounts for both the optical and electrical properties of the sensor and pixel. The Processor module finally transforms the electrons at each pixel into a digital image that is rendered for a color display. This module includes algorithms for demosaicing, color conversion to a calibrated color space, and color balancing.
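The four-stage flow can be caricatured with a monochrome toy pipeline. ISET itself is a MATLAB toolkit operating on full spectral data with far more detailed optics, sensor, and processing models; the Python sketch below only mirrors the module order, and every constant (PSF width, sensor gain, full-well capacity) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scene: a monochrome radiance map (bright square on dark background);
# a real ISET scene stores spectral radiance at each pixel.
scene = np.zeros((64, 64))
scene[16:48, 16:48] = 100.0

# Optics: blur with a small Gaussian PSF via FFT convolution.
yy, xx = np.mgrid[-32:32, -32:32]
psf = np.exp(-(xx**2 + yy**2) / (2 * 1.2**2))
psf /= psf.sum()
irradiance = np.real(np.fft.ifft2(
    np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))

# Sensor: convert to electrons (assumed gain 50 e-/radiance unit),
# apply photon (shot) noise, clip at an assumed full-well capacity.
electrons = rng.poisson(np.clip(irradiance, 0, None) * 50)
electrons = np.clip(electrons, 0, 20000)

# Processor: scale and quantize to an 8-bit image for display.
digital = (electrons / electrons.max() * 255).astype(np.uint8)
print(digital.shape, digital.max())
```

A real pipeline would also demosaic a color filter array and convert to a calibrated color space, which this grayscale sketch omits.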
We first evaluate the sensitivity of the final image quality to each parameter of the image system. We find that the F-number of the lens and the pixel size of the sensor have the strongest effect on image sharpness, while the effects of other parameters, such as photon noise, viewing distance, and luminance, are insignificant. By sweeping the F-number and pixel size, we obtain MTF50 and acutance values for each setting. Then, for each setting, we simulate the response image using multispectral data of actual scenes. In this project, we chose face, fruit, and landscape image data for testing.
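The sweep over F-number and pixel size can be sketched with a closed-form stand-in for the full simulation: a diffraction-limited lens MTF multiplied by a pixel-aperture MTF, with MTF50 found numerically. This idealized model (assuming 550 nm light and ignoring aberrations and processing) shows the expected trend that sharpness falls as either the F-number or the pixel size grows.

```python
import numpy as np

WAVELENGTH_MM = 550e-6  # assumed wavelength: 550 nm, in mm

def system_mtf(f, f_number, pixel_mm):
    """Diffraction-limited lens MTF times the pixel-aperture MTF.

    f is spatial frequency in cycles/mm. Aberrations, optical low-pass
    filtering, and demosaicing are all ignored in this toy model.
    """
    fc = 1.0 / (WAVELENGTH_MM * f_number)   # diffraction cutoff frequency
    s = np.clip(f / fc, 0.0, 1.0)
    diffraction = (2 / np.pi) * (np.arccos(s) - s * np.sqrt(1 - s**2))
    pixel = np.abs(np.sinc(f * pixel_mm))   # np.sinc(x) = sin(pi x)/(pi x)
    return diffraction * pixel

def mtf50(f_number, pixel_mm):
    """Spatial frequency (cycles/mm) where the system MTF drops to 50%."""
    f = np.linspace(0, 1.0 / (WAVELENGTH_MM * f_number), 4000)
    m = system_mtf(f, f_number, pixel_mm)
    return f[np.argmax(m < 0.5)]            # first sample below 50%

for n in (2.8, 5.6, 11):
    for p_um in (1.4, 2.8):
        print(f"F/{n}, {p_um} um pixel: "
              f"MTF50 = {mtf50(n, p_um * 1e-3):.0f} cycles/mm")
```

In the project itself these two parameters are swept inside the full ISET simulation rather than with this analytic shortcut.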
Results
Conclusions