Human Optics as a Function of Eccentricity

From Psych 221 Image Systems Engineering
Revision as of 02:14, 15 December 2018

Introduction

Schematic eye models that closely reproduce the anatomy and optical properties of the human eye are extremely useful, because they make it possible to simulate the eye's optical performance. Such models are used in research and development, for example in ophthalmic lens design, refractive surgery, and the study of optical component systems[1]. Optical properties such as spherical and chromatic aberration, along with polychromatic point spread functions (PSF) and modulation transfer functions (MTF), have been studied on axis.

While a real eye is not rotationally symmetric, schematic models are usually taken to be axially symmetric. Several wide-angle models provide good predictions for on- and off-axis aberrations, but because of this assumed symmetry they cannot fit every aberration at every retinal location. Off-axis performance of the human eye is comparatively less well understood, so eye models require verification. In this project, we quantify the off-axis performance of the Navarro model by computing optical images of a slanted bar at different angles away from the center of the retina and measuring the MTF at each of these locations. We then compare this performance with known values from the literature.

Background

Several schematic eye models have been compared in the literature against data from real eyes to assess their relative utility, using metrics such as wavefront aberrations, image quality, and peripheral refraction profiles. The Navarro eye model is of particular interest in this project. The following figures, taken from the Williams[4] paper, show the published results for the Navarro eye model that will be compared to our results.

Figure 1: MTFs at eccentricities 0, 10, 20 and 40 deg for an average optical quality of the eye in the horizontal meridian of the temporal retina.

Figure 2: MTFs at eccentricities 0, 10, 20 and 40 deg for an average optical quality of the eye at the tangential and sagittal foci of the temporal retina.

The slanted-edge target is used as the test scene; it is the optical equivalent of an electrical step function. The illuminance profile across the boundary is the edge spread function (ESF), and the Fourier transform of its derivative, the line spread function, gives the modulation transfer function. Because the edge is slanted, the effective sampling resolution exceeds the pixel pitch: the spacing of the ESF "samples" is the pixel pitch scaled by the rotation angle of the target[3].
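The ESF-to-MTF chain described above can be sketched numerically. This is a minimal illustration, not the ISET/ISO12233 implementation, and the synthetic tanh edge is purely an assumed example:

```python
import numpy as np

def mtf_from_esf(esf, dx=1.0):
    """Compute the MTF from a sampled edge spread function (ESF).

    The line spread function (LSF) is the derivative of the ESF, and
    the MTF is the magnitude of its Fourier transform, normalized so
    that MTF(0) = 1.
    """
    lsf = np.gradient(esf, dx)            # differentiate the ESF
    mtf = np.abs(np.fft.rfft(lsf))        # one-sided spectrum
    mtf /= mtf[0]                         # normalize to unity at DC
    freqs = np.fft.rfftfreq(len(esf), d=dx)
    return freqs, mtf

# Synthetic blurred step edge for demonstration
x = np.linspace(-5, 5, 512)
esf = 0.5 * (1 + np.tanh(x))
freqs, mtf = mtf_from_esf(esf, dx=x[1] - x[0])
```

The contrast falls off with spatial frequency, as in the MTF traces of Figures 1 and 2.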

Methods

We developed an automation algorithm that measures a high-resolution MTF within a user-selected region of interest on the retina using the Navarro eye model. The steps used to create the algorithm are detailed in this section; the extensively commented code in Appendix I serves as a line-by-line resource for understanding the algorithm.

One goal of the eccentricity automation algorithm is to avoid rendering a high-resolution image over a large field of view, which is often prohibitively expensive. Instead, the algorithm provides the same benefits as a full high-resolution render at a fraction of the computational cost (the exact reduction depends on the region of interest and the chosen parameters). Second, the modulation transfer function for the selected region of interest is calculated at each eccentricity, using the built-in ISO12233 function in ISET, and plotted for the user to interpret.

The user-selected inputs to the function are the retinal region of interest, in degrees relative to the fovea (in both the x and y directions), and the crop-window resolution. The hard-coded parameters of the algorithm are FOV4MTF, which sets the number of degrees each crop-window spans, and LOW_RES, the resolution of the low-resolution full-FOV image used to locate the crop-windows. A crop-window of 2 degrees was found to contain enough information, including the entire blurred edge, even at eccentricities of 30 degrees, as seen in Figure 3.

Figure 3: A crop-window at an eccentricity of 0 and 30 degrees.

Our crop-window approach first requires a low-resolution render of the whole scene. A resolution of 100 pixels was chosen because the row and column numbers used to set the cropwindow parameter in sceneEye are normalized, so two digits of accuracy are enough.
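The conversion from a crop-window given in degrees to the normalized rectangle that a cropwindow parameter expects can be sketched as follows. The function name, argument order, and rectangle layout here are hypothetical, not the actual ISET/sceneEye interface:

```python
def crop_window_norm(center_deg, span_deg, full_fov_deg):
    """Map a crop-window, given in degrees relative to the image
    center, to a normalized [x1, x2, y1, y2] rectangle in [0, 1].

    Values are rounded to two decimals, matching the observation that
    a 100-pixel low-resolution render (two digits of accuracy) is
    enough to place the windows.
    """
    cx, cy = center_deg
    half = span_deg / 2.0

    def to_norm(d):
        # degrees relative to center -> fraction of the full FOV
        return 0.5 + d / full_fov_deg

    rect = [to_norm(cx - half), to_norm(cx + half),
            to_norm(cy - half), to_norm(cy + half)]
    return [round(v, 2) for v in rect]
```

For example, a 2-degree window at the fovea inside a 30-degree FOV occupies roughly the central 6% of the normalized image.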

Figure 4: ROI and crop-window comparisons for FOV values of 1, 15, and 30 degrees.

Every crop-window must contain a slanted edge in order to calculate the MTF at that eccentricity. Although the slanted bar appears in only part of the rendered retinal image, the user may input a region of interest that does not contain the edge of the slanted-edge scene at all. Because the Navarro eye model in ISET is radially symmetric, we can transform such an input to an equivalent position that does contain a slanted edge, and calculate the MTF at the crop-window's radial distance, as seen in Figure 4. The method is to compute the radial distance of the (x, y) coordinates from the fovea:


r = sqrt(x^2 + y^2)

and use this value as the relative field of view:

relativeDegree = r / (LOW_RES * FOV)
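The radial remapping can be sketched as follows. This is an illustration of the symmetry argument, not the project's MATLAB code; the helper that maps a point onto the horizontal meridian is hypothetical:

```python
import math

def equivalent_eccentricity(x_deg, y_deg):
    """Radial eccentricity of a retinal point (degrees from the fovea).

    Because the Navarro model in ISET is radially symmetric, the MTF
    at (x, y) equals the MTF at any point the same distance r from
    the fovea.
    """
    return math.hypot(x_deg, y_deg)

def remap_to_meridian(x_deg, y_deg):
    # hypothetical helper: the equivalent position on the horizontal
    # meridian, where the slanted edge is present in the rendering
    return (equivalent_eccentricity(x_deg, y_deg), 0.0)
```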

We then find the closest point of the user-input region of interest (the green box) to the center of the first crop-window. This process loops until the automation algorithm moves beyond the region of interest, at which point the loop terminates. This behavior can be seen in Figure 5, where the region of interest does not pass through (0, 0).

Figure 5: ROI and crop-window comparisons for off-center ROIs.

The crop-windows must be placed row by row because, at large fields of view, the scene is distorted and the slanted bar is no longer a straight line due to the off-axis optical properties of the eye; Figure 4 shows these effects at a field of view of 30 degrees. Each crop-window is positioned by finding the first column in its row where the value is nonzero. The window is then shifted left by one third of its width so that the slanted bar is not at the window's edge, allowing the whole blurred edge to fall inside the crop-window. The next crop-window is located by shifting downward by one window size and finding the new nonzero column. This loop continues until the center of the image is reached or the radius of interest is exceeded.
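The row-by-row placement loop can be sketched as follows. This is a Python illustration, not the actual MATLAB implementation, and the binary edge mask is an assumed simplification of the low-resolution render:

```python
import numpy as np

def place_crop_windows(edge_mask, win):
    """Place crop-windows along a (possibly curved) slanted edge.

    edge_mask : 2-D boolean array from the low-resolution render,
                True where the edge has been crossed in that row.
    win       : crop-window size in low-resolution pixels.

    In each step, the first nonzero column marks the edge; the window
    is shifted left by one third of its width so the whole blurred
    edge falls inside it, then we step down by one window size.
    """
    windows = []
    rows, _ = edge_mask.shape
    r = 0
    while r + win <= rows:
        nz = np.flatnonzero(edge_mask[r])
        if nz.size == 0:                  # no edge in this row
            break
        c = max(nz[0] - win // 3, 0)      # shift left by 1/3 window
        windows.append((r, c))            # top-left corner
        r += win                          # move down one window
    return windows
```

On a mask where the edge drifts rightward with row number (as in the distorted wide-FOV renders), the returned corners track the curve.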

Lastly, for each crop-window, ISO12233 is used to plot the reduction in contrast as a function of spatial frequency, along with an MTF50 plot across the different eccentricities.
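MTF50, the spatial frequency at which contrast falls to half, can be recovered from an MTF trace by linear interpolation between the bracketing samples. This sketch is an illustration of the metric, not ISET's ISO12233 routine:

```python
import numpy as np

def mtf50(freqs, mtf):
    """Spatial frequency at which the MTF first falls to 0.5.

    Linearly interpolates between the two samples that bracket 0.5.
    Returns None if the curve never drops below 0.5.
    """
    below = np.flatnonzero(mtf < 0.5)
    if below.size == 0:
        return None
    i = below[0]
    if i == 0:                      # already below 0.5 at the start
        return float(freqs[0])
    f0, f1 = freqs[i - 1], freqs[i]
    m0, m1 = mtf[i - 1], mtf[i]
    return float(f0 + (0.5 - m0) * (f1 - f0) / (m1 - m0))
```

Plotting this single number against eccentricity summarizes how optical quality degrades away from the fovea.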

Results

Conclusions

The eccentricity automation algorithm functions properly: it takes any given region of interest and plots the MTF characteristics across different eccentricities, with the expected trends, in a fraction of the computational time that would be required without it. One possible source of error is that the Navarro model is radially symmetric, so it does not account for the added aberrations of a real human eye, which is not radially symmetric. Another is that the 0.285 mm/deg conversion used by the model only holds for small fields of view and breaks down at wider angles.

This project still leaves considerable room for future work, namely support for scenes other than the slanted bar and for non-radially-symmetric eye models (e.g. astigmatism). These improvements will require significant modifications, as the current algorithm is specific to the slanted bar and assumes a radially symmetric model. Because our algorithm assumed radial symmetry, we did not have to translate the scene, which would otherwise be required. Lastly, the eccentricity algorithm should be run through different eye models to compare and contrast the results.

Appendix
