Human Optics as a Function of Eccentricity

From Psych 221 Image Systems Engineering
Revision as of 02:38, 15 December 2018 by Student2018

Introduction

Anatomically faithful schematic eye models that reproduce the eye's optical properties are extremely useful because they can simulate the performance of a human eye. These models serve research and development purposes such as ophthalmic lens design, refractive surgery, and the study of optical component systems[1]. Optical properties such as spherical and chromatic aberrations, along with polychromatic point spread functions (PSF) and modulation transfer functions (MTF), have been studied on axis.

While a real eye is not rotationally symmetric, schematic models are taken to be axially symmetric. Several wide-angle models have been designed that give good predictions for on- and off-axis aberrations, but because of their rotational symmetry they do not fit each aberration exactly at every retinal location. The off-axis performance of the human eye is comparatively less well understood; consequently, eye models require verification. In this project, we quantify the off-axis performance of the Navarro model by calculating optical images of a slanted bar at different angles away from the center of the retina and computing the MTF at each of these locations. We then compare this performance with known values from the literature.

Background

Several schematic eye models have been compared in the literature against data from real eyes to assess their relative utility, using metrics such as wavefront aberrations, image quality metrics, and peripheral refraction profiles. The Navarro eye model is of particular interest in this project. The following figures, from the Williams paper[4], show the known results for the Navarro eye model from the literature against which our results will be compared.

Figure 1: MTFs at eccentricities 0, 10, 20 and 40 deg for an average optical quality of the eye in the horizontal meridian of the temporal retina.
Figure 2: MTFs at eccentricities 0, 10, 20 and 40 deg for an average optical quality of the eye at the tangential and sagittal foci of the temporal retina.

The slanted-edge target is used as the test scene; it is the optical equivalent of an electrical step function. The illuminance profile of the test scene across the boundary is the edge spread function (ESF), and the magnitude of the Fourier transform of its derivative, the line spread function, is the modulation transfer function. The slant increases the effective sampling resolution of the measurement, since the spacing of the ESF “samples” is the pixel pitch times the rotation angle of the target[3].
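The ESF-to-MTF pipeline just described can be sketched as follows. This is a simplified illustration in Python (the project code itself is MATLAB), assuming a near-vertical edge and plain row averaging, without the edge-angle supersampling of a full ISO 12233 implementation:

```python
import numpy as np

def slanted_edge_mtf(image, pixel_deg):
    """Estimate the MTF from an image containing a dark/bright edge.

    image: 2-D array with a near-vertical edge.
    pixel_deg: angular pixel pitch in degrees.
    Returns (spatial frequencies in cycles/deg, normalized MTF).
    """
    # Average rows to get an edge spread function (ESF); a full
    # ISO 12233 implementation supersamples along the slant instead.
    esf = image.mean(axis=0)
    # Differentiate the ESF to obtain the line spread function (LSF).
    lsf = np.diff(esf)
    # The MTF is the magnitude of the Fourier transform of the LSF.
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]  # normalize so that MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_deg)
    return freqs, mtf
```

A sharper edge produces an MTF that stays high out to larger spatial frequencies; blurring the edge (as off-axis optics do) pulls the curve down faster.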

Methods

We developed an automation algorithm to measure a high-resolution MTF within a user-selected region of interest on the retina using the Navarro eye model. The methods used to create the algorithm are detailed in this section; the extensively commented code in Appendix I also serves as a line-by-line guide for the user.

One goal of the eccentricity automation algorithm is to avoid requiring a high-resolution rendering of a large field-of-view image, which is often prohibitively expensive computationally. Instead, the algorithm provides the same benefits as rendering a full, high-resolution image at a fraction of the computational cost (the exact reduction depends on the region of interest and on the parameters). Second, the modulation transfer function for the selected region of interest at different eccentricities is calculated and plotted for the user, using the built-in ISO12233 function in ISET.

The user-selected inputs to the function are the retinal region of interest, in degrees relative to the fovea (in both the x and y directions), and the crop-window resolution. The hard-coded parameters of the algorithm are FOV4MTF, which determines the number of degrees each crop-window spans, and LOW_RES, the low resolution of the full-FOV image from which the crop-windows are located. A crop-window of 2 degrees was found to contain enough information, including the entire blurred edge, even at eccentricities of 30 degrees, as seen in Figure 3.

Figure 3: A crop-window at eccentricities of 0 and 30 degrees.

Our crop-window approach first requires a render of the whole scene at a lower resolution. A resolution of 100 was determined to be a reasonable choice: the row and column numbers used to set the cropwindow parameter in sceneEye are normalized, so two digits of accuracy are sufficient.
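Converting a crop-window from low-resolution pixel coordinates to the normalized form passed to sceneEye is then a simple division. An illustrative Python sketch (the helper name is ours; the project's MATLAB code in Appendix I does this inline):

```python
def normalized_cropwindow(row, col, size_low, low_res):
    """Convert a crop-window given in low-res pixel coordinates to the
    normalized [x_min, x_max, y_min, y_max] form used for the sceneEye
    cropwindow parameter. Each value lies in [0, 1], so at
    LOW_RES = 100 roughly two digits of precision suffice.
    """
    return [col / low_res, (col + size_low) / low_res,
            row / low_res, (row + size_low) / low_res]
```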

Figure 4: ROI and crop-window comparisons for FOV values of 1, 15, and 30 degrees.

All crop-windows must contain a slanted edge in order to calculate the MTF at the corresponding eccentricity. Although the slanted bar appears in only part of the rendered retinal image, the user may input regions of interest that do not contain the edge of the slanted-edge scene at all. Because the Navarro eye model in ISET is radially symmetric, however, the user input can be transformed to an equivalent position that does contain a slanted edge, so the MTF can still be calculated at the crop-window's radial distance, as seen in Figure 4. The method is to calculate the radial distance from the fovea of the (x, y) coordinates:


r = √(x² + y²)

and use this value as the relative field of view:

relative degree = (r / LOW_RES) × FOV
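Together, these two relations convert a retinal pixel position into an eccentricity in degrees. A minimal Python sketch of the conversion (the project code is MATLAB; the function name and the scalar center_px argument are illustrative, assuming a square LOW_RES × LOW_RES render):

```python
import math

def pixel_to_eccentricity(x_px, y_px, center_px, low_res, fov_deg):
    """Eccentricity in degrees from the fovea of a pixel in the
    low-resolution render, assuming a radially symmetric model.

    x_px, y_px: pixel coordinates; center_px: fovea coordinate
    (same for x and y in a square image); low_res: image width in
    pixels; fov_deg: full field of view of the render in degrees.
    """
    # Radial pixel distance from the fovea: r = sqrt(x^2 + y^2).
    r = math.hypot(x_px - center_px, y_px - center_px)
    # Scale pixels to degrees: relative degree = (r / LOW_RES) * FOV.
    return r / low_res * fov_deg
```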

We calculate the closest point from the user-input region of interest (the green box) to the center of the first crop-window. This process loops until the automation algorithm exceeds the region of interest, at which point the loop terminates. This behavior can be seen in Figure 5, where the region of interest does not pass through (0, 0).

Figure 5: ROI and crop-window comparisons for off-center ROIs.

The crop-windows must be placed row by row because, when the FOV is large, the scene is distorted and the slanted bar is no longer a straight line due to the off-axis optical properties of the eye. Figure 4 shows these effects at a field of view of 30 degrees. Each crop-window is positioned by finding the first column in its row where the value is nonzero; the window is then shifted left by one third of its width so that the slanted bar does not sit at the window's edge, allowing the whole blurred edge to fit inside the crop-window. The next crop-window is located by shifting downward by one window size and finding the new first nonzero column. This loop continues until the center of the image is reached or the radius of interest is exceeded.
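The row-by-row scan just described can be sketched as follows. This is an illustrative Python version of the logic (the actual implementation is the MATLAB in Appendix I); it assumes the low-resolution render has already been thresholded into a boolean mask of the bar:

```python
import numpy as np

def find_crop_windows(mask, size_low, r_start, center_row):
    """Locate crop-window upper-left corners row by row.

    mask: 2-D boolean array, True where the slanted bar is visible
    in the low-resolution render.
    size_low: crop-window side length in low-res pixels.
    r_start: first row containing the bar; center_row: image center.
    Returns a list of (row, col) upper-left vertices.
    """
    shift = size_low // 3  # keep the edge away from the window border
    poi = []
    nrow = 0
    while True:
        row = nrow * size_low + r_start
        # Stop at the image center; radial symmetry covers the rest.
        if row > center_row or row >= mask.shape[0]:
            break
        cols = np.flatnonzero(mask[row])
        if cols.size == 0:
            break
        # The first nonzero column marks the bar; shift left by a
        # third of the window so the whole blurred edge fits inside.
        poi.append((row, cols[0] - shift))
        nrow += 1
    return poi
```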

Lastly, for each crop-window, ISO12233 is used to plot the reduction in contrast as a function of spatial frequency, along with an MTF50 plot across the different eccentricities.
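For reference, the MTF50 extraction in Appendix I simply takes the first sampled frequency at which the MTF drops below 0.5. A slightly refined sketch that linearly interpolates the 0.5 crossing (illustrative Python, not the project code):

```python
import numpy as np

def mtf50(freqs, mtf):
    """Spatial frequency (cycles/deg) at which the MTF first falls
    below 0.5, linearly interpolated between samples.
    Returns None if contrast never drops to half.
    """
    below = np.flatnonzero(np.asarray(mtf) < 0.5)
    if below.size == 0:
        return None
    i = below[0]
    if i == 0:
        return freqs[0]
    # Linear interpolation across the 0.5 crossing.
    f0, f1 = freqs[i - 1], freqs[i]
    m0, m1 = mtf[i - 1], mtf[i]
    return f0 + (0.5 - m0) * (f1 - f0) / (m1 - m0)
```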

Results

The first diagnostic test was a sanity check. Figure 6 shows that as eccentricity increased, the contrast reduction grew, which is the expected behavior. The MTF traces also followed the expected trends, satisfying the sanity check. Figure 7 shows the MTF50 spatial frequencies for the same eccentricities in the same simulation, again following the expected trend.

Figure 6: MTF plot at different eccentricities with crop window resolution = 800, accommodation = 1, pupil diameter = 4 mm, CA Bands = 1, Rays = 64.
Figure 7: MTF50 plot at different eccentricities with crop window resolution = 800, accommodation = 1, pupil diameter = 4 mm, CA Bands = 1, Rays = 64.

With the sanity check complete, the parameters were set to those used in the Williams paper[4]. Note that a low ray count was used to reduce computation time in forming the plots; for future tests, we suggest increasing the ray count from the current 128 to 4,000. The results can be seen in Figures 8 and 9. The MTF trends with the publication's parameters still follow the expected behavior.

Figure 8: MTF plot at different eccentricities with crop window resolution = 800, accommodation = 0.33, pupil diameter = 3 mm, CA Bands = 8, Rays = 128.
Figure 9: MTF50 plot at different eccentricities with crop window resolution = 800, accommodation = 0.33, pupil diameter = 3 mm, CA Bands = 8, Rays = 128.
Figure 10: MTF plot at different eccentricities comparing the ISET results to the Williams publication results[4].

In the comparison between the ISET Navarro model simulations and the publication, shown in Figure 10, similar trends were seen at corresponding eccentricity values: the larger the eccentricity, the faster the MTF curve falls off, in both the ISET and publication results. However, the MTF50 values for the ISET results all occur at lower spatial frequencies than in the publication.

Conclusions

The eccentricity automation algorithm functions properly: it takes any given region of interest and plots the MTF characteristics across different eccentricities with the expected trends, in a fraction of the computation time required without the algorithm. One possible source of error is that the radially symmetric Navarro model does not account for the additional aberrations of a real human eye, which is not radially symmetric. Another is that the 0.285 mm/deg conversion used by the model only holds for small fields of view and breaks down at wider angles.

This project still leaves considerable room for future work: notably, algorithm support for scenes other than the slanted bar, and support for non-radially-symmetric eye models (e.g., with astigmatism). These improvements will require significant modifications, as the current algorithm is specific to the slanted bar and assumes a radially symmetric model. Because our algorithm assumed radial symmetry, we did not have to translate the scene, which would otherwise be required. Lastly, the eccentricity algorithm should be run through different eye models to compare and contrast the results.

References

[1] Bakaraju, Ravi, et al. “Finite schematic eye models and their accuracy to in-vivo data”, Vision Research 48, pp. 1681-1694, Elsevier, Apr. 2008.

[2] Escudero-Sanz, I., and R. Navarro, “Off-axis aberrations of a wide-angle schematic eye model”, J. Opt. Soc. Am. A 16, pp. 1881-1891, Aug. 1999.

[3] Kerr, Douglas. “Determining MTF with a Slant Edge Target”, Dougkerr.Net, Oct. 2010.

[4] Williams, D. R., et al. “Off-Axis Optical Quality and Retinal Sampling in the Human Eye”, Vision Research 36, pp. 1103-1114, Apr. 1996.

Appendix I

%% 2018 Autumn Psych 221 Project script
% Human Optics as a Function of Eccentricity
% Fang-Yu Lin, Nicholas Gaudio, Paavani Dua

%% Warning
% 1. If there is an error saying "cannot copy files", go into recipeSet
%    Change strcmp(thisR.exporter,'CD4') to strcmp(thisR.exporter,)
%    If not, ignore this.
%    I actually don't understand why that is an issue.

%% Naming
% HIGH_RES: resolution for the full image in higher quality
% LOW_RES: resolution for the initial low-resolution image
% ROI: radius of interest, defined by user inputs, in LOW_RES pixel value
% POI: points of interest, upper-left vertices of crop-windows, in LOW_RES pixel value
% size_low: height/width of the crop-windows, in LOW_RES pixel value
% FOV4MTF: height/width of the crop-windows, in degrees

%% User Input
x = [-5 5];  % range of interest in degrees, x direction
y = [-5 5];  % range of interest in degrees, y direction
res = 100;   % crop-window resolution

%% Initialize and Scene Setting
ieInit;
if ~piDockerExists, piDockerConfig; end
myScene = sceneEye('slantedBar', 'planedistance', 1);
myScene.name = 'slantedBar_Initial_Lowres';
myScene.accommodation = 1;
myScene.pupilDiameter = 4;
myScene.numRays = 64;
myScene.numCABands = 1;

%% Start function

% function [freq, mtf, radius_val] = calculateEccentricityMTF(x, y, res)
%% Parameters

   FOV4MTF = 2;
   LOW_RES = 100;
   %% Basic Scene Setting
   myScene.fov = 2 * max(abs([x y]));

   HIGH_RES = myScene.fov/FOV4MTF*res;
   myScene.resolution = LOW_RES;
   %% Find out ROI, Radius of Interest

   x_min = x(1); x_max = x(2);  y_min = y(1); y_max = y(2);
   center_px = floor([LOW_RES/2, LOW_RES/2]);
   
   Radius_min_px = 0;
   if ~(sign(x(1)) ~= sign(x(2)) && sign(y(1)) ~= sign(y(2)))
       Degree_min_x = min([abs(x)]);
       Degree_min_y = min([abs(y)]);
       Degree_min = norm([Degree_min_x Degree_min_y]);
       Radius_min_px = round(Degree_min*LOW_RES/myScene.fov); % smallest radius we are interested in
   end
   %% Render a low resolution image first, with the whole fov
   scene = myScene.render;
   ieAddObject(scene);
   oiWindow;
   
   %% Get the low resolution image to calculate POI
   rgb_full = oiGet(scene,'rgb'); % Get low resolution image
   slanted_bar = rgb_full(:,:,1); % size = [LOW_RES, LOW_RES]
   %% Find crop-windows with fov4MTF(2 degree, default)
   size_low = round(res*(LOW_RES/HIGH_RES)); % height/width of the crop-windows, in LOW_RES pixel value
   shift_left = floor(size_low/3);
   POI = [];
   % Find the starting point of the slanted bar, since retinal image is a circle
   c_start = find(any(slanted_bar),1); % column number, in LOW_RES pixel
   r_start = find(slanted_bar(:,c_start), 1, 'first'); % row number, in LOW_RES pixel
   nrow = 0;
   radius_val = [];
   while(true) % Find crop-windows
       row_pivot = nrow*size_low+r_start; % row number, where we will look for slanted-bar
       c = find(slanted_bar(row_pivot,:), 1, 'first'); % column# for the slanted-bar
       r = norm([row_pivot + floor(size_low/2), c + floor(size_low/2)]-center_px); % radial distance of the crop-window center from the fovea
       % exit when exceed radius of interest, or
       % exit when arrived in center of retina
       if r < Radius_min_px || row_pivot > center_px(2) 
           break;
       end
       radius_val = [radius_val; r]; % save radius value for plotting
       POI = [POI; row_pivot c-shift_left]; % save all upper-left vertices for crop-windows
       nrow = nrow + 1; % iterate to the window
   end
   
   radius_val = radius_val*myScene.fov/LOW_RES; % convert radius values from pixels to degrees
   N = length(POI(:,1)); % # of crop-windows
   %% Show Crop-windows position and User Input Range
   % figure();
   % image(rgb_full);
   % user_input = [x(1) -y(2) abs(x(2)-x(1)) abs(y(2)-y(1))]*LOW_RES/myScene.fov;
   % user_input = user_input + [center_px(1) center_px(2) 0 0];
   % rectangle('Position',floor(user_input),'EdgeColor','g','LineWidth',1)
   % N = length(POI(:,1));
   % for r = 1:N
   %     rectangle('Position',[POI(r,2) POI(r,1) size_low size_low],'EdgeColor','r','LineWidth',1)
   % end
   % xlabel('pixels'); ylabel('pixels');
   % axis image;
   %% Render all crop windows and calculate their MTF
   myScene.resolution = HIGH_RES; % Set the resolution to higher quality
   freq = cell(N,1); % cell array for spatial frequency
   mtf = cell(N,1);  % cell array for contrast reduction
   
   for r = 1:N % Rendering here!
       cropwindow_px = [POI(r,2) POI(r,2)+size_low POI(r,1) POI(r,1)+size_low];
       cropwindow_norm = cropwindow_px./LOW_RES;
       myScene.recipe.set('cropwindow',cropwindow_norm);
       myScene.name = sprintf('slantedBar_degree_%0.2f_Highres', radius_val(r));
       scene = myScene.render;
       ieAddObject(scene);
       oiWindow;
       [f_new, m_new] = calculateMTFfromSlantedBar(scene,'cropflag',false);
       freq{r,1} = f_new;
       mtf{r,1} = m_new;
   end    

% end % End Function

% %% Plot
% % Draw MTF
% [s,~] = size(freq);
% figure(); % Draw all MTF from rendering
% for i = 1:s
%     r = round(radius_val(i));
%     lg = [int2str(r), ' degree'];
%     plot(cell2mat(freq(i)),cell2mat(mtf(i)),'DisplayName',lg);
%     hold on
% end
% legend
% hold off
% xlabel('Spatial Frequency (cycles/deg)');
% ylabel('Contrast Reduction (SFR)');
% title('MTF Plots');
% grid on;
% axis([0 60 0 1])
%
% % Draw MTF50
% f_50 = [];
% for i = 1:s % Find MTF50
%     mtf_50 = (cell2mat(mtf(i)) < 0.5);
%     index_50 = find(mtf_50, 1);
%     store = cell2mat(freq(i));
%     f_50 = [f_50; store(index_50)];
% end
% figure();
% plot(radius_val, f_50, '-o'); % radius_val is already in degrees
% ylabel('Spatial Frequency (cycles/deg)');
% xlabel('Relative Degree from Center (deg)');
% title('MTF 50')
% grid on;
% ylim([0, 15]);

Appendix II

Fang-Yu Lin - Implemented many elements of the code and algorithm. Led the interfacing of the algorithm with ISET.

Nicholas Gaudio - Conceptualized the algorithm and plots. Wrote the methods, results, and conclusion of the report and presentation. Compared the ISET results to the publication results.

Paavani Dua - Led the literature review and wrote the introduction and background of the report.