AniqDan

From Psych 221 Image Systems Engineering

Introduction

Modeling optical systems is an important part of simulating imaging systems. However, optical system design and analysis is often done independently of the actual scenes a camera would capture. Instead, metrics such as Zernike polynomials, point spread functions (PSF), and modulation transfer functions (MTF) are used to assess the quality of an optical design. Although these metrics are useful, scene-dependent metrics are also of value.

ISET models optics by defining an aperture and point spread function to simulate the effects optics have on the imaging pipeline. These parameters are applied to a two dimensional image and analysis further down the imaging pipeline resumes. This limits the effect the optics have on the image to a planar blur. Chromatic aberrations can be modeled if spectral scene information is known, and wavelength dependent point spread functions are computed in the simulation.

However, other three dimensional effects such as depth of field, astigmatism, and spherical aberrations cannot be simulated with a scene in ISET using this process. The ability to model and develop metrics to analyze these aberrations per scene would empower the optical designer to create better optical systems.

Background

Figure 1: Ray tracer pipeline.

To generate scenes for simulating an image processing pipeline, 3D graphics are often used. Photorealistic rendering uses a 3D graphics ray tracer to create pristine scenes that simulate the interaction of light with objects. As shown in Figure 1, the rendering pipeline starts with a camera defined as a point in 3D space. The image is defined as a plane in front of the camera, parametrized by a length, width, pixel size, and horizontal and vertical resolution. Ray tracers define the path of light in the opposite direction from the real world: rather than tracing light from a light source to the image plane or human eye, rays originate at the image plane and project out into the scene. Each ray is defined by a three-dimensional origin (ox, oy, oz) and direction (dx, dy, dz). When rays traverse the scene and interact with objects placed in the virtual 3D world, paths to light sources are calculated, and the resulting radiance is recorded at the pixel the ray passed through on the initial image plane.
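The ray generation step above can be sketched in a few lines. This is a minimal illustrative example, not PBRT's actual implementation: it places the camera at the origin, the image plane at a fixed z, and generates one ray per pixel center (a real ray tracer jitters multiple samples per pixel).

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def generate_primary_rays(width, height, nx, ny, plane_z=1.0):
    """One ray per pixel center. The camera sits at the origin and the
    image plane lies at z = plane_z; each ray originates on the plane
    and points away from the camera, out into the scene."""
    rays = []
    for j in range(ny):
        for i in range(nx):
            # Pixel center in image-plane coordinates, centered on the axis.
            ox = ((i + 0.5) / nx - 0.5) * width
            oy = ((j + 0.5) / ny - 0.5) * height
            origin = (ox, oy, plane_z)
            # Direction from the camera point (0,0,0) through the pixel.
            direction = normalize(origin)
            rays.append((origin, direction))
    return rays
```

Each ray is exactly the (origin, direction) pair described in the text; the scene traversal then marches these rays through the virtual world.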

What makes ray tracing so useful for creating realistic-looking scenes is that the scattering of light by materials, textures, and media (water, smoke, etc.) can be modeled using real physics or any other mathematical process. For example, reflection, transmission, and scattering of light at a surface can be defined in a myriad of ways using real-world or artificial material properties. The main drawback of ray tracing, however, is that it is extremely time intensive. The number of ray computations grows with image size and resolution, the number of samples per pixel, lighting and material properties, and scene content. Depending on the complexity of the rendered scene, image generation can take from a few seconds to a few days: the more complex and realistic-looking the image, the longer it takes to create.

Figure 2: Tracing rays through lens system. Rays are traced from the image plane out into the scene.


The ray tracer used for this project, the Physically Based Ray Tracer (PBRT), can simulate ray paths as if they were refracted by lenses or other optical interfaces. An optical interface is defined by a curvature radius, thickness, index of refraction, and aperture diameter. Ray paths are perturbed by computing the intersection of rays with spherical lens surfaces and using Snell's law to refract them. An additional check at every interface determines whether the ray actually travels through to the end of the optical system: if a refracted ray propagates outside the aperture diameter of any interface, it is eliminated from the final image. This process multiplies the cost of an already lengthy simulation, and the overhead grows with more complex optical designs. If simulations take too long, the data that lens ray-tracing simulations provide become less practical for the imaging scientist to use. As such, finding ways to speed up this simulation is imperative.
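The per-interface logic can be sketched as follows. This is a simplified sketch, not PBRT's code: interfaces are flattened to planes at given z positions (PBRT intersects true spherical surfaces), but the two essential steps from the text are shown — Snell refraction in vector form and culling rays that land outside an interface's aperture.

```python
import math

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n pointing
    toward the incoming ray, with eta = n1/n2. Returns None on total
    internal reflection (the ray is eliminated)."""
    cos_i = -sum(a * b for a, b in zip(d, n))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * di + (eta * cos_i - cos_t) * ni
                 for di, ni in zip(d, n))

def trace_through_interfaces(origin, direction, interfaces):
    """Sequentially march a ray (traveling in +z) through a list of
    (z_pos, aperture_radius, eta) interfaces. Rays outside an aperture,
    or totally internally reflected, are culled (return None)."""
    o, d = origin, direction
    for z_pos, aperture_radius, eta in interfaces:
        t = (z_pos - o[2]) / d[2]                    # march to the interface
        o = tuple(oi + t * di for oi, di in zip(o, d))
        if math.hypot(o[0], o[1]) > aperture_radius:
            return None                              # blocked by the aperture
        d = refract(d, (0.0, 0.0, -1.0), eta)
        if d is None:
            return None
    return o, d
```

A ray that survives every interface exits the front of the system; any ray outside an aperture is dropped immediately, which is exactly the vignetting check described above.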

Methods


This project proposes two possible solutions to this problem. The first uses ray transfer matrix analysis (ABCD matrices). ABCD matrices approximate each optical interface and treat the mapping of rays from the image plane to the end of the optical system as a linear system. The second solution precomputes and saves ray paths to be reused multiple times, bypassing the ray–lens intersection calculation altogether.

Ray Transfer Matrix Analysis

Figure 3: Approximating an optical component using ABCD matrices.

ABCD matrices are commonly used as a first-order approximation in designing optical systems for lasers. As shown in Figure 3, an optical element is defined by input and output planes before and after the element, perpendicular to the optical axis (z-axis). Assuming the cross section of the optical element is circularly symmetric, rays can be parametrized the same way in the x and y axes. Using the xz plane for explanation, each ray is parametrized by a position x, its distance from the optical axis, and an angle θ, its angle relative to the optical axis. ABCD matrices use the paraxial approximation of ray optics: all rays are assumed to travel a small distance from, and at a small angle relative to, the optical axis. This approximation simplifies Snell's law from n1 sin(θ1) = n2 sin(θ2) to n1 θ1 = n2 θ2.

These approximations allow us to describe the mapping of a ray (x1, θ1) at the input plane to (x2, θ2) at the output plane as:

x2 = A·x1 + B·θ1
θ2 = C·x1 + D·θ1

or, in matrix form, (x2, θ2)ᵀ = [[A, B], [C, D]] (x1, θ1)ᵀ, where A, B, C, and D are coefficients that depend on the type of optical component. Figure 4 below shows example ABCD matrices for common components.

Figure 4: ABCD matrices for common optical components.

The main advantage of ABCD matrices is that an optical system with multiple components can be represented by a single ABCD matrix. For example, the system below is a Double-Gauss lens system, a common arrangement used for imaging. In PBRT, each interface, M_{1} through M_{N}, must be handled sequentially, with ray paths computed at each surface. With ABCD matrices, the interfaces are cascaded into one matrix by multiplying the individual matrices in the order the ray crosses them, M = M_{N}···M_{2}M_{1}, and the z position of the output plane is found by adding the component thicknesses. This reduces the calculation of ray position and direction after a lens system to a single matrix multiply, rather than computing the state of the ray at every interface.
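The cascading described above can be sketched with two standard elements, free-space propagation and a thin lens (the matrices are the textbook forms; the focal length and distances below are illustrative, not from the project's Double-Gauss prescription):

```python
def mat_mul(a, b):
    """2x2 matrix product a @ b."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def free_space(d):
    """Propagation over a distance d."""
    return [[1.0, d], [0.0, 1.0]]

def thin_lens(f):
    """Thin lens with focal length f."""
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def cascade(elements):
    """Collapse elements, listed in the order the ray meets them, into one
    system matrix: M = M_N ... M_2 M_1 (later elements multiply on the left)."""
    system = [[1.0, 0.0], [0.0, 1.0]]
    for m in elements:
        system = mat_mul(m, system)
    return system

def apply_ray(m, x, theta):
    """Map a paraxial ray (x, theta) through the system matrix."""
    return m[0][0]*x + m[0][1]*theta, m[1][0]*x + m[1][1]*theta
```

For example, a collimated ray at height 1 mm entering a thin lens of focal length 2 mm, followed by 2 mm of free space, lands on the axis, as expected for rays focused at the focal plane.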


The ABCD approximation is evaluated using PBRT and the Double-Gauss prescription. Ray positions and angles are saved at the image plane, and again after propagation through the lens system using both the current method and the ABCD approximation. The comparison examines the final ray-traced images and the error in ray position and angle at the end of the lens system.

Saving Ray Paths

Do not traverse the lenses twice for the same film pixel sample!

- Save the result of each lens traversal and look it up later.
- In-loop reuse serves the same scene; out-of-loop reuse serves other scenes that use the same lenses.
- A new camera class, RealisticLookUpCamera, handles the out-of-loop look-up.

Where to save the result?

- The result must be in camera space, so the table is built in RealisticCamera::GenerateRay(). Rays are traced back from pixels on the film to a point in the exit pupil, then through the lens interfaces, arriving at the exit ray that leaves the front face of the lenses toward the scene to evaluate radiance.
- The table is updated sample by sample and written out at the end:
  updateRayTable() saves each exit ray and its weight (to account for vignetting) after lens traversal.
  writeRayTable() is called in ~RealisticCamera() when rendering is done.
- For in-loop look-up, the table is checked for a saved result before proceeding with lens traversal.
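The caching scheme above can be sketched as follows. This is an illustrative Python sketch of the idea, not the project's C++ code; the keying by (pixel x, pixel y, sample index) is an assumption about the table layout, while updateRayTable() and writeRayTable() correspond to the functions named in the text.

```python
import pickle

class RayTable:
    """Cache of exit rays keyed by (pixel_x, pixel_y, sample_index), a
    hypothetical keying scheme. Each entry stores the exit ray and its
    weight (to account for vignetting)."""
    def __init__(self):
        self.table = {}

    def lookup(self, key):
        # None means the lenses must be traversed for this sample.
        return self.table.get(key)

    def update(self, key, exit_ray, weight):
        # cf. updateRayTable(): record the result after lens traversal.
        self.table[key] = (exit_ray, weight)

    def write(self, path):
        # cf. writeRayTable(): persist the table when rendering is done,
        # so other scenes with the same lenses can reuse it.
        with open(path, "wb") as f:
            pickle.dump(self.table, f)

    @classmethod
    def load(cls, path):
        t = cls()
        with open(path, "rb") as f:
            t.table = pickle.load(f)
        return t

def generate_ray(table, key, traverse_lenses):
    """In-loop look-up: consult the table before doing the expensive lens
    traversal, in the spirit of RealisticCamera::GenerateRay()."""
    cached = table.lookup(key)
    if cached is not None:
        return cached
    exit_ray, weight = traverse_lenses(key)
    table.update(key, exit_ray, weight)
    return (exit_ray, weight)
```

The second request for the same pixel sample returns the cached exit ray without touching the lens system, which is exactly where the rendering-time savings come from.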

How to configure settings?

- New .pbrt parameters can be added with text editors:
  "string RayTableFile" ["RayTable-700x700x1.dat"]
  "bool InloopLookUpFlag" ["true"]
  "bool PerfDispFlag" ["true"]

Results


Ray Transfer Matrix Analysis

Below are the position and angle errors using the ABCD approximation. The image plane is square with a 25 mm width, and a resolution of 500x500 pixels with 32 samples per pixel. The plots show that the ABCD matrix approximation accumulates error farther from the optical axis and at larger angles, as expected from the paraxial approximation. With large sensor and aperture sizes, the approximation degrades and becomes less viable. However, within a 6 mm width and 0.1 radian half angle, the approximation produces minimal error. This range falls in the regime of cell phone camera sensor and aperture sizes.
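The growth of error with angle can be illustrated directly by comparing exact Snell refraction with its paraxial form at a single interface (the indices of refraction below are illustrative air-to-glass values, not taken from the Double-Gauss prescription):

```python
import math

def snell_exact(theta_i, n1=1.0, n2=1.5):
    """Exact refraction angle from Snell's law: n1 sin(t1) = n2 sin(t2)."""
    return math.asin(n1 * math.sin(theta_i) / n2)

def snell_paraxial(theta_i, n1=1.0, n2=1.5):
    """Paraxial (small-angle) version used by ABCD matrices: n1 t1 = n2 t2."""
    return n1 * theta_i / n2

# Error of the paraxial form grows rapidly with incidence angle.
angles = [0.05, 0.1, 0.3, 0.6]  # radians
errors = [abs(snell_exact(a) - snell_paraxial(a)) for a in angles]
```

At 0.05 rad the two forms agree to better than 1e-4 rad, while at 0.6 rad the discrepancy is on the order of 0.01 rad; accumulated over many interfaces, this is the trend the error plots show.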

In addition, the generated images give more insight into the effectiveness of this approximation. Compared to the standard lens ray trace, the ABCD image is much blurrier, and objects in the scene appear larger. Using smaller sensor sizes and cell phone camera lens systems should yield better results, making the ABCD matrix approximation a suitable first-order approximation in that context. At the time of this project, a cell phone camera lens prescription was not available, so a lens system for a full-size sensor included with PBRT was used.


Saving Ray Paths

  • Image result


300 x 300, 4 samples/pixel (full lens traversal) | 300 x 300, 2 x 2 look-up


  • Rendering time result


Conclusions


  • The paraxial approximation limits accuracy when simulating large optics with relatively large sensor sizes.
  • ABCD matrices could be useful as a first-order approximation for chromatic aberrations, due to wavelength-dependent indices of refraction, an advantage over the thin-lens approximation.
  • The approach is useful for simulating cell phone cameras (relatively small sensor sizes).
  • Saving ray paths avoids repetitive calculation and therefore reduces rendering time for lens simulation. We achieved significant speed-up, with precision controllable by parameters in the .pbrt file. Future work could improve high-resolution look-up with optimized memory usage and CPU time, and pursue GPU acceleration.

References

  1. M. Pharr, W. Jakob, G. Humphreys, “Physically Based Rendering: From Theory to Implementation”, Third Edition, 2016.
  2. PBRT source code: https://github.com/mmp/pbrt-v3
  3. B. E. A. Saleh and M. C. Teich, “Fundamentals of Photonics”, John Wiley & Sons, 1991. Section 1.4, pp. 26–36.
  4. Ray transfer matrix analysis: https://en.wikipedia.org/wiki/Ray_transfer_matrix_analysis

Appendix I


Ray Transfer Matrix Analysis

ABCD matrix code: File:Abcd.zip

Saving Ray Paths

Source code and test script & data: File:Psych221-pbrt-project.zip

Test spew with camera time for speed up analysis: File:TestSpew.zip


Appendix II

Aniq implemented the ABCD matrix method and wrote the introduction and background of the report. Dan worked on the implementation, presentation, and write-up of the saving-ray-paths method.