ChhangMak

From Psych 221 Image Systems Engineering

Introduction

Motivation

Oral cancer screening is an increasingly important field, and there is a driving need to catch the disease earlier. Currently, screening is done by shining a short-wavelength light (typically purple in appearance) into the mouth. When certain wavelengths of light illuminate the mouth, cancerous and pre-cancerous tissues exhibit different fluorescent properties than healthy tissue, and we can use this difference to identify them. Normal tissue is less fluorescent than cancerous tissue; however, in the mouth, the emitted fluorescent light is often much weaker in magnitude than the reflected ambient light, or even than the reflection of the light source we shine into the mouth in the first place.

This method would look something like this:

Thus, we want to be able to create a light that excites the fluorescent areas of the mouth. We want to be able to detect this fluorescence, however weak the fluorescence may be relative to the reflected light, so that we can identify cancerous and pre-cancerous tissue in the mouth.

Describing a Physical System

The fact that the fluorescence is much weaker than the reflected light presents a challenge. We create a system that passes a narrow-bandwidth light, chosen within the excitation spectrum of the fluorophore (the fluorescent object we want to measure), through a shortpass filter. Ideally, the resultant light contains no wavelengths outside the excitation spectrum, so that as little reflected light as possible reaches the camera. This light hits the fluorophore, which absorbs some of the energy and re-emits light that returns to us via the camera. We pass the light heading into the camera through a longpass filter, again to let the fluorescent light through while blocking the reflected (ambient) light.

This will allow us to detect the fluorescence in a standard camera with Bayer (red-green-green-blue) sensors. We would ideally want to present the camera's output to a doctor who can examine the fluorescence to screen for oral cancer. Thus, we want to optimize our filters and light such that the camera's output demonstrates a great amount of contrast between fluorescing and non-fluorescing parts of the image.

Simulating the Physical System

In this project, we use the ISETCam and ISETCamFluorescence repositories to simulate our camera, longpass filter, shortpass filter, light spectrum, and fluorophore. Over many simulations, we can find optimal cutoff wavelengths for our filters and optimal light spectra for each fluorophore tested.

A representation of our physical system that we simulate is produced below in the figure.

Background

Two important bits of background are necessary as a basis for this project. We must know which color filter arrays we are using, as well as the mechanics of fluorescence to be able to model the system successfully.

Fluorescence Properties

The properties of fluorescence are known for various fluorophores, including porphyrins, NADH, and FAD, a few fluorophores one might expect to see in the mouth. We use their emission and excitation spectra in our simulations. Displayed here are the spectra for porphyrin. We can see that the excitation spectrum peaks around 400 nm, while the emission spectrum peaks at around 625 nm, with a secondary peak at around 675 nm. Therefore, our shortpass filter should capture the 400 nm excitation peak, and our longpass filter should capture the second set of emission peaks.

Filtering to find fluorescence is nothing new: Edmund Optics has a line of filters called "Fluorescence Filters" that consumers can purchase. This line of filters, as well as many others, provides us with known transmittances that we can incorporate into our simulation. We keep this in mind as we explore different contrast parameters, since making ideal filters is impossible and we want to ground our solution in real life.

The above is a molecule of porphyrin. Its emission spectrum makes sense: when light hits the porphyrin, the conjugated double bonds allow energy to be absorbed, and the re-emitted light is of longer wavelength, and therefore lower energy, than the absorbed light. This is typically how fluorophores work, based on the ability of the molecules to absorb energy, and we utilize the difference between the excitation and emission spectra to construct our two different filters.

Color Filter Arrays

We know that the sensor will interact with the fluorescent and reflected light through its Bayer filter array. That the three filters are sensitive to different wavelengths will play a part in determining what image we see at the output.

We use the above information to guide us in modeling physical objects. We build on the ISETCam and ISETFluorescence code bases, which already contain many of the models we wish to use in our simulations. In the plot of sensitivities, the blue channel is the leftmost peak and the red channel is the rightmost. We see that the green peak predominates across many of the wavelengths.

Methods

To approximate the physical system in simulation, we must have models accurate enough to produce realistic outputs. We then input these models into ISETCam to acquire simulated images.

Modeling the Light

In the physical system, we would ideally choose a light with a narrow bandwidth; after all, a blue LED in real life has a very narrow spectrum. We choose a simple model for our light: a Gaussian distribution with a standard deviation of 35 nanometers. We can vary the mean of this Gaussian to simulate LEDs of different peak wavelengths and spectral distributions. Here is an example of a light spectrum we may use, with a mean wavelength of 400 nanometers and a standard deviation of 35 nanometers.
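As a language-neutral sketch of this light model (our simulations run in MATLAB with ISETCam; the Python function name here is our own, for illustration only):

```python
import math

def gaussian_led_spectrum(wavelengths, peak_nm, sigma_nm=35.0):
    """Relative spectral power of a hypothetical narrow-band LED,
    modeled as a Gaussian with the given peak (mean) wavelength and
    standard deviation, both in nanometers."""
    return [math.exp(-((wl - peak_nm) ** 2) / (2 * sigma_nm ** 2))
            for wl in wavelengths]

# Example: a 400 nm light sampled from 350 to 750 nm in 10 nm steps.
wls = list(range(350, 751, 10))
spd = gaussian_led_spectrum(wls, peak_nm=400.0)
```

Sweeping `peak_nm` across the visible range, as we do later when optimizing the light, just means re-evaluating this function with a different mean.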

We have also used a few lights that come stock with ISETCam in our many iterations of simulations. Across all of our simulations, we stayed consistent with stock ISETCam light no. 2.

Modeling the Longpass and the Shortpass Filters

Real-life shortpass and longpass filters look like so:

We see that the transition edges of actual filters from different real-life optical suppliers resemble one side of a Gaussian, matching our initial approximation, while the nonlinear, rippled, and nonzero portions of the transmission spectrum can be approximated as constants. We therefore model each filter as a piecewise function built from Gaussians and constants. For a shortpass filter with cutoff wavelength λc and standard deviation σ, the modeled transmittance is

T(λ) = 1 for λ ≤ λc, and T(λ) = exp(−(λ − λc)² / (2σ²)) for λ > λc,

which is the maximum of the Gaussian before the cutoff and a one-sided Gaussian roll-off after it, so the transmittance is essentially 0 far beyond the cutoff and a half-normal Gaussian near it.

For the shortpass (left), the one-sided Gaussian is centered around our ideal capturing spectrum, with a sharp dropoff to 0 at longer wavelengths, as shown above. For the longpass (right), we use the same equation mirrored: the one-sided Gaussian forms the left half, ramping up from 0% transmittance to 100% transmittance rather than the other way around.
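The piecewise filter model described above can be sketched as follows (again in Python for illustration; the actual filters are modeled in MATLAB, and these function names are our own):

```python
import math

def shortpass(wl_nm, cutoff_nm, sigma_nm=15.0):
    """Shortpass model: full transmission at or below the cutoff
    wavelength, one-sided Gaussian roll-off above it."""
    if wl_nm <= cutoff_nm:
        return 1.0
    return math.exp(-((wl_nm - cutoff_nm) ** 2) / (2 * sigma_nm ** 2))

def longpass(wl_nm, cutoff_nm, sigma_nm=15.0):
    """Longpass model: the mirror image, ramping up through a
    one-sided Gaussian to full transmission at the cutoff."""
    if wl_nm >= cutoff_nm:
        return 1.0
    return math.exp(-((wl_nm - cutoff_nm) ** 2) / (2 * sigma_nm ** 2))
```

The `sigma_nm` default of 15 nm matches the filter standard deviation we use in the results section.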

Modeling the Fluorophore

In reality, the image our camera captures is essentially three-dimensional, since the scene has depth and contains various separate structures. However, since our goal is to maximize the contrast between fluorescing and non-fluorescing parts of the image, we simplify the problem to a 2D image. We therefore create a two-dimensional scene in ISETCam containing a square tile: half of it contains the fluorescent material of our choice, and half of it does not. We use stock fluorescent materials from ISETCamFluorescence (which have preset excitation and emission spectra) for our simulations. Below is an image representing which parts of the tile are fluorescent and which are not, as well as an example camera output that very clearly demonstrates which part is fluorescent.

Modeling the Camera

We model a generic RGB camera in ISETCam. For reference, we use color filter arrays from the Nikon D100 camera; the color filter array sensitivities for this camera are shown in the background section of this writeup. Here are some of the specifications we selected for this camera.

  • Exposure Time: 1 s or less
    • This is significantly longer than exposure times for normal cameras because the fluorescence luminance is so weak; the exposure must be much longer to capture enough photons from the fluorescence. We know this may be suboptimal since the mouth may move within a second, but we think 1 second is a reasonable duration to ask a screened patient to keep the mouth still.
    • However, once we find an optimal solution for cutoff filters and lights, we find that we can greatly reduce the exposure time and still get good contrast.
  • Pixel Size: 2.2 μm squares
  • Fill Factor: 90%
  • Voltage Swing: 1.15 V
  • Electron Well Capacity: 9000

Figure of Merit

To effectively quantify how well a configuration of highpass cutoff, lowpass cutoff, light spectrum, etc. does, we must have a quantitative method for finding how well a configuration produces contrast. To this end, we have elected for a simple approach. Since we know where the fluorophore is in the scene (because we put it there), we also know which parts of the image belong to the fluorescent part and which parts of the image belong to the non-fluorescent part. We can take a section of the fluorescent part of the image and take the average of the intensities of its pixels, and we can do the same for the non-fluorescent part. The difference of these two averages is a simple but effective way of quantifying contrast, and we have elected to use this method as a figure of merit for different cutoff and light configurations. Below we display two partitions we might choose as sections we would want to average over and take their differences to find a figure of merit.
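A minimal sketch of this figure of merit (in Python for illustration; the actual computation is done in MATLAB, and the pixel lists stand in for the averaged image sections):

```python
def contrast_merit(fluor_pixels, nonfluor_pixels):
    """Figure of merit: mean intensity of the fluorescing section
    minus mean intensity of the non-fluorescing section."""
    mean = lambda px: sum(px) / len(px)
    return mean(fluor_pixels) - mean(nonfluor_pixels)

# Example with hypothetical pixel intensities from the two sections.
merit = contrast_merit([0.82, 0.79, 0.85], [0.10, 0.12, 0.08])
```

A larger merit value means the fluorescing region stands out more clearly against the background, which is exactly what a screening doctor needs.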


Results

The simulation, when run, produces a simulated camera image on the screen. Here is an example of such output. This particular output is acquired from an exposure time of 1 s, Porphyrins fluorophore model, a lowpass cutoff of 430 nm, a highpass cutoff of 510 nm (both with Gaussian standard deviation of 15 nm), and a narrow bandwidth light that came stock with ISETCam.

Finding Optimal Cutoff Frequencies

Using our method to calculate figure of merit, we sweep across many different lowpass cutoff frequencies and highpass cutoff frequencies for a given light spectrum and fluorophore. We compile the data together to produce a heatmap which is helpful for visualizing which cutoff frequencies are optimal for producing the most contrast. The below heatmap was generated using auto-exposure in the ISETCam settings, the Porphyrins fluorophore, and stock ISETCam light no. 2. We also filtered out noise in the heatmap by setting all negative values to zero, since we don't expect negative values and thus such values are produced by noise.
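The sweep-and-clip procedure above can be sketched as follows (Python for illustration; `run_simulation` is a hypothetical stand-in for one full ISETCam simulation plus figure-of-merit computation, not a real ISETCam function):

```python
def sweep_cutoffs(shortpass_cutoffs, longpass_cutoffs, run_simulation):
    """Build the contrast heatmap: one figure-of-merit value per
    (shortpass, longpass) cutoff pair, with negative values clipped
    to zero since we attribute them to noise."""
    heatmap = []
    for sp in shortpass_cutoffs:
        row = [max(run_simulation(sp, lp), 0.0) for lp in longpass_cutoffs]
        heatmap.append(row)
    return heatmap
```

Each cell of the returned grid corresponds to one simulated camera image, which is why the full sweep is slow and, as noted later, a natural candidate for parallelization.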

We see in the contrast map that there is an optimal configuration for a highpass cutoff frequency around 510 nm and a lowpass cutoff frequency around 430 nm. These cutoff frequencies can be used to produce the first image shown in this results section using an exposure time of 1 second. This image is higher contrast than an image produced with no filters:

We can run this simulation again on other fluorophores, like NADH:

Similarly, we see here that the optimal cutoffs fall in a similar range to those for porphyrins, which makes sense since both are fluorophores with roughly similar absorption spectra.

Finding Optimal Light Spectra

We can also run our simulation to plot our figure of merit for different wavelengths of light. Sweeping across the different wavelengths yields the following plot. This plot in particular was made with a very narrow Gaussian distribution of standard deviation 5 nm, using the Porphyrins fluorophore.

For lights modeled with Gaussian distributions of higher standard deviations, we see that contrast drops off much quicker as we sweep across the wavelengths. We also see that the contrast is much lower as we get more reflected light mixed into the image.

We see that it is thus optimal to use a light with a very narrow spectrum.

Quantifying Contrast Changes

The figure above displays our original contrast map (left) from our image. It is clearly visible that there was a lot of noise; after applying our filtering (right), there is significant improvement in the contrast images.

Product Realization

After finding our optimal contrast configurations, we wanted to realize our ideas by finding specific filters that would accomplish what we wanted. The image below shows realizable filters that can be purchased online from Edmund Optics, currently a supplier of the lab.

Conclusions

We learn in simulation that we can select cutoff wavelengths for the filters that are optimal for displaying contrast. The optimal cutoffs are complexly related to the emission and excitation spectra of the fluorophore, the spectrum of the light we choose to shine on the fluorophore, and the color filter array of the sensor. By sweeping through these system parameters, we are able to find maxima of our figure of merit at which we achieve maximum contrast.

Areas of Improvement / Future Direction

  • Our simulation deals only with two-dimensional images, but human mouths are 3-dimensional. It may be helpful to be able to run these simulations in three dimensions as well.
  • Our Figure of Merit is relatively simple in that it is simply a difference of two averages. To better quantify contrast, we could consider using the CIELAB color space or using a different, more complex algorithm to better quantify the contrast.
  • It takes a long time to sweep through all wavelengths to find an optimal figure of merit. However, this operation is easily parallelizable, as all simulations, regardless of parameter selection, are independent of each other. In the future, one could parallelize the simulation when sweeping for an optimal figure of merit.
  • We could model additional fluorophores to broaden our testing.
  • This program is merely a simulation. We must validate our simulations by seeing if camera images produced in real life using similar parameters match with our simulations.

Skills we learned

  • Acquired familiarity with ISETCam and ISETCamFluorescence tools
  • Strengthened MATLAB capabilities
  • Properties of Fluorescence
  • Understanding of complete research to implementation lifecycle

Appendix

Appendix I: Source Code

Our source code is linked here: https://github.com/jonc586/psych221_project For figures and examples, refer to the figures interspersed throughout this report. The script titled "fluorescence_oraleye.m" contains the code we used for our models and simulations.

Appendix II: Work Distribution

Jonathan Chhang's work distribution

Jonathan Chhang (jchhang) designed the shortpass and longpass filter models, light spectrum models, and implemented them in ISETCam software. Jonathan Chhang also designed and implemented in ISETCam the contrast metric that was used in a figure of merit for this project. He also adapted much of Dr. Joyce Farrell's code in ISETCam and ISETFluorescence to be able to sweep across many camera parameters, and plot the resulting figure of merit matrix in a heatmap. He also generated many of the figures in the final project presentation as well as wrote much of the methodology and results section of this report, and contributed to other sections of this final project report as well.

Jonathan Mak's work distribution

Jonathan Mak (jmak) also designed the shortpass and longpass filter models, light spectrum models, and helped with implementation in ISETCam software with Jonathan Chhang. In addition, he helped Chhang with adapting ISETCam and other fluorescence related visualization methods. He generated initial figures and modeled the problem for comparing and contrasting various fluorophores, as well as came up with identifiable metrics for quantifying contrast and how to model filters properly both mathematically and physically. Once the optimal contrast ratios were identified, he found the filters that were able to be purchased by the lab to realize digital implementation into physical reality. He helped generate many of the figures in the final project presentation as well as wrote much of the introduction and discussion sections of this report, and contributed to the methodology and results.

All authors contributed equally to the project.

Special Thanks

We would like to thank Professor Brian Wandell for teaching us this quarter. We learned much about cameras, camera sensors, color and color-matching, displays, psychophysics, etc. We would also like to thank Dr. Joyce E. Farrell for providing us with the starter code for this project and for her advice. We would also like to thank Zheng Lyu for his continued mentorship and supervision of our project.