Super Resolution Microscopy

From Psych 221 Image Systems Engineering
Revision as of 06:55, 21 March 2014

Schuyler Smith

Introduction

Microscopy is essential to many scientific fields. Advances in microscope design enable new discoveries. Optical microscopes, in particular, are used in everything from cellular biology to pharmaceutical research. Optical microscopes have several unique advantages that make them particularly useful, but they are also fundamentally limited by diffraction. In the past few decades many advances have been made in so-called super-resolution microscopy: techniques that subvert the diffraction limit to improve the resolution of optical microscopes.

This project investigates one general and powerful approach to super-resolution microscopy. Through experiments conducted with a camera simulating the microscope, we explore methods of breaking an image into many parts and reconstructing it in higher resolution.


Background

Diffraction is a fundamental consequence of the wave nature of light. After passing through any aperture, light will spread out slightly, which at very small scales causes blurring. The usual measure of the useful resolution of a diffraction-limited system is the Abbe diffraction limit, which states that the minimum resolvable distance is about λ/(2N), where λ is the wavelength of the light and N is the numerical aperture.

Creating microscopes with numerical apertures larger than about 1.4 is impractical, so the diffraction limit of most microscopes is at best about λ/3. For visible light around 600nm, this means the microscope can't normally resolve detail smaller than about 200nm. However, features smaller than this can still be imaged; they'll just be blurred. Hence, if we can ensure that features are optically isolated -- that is, that the point spread functions of different features don't significantly overlap -- we can correct for the blurring.
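As a quick sanity check, the limit is easy to compute. The sketch below is my own illustration (not part of the original experiment), evaluating the Abbe limit for the numbers above:

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Abbe diffraction limit: minimum resolvable distance d = lambda / (2 N)."""
    return wavelength_nm / (2 * numerical_aperture)

# 600nm light through the best practical numerical aperture (N ~= 1.4):
print(abbe_limit_nm(600, 1.4))  # ~214nm, i.e. roughly lambda/3
```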

There are several methods to ensure that features in an image will be optically isolated. One, called near-field scanning, moves a very small probe across the sample just nanometers above it. This close to the sample, diffraction isn't an issue -- essentially, at such a short distance the light doesn't have time to spread out. However, this method has significant drawbacks and often isn't practical. In particular, the probe collects very little light, so scanning the entire sample is slow.

One of the most general methods is called Spatially Modulated Illumination. By using very carefully controlled light patterns to illuminate the sample, images can be collected at high resolution relatively quickly. The light patterns ensure that small areas of the sample are illuminated at a time, and neighboring areas are dark. By moving the pattern (or sample) between exposures, we can cover the entire sample.

One way to create these light patterns is with multiple interfering lasers, as used by the Vertico SMI microscope. In fluorescence microscopy, fluorescent markers can be excited with a laser. Then, the size of the laser beam is less important than the density of markers of a given type. One way to improve localization in fluorescence microscopy is to deplete the markers around the point of interest (see [3]). However, different methods of modulation weren't explored in this project.

Microscopy methods that don't use visible light -- and hence aren't subject to the same diffraction limits -- exist, but they aren't suitable for all applications. For example, electron microscopy can't be used on live samples, so it can't be used to visualize cellular processes over time.


Methods

Test Setup

I used a real camera to simulate the microscope to experiment with SMI. The test images were displayed on a laptop screen, and the camera was pointed at the laptop, as seen below. The raw resolution of the camera was about twice that of the reference image on screen (i.e. the camera recorded about 2x2 pixels for every pixel in the reference image), but the lens used was stopped down to a very small aperture: f/32, so that the true resolution of the system was limited by diffraction.

[[File:skysmith_experimental_setup.jpg|600px]]

The test image chosen was a dollar bill. This image has three main advantages: it is very recognizable, it has some large smooth areas that will show the noise characteristics of the system, and it has a lot of high frequency detail at varying contrasts (in particular text of several sizes, and many fine parallel lines), to test the ability of the system to preserve detail. The reference image is 512x512 and greyscale for simplicity. Performing the tests in color would be analogous, but greyscale allowed white balance and color calibration issues to be ignored, both problems unique to the standard camera used for testing. Neither should be a problem in a tightly controlled microscope.

[[File:skysmith_dollar.png]]

To simulate the spatially modulated illumination, subsets of the pixels in the image were displayed on screen. Then, many "samples" were taken while the screen displayed different subsets of pixels. At the nanometer scale, exact control of points of illumination in this way isn't easy, so in this sense the assumption that points can be exactly individually illuminated is optimistic. This assumption was made to simplify the resulting procedures and make them feasibly implementable by one person in a few weeks.

Processing

The samples were processed in Lightroom before being run through the reconstruction algorithm, to correct lens distortion. An example of a processed sample is below. In a microscope the exact characteristics of the lens will be known, so this isn't unreasonable. The reconstruction algorithm independently corrects rotation and perspective for each image, to minimize the effect of camera movement between samples as much as possible. The colored squares at the corners of the samples were added to facilitate this.

[[File:skysmith_test_frame.jpg|512px]]

Finally, the reconstruction algorithm takes the samples and assembles them into a single composite super-resolution image. For each sample, the algorithm first locates all pixels that are significantly brighter than their surroundings. This pass usually identifies multiple adjacent pixels for each true point, due to the diffraction blurring. The algorithm then runs a flood fill on each contiguous region of bright pixels, to locate the center of the blurred point. This center is used as the derived location of that point, and the brightness of that point in the final reconstruction is the brightness at the center in the sample.
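The per-sample step can be sketched as follows. This is a simplified illustration, not the project's actual code: the "significantly brighter than surroundings" test is reduced to a fixed global threshold, and 4-connectivity is assumed for the flood fill.

```python
from collections import deque
import numpy as np

def reconstruct_sample(sample, threshold):
    """Locate blurred points in one sample; return (row, col, brightness) tuples.

    Assumption: 'bright' means above a fixed global threshold, which stands in
    for the real local-contrast test described in the text.
    """
    bright = sample > threshold
    visited = np.zeros_like(bright, dtype=bool)
    h, w = sample.shape
    points = []
    for r in range(h):
        for c in range(w):
            if bright[r, c] and not visited[r, c]:
                # Flood-fill the contiguous bright region (4-connectivity).
                region = []
                queue = deque([(r, c)])
                visited[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and bright[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                # The region's centroid is the derived point location; the
                # reconstruction uses the sample's brightness at that center.
                ys, xs = zip(*region)
                cy = round(sum(ys) / len(ys))
                cx = round(sum(xs) / len(xs))
                points.append((cy, cx, float(sample[cy, cx])))
    return points
```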


Experiments

The first experiment used randomly distributed light patterns. In particular, the pixels of the original image were partitioned randomly into 50 equal-size subsets, one for each sample. This led to pixels that were usually well isolated. A resulting reconstructed image can be seen below. There is significant blurring due to problems localizing the pixels: points in the reconstructed image were generally within one pixel of where they should have been, but enough were off to cause significant detail loss and artifacts. This may have been due to remaining uncorrected lens distortion, or slight shifts in the camera between frames (movement parallel to the screen and roll were corrected for, but pitch and yaw are more difficult).

[[File:skysmith_reconstructed1.png|512px]]
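The random partition itself is straightforward. A minimal sketch (function and parameter names are mine, not from the project):

```python
import numpy as np

def random_partition_masks(size=512, n_samples=50, seed=0):
    """Randomly partition all size*size pixels into n_samples (nearly)
    equal-size subsets; returns one boolean illumination mask per sample."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(size * size)            # shuffle raster indices
    masks = []
    for chunk in np.array_split(order, n_samples):  # ~equal-size chunks
        m = np.zeros(size * size, dtype=bool)
        m[chunk] = True
        masks.append(m.reshape(size, size))
    return masks
```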

To alleviate these problems, a more structured light pattern was used. For these tests 56 samples were used, and each sample showed every 56th pixel in raster order. 56 was chosen because 512 is not a multiple of 56 (512 = 9*56 + 8), so the illuminated lattice shifts by 8 columns from one row to the next, and wraps around every 7 rows (7*8 = 56). Hence, each point is separated from the others in its sample by approximately 7-8 pixels. The reconstruction code was then modified to "snap" each detected PSF centerpoint to the nearest pixel that was expected in that sample. This completely eliminated all point localization problems. The resulting image can be seen below. For all further experiments, this regular pattern was used.

[[File:skysmith_reconstructed2.png|512px]]
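The structured pattern and the snapping step can be sketched like this (again illustrative; helper names are mine):

```python
import numpy as np

SIZE, STEP = 512, 56  # image side length; one sample per offset 0..55

def expected_coords(k, size=SIZE, step=STEP):
    """(row, col) of every step-th pixel, in raster order, for sample k."""
    idx = np.arange(k, size * size, step)
    return np.stack([idx // size, idx % size], axis=1)

def snap(center, coords):
    """Snap a detected PSF center to the nearest expected pixel of its sample."""
    d2 = ((coords - np.asarray(center)) ** 2).sum(axis=1)
    return tuple(int(v) for v in coords[np.argmin(d2)])
```

Because 512 mod 56 = 8, the illuminated lattice shifts 8 columns per row, which is what produces the roughly 7-8 pixel spacing between points in a sample.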

Finally, the images were filtered slightly before display to remove any remaining artifacts. This significantly improved the subjective appearance of the images, but did muddy some fine detail, as seen below. Below, the unfiltered version is on the right, and the filtered version is on the left. Note that the overall appearance improves, but the pattern in Washington's coat is much less clear.

[[File:skysmith_filtering_comparison.png]]


Results

Qualitative Results

Below is a comparison of three different blurred images, the reconstructed images for the same amounts of diffraction/blurring, and the original image. To simulate diffraction more severe than could be created in-camera, for the latter sets of images below, the samples were convolved with an Airy disk. D2, D4, and D10 correspond to diffraction-limited images where the Airy disk has radius 2, 4, and 10 pixels respectively (distance to 50% intensity, specifically). Similarly, R2, R4, and R10 correspond to the reconstructed images with the same amount of diffraction. The original image is reproduced twice for comparison.

[[File:skysmith_comparison_grid.png|800px]]
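The blur simulation can be sketched as below. This is my own illustration, not the project's code; the "50% intensity" radius is implemented by scaling the dimensionless Airy pattern (2*J1(x)/x)^2, which falls to half its peak at x of about 1.616.

```python
import numpy as np
from scipy.special import j1  # first-order Bessel function of the first kind

def airy_psf(radius_50, size=None):
    """Airy-disk PSF whose intensity falls to 50% at radius_50 pixels."""
    if size is None:
        size = int(6 * radius_50) | 1            # odd width, covers several rings
    c = size // 2
    d = np.hypot(*np.mgrid[-c:c + 1, -c:c + 1])  # pixel distance from center
    x = 1.6163 * d / radius_50                   # scale so I(radius_50) ~= 0.5
    psf = np.ones_like(x)                        # limit of (2*J1(x)/x)^2 at x=0
    nz = x > 0
    psf[nz] = (2 * j1(x[nz]) / x[nz]) ** 2
    return psf / psf.sum()                       # normalize to preserve brightness
```

Each sample would then be convolved with airy_psf(2), airy_psf(4), or airy_psf(10) (e.g. via scipy.signal.fftconvolve) to produce the D2/D4/D10 conditions.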

As we can see, the results are a significant improvement at all diffraction levels. Note in particular that the "Washington" text in the center and the "The Department (of the Treasury)" text on the right side are mostly illegible in D2, but clearly defined in R2. The "Legal Tender Public and Private" text on the left side is similarly illegible in D10 but mostly clear in R10. The texture in Washington's forehead and hair is clearly visible in R2 and R4, but not in D2 or D4.

Overall, R2 is an almost perfect reconstruction. R4 is still very good, but with noticeably more artifacts. R10 is significantly worse. In particular, the R10 reconstruction clips a lot of the darker tones to black (so adjusting the contrast of the image makes little difference). This is because points in the samples used for R10 weren't well isolated: locating points is more difficult in the blurriest samples, where everything is nearly uniform and local maxima are unclear. Hence, the algorithm tends to miss dark pixels completely (they can be swamped by the edges or rings from brighter nearby pixels). With further tuning this problem could likely be alleviated somewhat, but for good performance the spacing between pixels would likely have to be increased.

Quantitative Results

Plots of the frequency information contained in each image are misleading, because noise appears as high frequency detail. However, we can quantify the performance of the algorithm by measuring the root mean squared error between the reconstructed images and the reference image. That is, we measure the difference between each pixel value and the expected value in the reference image, then take the RMS average of the differences to estimate how different the entire image is. These measurements can be seen below. Lower is better, and 0 would be perfect.
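The metric itself is simple; as a sketch:

```python
import numpy as np

def rmse(image, reference):
    """Root-mean-squared pixel difference between two greyscale images."""
    diff = np.asarray(image, dtype=float) - np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))
```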

{| class="wikitable"
! Image !! RMSE
|-
! D2
| 10.6
|-
! R2
| 3.4
|-
! D4
| 17.2
|-
! R4
| 7.5
|-
! D10
| 26.1
|-
! R10
| 30.8
|}

By this measure, the reconstructed images (except for R10, because of the black clipping discussed above) are roughly 2-3x better than their diffraction-limited counterparts.

Conclusions and Next Steps

Clearly, spatially modulated illumination works well in principle. In the experiment, the spacing between sample points was about 7 pixels. When a sample point's Airy disk had a radius of 2-4 pixels, performance was very good and the image could be reconstructed very accurately. However, with a blur radius of 10 pixels, reconstruction was poor. Hence, we can also conclude that reconstruction really does perform better when points in each sample are well isolated, which matches intuition.

The most interesting follow-up investigation would probably be to address the biggest assumption made in the existing analysis: the effect of different modulation techniques, which in practice won't be as simple as the single pixels investigated here. Interferometric techniques are significantly more complex, but potentially much more powerful.

References

[1] Mats G. L. Gustafsson. "Nonlinear structured-illumination microscopy: Wide-field fluorescence imaging with theoretically unlimited resolution" 2005.

[2] R. Heintzmann, C. Cremer. "Laterally Modulated Excitation Microscopy: Improvement of resolution by using a diffraction grating" 1999.

[3] Stefan Hell, Jan Wichmann. "Breaking the diffraction resolution limit by stimulated emission: stimulated-emission-depletion fluorescence microscopy" 1994.

[4] Jürgen Reymann, et al. "High-precision structural analysis of subnuclear complexes in fixed and live cells via spatially modulated illumination (SMI) microscopy" 2008.

[5] Bernhard Schneider, et al. "High Precision Localization of Fluorescent Targets in the Nanometer Range by Spatially Modulated Excitation Fluorescence Microscopy" 1998.

Appendix

Source code can be found at the following link: