Andrey, Igor, Evgeny

From Psych 221 Image Systems Engineering

Introduction

  1. In the semiconductor industry, electronic microchips such as CPUs and flash memories for various devices are fabricated on silicon wafers.
  2. This is a very complicated process that takes a long time and requires the wafer to pass through many different process steps on different tools/machines.
  3. Before wafer processing or measurement, it is very important to place the wafer in a specific orientation and to be able to identify its edges for centering purposes.
  4. Every silicon wafer has a specific flat or notch, from which an alignment device can identify the exact wafer orientation.
  5. To identify wafer edges, we use a camera to image the wafer and apply an image recognition algorithm to the produced image.
  6. In our project we investigate the impact of different camera apertures and gain values on the edge detection algorithm.


Background

Edge detection is a technique for locating and identifying sharp discontinuities in an image. Discontinuities here are sudden changes in pixel intensity that characterize the boundaries of objects in a scene. Standard methods detect edges by convolving the image with an edge detection operator constructed to be sensitive to large gradients while returning zero in uniform regions. A large number of edge detection techniques are available today, each designed to be sensitive to certain types of edges. Edge orientation is one of the variables an operator can take into account: the geometry of an operator determines its characteristic direction, the direction in which it is most sensitive to edges. Operators can be optimized to look for vertical, horizontal, or diagonal edges. In a noisy image, finding edges is very difficult because both edges and noise contain high frequencies.

There are several major edge detection techniques:

1. Sobel Operator

The Sobel operator is one of the operators used to find edges in the field of image processing. It is a discrete differentiation operator that computes an approximation of the gradient of the image intensity function. The result of the Sobel operator at each point in the image is either the corresponding gradient vector or the norm of this vector. The operator is based on convolving the image with a small, separable, integer-valued filter in the horizontal and vertical directions, so it is relatively inexpensive in terms of computation.

Figure 1: Original image and Sobel operator operation result.
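
A minimal sketch of the Sobel computation described above, assuming NumPy/SciPy (illustrative, not the project's code):

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 Sobel kernels for the horizontal and vertical gradient components
KX = np.array([[-1, 0, 1],
               [-2, 0, 2],
               [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_magnitude(img):
    """Approximate gradient magnitude of a 2-D grayscale image."""
    gx = convolve(img.astype(float), KX)
    gy = convolve(img.astype(float), KY)
    return np.hypot(gx, gy)  # norm of the gradient vector

# Synthetic image with a vertical step edge between columns 3 and 4
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = sobel_magnitude(img)  # response is concentrated at the step
```

Because the filter is small and integer-valued, each output pixel costs only a handful of multiply-adds, which is what makes the operator cheap.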


2. Robert’s Cross Operator

The Roberts Cross operator performs a simple and quick computation of the 2-D spatial gradient of an image. Pixel values at each point in the output represent the estimated absolute magnitude of the spatial gradient of the input image at that point. The operator consists of a pair of 2×2 convolution kernels, one of which is the other rotated by 90°; in this respect it is very similar to the Sobel operator. The kernels are designed to respond maximally to edges running at 45° to the pixel grid, one kernel for each of the two perpendicular orientations. The kernels can be applied separately to the input image to produce separate measurements of the gradient component in each orientation, Gx and Gy.

Figure 2: Masking used for Roberts operator.
Figure 3: Original image and Roberts operator operation result.
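
The pair of kernels and their combination can be sketched as follows (illustrative; the |Gx| + |Gy| sum is a common fast approximation of the Euclidean gradient magnitude):

```python
import numpy as np
from scipy.ndimage import convolve

# The pair of 2x2 Roberts Cross kernels; one is the other rotated by 90 degrees
R1 = np.array([[1, 0],
               [0, -1]], dtype=float)
R2 = np.array([[0, 1],
               [-1, 0]], dtype=float)

def roberts_magnitude(img):
    """Estimated absolute magnitude of the spatial gradient, |Gx| + |Gy|."""
    gx = convolve(img.astype(float), R1)  # one diagonal orientation
    gy = convolve(img.astype(float), R2)  # the perpendicular orientation
    return np.abs(gx) + np.abs(gy)
```

Applying each kernel separately yields the Gx and Gy measurements mentioned above; uniform regions map to zero.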


3. Prewitt Operator

The Prewitt operator is another edge detection operator similar to the Sobel operator. It is generally used for finding both vertical and horizontal edges in images.

Figure 4: Masking used for Prewitt operator.
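
The Prewitt masks can be applied the same way as the Sobel kernels; the only difference is the unweighted averaging row (illustrative sketch, not the project's code):

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 Prewitt kernels: unweighted averaging across the differentiation direction
PX = np.array([[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]], dtype=float)
PY = PX.T

def prewitt_edges(img):
    """Gradient magnitude using the Prewitt masks (vertical + horizontal edges)."""
    img = img.astype(float)
    return np.hypot(convolve(img, PX), convolve(img, PY))
```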


4. Canny Edge Detector

The Canny operator takes a grayscale image as input and produces as output an image showing the positions of tracked intensity discontinuities. It works in a multi-stage process. First, the image is smoothed by Gaussian convolution. Then a simple 2-D first-derivative operator is applied to the smoothed image to highlight regions with high first spatial derivatives. Edges give rise to ridges in the gradient magnitude image. The algorithm then tracks along the top of these ridges and sets to zero all pixels that are not actually on a ridge top, so as to give a thin line in the output, a process known as non-maximal suppression. The tracking process exhibits hysteresis controlled by two thresholds, T1 and T2, with T1 > T2. Tracking can only begin at a point on a ridge higher than T1, and then continues in both directions out from that point until the height of the ridge falls below T2. This hysteresis helps to ensure that noisy edges are not broken up into multiple edge fragments.

Figure 5: Canny operator operation result and Original image.
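
The smoothing, differentiation, and hysteresis stages can be sketched as follows (a simplified illustration: full non-maximum suppression along the gradient direction is omitted, and the threshold values are arbitrary):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label, sobel

def canny_sketch(img, sigma=1.0, t_low=0.1, t_high=0.3):
    """Simplified Canny: Gaussian smoothing, differentiation, hysteresis.

    t_high and t_low play the roles of T1 and T2 from the description above.
    """
    smoothed = gaussian_filter(img.astype(float), sigma)
    mag = np.hypot(sobel(smoothed, axis=1), sobel(smoothed, axis=0))
    strong = mag > t_high                 # tracking may only start here (T1)
    weak = mag > t_low                    # tracking may continue here (T2)
    regions, _ = label(weak)              # connected regions of above-T2 pixels
    keep = np.unique(regions[strong])     # regions containing a strong pixel
    return np.isin(regions, keep[keep > 0])
```

The connected-component step mimics the hysteresis tracking: a weak pixel survives only if its ridge connects to at least one pixel above T1.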

Methods

1. HW Setup

Figure 6: HW Setup.

The notch and edge finder is a functional unit which includes a telecentric strobe illuminator and an imaging camera (see Figure 6). The unit communicates with the motion controller and the computer. Its main function is to generate an image sequence. The controller provides synchronization signals to the camera and the strobe controller.

The camera is a monochrome Mightex USB 2.0 camera with built-in frame buffers, external trigger-in, strobe-out, and a powerful camera engine that supports multiple cameras. Monochrome cameras often exhibit about 20% higher spatial resolution than their color counterparts because no pixel interpolation is necessary. Since there is no Bayer color filter on the sensor, monochrome cameras are also more sensitive than color cameras, especially in the near-IR and UV regions. The camera's data sheet is available at the following link [1]

2. Tests

We performed a set of tests:

  • Edge calculation accuracy as a function of aperture
  • Edge calculation accuracy as a function of gain

We took 4 images of a static object (a 300 mm substrate/wafer) while gradually changing the aperture (f/1.4 → f/2.8 → f/6), and 4 images while changing the gain from nominal to 85/70/55. We then differentiated each image in the x and y directions and added the resulting matrices together. At the end of this stage we had 4 differentiated images in which the edge is visible.
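
The differentiate-and-sum step can be sketched as follows (illustrative, not the project's code; absolute values are used so that edges of either sign contribute, which the write-up leaves unspecified):

```python
import numpy as np

def edge_image(img):
    """Differentiate the image in x and y and combine the two results."""
    gy, gx = np.gradient(img.astype(float))  # derivatives along rows, columns
    return np.abs(gx) + np.abs(gy)           # uniform background maps to zero

# Each captured frame would be processed the same way, e.g.:
# edge_maps = [edge_image(frame) for frame in frames]
```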

Figure 7: Aperture value change

3. Image Processing

  • All images were differentiated in two orthogonal directions, x and y
  • As a result we obtained the contour of the item without the background
Figure 8: Contour without background

4. Subpixel recognition of edge position

As a next step we performed a subpixel calculation of the edges we obtained. For that we applied the following algorithm:

  1. Find the maximum value in each column
  2. Take N values around the maximum (2 from each side)
  3. Perform a parabola fit/Gaussian fit to those points (per the lens PSF)
  4. The vertex of the fit gives the real subpixel edge position
  5. Repeat this algorithm for each column
Figure 9: Subpixel recognition
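
A minimal sketch of the parabola-fit variant of the steps above (illustrative only; the Gaussian-fit option and any PSF-specific handling are omitted):

```python
import numpy as np

def subpixel_edge(edge_img):
    """Per-column subpixel edge position: parabola fit around the column maximum."""
    rows, cols = edge_img.shape
    positions = np.full(cols, np.nan)
    for c in range(cols):                      # step 5: repeat for each column
        col = edge_img[:, c].astype(float)
        m = int(np.argmax(col))                # step 1: maximum value per column
        if m < 2 or m > rows - 3:
            continue                           # too close to the border to fit
        x = np.arange(m - 2, m + 3)            # step 2: N = 5 samples (2 per side)
        a, b, _ = np.polyfit(x, col[x], 2)     # step 3: fit y = a*x^2 + b*x + c
        if a < 0:
            positions[c] = -b / (2 * a)        # step 4: vertex = subpixel edge
    return positions
```

The vertex formula -b/(2a) follows from setting the derivative of the fitted parabola to zero, which is what moves the edge estimate off the integer pixel grid.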

5. Subpixel recognition for different Aperture values

  • We now have a number of numerical arrays with the sub-pixel recognized substrate edge
  • As a next step we compare their positions relative to each other
Figure 10: Subpixel recognition for different apertures

6. Raw data collection for Digital Gain changes

  • We created a set of images while changing the camera gain
  • All images were taken while the silicon substrate and camera were static
Figure 11: Digital gain change

7. Subpixel recognition for different Camera Gain values

  • We now have a number of numerical arrays with the sub-pixel recognized substrate edge
  • As a next step we compare their positions relative to each other
  • The last image, with the most strongly reduced gain, is no longer usable for further analysis
Figure 12: Subpixel recognition for different Camera Gain values

Results

Figure 13: Aperture change results

1. Results for Aperture changes

  • We performed 2 calculations to understand the effect of aperture change:
    • COG (center of gravity) of the edges we obtained
    • Circle fit of the edges we obtained
  • After these calculations we see 2 phenomena:
    • The saturated image is >200 um away from the other images (y direction) when calculating the circle fit
    • We see a ~5 um difference when calculating the COG (x direction)
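
The write-up does not include the fitting code; as an illustration, the COG is a simple average of the edge coordinates, and the circle can be fitted with the algebraic (Kasa) least-squares method, one common choice:

```python
import numpy as np

def cog(coords):
    """Centre of gravity (mean) of the subpixel edge coordinates."""
    return float(np.mean(coords))

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit to the edge points.

    Solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense,
    then converts to centre (cx, cy) and radius r.
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r
```

Comparing cx, cy across the aperture images gives the x- and y-direction shifts reported above.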

Figure 14: Gain change results

2. Results for Gain changes

  • We performed 2 calculations to understand the effect of gain change:
    • COG of the edges we obtained
    • Circle fit of the edges we obtained
  • After these calculations we see 2 phenomena:
    • A change in gain causes a loss of accuracy of ~1000 um in the center of the calculated circle (y direction) per gain change (20 counts of light intensity)
    • When we compare the COG results (x direction), they seem similar: about a 40 um difference per gain change


Conclusions

  • The saturated image gave the largest deviation of substrate edge position and edge center fit relative to the other, non-saturated images: ~245 um
  • Images with f-numbers between 2.8 and 6 were within 5–10 um of each other
  • We conclude that the influence of aperture is critical when working close to saturation, where the measured physical position becomes distorted; in all other cases, down to light starvation, the accuracy of the edge physical position is very good, about 0.1 pixel = 5–10 um
  • A gain change gradually increases the image processing error and is not a good way to overcome saturation; we see that a gain change shifts the measured physical position of the substrate every time it is applied
  • A large decrease in gain made it completely impossible to recognize the substrate edge
  • We see large noise in the images where only the gain was changed

References

[1] Mokrzycki W.S., Samko M.A., New edge detection algorithm in colour image using perception function

[2] Shaveta Malik and Tapas Kumar, Comparative Analysis of Edge Detection between Gray Scale and Color Image

[3] Ranjeet Kumar Singh and Dilip Kumar Shaw, Experimental Analysis of Impact of Noise on Various Edge Detection Techniques

Source code

Zipped source code can be found here: Media:RawData_Code.zip

Appendix

Andrey’s role: to prepare the HW setup and develop the algorithm

Evgeny’s role: to develop the algorithm and run tests

Igor’s role: to develop the algorithm and summarize test results