<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://vista.su.domains/psych221wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Sirong88</id>
	<title>Psych 221 Image Systems Engineering - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="http://vista.su.domains/psych221wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Sirong88"/>
	<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Special:Contributions/Sirong88"/>
	<updated>2026-04-18T19:33:30Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.45.3</generator>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33761</id>
		<title>Simulation of pixel-size impact for optical brightfield wafer defect inspection</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33761"/>
		<updated>2023-12-18T06:00:35Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: /* Is Signal to Noise Ratio not impacted by the pixel size? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Hardly a day goes by without us relying on technology, yet we rarely stop to think about the foundation that enables this way of life: semiconductors. Early computers filled entire rooms; today we hold them in our palms and pay hundreds or thousands of dollars for them. With the world revolving around digital transformation and technology, semiconductor manufacturers face the ever-increasing challenges of creating more complex chip designs and exploring new processes and materials. They also have to grapple with yield loss, an ever-present problem.&lt;br /&gt;
&lt;br /&gt;
Typically, advanced semiconductor nodes are made by combining many processes on a silicon wafer to fabricate the dies containing the intended designs. The dies are then diced into individual chips before being packaged and shipped. As complexity grows, so does the number of processes, and thus the number of manufacturing steps. Each step is an inception point for defects, which can cause chips to fail and be discarded. The earlier defects are caught, the less material is wasted. Since fabs achieve economies of scale through high-volume manufacturing [https://www.hitachi.com/rev/archive/2022/r2022_04/pdf/04b01.pdf], every discarded chip eats into operating margin. If the yield of a fab is low, these costs get passed on to the device manufacturer, and ultimately to us, the consumers! Yield control and improvement is therefore critical, and can be supported by inspecting the dies before they proceed to packaging.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
[[File:1 Intro.JPG|thumb|Common defects that can find their way onto wafer pattern designs [https://www.mdpi.com/2079-9292/12/1/76].]]&lt;br /&gt;
[[File:2 Camera Type.JPG|thumb|Various optical defect detection methods [https://www.mks.com/mam/celum/celum_assets/Defect-Detection-Non-Patterned-Wafers_800w.jpg].]]&lt;br /&gt;
&lt;br /&gt;
Wafer defects can be broadly classified into the following categories: (a) particles, commonly airborne dust or contamination that sticks to the wafer surface, which are small; (b) scratches, typically caused by instrument faults, which are continuous features; (c) ripples, commonly due to film interference from coating defects during lithography, which are irregularly shaped with fringing patterns; and (d) stains, due to contamination from lithography, which are large patchy features. Depending on their type and location, defects may be further classified as killer defects, also known as defects of interest. For example, a break in a data-carrying connector would not be acceptable.&lt;br /&gt;
&lt;br /&gt;
Defect analysis involves finding the defect, and optical methods are well-suited for direct detection. A few common optical detection methods are shown; as their names suggest, rotating non-patterned wafer inspection and specular reflection are commonly used for bare wafers. Dark-field imaging is best suited for finding small defects such as particles and surface defects like faint scratches. Bright-field imaging is used for macroscopic detection, and its results are the most intuitive because the defect sensitivity is equivalent to the system resolution [https://www.mdpi.com/2072-666X/14/8/1568#sec2dot2-micromachines-14-01568]. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:3 Different pixel size.JPG&lt;br /&gt;
File:4 Different quantam efficiencies.JPG&lt;br /&gt;
File:5 Different noise performance.JPG&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Automated defect detection of a substrate relies on a system with both hardware and software components. The software handles machine control and the detection algorithm, which is detailed in the Methods section. Focusing on the hardware imaging path, we have the imaging optics and the camera. The imaging optics introduce an optical blur. The camera is our hardware of interest because it has a finite lifetime and usually undergoes multiple revisions in a product&#039;s lifecycle. However, choosing a camera suited for a particular application is complex, as the camera itself has multiple parameters, such as pixel size, quantum efficiency, and noise performance.&lt;br /&gt;
&lt;br /&gt;
== Methods ==&lt;br /&gt;
[[File:8 simulation flow.JPG|400px|Generating an optically blurred image.]]&lt;br /&gt;
&lt;br /&gt;
In our project, we take a simplified approach and consider only changing the pixel size while keeping other parameters, such as camera noise performance, quantum efficiency, and illumination, constant. A simple microscope is simulated using a 10X objective with NA=0.5, a 200mm tube lens, and monochromatic illumination at 550 nm. We assume the illumination flux on the sensor is 100 photons/s/&amp;lt;math&amp;gt;\mu m^2&amp;lt;/math&amp;gt;. The FOV at the sensor plane is 2000 um by 2000 um, and we vary the pixel size from 5um to 12um to determine the optimal pixel size for our defect of interest. &lt;br /&gt;
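The optical parameters above can be sanity-checked with a short sketch (illustrative only; the magnification, NA, and wavelength are the values stated in this section, while the use of the Rayleigh criterion is a standard assumption, not something taken from this report):

```python
# Back-of-envelope scale check for the simulated microscope.
# All parameter values are those stated in the Methods text.
wavelength_um = 0.550    # monochromatic illumination, 550 nm
na = 0.5                 # objective numerical aperture
magnification = 10       # 10X objective with 200 mm tube lens
defect_um = 1.0          # defect size at the wafer (object plane)

# Rayleigh resolution at the object plane, projected to the sensor plane
resolution_obj_um = 0.61 * wavelength_um / na             # ~0.67 um at the wafer
resolution_sensor_um = resolution_obj_um * magnification  # ~6.7 um at the sensor

# A 1 um defect maps to a 10 um feature at the sensor plane, which is why
# pixel sizes approaching 10 um are of particular interest in this study.
defect_sensor_um = defect_um * magnification

print(resolution_sensor_um, defect_sensor_um)
```

This also shows that the optical blur (~6.7 um at the sensor) is smaller than the imaged defect, so the defect remains resolvable across the simulated pixel-size range.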
&lt;br /&gt;
Our defects of interest are 1um by 1um squares (400 of them), with a reflectivity 10% lower than the background. The defects are placed on a 10um square grid, with a random +/- 1um shift in both directions to randomise their positions.&lt;br /&gt;
&lt;br /&gt;
[[File:9 simulation flow with equation.JPG|400px|Simulating noise and getting a sensor readout.]]&lt;br /&gt;
[[File:15 threshold.JPG|thumb|Simple defect detection algorithm.]]&lt;br /&gt;
&lt;br /&gt;
To simulate an optical image, the perfect defect image is convolved with a generated PSF. Noise and sensor readout are then simulated to produce a sensor image. The detection algorithm is applied to this sensor image to decide which features are defects. Detection is based on a simple thresholding algorithm using the intensity difference between a candidate defect and the background. The background level is estimated as the mean of the image at the edges, where there are no defects.&lt;br /&gt;
If the intensity difference exceeds the set threshold, the feature is classified as a defect. In our simulation, we used a threshold of 1000 DN.&lt;br /&gt;
&lt;br /&gt;
The capture rate of the system can be then calculated:&lt;br /&gt;
Capture Rate = (detected defects)/(total number of defects)&lt;br /&gt;
&lt;br /&gt;
A high capture rate is desirable for a defect inspection system.&lt;br /&gt;
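The thresholding and capture-rate computation described above can be sketched as follows (a minimal illustration, not the report's actual code; the background level, noise level, defect depth, and image size are assumed values chosen only to demonstrate the logic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sensor parameters, for illustration only
background_dn = 20000.0                 # mean background level (DN)
noise_dn = 100.0                        # per-pixel noise (DN)
defect_drop_dn = 2000.0                 # defects are darker than the background

sensor = rng.normal(background_dn, noise_dn, size=(200, 200))
defect_pixels = [(50, 50), (120, 80)]   # assumed known defect locations
for r, c in defect_pixels:
    sensor[r, c] -= defect_drop_dn

# Background estimate: mean over the image edges, where no defects are placed
edges = np.concatenate([sensor[0, :], sensor[-1, :], sensor[:, 0], sensor[:, -1]])
background = edges.mean()

# A pixel is flagged as a defect when it is darker than the background
# estimate by more than the 1000 DN threshold used in the text
threshold_dn = 1000.0
detected = (background - sensor) > threshold_dn

# Capture rate = detected defects / total number of defects
capture_rate = sum(detected[r, c] for r, c in defect_pixels) / len(defect_pixels)
print(capture_rate)
```

With these assumed numbers the defect signal (2000 DN) sits well above the threshold, so both defects are captured; the interesting regime in the report is where blur and pixel averaging push the signal toward the threshold.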
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Background intensity and noise vs. pixel size ===&lt;br /&gt;
[[File:16 BG intensity.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
In our simulation, the background intensity increases quadratically with the pixel size until it saturates at a pixel size of 11.3um. This is expected, since pixel area grows quadratically with pixel size. &lt;br /&gt;
&lt;br /&gt;
[[File:17 BG Noise.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
Background noise increases linearly with the pixel size until it saturates around a pixel size of 11.3um. This is also expected: the number of photons collected is proportional to the pixel area, and shot noise scales as the square root of the photon count, hence linearly with pixel size. &lt;br /&gt;
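These two scaling behaviours can be illustrated with a short calculation (a sketch using the 100 photons/s/um^2 flux stated in the Methods; the 1 s exposure time is an assumption):

```python
import math

flux = 100.0       # photons/s/um^2, as stated in the Methods
exposure_s = 1.0   # assumed exposure time

for pixel_um in (5.0, 7.0, 10.0, 12.0):
    photons = flux * exposure_s * pixel_um ** 2  # signal ~ area ~ (pixel size)^2
    shot_noise = math.sqrt(photons)              # Poisson noise ~ sqrt(signal) ~ pixel size
    print(pixel_um, photons, shot_noise)
```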
&lt;br /&gt;
=== Capture rates when the pixel size and effective defects size match ===&lt;br /&gt;
&lt;br /&gt;
The simulated capture rate is calculated for a 1um defect at different pixel sizes. &lt;br /&gt;
&lt;br /&gt;
==== Sensor with 7um pixels ====&lt;br /&gt;
[[File: Capture rate at 7um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
With a 7um pixel sensor, the capture rate is only around 50%. &lt;br /&gt;
&lt;br /&gt;
[[File: 14 Sensor with 7um pixel.JPG |300px]] [[File: 19 Defect Intensity 7um.JPG |400px]]&lt;br /&gt;
&lt;br /&gt;
Black dots are the 1um defects, and green circles mark the locations our simulation detects as defects.&lt;br /&gt;
&lt;br /&gt;
==== Sensor with 10um pixels ====&lt;br /&gt;
[[File: Capture rate at 10um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
With a 10um pixel sensor, the capture rate is around 100%. &lt;br /&gt;
&lt;br /&gt;
[[File:20 Sensor with 10um pixel.JPG|300px]] [[File: 21 Defect Intensity 10um.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
As shown above, our simulation detects more defects with the 10um pixel sensor. &lt;br /&gt;
&lt;br /&gt;
==== Sensor with 11.4um pixels ====&lt;br /&gt;
[[File: Capture rate at 11.4um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
Starting from 11.3um, saturation kicks in and the capture rate starts to decrease. By 11.4um, the capture rate is 0. &lt;br /&gt;
&lt;br /&gt;
[[File: 22 Sensor with 11um pixel.JPG|300px]] [[File: 23 Defect Intensity 11um.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
=== Is Signal to Noise Ratio not impacted by the pixel size? ===&lt;br /&gt;
[[File: 24 SNR to pixel size.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
From the result of our simulation, the signal-to-noise ratio is NOT strongly impacted by the pixel size; it also depends on the detection algorithm. From the graph, the SNR trend does not follow the linear reference line. &lt;br /&gt;
&lt;br /&gt;
We would expect a linear trend of SNR versus pixel size for larger defects or for a higher reflectivity difference, where the signal and background are further separated.&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
We have successfully established a simulation model for bright-field defect detection microscopy for semiconductor applications. Moreover, for the system used in our simulation, we have found that detection is optimized when the pixel size matches the size of the defect of interest as imaged at the sensor plane.&lt;br /&gt;
&lt;br /&gt;
For future work, our simulation model could use illumination that is varied with pixel size to achieve a specific gray level. Our scope could also be expanded to compute the contributions from other parameters. &lt;br /&gt;
&lt;br /&gt;
== Appendix ==&lt;br /&gt;
&lt;br /&gt;
Camera parameters, taken from Photonic Science.&lt;br /&gt;
[[File:7 actual CMOS camera.JPG|left|400px|Camera parameters used for the simulation|https://photonicscience.com/products/optical-cameras/cooled-scmos-camera/]]&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33750</id>
		<title>Simulation of pixel-size impact for optical brightfield wafer defect inspection</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33750"/>
		<updated>2023-12-18T05:54:11Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: /* Capture rates when the pixel size and effective defects size match */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Hardly a day goes by without us relying on technology, yet we rarely stop to think about the foundation that enables this way of life: semiconductors. Early computers filled entire rooms; today we hold them in our palms and pay hundreds or thousands of dollars for them. With the world revolving around digital transformation and technology, semiconductor manufacturers face the ever-increasing challenges of creating more complex chip designs and exploring new processes and materials. They also have to grapple with yield loss, an ever-present problem.&lt;br /&gt;
&lt;br /&gt;
Typically, advanced semiconductor nodes are made by combining many processes on a silicon wafer to fabricate the dies containing the intended designs. The dies are then diced into individual chips before being packaged and shipped. As complexity grows, so does the number of processes, and thus the number of manufacturing steps. Each step is an inception point for defects, which can cause chips to fail and be discarded. The earlier defects are caught, the less material is wasted. Since fabs achieve economies of scale through high-volume manufacturing [https://www.hitachi.com/rev/archive/2022/r2022_04/pdf/04b01.pdf], every discarded chip eats into operating margin. If the yield of a fab is low, these costs get passed on to the device manufacturer, and ultimately to us, the consumers! Yield control and improvement is therefore critical, and can be supported by inspecting the dies before they proceed to packaging.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
[[File:1 Intro.JPG|thumb|Common defects that can find their way onto wafer pattern designs [https://www.mdpi.com/2079-9292/12/1/76].]]&lt;br /&gt;
[[File:2 Camera Type.JPG|thumb|Various optical defect detection methods [https://www.mks.com/mam/celum/celum_assets/Defect-Detection-Non-Patterned-Wafers_800w.jpg].]]&lt;br /&gt;
&lt;br /&gt;
Wafer defects can be broadly classified into the following categories: (a) particles, commonly airborne dust or contamination that sticks to the wafer surface, which are small; (b) scratches, typically caused by instrument faults, which are continuous features; (c) ripples, commonly due to film interference from coating defects during lithography, which are irregularly shaped with fringing patterns; and (d) stains, due to contamination from lithography, which are large patchy features. Depending on their type and location, defects may be further classified as killer defects, also known as defects of interest. For example, a break in a data-carrying connector would not be acceptable.&lt;br /&gt;
&lt;br /&gt;
Defect analysis involves finding the defect, and optical methods are well-suited for direct detection. A few common optical detection methods are shown; as their names suggest, rotating non-patterned wafer inspection and specular reflection are commonly used for bare wafers. Dark-field imaging is best suited for finding small defects such as particles and surface defects like faint scratches. Bright-field imaging is used for macroscopic detection, and its results are the most intuitive because the defect sensitivity is equivalent to the system resolution [https://www.mdpi.com/2072-666X/14/8/1568#sec2dot2-micromachines-14-01568]. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:3 Different pixel size.JPG&lt;br /&gt;
File:4 Different quantam efficiencies.JPG&lt;br /&gt;
File:5 Different noise performance.JPG&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Automated defect detection of a substrate relies on a system with both hardware and software components. The software handles machine control and the detection algorithm, which is detailed in the Methods section. Focusing on the hardware imaging path, we have the imaging optics and the camera. The imaging optics introduce an optical blur. The camera is our hardware of interest because it has a finite lifetime and usually undergoes multiple revisions in a product&#039;s lifecycle. However, choosing a camera suited for a particular application is complex, as the camera itself has multiple parameters, such as pixel size, quantum efficiency, and noise performance.&lt;br /&gt;
&lt;br /&gt;
== Methods ==&lt;br /&gt;
[[File:8 simulation flow.JPG|400px|Generating an optically blurred image.]]&lt;br /&gt;
&lt;br /&gt;
In our project, we take a simplified approach and consider only changing the pixel size while keeping other parameters, such as camera noise performance, quantum efficiency, and illumination, constant. A simple microscope is simulated using a 10X objective with NA=0.5, a 200mm tube lens, and monochromatic illumination at 550 nm. We assume the illumination flux on the sensor is 100 photons/s/&amp;lt;math&amp;gt;\mu m^2&amp;lt;/math&amp;gt;. The FOV at the sensor plane is 2000 um by 2000 um, and we vary the pixel size from 5um to 12um to determine the optimal pixel size for our defect of interest. &lt;br /&gt;
&lt;br /&gt;
Our defects of interest are 1um by 1um squares (400 of them), with a reflectivity 10% lower than the background. The defects are placed on a 10um square grid, with a random +/- 1um shift in both directions to randomise their positions.&lt;br /&gt;
&lt;br /&gt;
[[File:9 simulation flow with equation.JPG|400px|Simulating noise and getting a sensor readout.]]&lt;br /&gt;
[[File:15 threshold.JPG|thumb|Simple defect detection algorithm.]]&lt;br /&gt;
&lt;br /&gt;
To simulate an optical image, the perfect defect image is convolved with a generated PSF. Noise and sensor readout are then simulated to produce a sensor image. The detection algorithm is applied to this sensor image to decide which features are defects. Detection is based on a simple thresholding algorithm using the intensity difference between a candidate defect and the background. The background level is estimated as the mean of the image at the edges, where there are no defects.&lt;br /&gt;
If the intensity difference exceeds the set threshold, the feature is classified as a defect. In our simulation, we used a threshold of 1000 DN.&lt;br /&gt;
&lt;br /&gt;
The capture rate of the system can be then calculated:&lt;br /&gt;
Capture Rate = (detected defects)/(total number of defects)&lt;br /&gt;
&lt;br /&gt;
A high capture rate is desirable for a defect inspection system.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Background intensity and noise vs. pixel size ===&lt;br /&gt;
[[File:16 BG intensity.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
In our simulation, the background intensity increases quadratically with the pixel size until it saturates at a pixel size of 11.3um. This is expected, since pixel area grows quadratically with pixel size. &lt;br /&gt;
&lt;br /&gt;
[[File:17 BG Noise.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
Background noise increases linearly with the pixel size until it saturates around a pixel size of 11.3um. This is also expected: the number of photons collected is proportional to the pixel area, and shot noise scales as the square root of the photon count, hence linearly with pixel size. &lt;br /&gt;
&lt;br /&gt;
=== Capture rates when the pixel size and effective defects size match ===&lt;br /&gt;
&lt;br /&gt;
The simulated capture rate is calculated for a 1um defect at different pixel sizes. &lt;br /&gt;
&lt;br /&gt;
==== Sensor with 7um pixels ====&lt;br /&gt;
[[File: Capture rate at 7um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
With a 7um pixel sensor, the capture rate is only around 50%. &lt;br /&gt;
&lt;br /&gt;
[[File: 14 Sensor with 7um pixel.JPG |300px]] [[File: 19 Defect Intensity 7um.JPG |400px]]&lt;br /&gt;
&lt;br /&gt;
Black dots are the 1um defects, and green circles mark the locations our simulation detects as defects.&lt;br /&gt;
&lt;br /&gt;
==== Sensor with 10um pixels ====&lt;br /&gt;
[[File: Capture rate at 10um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
With a 10um pixel sensor, the capture rate is around 100%. &lt;br /&gt;
&lt;br /&gt;
[[File:20 Sensor with 10um pixel.JPG|300px]] [[File: 21 Defect Intensity 10um.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
As shown above, our simulation detects more defects with the 10um pixel sensor. &lt;br /&gt;
&lt;br /&gt;
==== Sensor with 11.4um pixels ====&lt;br /&gt;
[[File: Capture rate at 11.4um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
Starting from 11.3um, saturation kicks in and the capture rate starts to decrease. By 11.4um, the capture rate is 0. &lt;br /&gt;
&lt;br /&gt;
[[File: 22 Sensor with 11um pixel.JPG|300px]] [[File: 23 Defect Intensity 11um.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
=== Is Signal to Noise Ratio not impacted by the pixel size? ===&lt;br /&gt;
[[File: 24 SNR to pixel size.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
We have successfully established a simulation model for bright-field defect detection microscopy for semiconductor applications. Moreover, for the system used in our simulation, we have found that detection is optimized when the pixel size matches the size of the defect of interest as imaged at the sensor plane.&lt;br /&gt;
&lt;br /&gt;
For future work, our simulation model could use illumination that is varied with pixel size to achieve a specific gray level. Our scope could also be expanded to compute the contributions from other parameters. &lt;br /&gt;
&lt;br /&gt;
== Appendix ==&lt;br /&gt;
&lt;br /&gt;
Camera parameters, taken from Photonic Science.&lt;br /&gt;
[[File:7 actual CMOS camera.JPG|left|400px|Camera parameters used for the simulation|https://photonicscience.com/products/optical-cameras/cooled-scmos-camera/]]&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33748</id>
		<title>Simulation of pixel-size impact for optical brightfield wafer defect inspection</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33748"/>
		<updated>2023-12-18T05:51:46Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
Hardly a day goes by without us relying on technology, yet we rarely stop to think about the foundation that enables this way of life: semiconductors. Early computers filled entire rooms; today we hold them in our palms and pay hundreds or thousands of dollars for them. With the world revolving around digital transformation and technology, semiconductor manufacturers face the ever-increasing challenges of creating more complex chip designs and exploring new processes and materials. They also have to grapple with yield loss, an ever-present problem.&lt;br /&gt;
&lt;br /&gt;
Typically, advanced semiconductor nodes are made by combining many processes on a silicon wafer to fabricate the dies containing the intended designs. The dies are then diced into individual chips before being packaged and shipped. As complexity grows, so does the number of processes, and thus the number of manufacturing steps. Each step is an inception point for defects, which can cause chips to fail and be discarded. The earlier defects are caught, the less material is wasted. Since fabs achieve economies of scale through high-volume manufacturing [https://www.hitachi.com/rev/archive/2022/r2022_04/pdf/04b01.pdf], every discarded chip eats into operating margin. If the yield of a fab is low, these costs get passed on to the device manufacturer, and ultimately to us, the consumers! Yield control and improvement is therefore critical, and can be supported by inspecting the dies before they proceed to packaging.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
[[File:1 Intro.JPG|thumb|Common defects that can find their way onto wafer pattern designs [https://www.mdpi.com/2079-9292/12/1/76].]]&lt;br /&gt;
[[File:2 Camera Type.JPG|thumb|Various optical defect detection methods [https://www.mks.com/mam/celum/celum_assets/Defect-Detection-Non-Patterned-Wafers_800w.jpg].]]&lt;br /&gt;
&lt;br /&gt;
Wafer defects can be broadly classified into the following categories: (a) particles, commonly airborne dust or contamination that sticks to the wafer surface, which are small; (b) scratches, typically caused by instrument faults, which are continuous features; (c) ripples, commonly due to film interference from coating defects during lithography, which are irregularly shaped with fringing patterns; and (d) stains, due to contamination from lithography, which are large patchy features. Depending on their type and location, defects may be further classified as killer defects, also known as defects of interest. For example, a break in a data-carrying connector would not be acceptable.&lt;br /&gt;
&lt;br /&gt;
Defect analysis involves finding the defect, and optical methods are well-suited for direct detection. A few common optical detection methods are shown; as their names suggest, rotating non-patterned wafer inspection and specular reflection are commonly used for bare wafers. Dark-field imaging is best suited for finding small defects such as particles and surface defects like faint scratches. Bright-field imaging is used for macroscopic detection, and its results are the most intuitive because the defect sensitivity is equivalent to the system resolution [https://www.mdpi.com/2072-666X/14/8/1568#sec2dot2-micromachines-14-01568]. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:3 Different pixel size.JPG&lt;br /&gt;
File:4 Different quantam efficiencies.JPG&lt;br /&gt;
File:5 Different noise performance.JPG&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Automated defect detection of a substrate relies on a system with both hardware and software components. The software handles machine control and the detection algorithm, which is detailed in the Methods section. Focusing on the hardware imaging path, we have the imaging optics and the camera. The imaging optics introduce an optical blur. The camera is our hardware of interest because it has a finite lifetime and usually undergoes multiple revisions in a product&#039;s lifecycle. However, choosing a camera suited for a particular application is complex, as the camera itself has multiple parameters, such as pixel size, quantum efficiency, and noise performance.&lt;br /&gt;
&lt;br /&gt;
== Methods ==&lt;br /&gt;
[[File:8 simulation flow.JPG|400px|Generating an optically blurred image.]]&lt;br /&gt;
&lt;br /&gt;
In our project, we take a simplified approach and consider only changing the pixel size while keeping other parameters, such as camera noise performance, quantum efficiency, and illumination, constant. A simple microscope is simulated using a 10X objective with NA=0.5, a 200mm tube lens, and monochromatic illumination at 550 nm. We assume the illumination flux on the sensor is 100 photons/s/&amp;lt;math&amp;gt;\mu m^2&amp;lt;/math&amp;gt;. The FOV at the sensor plane is 2000 um by 2000 um, and we vary the pixel size from 5um to 12um to determine the optimal pixel size for our defect of interest. &lt;br /&gt;
&lt;br /&gt;
Our defects of interest are 1um by 1um squares (400 of them), with a reflectivity 10% lower than the background. The defects are placed on a 10um square grid, with a random +/- 1um shift in both directions to randomise their positions.&lt;br /&gt;
&lt;br /&gt;
[[File:9 simulation flow with equation.JPG|400px|Simulating noise and getting a sensor readout.]]&lt;br /&gt;
[[File:15 threshold.JPG|thumb|Simple defect detection algorithm.]]&lt;br /&gt;
&lt;br /&gt;
To simulate an optical image, the perfect defect image is convolved with a generated PSF. Noise and sensor readout are then simulated to produce a sensor image. The detection algorithm is applied to this sensor image to decide which features are defects. Detection is based on a simple thresholding algorithm using the intensity difference between a candidate defect and the background. The background level is estimated as the mean of the image at the edges, where there are no defects.&lt;br /&gt;
If the intensity difference exceeds the set threshold, the feature is classified as a defect. In our simulation, we used a threshold of 1000 DN.&lt;br /&gt;
&lt;br /&gt;
The capture rate of the system can be then calculated:&lt;br /&gt;
Capture Rate = (detected defects)/(total number of defects)&lt;br /&gt;
&lt;br /&gt;
A high capture rate is desirable for a defect inspection system.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Background intensity and noise vs. pixel size ===&lt;br /&gt;
[[File:16 BG intensity.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
In our simulation, the background intensity increases quadratically with the pixel size until it saturates at a pixel size of 11.3um. This is expected, since pixel area grows quadratically with pixel size. &lt;br /&gt;
&lt;br /&gt;
[[File:17 BG Noise.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
Background noise increases linearly with the pixel size until it saturates around a pixel size of 11.3um. This is also expected: the number of photons collected is proportional to the pixel area, and shot noise scales as the square root of the photon count, hence linearly with pixel size. &lt;br /&gt;
&lt;br /&gt;
=== Capture rates when the pixel size and effective defects size match ===&lt;br /&gt;
&lt;br /&gt;
Capture rate was also calculated for a 1um defect at different pixel sizes. &lt;br /&gt;
&lt;br /&gt;
==== Sensor with 7um pixels ====&lt;br /&gt;
[[File: Capture rate at 7um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
With a 7um pixel sensor, the capture rate is only around 50%. &lt;br /&gt;
&lt;br /&gt;
[[File: 14 Sensor with 7um pixel.JPG |300px]] [[File: 19 Defect Intensity 7um.JPG |400px]]&lt;br /&gt;
&lt;br /&gt;
Black dots are the 1um defects, and green circles mark the locations our simulation detects as defects.&lt;br /&gt;
&lt;br /&gt;
==== Sensor with 10um pixels ====&lt;br /&gt;
[[File: Capture rate at 10um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
With a 10um pixel sensor, the capture rate is around 100%. &lt;br /&gt;
&lt;br /&gt;
[[File:20 Sensor with 10um pixel.JPG|300px]] [[File: 21 Defect Intensity 10um.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
As shown above, our simulation detects more defects with the 10um pixel sensor. &lt;br /&gt;
&lt;br /&gt;
==== Sensor with 11.4um pixels ====&lt;br /&gt;
[[File: Capture rate at 11.4um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
Starting from 11.3um, saturation kicks in and the capture rate starts to decrease. By 11.4um, the capture rate is 0. &lt;br /&gt;
&lt;br /&gt;
[[File: 22 Sensor with 11um pixel.JPG|300px]] [[File: 23 Defect Intensity 11um.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Is Signal to Noise Ratio not impacted by the pixel size? ===&lt;br /&gt;
[[File: 24 SNR to pixel size.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
We have successfully established a simulation model for bright-field defect detection microscopy for semiconductor applications. Moreover, for the system used in our simulation, we have found that detection is optimized when the pixel size matches the size of the defect of interest as imaged at the sensor plane.&lt;br /&gt;
&lt;br /&gt;
For future work, our simulation model could use illumination that is varied with pixel size to achieve a specific gray level. Our scope could also be expanded to compute the contributions from other parameters. &lt;br /&gt;
&lt;br /&gt;
== Appendix ==&lt;br /&gt;
&lt;br /&gt;
Camera parameters, taken from Photonic Science.&lt;br /&gt;
[[File:7 actual CMOS camera.JPG|left|400px|Camera parameters used for the simulation|https://photonicscience.com/products/optical-cameras/cooled-scmos-camera/]]&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33746</id>
		<title>Simulation of pixel-size impact for optical brightfield wafer defect inspection</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33746"/>
		<updated>2023-12-18T05:37:56Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These days, we hardly get through a day without technology, yet we rarely stop to think about the foundation that enables our way of life: semiconductors. The contrast is striking: early computers took up entire rooms, while today we hold them in our palms and fork out hundreds or thousands of dollars to buy them. With the world revolving around digital transformation, semiconductor manufacturers are under pressure to take on the ever-increasing challenges of creating more complex chip designs and exploring new processes and materials. They also have to grapple with yield loss, an ever-present problem.&lt;br /&gt;
&lt;br /&gt;
Typically, advanced semiconductor nodes are made by combining many processes on a silicon wafer to fabricate dies containing the intended designs. The dies are then diced into individual chips before being packaged and shipped. As complexity grows, so does the number of processes, and thus the number of manufacturing steps. Each of these is an inception point for defects, which can cause chips to fail and be discarded. The earlier defects are caught, the less material is wasted. Since fabs achieve economies of scale through high-volume manufacturing [https://www.hitachi.com/rev/archive/2022/r2022_04/pdf/04b01.pdf], every discarded chip eats into the operating margin. If the yield of a fab is low, these costs get passed on to the device manufacturer, and ultimately to us, the consumers! Yield control and improvement are therefore critical, and can be supported by inspecting the dies before wafer packaging.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
[[File:1 Intro.JPG|thumb|Common defects that can find their way onto wafer pattern designs [https://www.mdpi.com/2079-9292/12/1/76].]]&lt;br /&gt;
[[File:2 Camera Type.JPG|thumb|Various optical defect detection methods [https://www.mks.com/mam/celum/celum_assets/Defect-Detection-Non-Patterned-Wafers_800w.jpg].]]&lt;br /&gt;
&lt;br /&gt;
Wafer defects can be broadly classified into the following categories: (a) particles, commonly dust or airborne contamination stuck on the wafer surface, which are small; (b) scratches, typically caused by instrument faults, which are continuous features; (c) ripples, commonly due to film interference from coating defects during lithography, which are irregularly shaped with fringing patterns; and (d) stains, due to contamination from lithography, which are large patchy features. Depending on their type and location, defects may be further classified as killer defects, also known as defects of interest. For example, a break in a data-carrying connector would not be acceptable.&lt;br /&gt;
&lt;br /&gt;
Defect analysis involves finding the defect, and optical methods are well-suited for direct detection. A few common optical detection methods are shown: rotating non-patterned wafer inspection and specular reflection are, as their names suggest, commonly used for bare wafers. Dark field imaging is best suited for finding small defects such as particles and surface defects like faint scratches. Bright field imaging is used for macroscopic detection, and its results are the most intuitive because the defect sensitivity is equivalent to the system resolution [https://www.mdpi.com/2072-666X/14/8/1568#sec2dot2-micromachines-14-01568]. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:3 Different pixel size.JPG&lt;br /&gt;
File:4 Different quantam efficiencies.JPG&lt;br /&gt;
File:5 Different noise performance.JPG&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Automated defect detection of a substrate relies on a system with both hardware and software components. The software handles machine control and the detection algorithm, which is detailed in Methods. Focusing on the hardware imaging path, we have the imaging optics and the camera. The imaging optics introduce an optical blur. The camera is our hardware of interest because it has a finite lifetime and usually undergoes multiple revisions in a product&#039;s lifecycle. However, choosing a camera suited to a particular application is complex, as the camera itself has multiple parameters, such as pixel size, quantum efficiency, and noise performance.&lt;br /&gt;
&lt;br /&gt;
== Methods ==&lt;br /&gt;
[[File:8 simulation flow.JPG|400px|Generating an optically blurred image.]]&lt;br /&gt;
&lt;br /&gt;
In our project, we take a simplistic approach and consider only changing the pixel size while keeping other parameters, such as camera noise performance, quantum efficiency, and illumination, constant. A simple microscope is simulated using a 10X objective with NA=0.5, a 200mm tube lens, and monochromatic illumination at 550 nm. We assume the illumination flux on the sensor is 100 photons/s/&amp;lt;math&amp;gt;\mu m^2&amp;lt;/math&amp;gt;. The FOV at the sensor plane is 2000 um by 2000 um, and we vary the pixel size from 5um to 12um to determine the optimal pixel size for our defect of interest. &lt;br /&gt;
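As a quick sanity check on these optics (an illustrative calculation of our own; the Rayleigh criterion is an assumption, not a metric stated above), the diffraction-limited resolution of this objective can be computed directly:&lt;br /&gt;

```python
# Diffraction-limited resolution for the simulated microscope (Rayleigh criterion).
# The 10X / NA 0.5 objective and 550 nm illumination come from the setup above;
# using the Rayleigh metric here is our own assumption.
wavelength_um = 0.55   # 550 nm monochromatic illumination
na = 0.5               # numerical aperture of the objective
mag = 10               # 10X magnification

r_object = 0.61 * wavelength_um / na   # resolution at the wafer, ~0.67 um
r_sensor = mag * r_object              # projected to the sensor plane, ~6.7 um
print(round(r_object, 3), round(r_sensor, 2))
```

At 10X, the ~0.67um spot at the wafer maps to ~6.7um at the sensor, which is why pixel sizes of a few microns up to roughly the defect size are the interesting range to sweep.&lt;br /&gt;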
&lt;br /&gt;
Our defects of interest are 1um by 1um squares (400 of them), with a reflectivity 10% lower than the background. The defects are placed on a 10um square grid, with a random +/- 1um shift in both directions to randomise their positions.&lt;br /&gt;
&lt;br /&gt;
[[File:9 simulation flow with equation.JPG|400px|Simulating noise and getting a sensor readout.]]&lt;br /&gt;
[[File:15 threshold.JPG|thumb|Simple defect detection algorithm.]]&lt;br /&gt;
&lt;br /&gt;
To simulate an optical image, the perfect defect image is convolved with a generated PSF. Noise and sensor readout are then simulated to produce a sensor image. The detection algorithm is applied to this sensor image to decide which features are defects. Detection uses a simple thresholding algorithm based on the intensity difference between a candidate defect and the background, where the background is estimated as the mean of the image at the edges, which contain no defects.&lt;br /&gt;
If the signal intensity difference is larger than the set threshold, the feature is classified as a defect. In our simulation, we used a threshold of 1000 DN.&lt;br /&gt;
&lt;br /&gt;
The capture rate of the system can then be calculated:&lt;br /&gt;
Capture Rate = (detected defects)/(total number of defects)&lt;br /&gt;
&lt;br /&gt;
A high capture rate is desirable for a defect inspection system.&lt;br /&gt;
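The pipeline above (defect map, shot noise, pixel binning, background-subtracted thresholding, capture rate) can be sketched in a few lines. This is a minimal illustration with made-up, scaled-down parameters, not the code behind our figures: the field is shrunk to 400um, PSF blur and read noise are omitted, and the threshold is set below the expected deficit rather than at 1000 DN.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-in parameters, in sensor-plane micrometres.
fov, cell = 400, 1        # field of view and simulation grid, um
flux, t_exp = 100, 1.0    # photons/s/um^2 and exposure time
contrast = 0.10           # defect reflectivity is 10% below background
pixel, defect = 10, 10    # pixel size and effective defect size at the sensor

# Reflectance map: dark 10 um squares on a 100 um grid (10 um at the sensor
# corresponds to 1 um at the wafer under 10X), jittered by one pixel.
n = fov // cell
refl = np.ones((n, n))
targets = []
for cy in range(50, fov, 100):
    for cx in range(50, fov, 100):
        y = cy + int(rng.integers(-1, 2)) * pixel
        x = cx + int(rng.integers(-1, 2)) * pixel
        refl[y:y + defect, x:x + defect] = 1.0 - contrast
        targets.append((y // pixel, x // pixel))

# Shot-noise-limited exposure, then binning into sensor pixels.
photons = rng.poisson(refl * flux * t_exp * cell**2)
b = pixel // cell
px = photons.reshape(n // b, b, n // b, b).sum(axis=(1, 3)).astype(float)

# Background from the defect-free border, then threshold on the deficit.
background = np.mean([px[0].mean(), px[-1].mean(), px[:, 0].mean(), px[:, -1].mean()])
detected = (background - px) > 500   # expected defect deficit here is ~1000 photons

capture_rate = sum(detected[i, j] for i, j in targets) / len(targets)
print(f"capture rate at {pixel} um pixels: {capture_rate:.2f}")
```

With the pixel matched to the effective defect size, nearly every defect clears the threshold, which is the behaviour explored in the Results below.&lt;br /&gt;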
&lt;br /&gt;
== Results ==&lt;br /&gt;
=== Background intensity and noise vs. pixel size ===&lt;br /&gt;
[[File:16 BG intensity.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
From the results of our simulation model, background intensity increases quadratically with pixel size. This is expected, as pixel area also grows quadratically with pixel size. &lt;br /&gt;
&lt;br /&gt;
[[File:17 BG Noise.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
Background noise increases linearly with pixel size. This is also expected: the number of photons collected is proportional to the pixel area, and shot noise scales as the square root of the photon count, hence linearly with pixel size. &lt;br /&gt;
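The two scalings above can be checked in a couple of lines, using the 100 photons/s/um^2 flux from Methods and assuming a shot-noise-limited sensor (read noise ignored):&lt;br /&gt;

```python
import math

flux = 100  # photons / s / um^2, as in the Methods section
for p_um in [5, 7, 10, 12]:
    mean_photons = flux * p_um**2         # background level: quadratic in pixel size
    shot_sigma = math.sqrt(mean_photons)  # Poisson sigma = sqrt(mean): linear in p
    print(p_um, mean_photons, shot_sigma)
```

So the background level goes as the square of the pixel size while its noise grows only linearly, matching the two plots.&lt;br /&gt;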
&lt;br /&gt;
=== Capture rates when the pixel size and effective defects size match ===&lt;br /&gt;
==== Sensor with 7um pixels ====&lt;br /&gt;
[[File: Capture rate at 7um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
[[File: 14 Sensor with 7um pixel.JPG |300px]] [[File: 19 Defect Intensity 7um.JPG |400px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Sensor with 10um pixels ====&lt;br /&gt;
[[File: Capture rate at 10um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File:20 Sensor with 10um pixel.JPG|300px]] [[File: 21 Defect Intensity 10um.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
==== Sensor with 11.4um pixels ====&lt;br /&gt;
[[File: Capture rate at 11.4um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File: 22 Sensor with 11um pixel.JPG|300px]] [[File: 23 Defect Intensity 11um.JPG|400px]]&lt;br /&gt;
&lt;br /&gt;
=== Is Signal to Noise Ratio not impacted by the pixel size? ===&lt;br /&gt;
[[File: 24 SNR to pixel size.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
We have successfully established a simulation model for brightfield defect detection microscopy for semiconductor applications. Moreover, for the system used in our simulation model, we have found that detection is optimized when the pixel size matches the effective size of the defect of interest.&lt;br /&gt;
&lt;br /&gt;
For future work, our simulation model could use illumination that is varied with pixel size to achieve a specific gray level. Our scope could also be expanded to compute the contributions of other parameters. &lt;br /&gt;
&lt;br /&gt;
== Appendix ==&lt;br /&gt;
&lt;br /&gt;
Camera parameters, taken from Photonic Science.&lt;br /&gt;
[[File:7 actual CMOS camera.JPG|left|400px|Camera parameters used for the simulation|https://photonicscience.com/products/optical-cameras/cooled-scmos-camera/]]&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33745</id>
		<title>Simulation of pixel-size impact for optical brightfield wafer defect inspection</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33745"/>
		<updated>2023-12-18T05:33:21Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These days, we hardly get through a day without technology, yet we rarely stop to think about the foundation that enables our way of life: semiconductors. The contrast is striking: early computers took up entire rooms, while today we hold them in our palms and fork out hundreds or thousands of dollars to buy them. With the world revolving around digital transformation, semiconductor manufacturers are under pressure to take on the ever-increasing challenges of creating more complex chip designs and exploring new processes and materials. They also have to grapple with yield loss, an ever-present problem.&lt;br /&gt;
&lt;br /&gt;
Typically, advanced semiconductor nodes are made by combining many processes on a silicon wafer to fabricate dies containing the intended designs. The dies are then diced into individual chips before being packaged and shipped. As complexity grows, so does the number of processes, and thus the number of manufacturing steps. Each of these is an inception point for defects, which can cause chips to fail and be discarded. The earlier defects are caught, the less material is wasted. Since fabs achieve economies of scale through high-volume manufacturing [https://www.hitachi.com/rev/archive/2022/r2022_04/pdf/04b01.pdf], every discarded chip eats into the operating margin. If the yield of a fab is low, these costs get passed on to the device manufacturer, and ultimately to us, the consumers! Yield control and improvement are therefore critical, and can be supported by inspecting the dies before wafer packaging.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
[[File:1 Intro.JPG|thumb|Common defects that can find their way onto wafer pattern designs [https://www.mdpi.com/2079-9292/12/1/76].]]&lt;br /&gt;
[[File:2 Camera Type.JPG|thumb|Various optical defect detection methods [https://www.mks.com/mam/celum/celum_assets/Defect-Detection-Non-Patterned-Wafers_800w.jpg].]]&lt;br /&gt;
&lt;br /&gt;
Wafer defects can be broadly classified into the following categories: (a) particles, commonly dust or airborne contamination stuck on the wafer surface, which are small; (b) scratches, typically caused by instrument faults, which are continuous features; (c) ripples, commonly due to film interference from coating defects during lithography, which are irregularly shaped with fringing patterns; and (d) stains, due to contamination from lithography, which are large patchy features. Depending on their type and location, defects may be further classified as killer defects, also known as defects of interest. For example, a break in a data-carrying connector would not be acceptable.&lt;br /&gt;
&lt;br /&gt;
Defect analysis involves finding the defect, and optical methods are well-suited for direct detection. A few common optical detection methods are shown: rotating non-patterned wafer inspection and specular reflection are, as their names suggest, commonly used for bare wafers. Dark field imaging is best suited for finding small defects such as particles and surface defects like faint scratches. Bright field imaging is used for macroscopic detection, and its results are the most intuitive because the defect sensitivity is equivalent to the system resolution [https://www.mdpi.com/2072-666X/14/8/1568#sec2dot2-micromachines-14-01568]. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:3 Different pixel size.JPG&lt;br /&gt;
File:4 Different quantam efficiencies.JPG&lt;br /&gt;
File:5 Different noise performance.JPG&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Automated defect detection of a substrate relies on a system with both hardware and software components. The software handles machine control and the detection algorithm, which is detailed in Methods. Focusing on the hardware imaging path, we have the imaging optics and the camera. The imaging optics introduce an optical blur. The camera is our hardware of interest because it has a finite lifetime and usually undergoes multiple revisions in a product&#039;s lifecycle. However, choosing a camera suited to a particular application is complex, as the camera itself has multiple parameters, such as pixel size, quantum efficiency, and noise performance.&lt;br /&gt;
&lt;br /&gt;
== Methods ==&lt;br /&gt;
[[File:8 simulation flow.JPG|400px|Generating an optically blurred image.]]&lt;br /&gt;
&lt;br /&gt;
In our project, we take a simplistic approach and consider only changing the pixel size while keeping other parameters, such as camera noise performance, quantum efficiency, and illumination, constant. A simple microscope is simulated using a 10X objective with NA=0.5, a 200mm tube lens, and monochromatic illumination at 550 nm. We assume the illumination flux on the sensor is 100 photons/s/&amp;lt;math&amp;gt;\mu m^2&amp;lt;/math&amp;gt;. The FOV at the sensor plane is 2000 um by 2000 um, and we vary the pixel size from 5um to 12um to determine the optimal pixel size for our defect of interest. &lt;br /&gt;
&lt;br /&gt;
Our defects of interest are 1um by 1um squares (400 of them), with a reflectivity 10% lower than the background. The defects are placed on a 10um square grid, with a random +/- 1um shift in both directions to randomise their positions.&lt;br /&gt;
&lt;br /&gt;
[[File:9 simulation flow with equation.JPG|400px|Simulating noise and getting a sensor readout.]]&lt;br /&gt;
[[File:15 threshold.JPG|thumb|Simple defect detection algorithm.]]&lt;br /&gt;
&lt;br /&gt;
To simulate an optical image, the perfect defect image is convolved with a generated PSF. Noise and sensor readout are then simulated to produce a sensor image. The detection algorithm is applied to this sensor image to decide which features are defects. Detection uses a simple thresholding algorithm based on the intensity difference between a candidate defect and the background, where the background is estimated as the mean of the image at the edges, which contain no defects.&lt;br /&gt;
If the signal intensity difference is larger than the set threshold, the feature is classified as a defect. In our simulation, we used a threshold of 1000 DN.&lt;br /&gt;
&lt;br /&gt;
The capture rate of the system can then be calculated:&lt;br /&gt;
Capture Rate = (detected defects)/(total number of defects)&lt;br /&gt;
&lt;br /&gt;
A high capture rate is desirable for a defect inspection system.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Background intensity and noise vs. pixel size ===&lt;br /&gt;
[[File:16 BG intensity.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
From the results of our simulation model, background intensity increases quadratically with pixel size. This is expected, as pixel area also grows quadratically with pixel size.&lt;br /&gt;
&lt;br /&gt;
[[File:17 BG Noise.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
Background noise increases linearly with pixel size. This is also expected: the number of photons collected is proportional to the pixel area, and shot noise scales as the square root of the photon count, hence linearly with pixel size.&lt;br /&gt;
&lt;br /&gt;
=== Capture rates when the pixel size and effective defects size match ===&lt;br /&gt;
==== Sensor with 7um pixels ====&lt;br /&gt;
[[File: Capture rate at 7um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[File: 14 Sensor with 7um pixel.JPG |300px]] [[File: 19 Defect Intensity 7um.JPG |300px]]&lt;br /&gt;
&lt;br /&gt;
==== Sensor with 10um pixels ====&lt;br /&gt;
[[File: Capture rate at 10um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
==== Sensor with 11.4um pixels ====&lt;br /&gt;
[[File: Capture rate at 11.4um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
We have successfully established a simulation model for brightfield defect detection microscopy for semiconductor applications. Moreover, for the system used in our simulation model, we have found that detection is optimized when the pixel size matches the effective size of the defect of interest.&lt;br /&gt;
&lt;br /&gt;
For future work, our simulation model could use illumination that is varied with pixel size to achieve a specific gray level. Our scope could also be expanded to compute the contributions of other parameters. &lt;br /&gt;
&lt;br /&gt;
== Appendix ==&lt;br /&gt;
&lt;br /&gt;
Camera parameters, taken from Photonic Science.&lt;br /&gt;
[[File:7 actual CMOS camera.JPG|left|400px|Camera parameters used for the simulation|https://photonicscience.com/products/optical-cameras/cooled-scmos-camera/]]&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33744</id>
		<title>Simulation of pixel-size impact for optical brightfield wafer defect inspection</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33744"/>
		<updated>2023-12-18T05:31:42Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These days, we hardly get through a day without technology, yet we rarely stop to think about the foundation that enables our way of life: semiconductors. The contrast is striking: early computers took up entire rooms, while today we hold them in our palms and fork out hundreds or thousands of dollars to buy them. With the world revolving around digital transformation, semiconductor manufacturers are under pressure to take on the ever-increasing challenges of creating more complex chip designs and exploring new processes and materials. They also have to grapple with yield loss, an ever-present problem.&lt;br /&gt;
&lt;br /&gt;
Typically, advanced semiconductor nodes are made by combining many processes on a silicon wafer to fabricate dies containing the intended designs. The dies are then diced into individual chips before being packaged and shipped. As complexity grows, so does the number of processes, and thus the number of manufacturing steps. Each of these is an inception point for defects, which can cause chips to fail and be discarded. The earlier defects are caught, the less material is wasted. Since fabs achieve economies of scale through high-volume manufacturing [https://www.hitachi.com/rev/archive/2022/r2022_04/pdf/04b01.pdf], every discarded chip eats into the operating margin. If the yield of a fab is low, these costs get passed on to the device manufacturer, and ultimately to us, the consumers! Yield control and improvement are therefore critical, and can be supported by inspecting the dies before wafer packaging.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
[[File:1 Intro.JPG|thumb|Common defects that can find their way onto wafer pattern designs [https://www.mdpi.com/2079-9292/12/1/76].]]&lt;br /&gt;
[[File:2 Camera Type.JPG|thumb|Various optical defect detection methods [https://www.mks.com/mam/celum/celum_assets/Defect-Detection-Non-Patterned-Wafers_800w.jpg].]]&lt;br /&gt;
&lt;br /&gt;
Wafer defects can be broadly classified into the following categories: (a) particles, commonly dust or airborne contamination stuck on the wafer surface, which are small; (b) scratches, typically caused by instrument faults, which are continuous features; (c) ripples, commonly due to film interference from coating defects during lithography, which are irregularly shaped with fringing patterns; and (d) stains, due to contamination from lithography, which are large patchy features. Depending on their type and location, defects may be further classified as killer defects, also known as defects of interest. For example, a break in a data-carrying connector would not be acceptable.&lt;br /&gt;
&lt;br /&gt;
Defect analysis involves finding the defect, and optical methods are well-suited for direct detection. A few common optical detection methods are shown: rotating non-patterned wafer inspection and specular reflection are, as their names suggest, commonly used for bare wafers. Dark field imaging is best suited for finding small defects such as particles and surface defects like faint scratches. Bright field imaging is used for macroscopic detection, and its results are the most intuitive because the defect sensitivity is equivalent to the system resolution [https://www.mdpi.com/2072-666X/14/8/1568#sec2dot2-micromachines-14-01568]. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:3 Different pixel size.JPG&lt;br /&gt;
File:4 Different quantam efficiencies.JPG&lt;br /&gt;
File:5 Different noise performance.JPG&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Automated defect detection of a substrate relies on a system with both hardware and software components. The software handles machine control and the detection algorithm, which is detailed in Methods. Focusing on the hardware imaging path, we have the imaging optics and the camera. The imaging optics introduce an optical blur. The camera is our hardware of interest because it has a finite lifetime and usually undergoes multiple revisions in a product&#039;s lifecycle. However, choosing a camera suited to a particular application is complex, as the camera itself has multiple parameters, such as pixel size, quantum efficiency, and noise performance.&lt;br /&gt;
&lt;br /&gt;
== Methods ==&lt;br /&gt;
[[File:8 simulation flow.JPG|400px|Generating an optically blurred image.]]&lt;br /&gt;
&lt;br /&gt;
In our project, we take a simplistic approach and consider only changing the pixel size while keeping other parameters, such as camera noise performance, quantum efficiency, and illumination, constant. A simple microscope is simulated using a 10X objective with NA=0.5, a 200mm tube lens, and monochromatic illumination at 550 nm. We assume the illumination flux on the sensor is 100 photons/s/&amp;lt;math&amp;gt;\mu m^2&amp;lt;/math&amp;gt;. The FOV at the sensor plane is 2000 um by 2000 um, and we vary the pixel size from 5um to 12um to determine the optimal pixel size for our defect of interest. &lt;br /&gt;
&lt;br /&gt;
Our defects of interest are 1um by 1um squares (400 of them), with a reflectivity 10% lower than the background. The defects are placed on a 10um square grid, with a random +/- 1um shift in both directions to randomise their positions.&lt;br /&gt;
&lt;br /&gt;
[[File:9 simulation flow with equation.JPG|400px|Simulating noise and getting a sensor readout.]]&lt;br /&gt;
[[File:15 threshold.JPG|thumb|Simple defect detection algorithm.]]&lt;br /&gt;
&lt;br /&gt;
To simulate an optical image, the perfect defect image is convolved with a generated PSF. Noise and sensor readout are then simulated to produce a sensor image. The detection algorithm is applied to this sensor image to decide which features are defects. Detection uses a simple thresholding algorithm based on the intensity difference between a candidate defect and the background, where the background is estimated as the mean of the image at the edges, which contain no defects.&lt;br /&gt;
If the signal intensity difference is larger than the set threshold, the feature is classified as a defect. In our simulation, we used a threshold of 1000 DN.&lt;br /&gt;
&lt;br /&gt;
The capture rate of the system can then be calculated:&lt;br /&gt;
Capture Rate = (detected defects)/(total number of defects)&lt;br /&gt;
&lt;br /&gt;
A high capture rate is desirable for a defect inspection system.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Background intensity and noise vs. pixel size ===&lt;br /&gt;
[[File:16 BG intensity.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
From the results of our simulation model, background intensity increases quadratically with pixel size. This is expected, as pixel area also grows quadratically with pixel size.&lt;br /&gt;
&lt;br /&gt;
[[File:17 BG Noise.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
Background noise increases linearly with pixel size. This is also expected: the number of photons collected is proportional to the pixel area, and shot noise scales as the square root of the photon count, hence linearly with pixel size.&lt;br /&gt;
&lt;br /&gt;
=== Capture rates when the pixel size and effective defects size match ===&lt;br /&gt;
==== Sensor with 7um pixels ====&lt;br /&gt;
[[File: Capture rate at 7um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:14 Sensor with 7um pixel.JPG&lt;br /&gt;
File: 19 Defect Intensity 7um.JPG&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Sensor with 10um pixels ====&lt;br /&gt;
[[File: Capture rate at 10um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
==== Sensor with 11.4um pixels ====&lt;br /&gt;
[[File: Capture rate at 11.4um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
We have successfully established a simulation model for brightfield defect detection microscopy for semiconductor applications. Moreover, for the system used in our simulation model, we have found that detection is optimized when the pixel size matches the effective size of the defect of interest.&lt;br /&gt;
&lt;br /&gt;
For future work, our simulation model could use illumination that is varied with pixel size to achieve a specific gray level. Our scope could also be expanded to compute the contributions of other parameters. &lt;br /&gt;
&lt;br /&gt;
== Appendix ==&lt;br /&gt;
&lt;br /&gt;
Camera parameters, taken from Photonic Science.&lt;br /&gt;
[[File:7 actual CMOS camera.JPG|left|400px|Camera parameters used for the simulation|https://photonicscience.com/products/optical-cameras/cooled-scmos-camera/]]&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33737</id>
		<title>Simulation of pixel-size impact for optical brightfield wafer defect inspection</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33737"/>
		<updated>2023-12-18T05:27:39Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These days, we hardly get through a day without technology, yet we rarely stop to think about the foundation that enables our way of life: semiconductors. The contrast is striking: early computers took up entire rooms, while today we hold them in our palms and fork out hundreds or thousands of dollars to buy them. With the world revolving around digital transformation, semiconductor manufacturers are under pressure to take on the ever-increasing challenges of creating more complex chip designs and exploring new processes and materials. They also have to grapple with yield loss, an ever-present problem.&lt;br /&gt;
&lt;br /&gt;
Typically, advanced semiconductor nodes are made by combining many processes on a silicon wafer to fabricate dies containing the intended designs. The dies are then diced into individual chips before being packaged and shipped. As complexity grows, so does the number of processes, and thus the number of manufacturing steps. Each of these is an inception point for defects, which can cause chips to fail and be discarded. The earlier defects are caught, the less material is wasted. Since fabs achieve economies of scale through high-volume manufacturing [https://www.hitachi.com/rev/archive/2022/r2022_04/pdf/04b01.pdf], every discarded chip eats into the operating margin. If the yield of a fab is low, these costs get passed on to the device manufacturer, and ultimately to us, the consumers! Yield control and improvement are therefore critical, and can be supported by inspecting the dies before wafer packaging.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
[[File:1 Intro.JPG|thumb|Common defects that can find their way onto wafer pattern designs [https://www.mdpi.com/2079-9292/12/1/76].]]&lt;br /&gt;
[[File:2 Camera Type.JPG|thumb|Various optical defect detection methods [https://www.mks.com/mam/celum/celum_assets/Defect-Detection-Non-Patterned-Wafers_800w.jpg].]]&lt;br /&gt;
&lt;br /&gt;
Wafer defects can be broadly classified into the following categories: (a) particles, commonly dust or airborne contamination stuck on the wafer surface, which are small; (b) scratches, typically caused by instrument faults, which are continuous features; (c) ripples, commonly due to film interference from coating defects during lithography, which are irregularly shaped with fringing patterns; and (d) stains, due to contamination from lithography, which are large patchy features. Depending on their type and location, defects may be further classified as killer defects, also known as defects of interest. For example, a break in a data-carrying connector would not be acceptable.&lt;br /&gt;
&lt;br /&gt;
Defect analysis involves finding the defect, and optical methods are well-suited for direct detection. A few common optical detection methods are shown: rotating non-patterned wafer inspection and specular reflection are, as their names suggest, commonly used for bare wafers. Dark field imaging is best suited for finding small defects such as particles and surface defects like faint scratches. Bright field imaging is used for macroscopic detection, and its results are the most intuitive because the defect sensitivity is equivalent to the system resolution [https://www.mdpi.com/2072-666X/14/8/1568#sec2dot2-micromachines-14-01568]. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:3 Different pixel size.JPG&lt;br /&gt;
File:4 Different quantam efficiencies.JPG&lt;br /&gt;
File:5 Different noise performance.JPG&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Automated defect detection of a substrate relies on a system with both hardware and software components. The software handles machine control and the detection algorithm, which is detailed in Methods. Focusing on the hardware imaging path, we have the imaging optics and the camera. The imaging optics introduce an optical blur. The camera is our hardware of interest because it has a finite lifetime and usually undergoes multiple revisions in a product&#039;s lifecycle. However, choosing a camera suited to a particular application is complex, as the camera itself has multiple parameters, such as pixel size, quantum efficiency, and noise performance.&lt;br /&gt;
&lt;br /&gt;
== Methods ==&lt;br /&gt;
[[File:8 simulation flow.JPG|400px|Generating an optically blurred image.]]&lt;br /&gt;
&lt;br /&gt;
In our project, we take a simplified approach and consider only changing the pixel size while keeping other parameters such as camera noise performance, quantum efficiency, and illumination constant. A simple microscope is simulated using a 10X objective with NA=0.5, a 200mm tube lens, and monochromatic illumination at 550 nm. We assume the illumination flux on the sensor is 100 photons/s/&amp;lt;math&amp;gt;\mu m^2&amp;lt;/math&amp;gt;. The FOV at the sensor plane is 2000um by 2000um, and we vary the pixel size from 5um to 12um to determine the optimal pixel size for our defect of interest.&lt;br /&gt;
&lt;br /&gt;
Our defects of interest are 1um by 1um squares (400 of them), with a reflectivity 10% lower than the background. The defects are placed on a 10um square grid, with a random +/- 1um shift applied in both directions to randomise their positions.&lt;br /&gt;
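The defect layout described above can be sketched in a few lines of NumPy. This is a minimal sketch under our own assumptions (object-plane sampling of 0.5 um/px over a 200um x 200um patch, and a fixed random seed); it is not the project's actual code.&lt;br /&gt;

```python
import numpy as np

rng = np.random.default_rng(0)

# Object-plane reflectance map sampled at 0.5 um/px: a 200 um x 200 um
# patch holds a 20 x 20 grid (400 defects) at a 10 um pitch.
um_per_px = 0.5
size_px = int(200 / um_per_px)
reflectance = np.ones((size_px, size_px))  # background reflectivity = 1.0

defect_px = int(1 / um_per_px)   # 1 um x 1 um defect
pitch_px = int(10 / um_per_px)   # 10 um grid pitch
shift_px = int(1 / um_per_px)    # +/- 1 um random jitter

for gy in range(20):
    for gx in range(20):
        # grid position plus independent +/- 1 um jitter in x and y
        y = gy * pitch_px + rng.integers(-shift_px, shift_px + 1)
        x = gx * pitch_px + rng.integers(-shift_px, shift_px + 1)
        y = np.clip(y, 0, size_px - defect_px)
        x = np.clip(x, 0, size_px - defect_px)
        reflectance[y:y + defect_px, x:x + defect_px] = 0.9  # 10% darker

print(reflectance.min(), (reflectance == 0.9).sum())
```

Because the pitch (10um) is much larger than the defect size plus jitter, the randomised defects never overlap.&lt;br /&gt;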
&lt;br /&gt;
[[File:9 simulation flow with equation.JPG|400px|Simulating noise and getting a sensor readout.]]&lt;br /&gt;
[[File:15 threshold.JPG|thumb|Simple defect detection algorithm.]]&lt;br /&gt;
&lt;br /&gt;
To simulate an optical image, the perfect defect image is convolved with a generated PSF. Noise and sensor readout are then simulated to produce a sensor image. The detection algorithm is applied to this sensor image to decide which pixels are defects. Detection is based on a simple thresholding algorithm that takes the intensity difference between the defect and the background. The background is calculated as the mean of the image at the edges, where there are no defects.&lt;br /&gt;
If the signal difference is larger than the set threshold, the pixel is classified as a defect. In our simulation, we used a threshold of 1000 DN.&lt;br /&gt;
&lt;br /&gt;
The capture rate of the system can then be calculated:&lt;br /&gt;
Capture Rate = (detected defects)/(total number of defects)&lt;br /&gt;
&lt;br /&gt;
A high capture rate is desirable for a defect inspection system.&lt;br /&gt;
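The thresholding step and the capture-rate computation can be sketched together as a toy example. The function name, the border-based background estimate, and the pixel-level scoring are our own illustrative choices, not the project's exact implementation.&lt;br /&gt;

```python
import numpy as np

def capture_rate(sensor_img, defect_mask, threshold_dn):
    """Flag pixels whose drop below the background mean exceeds the threshold,
    then score the flagged pixels against the known defect positions."""
    # Background level from the image border, where no defects are placed.
    border = np.concatenate([sensor_img[0], sensor_img[-1],
                             sensor_img[1:-1, 0], sensor_img[1:-1, -1]])
    background = border.mean()
    detected = (background - sensor_img) > threshold_dn  # defects are darker
    # Fraction of true defect pixels that were flagged.
    return np.logical_and(detected, defect_mask).sum() / defect_mask.sum()

# Toy example: flat 10000 DN background with four defect pixels at 8500 DN,
# i.e. a 1500 DN deficit against the 1000 DN threshold used in the simulation.
img = np.full((20, 20), 10000.0)
mask = np.zeros((20, 20), dtype=bool)
mask[5, 5] = mask[5, 6] = mask[12, 12] = mask[12, 13] = True
img[mask] = 8500.0
print(capture_rate(img, mask, threshold_dn=1000))
```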
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Background intensity and noise vs. pixel size ===&lt;br /&gt;
[[File:16 BG intensity.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
From the results of our simulation model, background intensity increases quadratically with pixel size. This is expected, as pixel area also increases quadratically with pixel size.&lt;br /&gt;
&lt;br /&gt;
[[File:17 BG Noise.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
Background noise increases linearly with pixel size. This is also expected: the number of photons collected is proportional to the pixel area, and photon shot noise grows as the square root of the photon count, hence linearly with pixel size.&lt;br /&gt;
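These two scalings can be checked with a quick shot-noise calculation using the illumination flux assumed in the Methods. The exposure time here is an illustrative value of our own, not one from the simulation.&lt;br /&gt;

```python
import numpy as np

flux = 100.0      # photons/s/um^2, as assumed in the Methods
exposure_s = 1.0  # illustrative exposure time

pixel_sizes_um = np.array([5.0, 7.0, 10.0, 12.0])
mean_photons = flux * exposure_s * pixel_sizes_um**2  # signal ~ area ~ p^2
shot_noise = np.sqrt(mean_photons)                    # Poisson std ~ p

# Relative to the 5 um pixel: signal grows quadratically, noise linearly.
ratio_signal = mean_photons / mean_photons[0]
ratio_noise = shot_noise / shot_noise[0]
print(ratio_signal, ratio_noise)
```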
&lt;br /&gt;
=== Capture rates when the pixel size and effective defects size match ===&lt;br /&gt;
==== Sensor with 7um pixels ====&lt;br /&gt;
[[File: Capture rate at 7um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
==== Sensor with 10um pixels ====&lt;br /&gt;
[[File: Capture rate at 10um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
==== Sensor with 11.4um pixels ====&lt;br /&gt;
[[File: Capture rate at 11.4um.jpg|500px]]&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
We have successfully established a simulation model for brightfield defect detection microscopy for semiconductor applications. Moreover, for the system used in our simulation model, we have found that detection is optimized when the pixel size matches the effective size of the defect of interest.&lt;br /&gt;
&lt;br /&gt;
In future work, our simulation model could use illumination that is varied per pixel size to achieve a specific gray level. Our scope could also be expanded to compute contributions from other parameters.&lt;br /&gt;
&lt;br /&gt;
== Appendix ==&lt;br /&gt;
&lt;br /&gt;
Camera parameters, taken from Photonic Science.&lt;br /&gt;
[[File:7 actual CMOS camera.JPG|left|400px|Camera parameters used for the simulation|https://photonicscience.com/products/optical-cameras/cooled-scmos-camera/]]&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:Capture_rate_at_11.4um.jpg&amp;diff=33732</id>
		<title>File:Capture rate at 11.4um.jpg</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:Capture_rate_at_11.4um.jpg&amp;diff=33732"/>
		<updated>2023-12-18T05:21:40Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:Capture_rate_at_10um.jpg&amp;diff=33731</id>
		<title>File:Capture rate at 10um.jpg</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:Capture_rate_at_10um.jpg&amp;diff=33731"/>
		<updated>2023-12-18T05:21:29Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:Capture_rate_at_7um.jpg&amp;diff=33730</id>
		<title>File:Capture rate at 7um.jpg</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:Capture_rate_at_7um.jpg&amp;diff=33730"/>
		<updated>2023-12-18T05:21:18Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33727</id>
		<title>Simulation of pixel-size impact for optical brightfield wafer defect inspection</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33727"/>
		<updated>2023-12-18T05:14:40Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: /* Results */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These days, we do not get through a day without technology. Yet, we hardly stop to think about the very foundation that enables our way of life: semiconductors. This is notable because the early computers took up entire rooms; today, we hold them in our palms and fork out hundreds or thousands of dollars to buy them. With the world revolving around digital transformation and technology, the pressure is on semiconductor manufacturers to take on the ever-increasing challenges of creating more complex chip designs and exploring new processes and materials. They also have to grapple with yield loss, an ever-present problem.&lt;br /&gt;
&lt;br /&gt;
Typically, advanced semiconductor nodes are made by combining many processes on a silicon wafer to fabricate the dies containing the intended designs. The dies are then diced into individual chips before being packaged and shipped. As complexity grows, so does the number of processes, and thus the manufacturing steps. These are all inception points for defects, which can cause chips to fail and be discarded. The earlier defects are caught, the less material is wasted. Since fabs achieve economies of scale through high-volume manufacturing [https://www.hitachi.com/rev/archive/2022/r2022_04/pdf/04b01.pdf], every discarded chip eats into operating margin. If the yield of a fab is low, these costs get passed on to the device manufacturer, and ultimately onto us, the consumers! Therefore, yield control and improvement are critical, and can be achieved by inspecting the dies before they proceed to wafer packaging.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
[[File:1 Intro.JPG|thumb|Common defects that can find their way onto wafer pattern designs [https://www.mdpi.com/2079-9292/12/1/76].]]&lt;br /&gt;
[[File:2 Camera Type.JPG|thumb|Various optical defect detection methods [https://www.mks.com/mam/celum/celum_assets/Defect-Detection-Non-Patterned-Wafers_800w.jpg].]]&lt;br /&gt;
&lt;br /&gt;
Wafer defects can be broadly classified into the following categories: (a) Particles, which are small features, commonly dust or airborne contamination that sticks to the wafer surface; (b) Scratches, which are continuous features typically caused by instrument faults; (c) Ripples, which are irregularly shaped features with fringing patterns, commonly due to film interference from coating defects during lithography; (d) Stains, which are large patchy features due to contamination from lithography. Depending on their type and location, defects may be further classified as killer defects, also known as defects of interest. For example, a break in a data-carrying connector would not be acceptable.&lt;br /&gt;
&lt;br /&gt;
Defect analysis involves finding the defect, and optical methods are well-suited for direct detection of defects. A few common optical detection methods are shown: as their names suggest, rotating non-patterned wafer inspection and specular reflection are commonly used for bare wafers. Dark field imaging is best suited for finding small defects such as particles and faint surface defects like scratches. Bright field imaging is used for macroscopic detection, and its results are the most intuitive because the defect sensitivity is equivalent to the system resolution [https://www.mdpi.com/2072-666X/14/8/1568#sec2dot2-micromachines-14-01568].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:3 Different pixel size.JPG&lt;br /&gt;
File:4 Different quantam efficiencies.JPG&lt;br /&gt;
File:5 Different noise performance.JPG&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Automated defect detection of a substrate relies on a system with both hardware and software components. The software component handles machine control and the detection algorithm, which is detailed in the Methods. Focusing on the hardware imaging path, we have the imaging optics and the camera. The imaging optics introduce an optical blur. The camera is our hardware of interest because it has a finite lifetime and usually undergoes multiple revisions in a product&#039;s lifecycle. However, choosing a camera suited for a particular application is complex because the camera itself has multiple parameters, such as pixel size, quantum efficiency, and noise performance.&lt;br /&gt;
&lt;br /&gt;
== Methods ==&lt;br /&gt;
[[File:8 simulation flow.JPG|400px|Generating an optically blurred image.]]&lt;br /&gt;
&lt;br /&gt;
In our project, we take a simplified approach and consider only changing the pixel size while keeping other parameters such as camera noise performance, quantum efficiency, and illumination constant. A simple microscope is simulated using a 10X objective with NA=0.5, a 200mm tube lens, and monochromatic illumination at 550 nm. We assume the illumination flux on the sensor is 100 photons/s/&amp;lt;math&amp;gt;\mu m^2&amp;lt;/math&amp;gt;. The FOV at the sensor plane is 2000um by 2000um, and we vary the pixel size from 5um to 12um to determine the optimal pixel size for our defect of interest.&lt;br /&gt;
&lt;br /&gt;
Our defects of interest are 1um by 1um squares (400 of them), with a reflectivity 10% lower than the background. The defects are placed on a 10um square grid, with a random +/- 1um shift applied in both directions to randomise their positions.&lt;br /&gt;
&lt;br /&gt;
[[File:9 simulation flow with equation.JPG|400px|Simulating noise and getting a sensor readout.]]&lt;br /&gt;
[[File:15 threshold.JPG|thumb|Simple defect detection algorithm.]]&lt;br /&gt;
&lt;br /&gt;
To simulate an optical image, the perfect defect image is convolved with a generated PSF. Noise and sensor readout are then simulated to produce a sensor image. The detection algorithm is applied to this sensor image to decide which pixels are defects. Detection is based on a simple thresholding algorithm that takes the intensity difference between the defect and the background. The background is calculated as the mean of the image at the edges, where there are no defects.&lt;br /&gt;
If the signal difference is larger than the set threshold, the pixel is classified as a defect. In our simulation, we used a threshold of 1000 DN.&lt;br /&gt;
&lt;br /&gt;
The capture rate of the system can then be calculated:&lt;br /&gt;
Capture Rate = (detected defects)/(total number of defects)&lt;br /&gt;
&lt;br /&gt;
A high capture rate is desirable for a defect inspection system.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Background intensity and noise vs. pixel size ===&lt;br /&gt;
[[File:16 BG intensity.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
From the results of our simulation model, background intensity increases quadratically with pixel size. This is expected, as pixel area also increases quadratically with pixel size.&lt;br /&gt;
&lt;br /&gt;
[[File:17 BG Noise.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
Background noise increases linearly with pixel size. This is also expected: the number of photons collected is proportional to the pixel area, and photon shot noise grows as the square root of the photon count, hence linearly with pixel size.&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
We have successfully established a simulation model for brightfield defect detection microscopy for semiconductor applications. Moreover, for the system used in our simulation model, we have found that detection is optimized when the pixel size matches the effective size of the defect of interest.&lt;br /&gt;
&lt;br /&gt;
In future work, our simulation model could use illumination that is varied per pixel size to achieve a specific gray level. Our scope could also be expanded to compute contributions from other parameters.&lt;br /&gt;
&lt;br /&gt;
== Appendix ==&lt;br /&gt;
&lt;br /&gt;
Camera parameters, taken from Photonic Science.&lt;br /&gt;
[[File:7 actual CMOS camera.JPG|left|400px|Camera parameters used for the simulation|https://photonicscience.com/products/optical-cameras/cooled-scmos-camera/]]&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33726</id>
		<title>Simulation of pixel-size impact for optical brightfield wafer defect inspection</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33726"/>
		<updated>2023-12-18T05:13:00Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These days, we do not get through a day without technology. Yet, we hardly stop to think about the very foundation that enables our way of life: semiconductors. This is notable because the early computers took up entire rooms; today, we hold them in our palms and fork out hundreds or thousands of dollars to buy them. With the world revolving around digital transformation and technology, the pressure is on semiconductor manufacturers to take on the ever-increasing challenges of creating more complex chip designs and exploring new processes and materials. They also have to grapple with yield loss, an ever-present problem.&lt;br /&gt;
&lt;br /&gt;
Typically, advanced semiconductor nodes are made by combining many processes on a silicon wafer to fabricate the dies containing the intended designs. The dies are then diced into individual chips before being packaged and shipped. As complexity grows, so does the number of processes, and thus the manufacturing steps. These are all inception points for defects, which can cause chips to fail and be discarded. The earlier defects are caught, the less material is wasted. Since fabs achieve economies of scale through high-volume manufacturing [https://www.hitachi.com/rev/archive/2022/r2022_04/pdf/04b01.pdf], every discarded chip eats into operating margin. If the yield of a fab is low, these costs get passed on to the device manufacturer, and ultimately onto us, the consumers! Therefore, yield control and improvement are critical, and can be achieved by inspecting the dies before they proceed to wafer packaging.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
[[File:1 Intro.JPG|thumb|Common defects that can find their way onto wafer pattern designs [https://www.mdpi.com/2079-9292/12/1/76].]]&lt;br /&gt;
[[File:2 Camera Type.JPG|thumb|Various optical defect detection methods [https://www.mks.com/mam/celum/celum_assets/Defect-Detection-Non-Patterned-Wafers_800w.jpg].]]&lt;br /&gt;
&lt;br /&gt;
Wafer defects can be broadly classified into the following categories: (a) Particles, which are small features, commonly dust or airborne contamination that sticks to the wafer surface; (b) Scratches, which are continuous features typically caused by instrument faults; (c) Ripples, which are irregularly shaped features with fringing patterns, commonly due to film interference from coating defects during lithography; (d) Stains, which are large patchy features due to contamination from lithography. Depending on their type and location, defects may be further classified as killer defects, also known as defects of interest. For example, a break in a data-carrying connector would not be acceptable.&lt;br /&gt;
&lt;br /&gt;
Defect analysis involves finding the defect, and optical methods are well-suited for direct detection of defects. A few common optical detection methods are shown: as their names suggest, rotating non-patterned wafer inspection and specular reflection are commonly used for bare wafers. Dark field imaging is best suited for finding small defects such as particles and faint surface defects like scratches. Bright field imaging is used for macroscopic detection, and its results are the most intuitive because the defect sensitivity is equivalent to the system resolution [https://www.mdpi.com/2072-666X/14/8/1568#sec2dot2-micromachines-14-01568].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:3 Different pixel size.JPG&lt;br /&gt;
File:4 Different quantam efficiencies.JPG&lt;br /&gt;
File:5 Different noise performance.JPG&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Automated defect detection of a substrate relies on a system with both hardware and software components. The software component handles machine control and the detection algorithm, which is detailed in the Methods. Focusing on the hardware imaging path, we have the imaging optics and the camera. The imaging optics introduce an optical blur. The camera is our hardware of interest because it has a finite lifetime and usually undergoes multiple revisions in a product&#039;s lifecycle. However, choosing a camera suited for a particular application is complex because the camera itself has multiple parameters, such as pixel size, quantum efficiency, and noise performance.&lt;br /&gt;
&lt;br /&gt;
== Methods ==&lt;br /&gt;
[[File:8 simulation flow.JPG|400px|Generating an optically blurred image.]]&lt;br /&gt;
&lt;br /&gt;
In our project, we take a simplified approach and consider only changing the pixel size while keeping other parameters such as camera noise performance, quantum efficiency, and illumination constant. A simple microscope is simulated using a 10X objective with NA=0.5, a 200mm tube lens, and monochromatic illumination at 550 nm. We assume the illumination flux on the sensor is 100 photons/s/&amp;lt;math&amp;gt;\mu m^2&amp;lt;/math&amp;gt;. The FOV at the sensor plane is 2000um by 2000um, and we vary the pixel size from 5um to 12um to determine the optimal pixel size for our defect of interest.&lt;br /&gt;
&lt;br /&gt;
Our defects of interest are 1um by 1um squares (400 of them), with a reflectivity 10% lower than the background. The defects are placed on a 10um square grid, with a random +/- 1um shift applied in both directions to randomise their positions.&lt;br /&gt;
&lt;br /&gt;
[[File:9 simulation flow with equation.JPG|400px|Simulating noise and getting a sensor readout.]]&lt;br /&gt;
[[File:15 threshold.JPG|thumb|Simple defect detection algorithm.]]&lt;br /&gt;
&lt;br /&gt;
To simulate an optical image, the perfect defect image is convolved with a generated PSF. Noise and sensor readout are then simulated to produce a sensor image. The detection algorithm is applied to this sensor image to decide which pixels are defects. Detection is based on a simple thresholding algorithm that takes the intensity difference between the defect and the background. The background is calculated as the mean of the image at the edges, where there are no defects.&lt;br /&gt;
If the signal difference is larger than the set threshold, the pixel is classified as a defect. In our simulation, we used a threshold of 1000 DN.&lt;br /&gt;
&lt;br /&gt;
The capture rate of the system can then be calculated:&lt;br /&gt;
Capture Rate = (detected defects)/(total number of defects)&lt;br /&gt;
&lt;br /&gt;
A high capture rate is desirable for a defect inspection system.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
=== Background intensity and noise vs. pixel size ===&lt;br /&gt;
[[File:16 BG intensity.JPG|500px]]&lt;br /&gt;
&lt;br /&gt;
From the results of our simulation model, background intensity increases quadratically with pixel size. This is expected, as pixel area also increases quadratically with pixel size.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
We have successfully established a simulation model for brightfield defect detection microscopy for semiconductor applications. Moreover, for the system used in our simulation model, we have found that detection is optimized when the pixel size matches the effective size of the defect of interest.&lt;br /&gt;
&lt;br /&gt;
In future work, our simulation model could use illumination that is varied per pixel size to achieve a specific gray level. Our scope could also be expanded to compute contributions from other parameters.&lt;br /&gt;
&lt;br /&gt;
== Appendix ==&lt;br /&gt;
&lt;br /&gt;
Camera parameters, taken from Photonic Science.&lt;br /&gt;
[[File:7 actual CMOS camera.JPG|left|400px|Camera parameters used for the simulation|https://photonicscience.com/products/optical-cameras/cooled-scmos-camera/]]&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33703</id>
		<title>Simulation of pixel-size impact for optical brightfield wafer defect inspection</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33703"/>
		<updated>2023-12-18T04:02:45Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
These days, we do not get through a day without technology. Yet, we hardly stop to think about the very foundation that enables our way of life: semiconductors. This is notable because the early computers took up entire rooms; today, we hold them in our palms and fork out hundreds or thousands of dollars to buy them. With the world revolving around digital transformation and technology, the pressure is on semiconductor manufacturers to take on the ever-increasing challenges of creating more complex chip designs and exploring new processes and materials. They also have to grapple with yield loss, an ever-present problem.&lt;br /&gt;
&lt;br /&gt;
Typically, advanced semiconductor nodes are made by combining many processes on a silicon wafer to fabricate the dies containing the intended designs. The dies are then diced into individual chips before being packaged and shipped. As complexity grows, so does the number of processes, and thus the manufacturing steps. These are all inception points for defects, which can cause chips to fail and be discarded. The earlier defects are caught, the less material is wasted. Since fabs achieve economies of scale through high-volume manufacturing [https://www.hitachi.com/rev/archive/2022/r2022_04/pdf/04b01.pdf], every discarded chip eats into operating margin. If the yield of a fab is low, these costs get passed on to the device manufacturer, and ultimately onto us, the consumers! Therefore, yield control and improvement are critical, and can be achieved by inspecting the dies before they proceed to wafer packaging.&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
[[File:1 Intro.JPG|thumb|Common defects that can find their way onto wafer pattern designs [https://www.mdpi.com/2079-9292/12/1/76].]]&lt;br /&gt;
[[File:2 Camera Type.JPG|thumb|Various optical defect detection methods [https://www.mks.com/mam/celum/celum_assets/Defect-Detection-Non-Patterned-Wafers_800w.jpg].]]&lt;br /&gt;
&lt;br /&gt;
Wafer defects can be broadly classified into the following categories: (a) Particles, which are small features, commonly dust or airborne contamination that sticks to the wafer surface; (b) Scratches, which are continuous features typically caused by instrument faults; (c) Ripples, which are irregularly shaped features with fringing patterns, commonly due to film interference from coating defects during lithography; (d) Stains, which are large patchy features due to contamination from lithography. Depending on their type and location, defects may be further classified as killer defects, also known as defects of interest. For example, a break in a data-carrying connector would not be acceptable.&lt;br /&gt;
&lt;br /&gt;
Defect analysis involves finding the defect, and optical methods are well-suited for direct detection of defects. A few common optical detection methods are shown: as their names suggest, rotating non-patterned wafer inspection and specular reflection are commonly used for bare wafers. Dark field imaging is best suited for finding small defects such as particles and faint surface defects like scratches. Bright field imaging is used for macroscopic detection, and its results are the most intuitive because the defect sensitivity is equivalent to the system resolution [https://www.mdpi.com/2072-666X/14/8/1568#sec2dot2-micromachines-14-01568].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;gallery&amp;gt;&lt;br /&gt;
File:3 Different pixel size.JPG&lt;br /&gt;
File:4 Different quantam efficiencies.JPG&lt;br /&gt;
File:5 Different noise performance.JPG&lt;br /&gt;
&amp;lt;/gallery&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Automated defect detection of a substrate relies on a system with both hardware and software components. The software component handles machine control and the detection algorithm, which is detailed in the Methods. Focusing on the hardware imaging path, we have the imaging optics and the camera. The imaging optics introduce an optical blur. The camera is our hardware of interest because it has a finite lifetime and usually undergoes multiple revisions in a product&#039;s lifecycle. However, choosing a camera suited for a particular application is complex because the camera itself has multiple parameters, such as pixel size, quantum efficiency, and noise performance.&lt;br /&gt;
&lt;br /&gt;
== Methods ==&lt;br /&gt;
[[File:8 simulation flow.JPG|400px|Generating an optically blurred image.]]&lt;br /&gt;
&lt;br /&gt;
In our project, we take a simplified approach and consider only changing the pixel size while keeping other parameters such as camera noise performance, quantum efficiency, and illumination constant. A simple microscope is simulated using a 10X objective with NA=0.5, a 200mm tube lens, and monochromatic illumination at 550 nm. We assume the illumination flux on the sensor is 100 photons/s/&amp;lt;math&amp;gt;\mu m^2&amp;lt;/math&amp;gt;. The FOV at the sensor plane is 2000um by 2000um, and we vary the pixel size from 5um to 12um to determine the optimal pixel size for our defect of interest.&lt;br /&gt;
&lt;br /&gt;
Our defects of interest are 1um by 1um squares (400 of them), with a reflectivity 10% lower than the background. The defects are placed on a 10um square grid, with a random +/- 1um shift applied in both directions to randomise their positions.&lt;br /&gt;
&lt;br /&gt;
[[File:9 simulation flow with equation.JPG|400px|Simulating noise and getting a sensor readout.]]&lt;br /&gt;
[[File:15 threshold.JPG|thumb|Simple defect detection algorithm.]]&lt;br /&gt;
&lt;br /&gt;
To simulate an optical image, the perfect defect image is convolved with a generated PSF. Noise and sensor readout are then simulated to produce a sensor image. The detection algorithm is applied to this sensor image to decide which pixels are defects. Detection is based on a simple thresholding algorithm that takes the intensity difference between the defect and the background. The background is calculated as the mean of the image at the edges, where there are no defects.&lt;br /&gt;
If the signal difference is larger than the set threshold, the pixel is classified as a defect. In our simulation, we used a threshold of 1000 DN.&lt;br /&gt;
&lt;br /&gt;
The capture rate of the system can then be calculated:&lt;br /&gt;
Capture Rate = (detected defects)/(total number of defects)&lt;br /&gt;
&lt;br /&gt;
A high capture rate is desirable for a defect inspection system.&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
We have successfully established a simulation model for brightfield defect detection microscopy for semiconductor applications. Moreover, for the system used in our simulation model, we have found that detection is optimized when the pixel size matches the effective size of the defect of interest.&lt;br /&gt;
&lt;br /&gt;
In future work, our simulation model could use illumination that is varied per pixel size to achieve a specific gray level. Our scope could also be expanded to compute contributions from other parameters.&lt;br /&gt;
&lt;br /&gt;
== Appendix ==&lt;br /&gt;
&lt;br /&gt;
Camera parameters, taken from Photonic Science.&lt;br /&gt;
[[File:7 actual CMOS camera.JPG|left|400px|Camera parameters used for the simulation|https://photonicscience.com/products/optical-cameras/cooled-scmos-camera/]]&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:24_SNR_to_pixel_size.JPG&amp;diff=33419</id>
		<title>File:24 SNR to pixel size.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:24_SNR_to_pixel_size.JPG&amp;diff=33419"/>
		<updated>2023-12-16T12:05:55Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:23_Defect_Intensity_11um.JPG&amp;diff=33418</id>
		<title>File:23 Defect Intensity 11um.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:23_Defect_Intensity_11um.JPG&amp;diff=33418"/>
		<updated>2023-12-16T12:05:35Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:22_Sensor_with_11um_pixel.JPG&amp;diff=33417</id>
		<title>File:22 Sensor with 11um pixel.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:22_Sensor_with_11um_pixel.JPG&amp;diff=33417"/>
		<updated>2023-12-16T12:05:17Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:21_Defect_Intensity_10um.JPG&amp;diff=33416</id>
		<title>File:21 Defect Intensity 10um.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:21_Defect_Intensity_10um.JPG&amp;diff=33416"/>
		<updated>2023-12-16T12:05:07Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:20_Sensor_with_10um_pixel.JPG&amp;diff=33415</id>
		<title>File:20 Sensor with 10um pixel.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:20_Sensor_with_10um_pixel.JPG&amp;diff=33415"/>
		<updated>2023-12-16T12:04:56Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:19_Defect_Intensity_7um.JPG&amp;diff=33414</id>
		<title>File:19 Defect Intensity 7um.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:19_Defect_Intensity_7um.JPG&amp;diff=33414"/>
		<updated>2023-12-16T12:04:33Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:17_BG_Noise.JPG&amp;diff=33412</id>
		<title>File:17 BG Noise.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:17_BG_Noise.JPG&amp;diff=33412"/>
		<updated>2023-12-16T12:03:35Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:16_BG_intensity.JPG&amp;diff=33411</id>
		<title>File:16 BG intensity.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:16_BG_intensity.JPG&amp;diff=33411"/>
		<updated>2023-12-16T12:03:22Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:15_threshold.JPG&amp;diff=33410</id>
		<title>File:15 threshold.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:15_threshold.JPG&amp;diff=33410"/>
		<updated>2023-12-16T12:03:03Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:14_Sensor_with_7um_pixel.JPG&amp;diff=33409</id>
		<title>File:14 Sensor with 7um pixel.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:14_Sensor_with_7um_pixel.JPG&amp;diff=33409"/>
		<updated>2023-12-16T12:02:47Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:9_simulation_flow_with_equation.JPG&amp;diff=33404</id>
		<title>File:9 simulation flow with equation.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:9_simulation_flow_with_equation.JPG&amp;diff=33404"/>
		<updated>2023-12-16T12:00:35Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:8_simulation_flow.JPG&amp;diff=33403</id>
		<title>File:8 simulation flow.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:8_simulation_flow.JPG&amp;diff=33403"/>
		<updated>2023-12-16T11:59:31Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:7_actual_CMOS_camera.JPG&amp;diff=33402</id>
		<title>File:7 actual CMOS camera.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:7_actual_CMOS_camera.JPG&amp;diff=33402"/>
		<updated>2023-12-16T11:54:57Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:5_Different_noise_performance.JPG&amp;diff=33400</id>
		<title>File:5 Different noise performance.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:5_Different_noise_performance.JPG&amp;diff=33400"/>
		<updated>2023-12-16T11:52:44Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:4_Different_quantam_efficiencies.JPG&amp;diff=33399</id>
		<title>File:4 Different quantam efficiencies.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:4_Different_quantam_efficiencies.JPG&amp;diff=33399"/>
		<updated>2023-12-16T11:52:33Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:3_Different_pixel_size.JPG&amp;diff=33398</id>
		<title>File:3 Different pixel size.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:3_Different_pixel_size.JPG&amp;diff=33398"/>
		<updated>2023-12-16T11:51:18Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:2_Camera_Type.JPG&amp;diff=33397</id>
		<title>File:2 Camera Type.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:2_Camera_Type.JPG&amp;diff=33397"/>
		<updated>2023-12-16T11:50:56Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=File:1_Intro.JPG&amp;diff=33396</id>
		<title>File:1 Intro.JPG</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=File:1_Intro.JPG&amp;diff=33396"/>
		<updated>2023-12-16T11:50:04Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: Intro background of semiconductor flow&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Summary ==&lt;br /&gt;
Intro background of semiconductor flow&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33395</id>
		<title>Simulation of pixel-size impact for optical brightfield wafer defect inspection</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulation_of_pixel-size_impact_for_optical_brightfield_wafer_defect_inspection&amp;diff=33395"/>
		<updated>2023-12-16T11:20:41Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: Created page with &amp;quot;== Introduction ==  == Background ==  == Methods ==  == Results ==  == Conclusions ==  == Appendix ==  You can write math equations as follows: &amp;lt;math&amp;gt;y = x + 5 &amp;lt;/math&amp;gt;  You can include images as follows (you will need to upload the image first using the toolbox on the left bar, using the &amp;quot;Upload file&amp;quot; link).  200px&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Introduction ==&lt;br /&gt;
&lt;br /&gt;
== Background ==&lt;br /&gt;
&lt;br /&gt;
== Methods ==&lt;br /&gt;
&lt;br /&gt;
== Results ==&lt;br /&gt;
&lt;br /&gt;
== Conclusions ==&lt;br /&gt;
&lt;br /&gt;
== Appendix ==&lt;br /&gt;
&lt;br /&gt;
You can write math equations as follows:&lt;br /&gt;
&amp;lt;math&amp;gt;y = x + 5 &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
You can include images as follows (you will need to upload the image first using the toolbox on the left bar, using the &amp;quot;Upload file&amp;quot; link).&lt;br /&gt;
&lt;br /&gt;
[[File:Snip 20210106183207.png|200px]]&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Psych221-Projects-2023-Fall&amp;diff=33394</id>
		<title>Psych221-Projects-2023-Fall</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Psych221-Projects-2023-Fall&amp;diff=33394"/>
		<updated>2023-12-16T10:18:00Z</updated>

		<summary type="html">&lt;p&gt;Sirong88: /* Projects for Psych 221 (2023-2024) */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
[http://vista.su.domains/psych221wiki/index.php?title=Main_Page#Psych221  Return to Psych 221 Main Page]&lt;br /&gt;
&lt;br /&gt;
There are two deliverables for the project:&lt;br /&gt;
# A group presentation&lt;br /&gt;
# A wiki-style project page write-up&lt;br /&gt;
&lt;br /&gt;
* The write-up should roughly follow [http://vista.su.domains/psych221wiki/index.php?title=Project_Guidelines this organization from the Project Guidelines Page]&lt;br /&gt;
* Please visit [https://www.mediawiki.org/wiki/Help:Editing_pages MediaWiki&#039;s editing help page].&lt;br /&gt;
&lt;br /&gt;
== To set up your project&#039;s page ==&lt;br /&gt;
* Log in to this wiki with the username and password you created.&lt;br /&gt;
* Edit the Projects section of this page (just below). Do this by clicking on &amp;quot;[edit]&amp;quot; to the right of each section title. &lt;br /&gt;
* Make a new line for your project using the format shown below, pasting the line for your project under the last item/group. The first part of the text within the double brackets is the name of the new page. This must be unique, and including the group member names is the safest way to ensure this. The second part, after &#039;|&#039;, is the displayed text and can be your project title.&lt;br /&gt;
* Save the Projects section by clicking the Save button at the bottom of the page.&lt;br /&gt;
* Finally, click on the link for your project. This will take you to a new blank page that you can edit. You can use the basic format for your page that is in the Sample Project.&lt;br /&gt;
* Math tip: Use the tags &amp;amp;lt;math&amp;amp;gt; and &amp;amp;lt;/math&amp;amp;gt; to wrap an equation. For example, this code:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &amp;lt;math&amp;gt; a + b = c^2 &amp;lt;/math&amp;gt; &amp;lt;/pre&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Renders as this equation:&lt;br /&gt;
&lt;br /&gt;
:&amp;lt;math&amp;gt;a + b = c^2&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Projects for Psych 221 (2023-2024) ==&lt;br /&gt;
&lt;br /&gt;
Please edit your project pages starting from the links below. The first one is a sample that includes some wiki formatting. The second one shows how you should format the link on this page. When you format it and save it, clicking on the link will automatically create a blank page that you can edit.&lt;br /&gt;
&lt;br /&gt;
# [[WandellFarrellLian|Sample Project]]&lt;br /&gt;
#* Brian Wandell, Joyce Farrell, Zheng Lyu.&lt;br /&gt;
# [[Wavefront Retrieval from Through-Focus Point Spread Functions with Machine Learning]]&lt;br /&gt;
#* Wanting Xie, Yi Hong To.&lt;br /&gt;
# [[AlexOliviaAudrey|Calibration of Headlight Brightness in ISET Simulations]]&lt;br /&gt;
#* Alex Sun, Olivia Loh, Audrey Lee&lt;br /&gt;
# [[Impact of Camera Characteristics on DNN Model Inference Performance]]&lt;br /&gt;
#* Mohammad Salem, Bogdan Burlacu&lt;br /&gt;
# [[Simulation Studies of an Ultraviolet Laser Absorption Imaging System]]&lt;br /&gt;
#* Jackie Zheng, Steve (Cao) Dong, Amy Dumphy&lt;br /&gt;
# [[Simulate an Underwater Imaging System and Explore Water Absorption and Scattering Estimation Methods]]&lt;br /&gt;
#* Tianyun Zhao, Cecilia Xie, Shiqi Xia&lt;br /&gt;
# [[Simulation of pixel-size impact for optical brightfield wafer defect inspection]]&lt;br /&gt;
#* David Giovanni, Sweesien Lim, Choi Han&lt;/div&gt;</summary>
		<author><name>Sirong88</name></author>
	</entry>
</feed>