<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>http://vista.su.domains/psych221wiki/index.php?action=history&amp;feed=atom&amp;title=Simulating_Vision_through_Retinal_Prothesis</id>
	<title>Simulating Vision through Retinal Prothesis - Revision history</title>
	<link rel="self" type="application/atom+xml" href="http://vista.su.domains/psych221wiki/index.php?action=history&amp;feed=atom&amp;title=Simulating_Vision_through_Retinal_Prothesis"/>
	<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;action=history"/>
	<updated>2026-04-18T09:12:51Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.45.3</generator>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15304&amp;oldid=prev</id>
		<title>imported&gt;Projects221: /* References - Resources and Related Work */</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15304&amp;oldid=prev"/>
		<updated>2014-03-20T18:47:02Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;References - Resources and Related Work&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 18:47, 20 March 2014&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l118&quot;&gt;Line 118:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 118:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[1] &amp;quot;Project Xense Retinal Implant Simulation&amp;quot; etc.cmu.edu. Carnegie Mellon University, 2012. Web. 14 Mar 2014. &amp;lt;http://www.etc.cmu.edu/projects/tatrc&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[1] &amp;quot;Project Xense Retinal Implant Simulation&amp;quot; etc.cmu.edu. Carnegie Mellon University, 2012. Web. 14 Mar 2014. &amp;lt;http://www.etc.cmu.edu/projects/tatrc&amp;gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[2] &quot;The Argus® II Retinal Prosthesis System&quot; 2-sight.eu&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;/en/product-en&lt;/del&gt;.  The Argus II Retinal Prosthesis System, 2014. Web. 15 Mar 2014. &amp;lt;http://2-sight.eu/en/about-us-en&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[2] &quot;The Argus® II Retinal Prosthesis System&quot; 2-sight.eu.  The Argus II Retinal Prosthesis System, 2014. Web. 15 Mar 2014. &amp;lt;http://2-sight.eu/en/about-us-en&amp;gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[3] &amp;quot;Holographic display system for restoration of sight to the blind&amp;quot; G A Goetz et al 2013 J. Neural Eng. 10 056021&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[3] &amp;quot;Holographic display system for restoration of sight to the blind&amp;quot; G A Goetz et al 2013 J. Neural Eng. 10 056021&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>imported&gt;Projects221</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15303&amp;oldid=prev</id>
		<title>imported&gt;Projects221 at 18:45, 20 March 2014</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15303&amp;oldid=prev"/>
		<updated>2014-03-20T18:45:58Z</updated>

		<summary type="html">&lt;p&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 18:45, 20 March 2014&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l58&quot;&gt;Line 58:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 58:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Facial Recognition ==  &lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== Facial Recognition ==  &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;For facial recognition, I used the OpenCV training set and the cv2.CascadeClassifier method to determine whether or not there was a face on the screen. The algorithm is not robust enough to distinguish a particular face; it can only detect the presence of a face. With face detection in place, we could superimpose over the face an object that would be much easier to detect. In this particular case, we used meme faces as the objects to superimpose over the face.  &lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;For facial recognition, I used the OpenCV training set and the cv2.CascadeClassifier method to determine whether or not there was a face on the screen. The algorithm is not robust enough to distinguish a particular face; it can only detect the presence of a face. With face detection in place, we could superimpose over the face an object that would be much easier to detect. In this particular case, we used meme faces as the objects to superimpose over the face.  &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Pixellation &lt;/del&gt;==  &lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;== &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Pixelation &lt;/ins&gt;==  &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;I employed 4 methods of &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixellation&lt;/del&gt;:&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;I employed 4 methods of &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixelation&lt;/ins&gt;:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In the first, I shrink down the source image by an adjustable constant and then restore the image to its original size. The result is a computationally quick method of &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixellation &lt;/del&gt;that doesn&#039;t perfectly constrain image coloring to well-defined pixel blocks. However, it is passable as a means of simulating the visual acuity of restored sight, and it can be used to test visual acuity with and without the assistance of computer vision algorithms.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In the first, I shrink down the source image by an adjustable constant and then restore the image to its original size. The result is a computationally quick method of &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixelation &lt;/ins&gt;that doesn&#039;t perfectly constrain image coloring to well-defined pixel blocks. However, it is passable as a means of simulating the visual acuity of restored sight, and it can be used to test visual acuity with and without the assistance of computer vision algorithms.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In the second, I iterate through an image via blocks of an adjustable size. Within each block, I take the mean intensity value of the block and assign that value to each pixel in the block. This method is computationally more expensive, although it does allow for sharply defined pixel blocks.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In the second, I iterate through an image via blocks of an adjustable size. Within each block, I take the mean intensity value of the block and assign that value to each pixel in the block. This method is computationally more expensive, although it does allow for sharply defined pixel blocks.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l80&quot;&gt;Line 80:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 80:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The main features I found I needed to incorporate into my simulation were: encoding the dynamic range of applied voltages of individual pixels, varying resolution in response to pixel density on the retinal prosthesis, and removing color from images (as it&amp;#039;s doubtful our prosthesis will be able to transmit color information).  &lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The main features I found I needed to incorporate into my simulation were: encoding the dynamic range of applied voltages of individual pixels, varying resolution in response to pixel density on the retinal prosthesis, and removing color from images (as it&amp;#039;s doubtful our prosthesis will be able to transmit color information).  &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:Face.png|400px|center| Unfiltered image]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:Face.png|400px|center| Unfiltered image]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:filteredface.png|400px|center| Image after undergoing &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixillation&lt;/del&gt;, color removal, and Otsu binarization]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:filteredface.png|400px|center| Image after undergoing &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixelation&lt;/ins&gt;, color removal, and Otsu binarization]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;These specifications were met. A user of the simulator has control of pixel density, color, and how dynamic range is expressed. To elaborate, dynamic range can be expressed either by pixel color or pixel radius, and it can span from the entire spectrum of grays to just two colors, as demonstrated by Otsu thresholding. If the user chooses to &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixellate &lt;/del&gt;the image, there are three options from which to choose: square, dot, and radial.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;These specifications were met. A user of the simulator has control of pixel density, color, and how dynamic range is expressed. To elaborate, dynamic range can be expressed either by pixel color or pixel radius, and it can span from the entire spectrum of grays to just two colors, as demonstrated by Otsu thresholding. If the user chooses to &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixelate &lt;/ins&gt;the image, there are three options from which to choose: square, dot, and radial.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Here is some information on specific parameters:&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Here is some information on specific parameters:&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Pixellation &lt;/del&gt;yields pixel blocks of 80x80, 40x40, 20x20, 10x10, 8x8, and 4x4 pixels.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Pixelation &lt;/ins&gt;yields pixel blocks of 80x80, 40x40, 20x20, 10x10, 8x8, and 4x4 pixels.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The neighborhood size used by the blurring method to determine central pixel intensity ranges from 1 to 30.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;The neighborhood size used by the blurring method to determine central pixel intensity ranges from 1 to 30.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Frame rate ranges from the camera&amp;#039;s advertised fps (~30 fps in the case of my webcam) down to 1/4 fps.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Frame rate ranges from the camera&amp;#039;s advertised fps (~30 fps in the case of my webcam) down to 1/4 fps.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l98&quot;&gt;Line 98:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 98:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Utilizing facial recognition, I was able to replace faces with symbols that are generally easier to recognize as a face through low-resolution vision. There was the caveat that some threshold and edge detection filters compromised the effectiveness of this face substitution.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Utilizing facial recognition, I was able to replace faces with symbols that are generally easier to recognize as a face through low-resolution vision. There was the caveat that some threshold and edge detection filters compromised the effectiveness of this face substitution.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:mask.png|400px|center|  Image after &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixillation &lt;/del&gt;demonstrates the difficulty of recognizing a face]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:mask.png|400px|center|  Image after &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixelation &lt;/ins&gt;demonstrates the difficulty of recognizing a face]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:maskmeme.png|400px|center|  Superimposing the mask symbol helps distinguish the presence of a face]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:maskmeme.png|400px|center|  Superimposing the mask symbol helps distinguish the presence of a face]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding edge detection, I found that only Canny edge detection seemed to be useful, at least in theory. My other two high-pass filter implementations, Sobel and Laplacian, failed to distinguish edges from noise sharply enough to be recognized after latter-stage &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixellation &lt;/del&gt;once the low-frequency content is lost. With that said, Canny edge detection still outputs an image that is difficult to interpret at large enough pixel sizes, and it fails to communicate edge information upon &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixillation &lt;/del&gt;of the image. In practice, the implementation of edge detection was not successful. Sobel edge detection accentuates noise too much to really communicate image content, though it may work in a pinch in cases of very low dynamic range. The other two don&#039;t define edges strongly enough to persist through &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixillation&lt;/del&gt;.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding edge detection, I found that only Canny edge detection seemed to be useful, at least in theory. 
My other two high-pass filter implementations, Sobel and Laplacian, failed to distinguish edges from noise sharply enough to be recognized after latter-stage &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixelation &lt;/ins&gt;once the low-frequency content is lost. With that said, Canny edge detection still outputs an image that is difficult to interpret at large enough pixel sizes, and it fails to communicate edge information upon &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixelation &lt;/ins&gt;of the image. In practice, the implementation of edge detection was not successful. Sobel edge detection accentuates noise too much to really communicate image content, though it may work in a pinch in cases of very low dynamic range. The other two don&#039;t define edges strongly enough to persist through &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixelation&lt;/ins&gt;.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:threshold.jpg|frame|center| From left to right: No threshold, Canny Edge Detection, Sobel Edge Detection, Laplacian Edge Detection. &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Pixellation &lt;/del&gt;renders only the first and third columns interpretable.]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:threshold.jpg|frame|center| From left to right: No threshold, Canny Edge Detection, Sobel Edge Detection, Laplacian Edge Detection. &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Pixelation &lt;/ins&gt;renders only the first and third columns interpretable.]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding thresholding, I found Otsu thresholding to be more effective at communicating information than the other two threshold implementations I used: Gaussian and mean adaptive thresholding. At very large pixel sizes, the image is still able to maintain the two contiguous shapes generated by the Otsu method. In contexts where key information, such as the presence of a face or large object, is held in the foreground, Otsu binarization does an adequate job of still relaying that information at very low resolutions. While a high dynamic range gray scale will generally communicate information better than a low dynamic range one, the prosthesis will have difficulty meeting the level of dynamic range displayed in the left image. In cases of very limited dynamic range, Otsu binarization is an adequate means of communicating information.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding thresholding, I found Otsu thresholding to be more effective at communicating information than the other two threshold implementations I used: Gaussian and mean adaptive thresholding. At very large pixel sizes, the image is still able to maintain the two contiguous shapes generated by the Otsu method. In contexts where key information, such as the presence of a face or large object, is held in the foreground, Otsu binarization does an adequate job of still relaying that information at very low resolutions. 
While a high dynamic range gray scale will generally communicate information better than a low dynamic range one, the prosthesis will have difficulty meeting the level of dynamic range displayed in the left image. In cases of very limited dynamic range, Otsu binarization is an adequate means of communicating information.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:otsu1.png|400px|center| Unfiltered image]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:otsu1.png|400px|center| Unfiltered image]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l109&quot;&gt;Line 109:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 109:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;= Conclusions and Future Work =&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;= Conclusions and Future Work =&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;This simulation experiment yielded some insights into the difficulty of conveying important information to the wearer, as well as the difficulty in developing an adequate simulator. First off, the limited information regarding what restored vision looks like from patient testimonials made confidence in any particular strategy of simulating restored vision difficult. The dot array and &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixillation &lt;/del&gt;strategies I did employ, however, did convey the difficulty of perceiving objects at low resolutions. In light of this development, the edge detection, thresholding, and face detection strategies I employed, while arguably improving perception, left a lot of room for improvement, and justified the need for more advanced computer vision algorithms to improve restored vision functionality.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;This simulation experiment yielded some insights into the difficulty of conveying important information to the wearer, as well as the difficulty in developing an adequate simulator. First off, the limited information regarding what restored vision looks like from patient testimonials made confidence in any particular strategy of simulating restored vision difficult. 
The dot array and &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;pixelation &lt;/ins&gt;strategies I did employ, however, did convey the difficulty of perceiving objects at low resolutions. In light of this development, the edge detection, thresholding, and face detection strategies I employed, while arguably improving perception, left a lot of room for improvement, and justified the need for more advanced computer vision algorithms to improve restored vision functionality.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In the future, accompanied with a better understanding of what corrected vision looks like, we could develop a more accurate model of what patients actually see. Once we have a better base of pixels from which to work, we could add more sophisticated computer vision algorithms to incorporate features such as object detection and edge enhancement. Additionally, having a smart camera determine from context what information to send to the patient, such as deciding to emphasize a bathroom sign if one is caught in the patient&#039;s visual field, could do wonders in enhancing the functionality of restored vision. Hopefully, this simulation can be used to better illustrate the quality of vision patients have to work with, and can be used as a tool in developing computer vision algorithms that can improve the functionality of this retinal prosthesis and future methods of restoring eyesight.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;In the future, accompanied with a better understanding of what corrected vision looks like, we could develop a more accurate model of what patients actually see. Once we have a better base of pixels from which to work, we could add more sophisticated computer vision algorithms to incorporate features such as object detection and edge enhancement. 
Additionally, having a smart camera determine from context what information to send to the patient, such as deciding to emphasize a bathroom sign if one is caught in the patient&#039;s visual field, could do wonders in enhancing the functionality of restored vision. Hopefully, this simulation can be used to better illustrate the quality of vision patients have to work with, and can be used as a tool in developing computer vision algorithms that can improve the functionality of this retinal prosthesis and future methods of restoring eyesight.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>imported&gt;Projects221</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15302&amp;oldid=prev</id>
		<title>imported&gt;Projects221: /* Computer Vision Assistance */</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15302&amp;oldid=prev"/>
		<updated>2014-03-19T00:25:08Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Computer Vision Assistance&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 00:25, 19 March 2014&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l102&quot;&gt;Line 102:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 102:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding edge detection, I found that only Canny edge detection seemed to be useful, at least in theory. My other two high pass filter implementations, Sobel and Laplacian, failed to sharply distinguish edges from noise, enough to be recognized in latter stage pixellation once the low frequency content is lost. With that said, Canny edge detection still outputs an image that is difficult to interpret with large enough pixel sizes, and fails to communicate edge information upon pixillation of the image. In practice, the implementation of edge detection was not successful. Sobel Edge Detection accentuates noise too much to really communicate image content, they may work in a pinch in events of very low dynamic range. The other two don&amp;#039;t define edges strongly enough to persist through pixillation.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding edge detection, I found that only Canny edge detection seemed to be useful, at least in theory. My other two high pass filter implementations, Sobel and Laplacian, failed to sharply distinguish edges from noise, enough to be recognized in latter stage pixellation once the low frequency content is lost. With that said, Canny edge detection still outputs an image that is difficult to interpret with large enough pixel sizes, and fails to communicate edge information upon pixillation of the image. 
In practice, the implementation of edge detection was not successful. Sobel Edge Detection accentuates noise too much to really communicate image content, they may work in a pinch in events of very low dynamic range. The other two don&amp;#039;t define edges strongly enough to persist through pixillation.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:threshold.jpg|&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;800px&lt;/del&gt;|center| &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Threshold comparisons&lt;/del&gt;]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:threshold.jpg|&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;frame&lt;/ins&gt;|center| &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;From left to Right: No threshold, Canny Edge Detection, Sobel Edge Detection, Laplacian Edge Detection. Pixellation renders only the first and third columns interpretable.&lt;/ins&gt;]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding thresholding, I found Otsu thresholding to be more effective at communicating information than the other two threshold implementations I used: Gaussian and mean adaptive thresholding. At very large pixel sizes, the image is still able to maintain the two contiguous shapes generated by the Otsu method. In contexts where key information, such as the presence of a face or large object, is held in the foreground, Otsu binarization does an adequate job of still relaying that information at very low resolutions. While a high dynamic range gray scale will generally communicate information better than a low dynamic range one, the prosthesis will have difficulty meeting the level of dynamic range displayed in the left image. In cases of very limited dynamic range, Otsu binarization is an adequate means of communicating information.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding thresholding, I found Otsu thresholding to be more effective at communicating information than the other two threshold implementations I used: Gaussian and mean adaptive thresholding. At very large pixel sizes, the image is still able to maintain the two contiguous shapes generated by the Otsu method. In contexts where key information, such as the presence of a face or large object, is held in the foreground, Otsu binarization does an adequate job of still relaying that information at very low resolutions. 
While a high dynamic range gray scale will generally communicate information better than a low dynamic range one, the prosthesis will have difficulty meeting the level of dynamic range displayed in the left image. In cases of very limited dynamic range, Otsu binarization is an adequate means of communicating information.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:otsu1.png|400px|center| Unfiltered image]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:otsu1.png|400px|center| Unfiltered image]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>imported&gt;Projects221</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15301&amp;oldid=prev</id>
		<title>imported&gt;Projects221: /* Computer Vision Assistance */</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15301&amp;oldid=prev"/>
		<updated>2014-03-19T00:21:52Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Computer Vision Assistance&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 00:21, 19 March 2014&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l102&quot;&gt;Line 102:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 102:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding edge detection, I found that only Canny edge detection seemed to be useful, at least in theory. My other two high pass filter implementations, Sobel and Laplacian, failed to sharply distinguish edges from noise, enough to be recognized in latter stage pixellation once the low frequency content is lost. With that said, Canny edge detection still outputs an image that is difficult to interpret with large enough pixel sizes, and fails to communicate edge information upon pixillation of the image. In practice, the implementation of edge detection was not successful. Sobel Edge Detection accentuates noise too much to really communicate image content, they may work in a pinch in events of very low dynamic range. The other two don&amp;#039;t define edges strongly enough to persist through pixillation.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding edge detection, I found that only Canny edge detection seemed to be useful, at least in theory. My other two high pass filter implementations, Sobel and Laplacian, failed to sharply distinguish edges from noise, enough to be recognized in latter stage pixellation once the low frequency content is lost. With that said, Canny edge detection still outputs an image that is difficult to interpret with large enough pixel sizes, and fails to communicate edge information upon pixillation of the image. 
In practice, the implementation of edge detection was not successful. Sobel Edge Detection accentuates noise too much to really communicate image content, they may work in a pinch in events of very low dynamic range. The other two don&amp;#039;t define edges strongly enough to persist through pixillation.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:threshold.&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;jpeg&lt;/del&gt;|800px|center| Threshold comparisons]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:threshold.&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;jpg&lt;/ins&gt;|800px|center| Threshold comparisons]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding thresholding, I found Otsu thresholding to be more effective at communicating information than the other two threshold implementations I used: Gaussian and mean adaptive thresholding. At very large pixel sizes, the image is still able to maintain the two contiguous shapes generated by the Otsu method. In contexts where key information, such as the presence of a face or large object, is held in the foreground, Otsu binarization does an adequate job of still relaying that information at very low resolutions. While a high dynamic range gray scale will generally communicate information better than a low dynamic range one, the prosthesis will have difficulty meeting the level of dynamic range displayed in the left image. In cases of very limited dynamic range, Otsu binarization is an adequate means of communicating information.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding thresholding, I found Otsu thresholding to be more effective at communicating information than the other two threshold implementations I used: Gaussian and mean adaptive thresholding. At very large pixel sizes, the image is still able to maintain the two contiguous shapes generated by the Otsu method. In contexts where key information, such as the presence of a face or large object, is held in the foreground, Otsu binarization does an adequate job of still relaying that information at very low resolutions. 
While a high dynamic range gray scale will generally communicate information better than a low dynamic range one, the prosthesis will have difficulty meeting the level of dynamic range displayed in the left image. In cases of very limited dynamic range, Otsu binarization is an adequate means of communicating information.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:otsu1.png|400px|center| Unfiltered image]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:otsu1.png|400px|center| Unfiltered image]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>imported&gt;Projects221</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15300&amp;oldid=prev</id>
		<title>imported&gt;Projects221: /* Computer Vision Assistance */</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15300&amp;oldid=prev"/>
		<updated>2014-03-19T00:15:48Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Computer Vision Assistance&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 00:15, 19 March 2014&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l102&quot;&gt;Line 102:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 102:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding edge detection, I found that only Canny edge detection seemed to be useful, at least in theory. My other two high-pass filter implementations, Sobel and Laplacian, failed to distinguish edges from noise sharply enough for the edges to remain recognizable after later-stage pixelation, once the low-frequency content is lost. With that said, Canny edge detection still outputs an image that is difficult to interpret at large enough pixel sizes, and fails to communicate edge information once the image is pixelated. In practice, the implementation of edge detection was not successful. Sobel edge detection accentuates noise too much to communicate image content, though it may work in a pinch when dynamic range is very low. The other two do not define edges strongly enough to persist through pixelation.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding edge detection, I found that only Canny edge detection seemed to be useful, at least in theory. My other two high-pass filter implementations, Sobel and Laplacian, failed to distinguish edges from noise sharply enough for the edges to remain recognizable after later-stage pixelation, once the low-frequency content is lost. With that said, Canny edge detection still outputs an image that is difficult to interpret at large enough pixel sizes, and fails to communicate edge information once the image is pixelated. In practice, the implementation of edge detection was not successful. Sobel edge detection accentuates noise too much to communicate image content, though it may work in a pinch when dynamic range is very low. The other two do not define edges strongly enough to persist through pixelation.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:threshold.&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;jpg&lt;/del&gt;|800px|center| Threshold comparisons]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:threshold.&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;jpeg&lt;/ins&gt;|800px|center| Threshold comparisons]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding thresholding, I found Otsu thresholding to be more effective at communicating information than the other two threshold implementations I used: Gaussian and mean adaptive thresholding. At very large pixel sizes, the image is still able to maintain the two contiguous shapes generated by the Otsu method. In contexts where key information, such as the presence of a face or a large object, is held in the foreground, Otsu binarization does an adequate job of relaying that information even at very low resolutions. While a high-dynamic-range grayscale image will generally communicate information better than a low-dynamic-range one, the prosthesis will have difficulty matching the level of dynamic range displayed in the left image. In cases of very limited dynamic range, Otsu binarization is an adequate means of communicating information.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding thresholding, I found Otsu thresholding to be more effective at communicating information than the other two threshold implementations I used: Gaussian and mean adaptive thresholding. At very large pixel sizes, the image is still able to maintain the two contiguous shapes generated by the Otsu method. In contexts where key information, such as the presence of a face or a large object, is held in the foreground, Otsu binarization does an adequate job of relaying that information even at very low resolutions. While a high-dynamic-range grayscale image will generally communicate information better than a low-dynamic-range one, the prosthesis will have difficulty matching the level of dynamic range displayed in the left image. In cases of very limited dynamic range, Otsu binarization is an adequate means of communicating information.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:otsu1.png|400px|center| Unfiltered image]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:otsu1.png|400px|center| Unfiltered image]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>imported&gt;Projects221</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15299&amp;oldid=prev</id>
		<title>imported&gt;Projects221: /* Computer Vision Assistance */</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15299&amp;oldid=prev"/>
		<updated>2014-03-19T00:14:57Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Computer Vision Assistance&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 00:14, 19 March 2014&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l100&quot;&gt;Line 100:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 100:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:mask.png|400px|center|  Image after pixelation demonstrates the difficulty of recognizing a face]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:mask.png|400px|center|  Image after pixelation demonstrates the difficulty of recognizing a face]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:maskmeme.png|400px|center|  Superimposing the mask symbol helps distinguish the presence of a face]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:maskmeme.png|400px|center|  Superimposing the mask symbol helps distinguish the presence of a face]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding edge detection, I found that only Canny edge detection seemed to be useful, at least in theory. My other two high-pass filter implementations, Sobel and Laplacian, failed to distinguish edges from noise sharply enough for the edges to remain recognizable after later-stage pixelation, once the low-frequency content is lost. With that said, Canny edge detection still outputs an image that is difficult to interpret at large enough pixel sizes, and fails to communicate edge information once the image is pixelated. In practice, the implementation of edge detection was not successful&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding edge detection, I found that only Canny edge detection seemed to be useful, at least in theory. My other two high-pass filter implementations, Sobel and Laplacian, failed to distinguish edges from noise sharply enough for the edges to remain recognizable after later-stage pixelation, once the low-frequency content is lost. With that said, Canny edge detection still outputs an image that is difficult to interpret at large enough pixel sizes, and fails to communicate edge information once the image is pixelated. In practice, the implementation of edge detection was not successful&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;. Sobel edge detection accentuates noise too much to communicate image content, though it may work in a pinch when dynamic range is very low. The other two do not define edges strongly enough to persist through pixelation.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[[File:threshold.jpg|800px|center| Threshold comparisons]]&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding thresholding, I found Otsu thresholding to be more effective at communicating information than the other two threshold implementations I used: Gaussian and mean adaptive thresholding. At very large pixel sizes, the image is still able to maintain the two contiguous shapes generated by the Otsu method. In contexts where key information, such as the presence of a face or a large object, is held in the foreground, Otsu binarization does an adequate job of relaying that information even at very low resolutions. While a high-dynamic-range grayscale image will generally communicate information better than a low-dynamic-range one, the prosthesis will have difficulty matching the level of dynamic range displayed in the left image. In cases of very limited dynamic range, Otsu binarization is an adequate means of communicating information.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding thresholding, I found Otsu thresholding to be more effective at communicating information than the other two threshold implementations I used: Gaussian and mean adaptive thresholding. At very large pixel sizes, the image is still able to maintain the two contiguous shapes generated by the Otsu method. In contexts where key information, such as the presence of a face or a large object, is held in the foreground, Otsu binarization does an adequate job of relaying that information even at very low resolutions. While a high-dynamic-range grayscale image will generally communicate information better than a low-dynamic-range one, the prosthesis will have difficulty matching the level of dynamic range displayed in the left image. In cases of very limited dynamic range, Otsu binarization is an adequate means of communicating information.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:otsu1.png|400px|center| Unfiltered image]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:otsu1.png|400px|center| Unfiltered image]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:otsu2.png|400px|center| &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Unfiltered &lt;/del&gt;image]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:otsu2.png|400px|center| &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Filtered &lt;/ins&gt;image]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;= Conclusions and Future Work =&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;= Conclusions and Future Work =&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>imported&gt;Projects221</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15298&amp;oldid=prev</id>
		<title>imported&gt;Projects221: /* Computer Vision Assistance */</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15298&amp;oldid=prev"/>
		<updated>2014-03-19T00:05:38Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Computer Vision Assistance&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 00:05, 19 March 2014&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l100&quot;&gt;Line 100:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 100:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:mask.png|400px|center|  Image after pixelation demonstrates the difficulty of recognizing a face]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:mask.png|400px|center|  Image after pixelation demonstrates the difficulty of recognizing a face]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:maskmeme.png|400px|center|  Superimposing the mask symbol helps distinguish the presence of a face]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:maskmeme.png|400px|center|  Superimposing the mask symbol helps distinguish the presence of a face]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding edge detection, I found that only Canny edge detection seemed to be useful, at least in theory. My other two high-pass filter implementations, Sobel and Laplacian &lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;derivatives&lt;/del&gt;, failed to distinguish edges from noise sharply enough for the edges to remain recognizable after later-stage pixelation, once the low-frequency content is lost. With that said, Canny edge detection still outputs an image that is difficult to interpret at large enough pixel sizes, and fails to communicate edge information once the image is pixelated. In practice, the implementation of edge detection was not successful&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding edge detection, I found that only Canny edge detection seemed to be useful, at least in theory. My other two high-pass filter implementations, Sobel and Laplacian, failed to distinguish edges from noise sharply enough for the edges to remain recognizable after later-stage pixelation, once the low-frequency content is lost. With that said, Canny edge detection still outputs an image that is difficult to interpret at large enough pixel sizes, and fails to communicate edge information once the image is pixelated. In practice, the implementation of edge detection was not successful&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding thresholding, I found Otsu thresholding to be more effective at communicating information than the other two threshold implementations I used: Gaussian and mean adaptive thresholding. At very large pixel sizes, the image is still able to maintain the two contiguous shapes generated by the Otsu method. In contexts where key information, such as the presence of a face or a large object, is held in the foreground, Otsu binarization does an adequate job of relaying that information even at very low resolutions. While a high-dynamic-range grayscale image will generally communicate information better than a low-dynamic-range one, the prosthesis will have difficulty matching the level of dynamic range displayed in the left image. In cases of very limited dynamic range, Otsu binarization is an adequate means of communicating information.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Regarding thresholding, I found Otsu thresholding to be more effective at communicating information than the other two threshold implementations I used: Gaussian and mean adaptive thresholding. At very large pixel sizes, the image is still able to maintain the two contiguous shapes generated by the Otsu method. In contexts where key information, such as the presence of a face or a large object, is held in the foreground, Otsu binarization does an adequate job of relaying that information even at very low resolutions. While a high-dynamic-range grayscale image will generally communicate information better than a low-dynamic-range one, the prosthesis will have difficulty matching the level of dynamic range displayed in the left image. In cases of very limited dynamic range, Otsu binarization is an adequate means of communicating information.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>imported&gt;Projects221</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15297&amp;oldid=prev</id>
		<title>imported&gt;Projects221: /* Results */</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15297&amp;oldid=prev"/>
		<updated>2014-03-19T00:04:03Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Results&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 00:04, 19 March 2014&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l81&quot;&gt;Line 81:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 81:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:Face.png|400px|center| Unfiltered image]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:Face.png|400px|center| Unfiltered image]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:filteredface.png|400px|center| Image after undergoing pixelation, color removal, and Otsu binarization]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:filteredface.png|400px|center| Image after undergoing pixelation, color removal, and Otsu binarization]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;These specifications were met. A user of the simulator has control of pixel density, color, and how dynamic range is expressed. To elaborate, dynamic range can be expressed either by pixel color or pixel radius, and it can span anywhere from the full spectrum of grays down to just two colors, as demonstrated by Otsu thresholding. If the user chooses to pixelate the image, there are three options from which to choose: square, dot, and radial&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;These specifications were met. A user of the simulator has control of pixel density, color, and how dynamic range is expressed. To elaborate, dynamic range can be expressed either by pixel color or pixel radius, and it can span anywhere from the full spectrum of grays down to just two colors, as demonstrated by Otsu thresholding. If the user chooses to pixelate the image, there are three options from which to choose: square, dot, and radial&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt; &lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Here is some information on specific parameters:&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Pixelation yields pixel blocks of 80x80, 40x40, 20x20, 10x10, 8x8, and 4x4 pixels.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;The neighborhood size used by the blurring method to determine central pixel intensity ranges from 1 to 30.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Frame rate ranges from the camera&#039;s advertised rate (~30 fps in the case of my webcam) down to 1/4 fps.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-side-deleted&quot;&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;Regarding the track bar governing Facial Recognition Accuracy vs. Speed, the parameter n corresponds to a resizing of the image examined for faces by a factor of 1/n. This resizing allows the code to run faster, although it compromises the accuracy of facial recognition. The bounds of this parameter are somewhat arbitrary, provided I don&#039;t shrink the image so much that facial recognition is completely compromised. I set the maximum to n = 10.&lt;/ins&gt;&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:ThreeOptions.jpeg|800px|center| Three types of pixel expressions]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:ThreeOptions.jpeg|800px|center| Three types of pixel expressions]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>imported&gt;Projects221</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15296&amp;oldid=prev</id>
		<title>imported&gt;Projects221: /* Introduction */</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15296&amp;oldid=prev"/>
		<updated>2014-03-18T23:46:05Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Introduction&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 23:46, 18 March 2014&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l5&quot;&gt;Line 5:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 5:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;= Introduction =&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;= Introduction =&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:PalankerDevice.jpg|400px|right]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:PalankerDevice.jpg|400px|right]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Retinal degenerative diseases such as age-related macular degeneration or retinitis pigmentosa are among the leading causes of blindness in the developed world. These diseases lead to a loss of photoreceptors, while the inner retinal neurons survive to a large extent. Electrical stimulation of the surviving retinal neurons has been achieved either epiretinally, in which case the primary targets of stimulation are the retinal ganglion cells (RGCs), or subretinally to bypass the degenerated photoreceptors and use neurons in the inner nuclear layer (bipolar, amacrine and horizontal cells) as primary targets [&lt;del style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;1&lt;/del&gt;]. Other fully optical approaches to restoration of sight include optogenetics, in which retinal neurons are transfected to express light-sensitive Na and Cl channels; small-molecule photoswitches, which bind to K channels and make them light sensitive; or photovoltaic implants based on thin-film polymers.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Retinal degenerative diseases such as age-related macular degeneration or retinitis pigmentosa are among the leading causes of blindness in the developed world. These diseases lead to a loss of photoreceptors, while the inner retinal neurons survive to a large extent. 
Electrical stimulation of the surviving retinal neurons has been achieved either epiretinally, in which case the primary targets of stimulation are the retinal ganglion cells (RGCs), or subretinally to bypass the degenerated photoreceptors and use neurons in the inner nuclear layer (bipolar, amacrine and horizontal cells) as primary targets [&lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;3&lt;/ins&gt;]. Other fully optical approaches to restoration of sight include optogenetics, in which retinal neurons are transfected to express light-sensitive Na and Cl channels; small-molecule photoswitches, which bind to K channels and make them light sensitive; or photovoltaic implants based on thin-film polymers.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Recent clinical studies with epiretinal and subretinal prosthetic systems have demonstrated improvements in visual function on certain tasks, with some patients able to identify letters at an equivalent visual acuity of up to 20/550. Despite this progress in visual acuity, vision at this resolution still lacks much of the functionality of normal vision. Simulating vision through a retinal prosthesis, and processing the image in various ways, could identify better methods of transferring information through the retina at this limited bandwidth. In order to aid the development of future image processing software, this group will simulate vision through the retinal prosthesis developed by the Palanker Lab.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Recent clinical studies with epiretinal and subretinal prosthetic systems have demonstrated improvements in visual function on certain tasks, with some patients able to identify letters at an equivalent visual acuity of up to 20/550. Despite this progress in visual acuity, vision at this resolution still lacks much of the functionality of normal vision. Simulating vision through a retinal prosthesis, and processing the image in various ways, could identify better methods of transferring information through the retina at this limited bandwidth. 
In order to aid the development of future image processing software, this group will simulate vision through the retinal prosthesis developed by the Palanker Lab.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>imported&gt;Projects221</name></author>
	</entry>
	<entry>
		<id>http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15295&amp;oldid=prev</id>
		<title>imported&gt;Projects221: /* Introduction */</title>
		<link rel="alternate" type="text/html" href="http://vista.su.domains/psych221wiki/index.php?title=Simulating_Vision_through_Retinal_Prothesis&amp;diff=15295&amp;oldid=prev"/>
		<updated>2014-03-18T23:45:25Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Introduction&lt;/span&gt;&lt;/p&gt;
&lt;table style=&quot;background-color: #fff; color: #202122;&quot; data-mw=&quot;interface&quot;&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;col class=&quot;diff-marker&quot; /&gt;
				&lt;col class=&quot;diff-content&quot; /&gt;
				&lt;tr class=&quot;diff-title&quot; lang=&quot;en&quot;&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;← Older revision&lt;/td&gt;
				&lt;td colspan=&quot;2&quot; style=&quot;background-color: #fff; color: #202122; text-align: center;&quot;&gt;Revision as of 23:45, 18 March 2014&lt;/td&gt;
				&lt;/tr&gt;&lt;tr&gt;&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot; id=&quot;mw-diff-left-l5&quot;&gt;Line 5:&lt;/td&gt;
&lt;td colspan=&quot;2&quot; class=&quot;diff-lineno&quot;&gt;Line 5:&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;= Introduction =&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;= Introduction =&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:PalankerDevice.jpg|400px|right]]&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;[[File:PalankerDevice.jpg|400px|right]]&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;−&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #ffe49c; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Retinal degenerative diseases such as age-related macular degeneration or retinitis pigmentosa are among the leading causes of blindness in the developed world. These diseases lead to a loss of photoreceptors, while the inner retinal neurons survive to a large extent. Electrical stimulation of the surviving retinal neurons has been achieved either epiretinally, in which case the primary targets of stimulation are the retinal ganglion cells (RGCs), or subretinally to bypass the degenerated photoreceptors and use neurons in the inner nuclear layer (bipolar, amacrine and horizontal cells) as primary targets. Other fully optical approaches to restoration of sight include optogenetics, in which retinal neurons are transfected to express light-sensitive Na and Cl channels; small-molecule photoswitches, which bind to K channels and make them light sensitive; or photovoltaic implants based on thin-film polymers.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot; data-marker=&quot;+&quot;&gt;&lt;/td&gt;&lt;td style=&quot;color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #a3d3ff; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Retinal degenerative diseases such as age-related macular degeneration or retinitis pigmentosa are among the leading causes of blindness in the developed world. These diseases lead to a loss of photoreceptors, while the inner retinal neurons survive to a large extent. 
Electrical stimulation of the surviving retinal neurons has been achieved either epiretinally, in which case the primary targets of stimulation are the retinal ganglion cells (RGCs), or subretinally to bypass the degenerated photoreceptors and use neurons in the inner nuclear layer (bipolar, amacrine and horizontal cells) as primary targets &lt;ins style=&quot;font-weight: bold; text-decoration: none;&quot;&gt;[1]&lt;/ins&gt;. Other fully optical approaches to restoration of sight include optogenetics, in which retinal neurons are transfected to express light-sensitive Na and Cl channels; small-molecule photoswitches, which bind to K channels and make them light sensitive; or photovoltaic implants based on thin-film polymers.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;br&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;tr&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Recent clinical studies with epiretinal and subretinal prosthetic systems have demonstrated improvements in visual function on certain tasks, with some patients able to identify letters at an equivalent visual acuity of up to 20/550. Despite this progress in visual acuity, vision at this resolution still lacks much of the functionality of normal vision. Simulating vision through a retinal prosthesis, and processing the image in various ways, could identify better methods of transferring information through the retina at this limited bandwidth. In order to aid the development of future image processing software, this group will simulate vision through the retinal prosthesis developed by the Palanker Lab.&lt;/div&gt;&lt;/td&gt;&lt;td class=&quot;diff-marker&quot;&gt;&lt;/td&gt;&lt;td style=&quot;background-color: #f8f9fa; color: #202122; font-size: 88%; border-style: solid; border-width: 1px 1px 1px 4px; border-radius: 0.33em; border-color: #eaecf0; vertical-align: top; white-space: pre-wrap;&quot;&gt;&lt;div&gt;Recent clinical studies with epiretinal and subretinal prosthetic systems have demonstrated improvements in visual function on certain tasks, with some patients able to identify letters at an equivalent visual acuity of up to 20/550. Despite this progress in visual acuity, vision at this resolution still lacks much of the functionality of normal vision. Simulating vision through a retinal prosthesis, and processing the image in various ways, could identify better methods of transferring information through the retina at this limited bandwidth. 
In order to aid the development of future image processing software, this group will simulate vision through the retinal prosthesis developed by the Palanker Lab.&lt;/div&gt;&lt;/td&gt;&lt;/tr&gt;
&lt;/table&gt;</summary>
		<author><name>imported&gt;Projects221</name></author>
	</entry>
</feed>