App for Programmable Camera in Android


Group Members: Kaitlyn Benitez-Strine, Ronnie Instrella, Joe Maguire

Back to Psych 221 Projects 2014


Introduction

The basic description of our project is as follows:

We have a prototype programmable camera to be used with iOS or Android devices. The project's goal is to make an app that runs on iOS or Android and uses the camera. Think of an interesting camera app, and we can work together to build it. Prior experience in iOS or Android is needed.

Our group's task was to use the power of a programmable camera to make seemingly difficult photos much easier to produce. We immediately started thinking of types of photos that are fairly easy to take (if you know how to manipulate your camera a little) but that the average person would not immediately know how to create. Today's digital cameras can produce various impressive effects if you understand how to work their numerous modes. However, most people don't understand the concepts behind those modes and thus stick to simple "point and shoot" automatic shooting.

We wanted to home in on a single "cool" effect, and after turning to the internet for inspiration, we found our project: slow shutter speed photography. Inspired by the magical picture to the right, our group set out to make slow shutter speed photography an easy experience through the phone.

What Has Been Done in the Past

Our group investigated prior iPhone and Android applications to see what options a typical slow shutter speed photography app offers its users. We picked two iPhone apps, one paid and one free (Slow Shutter Cam and LongExpo respectively), and analyzed how the average person would approach taking a photo with them. We asked a subject to try to match a photo we had taken with each app while doing a think-aloud protocol.

Mobile Application: Slow Shutter Cam

The interface of Slow Shutter Cam

Featured by Apple in "App Store Essentials: Camera & Photography" and "Extraordinary Photo Apps", nominated for the 2010 and 2011 "Best App Ever Award - Best Photo App", and recently mentioned by the NY Times, Slow Shutter Cam is the best slow shutter speed camera app on the market right now for iOS [1].

For $0.99 one can take three types of slow shutter speed photos.

  1. Motion Blur: Lets the user turn on the camera's shutter priority mode and capture images over an extended period of time for lovely blurring effects (perfect for ghost images, waterfalls, and adding a sense of movement to photos).
  2. Low Light: Optimized so that the camera sensor picks up a large number of photons, this mode allows the user to capture people or moving objects under low light conditions.
  3. Light Trail: Used to capture light in motion (such as fireworks or cars at night), this mode allows the user to capture a moving light source.

In addition to these modes, users can adjust focus and exposure, lock the exposure/focus from shot to shot, see a live preview of the captured image, and much more. After the photo is taken, the user can compensate for exposure and adjust brightness, contrast, saturation, and hue.

All of this is wonderful for a photographer wishing to take nearly Digital Single Lens Reflex (DSLR) quality shots with his or her iPhone camera, but we wondered what the average person would make of these features.

Review of Slow Shutter Cam

Never having been exposed to the app before, our subject was confused by a couple of things. First, she was unaware of what the AF/AE locks at the top of the screen were, and even after much experimentation she could not figure out their purpose (locking exposure and focus across multiple shots). It also wasn't intuitive where to find the major modes of the app (reached by pressing the shutter-shaped icon in the lower left of the screen). Once she discovered the button (after having exhausted all other options), she didn't know which option would be best for taking a photo of a waterfall in a shady part of the Stanford Shopping Center in the afternoon. Even after experimenting with the modes, she was never entirely certain what terms like "exposure", "exposure boost", or "light sensitivity" actually meant.

As a result, we intend to make the "difficult" terms easier to understand and the controls for changing modes easier to find for the average non-photographer. This could be done with better symbols and pictures, and with short blurbs explaining the effect each option has on the captured image.

Mobile Application: Long Expo

The interface of Long Expo

LongExpo doesn't have quite as many glowing reviews as Slow Shutter Cam, but it is a great free app for long exposure and light trail photos. Its current rating for the latest update is 4.5 stars [2].

LongExpo also has three modes:

  1. Standard mode: Used for the typical blurring effect, this mode is for regular long exposure photography.
  2. Low Light Mode: With an adjustable bar for exposure in low light conditions, this mode allows the user to take photos of moving objects/people in darker settings.
  3. Light Trail Mode: Focused on recording moving light sources, this mode captures light trails, and light stream photography.

Beyond these modes, one can edit the photo in numerous ways after taking the shot. The user can adjust brightness, contrast, saturation, and where the photo freezes, and can add filters (similar to Instagram), frames, stickers, focus effects, drawings, meme lettering, and other interesting effects.

Again we had an average person who had never seen the app before try to match a photo we had taken previously.

Review of Long Expo

The user had difficulty finding the back/delete buttons when she wanted to take a different picture. It was also unintuitive how to change the shutter speed. After tapping random buttons, she eventually discovered the adjusters she had wanted to find, but it took some time. Even then, among the variable shutter speeds she found a "B" (bulb) setting, which keeps the shutter open for as long as you want, but that was not obvious from the start.

She also never found out that there were several modes (the three listed above), so she stuck with the mode the app opened in and never discovered the others.

All of this made it clear that the buttons for changing modes and going back should be clear and explicit as soon as the app opens. Proper labeling of what each adjuster controls would also be helpful.

What We Intend to Do

The best part about our app, BaseCamA, is that we have a mirrorless, programmable camera to take the photos for our users, meaning the user is free to control the camera remotely via their smartphone while taking high resolution photos. Although we intend to have modes similar to those of the reviewed apps described above, the provided programmable camera offers a few advantages that make for a better shot:

  • Stability: The programmable camera can be positioned remotely, so the photo isn't affected by the user's ability to hold the camera still. With a handheld camera, slight changes in position can degrade the resulting image, but the programmable camera can easily be placed on a table or attached to a tripod.
  • Mobility: The programmable camera can be placed anywhere, so the user does not have to be where the camera is, yet can still see what the camera sees and control it remotely. Thus, the user can be in the photo without racing a self timer.
  • Fine-Tuned Control: The programmable camera offers more possibilities than a smartphone camera, allowing for a better shot - for instance, finer-grained choices of shutter speed, ISO, and aperture.

With these benefits, our group intends to create an app with four modes:

  1. Motion Blurring: Great for capturing movement, either by blurring the moving objects/water or by blurring the background through panning.
  2. Painting With Light: In this technique, either the user stands behind the camera and lights up an object in front of it (the emitted light becomes a sort of paint brush on the night scene), or the user stands in front of the camera with a light source whose movement is captured by the camera (a light trail).
  3. Montage: In this mode the camera takes continuous photos quickly of a moving subject and combines all stills of the movement into one shot.
  4. Thresholding: Here the app exploits the high contrast between light and dark pixels, discarding all color to produce a black and white photo in which the light regions appear white and the dark regions appear black.

And along with these modes the user will be able to adjust the brightness, hue, and contrast after the picture has been taken.

Background

In addition to our basic knowledge of slow shutter speed photography, we searched for more information about motion blurring and painting with light, and we interviewed a Stanford Continuing Studies professor, Joel Simon, for more helpful tidbits. We aim to make slow shutter speed photography as intuitive as possible for any prospective user, but in the end, practice is what will make anyone better at taking photos under any conditions.

Motion Blurring and Panning

Slow shutter speeds introduce two sources of blur: subject movement and camera movement. The shutter sits in front of the camera's sensor and blocks all light until the shutter button opens it; at slow shutter speeds it then stays open for an extended period of time.

Motion Blurring

Motion Blur at Stanford Shopping Center

For motion blurring, ideally the only movement will be that of the subject, since the camera should be held steady on a table or tripod. This keeps the image (at least the background) clearer. Nikon provides a great artist's rendering of what the camera captures of a moving subject at various shutter speeds: http://imaging.nikon.com/history/basics/04/03.htm [3]. When a slower shutter speed is selected, more time elapses from the instant the shutter opens until the instant it closes, so the camera records as much of the subject's movement as the shutter speed allows.

A potential problem is that the longer the camera shutter is open, the greater the risk of blowing out or overexposing the photo. To account for this, Darren Rowse of the Digital Photography School and Joel Simon, a Continuing Studies professor at Stanford, had some helpful tidbits of information. They both suggested closing the aperture, decreasing the ISO, or trying a neutral density filter [5][7]. However, we could not change the aperture due to the fixed lens, nor could we supply every user of the app with a neutral density filter, so our only defense was lowering the ISO for photos taken during the day.

Panning

Another way of capturing movement is by panning with the moving object. Joel suggested we add this feature since it fits naturally alongside motion blurring. Panning entails rotating the camera to follow the moving object (like turning your head on a swivel to watch a passing bicycle or a child on a swing). This keeps the subject sharp while streaking the background in the direction opposite the camera's rotation. The key is to keep the subject in the same position in the frame the entire time the shutter is open.

To find the ideal settings for panning, Darren Rowse's guide to mastering panning was very useful. He suggested keeping the shutter speed slightly lower than you normally would, stating, "Start with 1/30 second and then play around with slower ones. Depending upon the light and the speed of your subject you could end up using anything between 1/60 and 1/8" [6]. So we made sure those settings were available to users who decide to pan the camera.

Painting With Light and Light Trail

Painting With Light

Here the real creativity of the user shines through: in this mode the user can use their phone screen, a flashlight, or a laser pointer to "draw" on objects, highlighting or "etching in" things they wouldn't normally impose on the real world. In this case the camera's sensor captures the emission and reflection of the light source (typically shone from behind the camera).

Joel suggested adding this type of slow shutter speed photography to complement our initial light trail idea, but to learn which features matter most to the user, we turned to the internet. Darlene Hildebrandt suggested locking the focus so that the camera won't "hunt" in the dark for the best focus, setting the exposure to manual or bulb so the camera doesn't keep guessing the best exposure, and putting the ISO as low as possible to minimize noise [10]. She notes, "Basically what you do is set your camera on Bulb, open the shutter using your locking release and walk into your scene and start lighting the objects in the camera view using your flashlight." [10]

Light Trail

When the camera shutter is open for an extended period of time, the sensor continues to receive photons from any light sources in front of it. It therefore records the light source at every position it occupied while the shutter was open, so the resulting image contains a trail of light. Again, a tripod or a steady camera position is pivotal for this mode so that the light trails are of high quality and the background illuminated by the light remains clear. For example, one can capture the movement of traffic, or draw shapes with a flashlight, phone screen, or sparkler in front of the camera. In our app, we envision the phone screen changing its color and intensity to create light trail images in various colors; during this time, the sensor continues to capture photons from each emitting light source to create an interesting image.

Darren Rowse once more had some helpful tips for this type of slow shutter speed photography. Like Darlene's suggestions for painting with light, he recommended keeping a low ISO value for noise reduction, using Bulb mode to keep the exposure steady, and manually locking the focus for the best results [11].

Montage

Burst shooting allows photographers to capture an action scene in a single image. Objects moving across a stationary background are displayed at multiple time points, producing an action shot of the object at various stages of the captured event. This feature is already available on a number of devices and apps, including the Samsung Galaxy S 4 (Drama Shot) [12] and the Quick Burst Shot app [13]. A programmable mirrorless interchangeable lens camera allows the user to generate burst shots of higher quality than those available from integrated cell phone cameras. This type of action shot usually requires at least two people (the photographer and the subject) when using a standard cell phone camera. With the provided programmable camera, the user can take self action shots by controlling the camera remotely from a standard tablet or smartphone. We believe this simple feature would help demonstrate the camera's capabilities to potential users, especially those with little experience in photography or knowledge of digital image processing.

Thresholding

Sample Functions from Catalano Framework

Two of the members of our project have taken computer vision and digital image processing courses at Stanford University. To implement these methods we used the Catalano Framework for Android, which contains a wide range of optimized Android image processing tools.

Methods

Equipment and Software Development

All of the equipment and software development tools used in this project were provided. The app was built upon existing Android software and was implemented on a 10.1 inch Sony Xperia Z tablet. An experimental programmable camera was also provided and used for testing purposes. The camera broadcasts a Wi-Fi signal through which the tablet connects to it wirelessly. The existing Android software includes code to handle network connectivity and storage associated with the camera. The provided software also includes code that captures photos, plus buttons to change the camera settings on a simple user interface. The work of this project was to supplement this code with a redesigned user interface and simple image processing features that let users easily take advantage of the camera's capabilities.

The presented image processing algorithms were originally developed and tested in MATLAB 2013b. Image thresholding, motion blur, and painting with light were implemented in Java using Android Studio. The thresholding algorithm was written with the help of the Catalano Framework, an open source framework that includes a free image processing toolbox for Java and Android applications. This toolbox was imported into the existing software and includes a large library of image processing functions, many of which could be used to develop more features for future versions of BaseCamA. It is worth noting that the motion capture feature was not implemented directly in the app; all of the presented results for this algorithm were produced in MATLAB. This was due to difficulties using continuous shooting (burst shots) on the programmable camera, which is necessary for creating this type of photo.

Camera Parameters

To accomplish the defined tasks, it is important to have control over the camera's settings, including shutter speed, exposure compensation, f-stop, and ISO. Motion blurring occurs at nominal levels of illumination, so we must minimize the amount of light acquired by the CCD or the image will appear washed out. To do this we automatically reduce the ISO value and exposure compensation while increasing the camera's f-stop. For light trail and painting with light, the level of background illumination is very low, so we can keep nominal ISO, exposure compensation, and f-stop while increasing the exposure time. This works because the light source is orders of magnitude brighter than the background, so it always leaves a distinct, noticeable trail when the photo is integrated over a long exposure.
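
As a rough illustration of this logic, the MATLAB sketch below collects one hypothetical preset per mode. The specific shutter speeds, ISO values, exposure compensations, and f-numbers are placeholders chosen to match the reasoning above, not the exact values BaseCamA applies.

    % Hypothetical per-mode camera presets (illustrative values only).
    % Daylight motion blur lowers ISO/EV and narrows the aperture; the
    % low-light modes keep nominal ISO/EV/f-number and lengthen the exposure.
    presets.motionBlurDay  = struct('shutter', 1/4,  'iso', 100, 'ev', -1, 'fnum', 16);
    presets.panning        = struct('shutter', 1/30, 'iso', 200, 'ev',  0, 'fnum', 8);
    presets.lightTrail     = struct('shutter', 15,   'iso', 200, 'ev',  0, 'fnum', 4);
    presets.paintWithLight = struct('shutter', 30,   'iso', 100, 'ev',  0, 'fnum', 4);

    p = presets.lightTrail;   % e.g. what a "long" light trail button might apply
    fprintf('shutter %.3f s, ISO %d, EV %+d, f/%g\n', p.shutter, p.iso, p.ev, p.fnum);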

Image Thresholding

We used image thresholding both as an artistic feature and as a method to segment the image into the illuminated section and the background. The latter enables alteration of only the illuminated section; in practice the user can change the color of the illumination or, in Light Trail, effectively change the light source. A simple example of the latter is extracting the single-pixel-wide path of the light trail and superimposing a laser-pointer-like source along the path. In our results we show how we used this method to alter the color of the illumination.

Algorithmically, we used Otsu thresholding to binarize images. Thresholding works by choosing an intensity cutoff based on the intensities in a greyscale image and setting values above the cutoff to white and values below to black. Otsu's method is an optimization that seeks the cutoff resulting in minimal variance of the intensities above and below it. More rigorously, it minimizes the within-class variance (equivalently, it maximizes the between-class variance), where the classes are the pixels categorized as black and the pixels categorized as white.
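
A minimal MATLAB sketch of this binarization step follows, using the Image Processing Toolbox (graythresh implements Otsu's method). The file name is a placeholder, and the on-device version uses the equivalent Catalano routines instead.

    % Otsu thresholding: split a greyscale image into white (bright) and black (dark).
    rgb   = imread('light_trail.jpg');   % placeholder file name
    grey  = rgb2gray(rgb);               % threshold on intensity only
    level = graythresh(grey);            % Otsu's cutoff, normalized to [0, 1]
    bw    = im2bw(grey, level);          % pixels above the cutoff become white
    imshow(bw);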

Noise Suppression

Low illumination coupled with long exposure results in noise issues for most imaging systems. The most serious noise source is photon noise, which is intrinsic to the light itself rather than to the camera. As discussed in lecture [8], this noise source is due to the discrete nature of photons. Treating light as a wave, it may appear that a smooth wavefront impacts the CCD, producing a continuous distribution of intensity across the sensor. Under this approximation, modern systems often treat image noise as independent additive Gaussian noise. However, reality dictates a more complex behavior: light arrives in discrete packets at nonuniform intervals and quantities across the sensor. This results in a noise signature directly related to scene brightness. Specifically, like most random processes involving arrival times, the number of arriving photons is Poisson distributed, with mean proportional to the light intensity. This means the signal varies around its mean value with a standard deviation equal to the square root of that mean. This variation is called photon noise.

As shown above, a signal whose standard deviation is the square root of its mean has an SNR equal to the square root of the signal. Since the square root function is monotonically increasing, as scene intensity increases so does the signal-to-noise ratio. This has serious consequences for our project because two of its modes, Light Trail and Light Painting, operate at low illumination.
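
This relationship is easy to check numerically. The small MATLAB experiment below (assuming the Statistics Toolbox's poissrnd is available) draws Poisson photon counts at two mean levels and shows that the measured SNR tracks the square root of the mean.

    % Photon (shot) noise: the SNR of a Poisson signal grows as sqrt(mean count).
    for meanPhotons = [100 10000]
        counts = poissrnd(meanPhotons, 1, 100000);   % simulated photon counts per pixel
        snr    = mean(counts) / std(counts);         % measured signal-to-noise ratio
        fprintf('mean %5d photons: SNR = %.1f  (sqrt(mean) = %.1f)\n', ...
                meanPhotons, snr, sqrt(meanPhotons));
    end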

Noise is clear in raw image

The noise, shown above, is apparent. Since this noise is fundamental to the nature of light, we must resort to post-processing techniques to suppress it. After experimenting with many digital image processing techniques, we settled on median filtering the captured images. A median filter works by passing a small window over an image and replacing the pixel at the center of the window with the median value of the pixels inside the window. We used the Catalano Android Imaging Framework to accomplish this. The noise suppression, shown below, is clear. It should be noted that dark current noise is also an issue for our project since we deal with long exposures, but our filtering technique suppresses this as well.

Noise suppressed
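
For reference, a minimal MATLAB sketch of the same median-filtering step is given below (on the device we call the Catalano framework's median filter instead). The 3x3 window size and the file name are illustrative choices, not the exact ones used in the app.

    % Median filtering to suppress photon and dark current noise in a long exposure.
    rgb      = imread('light_painting_raw.jpg');     % placeholder file name
    filtered = rgb;
    for c = 1:size(rgb, 3)                           % filter each color channel separately
        filtered(:,:,c) = medfilt2(rgb(:,:,c), [3 3]);
    end
    imshowpair(rgb, filtered, 'montage');            % raw vs. denoised, side by side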

Montage (Motion Capture)

The montage feature aggregates images from multiple time points onto a single background, which is useful for action shots such as those captured during sporting events. This type of image is created using a simplified version of an image processing algorithm described in "Automatic Generation of Action Sequence Images from Burst Shots" by S. Chen, B. Stabler, and A. Stanley [4]. In our implementation, a series of still images is taken of a moving object on a stationary background, and either the first or last image of the series is designated as the background. For each image in the sequence, the absolute difference from the background is calculated, and median filtering is applied to reduce salt and pepper noise [9]. Image closing with a disk structuring element then fills holes within large objects. Pixels from any of the filtered difference images that lie above a predefined threshold replace the corresponding pixels in the designated background image, and the resulting picture is saved. Selecting a subset of the collected images lets us alter the total number of objects in the final picture, which could be offered as an option to the user in future versions of the app.
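
A condensed MATLAB sketch of this pipeline is shown below. The burst-shot file names, the 0.2 difference threshold, the 5x5 median window, and the disk radius of 10 pixels are illustrative assumptions; the values in our actual script (Appendix I) may differ.

    % Montage: paste the moving subject from each burst frame onto one background.
    files      = {'burst1.jpg', 'burst2.jpg', 'burst3.jpg'};   % placeholder frame names
    background = im2double(imread(files{1}));                  % first frame as background
    result     = background;
    for k = 2:numel(files)
        frame   = im2double(imread(files{k}));
        diffImg = rgb2gray(imabsdiff(frame, background));      % where this frame differs
        diffImg = medfilt2(diffImg, [5 5]);                    % suppress salt and pepper noise
        mask    = imclose(diffImg > 0.2, strel('disk', 10));   % threshold, then fill holes
        mask3   = repmat(mask, [1 1 3]);
        result(mask3) = frame(mask3);                          % copy subject pixels over
    end
    imwrite(result, 'montage.jpg');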

Montage Image Processing Algorithm

Results

Design and User Interface (UI)

BaseCamA redesigned UI
Users can still fine tune the settings

BaseCamA's redesigned interface is shown to the right. A series of buttons allows the user to toggle the Paint with Light and Motion Blur features prior to capturing an image. These buttons, labeled Paint with Light Long, Paint with Light Short, Motion Blur Long, and Motion Blur Short, automatically change the camera's shutter speed, exposure compensation, and ISO values to take a specific type of photo. This way, novice photographers can avoid manually setting the camera parameters for these types of images, which at times can be confusing and nonintuitive. For instance, the user can choose between the long or short forms of these shots, which when triggered automatically alter the shutter speed. While the lower half of our UI provides these user-friendly shortcuts for changing the camera settings, the user still has the ability to adjust settings manually: the upper portion of the interface includes a series of menus to change the aperture size, shutter speed, ISO, and exposure compensation. We kept this feature to give users full access to the capabilities of the camera. We designed the UI this way to reach a wide audience of possible users, from novices to experienced photographers.

Image Thresholding

The image below shows an example of Otsu thresholding. Clearly, beyond its use in generating masks for illuminated pixels, thresholding has some artistic qualities of its own. Uses of image thresholding in the other features are shown in the sections below.

Our Artistic Rendering of Thresholding

Painting with Light & Light Trail

Our painting with light and light trail options are enabled by pressing the designated buttons to the right of the main shutter button. The camera settings for both of these types of images are the same, so for the purposes of this project we do not distinguish between the two on the UI (i.e. we do not include a separate Long or Short button for Light Trail). When triggered, a short message appears telling the user that the feature is either enabled or disabled. Once enabled, predefined camera settings are set automatically and are visible to the user on the upper portion of the interface. Disabling this feature resets the camera parameters back to the camera's default settings.

The results of the binarizing algorithm for both of these features are shown below on two test images. Altering the RGB values of the thresholded pixels allows us to easily change the color scheme of the brightest regions of the image. In the light trail example, the color of the written message can be changed without altering the overall quality of the image. Similarly, in the painting with light example, the color of the regions that received the most light (the subject's pants and part of the illuminated background wall) can be altered without changing the picture quality.
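
A minimal MATLAB sketch of this color replacement is shown below, assuming a placeholder input image and a hard-coded example target color (the on-device feature performs the same masking and channel scaling in Java):

    % Recolor the brightest (thresholded) regions of a light trail or light painting.
    rgb      = im2double(imread('light_trail.jpg'));   % placeholder file name
    grey     = rgb2gray(rgb);
    mask     = im2bw(grey, graythresh(grey));          % Otsu mask of the illuminated pixels
    newColor = [0.2 0.6 1.0];                          % example target color (light blue)
    recolored = rgb;
    for c = 1:3
        channel          = recolored(:,:,c);
        channel(mask)    = newColor(c) .* grey(mask);  % keep brightness, swap the hue
        recolored(:,:,c) = channel;
    end
    imshow(recolored);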

Painting with Light

An example of color replacement for painting with light is shown below:

An example of light painting.

We could produce a wide array of colors via post processing. In future implementations the user could select a color using a slider.

A few of the colors available to the user.

Light Trail

An example of using image thresholding to replace the color of the illumination source for light trail is presented below:

An example of a light trail image.

With a slider bar we could produce a wide array of possible colors.

The user can select from a wide array of colors, a few of which are shown above.

Montage (Motion Capture)

Sample of Montage Results

Overall, our algorithm properly identified and superimposed a moving subject on a designated image background, although areas near the edges were sometimes incomplete; for instance, some areas near the subject's shoes and hands were missing. The total number of objects within the resulting image can also be altered; a few examples of this feature are shown to the left. In future implementations of this algorithm, we hope to add an option that allows users to vary the number of objects in the final image. This way, the user can create a customized picture based on the chosen shutter speed and the speed of the moving object.

It is important to note that our group could not take successive photos (continuous shooting) using the provided programmable camera, and therefore did not implement this feature in the app itself. The following images were taken using a standard DSLR camera, and the image processing algorithm was implemented and tested in MATLAB. The code for these algorithms is available online.

Conclusions

We successfully implemented Light Trail, Light Painting, and Image Thresholding on the mobile device and Montage in MATLAB. Although we encountered some technical difficulties with the implementation, specifically memory issues with large photos and some difficulties with the API, we emerged comfortable with the platform and its capabilities. The programmable camera offers the user a very powerful imaging tool, one with serious specifications and support for interchangeable lenses. Modern smartphones themselves offer excellent onboard image processing capabilities, only a few years behind regular computers in speed. Together, these systems act as a small, mobile, yet powerful image processing platform with a high-end camera. Our project shows the results of brief experimentation with this system. With more time, we are confident that we could better leverage the smartphone's processing power to give this imaging system capabilities not offered by current camera systems.

References - Resources and related work

References

[1] "Slow Shutter Cam on the App Store on iTunes." itunes.apple.com. Apple Online Store, 18 Dec 2013. Web. 15 Mar 2014. <https://itunes.apple.com/us/app/slow-shutter-cam/id357404131?mt=8>.

[2] "LongExpo - slow shutter and long exposure camera on the App Store on iTunes." itunes.apple.com. Apple Online Store, 09 Jan 2014. Web. 15 Mar 2014. <https://itunes.apple.com/us/app/longexpo-slow-shutter-long/id594078421?mt=8>.

[3] "Digital SLR Camera Basics: Shutter Speed." imaging.nikon.com. Nikon Corporation. Web. 17 Mar 2014. <http://imaging.nikon.com/history/basics/04/03.htm>.

[4] Andrew Stanley, Ben Stabler, and Sean Chen. "Automatic Generation of Action Sequence Images from Burst Shots." EE 368: Digital Image Processing, Stanford University, 2013. Print.

[5] Rowse, Darren. "How to Capture Motion Blur in Photography - Digital Photography School." http://digital-photography-school.com. Digital Photography School. Web. 20 Mar 2014. <http://digital-photography-school.com/how-to-capture-motion-blur-in-photography>.

[6] Rowse, Darren, ed. "Mastering Panning - Photographing Moving Subjects - dPS." digital-photography-school.com. Digital Photography School. Web. 20 Mar 2014. <http://digital-photography-school.com/mastering-panning-to-photograph-moving-subjects/>.

[7] Simon, Joel. Personal Interview. 06 Mar 2014.

[8] Wandell, Brian. "Image Capture Sensor 2014 Slides." Psych 221 Class. Stanford University. Stanford. Feb 2014. Lecture.

[9] Lim, Jae S., Two-Dimensional Signal and Image Processing, Englewood Cliffs, NJ, Prentice Hall, 1990, pp. 469-476.

[10] Hildebrandt, Darlene. "Light Painting Part One - the Photography - Digital Photography School." digital-photography-school.com. Digital Photography School. Web. 20 Mar 2014. <http://digital-photography-school.com/light-painting-part-one-the-photography>.

[11] Rowse, Darren. "How to Shoot Light Trails - Digital Photography Schools." digital-photography-school.com. Digital Photography School. Web. 20 Mar 2014. <http://digital-photography-school.com/how-to-shoot-light-trails>.

[12] Molen, Brad. "Samsung Galaxy S 4 review." http://www.engadget.com. AOL Tech, 24 Apr 2013. Web. 20 Mar 2014. <http://www.engadget.com/2013/04/24/samsung-galaxy-s-4-review/>.

[13] "Quick Burst Shot (free) -Android Apps on Google Play." play.google.com. Google, 11 Nov 2013. Web. 20 Mar 2014. <https://play.google.com/store/apps/details?id=co.kr.easysoft.fastcamera_free>.

Software

Matlab: http://www.mathworks.com/

Android Studio: http://developer.android.com/sdk/installing/studio.html

Catalano Framework: http://code.google.com/p/catalano-framework/

Appendix I - Code and Data

In the belief that the techniques used may be illustrated best by example, the MATLAB code used to perform the image altering is available below along with sample data from the project.

Code

We are restricted from sharing the code for most of our app, BaseCamA. For more information on this code, please contact Steven Lansel.

However, the code written in MATLAB is available below.

File:Matlab script.zip

Data

Here is a zip file of all of the images we collected through this project.

File:Data BaseCamA.zip

Presentation

This project was given as a 12-minute presentation to the PSYCH221 Winter 2014 class at Stanford. The presentation file used is linked below.

12min PDF Presentation File

Appendix II - Breakdown of Work Within the Project

We all did the wiki and presentation together, writing up our specific contributions to the project.

Kaitlyn Benitez-Strine

Kaitlyn came up with the specific idea to do an app on slow shutter speed photography. She coordinated meeting times and made sure everyone was on track with what they were supposed to be doing, and focused on the design of the app and how the average non-photographer would handle the application. She did background research on other slow shutter photography apps (by having a friend do a think-aloud protocol while using them) and met with a photography professor at Stanford to ensure the group had enough of a photography background to create a photography app.

Joe Maguire

Joe Maguire leveraged his knowledge of Android and image processing to develop the basic framework of the application and implement certain client-facing demos. The former focused on backend software development (e.g. memory management, inter-session communications, altering data types, extending the API). On the client side, he implemented the torch flashlight mode, thresholding, noise suppression, and the image processing tasks related to the Catalano Framework.

Ronnie Instrella

Ronnie Instrella implemented and tested most of the digital image processing algorithms, including color selection for Light Trail and Painting with Light, as well as the motion capture algorithm. He was also responsible for redesigning the app's user interface, adding all of the buttons, pop-up messages, and altered user menus. He also met with Steve Lansel to give progress reports and updates. Ronnie created the final presentation slides, which are available as a PDF file in Appendix I.