Inexpensive LED Video Wall

From Psych 221 Image Systems Engineering
Our 50 panel LED display in Ram's Head's production of Hairspray

Stephen Hitchcock and Matt Lathrop

We created a large, modular LED video display to be used in a variety of settings, from concerts to theatrical productions to art installations. The wall is made up of fifty 4’ x 4’ panels, for a total assembled size of 40’ x 20’ and an effective resolution of 200 x 100 pixels.

LED technology has always been expensive, primarily due to the high costs associated with producing batches of quality LEDs that create a uniform image. This video wall was made for roughly 1/10th the cost of a professional product with similar pixel density by using inexpensive LEDs and then imaging our panels with a dSLR to measure relative luminance. Furthermore, we used a color spectrometer to record the gamut, white point, and gamma of the LEDs. With this data we mapped the sRGB color space into the color space of the LED wall, allowing us to produce content on a standard monitor and then display it on the wall while preserving the colors in the final image.

These techniques, combined with the hardware and software design, produced a professional looking video wall for a fraction of the cost of alternatives.

Background

Although LEDs were invented in 1927, when O. V. Lossev of Russia described the first LEDs in his paper Luminous Carborundum Detector and Detection Effect and Oscillations with Crystals, it is only within the past 25 years that they have become useful for displays. Many professionally made displays are available today; however, our constrained budget necessitated a search for affordable components adequate to accomplish our goals.

LED Display Technology

The first major LED display was unveiled by James P. Mitchell at the 29th International Science and Engineering Exposition in 1978. Though only monochromatic, due to the poor performance of blue LEDs at the time, the display was important both as a prototype and as a demonstration of LED capabilities. Unlike Cathode Ray or Liquid Crystal technologies, LEDs serve as both the light source and the control element at the per-pixel level, which allows LED displays to be both incredibly thin and incredibly large. Furthermore, their vibrant color rendering and low power draw make them a clear candidate for large scale projects, as they are both visible at long ranges and can be reasonably powered with existing infrastructure.

To understand our methodology, it is important to note that in commercial LED display production, individual LEDs are sampled after manufacturing and matched with other, similarly performing LEDs. This way, when a large number of LEDs are used in parallel, such as in a display, color rendering and luminance are consistent across the surface. As a result, the manufacturing process is extremely expensive, as the vast majority of LEDs fail to match performance and consequently cannot be used in a commercial display.

Specifications

The unique qualities of LEDs made them the obvious choice when designing our large scale project. We knew from the outset that our display would consist of fifty 4' x 4' modular panels with a pixel pitch of 2.4 inches. This yields a resolution of 20 x 20 pixels per panel, allowing all fifty panels to be configured in a single 40' x 20', 200 x 100 pixel display. A commercially available product at this size and resolution would cost in the neighborhood of $100,000, well beyond our limited budget of approximately $10,000. We therefore needed to devise a strategy for building a display at roughly 1/10th of the normal cost, all while fulfilling these requirements.
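
As a quick sanity check of the geometry, the panel and wall resolutions follow directly from the pitch. A minimal Python sketch, assuming the panels are arranged 10 across and 5 high:

    # Display geometry from the specifications above
    pitch_in = 2.4                                  # pixel pitch in inches
    panel_in = 48                                   # 4 ft panel edge in inches
    pixels_per_edge = int(panel_in / pitch_in)      # 20 pixels per panel edge
    pixels_per_panel = pixels_per_edge ** 2         # 400 pixels per panel
    total_pixels = 50 * pixels_per_panel            # 20,000 pixels in the full wall
    wall_resolution = (10 * pixels_per_edge, 5 * pixels_per_edge)
    print(pixels_per_edge, pixels_per_panel, total_pixels, wall_resolution)
    # 20 400 20000 (200, 100), i.e. a 40' x 20' wall at 200 x 100 pixels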

Methods

The spectral power distributions of approximately 20 sampled LEDs. Note the variation in power but not in wavelength for R, G, & B

In order to meet our specifications, we devised a number of processes and investigations that served the overall goals of sourcing, manufacturing, and controlling our display. Unfortunately, detailing all of these is beyond the scope of this article, so we will focus primarily on our methodology for color calibration and control.

LED Sourcing

The obvious solution to reducing costs was simply to purchase inexpensive LEDs. As stated above, commercial displays rely on LEDs that are color matched after manufacturing, resulting in astronomical costs. Since we lacked the means to obtain such LEDs, our strategy from the outset was to find the best performance to cost ratio and attempt to correct inadequacies in our software.

We began by sampling products from a number of different vendors internationally. To meet our specification of a 20,000 pixel display, we needed to spend approximately $0.20 per LED. We also needed LEDs with a WS2811 chipset, as that allowed us to control all three colors in a single, well documented module. With the assistance of Joyce Farrell and her lab at Stanford University, we were able to measure our candidates with a color spectrometer to assess quality and consistency. The LEDs we ended up selecting were chosen for one reason in particular: while they exhibited the expected variation in luminance across approximately twenty samples, their wavelength output was extremely consistent, making the color correction process considerably easier.

Our control pipeline

Control Architecture

In order to control our display, we developed a pipeline that captures video from a monitor, applies color correction, and distributes chunks of the image to specific panels. To accomplish this task cheaply and efficiently, we turned to popular Arduino-based products for their low cost and well documented libraries.

We chose PJRC’s OctoWS2811 for its 8 separate data outputs: since our LEDs update sequentially, injecting data along separate strips mitigates delay issues, as 8 LEDs can be updated at a time per board instead of just one. Each OctoWS2811 adapter has two RJ45 jacks, each of which carries four of the data lines. The RJ45 jacks are paired with Cat6 Ethernet cables to carry the signals over long distances with minimal loss in fidelity. Since each adapter has two jacks, we ran a single Cat6 cable to each panel, which keeps the wall highly modular.
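
To see why the parallel outputs matter, consider the per-line update time. The sketch below is a back-of-the-envelope estimate that assumes the common 800 kHz WS2811 timing (1.25 µs per bit, 24 bits per LED) and that each panel's 400 LEDs are split evenly across the four data lines in its Cat6 run:

    # Rough update-time estimate for one data line (assumed 800 kHz WS2811 timing)
    leds_per_panel = 400                    # 20 x 20 pixels
    lines_per_panel = 4                     # one RJ45 jack carries four data lines
    leds_per_line = leds_per_panel // lines_per_panel   # 100 LEDs per line
    bit_time_us = 1.25                      # microseconds per bit at 800 kHz
    bits_per_led = 24                       # 8 bits each for R, G, and B
    line_update_ms = leds_per_line * bits_per_led * bit_time_us / 1000.0
    print(line_update_ms)                   # 3.0 ms, comfortably inside a ~20 ms frame at 48 fps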

The software works by presenting the user with a transparent window that can be moved around the screen. Anything under the window is captured into memory as a matrix of RGB values, allowing the wall to be independent of any particular software package. Once the software has a matrix of RGB values representing the image, we perform a series of operations for controlling color, which are detailed in later sections. The matrix is then divided into pieces representing the various panels and sent to the individual Teensy boards. This has to be done incredibly quickly, as there is only about 1/64th of a second to transfer all of the data from the computer to the boards if we want to keep a reasonable frame rate. It is worth mentioning that for this project’s purposes, a frame rate of about 48 fps seemed to give the wall the most responsive and smooth motion without inducing dropped frames. To achieve this frame rate, we output data to all 25 Teensy boards in parallel.
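
A minimal sketch of the frame-splitting step, assuming a 200 x 100 captured frame arranged as a 10 x 5 grid of 20 x 20 panels (the screen capture, color correction, and serial protocol are omitted; names are illustrative, not from our code):

    import numpy as np

    PANEL = 20                                        # pixels per panel edge

    def split_into_panels(frame):
        """Split a (100, 200, 3) RGB frame into 20 x 20 chunks, one per panel."""
        panels = {}
        for row in range(frame.shape[0] // PANEL):        # 5 rows of panels
            for col in range(frame.shape[1] // PANEL):    # 10 columns of panels
                panels[(row, col)] = frame[row * PANEL:(row + 1) * PANEL,
                                           col * PANEL:(col + 1) * PANEL]
        return panels

    frame = np.zeros((100, 200, 3), dtype=np.uint8)   # captured screen region
    panels = split_into_panels(frame)                 # 50 chunks, one per panel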

Once the data leaves the computer, the Teensy boards read it and output it to the LEDs. One of the 25 Teensy boards is designated as the master; once it finishes receiving its data, the master board waits for 75% of the frame period to pass before sending a pulse on the frame-sync wire. This pulse tells all other Teensy boards to begin sending their stored data to the wall. Once all the data has been sent out, the boards begin listening for new data from the computer.

Our panel imaging setup

Luminance Correction

At this point, the largest roadblock preventing our display from looking like a professional product was a per-pixel inconsistency when the display was set to white or other non-primary colors. This is largely because, when sub-pixels are mixed to create white and intermediate colors, the variance in output brightness from LED pixel to LED pixel becomes increasingly pronounced. Unfortunately, humans are very good at spotting the difference between two lights of slightly different color, so when LEDs of varying brightness mix colors, it is quite apparent to a viewer that the display is not a uniform color. To combat this issue, we developed an efficient and cost-effective method for color correcting our LED panels. The process was handled in three parts: imaging, processing, and correcting.

Imaging

The first step in this process was to record the relative luminance that each LED was outputting. We did this with a high quality dSLR, imaging the panels individually in full red, full green, and full blue. The largest issue with this step of the calibration process was that the panels had to be imaged over multiple days, which can lead to slight inconsistencies in the distance between the panels and the camera. We also had issues with some panels not being wired correctly for imaging (some of our ground leads were improperly assembled, creating a voltage drop and a shift in color); these panels were fixed later. The color of the affected LEDs therefore changed significantly between the capture process and the final assembly of the display, which meant the color correction was inaccurate for those pixels. These strips are fairly evident in pictures of the finished display.

A before and after of the corrected display. Note that we reduced the greatest variance between pixels by a factor of 3

Processing

Once we had all of the images, we needed to distill them into useful information. To make the wall look uniform, we needed consistent brightness within each type of sub-pixel. Since there is no easy way to make an LED brighter than its maximum, we needed to identify the three lowest-performing sub-pixels (one red, one green, one blue) in the display and cap all other LEDs at those brightness levels. After organizing image pixels into buckets corresponding to a particular sub-pixel, we calculated the luminance of each as defined by the CIE XYZ color space: we found the brightest image pixel for each LED pixel, then applied the CIE-defined conversion from sRGB to XYZ to its RGB components. Once we had a single number representing the brightness of each LED sub-pixel, we could search for the LED pixels with the worst red, green, and blue components and save their brightness values. Dividing each of these minimum values by the corresponding brightness value of every LED in the wall produced a correction coefficient for each LED sub-pixel. This coefficient is between 0 and 1 and is multiplied by the input signal to get a corrected output value.
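
A condensed sketch of this processing step, assuming the captures have already been reduced to one linear RGB measurement per LED pixel for each of the full-red, full-green, and full-blue images (variable names are illustrative, not from our code):

    import numpy as np

    def luminance(rgb):
        """CIE Y from linear RGB, using the standard sRGB/Rec. 709 weights."""
        return 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

    def correction_coefficients(red_meas, green_meas, blue_meas):
        """red_meas, green_meas, blue_meas: (num_pixels, 3) linear RGB per LED pixel."""
        coeffs = np.empty((red_meas.shape[0], 3))
        for i, meas in enumerate((red_meas, green_meas, blue_meas)):
            y = luminance(meas)          # brightness of this sub-pixel for every LED
            coeffs[:, i] = y.min() / y   # dimmest LED gets 1.0, brighter LEDs get < 1.0
        return coeffs                    # multiplied against the input signal per sub-pixel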

Correcting

The final step of the process was to store these coefficients in a look-up table so that the software could access color correction data on a per-pixel basis.

Color Space Conversion

The SPD of our sample before and after normalization

While the luminance correction was successful in normalizing the color across the display, the outputted image suffered from a number of color rendering issues, such as an uncorrected white point and gamma. Initially, this was handled by a series of "magic numbers" that we found through trial and error. These numbers were problematic for a number of reasons, not the least of which was poor color performance when comparing our display to a computer monitor (our device seemed to particularly struggle in the blues and purples). Therefore, we remeasured a number of the LED properties with a spectrometer and employed ISET to help develop a more accurate system.

Our first step was to determine the output gamma of our LEDs, a feature we had neglected to measure the first time around. By recording the spectral power distribution for each sub-pixel at value 255 (full) and decreasing that value in steps of 20 (255, 235, ...), we were able to plot each color's power output in relation to its control value. It quickly became obvious that the relationship was linear for all three sub-pixels, which greatly simplified that end of the process. Next, we needed to normalize the pixel we used to capture SPDs in reference to the other 20,000 pixels in the wall; this ensured that the matrices we would generate for color conversion reflected the average pixel in the display.
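
The linearity check itself is straightforward: fit the measured power against the control value on log-log axes and read off the exponent. A small sketch with placeholder measurements (not our data):

    import numpy as np

    control = np.arange(255, 0, -20)       # drive values: 255, 235, ..., as in the sweep
    power = 0.9 * control                  # placeholder; replace with the measured SPD totals
    gamma, _ = np.polyfit(np.log(control), np.log(power), 1)
    print(gamma)                           # an exponent near 1.0 indicates a linear response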

Once we had gathered this background information, we began developing our method for accurately portraying a given color from a computer monitor. We start with a given RGB pixel value from that monitor, \(c_{sRGB}\). The first step is to convert this value from the sRGB color space into linear RGB. We then multiply the linear RGB value by the coefficient for the output pixel (generated in the luminance correction process), which we call \(k\). Next, we convert this clamped RGB value into the XYZ color space.
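
The sRGB-to-linear step follows the standard IEC 61966-2-1 decoding; a minimal sketch:

    import numpy as np

    def srgb_to_linear(c):
        """Decode sRGB values in [0, 1] to linear RGB (IEC 61966-2-1)."""
        c = np.asarray(c, dtype=float)
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)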

Now, we need to generate a matrix to convert from XYZ into the color space of our LEDs. We approach this by first building the matrix that converts from our LEDs' color space to XYZ and then inverting it. That forward matrix is given by the following formula, where \(s_R(\lambda), s_G(\lambda), s_B(\lambda)\) are the spectral power distributions of our LEDs, \(\bar{x}(\lambda), \bar{y}(\lambda), \bar{z}(\lambda)\) are the XYZ color matching functions, and \(\Delta\lambda\) is the sampling density of wavelengths, which in our case is 4 nm:

\[
M_{LED \rightarrow XYZ} = \Delta\lambda \sum_{\lambda} \begin{bmatrix} \bar{x}(\lambda) \\ \bar{y}(\lambda) \\ \bar{z}(\lambda) \end{bmatrix} \begin{bmatrix} s_R(\lambda) & s_G(\lambda) & s_B(\lambda) \end{bmatrix}
\]

\(M_{LED \rightarrow XYZ}\) is a 3x3 matrix that converts from our LED color space into XYZ. Taking the inverse of this matrix gives our desired conversion from XYZ into our LED color space. Finally, we multiply the XYZ pixel values by \(M_{LED \rightarrow XYZ}^{-1}\), generating an RGB value that is sent to the LED pixel. In practice, the entire process can be simplified into a single 3x3 matrix that converts sRGB into our display's RGB.
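
A sketch of how that matrix can be assembled from the measured spectra, assuming the SPDs and the CIE 1931 color matching functions are sampled on the same 4 nm wavelength grid (array names are illustrative, not from our code):

    import numpy as np

    def led_to_xyz_matrix(spd, cmf, d_lambda=4.0):
        """3x3 matrix mapping linear LED RGB to XYZ.

        spd: (n_wavelengths, 3) measured SPDs of the red, green, and blue LEDs
        cmf: (n_wavelengths, 3) CIE 1931 color matching functions (x-bar, y-bar, z-bar)
        d_lambda: wavelength sampling step, 4 nm in our measurements
        """
        return d_lambda * cmf.T @ spd

    def xyz_to_led_matrix(spd, cmf, d_lambda=4.0):
        """Inverse mapping: XYZ into the display's RGB space."""
        return np.linalg.inv(led_to_xyz_matrix(spd, cmf, d_lambda))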

Gamut Mapping

Early on in our research, we were concerned that our display might not be able to reproduce all of the colors in a given color space. We therefore investigated techniques for compressing color gamuts, fully expecting to include such a step in our mathematical operations. When measuring the SPD of our LEDs, however, we were happy to discover that our display was capable of producing a much wider color gamut than most commercially available monitors, rendering this step unnecessary.

Results

A simulated image of the raw RGB values sent to the display to render a Macbeth color chart

After combining the luminance correction with the color space conversion described above, we were able to dramatically improve the color rendering of our display.

Color Correction Pipeline

Our final process for color correction is as follows:

  1. Convert captured RGB values from the sRGB monitor into linear RGB
  2. Correct per-pixel variance using the dSLR-generated look-up table
  3. Convert the clamped values into the XYZ color space
  4. Convert from XYZ into the RGB color space of our LEDs
  5. Send the RGB values to the Teensy boards for distribution
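
Tying the five steps together, a minimal per-pixel sketch of this pipeline (the serial distribution in step 5 is omitted; the XYZ-to-LED matrix comes from the SPD measurements described earlier, and names are illustrative):

    import numpy as np

    # Standard sRGB (D65) linear RGB -> XYZ matrix
    M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                              [0.2126, 0.7152, 0.0722],
                              [0.0193, 0.1192, 0.9505]])

    def srgb_to_linear(c):
        c = np.asarray(c, dtype=float)
        return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

    def correct_pixel(srgb, coeff, m_xyz_to_led):
        """srgb: captured value in [0, 1]; coeff: this pixel's look-up table entry;
        m_xyz_to_led: 3x3 conversion built from the measured SPDs."""
        linear = srgb_to_linear(srgb)              # step 1
        clamped = np.asarray(coeff) * linear       # step 2: per-pixel luminance correction
        xyz = M_SRGB_TO_XYZ @ clamped              # step 3
        led_rgb = m_xyz_to_led @ xyz               # step 4
        return np.clip(led_rgb, 0.0, 1.0)          # scaled to 0-255 before it is sent (step 5)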

Below is a side by side comparison of two panels, one without color correction (left) and one with (right).

Conclusions

Overall, we discovered that it is possible to build an LED video wall for significantly less than commercial alternatives. Through our experience, we have realized just how crucial accurate color rendering is both from a technical and artistic perspective. We have certainly increased both the credibility and the usefulness of our product by taking measures to ensure that colors on a computer monitor match what will eventually be sent to our display. We would like to thank Joyce Farrell, Pat Hanrahan, Michael Ramsuar, and Brian Wandell for their support of this project.

Appendix A

Appendix B

This project was completed jointly by Stephen Hitchcock and Matt Lathrop. Almost all work was done jointly, and there was no clear division of the work done for this class.