JeremyBregman

From Psych 221 Image Systems Engineering

Introduction

Coral reefs are complex underwater ecosystems; they are essential to maintaining Earth's balance and biodiversity due to the delicate interaction of hundreds of thousands of species. Despite their inaccessibility, humans benefit from reefs both directly and indirectly: as a source for cancer-treating drug development, as the backbone of tourism- or fishing-based economies, and as necessary coastal protection from waves and erosion. Underwater pressure and the lack of a continual air supply render in-person acquisition of data extremely difficult. As such, scientists rely upon a number of spectral-based remote sensing techniques to derive geomorphic, species-specific, or water quality information from images and spectroscopy methods. Spectral information is critical for analyzing the continuing health of coral reefs and society's potentially unknown effects on the surrounding environment. Coral bleaching, the mass loss of the symbiotic zooxanthellae that inhabit the coral, can be triggered by fractional changes in the underwater environment. This project investigates the depth-dependent attenuation of light by common tap water and its effect on determining color information underwater. These methods can be extrapolated to measuring the components and quality of seawater by determining how light is attenuated at different wavelengths.

Background

The propagation of photons through a medium involves matter interactions that result in one of three energy-transfer outcomes: transmission, scattering, or absorption. Collisions with particles in the water redirect the electromagnetic wave propagation in either an elastic or inelastic fashion. Two special cases of scattering result in reverse or forward propagation – forward scattering is effectively transmission. Backscattering at the detector, however, leads to signal detection from photons that never reached the designated target – adding extra flux to a measurement – and must be considered for accurate models of the physical process. If a photon is not scattered in any direction, then it is absorbed by the molecules of the medium and its energy is transferred or changes form. Considering infinitely thin slices of material and representing the interactions as a differential equation [Eq. 1] leads to the exponential attenuation model for light propagation, governed by a wavelength-dependent attenuation coefficient [Eq. 2].
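Since Eq. 1 and Eq. 2 are referenced but not reproduced in the text, the derivation can be sketched as follows (the symbols I, K, and d are assumed notation, chosen to be consistent with Eq. 5 later in the report):

```latex
% [Eq. 1] Loss across an infinitely thin slice dz of the medium:
\frac{dI(\lambda, z)}{dz} = -K(\lambda)\, I(\lambda, z)
% [Eq. 2] Integrating from the surface to depth d gives exponential attenuation:
I_d(\lambda) = I_0(\lambda)\, e^{-K(\lambda)\, d}
```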

Measuring light attenuation and water quality dates back to the early 1930s. A rudimentary photometer was used to take measurements of scattered light through an eyeport of a submarine, validating the linear relationship of log(intensity) to underwater depth. Modern technology has allowed for the improvement of light sources and spectrophotometers. Sensors measuring spectral irradiance are submerged, or placed on boats, aircraft, or satellites. These rely upon hyperspectral, multispectral, or radiometric imaging techniques to determine properties of the water such as salinity, algae content, or sedimentation levels.

A modern understanding of spectral light attenuation is also critical for color correction of digital images taken underwater. Beer's Law is employed, yet in terms of an apparent optical property – the diffuse attenuation coefficient – which depends on the structure of the ambient light field. The diffuse attenuation coefficient is a depth-dependent parameter, defined in terms of the ambient downwelling irradiance of photons heading in all downward directions rather than from a collimated light beam. The exponential attenuation law is redefined with respect to an integral of depth-dependent coefficients, and can be similarly employed by using an average coefficient value for the depth in question. In underwater images, attenuation results in false colors – certain algorithms can be used to recreate the image as if it were taken at a different depth, or even in air. Others focus more directly on removing the effects of scattering from images taken in turbid environments, resulting in clear images.
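In symbols (assumed notation, since the referenced equations appear only as figures), the downwelling irradiance E_d at depth z follows:

```latex
E_d(\lambda, z) = E_d(\lambda, 0)\,
  \exp\!\Big(-\!\int_0^{z} K_d(\lambda, z')\, dz'\Big)
  \approx E_d(\lambda, 0)\, e^{-\overline{K}_d(\lambda)\, z}
```

where the approximation uses an average coefficient over the depth in question, as described above.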

Methods

Image formation is described as a simple linear model [Eq. 3], relating the sensor value m to the imaging condition. I0(λ) describes the illuminant, r(λ) describes the spectral reflectance of the scene, s(λ) characterizes the spectral responsivity of the sensor, and g represents all of the imaging settings that effectively influence gain – the conversion from photons to a RAW R, G, B image value per pixel [0-255]. Some of these parameters include exposure duration, f-number, and pixel size.
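The model in Eq. 3 amounts to a gain-scaled sum over wavelength; a minimal Python sketch (all spectra below are illustrative placeholders for the measured illuminant, reflectance, and responsivity, not the project's calibration data):

```python
import numpy as np

wavelengths = np.arange(400, 705, 5)   # 400-700 nm every 5 nm (61 samples)
dlam = 5.0                             # wavelength spacing, nm

I0 = np.ones(wavelengths.size)                  # illuminant I0(lambda), flat
r = np.full(wavelengths.size, 0.5)              # gray target reflectance r(lambda)
s = np.exp(-((wavelengths - 550) / 50.0) ** 2)  # one channel responsivity s(lambda)
g = 1.0                                         # lumped gain setting

# [Eq. 3] sensor value for one channel: m = g * sum_lambda I0 * r * s * dlam
m = g * np.sum(I0 * r * s) * dlam
print(m)
```

Exposure duration, f-number, and pixel size all fold into the single scalar g, which is why the air measurements below can be used to calibrate it.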

In this experiment, a Photo Research SpectraScan PR715 was used to measure the spectral radiance from the target area in terms of W/sr/nm/m^2. RGB images were taken using the Intel RealSense R200 camera, an RGBD sensor [Fig. 2] expected to be integrated with many low-cost electronic systems. The RS camera was also controlled by a Matlab script, capable of setting the shutter time and gain and returning R, G, B values at each pixel ranging from 0-255.

Illumination condition and geometry are of great importance to these measurements. As the most ideal case achievable in the experiment, tungsten studio lights with diffusing umbrellas were used to generate an illumination spectrum. Not used in this experiment is an LED array of 7 different colors ranging from 380-700nm. The intensity and color of the illumination from the LEDs are also controlled by the MATLAB script and can be used in future experiments to investigate the color-dependent effects of the illumination condition on image quality and attenuation coefficient estimates.

Spectral reflectance is a wavelength-dependent measurement of a material's effectiveness at reflecting radiant energy. It is defined as the ratio of the radiant flux reflected by a surface to the radiant flux received by that surface [Eq. 4]. Therefore, to determine the surface reflectance spectrum of different colors, we can use a radiance measurement device – a spectrophotometer – to measure both the target and a known white reflector. It is assumed that the white reflector has a surface reflectance spectrum of 1, so an SPM measurement of this target actually yields a measure of the illumination condition – the radiant flux received by the target. Dividing a target radiance spectrum by that of the white reflector results in the surface reflectance spectrum per target.
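The division can be sketched as follows (the two radiance arrays are synthetic stand-ins for SPM measurements taken under the same illumination):

```python
import numpy as np

wavelengths = np.arange(400, 705, 5)

# Placeholder radiance spectra in W/sr/nm/m^2: a colored target and the
# white reflector, both measured under the same illuminant.
target_radiance = 0.4 * np.exp(-((wavelengths - 600) / 60.0) ** 2)
white_radiance = np.full(wavelengths.shape, 0.8)

# Assuming the white reflector has reflectance ~1, its radiance measures the
# illumination, so the ratio gives the target's surface reflectance [Eq. 4].
reflectance = target_radiance / white_radiance
print(reflectance.max())
```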

The target is a waterproof version of the DKC-Pro Multifunction Color Chart from DGK Color Tools. The target was placed at the bottom of the fishtank, which was filled with variable depths of tap water [Fig 3]. The SPM and the RS camera were used to take radiance spectra and RGB images of the targets from above, while illuminated with the tungsten studio lamps. The tungsten studio lamps and the shutter time on the RS camera were chosen in order to limit the number of saturated pixels/target squares. In the current setup, only the yellow and white squares are saturated given a shutter setting of "22". Increasing the shutter setting increases the exposure duration, and hence the average values of R, G, and B taken from the RS camera. The set-up has been optimized to have the SPM placed in a position as normal as possible to the bottom of the fishtank given the limitation of the tripod mount posts. When taking an image, the RS camera was duct-taped to the SPM front surface. In this fashion, SPM measurements come close to representing the photons received by the RS camera.

Results

It is important to note that the illumination is spatially non-uniform. To mitigate the effects of this spatial nonuniformity, the illumination at each square position was measured using the white reflector target and used in the per-square R, G, B calculations with the linear model presented above. The responsivities of each channel were derived in a previously calibrated experiment run by Henryk B [Fig 4] and represent the probability that a photon of a particular wavelength is turned into an electron by the sensor.

In the air environment, the square (1,2) was exposed to the largest spectral irradiance – as shown by both the grayscale map [Fig 5] and per-square spectral comparison [Fig 6]. In the spectral comparison, the 2nd column is plotted, indicating (1,2) as the max illumination spectrum. This spatially varying illuminant is an indication of a non-ideality in the system. This light source is also not collimated; non-idealities continue to build up, adding to the difference between predicted and experimental values.

The spectral reflectance curves for each square are necessary for validating the linear model for predicting image formation in an ambient air environment [Fig 8]. Since the attenuation coefficient in air is assumed to be 0, the m value of the RS camera can be approximated as the integral of the spectral reflectance * the illuminant spectrum * the responsivity of the camera. This predicted value can be compared to the experimental value in order to determine how well the linear model represents image formation in air [Fig 7]. The mPredicted value comes from the linear model described above, and the mMeasured value comes from RS images taken of the Color Checker. When graphing mMeasured vs. mPredicted, the slope = g should be = 1, indicating the validity of the linear image formation model in air.

Since photon energy is wavelength-dependent (a fundamental concept of quantum mechanics), the spectral radiance in W/sr/nm/m^2 must first be converted to a measure of the number of photons using the Matlab ISET command "Energy2Quanta." In measuring the 'm' value for the R, G, B channels, the value was averaged over a square area in the center of each checker square. This averages out any noise in the measurement due to the Poisson arrival statistics of photons at each pixel, or even per-pixel variability in the camera itself. Illustrated below is the variability in pixel value within a given square, comparing square (1,4) to square (1,5) [Fig 9a+b]. Overall, for the Red channel across this set of pixels, the average R value is 170 in (1,5) as compared to 41 in (1,4), indicating a higher signal value. As such, the SNR of a measurement taken in square (1,5) will be greater than in (1,4). This is indicated by both the amount and extent of variability in the normalized pixel value in the two graphs below, in which each square's average value is normalized to 1. Since the (1,5) square has a higher SNR, the variability is reduced to 0.9-1.1 as compared to 0.85-1.15. In addition, the number of pixels equal to the average value is higher, indicated by the larger number of gray pixel values in (1,5) vs. the number of outliers. This comparison is helpful in demonstrating that a better linear fit (slope closer to 1) will be generated by prioritizing squares that have a high SNR. A linear fit will fail for pixels demonstrating a lot of noise. Squares that saturate at the maximum value of 255 should be disregarded in this comparison, as the m-value misrepresents the number of photons incident upon the detector channel.
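The Energy2Quanta conversion amounts to dividing spectral energy by the per-photon energy hc/λ; a minimal stand-in (not the ISET implementation itself, which also handles unit scaling differently):

```python
import numpy as np

H = 6.626e-34  # Planck's constant, J*s
C = 2.998e8    # speed of light, m/s

def energy_to_quanta(wavelengths_nm, energy):
    """Convert spectral energy to photon counts: each photon carries
    E = h*c/lambda joules, so photons = energy * lambda / (h*c)."""
    lam_m = np.asarray(wavelengths_nm) * 1e-9
    return np.asarray(energy, dtype=float) * lam_m / (H * C)

wavelengths = np.arange(400, 705, 5)
radiance = np.ones(wavelengths.size)          # flat energy spectrum
photons = energy_to_quanta(wavelengths, radiance)
# Longer wavelengths carry less energy per photon, so for equal energy
# the red end of the spectrum corresponds to more photons than the blue end.
print(photons[-1] > photons[0])
```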

Performing a linear regression on the data for all color squares within each channel individually results in the above three fits [Fig 10a,b,c]. They all have R^2 values ~1, indicating goodness-of-fit, and the slopes are all nearly 1, suggesting that the linear model can in fact be used to predict the m-values of the RGB camera. This is the first step towards validating that the linear image formation model can then be used to model situations where alpha is not equal to 0. Gain values closer to 1 can be achieved by using only high-SNR squares; however, this comes at the expense of the number of points within the sample set used to generate the linear fit.
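The per-channel fit can be sketched with synthetic data (the real mMeasured and mPredicted values come from the RS images and the linear model; the noise level here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the 18 color squares: predicted m-values from the linear
# model, and measured values with small sensor noise, true gain g = 1.
m_predicted = rng.uniform(20, 240, size=18)
m_measured = m_predicted + rng.normal(0, 2, size=18)

# Least-squares line: a slope near 1 validates the linear model.
slope, intercept = np.polyfit(m_predicted, m_measured, 1)
r = np.corrcoef(m_predicted, m_measured)[0, 1]
print(round(slope, 2), round(r ** 2, 3))
```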

The next step in validating this image formation model is to observe experimentally how a specific medium attenuates the intensity of light, and to compare that wavelength-dependent attenuation to what is expected given a known attenuation spectrum of a common liquid. Here, tap water is chosen, due to its low amount of scatterers and its ready availability in the laboratory setup. Since seawater is fundamentally comprised of water plus a small percentage of other species responsible for attenuating or scattering light (plankton, chromophoric dissolved organic matter), tap water should, to first order, demonstrate the same attenuation trend. For reference to the literature, an example attenuation spectrum from Hale et al. is plotted, demonstrating absorption over the visible wavelength region spanning two orders of magnitude between 400 and 700nm [Fig 11].

First, the fishtank was filled with 5cm of water, and the spectral radiance at each color square was measured in the same illumination condition. All radiance measurements of the submerged white reflector target are less than their counterparts in air. This is the first successful measurement qualifying the statement "propagation through water attenuates the intensity of light," as both of the dotted curves – the radiance from a white color target – are reduced (both in terms of power and incident photons) [Fig 12]. Interestingly enough, they are not reduced by the same amount. The location of highest radiance from the white reflectance target now comes from position (1,6) rather than (1,2) [Fig. 13]. This redistribution of the spatial illumination is perhaps due to differences in optical path length underwater. Since the light sources are not collimated and the rays are hitting the surface at an angle, scattering or reflection from the top surface of the water may be causing this non-ideality.

The SPM was then used to build upon the statement "propagation through water attenuates light." The exponential decay model for scattering/absorption suggests that exponentially fewer photons (or, equivalently, less power) should hit the sensor as the depth of the target increases. In order to validate this, the fish tank was filled with different amounts of water, and the SPM captured the spectral radiance of the white reflector kept in the same position. The resulting data display contradictory trends: measurements taken at depths between 0 and 10cm [Fig 14] display a decreased spectral response across the entire visible band, resulting in a maximum 50% attenuation at 10cm. After 10cm, however, increasing the depth of the target by adding more water into the fish tank raised the incident number of photons on the detector across the entire visible band. Nearing the height of the fishtank, a depth of 18cm resulted in only a 36% attenuation as compared to in air [Fig 15]. It is clear that this is the result of inconsistencies in the set-up of the measurement – adding more water should only absorb more photons, due to a larger number of interaction events along an increased optical path through an absorptive medium.

To this effect, the sample data set analyzed is confined to measurements below 10cm, yet the full data is still presented above to cast a large shadow of uncertainty on using the values delivered by the spectrophotometer as validly representing the desired effect. Nevertheless, measured values for a single wavelength can be plotted with respect to depth by drawing a vertical line through the curves in Fig. 14a and using the intersection points. Applying a first-order exponential curve fit to the results at 700nm yields a positive attenuation coefficient, a successful indication that the data at least suggest some underlying exponential behavior [Fig 15]. This process of curve fitting general-form exponentials a*exp(b*depth) using Matlab's Curve Fitting Toolbox could be repeated for every wavelength's data set, resulting in an overall spectral attenuation curve for the liquid.
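For a single wavelength, the depth series can be fit without a curve-fitting toolbox by exploiting the fact that an exponential is linear in the natural-log domain (a Python sketch with synthetic depth data; the real values come from the intersection points in Fig. 14a):

```python
import numpy as np

# Synthetic 700 nm radiance vs. depth (cm), generated as a*exp(b*depth)
# with a = 1 and b = -0.07; real data would come from the SPM curves.
depths = np.array([0.0, 1, 2, 4, 6, 10])
radiance = 1.0 * np.exp(-0.07 * depths)

# ln(radiance) = ln(a) + b*depth, so a straight-line fit recovers a and b;
# a negative b corresponds to a positive attenuation coefficient.
b_fit, log_a = np.polyfit(depths, np.log(radiance), 1)
a_fit = np.exp(log_a)
print(round(a_fit, 3), round(b_fit, 3))
```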

The spectral attenuation coefficient can also be calculated by isolating K in Eq. 2: K = -ln(I_d / I_0) / (2d) [Eq. 5]

A factor of two arises from a double pass through the water column before hitting the detector (downwards + upwards). If the assumption is made that the attenuation in clear tap water is not depth-dependent as it is in the sea, then the attenuation spectrum calculated from Eq. 5 should be the same irrespective of the depth of the target. Using the SPM results from 0-10cm, the calculated attenuation spectra have been plotted in Fig 16. It is expected that all of these spectra should overlap, since the effects of changing Id and d should cancel each other out; an exponential function is linear in the natural-log domain. However, this is not what the experimental data indicate in the figure. The calculated attenuation coefficient spectrum is translated down the y-axis as it is computed from spectra at increasing depth, another strong indication that the data deviate from an exponential relationship.
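The expected cancellation can be illustrated with ideal data (synthetic Id values; the real ones come from the SPM measurements in Fig 14):

```python
import numpy as np

I0 = 1.0                                # radiance measured in air
depths = np.array([1.0, 2, 4, 6, 10])   # target depths, cm
K_true = 0.035                          # per cm, constant with depth

# For ideal exponential data with a round trip, Id = I0 * exp(-2*K*d) ...
Id = I0 * np.exp(-2 * K_true * depths)

# ... so Eq. 5 recovers the same K at every depth; any depth-dependent
# offset in the recovered K indicates non-exponential data.
K_est = -np.log(Id / I0) / (2 * depths)
print(np.allclose(K_est, K_true))
```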


The effects of spectral attenuation, due to both scattering and absorption, can be seen visually in images taken with the RS camera of a submerged target [Fig 17a+b]. In the same illumination conditions, with the same camera gain and shutter value, the color squares on the DGK Kustom Balance Matrix already show altered R, G, and B values. The difference in skin tones in (1,1) and (1,2) is less pronounced underwater. Overall the colors seem less vibrant – this is especially pronounced in the yellow of (1,1). In the water image, surface reflections and scattering particles can be seen in the water. Examining the difference in the average pixel intensity of each color should reveal some information about the spectral attenuation coefficient, weighted by the responsivity of the camera.

The following colormaps of the pixel intensity values of square (2,3) [Fig 18a-f] display some interesting features. In water, all of the R, G, B values are reduced, indicating that less signal has reached the detector – another qualification of water's ability to attenuate light. Now, this has been demonstrated by both a calibrated scientific device – the SPM – and a common household RGB camera. The scatterer (dark spot on the RS image above square (2,3)) is detected as a 20%-25% lower pixel value than the average reading in the Green and Blue channels and goes undetected in the Red channel, suggesting that the scattering particle has its own wavelength-dependent attenuation spectrum. There are also lower-SNR readings in the water image, as the spread of the R, G, and B values grows from +/- 10 to +/- 20 per channel for the same color square. This is where color-correction algorithms would come into play – changing the imaged R, G, and B values based on the perceived underwater depth of the target.

It would be useful to evaluate the linear image formation model's ability to predict the R, G, and B values of a known reflecting color target underwater, given the depth of the target and a known attenuation spectrum. Deviations of the result from the value measured with the RS camera could be attributed to non-idealities in the attenuation spectrum or to the presence of scatterers in the liquid itself. In order to conduct such an experiment, the absolute absorbance was measured using a Jasco UV-Vis Spectrophotometer in the SNF-Shared Facilities nSiL Lab. Since the absorbance is defined as log10(I0/Id), it can be linearly related to the attenuation coefficient by A = 0.4343*K*depth. There must be something wrong with the machine, since the attenuation coefficient measured by the Jasco exhibits not only the exact opposite trend from the literature (downsloping instead of upsloping), but also fails to demonstrate a 1-2 order of magnitude difference between 400 and 700nm – not even a factor of 2 [Fig 19]. At first it was believed that the Jasco might have been flipping one of the datasets during read-out, but since the measurement does not span the expected range, the erroneous data is attributed to a malfunctioning system. Therefore, it is suggested to use the experimental values from the literature, published by Pope and Fry, who measured the absorption spectrum of pure water using an integrating cavity [Fig 20]. Renormalizing the predicted values between [0 and 1] and plotting them against the experimental values should result in a linear slope of 1 – deviations from this slope will give a sense of how accurately the linear model, which only includes attenuation effects at the moment, can predict the RGB values of underwater targets.
Coincidentally, the plotted attenuation coefficient seems to exactly overlay that observed by the SPM from a 1cm depth in water; however, this is believed to be simply a coincidence, since the SPM does not indicate readings in accordance with basic physical principles.
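The absorbance-to-attenuation conversion can be sketched as follows (0.4343 is log10(e), so multiplying the base-10 absorbance by ln(10) recovers the natural-log coefficient K):

```python
import numpy as np

def absorbance_to_K(absorbance, depth_cm):
    """Convert a base-10 absorbance A = log10(I0/Id) into the natural-log
    attenuation coefficient K, inverting A = 0.4343 * K * depth."""
    return np.asarray(absorbance, dtype=float) * np.log(10) / depth_cm

# Illustrative round trip: K = 0.1 /cm over a 1 cm path gives
# A = 0.4343 * 0.1, and the conversion recovers K = 0.1 /cm.
A = 0.1 * np.log10(np.e) * 1.0
K = absorbance_to_K(A, 1.0)
print(float(K))
```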

Conclusions

Before sitting down in the laboratory to conduct any experiment, a scientist should first ask: "What fundamental forces affect what I am trying to do here?" Having understood the underlying components of the experiment, the scientist should then make sure those fundamental forces are well understood and well characterized before moving on to higher aspirations. This project is a tribute to that very check: it originally began as a plan to estimate the spectral attenuation coefficient of an unknown liquid, based upon a convex optimization of the differences between measured and predicted values from the Intel RS camera. As more and more problems were realized, it became apparent that some fundamental properties of the system – even the attenuation itself – cannot be taken for granted. Reducing the project to measuring the spectral attenuation coefficient of a known liquid with a known reflectance target still proved to be no easy task. Although this project successfully demonstrates the predicted loss of light due to propagation through a liquid medium, the experimental data only vaguely resemble an exponential trend and the derived attenuation spectrum does not match the literature. It presents some of the problems associated with a simplified model, and how experimental data can quickly deviate from an assumed result. Despite this, the project presents the beginning steps towards creating a calibrated set-up and measuring some of the effects of inherent optical properties on the image formation of underwater targets.

Future Work

The first suggestion is to drastically improve the set-up of the experiment. First, a spatially uniform, collimated light source should be used. It might be necessary to model the light path in order to understand how much light is going where before it hits the surface. The sensor – be it the SPM or the RS camera – should then be placed normal to the target. Next, a new fishtank should be designed or built that is more properly suited for depth measurements. The tank should consist of a tube or a thin rectangular prism at least a few feet tall that can hold a Color Checker target at the bottom, with a drainage spigot. The walls should be made out of acrylic or glass, with sealant on the inside edges to make the tank waterproof. The outside walls can be covered in black absorbing material. A further improvement would be to use a waterproof SPM and to waterproof the RealSense camera. Since the camera has an extremely small form factor, it would not be difficult to create a similarly shaped rectangular prism out of acrylic that could house the camera during the experiment. More robust housings can be modeled on current underwater imager designs. Once the experimental set-up is in place, reliable data can then be taken. This reliable data can be used to evaluate whether the linear image formation model can predict the R, G, B values of a camera sensor at a specific depth. If it cannot, then some modelling of the medium's scattering properties is required.

All of the data acquisition in this experiment represents the necessary measurements required to use a linear model to estimate the spectral attenuation of an unknown liquid. This was the original goal of the project, but it can only be seriously considered once the basic properties of water attenuation and camera modeling have been validated. The code for a basic Matlab script – estimateAttn.m – is presented in the Appendix with some helpful comments explaining how to use it. Measuring the attenuation coefficient this way is extremely attractive, as it provides an alternative to expensive spectrophotometer devices using only a cheap, small-form-factor Intel RealSense R200 camera.

Appendix

Important files + Matlab Code Descriptions:

acquireRealSenseR200LED

[ Color, Depth, nativeDepth, shutterVal, gainVal ] = acquireRealSenseR200LED( leds, intensity, shutter, gain)

This program is responsible for taking an image with the RS R200 RGB camera. The first input variable determines which LEDs to turn on, and the second input variable indicates their intensity values on a range from 0-255. The shutter value determines the exposure time of the image, and the gain is the camera gain. A sample set of input variables is ([1:7], 255*ones(1,7), 22, 1).

The important result is saved into the Color variable, which is a 4-dimensional matrix of intensity values indexed as (Rows, Columns, Channel, LED). A common command would be imagesc(Color(:,:,:,1)), which plots the full image from the RS camera. An individual channel image would be plotted with imagesc(Color(:,:,1,1)). Intensity values are saved as 8-bit values between 0 and 255.

The script uses Matlab to control an Arduino (which in turn controls the LEDS) and the RS camera through COM 3 and COM 8 open ports.

AllSp.mat

This is a <173x10> matrix that contains the measurements of the SPM taken from white reflector target at variables depths. Depths = 0, 1, 2, 4, 6, 10, 12, 14, 16, 18 cm.

initLEDcontroller.m

An initialize function that sets some properties of the LEDs

measure.m

A sample script that contains commands on how to use the PR715 SPM in Matlab

offLED.m

A small function that turns the LEDs off after taking an image

pr715init.m

Requires an input id of the port number used. This value is set to 8 for the PR715 SPM. It is essential to call this initialization function after the device turns on and completes its detector calibration test.

pr715spectrum.m

[data, wavelengths, peak] = pr715spectrum(port) makes one measurement with the PR715 spectrophotometer. It returns the spectrum, the wavelengths at which the spectrum was sampled, and the wavelength sample at which the peak of the SPD occurs. It is responsible for talking to the device after it has been initialized and taking a spectral measurement.

RealSensR200sensor.mat

This contains the wavelengths and the data for the spectral responsivities of the R, G, and B channels, saved in a <61x3> matrix. Wavelengths are sampled every 5nm between 400 and 700nm

ReflDataAir.mat

This contains all of the radiance measurements from each of the 18 color checker squares on the DGK Kustom Balance Card. The 19th column contains the sampled wavelengths.

ReflDataAirWhiteCircle.mat

This contains all the radiance measurements from the white reflector target at the 18 color checker squares. Assuming a 100% reflectivity of the reflector target, this can be used as a measure of the irradiance at each color square. The 19th column contains the sampled wavelengths.


ReflDataWater5cm.mat

This contains the same information as ReflDataAir.mat, except now with the DGK Card submerged in 5cm of tap water.

ReflDataWaterWhiteCircle5cm.mat

This contains the same information as ReflDataAirWhiteCircle.mat, except now with the white reflector target submerged in 5cm of tap water.

RSAirChart.mat

This has saved the Color output matrix of the acquireRealSenseR200LED.m function for a target in air. Since there is only one illumination condition – tungsten studio lamps – the matrix is reduced to 3 dimensions of 8-bit values.

RSWaterTarget5cm.mat

This has saved the Color output matrix of the acquireRealSenseR200LED.m function for a target submerged in 5cm of water. Since there is only one illumination condition – tungsten studio lamps – the matrix is reduced to 3 dimensions of 8-bit values.

setLED.m

A short function used to set the intensity of a LED for a given illumination condition.

SimpleRealSEnseR200.exe

This file needs to be present in the active Matlab Folder for the camera to properly take an image.

TapWater.csv

This is the absorbance output file from the JASCO UV-Vis spectrophotometer.

p221finalTrial2.m

Most of the code has been commented in-line, but this was the main workhorse for generating datasets from the IntelRS camera images that were taken.

s_example.m

This is some filler code that populates matrices with random variables in order to create random data demonstrating how the estimateAttn.m function works. The most important part of this script is this line:

[ attnEst, wghtsEst ] = estimateAttn( measVals, gainVals, C, refl, ill, B );

Specifically, understand what each of the inputs and outputs represents.

EstimateAttn.m

(For implementation, see the line above.) Example case: 3 channels (RGB filters), 7 illumination conditions (LEDs), 24 Color Checker squares (Macbeth), 61 wavelength values (visible light), 5 spectral basis components (to be explained).

"For each channel, for each illumination condition, for each color square, there is a value."

measVals <3x7x24> – this should come from the output of the RS camera Matlab function. These are the raw RGB values from the camera.

gainVals <3x7x24> – this is defined as the ratio between the measured and predicted value in air. It is responsible, in terms of units, for the conversion from spectral radiance to electrons representing pixel intensity: g = m / (Σ_λ resp(λ) * I_0(λ) * Δλ)

C <61x3> – this is the responsivity curve for each channel.

refl <61x24> – this is the surface reflection spectrum for each target sample.

B <61x5> – this is the Spectral Basis Matrix.

Spectral basis matrices have to do with large data sets. It is easy enough to plot data of one, two, or even three dimensions and visualize it on your computer – but what about higher-dimensional information? Principal Component Analysis is a statistical method applied to data sets in order to reduce the number of variables necessary to represent the set – the basis functions. These functions are orthogonal directions capturing the largest variance in the data. They are weighted by their variances; a higher variance of a basis vector indicates that it is a stronger component in representing the original data. The estimation algorithm assumes that it is possible to represent light and its attenuation by a small number of basis functions – 3, 4, or 5 – after which the variances drop off to values close to 0. In this case, the basis functions are the result of statistically analyzing the attenuation curves of the unknown liquid's components. As an example, using colored dyes in water generates differently shaped spectral attenuation curves – these 5 samples can be measured in a well-calibrated scientific device, such as a working UV-Vis spectrophotometer, in order to create 5 spectral attenuation coefficient curves. Performing Matlab's pca command on the data returns the basis function matrix required for the script.

Let matrix A represent your data, where every column of A denotes a wavelength (say n columns, n=61) and every row of A a single spectral measurement (say m rows, m=5). In Matlab run: [B, ~, stdDevs] = pca(A,'Centered',false); The matrix B contains all the principal vectors of the dataset A and will have size n by min(m,n). You pick the first k columns of B as follows: basis = B(:,1:k). Choose k on the basis of the entries in the stdDevs vector: look for the index at which the entry becomes small relative to the first entry. The value of this index gives you k.
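The same basis construction can be sketched in Python, using an uncentered SVD in place of Matlab's pca (the attenuation curves below are random mixtures of two assumed spectral shapes, standing in for measured dye data):

```python
import numpy as np

rng = np.random.default_rng(1)

# m = 5 spectral measurements, n = 61 wavelengths; rows are attenuation
# curves built as mixtures of two underlying shapes plus small noise.
wavelengths = np.linspace(400, 700, 61)
shape1 = np.exp(-((wavelengths - 450) / 60.0) ** 2)
shape2 = np.exp(-((wavelengths - 620) / 60.0) ** 2)
A = rng.uniform(0, 1, (5, 2)) @ np.vstack([shape1, shape2])
A += rng.normal(0, 1e-3, A.shape)

# Uncentered PCA via SVD: columns of Vt.T are the principal spectral
# vectors, and the singular values play the role of Matlab's stdDevs.
U, sing, Vt = np.linalg.svd(A, full_matrices=False)
basis = Vt.T[:, :2]   # keep k = 2 components: a <61x2> basis matrix
print(basis.shape)
```

Here the third singular value is tiny relative to the first, which is exactly the drop-off used to choose k.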

All the values are now described. In the ideal setup, where the data used are actually representative of a physical process, this estimateAttn.m script will output an estimated wavelength-dependent attenuation coefficient. The liquid is unknown, but everything else is known: the reflectance, the depth, the illuminant, the detector responsivity, and the resulting image value from the RAW camera file. The algorithm is based on the idea of linearizing the exponent around the current weight estimate using a Taylor series approximation. A quadratic program is solved to obtain the next weight estimate, and this is repeated until convergence. Specifically, the convex optimization minimizes a quasiconvex function – the norm of the difference between the linear-model predicted value and the measured value – to find the estimated weights for the spectral basis matrix in order to back out the attenuation coefficient. The amount of attenuation is given by basis*weights.
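The iteration can be illustrated with a simplified stand-in: instead of the script's quadratic program, a plain Gauss-Newton step (linearize the exponential, solve a least-squares problem, repeat) on fully synthetic data. All spectra, sizes, and the two-component basis below are assumptions for illustration, not the project's calibration data:

```python
import numpy as np

rng = np.random.default_rng(2)
lam = np.arange(400, 705, 5).astype(float)   # 61 wavelength samples

# Illustrative stand-ins for the known quantities: channel responsivities,
# illuminant, target reflectances, attenuation basis, depth, and gain.
resp = np.stack([np.exp(-((lam - mu) / 40.0) ** 2) for mu in (460, 540, 610)])
I0 = np.ones(lam.size)
refl = rng.uniform(0.1, 0.9, (6, lam.size))          # 6 color squares
basis = 0.05 * np.stack([np.exp(-((lam - mu) / 60.0) ** 2)
                         for mu in (440, 640)]).T     # <61x2> basis B
d, g = 5.0, 1.0                                       # depth (cm) and gain

def forward(w):
    # Predicted values m = g * sum_lambda resp * I0 * r * exp(-2*d*(B @ w))
    atten = np.exp(-2 * d * (basis @ w))
    return g * (resp * (I0 * atten)) @ refl.T         # <3x6>

w_true = np.array([0.6, 0.4])
meas = forward(w_true)            # noiseless synthetic measurements

# Gauss-Newton: linearize the exponent around the current weight estimate
# and solve the resulting least-squares step until convergence.
w = np.zeros(2)
for _ in range(20):
    atten = np.exp(-2 * d * (basis @ w))
    res = (forward(w) - meas).ravel()
    core = resp[:, None, :] * (I0 * atten) * refl[None, :, :]   # <3x6x61>
    J = (core[..., None] * (-2 * d * basis)).sum(axis=2).reshape(-1, 2)
    w = w - np.linalg.lstsq(J, res, rcond=None)[0]
print(np.round(w, 3))
```

On this clean synthetic data the iteration recovers the true weights; with real measurements, the residual norm at convergence indicates how well the attenuation-only model explains the camera values.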

Given that this algorithm works mathematically for estimating the attenuation function, what remains is making sure the experimental set-up is taking good data. If good data can be measured, then the program can be used to evaluate how well the code predicts the attenuation coefficient, before bringing the unknown liquid to the UV-Vis spectrophotometer and measuring it absolutely.


References

 I. Garbet, Light attenuation and exponential Laws. Website, January 200
 E.B. Stephenson J.O.S.A. Vol 24, August 1934
 Y.Zhang et al. Optics Express, 20, 18, August 2012
 Julia Ahlen Thesis. Universitatis Upsaliensis Uppsala (2005)
 J.W. Kaeli and H. Singh, Illumination and Attenuation Correction Techniques for Underwater Robotic Optical Imaging Platforms, IEEE Journal of Oceanic Engineering, In Review
 R. Pope, E. Fry Applied Optics, 36, 33, 1997
 H. Blasinski, J. Breneman, J. Farrel “A model for estimating spectral properties of water from RGB images” ICIP (2014)
 J. Worthey ”Color Rendering: a Calculation That Estimates Colorimetric Shifts” JAW Color Research and Application 2001