Open Source Code for Light Stage Capture Sequences

Today I’m posting updates (1/n) to the Light Stage open source project codebase.

The updates improve the integration of experimental result data and 3d geometry data with light and camera-trigger hardware controllers (3). Also included are two new lighting sequences, (1) and (2), and a way to get started no matter your stage design and target capture application (4). These changes contribute towards standardised capture sequences and integrated 3d reconstruction pipeline processing, while supporting stage design tools and retaining visualisations, measurable evaluations and optimisations at each step.

Altogether, this work takes a step towards the vision of a comprehensive open source framework for open hardware light stages; find more details at the Build a Light Stage website.

These recent updates to the LightStage-Repo on GitHub include:

  1. “Spherical gradient” lighting sequence.
  2. “Balanced lighting baseline”.
  3. Local web service (on port 8080) to return data requested by an HTTP client, such as a hardware controller with an Ethernet/WiFi module.
  4. A configuration file designed for each Light Stage, to easily get the web service responding with the correct sequence data.

“Spherical gradient” lighting sequence.

  • In Ma 2007 the authors introduced “spherical gradient” lighting sequences for photometric stereo in spherical light stages. They showed that from 3 diffuse gradient lighting sequences (x-, y- and z-axis, plus 1 albedo pattern) and 1 specular gradient sequence (x-axis) they can adequately estimate a geometry’s surface normals. The underlying idea is that spherical gradient lighting produces finer differences between captured images, because the target lighting varies at a finer granularity than coarser on/off lighting changes. For photometric stereo, this means finer resolution detail in the normal maps (texture) and in the 3d mesh geometry. A sketch of the gradient weights follows this list.
  • With access to frame geometry data and light vertices, we can assign lighting patterns to the capture sequences for calibrated photometric stereo. This gives a correspondence between each light’s intensity level and each captured image, which opens up a way to evaluate lighting patterns: we gain feedback by evaluating (e.g. A-B testing or ranking) each lighting sequence variation by its quantitative contribution to the final reconstructed 3d model (see the ranking sketch after this list).
  • A PDF file here steps through an example output of how the spherical gradient lighting rotations perform as part of a lighting sequence. Find its Jupyter notebook (.ipynb) here. Ma 2007 applied x-, y- and z-axis gradients (of light position vertices across their frame). To date, our implementation covers x-axis gradients (with natural ordering) for a configurable number of camera capture shots (or lighting rotations).
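For a concrete picture of the pattern itself, here is a minimal sketch (not the repo’s implementation) of Ma 2007-style gradient weights computed from light position vertices; the function name and the random stand-in vertices are illustrative assumptions.

```python
import numpy as np

def gradient_intensities(light_positions, axis=0):
    """Per-light intensity in [0, 1] for a spherical gradient along `axis`.

    light_positions: (N, 3) array of light vertices on the stage frame,
    normalised to unit directions so the weight depends only on each
    light's direction from the stage centre.
    """
    pos = np.asarray(light_positions, dtype=float)
    dirs = pos / np.linalg.norm(pos, axis=1, keepdims=True)
    # Ma 2007: remap the chosen coordinate from [-1, 1] to [0, 1].
    return (dirs[:, axis] + 1.0) / 2.0

# Example: the three gradients (x, y, z) plus the constant "albedo" pattern,
# using random stand-in vertices for a 44-light frame.
lights = np.random.default_rng(0).normal(size=(44, 3))
patterns = {
    "grad_x": gradient_intensities(lights, axis=0),
    "grad_y": gradient_intensities(lights, axis=1),
    "grad_z": gradient_intensities(lights, axis=2),
    "full":   np.ones(len(lights)),
}
```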
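And, purely to illustrate the A-B/ranking idea, a hypothetical sketch of the evaluation step; the sequence names and error values are invented, and in practice the metric would come from comparing each variant’s reconstruction against a reference scan.

```python
import pandas as pd

# Hypothetical per-variant reconstruction errors (all values made up).
results = pd.DataFrame({
    "sequence":  ["grad_x_natural", "grad_x_shuffled", "all_on_full"],
    "mesh_rmse": [0.42, 0.47, 0.55],   # error vs. a ground-truth scan
})
# Rank variants by quantitative contribution: lowest error first.
print(results.sort_values("mesh_rmse").reset_index(drop=True))
```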

“Balanced lighting baseline”.

  • This is a per-light brightness intensity level and a trivial way to improve target lighting for free. By tuning each light’s power output, we improve the balanced illumination of the image capture target. For non-optimally balanced light positions (or for unusually shaped targets), the tuning process can improve upon “all on full power”, countering the imbalance effects of lights being out of position. The difference is measured by a scale-independent spread measure: the coefficient of variation (CoV, standard deviation divided by mean) of the light received (before refraction/reflection) across the surface of a target sphere. A sketch of this measure follows this list.
  • For comparison, this CSV file (see column `coefficient_of_stdev`) shows this “tuned baseline” delivers an improved illumination balance for the Aber Light Stage’s 44-LED lighting set-up, compared to full power from each of those 44 light positions. The file here reports the light index numbers and their corresponding intensity values with the greatest improvement so far (a 47% reduction in imbalance as measured by CoV) over “all on full”.
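For a rough illustration of the measure (not the repo’s implementation), the sketch below estimates the CoV of light received across a target sphere, assuming point lights, inverse-square falloff and a Lambertian cosine term with no occlusion; the light positions and candidate intensities are random stand-ins.

```python
import numpy as np

def cov_of_illumination(light_positions, intensities, n_samples=2000, radius=0.1):
    """CoV (std / mean) of light received over the surface of a target sphere."""
    rng = np.random.default_rng(1)
    # Sample surface points on a sphere of the given radius at the origin.
    normals = rng.normal(size=(n_samples, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    points = radius * normals

    received = np.zeros(n_samples)
    for p, w in zip(light_positions, intensities):
        to_light = p - points                      # surface point -> light
        dist = np.linalg.norm(to_light, axis=1)
        cos = np.einsum("ij,ij->i", normals, to_light / dist[:, None])
        received += w * np.clip(cos, 0.0, None) / dist**2   # Lambert + 1/d^2

    return received.std() / received.mean()        # lower = better balance

# Compare "all on full" against some candidate per-light intensity vector
# (here a random perturbation; tuned values would come from an optimisation).
lights = np.random.default_rng(2).normal(size=(44, 3))
lights /= np.linalg.norm(lights, axis=1, keepdims=True)    # unit-radius frame
full = np.ones(44)
candidate = np.clip(full - 0.2 * np.random.default_rng(3).random(44), 0.0, 1.0)
print(cov_of_illumination(lights, full), cov_of_illumination(lights, candidate))
```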

Local web service (on port 8080) to return data requested by an HTTP client, such as a hardware controller.

  • The local web service lets us serve data, such as the lighting sequence data described above. The main advantages are guiding lighting and capture sequences, and piping data from cameras into a 3d reconstruction pipeline (e.g. photometric stereo). Access to 3d geometry data, experimental results and processing with numerical libraries (such as NumPy, SciPy and Pandas) lets us achieve this. A secondary advantage is support for using Python libraries and GUIs to validate/demonstrate before and after running capture sequences. Integrating in this way lets us transfer simulation experiment results directly to the light stage dome and feed the captured image data back to verify the results. A sketch of such an endpoint follows this list.
  • Get-started notes/articles are here; they should give an idea of how the web service works and how to start it. It’s built in and straightforward.
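For a flavour of the request/response shape, here is a minimal stand-alone sketch using Python’s standard library; the route name and JSON payload are assumptions for illustration, not the repo’s actual API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy sequence data; in practice this would come from the stage's
# configuration and the gradient/baseline computations above.
SEQUENCES = {
    "grad_x": [0.0, 0.25, 0.5, 0.75, 1.0],
    "full":   [1.0, 1.0, 1.0, 1.0, 1.0],
}

class SequenceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        name = self.path.strip("/")            # e.g. GET /grad_x
        if name in SEQUENCES:
            body = json.dumps({"sequence": name,
                               "intensities": SEQUENCES[name]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404, "unknown sequence")

if __name__ == "__main__":
    HTTPServer(("", 8080), SequenceHandler).serve_forever()
```

A hardware controller with an Ethernet/WiFi module could then poll, say, GET http://&lt;host&gt;:8080/grad_x and apply the returned per-light intensities before triggering its cameras.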

Configuration files designed for each Light Stage.

  • The LightStage-Repo application depends on getting the correct set-up for frame geometry, light mounting positions, mounting types, lighting intensities, target, and so on. Debevec et al’s LS3/5 frame (~2007-9), Dutta/York’s frame (2012), Ghosh et al’s frame at ICL (2017) and the Aber Light Stage frame (2015-19) each have different requirements.
  • A configuration file for each stage is included and can be selected with a command-line argument (examples here); a sketch of the idea follows this list.
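Purely as an illustration, a per-stage configuration and its command-line selection might look like the sketch below; the file name, keys and flag are invented, not the repo’s actual schema.

```python
import argparse
import json

# Hypothetical per-stage settings (keys and values invented).
ABER_STAGE = {
    "name": "Aber Light Stage",
    "num_lights": 44,
    "frame": "geodesic_dome",
}

with open("aber.json", "w") as f:       # write out an example config file
    json.dump(ABER_STAGE, f, indent=2)

# Select the stage at launch, e.g.:  python serve.py --config aber.json
parser = argparse.ArgumentParser()
parser.add_argument("--config", default="aber.json",
                    help="per-stage configuration file")
args = parser.parse_args()

with open(args.config) as f:
    stage = json.load(f)
print(f"Configured for {stage['name']} with {stage['num_lights']} lights")
```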

If you would like to discuss this work, or you’re interested in contributing to the project, please feel free to get in touch.


If you’re interested in reading more via occasional content/project updates, feel free to keep in touch via email or on social media @pmdscully (see footer).
