XBMC and Philips Hue ambilight, first steps

This post describes the first steps in creating an add-on for XBMC that controls your Philips Hue lights.

[Image: sample frame from the Finding Nemo trailer]

The add-on will consist of three parts:
1. Snapping pictures of the video
2. Calculating the most dominant/average color
3. Controlling the lights
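For the third part, controlling the lights goes through the Hue bridge's REST API, which expects hue on a 0-65535 scale rather than in degrees. Here is a minimal sketch; the bridge IP and API username are placeholders you would get from pairing with your own bridge, and `set_light` is a hypothetical helper, not the add-on's actual code:

```python
import json
import urllib.request

BRIDGE_IP = "192.168.1.2"   # placeholder: your bridge's address
USERNAME = "newdeveloper"   # placeholder: an authorised API username

def hue_to_api(degrees):
    """Map a 0-360 degree hue to the bridge's 0-65535 scale."""
    return int(degrees % 360 / 360.0 * 65535)

def set_light(light_id, hue_degrees, sat=254, bri=254):
    """PUT the new state to one light (sat/bri use the API's 0-254 range)."""
    body = json.dumps({
        "on": True,
        "hue": hue_to_api(hue_degrees),
        "sat": sat,
        "bri": bri,
    }).encode()
    req = urllib.request.Request(
        "http://%s/api/%s/lights/%d/state" % (BRIDGE_IP, USERNAME, light_id),
        data=body, method="PUT")
    urllib.request.urlopen(req)
```

For example, `set_light(1, 25)` would turn light 1 to the hue computed in the steps below.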

I started with calculating the average color of an image. As a sample image I used the trailer for Finding Nemo, and snapped a picture every 10 seconds.

My first, naive approach was to resize the image and iterate over every pixel. For every pixel I updated a global “red”, “green” and “blue” counter, and divided the end result by the number of pixels. This works quite well, but the average colors are a bit dull. In a similar way I calculated average HSB values. There appears to be a small bug in my code (blue pictures get an average HSB of green-ish). The first results:
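The naive averaging step can be sketched like this. It operates on a plain list of RGB tuples; in the actual add-on the pixels would come from a downscaled video screenshot:

```python
def average_color(pixels):
    """Average a list of (r, g, b) tuples channel by channel.

    This is the naive approach: sum each channel over all pixels
    and divide by the pixel count. Tends to give dull colors.
    """
    n = len(pixels)
    r = sum(p[0] for p in pixels) // n
    g = sum(p[1] for p in pixels) // n
    b = sum(p[2] for p in pixels) // n
    return (r, g, b)
```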

[Image: results of run 1, average RGB and HSB colors per frame]

In a second attempt I still iterate over every pixel, but calculate how the picture is distributed over the Hue spectrum. See the images below, where the right picture shows the Hue spectrum from 0 to 360 degrees and how many pixels the image has per Hue value.

[Image: sample frame and its Hue spectrum histogram]

For the resulting value I divide the Hue circle into 10-degree sections, and use the middle of the most occurring section (+5 degrees). So in the example the most dominant section/”bin” would be 20-30, and the resulting Hue will be 20 + 5 = 25.
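The binning described above can be sketched with the standard `colorsys` module. Again this is a pure-Python sketch over a list of RGB tuples, not the add-on's actual code:

```python
import colorsys

def dominant_hue(pixels, bin_size=10):
    """Bin pixel hues into bin_size-degree buckets and return the
    centre of the fullest bucket (e.g. bucket 20-30 -> hue 25)."""
    bins = [0] * (360 // bin_size)
    for r, g, b in pixels:
        # colorsys returns hue in [0, 1); scale to degrees
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        bins[int(h * 360) // bin_size % len(bins)] += 1
    best = max(range(len(bins)), key=lambda i: bins[i])
    return best * bin_size + bin_size // 2
```

A refinement worth trying would be to weight the bins by saturation or brightness, so near-grey pixels don't dominate the histogram.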

The results:

[Image: results of run 2, dominant Hue per frame]

Next up: building an XBMC package

Update 2013-03: The XBMC add-on is available, see https://meethue.wordpress.com/2013/02/22/xbmc-add-on-improved-ambilight/


4 thoughts on “XBMC and Philips Hue ambilight, first steps”

  1. Ben Hoyle

    Excellent stuff. Had some similar thoughts but was going to use a webcam; basing it on an XBMC media server is a much better idea – can also port easily to the Pi.

    With the 1 second time lag – can you delay the video by 1 second? Or scan the video to build up a Hue profile? Also, will you need some smoothing for scene transitions?

    1. meethue Post author

      My initial idea was to use the XBMC buffer (the buffered video) and pre-calculate the colors. But the way I understand it, XBMC’s internal buffers are compressed and not available from the Python API. Delaying the video would be a good work-around too. For now, I made some improvements in the code and it’s a bit more responsive. But it’s not quite there yet. Feel free to play around with the code yourself 🙂

      I thought about creating a Hue profile for the current video as well. Didn’t have time to play around with it though..

