Tuesday, June 26, 2012

4th OpenCV PSMove Example (HTML Debug, CL-SDK, INI-parser, Linux RC1)

This is the fourth blog post about the implementation of a tracker for the colored sphere of the PSMove controller. This time it is less about the tracker itself and more about debugging and some other useful stuff.

1. HTML Debug

Since it is quite hard to understand what happens during the calibration process without having a camera image to observe, an HTML trace of the calibration process is now created at runtime. Here are two examples of what the HTML trace looks like if the calibration fails or if it succeeds.

The first big 4x4 table shows the "blinks" of the color calibration process, already described in [1st Example Color Calibration]. The following row shows the result of the color estimation, i.e. the final mask that is used to estimate the color, the color the sphere was lit with, and the estimated color. After that, a test is performed with the estimated color on the images in the first column of the 4x4 table to see whether the color is a good match. Additionally, warnings and errors are posted under "Extended logging information", and finally a live camera image is shown (only if the calibration was a success).

2. CL-SDK Integration

On Windows, the PS Eye SDK from "Code Laboratories" (CL-SDK) was integrated and is now used to acquire images (previously done via OpenCV) and to configure camera settings like exposure, auto white balance and so on. I would have preferred to stay with OpenCV; however, the CL-SDK does not allow accessing the camera with OpenCV and the CL-SDK simultaneously, nor are the camera settings applied to the camera permanently. In other words, in order to use the CL-SDK to switch off the auto-exposure, I also have to use the CL-SDK to grab the frames from the camera.

For this reason, a new "class" named "camera_control.h" was introduced that abstracts access to the camera (configuration, frame grabbing, initialisation) and encapsulates v4l2, the CL-SDK and OpenCV, in order to provide a single object for accessing the camera and its configuration on Linux and Windows.
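To give a rough idea of what such an abstraction looks like, here is a minimal header-style sketch; the function names and signatures below are my illustrative assumptions, not necessarily the real camera_control.h interface:

/* Hypothetical sketch of a camera abstraction in the spirit of camera_control.h.
 * Names and signatures are assumptions for illustration only. */
#include <opencv/cv.h>

typedef struct CameraControl CameraControl;

/* Open the camera; internally picks the CL-SDK (Windows), v4l2 (Linux) or OpenCV. */
CameraControl *camera_control_new(int camera_id);

/* Grab the next frame, regardless of which backend is active. */
IplImage *camera_control_query_frame(CameraControl *cc);

/* Apply settings (exposure, gain, auto white balance) through the active backend. */
void camera_control_set_parameters(CameraControl *cc, int exposure, int gain, int auto_wb);

/* Restore the previous settings and release the camera. */
void camera_control_delete(CameraControl *cc);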

3. INI-parser

Depending on the camera access mode (CL-SDK or v4l2), the camera settings may be changed permanently (i.e. they survive a restart). Therefore it is useful to make a backup of the camera configuration before modifying it and to restore it again on termination.
To store the configuration without reinventing the wheel, the "iniparser" from [ndevilla.free.fr/iniparser] is used to easily write and read INI files. It might also come in handy in the future to save lens-distortion parameters.
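A rough sketch of how the backup/restore could look with iniparser; the "[camera]" section layout and the get_/set_camera_* helpers are made-up placeholders for whatever the CL-SDK or v4l2 backend actually provides:

#include <stdio.h>
#include "iniparser.h"

/* Placeholders for the actual backend calls (CL-SDK or v4l2). */
extern int  get_camera_exposure(void);
extern int  get_camera_gain(void);
extern void set_camera_exposure(int value);
extern void set_camera_gain(int value);

/* Write the current camera settings to an INI file before touching them. */
void backup_camera_settings(const char *filename) {
    FILE *ini = fopen(filename, "w");
    if (!ini) return;
    fprintf(ini, "[camera]\nexposure = %d\ngain = %d\n",
            get_camera_exposure(), get_camera_gain());
    fclose(ini);
}

/* Read the INI file back on termination and restore the settings. */
void restore_camera_settings(const char *filename) {
    dictionary *d = iniparser_load(filename);
    if (!d) return;
    set_camera_exposure(iniparser_getint(d, "camera:exposure", -1));
    set_camera_gain(iniparser_getint(d, "camera:gain", -1));
    iniparser_freedict(d);
}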

4. Linux RC1 (binaries available for Linux & Windows)

I am happy to announce that the demo application now runs perfectly under Linux (tested with Ubuntu 12.04) as well as on Windows 7. If you would like to give it a try, probably the easiest way is to take the binaries from the OpenCVExample.zip file within the zipball [ex4].




- startDemo.bat: click this to quickstart on Windows
- startDemo.sh: click this to quickstart on Linux (sudo & chmod a+x may be required)
- Debug(xxx): contains the binary for your platform *
- lib: contains prebuilt libraries of psmove-api, openCV, CLEyeMulticam *
- debug.html: click this to view the HTML trace within your browser (do not remove!)
- debug.js: contains the actual debug data (generated during runtime)
*: all binaries are built on either Win7 x32 or Ubuntu Linux 12.04 x32

enjoy!
   cheerio, benjamin


System requirements:

Monday, June 11, 2012

New labs application: Sensorfilter

If you've been watching the PS Move API repository recently, you might have noticed the new "labs/" subdirectory. In there, I'll push some small utilities that I use for debugging and visualization of the current inner workings of the library. The first tool to be put there is "sensorfilter", which is a quick visualization utility that I wrote for testing the new sensor filtering and calibration APIs. It makes use of both PSMoveFilter and PSMoveCalibration, as well as the original PSMove API. With a properly calibrated controller, you can get good readings (again, I've moved the controller a lot for this screenshot):



The slider at the left controls the current low-pass filter implementation's alpha value (i.e. how quickly should the sensor values converge to the newly-read value). As the sensor filter API is kept modular, it's possible to try to stick other sensor implementations in there without having to change client applications (of course, if there are tweakable settings, the client application has to know about these). With the Sensor Filter utility, it's easy to try out new filters and to sanity-check the calibration code.
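For reference, the kind of low-pass filter that the alpha slider controls is essentially exponential smoothing; a minimal sketch of the idea (not the actual PSMoveFilter code):

/* Exponential low-pass filter: alpha near 1.0 follows new readings quickly,
 * alpha near 0.0 smooths (and lags) more. Sketch only, not PSMoveFilter. */
typedef struct {
    float alpha;   /* convergence speed, 0..1 (the slider value) */
    float value;   /* current filtered output */
} LowPassFilter;

float lowpass_update(LowPassFilter *f, float reading) {
    f->value = f->alpha * reading + (1.f - f->alpha) * f->value;
    return f->value;
}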

The utility is available on github.com/thp/psmoveapi in the "labs/sensorfilter" subdirectory. Have a look at the README file to find out how you can build it. It depends on Qt 4 (tested with 4.8).

Plans for the next few days:

  • Have a look at the OpenCV status, provide feedback to Benjamin
  • Try improved sensor filtering algorithms and compare them
  • Finish the calibration backend code, supporting USB "calibration blob" modes
  • Clean up and document the code, extend the Python and Java bindings

Sensor calibration: Custom method and calibration blob

In the last few days, I've been working on getting a basic sensor data filtering infrastructure set up. In addition to that, I've added support for getting and storing the calibration data that is saved on the controller (the axis naming is a bit different in the PS Move API compared to what you will find on that Wiki page). In addition to the factory-set calibration data, I've also implemented support for a "custom" calibration scheme where the user has to do a 6-point tumble test, which will be used as anchor points for calibration.

The custom calibration scheme works a bit like "mccalibrate" from linmctool, but uses (at the moment) a somewhat simpler algorithm (taking the average over 200 sensor readings). The new calibration tool that I wrote (c/calibrate.c) can detect if you have moved your controller too much while the readings were taken, and will ask you to do the given position again. A custom calibration run could look like this (I've moved the controller a lot for the first "buttons up" reading to demo the move detection code):

~S/psmove/psmoveapi% build/calibrate 
Serial number: 04:76:6e:XX:XX:XX
Put the controller in the position 'bulb up' and press the Move button
All readings done for bulb up.
bulb up:
a (avg:     1 |  4359 |   188)
a (dev:    20 |    13 |    43)
m (avg:     2 |    -8 |  -421)
m (dev:     4 |     8 |     5)

Put the controller in the position 'bulb down' and press the Move button
All readings done for bulb down.
bulb down:
a (avg:  -165 | -4379 |  -113)
a (dev:    30 |    20 |    48)
m (avg:   -69 |   287 |  -435)
m (dev:     5 |    10 |     5)

Put the controller in the position 'buttons up' and press the Move button
All readings done for buttons up.
buttons up:
a (avg:   177 |    62 |  4173)
a (dev:  3940 |  2079 |   987)
m (avg:   -34 |    57 |  -250)
m (dev:    22 |    16 |     6)



  DEVIATION TOO HIGH - PLEASE RETRY

Put the controller in the position 'buttons up' and press the Move button
All readings done for buttons up.
buttons up:
a (avg:   -41 |   358 |  4362)
a (dev:    22 |    19 |    19)
m (avg:   -29 |    77 |  -250)
m (dev:     5 |    10 |     5)

Put the controller in the position 'buttons down' and press the Move button
All readings done for buttons down.
buttons down:
a (avg:  -128 |   422 | -4343)
a (dev:    28 |    21 |    25)
m (avg:   -61 |    84 |  -515)
m (dev:     5 |    10 |     7)

Put the controller in the position 'buttons left' and press the Move button
All readings done for buttons left.
buttons left:
a (avg:  4252 |   188 |    63)
a (dev:    38 |    41 |    49)
m (avg:    96 |    76 |  -392)
m (dev:     4 |    13 |     8)

Put the controller in the position 'buttons right' and press the Move button
All readings done for buttons right.
buttons right:
a (avg: -4458 |   338 |   -82)
a (dev:    26 |    24 |    35)
m (avg:  -187 |    85 |  -369)
m (dev:     5 |    13 |     6)
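The averaging and the "deviation too high" check seen in the transcript could conceptually be implemented like this (a sketch, not the actual c/calibrate.c code; the threshold is an arbitrary example value):

#include <math.h>

#define NUM_READINGS   200
#define MAX_DEVIATION  200.f   /* example threshold, not the real value */

/* Average one sensor axis over NUM_READINGS samples and compute its standard
 * deviation, so the caller can reject a position where the controller moved. */
int average_axis(const int readings[NUM_READINGS], float *avg, float *dev) {
    float sum = 0.f, sq = 0.f;
    int i;
    for (i = 0; i < NUM_READINGS; i++) {
        sum += readings[i];
        sq  += (float)readings[i] * readings[i];
    }
    *avg = sum / NUM_READINGS;
    float var = sq / NUM_READINGS - (*avg) * (*avg);
    *dev = sqrtf(var > 0.f ? var : 0.f);
    return *dev <= MAX_DEVIATION;   /* 0 means: please retry this position */
}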

Now that we have done a calibration run, we need a tool to display the results (also, we need a tool that reads the data from USB and stores it): Enter "dump_calibration". This tool will read and persist all calibration blobs of connected USB controllers (the "calibrate" tool will only store custom calibration, and only for Bluetooth controllers). When run with a Bluetooth controller (and again assuming that you have already done the USB fetching part), you can get output like this:

~S/psmove/psmoveapi% build/dump_calibration 
File: /Users/thp/.psmoveapi/04_76_6e_XX_XX_XX.calibration.txt
Flags: 3
Have USB calibration:
10 00 67 07 4f 7f a4 7f c2 90 68 6e 25 80 05 80
60 7f 10 80 bf 6e 75 90 c6 7f c5 7f c1 7f bb 90
33 80 47 7f c7 6e 90 7f d2 08 db 7f 57 80 47 80
d7 07 d2 7f 58 80 4b 80 00 00 00 00 00 00 00 00
00 01 ce 08 e0 01 04 97 53 80 5b 80 e0 01 cc 7f
7b 90 39 80 e0 01 dd 7f 4d 80 64 94 f4 07 d1 d7
12 41 72 fc d0 c0 c9 3e 0d c2 a4 1c 6f 3f a9 90
7b 3f 37 5c 71 3f 02 1d 32 3f 87 69 a1 3d 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
# Orientation #0: ( -177 |   -92 |  4290)
# Orientation #1: (-4504 |    37 |     5)
# Orientation #2: ( -160 |    16 | -4417)
# Orientation #3: ( 4213 |   -58 |   -59)
# Orientation #4: (  -63 |  4283 |    51)
# Orientation #5: ( -185 | -4409 |  -112)
# Gyro X, 80 rpm: ( 5892 |    83 |    91)
# Gyro Y, 80 rpm: (  -52 |  4219 |    57)
# Gyro Z, 80 rpm: (  -35 |    77 |  5220)
# byte at 0x3F: 00

Have custom calibration:
         ax         ay         az         mx         my         mz
#0:       1.27    4359.10     187.74       2.04      -8.18    -421.33 
#1:    -164.57   -4378.57    -112.98     -69.21     286.53    -435.20 
#2:     -41.38     358.21    4361.73     -28.52      77.03    -249.96 
#3:    -127.78     421.94   -4342.93     -61.33      83.87    -514.74 
#4:    4251.81     187.99      62.83      95.57      75.67    -391.71 
#5:   -4458.25     338.40     -81.67    -187.27      85.35    -369.02 

This calibration file can be used by the new PSMoveCalibration API that wraps a PSMove object and provides calibration features on top of it. The function that users will probably use most is psmove_calibration_map() - it takes as input 3, 6 or 9 integer values and converts them into corresponding float values that have been normalized.
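Conceptually, that mapping is a per-axis linear transform built from the calibration anchor points; here is a simplified illustration for a single accelerometer axis (my own sketch of the idea, not the actual psmove_calibration_map() code or signature):

/* Map a raw accelerometer reading to units of g, given the raw values recorded
 * at +1g and -1g for that axis (from the tumble test or the USB blob).
 * Illustration only. */
float map_axis_to_g(int raw, int raw_at_plus_1g, int raw_at_minus_1g) {
    float zero  = (raw_at_plus_1g + raw_at_minus_1g) / 2.f;  /* raw value at 0 g */
    float scale = (raw_at_plus_1g - raw_at_minus_1g) / 2.f;  /* raw units per 1 g */
    return (raw - zero) / scale;
}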

With the tumble test ("custom calibration"), we only get values for the accelerometer and magnetometer - for calibrating the gyro, we would need to have access to a turntable and control its speed - something that's not impossible to do, but very hard. Thanks to the research done by other MoveOnPC people, we can extract the information from the USB calibration blob - it stores the expected readings for 80 rotations/minute (according to the wiki page).
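Turning that into a gyro scale factor is then a one-liner; a back-of-the-envelope sketch (my own illustration, not library code):

/* 80 rotations per minute expressed in radians per second: 80 * 2*pi / 60. */
#define RAD_PER_SEC_AT_80_RPM (80.f * 2.f * 3.14159265f / 60.f)   /* ~8.38 rad/s */

/* Raw gyro reading -> rad/s, given the raw reading the blob reports at 80 rpm. */
float gyro_to_rad_per_sec(int raw, int raw_at_80rpm) {
    return (float)raw * RAD_PER_SEC_AT_80_RPM / (float)raw_at_80rpm;
}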

You can find the new code on github.com/thp/psmoveapi - expect some rough edges and more updates in the coming days and weeks :)

Thursday, June 7, 2012

3rd OpenCV PSMove Example (Region Of Interest)

This is the third blogpost about the implementation of a tracker for the colored sphere of the PSMove controller. This time it is about optimizing the tracker's computation time, and some other stuff.

1. Increasing calculation speed

Thanks to Budaházi Viktor from [moveframework], I learned [mailinglist] that 70fps is probably not fast enough, as applications using the psmoveapi may already put a heavy load on the system.

Therefore I introduced a technique called ROI (region of interest) [opencv roi example] in order to reduce computation time. The main idea is that instead of evaluating the whole picture, only a region of the picture is evaluated in which it is very likely to find what we are searching for. This can speed up calculations tremendously; however, the framerate is not constant anymore, as the ROI is shrunk or extended during runtime.

So here is what I did: I defined the application to have an arbitrary number of ROI levels, each level reducing the size by 40% compared to the one above. In the demo I chose to have 5 ROI levels, so that:
Level 1.                     640x480px (full camera image)
Level 2. 60% of (1) --> 383x383px
Level 3. 60% of (2) --> 229x229px
Level 4. 60% of (3) --> 137x137px
Level 5. 60% of (4) --> 82x82px

In the main loop of the tracker, I calculate the bounding box of the sphere found in the current image. Then, for the next iteration of the loop, the ROI level is set to one that can hold that bounding box (multiplied by 2). If the sphere was not found at all, I go upwards in the hierarchy of ROI levels until the sphere is found again. The center of the ROI is always set to the last location where the sphere was found. Future implementations may use movement prediction and additional sensor data to shift the center towards the direction the user is likely to move the controller. This would reduce switching between the ROI levels for fast movements.
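A condensed sketch of this level-selection logic (the function and variable names are illustrative, not the demo's actual code):

#define NUM_ROI_LEVELS 5

/* ROI edge lengths as listed above: each level is 60% of the previous one. */
static const int roi_sizes[NUM_ROI_LEVELS] = { 640, 383, 229, 137, 82 };

/* Pick the smallest ROI level that still holds twice the sphere's bounding box;
 * if the sphere was lost, step one level up (towards the full image). */
int choose_roi_level(int bbox_w, int bbox_h, int sphere_found, int current_level) {
    if (!sphere_found)
        return (current_level > 0) ? current_level - 1 : 0;   /* grow the ROI */

    int needed = 2 * ((bbox_w > bbox_h) ? bbox_w : bbox_h);
    int level = 0;
    while (level + 1 < NUM_ROI_LEVELS && roi_sizes[level + 1] >= needed)
        level++;
    return level;
}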

At the top of the video you can now see the framerate, the expected sphere color, the average luminance, the camera exposure and the ROI size. The white square in the image denotes the size and location of the current ROI. Note how the framerate increases when I move further away from the camera. With the help of the ROI I now get framerates of up to 1500fps :P.

2. Handling distortions from (fluorescent) light sources

I also figured out that having a fluorescent light source in a room may cause the calibration to fail and can highly influence the accuracy of the tracking. Due to their operational mode, the lamps have, just like the camera, something similar to a refresh rate. As the camera and the light sources are not synchronized and do not have the same refresh rate, the video feed seems to flicker, i.e. there are travelling darker/brighter horizontal/diagonal regions within the camera image. [light flickering]

In one of the previous posts I explained that I perform the color calibration with the help of a sequence of difference images. The light flickering causes small but recognizable differences in these difference images, which are then in turn regarded as the sphere being lit/unlit.

Increasing the number of image pairs taken by one already decreased false detections considerably. However, it is still not enough, as false detections still remain in all image pairs, and further increasing the number of pairs may neither be bearable for the user, nor is it clear whether it would be beneficial.

I learned from a colleague about morphological operations like [dilation] and [erosion], which were quite helpful for cleansing the image of smaller false detections.

1) original image
2) difference image
3) thresholded image
4) eroded/dilated image

Notice how, in the lower left corner, the lamp causes a larger white area in the thresholded image (3), and how it is removed by a subsequent erosion and dilation in image (4).
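With the OpenCV C API this clean-up boils down to two calls; a minimal sketch (the default 3x3 kernel and one iteration each are just example parameters):

#include <opencv/cv.h>

/* Remove small speckles from a binary (thresholded) difference image: erosion
 * eats isolated white pixels, dilation restores the size of the remaining
 * large blobs such as the sphere. */
void cleanse_mask(IplImage *mask) {
    cvErode(mask, mask, NULL, 1);    /* NULL = default 3x3 structuring element */
    cvDilate(mask, mask, NULL, 1);
}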

3. Choosing the right camera exposure time

Depending on the current average luminance in the camera image, the camera exposure is chosen appropriately. I found out that an exposure value smaller than 0x10 causes colors to be very grey-ish, and going higher than 0x40 increases motion blur (with a fast-moving controller) and makes the sphere look very white-ish due to the long exposure.

At the very beginning of the calibration I therefore start with exposure 0x10 and go up step by step to exposure 0x40 until I get an average luminance of 25.

The average luminance is defined in my case as:

IplImage* cameraImage = /* current camera frame */;
CvScalar avgColor = cvAvg(cameraImage, NULL);
float averageLuminance = (avgColor.val[0] + avgColor.val[1] + avgColor.val[2]) / 3;

If the resulting average luminance is above 0x20, I reduce the sphere's brightness to 70%, and if it is above 0x30, I reduce it to 50%. This ensures that the sphere's color does not look too white-ish for longer exposure times.
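Put together, the exposure search and the brightness reduction could look roughly like this; set_camera_exposure(), get_average_luminance() and set_sphere_brightness() are stand-ins for the actual camera_control and PS Move calls, not real API names:

/* Stand-ins for the real camera/controller calls. */
extern void  set_camera_exposure(int exposure);
extern float get_average_luminance(void);         /* computed as shown above */
extern void  set_sphere_brightness(float factor); /* scales the RGB sent to the sphere */

void adjust_exposure_and_brightness(void) {
    int exposure;

    /* Step the exposure up from 0x10 towards 0x40 until the image is bright enough. */
    for (exposure = 0x10; exposure <= 0x40; exposure++) {
        set_camera_exposure(exposure);
        if (get_average_luminance() >= 25.f)
            break;
    }

    /* Dim the sphere for long exposures so its color does not wash out. */
    float lum = get_average_luminance();
    if (lum > 0x30)
        set_sphere_brightness(0.5f);
    else if (lum > 0x20)
        set_sphere_brightness(0.7f);
}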

Well, that's it for this time. ...

system requirements:

Monday, June 4, 2012

2nd OpenCV PSMove Example (color tracking)


This is the second blogpost about the implementation of a tracker for the colored sphere of the PSMove controller. This time it is about the tracker itself.

Given the HSV color (hc) of the sphere in the camera image, we can now try to track the sphere within the video feed.

Here is what I do (a combined code sketch follows after this list):
  • convert the color frame to HSV
  • filter the HSV frame with cvInRangeS(src, min, max, mask)
    • min = cvScalar(hc.val[0] - 5, hc.val[1] - 85, hc.val[2] - 35, 0);
    • max = cvScalar(hc.val[0] + 5, hc.val[1] + 85, hc.val[2] + 35, 0);
    • As you can see, the hue value is only allowed a very limited range (±5), whilst the saturation may vary quite a lot (±85) and the intensity moderately (±35). Giving the saturation so much room to vary provides better robustness for fast movements.
  • remove small noise from the image with a median filter, cvSmooth(CV_MEDIAN, 5, 5)
  • find all blobs within the binary image with cvFindContours
    • iterate over all found contours and keep only the biggest one
  • calculate the center of mass of that biggest contour to approximate the center of the sphere
    •  cvMoments(mask, &mu, 0);
    •  cvPoint(mu.m10 / mu.m00, mu.m01 / mu.m00);
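Stitched together, the steps above amount to roughly the following function (a sketch using the OpenCV 1.x C API; image allocation and error handling are left out):

#include <math.h>
#include <opencv/cv.h>

/* One tracking iteration. 'frame' is the current BGR camera image, 'hc' the
 * calibrated HSV color of the sphere; 'hsv' (3-channel) and 'mask' (1-channel)
 * are preallocated scratch images of the same size. Returns nonzero and the
 * approximated sphere center on success. */
int track_sphere(IplImage *frame, CvScalar hc,
                 IplImage *hsv, IplImage *mask, CvPoint *center)
{
    /* 1. convert to HSV and filter around the calibrated color */
    cvCvtColor(frame, hsv, CV_BGR2HSV);
    CvScalar min = cvScalar(hc.val[0] - 5, hc.val[1] - 85, hc.val[2] - 35, 0);
    CvScalar max = cvScalar(hc.val[0] + 5, hc.val[1] + 85, hc.val[2] + 35, 0);
    cvInRangeS(hsv, min, max, mask);

    /* 2. remove small noise with a median filter */
    cvSmooth(mask, mask, CV_MEDIAN, 5, 5, 0, 0);

    /* 3. find all blobs and keep the biggest one */
    CvMemStorage *storage = cvCreateMemStorage(0);
    CvSeq *contours = NULL, *best = NULL;
    double best_area = 0;
    cvFindContours(mask, storage, &contours, sizeof(CvContour),
                   CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
    for (CvSeq *c = contours; c != NULL; c = c->h_next) {
        double area = fabs(cvContourArea(c, CV_WHOLE_SEQ, 0));
        if (area > best_area) { best_area = area; best = c; }
    }

    /* 4. center of mass of the biggest contour approximates the sphere center */
    int found = 0;
    if (best) {
        CvMoments mu;
        cvMoments((CvArr *)best, &mu, 0);
        *center = cvPoint((int)(mu.m10 / mu.m00), (int)(mu.m01 / mu.m00));
        found = 1;
    }
    cvReleaseMemStorage(&storage);
    return found;
}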






The tracking (just the calculation steps above) runs on my computer (Win7, Intel E5200) at around 70fps and seems to be quite robust. However, as you can see at the end of the first video, calculating the center of mass may be fast, but it is error-prone when the sphere is occluded!

system requirements:



1st OpenCV PSMove Example (color calibration)

Hi there everybody. This is my first blog post about the implementation of a tracker for the colored sphere of the PSMove controller. For this time I'll just present some of the difficulties I have encountered and how I solved them.

1st Problem (what color does the sphere have?)

I intended the tracker to work with a color-filter in order to find the glowing sphere in the camera image. Therefore it is important to know the actual color of the sphere in the camera image!

The color may be highly influenced by the current lighting conditions, the camera's sensor, and driver functionality of the camera like auto white balance, auto exposure and auto gain. This gets even worse considering that the lighting conditions may change over time (switching lights on/off, closing curtains, ...).

The main idea to bypass this problem is to take two pictures within a short time, one in which the sphere is off and one in which it is lit. From this pair it is easy to compute the difference with cvAbsDiff() in order to find the area in the image where the sphere is located, and then to extract the color information of the lit sphere with cvAvg() for that area.

However, as there may be motion in the picture, caused either by the user himself or by someone/something else, calculating a single difference image is not enough, e.g. as can be seen in the following picture.
difference image with a lot of user motion (white areas)
In order to diminish unwanted motion in the difference image, three or more image pairs are taken. As it is likely that only the difference of the lit sphere is visible in all the calculated difference images, the area where the sphere is located can be approximated by combining the difference images, e.g. as can be seen in the following picture.



Still, this procedure may produce unexpected behaviour if the user moves a lot. Therefore I introduced further checks.

For all images where the sphere was lit (of the previous series) do:
  1. filter the image with the color we just approximated with the help of cvInRangeS(). In the resulting binary image, perform a search for contours with cvFindContours(). If there is not exactly ONE contour found, discard the color and start the color calibration again.
  2. If the area of that contour is too small (e.g. <100px) --> discard the color and start again
  3. If the area of the contour differs too much from image to image --> discard the color and start again

These steps turned out to be quite robust under different lighting conditions.
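For reference, the pairwise differencing and the combination of the difference images described above could be coded roughly like this (a sketch; the number of pairs and the threshold value are arbitrary example choices):

#include <opencv/cv.h>

#define NUM_PAIRS      3
#define DIFF_THRESHOLD 20   /* example value: anything brighter counts as "changed" */

/* Combine several lit/unlit image pairs into one mask that (ideally) only
 * contains the sphere: a pixel survives only if it changed in every pair.
 * 'diff' is a 3-channel scratch image, 'gray' and 'mask' are 1-channel. */
void build_sphere_mask(IplImage *lit[NUM_PAIRS], IplImage *unlit[NUM_PAIRS],
                       IplImage *diff, IplImage *gray, IplImage *mask)
{
    int i;
    cvSet(mask, cvScalar(255, 0, 0, 0), NULL);   /* start with "everything changed" */
    for (i = 0; i < NUM_PAIRS; i++) {
        cvAbsDiff(lit[i], unlit[i], diff);        /* per-pair difference image */
        cvCvtColor(diff, gray, CV_BGR2GRAY);
        cvThreshold(gray, gray, DIFF_THRESHOLD, 255, CV_THRESH_BINARY);
        cvAnd(mask, gray, mask, NULL);            /* keep only the common changes */
    }
    /* The sphere color is then simply cvAvg(lit[0], mask) on the masked area. */
}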

If you'd like to check out the code, use the tag https://github.com/benniven/psmoveapiplayground/tree/ex1 to get it from github.

system requirements:

Saturday, June 2, 2012

Multi-threaded LED writing

Today I've experimented a bit with using multi-threading on Linux to write LED status updates. On Linux, setting the LEDs actually blocks the process - depending on the controller - for 30 ms up to 500 ms in some cases. I'm not yet sure if there is a way to make this faster (on Mac OS X with the same hardware, updating the LEDs is no problem, but maybe OS X does not wait for an acknowledgement when sending out the set-LEDs packet).
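The basic idea is simply to push the blocking write onto a worker thread so that the sensor reading loop is never stalled. Below is a bare-bones pthread sketch using the public psmove_set_leds()/psmove_update_leds() calls; it is my own illustration of the idea, not the code from the branch:

#include <pthread.h>
#include <unistd.h>
#include "psmove.h"

/* State shared between the reading loop and the LED writer thread. */
struct led_writer {
    PSMove *move;
    pthread_mutex_t mutex;
    unsigned char r, g, b;
    int dirty;   /* set by the main thread whenever the color changes */
    int quit;
};

static void *led_writer_thread(void *arg) {
    struct led_writer *w = arg;
    for (;;) {
        pthread_mutex_lock(&w->mutex);
        int quit = w->quit, dirty = w->dirty;
        unsigned char r = w->r, g = w->g, b = w->b;
        w->dirty = 0;
        pthread_mutex_unlock(&w->mutex);

        if (quit)
            break;
        if (!dirty) {
            usleep(1000);   /* nothing to write, avoid busy-waiting */
            continue;
        }
        /* This write can block for 30-500 ms on Linux, but now only this
         * worker thread stalls, not the loop calling psmove_poll(). */
        psmove_set_leds(w->move, r, g, b);
        psmove_update_leds(w->move);
    }
    return NULL;
}

The main loop then only updates r/g/b and sets dirty under the mutex, while a thread started with pthread_create() keeps led_writer_thread running in the background.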

You can find the multi-threading code in the multithreaded branch on Github. Now that we have the nice test_read_performance tool, we can compare the results with the previous results from the OS X installation (same hardware):

 -- PS Move API Sensor Reading Performance Test --

Testing STATIC READ performance (non-changing LED setting)
1000 reads in 13043 ms = 76.669478 reads/sec (144x seq jump = 14.40 %)
1000 reads in 12563 ms = 79.598822 reads/sec (100x seq jump = 10.00 %)
1000 reads in 12451 ms = 80.314834 reads/sec (88x seq jump = 8.80 %)
=====
Mean over 3 rounds: 78.861043 reads/sec

Testing SMART READ performance (rate-limited LED setting)
1000 reads in 20073 ms = 49.818164 reads/sec (177x seq jump = 17.70 %)
1000 reads in 20113 ms = 49.719087 reads/sec (196x seq jump = 19.60 %)
1000 reads in 19858 ms = 50.357539 reads/sec (190x seq jump = 19.00 %)
=====
Mean over 3 rounds: 49.964930 reads/sec

Testing BAD READ performance (continous LED setting)
1000 reads in 26187 ms = 38.186887 reads/sec (310x seq jump = 31.00 %)
1000 reads in 25985 ms = 38.483741 reads/sec (306x seq jump = 30.60 %)
1000 reads in 26784 ms = 37.335723 reads/sec (343x seq jump = 34.30 %)
=====
Mean over 3 rounds: 38.002116 reads/sec

Testing RAW READ performance (no LED setting)
1000 reads in 12612 ms = 79.289565 reads/sec (101x seq jump = 10.10 %)
1000 reads in 12639 ms = 79.120184 reads/sec (109x seq jump = 10.90 %)
1000 reads in 12576 ms = 79.516539 reads/sec (107x seq jump = 10.70 %)
=====
Mean over 3 rounds: 79.308762 reads/sec

Still not as good as on OS X, but a good starting point. Without multi-threading on Linux, the results are much worse when the LEDs are updated often:

 -- PS Move API Sensor Reading Performance Test --

Testing STATIC READ performance (non-changing LED setting)
1000 reads in 12890 ms = 77.579519 reads/sec (125x seq jump = 12.50 %)
1000 reads in 12660 ms = 78.988942 reads/sec (104x seq jump = 10.40 %)
1000 reads in 12650 ms = 79.051383 reads/sec (108x seq jump = 10.80 %)
=====
Mean over 3 rounds: 78.539948 reads/sec

Testing SMART READ performance (rate-limited LED setting)
1000 reads in 14596 ms = 68.511921 reads/sec (161x seq jump = 16.10 %)
1000 reads in 14315 ms = 69.856794 reads/sec (143x seq jump = 14.30 %)
1000 reads in 14224 ms = 70.303712 reads/sec (139x seq jump = 13.90 %)
=====
Mean over 3 rounds: 69.557475 reads/sec

Testing BAD READ performance (continous LED setting)
1000 reads in 41132 ms = 24.311971 reads/sec (69x seq jump = 6.90 %)
1000 reads in 41014 ms = 24.381918 reads/sec (70x seq jump = 7.00 %)
1000 reads in 41153 ms = 24.299565 reads/sec (76x seq jump = 7.60 %)
=====
Mean over 3 rounds: 24.331151 reads/sec

Testing RAW READ performance (no LED setting)
1000 reads in 12683 ms = 78.845699 reads/sec (114x seq jump = 11.40 %)
1000 reads in 12723 ms = 78.597815 reads/sec (114x seq jump = 11.40 %)
1000 reads in 12640 ms = 79.113924 reads/sec (107x seq jump = 10.70 %)
=====
Mean over 3 rounds: 78.852478 reads/sec


Interestingly, the multi-threaded variant is worse in the SMART READ test (the rate-limited LED update variant). I'm not sure why this is the case - maybe it's some threading overhead. For some of my older PS Move controllers that I have here, it's even worse - it seems like some controllers respond faster than others. I wonder if there's a difference in the firmware or if it's different hardware (the faster controller is the one that I bought more recently).

Friday, June 1, 2012

Bluetooth on Linux: Fixing hidapi's HID enumeration

One obvious goal of the PS Move API is to be cross-platform. On Linux right now, we can get the controller working via USB with no problems, but Bluetooth devices are not found via hidapi's hid_enumerate() method. One could work around this by opening the /dev/hidraw[0-9]* devices directly, and picking one. However this is kludgy and would require another special-case. The reason why the hid_enumerate() method does not work on Linux is because it always tries to find the USB device for a given hidraw device (even if the hidraw device is a Bluetooth one) and returns the VID/PID from there (this works fine for USB devices, but in case of Bluetooth devices what is returned is the VID/PID of the Bluetooth host adapter, which is not what we are interested in).

As a workaround, I've now implemented a better detection mechanism based on the sysfs path of the hidraw file. For Bluetooth devices, the path could look like this:

/sys/devices/pci0000:00/0000:00:06.0/usb4/4-1/4-1.1/4-1.1:1.0/bluetooth/hci0/hci0:12/0005:054C:03D5.0007/hidraw/hidraw5

For USB devices, it could look like this:

/sys/devices/pci0000:00/0000:00:06.1/usb2/2-2/2-2.1/2-2.1:1.0/0003:054C:03D5.0008/hidraw/hidraw6

In the case of the PS Move Motion Controller, the vendor ID is 0x054c and the product ID is 0x03d5 (you can spot these IDs in the path components above). Given that hidapi already determines the sysfs path of the hidraw device, it's easy to write a function that extracts the IDs from the path. I've done so now in a patch against hidapi: commit 8ba92edb519 in thp/hidapi.
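The extraction itself can be done with a simple scan over the path components; the following is only a rough approximation of the idea, not the actual patch:

#include <stdio.h>
#include <string.h>

/* Find the "bus:VID:PID.instance" component (e.g. "0005:054C:03D5.0007") in a
 * sysfs hidraw path and return the vendor/product IDs. Sketch of the idea only. */
int parse_ids_from_sysfs_path(const char *path,
                              unsigned short *vid, unsigned short *pid)
{
    char copy[1024];
    int found = 0;
    strncpy(copy, path, sizeof(copy) - 1);
    copy[sizeof(copy) - 1] = '\0';

    for (char *tok = strtok(copy, "/"); tok; tok = strtok(NULL, "/")) {
        unsigned int bus, v, p, instance;
        if (sscanf(tok, "%x:%x:%x.%x", &bus, &v, &p, &instance) == 4) {
            /* keep the last match: PCI address components earlier in the
             * path (e.g. "0000:00:06.0") can also match this pattern */
            *vid = (unsigned short)v;
            *pid = (unsigned short)p;
            found = 1;
        }
    }
    return found;
}

Applied to the Bluetooth path above, this yields 0x054c/0x03d5 instead of the host adapter's IDs.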

In order to get it merged into hidapi upstream, I've created pull request #62 at the signal11/hidapi repository on Github. Let's see if I have to rework/improve the patch or if it will be accepted.

After the patch has been merged (you can merge it locally or clone from my hidapi repo), connecting to the PS Move on Linux will become even easier. The only part remaining then is figuring out the permission problems on the hidraw devices (we could probably fix this with some udev rules) and getting Bluez' bluetoothd to reliably accept connections from new PS Move controllers.