Mittwoch, 15. August 2012

5th OpenCV PSMove Example (Multi-Controller Tracking, World Distance Estimation, Build-Toolchain)


This is the fifth blog post about the implementation of a tracker for the colored sphere of the PSMove controller. This time it is about multi-controller tracking, real-world distance estimation, performance improvements, and the build toolchain.

1. Multi Controller Tracking

A lot has happened in the meantime. The most exciting additions to the tracker are multi-controller tracking, increased robustness, and the calculation of the distance to the camera.

Have a look at a short video featuring tracking of two controllers.



There are a few things worth noticing in that example. First of all, the obvious: two controllers are tracked at the same time at a reasonable speed of 350-900 FPS, depending on their distance to the camera. The farther away a controller is from the camera, the higher the FPS, and vice versa. This is caused by the different levels of regions of interest I use to search for the colored blob in the image, as already described in one of the previous posts [3rd OpenCV Example].

2. Real world distance estimation

The example now also displays an estimate of the distance between the controller and the camera in [mm]. This might come in very handy for applications that want to make use of the 3D position of the controller. You might ask why to use the distance in [mm] rather than the sphere's radius. Using the real-world unit has two advantages. One is that [mm] is easier to understand, as it relates directly to the real world in which the interaction takes place. The second is that the distance in [mm] has a 1:1 relation to the user's movement on the Z-axis, while the radius has a non-linear relation. To put it simply ... it makes app development easier :P.
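Just to illustrate the general idea behind such a distance estimate, here is a little sketch based on the pinhole camera model. It is not the code used in the tracker, and the focal length and sphere radius below are assumptions for illustration only:

#include <stdio.h>

/* Pinhole model: the projected radius of a sphere shrinks roughly with
 * 1/distance, so  distance_mm = focal_length_px * real_radius_mm / pixel_radius.
 * Both constants below are illustrative guesses, not calibrated values. */
static float estimate_distance_mm(float pixel_radius)
{
    const float focal_length_px = 540.0f;   /* assumed PS Eye focal length in pixels */
    const float sphere_radius_mm = 22.5f;   /* approximate radius of the Move sphere */
    return focal_length_px * sphere_radius_mm / pixel_radius;
}

int main(void)
{
    printf("radius 20 px -> ~%.0f mm\n", estimate_distance_mm(20.0f));
    return 0;
}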

3. Increased worst case performance 


One big problem of the old examples was that if the controller was not visible in the camera image at all, the framerate dropped drastically (to around 60 FPS on a 2.5 GHz dual core), which was a no-go on slower systems, e.g. laptops in power-saving mode. To increase the FPS when the controller is not tracked, the tracker now only scans a quarter of the image, and if it does not find the controller there, it immediately returns NOT_FOUND. In the next iteration a different quarter of the image is evaluated, and so on. This is best seen in the video at second 00:17, when I hide the controllers behind my back.
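Here is a rough sketch of that idea (not the actual tracker code): while the controller is lost, each call scans only one quadrant of the frame and the next call moves on to the next quadrant.

/* Sketch: cycle through the four quadrants of a 640x480 frame, scanning
 * only one of them per tracker iteration while the controller is lost. */
typedef struct { int x, y, width, height; } Rect;

static Rect next_search_quadrant(int frame_w, int frame_h)
{
    static int quadrant = 0;               /* persists between calls */
    Rect roi;
    roi.width = frame_w / 2;
    roi.height = frame_h / 2;
    roi.x = (quadrant % 2) * roi.width;    /* left or right half */
    roi.y = (quadrant / 2) * roi.height;   /* upper or lower half */
    quadrant = (quadrant + 1) % 4;         /* next call looks elsewhere */
    return roi;
}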

4. Increased robustness to occlusions

Another nice feature is the increased robustness to occlusions. Have a look at the screenshots taken from the video above.

In all of the screenshots you can see that the circle of the controller is estimated very well, although the sphere is partly occluded. The estimation of the sphere's center is quite simple and can therefore only correct occlusions smaller than one half of the sphere's size. It utilizes the idea that the two most distant points of a detected contour span the diameter of the sphere, and that the midpoint between them is the center of the sphere. This assumption holds if the occlusion occurs from one side or from two opposing sides. In other situations it may fail, but the approach turned out to be convenient and fast.
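To make the idea concrete, here is a minimal sketch of that estimation (simplified, not the actual tracker code): find the two most distant contour points and take their midpoint as the center.

#include <math.h>

typedef struct { float x, y; } Point2f;

/* Sketch: the two most distant points of the contour are assumed to span
 * the sphere's diameter, so their midpoint is the sphere's center.
 * O(n^2) is fine for the small contours produced by the blob detection. */
static void estimate_sphere(const Point2f *contour, int n,
                            Point2f *center, float *radius)
{
    float best = 0.0f;
    int best_i = 0, best_j = 0;
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            float dx = contour[i].x - contour[j].x;
            float dy = contour[i].y - contour[j].y;
            float d2 = dx * dx + dy * dy;
            if (d2 > best) {
                best = d2;
                best_i = i;
                best_j = j;
            }
        }
    }
    center->x = (contour[best_i].x + contour[best_j].x) / 2.0f;
    center->y = (contour[best_i].y + contour[best_j].y) / 2.0f;
    *radius = sqrtf(best) / 2.0f;
}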

5. The new build process and PSMoveAPI integration

Thanks to Thomas Perl, the optical tracking of the controller has now been integrated into the branch "tracker" of the official psmoveapi repository hosted on GitHub. It will soon be merged into the master branch, but for now just stick with the "tracker" branch. It also contains a detailed description of how to build the whole system and the different examples (including the one from the video) that are part of the psmoveapi. As the build instructions for Windows are quite long, they are included at the end of this post :).

6. Remaining problems of the optical tracking

Unfortunately the optical tracking still has some problems.
  1. Only magenta, cyan and blue can be tracked robustly. Other colors suffer a lot from motion blur, which results in low tracking performance, especially regarding the z-position of the controller.
  2. Strong daylight coming through windows can be a big problem for tracking performance, too. Avoid direct sunlight by closing the curtains if the controllers cannot be tracked.
  3. There are still too many false detections (sphere estimated too small / wrong position) when the controller is partly occluded.
  4. Artificial light, especially that emitted by fluorescent lamps, causes unwanted jitter in the estimation of the 3D position.

enjoy!
   cherio benjamin


System requirements:


---- Build Instructions For Windows -----------
Get and install
- MinGW       : http://sourceforge.net/projects/mingw/files/latest/download?source=files
- CMake       : http://www.cmake.org/cmake/resources/software.html
- OpenCV      : http://sourceforge.net/projects/opencvlibrary/files/opencv-win/
- GIT         : e.g. http://code.google.com/p/msysgit/
- PSEyeDriver : http://codelaboratories.com/get/cl-eye-driver/
[optional]
- CLEyeSDK    : http://codelaboratories.com/get/cl-eye-sdk/

1. build and configure OpenCV with cmake
    :: you may try to skip building OpenCV yourself and use the prebuilt binaries;
    :: however, I had no luck: the binary distribution did not work on my system
    cd <where you extracted opencv>
    mkdir build
    cd build
    cmake .. -G "MinGW Makefiles"
    mingw32-make
    :: now go for a coffee break

2. Get your clone of the psmoveapi
    git clone https://github.com/thp/psmoveapi.git
   
3. Check out the "tracker" branch
    cd psmoveapi
    git checkout tracker

4. Init and update the submodules
    git submodule init
    git submodule update
   
5. Copy the Bluetooth headers and library to your MinGW installation
    :: e.g. MinGW installed at C:\MinGW\
    :: e.g. your cloned repository is at D:\dev\psmoveapi
   
    copy D:\dev\psmoveapi\external\mingw-w64-headers\*.h  C:\MinGW\include
    copy D:\dev\psmoveapi\external\mingw-w64-headers\*.a  C:\MinGW\lib

6. make OpenCV known to your system and the cmake toolchain
    set OpenCV_DIR=<the path where you extracted opencv>
    set PATH=%PATH%;%OpenCV_DIR%\build\bin

7. prepare a new build with cmake for the psmoveapi
    ::
    mkdir build
    cd build
    :: only with OpenCV Camera access
    cmake .. -G "MinGW Makefiles"
    :: additionally with Code Laboratories PS Eye SDK
    cmake .. -G "MinGW Makefiles" -DPSMOVE_USE_CL_EYE_SDK=ON
   
8. finally build
    mingw32-make
   
9. start one of the desired test applications
------------------------------------------------

Samstag, 28. Juli 2012

Gyro calibration experiments with a turntable

Last weekend, I've dug out an old turntable to see how well the gyroscope of the Move can be calibrated with the USB-based calibration blob. The turntable has the advantage that it has a known rotation speed (two modes: 33 RPM and 45 RPM), so this can be used to see if the values we get back from one of the gyro axes somehow relates to real-world values.

Before I tried the turntable method, I just played around with the raw gyro values to see what I could get out of them. I wrote a very simple QGraphicsView-based GUI to see the output visually, and this is what came out of that example:



As you see, that was not really anything to write home about, so next up was the turntable experiment. With that, I could scale the raw gyro readings so that "1.0" (in my case) corresponds to e.g. 45 RPM. Coupling that with an audio player using Qt MultimediaKit, one can translate the turntable movements into playback rate values and control the media player just as if it were a vinyl record:
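The scaling itself is just a proportionality; a minimal sketch of the idea (the reference value below is a placeholder, not the actual measured calibration value):

/* Sketch: once the raw gyro reading that corresponds to 45 RPM is known
 * from the turntable experiment, any reading can be expressed as a playback
 * rate (1.0 == normal speed, i.e. 45 RPM). */
#define RAW_GYRO_AT_45_RPM 6000.0f   /* hypothetical reference reading */

static float playback_rate_from_gyro(float raw_gyro_z)
{
    return raw_gyro_z / RAW_GYRO_AT_45_RPM;
}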



This week, I've been working on perfecting the calibration algorithm, cleaning up the API for the calibration part of the library and hooking everything up to Sebastian Madgwick's AHRS algorithm, visualizing the result with Qt3D.

Mittwoch, 18. Juli 2012

hidapi on Linux: Now supporting hidraw enumeration

As I've been posting about previously, I've been working on a hidapi patch to get device enumeration working correctly for Bluetooth HID devices on Linux. After about two months, and thanks to the great support and feedback of Alan Ott (the hidapi maintainer), the patch landed in mainstream hidapi yesterday.

How does this benefit the MoveOnPC project? It now allows us to use the PS Move Motion Controller under Linux via Bluetooth and without having to resort to source-code-level hacks. For most users, this will just be a transparent improvement.

In other news, I've been working with Benjamin yesterday on getting his OpenCV code working on Linux, and while it worked, the LED writing did cause a noticeable pause every 4 seconds. Fixing this by using my experimental "multithreading" branch did help, but we had to increase the delay for the initial calibration blinking. I hope to look into possibilities to improve this for Bluetooth devices on Linux, so that we get the same write performance as on OS X and Windows.

Dienstag, 26. Juni 2012

4th OpenCV PSMove Example (HTML Debug, CL-SDK, INI-parser, Linux RC1)

This is the fourth blog post about the implementation of a tracker for the colored sphere of the PSMove controller. This time it is less about the tracker itself and more about debugging and some other useful stuff.

1. HTML Debug

Since it is quite hard to understand what happens during the calibration process without a camera image to observe, an HTML trace of the calibration process is now created at runtime. Here are two examples of what the HTML trace looks like if the calibration fails or if it succeeds.

The first big 4x4 table shows the "blinks" of the color calibration process, already described in [1st Example Color Calibration]. The following row shows the result of the color estimation, i.e. the final mask that is used to estimate the color, the color the sphere was lit with, and the estimated color. After that, a test with the estimated color is performed on the images in the first column of the 4x4 table to see whether the color is a good match. Additionally, warnings and errors are listed under "Extended logging information", and finally a live camera image is shown (only if the calibration was a success).

2. CL-SDK Integration

On Windows, the PS Eye SDK from "Code Laboratories" has been integrated and is now used to acquire images (previously done via OpenCV) and to configure camera settings like exposure, auto-white-balance and so on. I would have preferred to stay with OpenCV; however, the CL-SDK neither allows accessing the camera with OpenCV and the CL-SDK simultaneously, nor are the camera settings applied to the camera permanently. In other words: in order to use the CL-SDK to switch off the auto-exposure, I also have to use the CL-SDK to grab the frames from the camera.

For this reason, a new "class" named "camera_control.h" was introduced that abstracts access to the camera (configuration, frame grabbing, initialization) and encapsulates v4l2, the CL-SDK and OpenCV, in order to provide a single object for accessing the camera and its configuration on Linux and Windows.
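To give an impression of what such an abstraction can look like, here is a small header sketch. The names and signatures below are illustrative assumptions, not necessarily the actual camera_control.h API:

#include <opencv/cv.h>   /* for IplImage */

/* Illustrative sketch of a camera abstraction that hides whether frames come
 * from the CL-SDK (Windows), v4l2 (Linux) or plain OpenCV capture. */
typedef struct _CameraControl CameraControl;   /* opaque handle */

CameraControl *camera_control_new(int camera_id);
void camera_control_set_parameters(CameraControl *cc,
                                   int auto_exposure, int auto_white_balance,
                                   int exposure, int gain);
IplImage *camera_control_query_frame(CameraControl *cc);
void camera_control_delete(CameraControl *cc);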

3. INI-parser

Depending on the camera access mode (CL-SDK or v4l2), the camera settings may be changed permanently (even across a restart). Therefore it is useful to make a backup of the camera configuration before modifying it and to restore it again on termination.
To store the configuration, and without reinventing the wheel, the "iniparser" from [ndevilla.free.fr/iniparser] is used to easily read and write INI files. It might also come in handy in the future to save lens-distortion parameters.
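As a rough illustration of the backup/restore idea, here is a sketch using the iniparser API as documented on that page. The file name, section and key names are made up for illustration, and the exact signatures may differ between iniparser versions:

#include <stdio.h>
#include "iniparser.h"

/* Sketch: back up a camera setting to an INI file before changing it. */
void backup_exposure(int exposure)
{
    dictionary *ini = dictionary_new(0);
    char value[32];
    snprintf(value, sizeof(value), "%d", exposure);
    iniparser_set(ini, "camera", NULL);            /* create the section */
    iniparser_set(ini, "camera:exposure", value);  /* store the old value */
    FILE *f = fopen("camera_backup.ini", "w");
    if (f) {
        iniparser_dump_ini(ini, f);
        fclose(f);
    }
    dictionary_del(ini);
}

/* Sketch: read the backed-up value again on termination. */
int restore_exposure(void)
{
    dictionary *ini = iniparser_load("camera_backup.ini");
    if (!ini)
        return 0x20;   /* fall back to a default if no backup exists */
    int exposure = iniparser_getint(ini, "camera:exposure", 0x20);
    iniparser_freedict(ini);
    return exposure;
}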

4. Linux RC1 (binaries available for Linux & Windows)

I am happy to announce that the demo application now runs perfectly under Linux (tested on Ubuntu 12.04) as well as on Windows 7. If you would like to give it a try, probably the easiest way is to take the binaries from the OpenCVExample.zip file within the zipball [ex4].




- startDemo.bat: click this to quickstart on Windows
- startDemo.sh: click this to quickstart on Linux (sudo & chmod a+x may be required)
- Debug(xxx): contains the binary for your platform *
- lib: contains prebuilt libraries of psmove-api, OpenCV, CLEyeMulticam *
- debug.html: click this to view the HTML trace within your browser (do not remove!)
- debug.js: contains the actual debug data (generated at runtime)
                                                         *: all binaries are built on either Win7 x32 or Ubuntu Linux 12.04 x32

enjoy!
   cherio benjamin


System requirements:

Montag, 11. Juni 2012

New labs application: Sensorfilter

If you've been watching the PS Move API repository recently, you might have noticed the new "labs/" subdirectory. In there, I'll push some small utilities that I use for debugging and visualization of the current inner workings of the library. The first tool to be put there is "sensorfilter", which is a quick visualization utility that I wrote for testing the new sensor filtering and calibration APIs. It makes use of both PSMoveFilter and PSMoveCalibration, as well as the original PSMove API. With a properly calibrated controller, you can get good readings (again, I've moved the controller a lot for this screenshot):



The slider at the left controls the current low-pass filter implementation's alpha value (i.e. how quickly the sensor values should converge to the newly-read value). As the sensor filter API is kept modular, it's possible to stick other filter implementations in there without having to change client applications (of course, if there are tweakable settings, the client application has to know about them). With the Sensor Filter utility, it's easy to try out new filters and to sanity-check the calibration code.
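For reference, the kind of filter that alpha value controls can be sketched in a few lines (a generic exponential low-pass, not necessarily the exact implementation in the library):

/* Sketch: alpha close to 1.0 follows new readings quickly, alpha close to
 * 0.0 smooths heavily but converges slowly. */
typedef struct {
    float alpha;
    float value;
} LowPassFilter;

static float lowpass_update(LowPassFilter *f, float reading)
{
    f->value = f->alpha * reading + (1.0f - f->alpha) * f->value;
    return f->value;
}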

The utility is available on github.com/thp/psmoveapi in the "labs/sensorfilter" subdirectory. Have a look at the README file to find out how you can build it. It depends on Qt 4 (tested with 4.8).

Plans for the next few days:

  • Have a look at the OpenCV status, provide feedback to Benjamin
  • Try improved sensor filtering algorithms and compare them
  • Finish the calibration backend code, supporting USB "calibration blob" modes
  • Clean up and document the code, extend the Python and Java bindings

Sensor calibration: Custom method and calibration blob

In the last few days, I've been working on getting a basic sensor data filtering infrastructure set up. In addition to that, I've added support for getting and storing the calibration data that is saved on the controller (the axis naming is a bit different in the PS Move API compared to what you will find on that Wiki page). In addition to the factory-set calibration data, I've also implemented support for a "custom" calibration scheme where the user has to do a 6-point tumble test, which will be used as anchor points for calibration.

The custom calibration scheme works a bit like "mccalibrate" from linmctool, but has (at the moment) a bit simpler algorithm (taking the average over 200 sensor readings). The new calibration tool that I wrote (c/calibrate.c) can detect if you have moved your controller too much while the readings were taken, and will ask you to do the given position again. A custom calibration could look like this (I've moved the controller a lot for the first "buttons up" reading to demo the move detector code):

~S/psmove/psmoveapi% build/calibrate 
Serial number: 04:76:6e:XX:XX:XX
Put the controller in the position 'bulb up' and press the Move button
All readings done for bulb up.
bulb up:
a (avg:     1 |  4359 |   188)
a (dev:    20 |    13 |    43)
m (avg:     2 |    -8 |  -421)
m (dev:     4 |     8 |     5)

Put the controller in the position 'bulb down' and press the Move button
All readings done for bulb down.
bulb down:
a (avg:  -165 | -4379 |  -113)
a (dev:    30 |    20 |    48)
m (avg:   -69 |   287 |  -435)
m (dev:     5 |    10 |     5)

Put the controller in the position 'buttons up' and press the Move button
All readings done for buttons up.
buttons up:
a (avg:   177 |    62 |  4173)
a (dev:  3940 |  2079 |   987)
m (avg:   -34 |    57 |  -250)
m (dev:    22 |    16 |     6)



  DEVIATION TOO HIGH - PLEASE RETRY

Put the controller in the position 'buttons up' and press the Move button
All readings done for buttons up.
buttons up:
a (avg:   -41 |   358 |  4362)
a (dev:    22 |    19 |    19)
m (avg:   -29 |    77 |  -250)
m (dev:     5 |    10 |     5)

Put the controller in the position 'buttons down' and press the Move button
All readings done for buttons down.
buttons down:
a (avg:  -128 |   422 | -4343)
a (dev:    28 |    21 |    25)
m (avg:   -61 |    84 |  -515)
m (dev:     5 |    10 |     7)

Put the controller in the position 'buttons left' and press the Move button
All readings done for buttons left.
buttons left:
a (avg:  4252 |   188 |    63)
a (dev:    38 |    41 |    49)
m (avg:    96 |    76 |  -392)
m (dev:     4 |    13 |     8)

Put the controller in the position 'buttons right' and press the Move button
All readings done for buttons right.
buttons right:
a (avg: -4458 |   338 |   -82)
a (dev:    26 |    24 |    35)
m (avg:  -187 |    85 |  -369)
m (dev:     5 |    13 |     6)
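The averaging and the "deviation too high" retry shown in the transcript above boil down to a mean and standard deviation per axis. A minimal sketch of that check (the threshold is an illustrative value, not the one used in calibrate.c):

#include <math.h>

#define NUM_READINGS 200

/* Sketch: compute mean and standard deviation over the readings of one axis
 * and report whether the controller was held still enough. */
static int average_axis(const int readings[NUM_READINGS],
                        float *mean, float *deviation)
{
    float sum = 0.0f, sq_sum = 0.0f;
    for (int i = 0; i < NUM_READINGS; i++) {
        sum += readings[i];
        sq_sum += (float)readings[i] * readings[i];
    }
    *mean = sum / NUM_READINGS;
    *deviation = sqrtf(sq_sum / NUM_READINGS - (*mean) * (*mean));
    return *deviation < 500.0f;   /* 0 means: deviation too high, please retry */
}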

Now that we have done a calibration run, we need a tool to display the results (also, we need a tool that reads the data from USB and stores it): Enter "dump_calibration". This tool will read and persist all calibration blobs of connected USB controllers (the "calibrate" tool will only store custom calibration, and only for Bluetooth controllers). When run with a Bluetooth controller (and again assuming that you have already done the USB fetching part), you can get output like this:

~S/psmove/psmoveapi% build/dump_calibration 
File: /Users/thp/.psmoveapi/04_76_6e_XX_XX_XX.calibration.txt
Flags: 3
Have USB calibration:
10 00 67 07 4f 7f a4 7f c2 90 68 6e 25 80 05 80
60 7f 10 80 bf 6e 75 90 c6 7f c5 7f c1 7f bb 90
33 80 47 7f c7 6e 90 7f d2 08 db 7f 57 80 47 80
d7 07 d2 7f 58 80 4b 80 00 00 00 00 00 00 00 00
00 01 ce 08 e0 01 04 97 53 80 5b 80 e0 01 cc 7f
7b 90 39 80 e0 01 dd 7f 4d 80 64 94 f4 07 d1 d7
12 41 72 fc d0 c0 c9 3e 0d c2 a4 1c 6f 3f a9 90
7b 3f 37 5c 71 3f 02 1d 32 3f 87 69 a1 3d 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
# Orientation #0: ( -177 |   -92 |  4290)
# Orientation #1: (-4504 |    37 |     5)
# Orientation #2: ( -160 |    16 | -4417)
# Orientation #3: ( 4213 |   -58 |   -59)
# Orientation #4: (  -63 |  4283 |    51)
# Orientation #5: ( -185 | -4409 |  -112)
# Gyro X, 80 rpm: ( 5892 |    83 |    91)
# Gyro Y, 80 rpm: (  -52 |  4219 |    57)
# Gyro Z, 80 rpm: (  -35 |    77 |  5220)
# byte at 0x3F: 00

Have custom calibration:
         ax         ay         az         mx         my         mz
#0:       1.27    4359.10     187.74       2.04      -8.18    -421.33 
#1:    -164.57   -4378.57    -112.98     -69.21     286.53    -435.20 
#2:     -41.38     358.21    4361.73     -28.52      77.03    -249.96 
#3:    -127.78     421.94   -4342.93     -61.33      83.87    -514.74 
#4:    4251.81     187.99      62.83      95.57      75.67    -391.71 
#5:   -4458.25     338.40     -81.67    -187.27      85.35    -369.02 

This calibration file can be used by the new PSMoveCalibration API that wraps a PSMove object and provides calibration features on top of it. The function that users will probably use most is psmove_calibration_map() - it takes as input 3, 6 or 9 integer values and converts them into corresponding float values that have been normalized.

With the tumble test ("custom calibration"), we only get values for the accelerometer and magnetometer - for calibrating the gyro, we would need to have access to a turntable and control its speed - something that's not impossible to do, but very hard. Thanks to the research done by other MoveOnPC people, we can extract the information from the USB calibration blob - it stores the expected readings for 80 rotations/minute (according to the wiki page).
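Turning that stored reading into a usable gyro gain is then simple proportionality; here is a hedged sketch of the general idea (not the exact code in the library):

/* Sketch: the USB calibration blob stores the expected raw gyro reading at
 * 80 rotations per minute, so any raw reading can be converted to rad/s. */
static float gyro_rad_per_sec(float raw, float raw_at_80rpm)
{
    const float pi = 3.14159265f;
    const float rad_per_sec_at_80rpm = 80.0f * 2.0f * pi / 60.0f;  /* ~8.38 rad/s */
    return raw * rad_per_sec_at_80rpm / raw_at_80rpm;
}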

You can find the new code on github.com/thp/psmoveapi - expect some rough edges and more updates in the coming days and weeks :)

Donnerstag, 7. Juni 2012

3rd OpenCV PSMove Example (Region Of Interest)

This is the third blog post about the implementation of a tracker for the colored sphere of the PSMove controller. This time it is about optimizing the tracker's computation time, and some other stuff.

1. Increasing calculation speed

Thanks to Budaházi Viktor from [moveframework], I learned on the [mailinglist] that 70 FPS is probably not fast enough, as applications using the psmoveapi may already put a heavy load on the system.

Therefore I introduced a technique called ROI (region of interest) [opencv roi example] in order to reduce calculation time. The main idea is that instead of evaluating the whole picture, only a region of the picture is evaluated in which it is very likely to find what we are searching for. This can speed up calculations tremendously; however, the framerate is no longer constant, as the ROI is shrunk or extended at runtime.

So here is what I did: I defined the application to support an arbitrary number of ROI levels, each level being 40% smaller than the one above. In the demo I chose to have 5 levels of ROI, so that:
Level 1:                    640x480 px (full camera image)
Level 2: 60% of (1) --> 383x383 px
Level 3: 60% of (2) --> 229x229 px
Level 4: 60% of (3) --> 137x137 px
Level 5: 60% of (4) -->  82x82 px

In the main loop of the tracker, I calculate the bounding box of the sphere found in the current image. For the next iteration of the loop, the ROI level is then set to one that can hold that bounding box (multiplied by 2). If the sphere was not found at all, I go upwards in the hierarchy of ROI levels until the sphere is found again. The center of the ROI is always set to the last location where the sphere was found. Future implementations might use movement prediction and additional sensor data to shift the center towards the direction the user is likely to move the controller. This would reduce switching between ROI levels during fast movements.
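A minimal sketch of that level bookkeeping (simplified, not the actual tracker code):

/* Sketch: pre-compute the ROI sizes (each level 60% of the previous one)
 * and pick the smallest level that still holds twice the sphere's bounding
 * box from the last frame. */
#define ROI_LEVELS 5

static int roi_size[ROI_LEVELS];

static void init_roi_levels(int full_size)
{
    roi_size[0] = full_size;                        /* level 0: full image */
    for (int i = 1; i < ROI_LEVELS; i++)
        roi_size[i] = (int)(roi_size[i - 1] * 0.6f);
}

static int pick_roi_level(int bbox_width, int bbox_height)
{
    int needed = 2 * (bbox_width > bbox_height ? bbox_width : bbox_height);
    for (int i = ROI_LEVELS - 1; i >= 0; i--) {
        if (roi_size[i] >= needed)
            return i;        /* smallest ROI that still fits */
    }
    return 0;                /* fall back to the full camera image */
}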

At the top of the video you can now see the framerate, the expected sphere color, the average luminance, the camera exposure and the ROI size. The white square in the image denotes the size and location of the current ROI. Note how the framerate increases when I move further away from the camera. With the help of the ROI I now get framerates of up to 1500 FPS :P.

2. Handling distortions from (fluorescent) light sources

I also figured out that having a fluorescent light source in a room may cause the calibration to fail and can strongly influence the accuracy of the tracking. Due to their mode of operation, these lamps have, just like the camera, something similar to a refresh rate. As the camera and the light sources are not synchronized and do not have the same refresh rate, the video feed seems to flicker, i.e. there are travelling darker/brighter horizontal or diagonal regions within the camera image. [light flickering]

In one of the previous posts, I explained that I perform the color calibration with the help of a sequence of difference images. The light flickering causes small but recognizable differences in these difference images, which are then in turn regarded as the sphere being lit/unlit.

Increasing the number of image pairs taken already reduced false detections considerably. However, it is still not enough, as false detections remain in all image pairs, and further increasing the number of pairs may neither be bearable for the user nor is it clear whether it would be beneficial.

I learned from a colleague about morphological operations like [dilation] and [erosion], which were quite helpful to cleanse the image of smaller false detections.

1) original image
2) difference image
3) thresholded image
4) eroded/dilated image

Notice how, in the lower left corner, the lamp causes a larger white area in the thresholded image (3) and how it is removed by the subsequent erosion and dilation in image (4).
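In OpenCV's C API that clean-up step can look roughly like this. It is only a sketch: the threshold and iteration counts are illustrative, and it assumes a single-channel (grayscale) difference image:

#include <opencv/cv.h>

/* Sketch: threshold the difference image, then remove small false
 * detections (e.g. lamp flicker) with an erosion followed by a dilation. */
void cleanse_difference_image(IplImage *diff, IplImage *mask)
{
    cvThreshold(diff, mask, 20, 255, CV_THRESH_BINARY);  /* (2) -> (3) */
    cvErode(mask, mask, NULL, 1);    /* removes small white speckles */
    cvDilate(mask, mask, NULL, 1);   /* restores the size of the real blob */
}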

3. Choosing the right camera exposure time

Depending on the current average luminance in the camera image, the camera exposure is chosen appropriately. I found out that an exposure value smaller than 0x10 causes colors to be very grey-ish, while going higher than 0x40 increases motion blur (for a fast-moving controller) and makes the sphere look very white-ish due to the long exposure.

At the very beginning of the calibration I therefore start with exposure 0x10 and go step by step up to exposure 0x40 until I get an average luminance of 25.

The average luminance is defined in my case as:

IplImage* cameraImage;  /* current camera frame */
CvScalar avgColor = cvAvg(cameraImage, NULL);
float averageLuminance = (avgColor.val[0] + avgColor.val[1] + avgColor.val[2]) / 3;

If the resulting average luminance is above 0x20, I reduce the sphere's brightness to 70%, and if it is above 0x30, I reduce it to 50%. This assures that the sphere's color does not look too white-ish at longer exposure times.
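Putting the two rules together, the adaptation could be sketched like this. The step size and the average_luminance() helper are placeholders, not the actual implementation:

/* Placeholder: set the exposure, grab a frame and return its average luminance. */
float average_luminance(int exposure);

/* Sketch: step the exposure up from 0x10 towards 0x40 until the image is
 * bright enough (average luminance >= 25). */
int pick_exposure(void)
{
    int exposure = 0x10;
    while (exposure < 0x40 && average_luminance(exposure) < 25.0f)
        exposure += 0x08;   /* illustrative step size */
    return exposure;
}

/* Sketch: dim the sphere if the image got too bright at the chosen exposure. */
float pick_sphere_dimming(float luminance)
{
    if (luminance > 0x30)
        return 0.5f;   /* very bright image: dim the sphere to 50% */
    if (luminance > 0x20)
        return 0.7f;   /* fairly bright: dim to 70% */
    return 1.0f;
}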

Well, that's it for this time. ...

system requirements: