The ALVisionToolbox module is a vision module that aggregates miscellaneous features which do not warrant a dedicated module of their own.
You will find here functions that:
When calling this function, auto-gain and auto-exposure are turned on for one second - if they were not already - so that the camera can automatically adjust its gain and exposure values. The gain value is then reported by the camera itself.
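The gain-checking function is not named in this excerpt, so the sketch below assumes a hypothetical getCameraGain()-style method on an ALVisionToolbox proxy; adapt the call to the actual API:

    # Minimal sketch; "getCameraGain" is a hypothetical method name used for
    # illustration only -- check the ALVisionToolbox API reference for the real one.
    from naoqi import ALProxy

    ROBOT_IP = "nao.local"   # adjust to your robot's address
    ROBOT_PORT = 9559

    vision_toolbox = ALProxy("ALVisionToolbox", ROBOT_IP, ROBOT_PORT)

    # Auto-gain/auto-exposure runs for about one second before the camera
    # reports its gain value, so expect a short delay on this call.
    gain = vision_toolbox.getCameraGain()
    print("Current camera gain: %s" % gain)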
Checking the camera gain value has two advantages:
Backlighting occurs when there is high contrast between a light source (e.g. a window) and the observed scene.
To identify this, the backlighting method first analyses whether more than 2% of the pixels in the current frame are clipped (a phenomenon that occurs when a pixel sensor receives more light than it can “store”, resulting in a ‘255’ value after digital conversion for that pixel). If so, it then checks whether more than 60% of the pixels have a luminance below ‘70’.
The final result of this method is a weighted combination of both the percentage of pixels at ‘255’ and the percentage of pixels below ‘70’.
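As an illustration of the heuristic described above (not the module’s actual implementation), a sketch applying the same thresholds to a greyscale image with NumPy could look like this; returning a boolean instead of the weighted combination is a simplification:

    # Sketch of the backlighting heuristic described above, applied to a
    # greyscale image held in a NumPy array. The thresholds (2%, 255, 60%, 70)
    # come from the text; the boolean result is a simplification of the
    # module's weighted combination.
    import numpy as np

    def looks_backlit(gray_image):
        pixels = gray_image.astype(np.uint8).ravel()
        clipped_fraction = np.mean(pixels == 255)  # saturated pixels
        dark_fraction = np.mean(pixels < 70)       # low-luminance pixels

        # Step 1: more than 2% of the pixels are clipped at 255.
        if clipped_fraction <= 0.02:
            return False

        # Step 2: more than 60% of the pixels have a luminance below 70.
        return dark_fraction > 0.60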
This function takes three successive JPEG pictures at the highest camera resolution (NAO v3.3 and below have a VGA camera, NAO v4 and above have an HD 960p camera).
Pictures are stored on the robot in the user data folder (on 1.10 versions this was the /home/nao/naoqi/share/naoqi/vision/ directory; on 1.12 it is /home/nao/.local/share/naoqi/vision). To get them onto your PC, you can use ssh or the FTP browser in Choregraphe.
Note
Taking 3 images allows the user to select the best one, as can be done on some compact cameras (not to be confused with bracketing mode).
Note
A button is available in the Monitor GUI for taking pictures by calling this function.
Note
If you need to shorten the delay between calling the function and grabbing the first image, you can call the halfPress function. On a digital camera, a half-press sets the autofocus; here, it turns on the frame grabber if it was off and switches it to the highest resolution mode if the resolution was lower. Calling halfPress again or taking a picture returns to the previous mode.
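Only halfPress is named in this excerpt; assuming a hypothetical takePicture(name)-style method alongside it, the sequence could look like:

    # Sketch; "takePicture" is a hypothetical method name used for illustration,
    # only halfPress is documented in the text above.
    from naoqi import ALProxy

    vision_toolbox = ALProxy("ALVisionToolbox", "nao.local", 9559)

    # Optional: warm up the frame grabber at full resolution so that the
    # first shot comes faster, mimicking the half-press of a compact camera.
    vision_toolbox.halfPress()

    # Take the three successive JPEG pictures; they are stored in the user
    # data folder on the robot (see the paths above).
    vision_toolbox.takePicture("my_snapshot")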
With this function, you can regularly grab an image from NAO. Several parameters are available:
Note
You can run up to 8 instances of this function, recording images at different time intervals and in different formats for different uses.
Warning
TakePictureRegularly is a blocking function. To stop a specific instance, call the stopTPR function, providing it with the path and the root of the name corresponding to that instance, as well as the file format; together, these elements identify a unique instance.
Finally, if you want to review the instances currently running, call the logTPRInstanceInfo function. It prints this information to the logger.
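Since the call blocks, one way to use it from Python is through ALProxy’s post object, which runs the call in its own thread. The exact parameter lists are not given in this excerpt, so the arguments below (path, name root, file format, interval) are assumptions:

    # Sketch; the exact takePictureRegularly and stopTPR signatures are not
    # given above, so the arguments shown here are assumptions.
    from naoqi import ALProxy

    vision_toolbox = ALProxy("ALVisionToolbox", "nao.local", 9559)

    path = "/home/nao/recordings/"   # hypothetical destination folder
    name_root = "patrol"
    file_format = "jpg"

    # takePictureRegularly blocks, so launch it in its own thread via .post.
    task_id = vision_toolbox.post.takePictureRegularly(path, name_root,
                                                       file_format, 10.0)

    # ... later: list the running instances, then stop this particular one.
    vision_toolbox.logTPRInstanceInfo()
    vision_toolbox.stopTPR(path, name_root, file_format)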
These two functions are used to record AVI videos on the robot.
Note
Video recording can be quickly accessed using Monitor, which internally uses VisionToolbox’s functions.
They are quite straightforward to use. Just call startVideoRecord, passing the video name as an argument. NAO will then start recording what it sees with its active camera. The recording is done at a frame rate of 15 frames per second, 320*240 pixels, with MJPG compression. Please note that the frame rate may drop below 15 fps depending on what else is running on the robot.
Note
startVideoRecord is not a blocking function: it launches a separate thread that performs the video recording.
When you want to stop recording, call stopVideoRecord. The recording is then saved on the robot in the user data section (the /home/nao/naoqi/data/vision/ directory) using the file name you specified.
Warning
Please note that you can only have one active recording at a time. To check whether a recording is in progress, call isVideoRecording().
Note
VisionToolbox also offers a “startVideoRecord_adv” function, which gives you access to more parameters for your video recording (frame rate, format, resolution, etc.). Please refer to ALVisionToolboxProxy::startVideoRecord_adv for exact details.
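A typical session using the functions named above might look like this (the robot address and sleep duration are placeholders):

    # Sketch of a short recording session using the functions named above.
    import time
    from naoqi import ALProxy

    vision_toolbox = ALProxy("ALVisionToolbox", "nao.local", 9559)

    if not vision_toolbox.isVideoRecording():
        # Non-blocking: recording runs in its own thread at 15 fps,
        # 320*240 pixels, MJPG compression.
        vision_toolbox.startVideoRecord("myVideo")

    time.sleep(10.0)  # record roughly ten seconds

    # The AVI file is saved in /home/nao/naoqi/data/vision/ under the
    # name given to startVideoRecord.
    vision_toolbox.stopVideoRecord()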
This function is useful when you need the absolute color information in the images.
As you might know, the automatic white balance usually performs correctly. However, color perception drifts with the light temperature (expressed in K), and this still affects the automatic white balance even though that mode aims to reduce the effect. Moreover, the automatic white balance cannot work properly when there is a dominant color in the image. For instance, finding a blue ball by its color will be a problem when the automatic white balance is active, as the blue shape will mislead the system while it tries to set the correct white balance for a neutral grey.
A solution is to look at a white or grey reference pattern and then disable the automatic white balance. This is what the setWhiteBalance function does: NAO looks at his hands and freezes the white balance settings.
Note
To avoid the white balance calibration being disturbed by the LEDs of NAO’s eyes, they are turned off beforehand and restored at the end of the process.
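Freezing the white balance before running a color-based detection could then be as simple as the sketch below; whether setWhiteBalance takes any arguments is not stated in this excerpt, so it is called here with none:

    # Sketch; setWhiteBalance is named above, but its exact signature is an
    # assumption -- it is called here without arguments.
    from naoqi import ALProxy

    vision_toolbox = ALProxy("ALVisionToolbox", "nao.local", 9559)

    # NAO looks at his hands (a neutral reference), the eye LEDs are switched
    # off during the calibration, and the white balance is then frozen.
    vision_toolbox.setWhiteBalance()

    # ... color-based processing now sees stable colors ...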
The easiest way to get started with most of these features is to use the corresponding Choregraphe vision boxes (Is BackLit, Is in Darkness). Recording capabilities can easily be used through the camera panel of Monitor.