Results
We achieved good results for many different sets of objects. For example, using a scene with two green bottles, we captured the following stereo pair:
Left Image
Right Image
For this test, we set the behavior to look at green objects, and obtained this mask for the scene:
Scene mask (large green objects)
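The mask above comes from Tekkotsu's own color segmenter; as a rough sketch of the general idea, a green-object mask can be built by thresholding the green channel against the other two. The threshold values below are made up for illustration and are not the ones the behavior actually uses.

```python
import numpy as np

def green_mask(img, min_green=100, dominance=1.3):
    """Boolean mask of pixels whose green channel dominates red and blue.

    img is an H x W x 3 uint8 RGB array. The thresholds are illustrative,
    not the values used by the Tekkotsu segmenter.
    """
    r = img[..., 0].astype(int)
    g = img[..., 1].astype(int)
    b = img[..., 2].astype(int)
    return (g >= min_green) & (g > dominance * r) & (g > dominance * b)

# Tiny example: one green pixel, one gray pixel
img = np.array([[[30, 200, 40], [120, 120, 120]]], dtype=np.uint8)
print(green_mask(img))  # [[ True False]]
```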
Running the depth mapping algorithm, we obtained the following depth map (lighter colors indicate closer objects):
Result image
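The depth mapping step can be sketched as sum-of-absolute-differences block matching: each window in the left image is compared against horizontally shifted windows in the right image, and the shift with the best match is that pixel's disparity. This is a simplified illustration of the general stereo technique, not the project's actual implementation; the window size and disparity range are arbitrary.

```python
import numpy as np

def disparity_map(left, right, max_disp=16, block=5):
    """Naive SAD block-matching stereo on grayscale float arrays.

    For each left-image pixel, slide a window leftward through the right
    image and keep the shift with the smallest sum of absolute differences.
    Larger disparity means a closer object.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            for d in range(max_disp + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                sad = np.abs(patch - cand).sum()
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp

# Synthetic pair: a bright square shifted 3 px between the two views
left = np.zeros((20, 30)); left[8:13, 15:20] = 1.0
right = np.zeros((20, 30)); right[8:13, 12:17] = 1.0
d = disparity_map(left, right, max_disp=6, block=5)
print(d[10, 17])  # 3
```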
Overlaying this onto the lefthand image, we can clearly see that the algorithm has correctly labeled the righthand bottle as closer to the Chiara, and the lefthand one as further away:
Result overlaid on scene
We also performed an experiment to show that depth mapping could help the Chiara overcome the planar world assumption. We used a scene with two balls, one on the ground and one on a pedestal, positioned so that both appeared at the same height in the Chiara's camera image:
"Pedestal" scene
Normally, Tekkotsu would place these two balls at the same depth because of the planar world assumption. This is incorrect, since the righthand ball is actually much closer and simply higher up, and the output of the local map builder shows exactly this mistake:
Local map
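The failure follows from how the planar world assumption recovers distance from a single image: the ray through a pixel is intersected with the floor, so the estimated distance depends only on the pixel's image row. Any object raised off the floor is pushed farther back along that ray. The toy calculation below uses made-up camera numbers, not the Chiara's actual calibration.

```python
import math

def planar_ground_distance(row, horizon_row, cam_height, focal_px):
    """Distance to a point assumed to lie on the floor, from its image row.

    The pixel's viewing ray is intersected with the ground plane. All
    parameters here (horizon row, camera height, focal length in pixels)
    are hypothetical, for illustration only.
    """
    # Angle of the ray below the horizon for this pixel row
    angle = math.atan((row - horizon_row) / focal_px)
    if angle <= 0:
        return float('inf')  # at or above the horizon: ray never hits floor
    return cam_height / math.tan(angle)

# Two objects whose bases appear at the same image row get the same
# estimated distance, even if one is actually raised on a pedestal
# and much closer to the camera.
print(planar_ground_distance(240, 120, 0.3, 400))
print(planar_ground_distance(240, 120, 0.3, 400))  # identical by construction
```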
In fact, the Chiara sees the green ball as farther back and larger than the blue basketball, which is obviously not the case. However, applying depth mapping to this scene, we obtained the following depth map:
Result of depth mapping
Here, we can clearly see that the righthand ball (the green one on the pedestal) is much closer than the lefthand one (the blue basketball). From this depth map, we could build a more accurate local map that places the balls at their proper sizes and depths. This is currently unimplemented, but it is a topic for future work.
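To place objects at metric depths in a local map, the disparities would be converted to depth with the standard stereo relation Z = fB/d (focal length times baseline over disparity). The sketch below uses hypothetical camera parameters, not the Chiara's actual focal length or stereo baseline.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard stereo triangulation: Z = f * B / d.

    focal_px and baseline_m are placeholders for illustration; zero
    disparity corresponds to a point at infinity.
    """
    if disparity_px <= 0:
        return float('inf')
    return focal_px * baseline_m / disparity_px

# With a hypothetical 400 px focal length and a 6 cm baseline:
print(depth_from_disparity(8, 400, 0.06))   # 3.0 (meters)
print(depth_from_disparity(24, 400, 0.06))  # 1.0 -- three times the disparity,
                                            # one third the depth
```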