Extending Tekkotsu to New Platforms
Sensor Input and the EPoll Server
To perform cognitive robotics, the robot needs to receive input from the world so that it can interact with it meaningfully. We didn't have a camera that we could use, although adding one should be straightforward (just a matter of controlling it from Linux and getting its input to the right place in the Tekkotsu pipeline). We did have a force sensor that connects to the controller board we are using with the arm. However, this is not nearly as simple to get working as one would think. The board doesn't simply broadcast the value of the force sensor at regular intervals; instead, it must be polled, and that polling support does not yet exist in Tekkotsu. Once Ethan adds the necessary hooks to the simulator core, though, the force sensor should be accessible just like any sensor on the Aibo. Of course, the current positions of the servos can never be read back, because communication with the board is one-way (hence the P control).
So we have an arm that is blind, has no idea where it is, and cannot reach for objects even if it could see them. How can we do cognitive robotics with such a piece of equipment?
We've constructed a new background behavior for Tekkotsu that we call the EPoll server. EPollServiceBehavior.h exposes the epoll(7) interface to its clients and dispatches events on a per-file-descriptor basis. The file EventBase.diff introduces a new EGID for this server. The source ID of an event is the file descriptor in question. The activate and deactivate types are used to add and remove FDs from the server's watched set; status signals that one or more of the requested conditions have been met. The server spawns a pthread to do the actual waiting, allowing the rest of Tekkotsu to continue uninterrupted.
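The underlying pattern is easy to show in isolation. The sketch below is not the actual EPollServiceBehavior code, just a self-contained illustration of the same idea: a file descriptor is added to an epoll set (the "activate" step), and a dedicated pthread blocks in epoll_wait(), so ready FDs can be reported without stalling anything else.

// Minimal sketch of the EPoll server's waiting pattern (illustrative only):
// a dedicated pthread blocks in epoll_wait() and reports ready FDs, so the
// rest of the process never has to block.
#include <sys/epoll.h>
#include <pthread.h>
#include <unistd.h>
#include <cstdio>

static void* epollWaitLoop(void* arg) {
    int epfd = *static_cast<int*>(arg);
    epoll_event events[16];
    for (;;) {
        int n = epoll_wait(epfd, events, 16, -1);   // block until some FD is ready
        if (n < 0) break;
        for (int i = 0; i < n; ++i) {
            // The real behavior would post a "status" event here, with the
            // ready file descriptor as the event's source ID.
            printf("fd %d ready (events 0x%x)\n", events[i].data.fd,
                   (unsigned)events[i].events);
        }
    }
    return 0;
}

int main() {
    int epfd = epoll_create1(0);
    epoll_event ev;
    ev.events = EPOLLIN;            // "activate": watch this FD for readability
    ev.data.fd = STDIN_FILENO;
    epoll_ctl(epfd, EPOLL_CTL_ADD, STDIN_FILENO, &ev);

    pthread_t waiter;
    pthread_create(&waiter, 0, epollWaitLoop, &epfd);
    pthread_join(waiter, 0);        // a real service would keep doing other work
    return 0;
}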
As a result, we can now construct blocking services, such as our sample NetworkMotionServer.h. Essentially, it is a version of ControllerGUI that has no GUI. It uses an extremely simple (and inelegant) protocol, but it works as a demo of how one could use the EPollServiceBehavior interface to write a real application. It listens for UDP packets on port 15000. The commands are case-sensitive. More than one client may be connected at once, though this is not especially useful at the moment. Here's a sample transcript of an exchange between client and server; <- denotes data sent to the server, and -> denotes data sent back to the client.
There is an immediate ping/pong for testing server liveness:
<- PING
-> PONG
One can get forward kinematics data per link (FK link#); the reply is the link's 4x4 homogeneous transformation matrix, one row per line:
<- FK 3
-> FK 3 ROW 0 : 1.000000E+00 0.000000E+00 0.000000E+00 1.540000E+02
-> FK 3 ROW 1 : 0.000000E+00 -4.371139E-08 -1.000000E+00 -6.731554E-06
-> FK 3 ROW 2 : 0.000000E+00 1.000000E+00 -4.371139E-08 2.340000E+02
-> FK 3 ROW 3 : 0.000000E+00 0.000000E+00 0.000000E+00 1.000000E+00
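A client can rebuild that transform fairly directly from the ROW lines. The fragment below is purely illustrative (a hypothetical helper, not part of NetworkMotionServer):

// Parse one "FK <link> ROW <r> : a b c d" reply line into row r of a 4x4 matrix.
#include <cstdio>

bool parseFkRow(const char* line, double T[4][4]) {
    int link, row;
    double a, b, c, d;
    if (sscanf(line, "FK %d ROW %d : %lf %lf %lf %lf",
               &link, &row, &a, &b, &c, &d) != 6 || row < 0 || row > 3)
        return false;
    T[row][0] = a; T[row][1] = b; T[row][2] = c; T[row][3] = d;
    return true;
}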
One can also ask the arm to run motion sequences (RUNMS filename):
<- RUNMS arm_pickup_move.mot
-> MOTMANREG 3
[delay while arm runs]
-> MOTMANRESP 3 STATUS
-> MOTMANRESP 3 DEACTIVATE
One can also solve point-wise inverse kinematics (IKP link X Y Z); orientation-aware inverse kinematics is not yet available:
<- IKP 4 1.0 1.0 1.0
-> MOTMANREG 4
[delay while arm runs]
-> MOTMANRESP 4 STATUS
-> MOTMANRESP 4 DEACTIVATE
The MOTMANREG message tells the client which ID is associated with the command just sent, and it is dispatched before the server reenters the command loop. MOTMANRESP messages are delivered from the event processing framework (similar, in a sense, to the telepathy endeavor) to the client that requested the particular command.
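To make the client side concrete, here is a rough sketch of a standalone program that speaks this protocol over ordinary POSIX UDP sockets. It is not code that ships with the server; the server address and the one-reply-per-datagram framing are assumptions on our part.

// Hypothetical NetworkMotionServer client (illustrative only).  Assumes the
// server runs on the local machine and that each reply line is its own datagram.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);

    sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_port = htons(15000);                     // NetworkMotionServer's port
    inet_pton(AF_INET, "127.0.0.1", &server.sin_addr);  // assumed server address

    const char* cmd = "PING";                           // commands are case-sensitive
    sendto(sock, cmd, strlen(cmd), 0, (sockaddr*)&server, sizeof(server));

    char buf[256];
    ssize_t n = recvfrom(sock, buf, sizeof(buf) - 1, 0, 0, 0);  // expect "PONG"
    if (n > 0) {
        buf[n] = '\0';
        printf("-> %s\n", buf);
    }
    close(sock);
    return 0;
}

An FK, RUNMS, or IKP command would be sent the same way, with the client reading the MOTMANREG reply immediately and any MOTMANRESP replies as they arrive later.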
It is our hope that this will serve as an interface between the arm and other devices (like the Aibo) that are capable of sensory input, allowing meaningful cognitive robotics to be performed with the arm. Even though the arm can only run motion sequences at the moment, basic state machines could be constructed with the Aibo and the arm cooperating to play a game or something to that effect. Sadly, we did not have quite enough time to implement Tic-Tac-Toe, our original idea.