Data format
Data files are generally named with a .dat extension.
Each data file contains a collection of frames (snapshots) of sensor data.
The file is organized as a collection of lines. Each line begins with a
keyword, followed by that keyword's data. Any line beginning with a word other
than a known keyword is treated as a comment line. A file contains required
keywords and (possibly) optional keywords. The required keywords are
@OPEN, @SENS, and #ROBOT.
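For concreteness, here is a minimal sketch (in Python; the format itself is
language-neutral) of that dispatch rule. The keyword set is collected from the
sections below, and the function name classify is only illustrative.

  KNOWN_KEYWORDS = {
      "@OPEN", "@SENS",
      "#ROBOT", "#SONAR", "#LASER", "#LASER-Y", "#CAMERA", "#VISION",
      "#RECTANGLE", "#ACTION", "#SET-ROBOT", "#SPEED", "#SET-GOAL",
      "#STOP", "#CATEGORY",
  }

  def classify(line):
      """Return (keyword, rest) for a data line, or (None, line) for a comment."""
      parts = line.rstrip("\n").split(None, 1)
      if parts and parts[0] in KNOWN_KEYWORDS:
          return parts[0], (parts[1] if len(parts) > 1 else "")
      return None, line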
@OPEN date time
Example: @OPEN 10-12-93 16:03:00.557068
This marks the first entry in a sequence, stating when that sequence starts.
A sequence is simply a series of robot data snapshots and (optionally)
actions. Date = mm-dd-yy, Time = hr:mm:ss.microsecs.
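The date/time layout above maps directly onto a strptime format string. A
minimal sketch (the helper name parse_stamp is made up here):

  from datetime import datetime

  def parse_stamp(date_str, time_str):
      # mm-dd-yy and hr:mm:ss.microsecs, e.g. "10-12-93" "16:03:00.557068"
      return datetime.strptime(date_str + " " + time_str, "%m-%d-%y %H:%M:%S.%f")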
@SENS date time
Example: @SENS 10-07-93 20:07:43.800980
Marks the beginning of a new sensation, or snapshot of sensor data.
#ROBOT x y theta
Example: #ROBOT 0.000148 2.256479 0.007522
x, y, and theta give the sensed (via dead reckoning) robot position. At the
start of each sequence, x, y, and theta are all 0. x and y are in cm; theta is
in degrees.
#SONAR n: s1 s2 .. s24
Example: #SONAR 24: 807.239990 249.455994 148.871994 121.440002 103.152000 ...
n is the number of sonar readings.
s1..s24 are sonar readings ranging from 8 cm to 808 cm. If no sonar echo is
heard, the reading is reported as the maximum sonar range.
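A sketch of parsing a #SONAR line, assuming the text after the keyword is
"n:" followed by n whitespace-separated readings. The 1 cm tolerance used to
spot maximum-range (no echo) readings is a guess, not part of the format.

  SONAR_MAX_CM = 808.0

  def parse_sonar(payload):
      # payload is everything after the "#SONAR" keyword,
      # e.g. "24: 807.239990 249.455994 ..."
      count_str, values_str = payload.split(":", 1)
      readings = [float(v) for v in values_str.split()]
      assert len(readings) == int(count_str)
      # None marks sonars that heard no echo (maximum-range reading).
      return [None if r > SONAR_MAX_CM - 1.0 else r for r in readings]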
#LASER n: x1 y1 x2 y2 ... xn yn
Example: #LASER 48: -14.114452 352.500000 0 0 -7.618582 352.500000 ...
n is the number of laser readings.
xi and yi are the x and y coordinates of the ith laser reading, in a coordinate
frame centered on the robot, with the y axis (x=0) pointing straight forward.
y varies from 40 cm to 350 cm; x varies. If no laser return is detected, the
reading is reported as the coordinate (0 0).
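A sketch of parsing a #LASER line, assuming n counts (x, y) pairs so that
2*n numbers follow the colon:

  def parse_laser(payload):
      # payload e.g. "48: -14.114452 352.500000 0 0 -7.618582 352.500000 ..."
      count_str, values_str = payload.split(":", 1)
      values = [float(v) for v in values_str.split()]
      points = [(values[2 * i], values[2 * i + 1]) for i in range(int(count_str))]
      # Drop the (0, 0) entries, which mark "no return".
      return [(x, y) for (x, y) in points if (x, y) != (0.0, 0.0)]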
#LASER-Y n: y1 y2 ..yn
Example: #LASER-Y 51: ...
n is the number of laser readings (only y coordinates are given).
y1..yn are laser readings varying from 40 cm to 350 cm. If no laser return is
detected, the reading is reported as zero.
#CAMERA tilt pan n: l11 l12 l13 l21 l22 l23 ... (groups of three)
Example: #CAMERA -23.0 12.2 3: 12 344 1200
tilt = tilt angle (in deg)
pan = pan angle (in deg)
n: number of markers found * 3; ranges from 0 to 6 (the markers are all of
   different colors). n is exactly the number of values that follow, with
   three values per marker; divide n by three to get the number of markers
   found.
lj1: horizontal pixel value (x) in camera coordinates
lj2: vertical pixel value (y) in camera coordinates
lj3: strength of the detected marker (by find.c)
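A sketch of parsing a #CAMERA line into (tilt, pan, markers), following the
groups-of-three layout described above; the function name is illustrative.

  def parse_camera(payload):
      # payload e.g. "-23.0 12.2 3: 12 344 1200"
      head, values_str = payload.split(":", 1)
      tilt_str, pan_str, n_str = head.split()
      values = [float(v) for v in values_str.split()]
      assert len(values) == int(n_str)       # n counts values, not markers
      # Each marker is (horizontal pixel, vertical pixel, detection strength).
      markers = [tuple(values[i:i + 3]) for i in range(0, int(n_str), 3)]
      return float(tilt_str), float(pan_str), markers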
#VISION x y xscale yscale:rgb*(x*y)
Example: #VISION 150 124 1.000000 1.000000:...
x and y give the dimensions of the image. xscale and yscale give the scaling
factors for x and y (not used thus far). This is followed by a colon and then
the x*y color pixel values. Each pixel is described by three values: the red,
green, and blue intensity levels. Each of these is stored as a single 8-bit
integer (which reads out as a single ASCII character in most editors). The
original camera image is 600x496; it is converted into a 150x124 image by
reducing each 4x4 square of pixels to a single pixel (averaging their values).
The (0,0) coordinate of the image is the top left corner; (150,124) is the
bottom right.
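Since each pixel value is a raw 8-bit byte, the pixel block is easiest to read
with the file opened in binary mode. The sketch below assumes the caller has
already consumed the #VISION line up to and including the colon, and that the
x*y pixels are stored row by row starting at the top left corner; the row
ordering and the helper name are assumptions, not part of the format above.

  def read_vision_pixels(f, x, y):
      # f: the .dat file opened in binary mode, positioned just after the colon.
      raw = f.read(x * y * 3)                    # packed 8-bit r, g, b bytes
      pixels = [tuple(raw[i:i + 3]) for i in range(0, len(raw), 3)]
      return pixels                              # pixels[0] assumed top left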
#RECTANGLE name x1 y1 x2 y2 x-res y-res: r1 g1 b1 ... rn gn bn
where n = {(x2-x1)/x-res} * {(y2-y1)/y-res}
Example: #RECTANGLE biggie 0 0 150 124 75 124: 59 87 250 84 23 190
Describes a set of r,g,b intensity values of rectangles taken from the #VISION
image. For instance, the above example defines the feature "biggie" as two
75x124 rectangles within a single 150x124 rectangle beginning at (0 0). Each
triple of values gives the average r, g, and b values over one of the
rectangles. These features can be generated automatically by a program. See
the files example.dat and sample.conf for more examples.
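A sketch that applies the formula for n above and splits the trailing values
into one r,g,b triple per sub-rectangle:

  def parse_rectangle(payload):
      # payload e.g. "biggie 0 0 150 124 75 124: 59 87 250 84 23 190"
      head, values_str = payload.split(":", 1)
      fields = head.split()
      name = fields[0]
      x1, y1, x2, y2, xres, yres = (int(v) for v in fields[1:7])
      n = ((x2 - x1) // xres) * ((y2 - y1) // yres)   # number of sub-rectangles
      values = [int(v) for v in values_str.split()]
      assert len(values) == 3 * n                     # one r,g,b triple each
      triples = [tuple(values[i:i + 3]) for i in range(0, 3 * n, 3)]
      return name, (x1, y1, x2, y2, xres, yres), triples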
#ACTION date time action parameters
Example: #ACTION 10-07-93 20:07:43.800980 POTENTIAL-FIELD-GOTO 150 0
Describes an action performed by the robot at a specific time (the time the
action is initiated). Action records may be interleaved with sensations.
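A sketch of splitting an action record into its timestamp, action name, and
trailing parameters; it assumes the parameters are numeric, as in the example
above.

  def parse_action(payload):
      # payload e.g. "10-07-93 20:07:43.800980 POTENTIAL-FIELD-GOTO 150 0"
      fields = payload.split()
      date_str, time_str, action = fields[0], fields[1], fields[2]
      params = [float(v) for v in fields[3:]]
      return date_str, time_str, action, params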
#ROBOT x y orientation in_motion?
Example: #ROBOT 500 500 90 0
x, y, orientation are the dead-reckoning coordinates of the robot.
in_motion? flag: 1 if the robot was in motion at this time, 0 if not.
#SET-ROBOT x y orientation
Example: #SET-ROBOT 500 500 90
Command that sets the internal dead-reckoning coordinates for the base.
x y orientation are the new coordinates
#SPEED forward-speed rotation-speed
Sets the max-speed values for the robot base.
forward-speed and rotation-speed are in cm/sec and deg/sec, respectively.
#SET-GOAL action_nr dist angle x_target y_target
Example: #SET-GOAL 3 125 0 514.147 653.179
Issues a set-goal command. This command navigates the robot to a
specified position using the potential-field obstacle avoidance
mechanism. dist and angle specify the goal point in polar
coordinates relative to the robot, and x_target and y_target
specify the same target point in the global world coordinate system.
The latter two parameters are the actual parameters in the
TCA-SET-GOAL command. action_nr refers to a list of 7 actions used
by Sebastian for his learning stuff, and will be irrelevant for
anybody else. If you look through his data files you will find that
only 7 different actions are invoked at all.
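For reference, the polar and global parameterizations are related through the
robot's dead-reckoned pose at the time of the command. The sketch below
assumes angles are in degrees, that angle is measured counter-clockwise from
the robot's current heading, and that theta is measured counter-clockwise from
the global x axis; none of these conventions are stated in the format, so
treat this as illustrative only.

  import math

  def polar_goal_to_global(robot_x, robot_y, robot_theta_deg, dist, angle_deg):
      # Bearing of the goal in the global frame.
      bearing = math.radians(robot_theta_deg + angle_deg)
      return (robot_x + dist * math.cos(bearing),
              robot_y + dist * math.sin(bearing))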
#STOP
The robot received a STOP command.
#CATEGORY label:
Describes the hand-input categorization of a sensation. At present, a value
of 1 indicates a scene containing a door, 2 indicates a scene containing a
bin, and 0 indicates a scene containing neither. No scene contains both a
door and a bin simultaneously.
Overall Points:
Keywords beginning with @ describe the format of sequences (such as the
time data collection commenced or the time a given sensation was
sampled).
Keywords beginning with # describe sensations and actions of the robot.
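Putting these conventions together, a minimal reader might group the '#'
records under the '@SENS' record that precedes them. The sketch below skips
comment lines and deliberately ignores the raw pixel block of #VISION records
(which cannot be read line by line in text mode); all names are illustrative.

  KEYWORDS = {
      "@OPEN", "@SENS",
      "#ROBOT", "#SONAR", "#LASER", "#LASER-Y", "#CAMERA",
      "#RECTANGLE", "#ACTION", "#SET-ROBOT", "#SPEED", "#SET-GOAL",
      "#STOP", "#CATEGORY",
  }

  def read_snapshots(path):
      # Each snapshot is (timestamp text from @SENS, list of (keyword, payload)).
      snapshots = []
      current = None
      with open(path) as f:
          for line in f:
              parts = line.split(None, 1)
              if not parts or parts[0] not in KEYWORDS:
                  continue                      # comment line
              keyword = parts[0]
              payload = parts[1].strip() if len(parts) > 1 else ""
              if keyword == "@SENS":
                  current = (payload, [])
                  snapshots.append(current)
              elif keyword.startswith("#") and current is not None:
                  current[1].append((keyword, payload))
      return snapshots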
Last Updated:
9July94 12:00
josullvn+@cs.cmu.edu