Social Behavioral Robot that Stands in Line

Yasushi Nakauchi and Reid Simmons

Problem:

Recent research results on mobile robot navigation make it feasible to deploy such systems in service applications. In general, the environments where service robots perform tasks are shared with humans. Thus, the robots have to interact with humans, whether they like it or not.

Humans also interact with each other, and sometimes these interactions lead to resource conflicts. In order to maintain order, humans use social rules. For example, at bank counters or grocery stores, people stand in line and wait for their turn. A person who does not understand or obey these social rules will not be able to get the service.

This is also true for service robots. If a service robot is asked to purchase merchandise and does not know human social rules, it may simply keep avoiding the people standing in line as obstacles and never accomplish its task. Therefore, service robots also need to understand human social behaviors and obey the associated rules.

There are many aspects to the social rules and behaviors of human society. Among them, standing in line is one of the most highly socialized and crucial skills required of robots that execute tasks in populated environments. Therefore, as a first step toward a social behavioral robot, in this research we develop a robot that can stand in line with other people.

Impact:

If service robots possess social skills, they will be able to execute tasks more reliably because they can interact better with people. Such skills will also enable robot applications that require social interaction with humans.

In the context of AI, we believe that embodiment is one of the crucial factors in human social activity, and that it can be realized only by a robot that possesses mass and mobility.

State of the Art:

The notion of human territoriality, or personal space, has been studied in cognitive science [2,5]. Personal space, a person's own territory, is oval in shape and extends farther toward the front of the person. A person feels uncomfortable when other people intrude into his or her personal space. We employ this notion and model a line of people as a chain of personal spaces, as shown in Figure 1. So far, however, robots have not used the notion of personal space to interact with people.
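For illustration only, the following minimal sketch (in Python) shows one way such an oval personal space could be tested for intrusion. The function name and the semi-axis values are our own assumptions, not figures from the cited work; the key property is that the space extends farther in front of the person than behind.

    import math

    def in_personal_space(px, py, x, y, theta,
                          front=1.2, back=0.5, side=0.6):
        """Return True if point (px, py) intrudes on the personal space of a
        person standing at (x, y) and facing direction theta (radians).

        The space is an oval that is wider toward the front: semi-axis
        `front` ahead of the person, `back` behind, and `side` sideways.
        The numeric values are illustrative assumptions only.
        """
        # Express the point in the person's body-centered frame.
        dx, dy = px - x, py - y
        forward = math.cos(theta) * dx + math.sin(theta) * dy
        lateral = -math.sin(theta) * dx + math.cos(theta) * dy

        # A larger semi-axis ahead of the person than behind makes the
        # oval elongated toward the front.
        a = front if forward >= 0 else back
        return (forward / a) ** 2 + (lateral / side) ** 2 <= 1.0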

To recognize a line of people, the robot has to detect their positions and orientations. Several face recognition systems have been developed to detect people, but they only recognize a person who is facing toward the robot [1,4]. Oren et al. have developed a system that detects people within an image [3], but it only detects the image regions containing people and therefore cannot determine a person's position and orientation.

Approach:

We employ a stereo vision system for recognizing a line of people, and ultrasonic sensors for tracking people once the robot has moved to the end of the line.

The people-detection algorithm we have developed is as follows (a simplified sketch of the last two steps is given after the list). An example of detected people is shown in Figure 1.

1. Capture left and right camera images using two CCD cameras.
2. Compute a disparity image from the two images by triangulation.
3. Separate each person's disparity data from the combined disparity image using a clustering method.
4. Find the nose-height and chest-height data for each person by using characteristics of body shape (the neck is the narrowest point in width).
5. Find the ellipse that best fits the stereo data at chest height, and the circle that best fits the stereo data at nose height.
6. Determine the body's direction from the centers of the ellipse and the circle (we assume that the head juts out in front of the body).
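The sketch below illustrates how steps 5 and 6 might be combined to recover a person's position and facing direction. For brevity it replaces the least-squares ellipse and circle fits with simple centroids; the function name and interface are our own assumptions, not the implemented system.

    import numpy as np

    def body_pose(chest_pts, nose_pts):
        """Estimate one person's position and facing direction.

        chest_pts, nose_pts: (N, 2) arrays of (x, y) stereo points at chest
        and nose height for a single person (already separated by the
        clustering step).

        Simplification for illustration: the centroid of each point set
        stands in for the fitted ellipse/circle center.  The facing
        direction is taken as the vector from the chest center to the head
        center, since the head juts out in front of the body.
        """
        chest_center = np.asarray(chest_pts, dtype=float).mean(axis=0)
        head_center = np.asarray(nose_pts, dtype=float).mean(axis=0)

        offset = head_center - chest_center
        theta = float(np.arctan2(offset[1], offset[0]))  # radians
        return chest_center, theta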


  
Figure 1: A line of people modeled as a chain of personal spaces, and the people detected by the stereo vision system (black circle: robot; ellipse: chest; circle: head; arrow: body direction).

We assume that the robot has a map of the environment and knows the goal position where the service is provided. The navigation algorithm for the social behavioral robot is as follows.

The robot moves toward the goal position while detecting people with the stereo vision system. If the goal position is occupied, the robot chains together the personal spaces observed so far and estimates where the end of the line is. It then estimates the position it should move to in order to join the line and updates this position as its new goal.

The robot iterates this procedure until it finds a vacant goal position, then moves to that position and joins the line. Thereafter, it moves up in line by keeping a fixed distance from the person in front, so that their personal spaces remain connected.
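A minimal sketch of one iteration of this procedure is shown below. The function, its parameters, and the numeric defaults (such as the assumed depth of one personal space) are our own illustrative assumptions, not values from the implemented system.

    import math

    def next_goal(people, goal, space=1.0, reach=0.5):
        """One iteration of the line-joining behavior (illustrative sketch).

        people: list of (x, y, theta) poses reported by the vision system.
        goal:   current goal position (x, y).
        space:  assumed depth of one personal space along the line.
        reach:  radius within which a position counts as occupied.

        Returns (position, vacant): the position to move to next, and a
        flag that is True when that position is the vacant goal itself.
        """
        def occupant(pos):
            # Return the pose of a person occupying pos, if any.
            for (x, y, theta) in people:
                if math.hypot(x - pos[0], y - pos[1]) < reach:
                    return (x, y, theta)
            return None

        person = occupant(goal)
        if person is None:
            return goal, True          # goal is vacant: join the line here

        # Follow the chain of connected personal spaces toward the end of
        # the line (bounded by the number of detected people).
        for _ in range(len(people)):
            x, y, theta = person
            behind = (x - space * math.cos(theta),
                      y - space * math.sin(theta))
            nxt = occupant(behind)
            if nxt is None:
                break
            person = nxt
        return behind, False           # new goal: one space behind the line

In the full system this estimate would be refreshed as new detections arrive, with the robot replanning toward the updated goal on each iteration.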

Future Work:

So far we have implemented the stereo vision system; the next step is to integrate it with the robot motion program. The human positions and orientations derived by the vision system are sometimes unstable, so we plan to employ statistical methods to make the detection more reliable. We also plan to develop other applications that can be modeled with the notion of personal space (e.g., joining a group of people who are standing around talking together).

Bibliography

[1] R. Brunelli and T. Poggio. Template matching: Matched spatial filters and beyond. MIT AI Lab, A.I. Memo 1536, 1995.

[2] M. Malmberg. Human Territoriality: Survey of Behavioural Territories in Man with Preliminary Analysis and Discussion of Meaning. Mouton Publishers, 1980.

[3] M. Oren, C. Papageorgiou, P. Sinha, E. Osuna, and T. Poggio. A trainable system for people detection. In Proceedings of the Image Understanding Workshop, pages 207-214, 1997.

[4] H. Rowley, S. Baluja, and T. Kanade. Neural network-based face detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998.

[5] R. Sack. Human Territoriality. Cambridge University Press, 1986.
