(I wrote this in Spring 1997 as part of my case for reappointment/tenure at Carnegie Mellon.)
I am interested in Manipulation because it is an integral component of intelligence. "The hand is the cutting edge of the mind," says Bronowski. Dolphins may have brains forty percent larger than humans', but they are unable to mold their world, and thus they remain mere passive observers. Octopuses, with smaller, though not insignificant, brains, exhibit highly active intelligent behavior, as they manipulate themselves and their world to build shelter and catch prey. (See Loren Eiseley, The Long Loneliness, American Scholar, 1960.) My conclusion: In order to build intelligent machines, we must understand Manipulation.
Scientifically, Manipulation is interesting because it requires the three key ingredients that any relevant science must possess: analysis of the world, synthesis in the world, and technological synergy. The first key component of Manipulation is the analysis and representation of the physical world. Examples include constructing friction models and geometrical representations by which one can understand how objects of various shapes and material properties will interact. I find this interesting because it tickles my mathematics and physics background. The second key component of Manipulation is robots. One tests one's models by building systems and programming robots to manipulate objects. The systems can be quite extensive, including planners and executives, language compilers and interpreters, as well as all the hardware issues surrounding the robots themselves. I find this interesting because it builds on my Computer Science and Artificial Intelligence background. And the third key ingredient is technology. As we make progress in Manipulation, we see the need for exotic new devices, such as force and touch sensors, CAD tools, MEMS structures, unconventional arm and hand designs, and as always, faster processors.
Industrially, I take my motivation from Manufacturing, in particular parts assembly. Manufacturing has slowly moved from hard automation to general purpose flexible automation, but the transition has been rough and is far from complete. Many of the promises of the late 1970s have not been realized. For instance, general purpose automation was supposed to facilitate rapid changeover of product lines, but instead it introduced enormous overhead costs. High degree-of-freedom general purpose robots are less reliable than low degree-of-freedom special purpose devices. Moreover, they are extremely difficult to program. It is still nearly impossible to program a robot in a quick general purpose fashion. Consequently, special purpose devices and tricks dominate product lines. As a robotics researcher, my goal should be to understand why this is so, and to offer solutions that move the field closer toward general purpose automation.
My work generally consists of a strong theoretical component, backed up by geometric implementations and simulations, followed by physical implementations on a robot arm. The arms include PUMAs, Zebras, and Adepts. The task domains include: grasping; parts sorting; parts orienting; assembly; collision avoidance; and sensor design and placement.
Three key questions around which I have structured my search are:
I will list my contributions here briefly, elaborate on my two most recent areas of research, and then summarize my older work. My contributions have been in six main areas:
A. Recent Progress
Over the past couple of years I have been working on two-handed manipulation, in particular, two-palm manipulation. The novel and difficult aspect of this work is that the robots represent and reason about manipulating an object not merely by grasping it, but by sliding one hand or the other relative to the object. Such actions are important when a human or robot wishes to turn an object over in its hands, as well as during a release operation, in which the human or robot is placing the object somewhere.
This research represents my most recent foray into the area of "modeling contact" listed above, focusing on nonprehensile contact. I have created a system in which two robots plan and execute cooperative strategies for manipulating parts using their "palms". The term "palm" refers to the use of the entire device surface during manipulation, as opposed to use of the fingertips alone. The term "nonprehensile" means that the palms hold the object without wrapping themselves around it, as distinguished from a force/form closure grasp often employed by a fingered hand. Indeed, nonprehensile operations such as purposeful sliding and constrained dropping constitute important manipulation primitives.
The system consists of a planner and an executive. As input, the system expects a geometric description of a part, its center of mass, the coefficients of friction between the part and each of the palms, and a start and goal configuration of the part in stable contact with one of the palms. As output, the system computes and executes a sequence of palm motions designed to reorient the part from the specified start to the specified goal configuration. The system is implemented using two Zebra robot arms.
In related work, one of my students, Nina Zumel, recently finished her Ph.D. work on another two-palm system. She looked at a pair of palms hinged together in a "V"-shape. She explored the mechanics of this two degree-of-freedom system and built an automatic planner for manipulating parts using these palms.
These palm systems are interesting for two basic reasons:
First, scientifically, the palm systems are autonomous and unique. They represent two separate passes through the analysis-synthesis-technology paradigm cited above. In particular, the palmar systems explore a mode of manipulation used by humans and animals that has not been explored in the past. The analysis phase focuses tightly on one aspect of that mode, namely friction and slip. The synthesis phase builds automatic planners, using the frictional models to decompose configuration space along critical sheets. Finally, the connection to technology is given in the next paragraph.
Second, practically, the palm systems offer some insight into the tension between hard automation and general purpose automation. In the past, part feeder designs have suffered from the "frozen hardware" problem. Too much of the mechanics of the task has been compiled into hardware, in the form of fixed feeder gates and orienting shapes. This has made it costly to retool a product line when last-minute part changes occur. At the other extreme, general purpose robots have suffered from too much generality and too little reliable software. The work on palm manipulation suggests an intermediate architecture, consisting of a sequence of very simple manipulators under software control. I envision, for instance, a feeder surrounded by an array of programmable low degree-of-freedom devices. Since the configurations and motions of the devices are under software control and since the devices are easy to program by virtue of having low degrees-of-freedom, last-minute part changes can be accommodated by the feeder with relative ease.
B. Information Requirements
Prior to my work on nonprehensile palm manipulation, I focused on information. I was and am interested in understanding the information requirements of robot tasks, and the manner in which these requirements constrain a robot's abilities and guide the design of new robots. For instance, in picking up an object, what does the robot need to know? There is a tendency of some programmers to want to supply the robot with an elaborate CAD model of the object to be grasped, along with its mass, moment of inertia, and frictional properties. This information seems to be required, because it seems the robot must position its fingers in a way that creates force closure on the object. There are certain locations on the object that yield force closure, determined both by the shape of the object and the friction between it and the robot's fingers. In turn, the object's mass and moment properties determine the relative stability of different such force closure grasps.
Yet, if we examine the task of establishing a force closure grasp carefully, we see that very little information is actually required. Local contact information at the fingers is all that is needed. Based on this information, the fingers can execute a simple feedback loop that tries to reduce the angle between the contact normals, that is, the feedback loop tries to make the normals point toward each other. Convergence of this feedback loop for simple shapes is easy to see; for more complicated shapes, differential topology theorems often used in control theory describe the resulting limit states and cycles.
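To make the feedback loop concrete, here is a minimal sketch in Python. The elliptical object, the finite-difference update rule, the step sizes, and the stopping test are all illustrative assumptions of mine, not the implementation described above; the point is only that each finger uses purely local normal information to reduce the misalignment of the contact normals.

```python
import math

# Illustrative object: an ellipse whose boundary the two fingers touch.
A, B = 2.0, 1.0   # semi-axes (assumed for this sketch only)

def outward_normal(t):
    # Outward unit normal of the ellipse point (A cos t, B sin t).
    nx, ny = B * math.cos(t), A * math.sin(t)
    norm = math.hypot(nx, ny)
    return nx / norm, ny / norm

def normal_misalignment(t1, t2):
    # Angle between the inward normal at finger 1 and the outward normal at
    # finger 2; zero exactly when the outward normals are antipodal, i.e.,
    # when the inward contact normals point toward each other.
    n1, n2 = outward_normal(t1), outward_normal(t2)
    dot = -(n1[0] * n2[0] + n1[1] * n2[1])
    return math.acos(max(-1.0, min(1.0, dot)))

def grasp_feedback(t1, t2, step=0.05, iters=200, tol=1e-3):
    # Each finger repeatedly slides a little along the boundary in whichever
    # direction locally reduces the misalignment; no global shape model used.
    for _ in range(iters):
        if normal_misalignment(t1, t2) < tol:
            break
        moved = False
        for sign in (+1.0, -1.0):
            if normal_misalignment(t1 + sign * step, t2) < normal_misalignment(t1, t2):
                t1 += sign * step
                moved = True
                break
        for sign in (+1.0, -1.0):
            if normal_misalignment(t1, t2 + sign * step) < normal_misalignment(t1, t2):
                t2 += sign * step
                moved = True
                break
        if not moved:
            step *= 0.5   # refine once neither finger can improve at this step
    return t1, t2, normal_misalignment(t1, t2)

print(grasp_feedback(0.3, 1.2))   # the normals end up nearly antipodal
```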
The key point in this example is that the robot does not need to know the entire shape of the object. Instead, all it needs to know is the local shape at the fingers. The implications are significant. If the robot needs to know the global shape of the object, then it first needs to establish that shape, for instance with a vision system or using information obtained from a CAD model. In addition, when performing the actual grasp, the robot must first establish the pose of the object, then move the fingers to the appropriate force closure locations. In short, a designer of the robot must supply the robot with some form of global sensing, probably in the form of a vision system. On the other hand, if the robot merely requires local contact sensing, then the designer need only provide the robot with an appropriate tactile sensor. The robot's sensing architecture is completely different.
Several interesting pieces of work came out of this line of reasoning, including some very recent ongoing work by one of my students, Yan-Bin Jia.
First, I developed a method for automatically designing sensors from the specification of a robots task, its actions, and its uncertainty in control. The sensors provide precisely the information required by the robot to perform its task, despite uncertainty in sensing and control. The key idea is to generate a strategy for a robot task by using a backchaining planner that assumes perfect sensing while taking careful account of control uncertainty. The resulting plan indirectly specifies a sensor that tells the robot when to execute which action. Although the planner assumes perfect sensing information, the sensor need not actually provide perfect information. Instead, the sensor provides only the information required for the plan to function correctly.
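A minimal sketch of the backchaining idea, on an invented discrete task, may help. Actions have nondeterministic outcomes (modeling control uncertainty), sensing is assumed perfect during planning, and the resulting policy implicitly tells us how coarsely a real sensor may lump states together. The states, actions, and outcome table below are hypothetical.

```python
# Backchaining under control uncertainty, with sensing assumed perfect
# during planning.  The task below is a toy invented for illustration.

def backchain(states, actions, outcomes, goal):
    """outcomes[(s, a)] is the set of states the (uncertain) action a may
    reach from s.  Returns a policy mapping each solvable state to an
    action guaranteed to make progress toward the goal."""
    solved = set(goal)
    policy = {}
    changed = True
    while changed:
        changed = False
        for s in states:
            if s in solved:
                continue
            for a in actions:
                outs = outcomes.get((s, a))
                if outs and outs <= solved:   # every possible outcome is solved
                    policy[s] = a
                    solved.add(s)
                    changed = True
                    break
    return policy

# Toy example: states 0..4, goal {4}; "move" may overshoot by one cell.
states = range(5)
goal = {4}
outcomes = {}
for s in range(4):
    outcomes[(s, "move")] = {min(s + 1, 4), min(s + 2, 4)}

policy = backchain(states, ["move"], outcomes, goal)
print(policy)
# In this toy the policy uses the same action everywhere, so a sensor need
# only report whether the goal has been reached, not the exact state.
```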
For peg-in-hole problems, this methodology suggests that the sensors should be informationally equivalent to radial sensors. One implementation of such a radial sensor is to sense the torque that results when the peg overlaps the hole; the perpendicular to this torque points toward the center of the hole. For grasping examples, my sensor design methodology generates the feedback loop based on local contact sensing that I described above.
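Here is a hedged sketch of the radial-sensor computation for the peg-in-hole case. The geometry and sign conventions are illustrative assumptions of mine; the essential point is only that rotating the measured in-plane torque by ninety degrees recovers the direction toward the hole center.

```python
import math

# Radial-sensor sketch: when a vertically loaded peg partially overlaps the
# hole, the support force acts off-center, producing a torque about the peg
# axis; rotating that torque by -90 degrees points toward the hole center.
# Sign conventions and numbers are illustrative assumptions.

def direction_to_hole(tau_x, tau_y):
    # Rotate the measured in-plane torque by -90 degrees (tau cross z_hat).
    dx, dy = tau_y, -tau_x
    norm = math.hypot(dx, dy)
    return dx / norm, dy / norm

# Simulated check.  The hole center lies in the -x direction from the peg
# axis, so the support region's centroid is offset toward +x.
rx, ry = 0.3, 0.0                      # contact centroid relative to peg center
fz = 5.0                               # upward support force on the peg
tau_x, tau_y = ry * fz, -rx * fz       # torque = r cross (0, 0, fz)
print(direction_to_hole(tau_x, tau_y)) # approximately (-1.0, 0.0): toward the hole
```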
The basic approach is as follows. Given a manipulation task:
The progress cones thus computed describe the granularity with which a physical sensor must be able to distinguish states. In order for a strategy to be successful, the physical sensor must be able to report the identity of one or more progress cones that contain the current state of the system, no matter what the state of the system is. In other words, progress cones describe the shape and amount of uncertainty permitted in a sensor, in order for the strategy to accomplish the task successfully.
It turns out that progress cones constructed as above, from a plan itself constructed by backchaining, are provably fundamental to all strategies. Specifically, if we take a look at any strategy, there will be contingency steps at which the strategy is obtaining information, either through sensing or action, that is informationally equivalent to establishing the system state in a progress cone. In short, progress cones provide an abstract geometric description of the information requirements of a robot task.
One of my students, Yan-Bin Jia, has looked at a number of sensor design problems. In work that was recently published, we looked at the problem of optimally placing sensors to recognize planar parts. Such problems are of interest in parts orienting. This work led to some interesting theoretical work as well as a practical algorithm. The theoretical work shows that the problem is NP-complete. The practical work uses a greedy algorithm that is provably near-optimal. The resulting method can be used both to ascertain the pose of a given part and to distinguish among a set of possibly distinct shapes.
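The flavor of such a greedy selection can be conveyed with a small sketch. The sensing points, their binary readings, and the pose hypotheses below are invented, and the greedy rule shown, which repeatedly picks the sensing point that separates the most still-confused pairs of hypotheses, is the standard near-optimal heuristic for this kind of test-cover problem, used here only as a stand-in for the published algorithm.

```python
from itertools import combinations

# Greedy selection of sensing points to distinguish a finite set of part
# poses.  Each candidate point returns a binary reading ("material present
# or not") under each hypothesis; the table below is invented.

# readings[p][h] = what sensing point p would read if hypothesis h were true
readings = {
    "p1": {"poseA": 1, "poseB": 1, "poseC": 0, "poseD": 0},
    "p2": {"poseA": 1, "poseB": 0, "poseC": 1, "poseD": 0},
    "p3": {"poseA": 0, "poseB": 0, "poseC": 0, "poseD": 1},
}
hypotheses = ["poseA", "poseB", "poseC", "poseD"]

def greedy_probe_selection(readings, hypotheses):
    unresolved = set(combinations(hypotheses, 2))
    chosen = []
    while unresolved:
        # Pick the sensing point that separates the most unresolved pairs.
        best = max(readings, key=lambda p: sum(
            readings[p][a] != readings[p][b] for a, b in unresolved))
        separated = {(a, b) for a, b in unresolved
                     if readings[best][a] != readings[best][b]}
        if not separated:
            break                 # remaining pairs are indistinguishable
        chosen.append(best)
        unresolved -= separated
    return chosen, unresolved

print(greedy_probe_selection(readings, hypotheses))
# (['p1', 'p2'], set()): two sensing points suffice to identify the pose.
```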
Most recently, Yan-Bin Jia has been exploring non-linear observability theory in the context of tactile sensing. This research is of interest in grasping. For instance, as a finger makes contact with an object, the object may move. Is it possible to recover the object's pose, purely from the evolution history of the contact point on the finger? The answer is yes, for many objects. The proof is difficult, and requires delving into Lie brackets and derivatives. Constructing actual observers is even more difficult, but Yan-Bin has had some breakthroughs recently, and now has several different kinds of observers. We are very excited by the implications of this work. Again, the basic point is to build sensors and sensing algorithms that fit the task. Rather than go through a remote vision system, it is desirable to have the sensor co-located with the actual manipulation process, that is, in the hand. Moreover, the sensor should be simple. We now know that such a sensor is possible and understand its computational requirements.
C. Past Contributions
A robot acquires information from three sources: sensing, action, and internal models. Much of my past work has been an exploration of the importance of these different sources. One way to test that importance is to remove the source and see what capabilities the robot retains. In addition, I have developed general tools for representing uncertainty and modeling contact.
Some of my early work dealt with the issues of representing uncertainty and building planners based on those representations. Uncertainty is the main culprit that makes robot programming difficult. Parts are not exactly where they should be; sensors are not accurate enough to detect the parts; and the robot cannot move precisely enough to perform tight-tolerance assemblies. One apparent solution is to make feeders, sensors, and robots more accurate, but this will only work to a certain degree. Uncertainty is fundamental. Any successful robot programming tools must deal with uncertainty.
I have been influenced here most directly by the preimage methodology proposed by Lozano-Perez, Mason, and Taylor in 1983. Other influences are the backchaining methodology of the AI community of the 60s and the Dynamic Programming methodology of the Control community of the 50s. These three approaches all share the common theme of first representing the robot's knowledge in an appropriate state space, then backchaining from the goal in that state space.
In the simplest case, the relevant state space consists of the world states relevant to the task. For instance, if the task is to orient a part and if information is perfect, then the relevant state space is simply the configuration space of the part. However, if the robot's knowledge is uncertain, then the state representation becomes complicated; the relevant states now are knowledge states, that is, they are sets of possible primitive states --- for instance, sets of locations and orientations of a part rather than a unique location and orientation. Depending on the amount of information available, a knowledge state may also consist of a probability distribution describing the likelihood that the robot or part is in a particular primitive state.
The connectivity of the state space is a function of how the robot gains or loses information. Action and sensing are the mechanisms for moving in the state space. As the robot performs an action, the action transforms the robot's current knowledge state into a new knowledge state. Often actions increase uncertainty, but sometimes they decrease uncertainty. For instance, a mobile robot that estimates its position using dead reckoning will increase its uncertainty if it rolls for a while. The knowledge state might be a disk of possible locations. As the robot moves, this disk moves as well, but also grows in size. On the other hand, if the robot runs into a known wall, then it will have reduced its uncertainty along one dimension --- the robot knows it is in contact with the wall and thus the disk shrinks to a line segment of possible locations. The same principle applies to assembly and parts orienting: Although motion often increases uncertainty, contact establishes knowledge.
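The bookkeeping in this example can be sketched in a few lines of Python. The disk representation, the error-growth model, and the numbers are all illustrative assumptions; the sketch shows only the two transitions that matter here, namely motion inflating the knowledge state and contact collapsing it.

```python
from dataclasses import dataclass

# A dead-reckoning knowledge state: a disk of possible positions that
# translates and grows with each commanded motion, and collapses along one
# dimension on contact with a known wall.  Growth model and numbers are
# illustrative only.

@dataclass
class DiskKnowledgeState:
    x: float          # nominal position
    y: float
    radius: float     # radius of the set of positions consistent with history

    def move(self, dx, dy, error_rate=0.05):
        # A commanded motion shifts the disk and inflates it in proportion
        # to the distance traveled (odometry error accumulates).
        self.x += dx
        self.y += dy
        self.radius += error_rate * (dx * dx + dy * dy) ** 0.5

    def contact_wall_x(self, wall_x):
        # Contact with a known wall at x = wall_x pins the x coordinate.
        # The disk collapses to a segment of possible y values; this sketch
        # simply reports that segment's half-length and zeroes the radius.
        remaining_y_uncertainty = self.radius
        self.x, self.radius = wall_x, 0.0
        return remaining_y_uncertainty

state = DiskKnowledgeState(0.0, 0.0, 0.1)
state.move(10.0, 0.0)
print(state)                          # the disk has grown while rolling
print(state.contact_wall_x(10.2))     # residual uncertainty is only along y
print(state)
```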
Hidden in the previous example are two other sources of information: sensing and prediction (i.e., internal models). Sensors provide information. They too transform knowledge states. The difference between sensing and action is that sensors introduce AND nodes in plans. This is because one cannot say ahead of time what sensory value the robot will see at runtime. Prediction is important both to interpret the sensory inputs and to construct the knowledge states. If a robot runs into a wall, its force sensors may signal the contact, but the robot must use its internal model to interpret which wall has been hit. In other words, past information, current information, and computation all combine to create a knowledge state.
Computing plans based on this space of knowledge states is very difficult. Indeed, by a result of Canny's, it is nondeterministic-exponential-time-hard. To a large extent the difficulty comes from the tight coupling between history, current sensed values, and the next action to execute. Even apparently simple domains are difficult. For instance, consider the task of inserting a three-pronged plug into an outlet. Now don't look at the outlet. Anytime the plug makes contact with the outlet, a lot of information is transmitted back to our brain through our hand. And yet, it is easy to misinterpret that information, instead fumbling for quite a while before inserting the plug. Modern robot assembly is analogous to that fumbling. I, along with many of my colleagues who have worked on uncertainty, have tried to understand the computational tricks that allow a robot to understand properly the information it receives and thus to reduce the fumbling.
C.1. Reachability and Recognizability
One of my earliest contributions was to untangle the connection between sensing, history, and action. I introduced the notions of reachability and recognizability, and showed how preimage plans for many tasks could be computed by independently considering the reachability and recognizability of the goal. Of course, such an approach reduces some of the power of planning with uncertainty. In essence, one only considers plans whose execution does not rely on subtle interpretations of the sensors. Still, by separating reachability and recognizability, plan computation is much easier; in effect one can operate in the underlying state space of the robot, not in its more complicated knowledge space. This work started several years of related work at MIT, Stanford, Cornell, and Berkeley. As part of this work, I introduced the idea of backprojection --- a backprojection simply encodes the reachability of a goal under a given action. One of my backprojection implementations suggested that the strategy of tilting a peg before inserting it into a hole would work better if the peg were tilted in the opposite direction of the direction commonly used at the time.
C.2. Sensorless Manipulation
If one removes sensing as an information source, the robot must rely heavily on action and prediction. Matt Mason and I studied the implications of sensorless manipulation in our work on parts orienting using a tray tilter. We learned a number of things from this work. First, action combined with prediction can deliver an enormous amount of information; it is possible, without sensors, to fully orient and position a part whose initial configuration is completely unknown (within some bounded region). Second, before building a sensorless system one needs to analyze the dynamics of the domain carefully enough that the predictive power of the robot planner can detect information-producing actions. Prediction does not need to be precise; on the other hand, the imprecision cannot be so great as to hide the effect of actions that reduce uncertainty. Third, there is a tradeoff between sensing and action. In pathological examples this tradeoff is exponential in the size of the uncertainty, that is, an exponential number of actions may be required to achieve the same certainty delivered by a sensor. In practical examples, however, the cost of obtaining information from action seems to be a small polynomial function of the uncertainty. For instance, for orienting an Allen wrench with 24 stable states in the tray, the search depth of the planning tree was 9 ply. Thus, from an execution perspective, perfect sensing was "worth" 9 actions. Of course, finding this short sequence of 9 actions could still be expensive. Fortunately, from a planning perspective, it turned out that the tree was not very bushy.
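The planning problem itself can be sketched abstractly as a search over knowledge states, that is, over sets of possible orientations. The toy action table below is invented; in the tray tilter the orientation-to-orientation map came from the frictional analysis of the part sliding and settling in the tilted tray.

```python
from collections import deque

# Sensorless planning as breadth-first search over knowledge states (sets of
# possible stable orientations).  The action model is invented for this sketch.

def plan_sensorless(n_orientations, actions, goal):
    """actions: dict name -> tuple, where entry i gives the orientation that
    orientation i maps to under that action.  Searches for a sequence of
    actions driving the fully unknown initial set down to {goal}."""
    start = frozenset(range(n_orientations))
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        belief, seq = queue.popleft()
        if belief == frozenset([goal]):
            return seq
        for name, f in actions.items():
            nxt = frozenset(f[i] for i in belief)
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, seq + [name]))
    return None

# Toy example with 4 stable orientations and two tilting actions.
actions = {
    "tilt_left":  (0, 0, 1, 2),   # entry i: where orientation i ends up
    "tilt_right": (1, 2, 3, 3),
}
print(plan_sensorless(4, actions, goal=0))
# e.g. ['tilt_left', 'tilt_left', 'tilt_left']: three actions orient the part
# with no sensing at all.
```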
C.3. Randomization
In other work I removed yet another leg from the sensing-action-prediction tripod, namely the prediction leg. What is left once sensing and prediction have been removed as information sources? The answer is randomization; the robot can make random motions until it attains the goal. Actually, without either sensing or prediction the robot can never know that it has attained its goal. It is therefore important to allow some sensing, namely just enough so the robot can check whether it has completed its task. Randomized strategies are interesting because they show what a robot can accomplish without any information from the world. More importantly, the study of randomized strategies tells us how much additional information is required in order to make a task solvable more quickly. In this sense, studying randomized strategies is similar to studying the behavior of random walks on graphs.
A useful technique is to cover the state space of the task with a progress measure. A progress measure is simply a continuous non-negative function that is zero at the goal. As a quality measure of a randomized strategy, I focused on the local progress velocity of the random walk at each state, as well as the total time to achieve the goal. The domain was parts assembly, that is, tasks such as peg-in-hole insertion. The interesting observations arose once I reintroduced sensing, bit by bit, just enough so that the strategies could sometimes make use of the sensing information, but at other times had to resort back to random walks. The data produced by this sensing-dribbling process described a tradeoff between sensing information and task convergence times.
One of my observations was that robots with noisy sensors tended to work better than robots with very good sensors. My explanation of this phenomenon is that noisy sensors cause a robot feedback loop to behave much like a randomized strategy! Of course, using errorful sensing to achieve the goal seems a bit backward, so let me explain why randomized strategies are useful in the first place. If models of the world are perfect, then randomized strategies are unnecessary. But models are imperfect. Parts are not machined properly, assemblies of parts twist slightly out of alignment, and so forth. There are always differences between a robot's internal model and reality. We might hope to recognize these errors using sensing. This is often impossible, because sensors too have errors. Generally these errors can be modeled by two terms, namely a fixed bias term and a superimposed noise term. The superimposed noise can be removed with a Kalman filter --- really just an application of the Central Limit Theorem. The remaining error is a fixed bias in the sensor. Even very good sensors will have this fixed bias. It is in fact a reflection of the very discrepancy between reality and the robot's internal model, and thus cannot be removed! Think of it as a fundamental calibration error, or hidden state, or an unknown DC term, or whatever expression makes sense. Because it is an unknown bias, it is very hard to detect, except maybe through repeated failures of the robot's strategies. Randomization is useful for circumventing this bias. In effect, randomization finds a way to the goal past whatever barrier the unknown bias has created, much like a random walk on a graph will find its way to some desired state without knowing the connectivity of the graph explicitly. Consequently, in the face of sensing errors, a good strategy first employs a Kalman-like filter to remove noise from the sensor, thereby extracting from the sensor all the information it can deliver. Second, in order to overcome the fixed bias in the sensor, the robot randomizes its motions. The question of when to randomize is a complicated function of the task and sensor properties, which I will not explain here.
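A toy simulation makes the argument concrete. The one-dimensional task, the gains, and the noise-free biased sensor below are illustrative assumptions of mine; the sketch shows pure feedback parking the system at the wrong place forever, while the same feedback plus random kicks eventually reaches the goal.

```python
import random

# A 1-D "peg" must land in the goal interval |x| <= 0.05.  The position
# sensor carries an unknown constant bias (noise is assumed to have been
# filtered away already).  All numbers are illustrative.

BIAS = 0.3            # unknown to the robot
GOAL = 0.05           # |x| <= GOAL counts as success

def run(randomize, steps=2000, seed=1):
    rng = random.Random(seed)
    x = 1.0
    for t in range(steps):
        if abs(x) <= GOAL:               # minimal sensing: a success detector
            return t                     # number of steps taken
        sensed = x + BIAS                # biased measurement of position
        x -= sensed                      # feedback: jump to the apparent goal
        if randomize:
            x += rng.uniform(-0.5, 0.5)  # random kick, larger than the bias
    return None                          # never reached the goal

print("pure feedback:      ", run(randomize=False))  # None: stuck at -BIAS
print("with randomization: ", run(randomize=True))   # succeeds eventually
```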
In practice, randomization yields familiar operations: jiggling a sprinkler head until it slips into place; pushing a desk drawer back and forth to unjam it; shaking a tray of silverware so the spoons nest in one another; twirling gears so they mesh more easily; shaking a tray containing parts and special depressions until the parts are fully oriented in the depressions; filtering via a bowl feeder to orient parts; tapping on a wall segment squeezed between ceiling and floor beams in order to align the wall properly; and so forth.
C.4. Coordinating Multiple Robots
Together with Tomas Lozano-Perez, I considered the problem of coordinating the motion of several robots. We developed practical planners that could coordinate the motions of many objects. In general, the problem of planning the motions of several objects requires computation exponential in the number of objects. We circumvented this problem by solving a series of motion planning problems in configuration space-time. The objects are assigned planning priorities, then motions are planned for each object in turn. The priorities are specified for planning purposes only; prioritization does not mean that an object whose motion has been planned first necessarily controls the task. A given object's plan respects all the (possibly time-varying) constraints imposed both by stationary obstacles in the environment and by the moving objects whose motions have already been planned. There are some subtleties to searching configuration space-time, but, essentially, space-time represents constraints much as would a regular time-invariant configuration space. This approach cannot solve all coordination problems, since it projects the original problem onto a series of simpler problems. Nonetheless, except for problems in which the objects must perform interchanges in tight passageways, this approach worked quite well.
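A minimal sketch of the prioritized scheme, reduced to a grid world, is given below. The grid, the two robots, and the conservative collision test are illustrative assumptions of mine; the actual planners worked in continuous configuration space-time rather than on a grid.

```python
from collections import deque

# Prioritized planning in configuration space-time on a grid: each robot is
# planned in turn, treating the already-planned robots' trajectories as
# moving obstacles.  The world below is invented for illustration.

MOVES = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]   # wait, or step in 4 directions

def plan_one(start, goal, static_obs, occupied, width, height, horizon):
    """Breadth-first search over (cell, time).  occupied[t] is the set of
    cells held at time t by robots planned earlier (they park at their last
    cell once their plans end)."""
    frontier = deque([(start, 0, [start])])
    visited = {(start, 0)}
    while frontier:
        cell, t, path = frontier.popleft()
        if cell == goal:
            return path
        if t == horizon:
            continue
        for dx, dy in MOVES:
            nxt = (cell[0] + dx, cell[1] + dy)
            if not (0 <= nxt[0] < width and 0 <= nxt[1] < height):
                continue
            # Conservative test: avoid cells occupied by higher-priority
            # robots at either endpoint of the step (rules out swap-throughs).
            if nxt in static_obs or nxt in occupied[t] or nxt in occupied[t + 1]:
                continue
            if (nxt, t + 1) not in visited:
                visited.add((nxt, t + 1))
                frontier.append((nxt, t + 1, path + [nxt]))
    return None

def prioritized_plan(robots, static_obs, width=5, height=5, horizon=20):
    """robots: list of (start, goal) cells in priority order."""
    occupied = [set() for _ in range(horizon + 1)]
    plans = []
    for start, goal in robots:
        path = plan_one(start, goal, static_obs, occupied, width, height, horizon)
        if path is None:
            return None                       # this priority ordering fails
        plans.append(path)
        for t in range(horizon + 1):          # record this robot as a moving obstacle
            occupied[t].add(path[min(t, len(path) - 1)])
    return plans

# Two robots swapping sides of a small room containing one interior wall cell.
plans = prioritized_plan([((0, 2), (4, 2)), ((4, 2), (0, 2))],
                         static_obs={(2, 1)})
for p in plans:
    print(p)    # the second robot detours in space and time around the first
```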
C.5 A Configuration Space Friction Cone
I developed a generalized friction cone for representing planar objects in contact, when the objects can both translate and rotate. This representation automates the prediction of object motions given a description of the forces and contacts acting on the objects. The approach can handle multiple contacts as well as full Newtonian dynamics. The generalized friction cone also makes explicit various ambiguities and inconsistencies that can arise with Newtonian mechanics and Coulomb friction.
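For readers unfamiliar with friction cones, the following sketch shows the ordinary planar Coulomb cone that the generalized construction lifts into configuration space. This is textbook Coulomb friction at a single contact, not the generalized friction cone itself; the contact normal and coefficient are illustrative.

```python
import math

# Ordinary planar Coulomb friction cone at a single contact.  The cone's
# edges are the contact normal rotated by +/- atan(mu); a contact force can
# be transmitted without slip only if it lies inside the cone.

def friction_cone_edges(normal, mu):
    """normal: inward unit contact normal (nx, ny).  Returns the two unit
    vectors bounding the friction cone."""
    half_angle = math.atan(mu)
    nx, ny = normal
    def rotate(vx, vy, a):
        return (vx * math.cos(a) - vy * math.sin(a),
                vx * math.sin(a) + vy * math.cos(a))
    return rotate(nx, ny, +half_angle), rotate(nx, ny, -half_angle)

def inside_cone(force, normal, mu):
    """True if the contact force can be balanced without slip: the tangential
    component must not exceed mu times the normal component."""
    fx, fy = force
    nx, ny = normal
    f_normal = fx * nx + fy * ny
    f_tangent = abs(fx * (-ny) + fy * nx)
    return f_normal > 0 and f_tangent <= mu * f_normal

normal = (0.0, 1.0)          # contact normal pointing up, e.g. a part on a palm
print(friction_cone_edges(normal, mu=0.25))
print(inside_cone((0.2, 1.0), normal, 0.25))   # True: within the cone
print(inside_cone((0.4, 1.0), normal, 0.25))   # False: this force would slip
```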
I first worked on this representation over a decade ago. At the time I was unable to locate any complete representation of friction; most work in robotics was hand-crafted to the particular task at hand, and included simplifying assumptions such as small angle approximations. In the meantime, a number of approaches have appeared. Among those, the acceleration center approach of Mason and Brost is probably the best. Others who have worked in this area recently include Goyal, Nguyen, Peshkin, Trinkle, Cutkosky, and Baraff.
The generalized friction cone provided the basic analysis tool for the tray-tilting planner mentioned earlier. It also served as a sanity check for the friction decompositions built by other means in my most recent work on two-palm manipulation.
D. Education
I have contributed to education at both the undergraduate and graduate levels. In addition, I have tried to fill needs that I perceived in the existing curriculum.
I have started three courses, namely the "Math Fundamentals" course in Robotics, the "Introduction to Geometry" course in Computer Science, and, together with Matt Mason, the "Undergraduate Manipulation Laboratory" course. The Math course is a survey of useful numerical techniques, widely used throughout the applied sciences. The topics include interpolation, approximation, linear algebra techniques such as singular value decomposition, root finding, numerical solutions to differential equations, optimization, and Calculus of Variations. From year to year I add some additional topics, such as Markov Chains and Polynomial Resultants. The purpose of the course is to provide a basic understanding of numerical techniques for all the Robotics students. Each student also prepares a short report on some specialized topic, copies of which are handed out in class. The students seem to enjoy the course and work hard.
The Geometry course was intended to serve the needs of both undergraduate and graduate students, in particular to provide techniques for dealing with geometric problems both abstractly and computationally. Geometry seems to be a topic that is not taught very thoroughly at the high school level, beyond perhaps plane geometry. Yet geometry forms the core of many techniques in vision and robotics, and its coupling to algebra and to analysis is very tight, so this neglect seems like a shame. Consequently, I devote about half the semester to basic Differential Geometry. In a very precise sense, the topics really are generalizations of standard calculus, but now to higher dimensions, to curves, and to surfaces. Of course, the notion of differential forms is probably new to almost everyone. We also cover curvature, torsion, and the fundamental forms.
The second half of the course is devoted to Computational Geometry. We cover most of the basic techniques, such as query algorithms, convex hull algorithms, proximity algorithms (Voronoi diagrams), and plane-sweep. The emphasis in the second half of the course is on implementations. We also discuss complexities and different data structures, so the students get some feel for the tradeoffs that exist between different computational approaches to the same geometric problem. Again, these are fairly fundamental techniques that weren't previously being taught in a coherent, systematic fashion.
The Geometry course generally attracts about one-third undergraduates and two-thirds graduates. Homework consists of some theoretical work and a number of implementations. In addition, the graduate students are required to do a term project. I was a little worried that the graduate students would swamp out the undergraduates. I still have that worry, but it has been somewhat ameliorated. Class participation by the undergraduates has been good. Also, I have heard back from some of my former undergraduate students, who report using some of the computational geometry techniques in their current jobs.
The Laboratory Course is designed to give advanced undergraduate students an opportunity to work with state-of-the-art robotic equipment. With dedicated funds from NSF's DUE ILI program, and matching funds from the Robotics Institute, we acquired an ADEPT/GENEX flexible feeder robotic workcell, a SONY Advanced Parts Orientation System, and a second ADEPT arm and vision system. The students work fairly independently on small projects. To date, we have seen a diverse range of projects, including a chip sorter, a foam sculptor, a golf player, a Lego assembler, and robotic calligraphy.
Finally, I have contributed to undergraduate education at a more personal level by advising students working in the Manipulation Lab on individual research projects. The most exciting of these interactions were with Michael Leventon, who was visiting us from Cornell University, and with Craig Johnson, who helped me with the implementation of the palms system.
E. Future Plans
E.1. Research
I approach my research goals with the belief that autonomous robots exhibiting human-level performance are several hundred years off in the future. Consequently, I believe that my present effort should be to explore different forms of manipulation, sensing, and reasoning in simple though realistic and well-defined domains. The aim of this exploration is to discover general principles upon which robots may be built. My current set of motivating tasks is drawn largely from industrial sources. Typical tasks include part insertion, part assembly, part orienting, part positioning, and part recognition. I hope that my exploration of these tasks will lead to improved industrial algorithms and better design of robots for practical applications. Primarily, I see my job as an obligation to ask foundational questions, and to provide a forum in which others can explore the answers to these questions with me. The research directions described above, namely to explore the information requirements of robot tasks and to understand the basic mechanics of manipulation, have been, and continue to be, my best estimate of the directions of inquiry along which all the basic questions of robotics arise naturally.
Specific future research issues include:
Three-Dimensional Manipulation. Most manipulation strategies are planar, that is, they deal with three degrees of freedom. Manipulation of three-dimensional objects requires reasoning in spaces with six degrees of freedom. While some theoretical results exist along these lines, there are essentially no automated systems for manipulating three-dimensional objects. We are now slowly seeing signs that the Robotics community is turning its attention in this direction. One of my students, Mark Moll, and I are also exploring manipulation strategies in three dimensions. Again, taking our cue from industry, we are looking at parts orienters for non-planar objects. Our current strategy is to use probabilistic techniques. This ties in well with my earlier research on randomized robotic algorithms.
Methods of Manipulation. As I mentioned, most robot arms use force closure grasps to move parts. Yet much is to be gained by exploiting unconventional grasps. The palmar systems described above are two examples. I am currently exploring more exotic forms of manipulation. One of the lessons that comes out of my work on uncertainty and information is the need to match strategies to the task. This sounds so obvious, but is very difficult. Most manipulation strategies are very brittle, primarily because assumptions made during the analysis phase are not satisfiable, and thus the strategies are not properly tuned to the needs of the task. Strategies that are not brittle tend to be ones that try not to know too much about the world. It is better to let nature take care of the mechanics than to try to tune the mechanics actively. For instance, in order to move an object, it is easier to surround it with a box, then move the box, than it is to grasp and move the object precisely. I believe the same principle carries over to mechanical design. The old guarded moves, compliant motion strategies, passively-compliant wrists, etc., are all examples of this principle. Applied to Manipulation, I believe the principle suggests that we build fingers that are highly compliant and automatically match the shape of the object. We then need only worry about moving the fingers, not about the detailed interaction between the fingers and the object. Manipulating objects with saran-wrap fingers is my current theme.
E.2. Education
Good ideas should be brought into the classroom. New good ideas should be presented and dissected in graduate seminars. Ideas that have withstood the test of time a little bit should be taught to undergraduates. That goal was one of the motivations for the Undergraduate Manipulation Laboratory Course. I would like to expand the role of that course. One difficulty right now is that students are not prepared sufficiently to take the course until they are seniors. I think this is mainly a timing issue. I would like to make the equipment available to much younger students, sophomores or juniors. I may experiment with that idea in the future, perhaps by making the equipment available to the existing undergraduate manipulation course (CS384) for advanced projects.
I would like to introduce some geometry into the introductory computer science courses, say into CS212. When I taught the course in the past, I focused on computer architecture issues in showing the students applications of their parsing, computability, and data structure lessons. I think there are a host of other applications out there that will be increasingly important. The simplest geometric results, such as convex hull, are some of these.
Copyright © 1997 Michael Erdmann