Final Report

Real Time Motion Capture

Summer 2002

Motion capture is a popular tool for creating realistic motion. It has been used extensively in the entertainment industry; a good example is the movie “Final Fantasy,” in which most of the motions were created with motion capture. In general, motion capture is the process of recording the motion of a live subject and applying it to a virtual (i.e., computer-generated) subject. Motion capture eliminates many of the problems and difficulties found in traditional keyframe animation, but it has its limitations. For example, it is very difficult to modify a motion once it has been captured, so if a motion isn’t as good as the animator wanted, it will most likely be discarded and a new one captured. Overall, motion capture can be a reliable tool for creating realistic-looking motion quickly and easily.

 

My project focused on real-time motion capture. One attraction of real-time capture is that it lets capture subjects see their motions as they perform them. Until now, the person being captured could not view his or her motion until after it had been captured. With a real-time system, the generated motion is displayed while the subject is moving, so any improvements or changes can be made on the spot instead of reviewing the motion after the capture, deciding on changes then, and regenerating the whole motion. Being able to view motion in real time opens possibilities for future projects and applications here at CMU.

 

 

I’ve divided the project into two parts:

 

Part I: Doing real-time motion capture

 

Motion capture wasn’t a new concept in the graphics lab; several people had conducted offline (non-real-time) motion capture. However, nobody had tried real-time capture.

In order to understand real-time capture I first needed to understand how offline capture was done. I talked with the people who worked in the motion capture lab and asked them to show me how a motion capture session is conducted. During the first capture I was simply an observer, but later I became involved in the process by working with the motion capture software, called Workstation. This software is part of the motion capture system provided by Vicon.

 

It doesn’t take very long to learn to run a motion capture session. I sat through a couple of demos and was given a set of instructions written by another student research group. To learn more about the steps involved in a motion capture session, please see my Project page. After learning offline capture I started working on real time. This wasn’t as easy, because there was hardly any documentation available for real-time capture, so I ended up contacting the Vicon support group and requesting further information. The support team was very helpful and e-mailed me a document on conducting real-time capture. I spent a few days doing real-time captures. For my initial trials I used a simple stick with a total of 3 markers as my capture object. Using a stick was simpler than having a real person dress up in the motion capture suit every day for a whole week.

 

Doing the real-time captures wasn’t very difficult. I did run into some problems, mostly because some of the instructions weren’t very clear, but I believe the capture sessions went rather well. I conducted both offline and real-time captures of the stick and was able to view the resulting motions in Workstation to make sure the data was being captured correctly.

 

Part II: Importing real-time data into Maya

 

As I said earlier, the point of doing motion capture is to map the motion onto a computer-generated character. For my project all the models were created in Maya, a 3D modeling and animation tool. The model consisted of a simple human skeleton with 17 body parts. Each body segment is a Maya object with a number of attributes, such as position, orientation, scale, color, etc. For the model to be animated, the motion values coming from Vicon must be written into the position and orientation attributes of each body part.
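As a concrete illustration, here is a minimal sketch, using Maya’s C++ API, of how one frame of position and orientation data could be written into a transform node’s attributes. The helper name and the assumption that rotations arrive as Euler angles in radians are mine for illustration, not the actual interface of the Vicon plugin.

```cpp
#include <maya/MSelectionList.h>
#include <maya/MDagPath.h>
#include <maya/MFnTransform.h>
#include <maya/MVector.h>
#include <maya/MEulerRotation.h>

// Hypothetical helper: push one frame of motion data into a Maya transform.
// Assumes "segmentName" matches a node in the scene and that the rotation
// is an XYZ Euler rotation in radians (the real plugin may differ).
MStatus applySegment(const MString& segmentName,
                     const MVector& position,
                     const MEulerRotation& rotation)
{
    MSelectionList sel;
    MStatus status = sel.add(segmentName);   // look the node up by name
    if (!status) return status;

    MDagPath path;
    sel.getDagPath(0, path);

    MFnTransform xform(path, &status);
    if (!status) return status;

    // Write the incoming sample into the node's attributes.
    xform.setTranslation(position, MSpace::kTransform);
    xform.setRotation(rotation);
    return MS::kSuccess;
}
```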

 

To input the data values into Maya I needed a plugin. Initially I was going to write this plugin myself, but luckily I found that Vicon already had an existing plugin for communicating with Maya. I requested the plugin along with its source files and started testing it. As before, I used the simple stick as my capture object. During my captures I noticed that if I didn’t create a skeleton model in Maya, the plugin would automatically create small locator objects, one for each marker on the capture object. The locators in Maya seemed to be moving correctly in relation to the stick. However, if I created a skeleton model in Maya and tried to map the data onto it, it wouldn’t work. After numerous trials I found that the model I was creating had a hierarchy, while the data coming from Vicon wasn’t based on any hierarchy. In other words, the data coming from Vicon represented the absolute position and orientation of each body part relative to the world coordinate system, whereas Maya expected values relative to the parent segment: the position and orientation of the foot are relative to the lower leg, the lower leg relative to the upper leg, and so on.
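The mismatch can be stated compactly. In Maya’s row-vector convention a child’s world matrix is its local matrix times its parent’s world matrix, so the parent-relative matrix Maya expects can be recovered from the world-space data Vicon sends. A sketch of that conversion (my own illustration, not the plugin’s code):

```cpp
#include <maya/MMatrix.h>

// Vicon reports each segment's transform in world coordinates, while the
// attributes of a child node in Maya's hierarchy are parent-relative.
// With Maya's convention childWorld = childLocal * parentWorld, the local
// matrix Maya needs is therefore:
MMatrix worldToLocal(const MMatrix& childWorld, const MMatrix& parentWorld)
{
    return childWorld * parentWorld.inverse();
}
```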

 

This wasn’t a major problem, because I wasn’t interested in the position and orientation of the body segments but rather in the joint angles. Joint angles do not depend on the world coordinate system, so the same data can be mapped onto different characters even if they live in a different coordinate system, where raw position/orientation data would no longer be valid. Vicon did not calculate joint angles, so I needed to change the plugin’s source code to output joint angles instead of positions and orientations. To calculate a joint angle, I converted the orientations of the two body parts to quaternion representation, found the difference between the two quaternions, and from the result extracted the angle for the joint linking the two body parts. A more detailed explanation of this mathematical conversion is found on my Project page.
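The quaternion arithmetic itself is compact. Below is a self-contained sketch of the idea, assuming the two orientations have already been converted from Euler angles to unit quaternions (that conversion is omitted here); the angle of the joint linking parent and child falls out of the relative quaternion.

```cpp
#include <cmath>

// Minimal quaternion (w, x, y, z); illustrative only.
struct Quat { double w, x, y, z; };

// Hamilton product a * b.
Quat mul(const Quat& a, const Quat& b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Conjugate; for a unit quaternion this is the inverse.
Quat conj(const Quat& q) { return { q.w, -q.x, -q.y, -q.z }; }

// The rotation taking the parent's orientation to the child's:
// qJoint = qParent^-1 * qChild.  Its rotation angle is the joint angle.
double jointAngle(const Quat& parentWorld, const Quat& childWorld) {
    Quat rel = mul(conj(parentWorld), childWorld);
    double vecLen = std::sqrt(rel.x*rel.x + rel.y*rel.y + rel.z*rel.z);
    return 2.0 * std::atan2(vecLen, rel.w);   // radians, numerically stable
}
```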

 

Modifying the plugin so that it would calculate joint angles only took a few lines. The hard part was studying the code in the first place and trying to understand it, especially since there were 25 source files and hardly any of them were commented or documented. To test my new code I first hard-coded data values, because this made debugging much easier. After eliminating a number of bugs I ran my code with actual motion capture data. This time, however, the data wasn’t real time; it was generated by Tarsus, a Vicon application that cyclically reads a motion data file and streams the 3D data to client programs, Maya in my case. I used this emulation method to avoid having to do a live capture just to debug my program. As usual, nothing I do is free of bugs: the emulator was outputting data at a very high frame rate, and the motion didn’t look right. My model would be moving all over the screen, jumping from one side to the other and flickering most of the time. Fixing the emulator was very easy, but locating the problem took a number of days. After I fixed the emulator I tested my code once again, and it appeared to be working for the most part.
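The fix amounted to pacing the emulator’s output to the capture rate instead of letting it send frames as fast as the loop could run. Something along these lines captures the idea (my reconstruction, not the actual Tarsus code; the helper functions and the 120 Hz rate are assumptions):

```cpp
#include <chrono>
#include <thread>

struct Frame {};                          // stand-in for one frame of 3D data
bool  moreFramesAvailable();              // hypothetical: frames left in the file
Frame readNextFrame();                    // hypothetical: read one frame
void  sendFrameToClients(const Frame&);   // hypothetical: stream it to clients

// Pace playback to the capture rate rather than sending frames as fast
// as possible; 120 Hz is an assumed rate, not a Vicon constant.
void playbackLoop() {
    using clock = std::chrono::steady_clock;
    const auto framePeriod = std::chrono::duration_cast<clock::duration>(
        std::chrono::duration<double>(1.0 / 120.0));

    auto next = clock::now();
    while (moreFramesAvailable()) {
        sendFrameToClients(readNextFrame());
        next += framePeriod;
        std::this_thread::sleep_until(next);   // wait until the next frame is due
    }
}
```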

 

Unfinished Work

As the project stands now, there are some problems with animating the hands and the feet. The feet, for example, are always oriented at an odd angle, even when the subject is standing still. When I ran a walking motion on the model in Maya I detected some inconsistency in the movement of the feet. I am not sure whether it’s simply bad data coming from Vicon or a bug in my program. On my last day I will be running more tests to see if I can locate the problem.

 

What I Learned

During last year’s internship I read about motion capture and wanted to learn more about it. Unfortunately my university (the University of Utah) did not have a motion capture system, so coming to Carnegie Mellon University was my chance to learn about it. Overall it was a very exciting and interesting experience. I learned about concepts in computer graphics and animation that I wasn’t familiar with before. As for research itself, I realized that it can be very stressful and frustrating at times, especially when you find yourself at a dead end with nobody around who can help. When you find yourself stuck, the first thing to do is ask around and see if anyone can help; knowing whom to ask is itself tricky. A second approach is to go back to books and published papers. For this project I needed to study Euler angles and quaternion representation, and I spent some time going over math and graphics books. In the end, the best feeling is when you get some part of your project to work; this gives you a sense of accomplishment and encourages you to go on. One hard lesson was to be patient and never give up. Research isn’t always clearly defined: you have a goal in mind, but you’re not sure how to reach it, or whether you can reach it at all. Because of the nature of research, frustration and disappointment are part of the game, and you have to be prepared to face failure and keep going.