Chris Hardee

Health Care

When Animation Meets Simulation

Movie-making tools help drive a virtual product evaluation using Abaqus FEA.

Published: Thursday, September 3, 2009 - 05:30

As moviegoers, we have all seen a wide range of animation—from early Disney features such as “Snow White,” to Japanese anime, to Pixar’s “Toy Story,” to an assortment of recent blockbusters that seamlessly integrate animation with real actors. With each release, the movie magic gets more amazing as animated characters such as the Incredible Hulk or Gollum in “The Lord of the Rings” take on lifelike qualities and realistic human facial expressions. How in the world do filmmakers do it?

As animators know all too well, the human face is one of the most difficult objects to model realistically. A flexible layer of skin covers a complex array of muscles and bones, producing a seemingly endless number of subtle facial expressions that are an important component of how we communicate. Technology for blending live action with special effects has pushed the animation field into realms hardly imagined just a few years ago, as animators use computer-based physics in much the same way that design engineers use realistic simulation.

How animators model faces

Evolving beyond Disney and hand-drawn cel animation, one of the earliest computerized approaches filmmakers used to generate human faces was called key framing. This method involved linear transformations from one face mesh to another, but the computations were extensive and the data sets large. Another method was to model the human face as a parametric surface and record movements only at specific control points, but this process was difficult to generalize across different face meshes.
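
The in-between frames of a key-framed sequence are, in essence, a per-vertex linear blend between two stored face meshes. The short Python sketch below illustrates that idea with stand-in data; the mesh sizes and arrays are hypothetical, not drawn from any production pipeline.

import numpy as np

# Illustrative stand-in data: two face meshes with identical vertex ordering.
n_vertices = 5000
neutral = np.random.rand(n_vertices, 3)                    # (x, y, z) per vertex
smile = neutral + 0.01 * np.random.randn(n_vertices, 3)    # a slightly deformed copy

def blend(mesh_a, mesh_b, t):
    """Linearly interpolate every vertex between two meshes (0 <= t <= 1)."""
    return (1.0 - t) * mesh_a + t * mesh_b

# Generate 24 in-between frames. Storing and transforming every vertex of every
# frame is what made early key framing computationally heavy and data-hungry.
frames = [blend(neutral, smile, t) for t in np.linspace(0.0, 1.0, 24)]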

Next came marker-based motion capture, a system that is now used widely. With this method, 30 to 200 reflective markers are applied to an actor’s face, and an array of cameras captures the facial movements and triangulates each marker’s location. The marker method, however, still doesn’t provide enough resolution to fully capture the subtleties of human facial movement. The latest markerless performance-capture technologies, by contrast, are now producing facial animation at much higher resolution.

How simulation analysts model faces

Motion capture animation isn’t just for making movies. “Representing the positions and movements of the human face is a big challenge in designing some of our products,” says Chris Pieper (see figure 4), associate research fellow at Kimberly-Clark Corp. in Neenah, Wisconsin, a global health and hygiene company. Although the company is best known for household brands such as KLEENEX and HUGGIES, it also manufactures dust masks and particle respirators worn by professionals and do-it-yourselfers involved in woodworking, machining, and other activities that create by-products that are unhealthy to breathe. The design challenge is to make a mask that’s comfortable and at the same time maintains an airtight seal against the changing shape of the face.

For Pieper and his engineering team, the simulation problem was to represent a moving deformable surface—a face—in contact with a flexible object—a dust mask. “We’re not worried about the strain of the materials in the product,” says Pieper. “However, it’s crucial that the mask conform to the face. The contact pressure between the mask and the face is very important to the proper function of the product and the comfort of the user.”

Pieper, who was familiar with motion-capture methodologies, thought that he could adapt techniques from the entertainment industry to the product development process.

From animation to simulation: the production process

To demonstrate that high-resolution motion-capture data could be used for virtual product design, Pieper and his group decided to do a proof-of-concept study, choosing Abaqus finite element analysis (FEA) software from SIMULIA, the Dassault Systèmes brand for realistic simulation. “Abaqus is well suited for studying soft, flexible structures with complex geometry in contact,” says Pieper. “The general contact feature makes problem setup easy and solutions stable.” Pieper used the submodeling capability in Abaqus as the basis for a simulation that allows a complex moving boundary (in this case, a moving face) to interact with another structure (the mask).

For his analysis, Pieper drew from the computer-generated animation world. He selected Contour Reality Capture, a high-fidelity performance capture technology from Mova LLC. This California-based company recently used its technology to capture the facial movements of actor Edward Norton to animate the face of the green superhero in the 2008 release “The Incredible Hulk.” The Mova system uses an array of cameras—much like contemporary marker-based systems—but also incorporates a stroboscopic fluorescent lighting setup, phosphorescent makeup, and images captured in both color and gray scale coordinated with the pulses of light. The result is 100,000 3-D points at 0.1 mm accuracy: resolution high enough to realistically recreate human facial movements, along with a photographic image of the face captured at the same time.

Dust mask meets human skin: The plot thickens

The first step in creating a moving facial model for the dust-mask study involved extracting surface point position data from a lower-resolution sample set of facial motion-capture data provided to Pieper’s group by Mova. For easy information transfer, Pieper asked for the data in an open-source format called C3D—a binary file format used in biomechanics, animation, and gait analysis laboratories to record synchronized 3-D and analog data.
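
The article doesn’t show the extraction step itself, but a minimal version of it might look like the following sketch, assuming the open-source ezc3d Python package (the actual tool Pieper’s team used is not stated) and a hypothetical file name.

import numpy as np
import ezc3d   # open-source C3D reader; an assumed choice, not necessarily the team's tool

# "face_capture.c3d" is a hypothetical file name standing in for the Mova sample set.
capture = ezc3d.c3d("face_capture.c3d")

points = capture["data"]["points"]                 # shape (4, n_points, n_frames): x, y, z, residual
labels = capture["parameters"]["POINT"]["LABELS"]["value"]
rate = capture["parameters"]["POINT"]["RATE"]["value"][0]

xyz = points[:3]                                   # drop the residual row
initial = xyz[:, :, 0].T                           # frame 0: (n_points, 3) undeformed nodal positions
displacements = xyz.transpose(2, 1, 0) - initial   # (n_frames, n_points, 3) motion relative to frame 0

print(len(labels), "points,", xyz.shape[2], "frames at", rate, "Hz")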

After extracting the data from the file, the engineering team took the initial positions of the surface points, defined them as nodes, and completed the finite element definitions using Geomagic surfacing software to establish nodal connectivity. The team used a Python program to write the nodes and elements to an Abaqus input file so that they could be imported as an orphan mesh part. Using the orphan mesh as the basis for a minimal model definition, they then added a step definition and generated a sparse output database (ODB).
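
The article doesn’t reproduce that Python program, but the node-and-element writing step could be sketched roughly as follows; the element type and set name are assumptions.

def write_orphan_mesh(path, nodes, elements):
    """Write nodes and triangle connectivity as an Abaqus input file (orphan mesh).

    nodes:    iterable of (x, y, z) positions, implicitly numbered 1..N
    elements: iterable of (n1, n2, n3) node labels from the surfacing step
    """
    with open(path, "w") as inp:
        inp.write("*NODE\n")
        for label, (x, y, z) in enumerate(nodes, start=1):
            inp.write("%d, %.6f, %.6f, %.6f\n" % (label, x, y, z))
        # S3R is an assumed 3-node shell element type; the study's actual
        # element choice isn't given in the article.
        inp.write("*ELEMENT, TYPE=S3R, ELSET=FACE\n")
        for label, (n1, n2, n3) in enumerate(elements, start=1):
            inp.write("%d, %d, %d, %d\n" % (label, n1, n2, n3))

# For example, write the undeformed face from the first motion-capture frame:
# write_orphan_mesh("face_mesh.inp", initial, connectivity)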

“The Abaqus ODB served as a kind of containment bucket for us,” Pieper says. “We added all the displacement data to it to create a global model.” They then used the global model to drive a submodel representing a human face undergoing a range of expressions and motions. The global ODB was completed by adding nodal displacements using the Abaqus Python scripting interface. To verify that all data was converted correctly, the team viewed the updated ODB as an animation using Abaqus (see figure 1). “Completing the global facial model was a big step all by itself,” Pieper notes.
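
A rough idea of what populating the sparse ODB through the Abaqus Python scripting interface might look like is sketched below (run with "abaqus python"); the ODB, step, and instance names are assumptions, and "displacements" refers to the per-frame array from the earlier extraction sketch.

# Run with "abaqus python": populate the sparse global ODB with per-frame displacements.
from odbAccess import openOdb
from abaqusConstants import NODAL, VECTOR

odb = openOdb("global_face.odb", readOnly=False)      # assumed ODB name
instance = odb.rootAssembly.instances["FACE-1"]       # assumed instance name
step = odb.steps["Step-1"]                            # assumed step name
labels = tuple(node.label for node in instance.nodes)

for i, frame_disp in enumerate(displacements):        # one (n_points, 3) array per capture frame
    frame = step.Frame(incrementNumber=i, frameValue=float(i))
    field = frame.FieldOutput(name="U", description="Displacement", type=VECTOR)
    field.addData(position=NODAL, instance=instance, labels=labels,
                  data=[tuple(row) for row in frame_disp])

odb.save()
odb.close()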

Figure 1: Visualizations of several frames from the updated output database, showing deformed shapes of the deformable surface (face) at points in time

The engineering team next used the global model to drive the moving surface portion of the submodel, which included both the face and the virtual representation of the dust mask (see figure 2). As a final step in creating the finite element model, they added a submodel boundary condition and additional loads (including a pressure load on the nose piece and an inhaling load on the inner surfaces of the mask) to the moving portion of the facial structure and to the mask. Now the model was ready to run.
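
The article doesn’t list the exact load definitions, but the additions it describes (a submodel boundary condition on the face, plus pressure and inhalation loads on the mask) might appear in the submodel input deck roughly as follows; the set and surface names and load magnitudes are placeholders, not values from the study.

# Keyword lines appended to the submodel input deck: the driven face nodes,
# the submodel boundary condition, and illustrative distributed pressure loads.
submodel_keywords = """\
*SUBMODEL, TYPE=NODE
DRIVEN_FACE_NODES
*STEP
*STATIC
0.01, 1.0
*BOUNDARY, SUBMODEL, STEP=1
DRIVEN_FACE_NODES, 1, 3
*DSLOAD
NOSEPIECE_SURF, P, 0.002
MASK_INNER_SURF, P, 0.0005
*END STEP
"""

with open("mask_submodel.inp", "a") as inp:
    inp.write(submodel_keywords)

# The global results that drive the submodel are named at run time, e.g.:
# abaqus job=mask_submodel globalmodel=global_face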

Figure 2: Kimberly-Clark Professional Duckbill dust mask (real and simulated)

Simulation results get good reviews

Pieper and his team used an HP workstation with an Intel 32-bit Windows XP platform for pre- and post-processing and a 64-bit Linux cluster for the simulations. Post-processing revealed several regions that exhibited gapping between the mask and the face, such as the areas of greatest curvature around the nose. This was evidenced by gaps in the contact pressure contours (see figure 3), suggesting the need for design changes. “More than just the results,” Pieper points out, “this demonstrates how simulation gives designers the means to rapidly evaluate the benefits of each alternative. We look to these simulations to help us narrow the field of design possibilities, so that when we do testing with human subjects, we are only looking at the design finalists. That can really shrink the product design cycle.”
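
That kind of post-processing check, finding face nodes where the contact pressure drops to zero, can also be scripted. The following sketch assumes the contact pressure output (CPRESS) was requested and that the ODB and step names match those assumed in the earlier sketches.

# Run with "abaqus python": flag face nodes where the mask has lifted off
# (near-zero contact pressure) in the last frame of the submodel results.
from odbAccess import openOdb

results = openOdb("mask_submodel.odb")
last_frame = results.steps["Step-1"].frames[-1]
cpress = last_frame.fieldOutputs["CPRESS"]

gap_nodes = [v.nodeLabel for v in cpress.values if v.data < 1e-6]
print("Nodes with no mask-to-face contact:", len(gap_nodes))
results.close()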

Figure 3: Contact pressure contours as an estimate of sealing effectiveness of dust mask on face at various points in time

Figure 4: Chris Pieper

Pieper appreciates the role that simulation can play when used to control design features and variables. “This type of product evaluation is extremely difficult using real human subjects and physical measurements. You rarely get this high level of control outside of a simulation,” he adds.

While the dust mask simulation was a feasibility study and has not yet been fully validated, Pieper sees the future value of marrying motion-capture with simulation to model what he calls living surfaces—complex moving surfaces that are not easily described mathematically. “The technique provides a new way of representing a complex moving surface as a boundary condition or constraint in a simulation,” he concludes. “This methodology will certainly be useful and feasible for applications that haven’t even been considered yet.” In other words, a sequel is already likely in production. 

About The Author

Chris Hardee

Chris Hardee is a writer, public relations specialist, multimedia producer, and marketing consultant who focuses on science and technology topics and organizations.