Computing That Serves

Advanced 3D Graphics

Dr. Parris Egbert

Students working with Dr. Parris Egbert in the Advanced 3D Graphics Laboratory have pioneered work in a variety of areas within the field of 3D animation.

David Cline is one of those students. In the course of his doctoral dissertation research, he has developed revolutionary new ways to introduce light into computer-generated images.

Light is as critical in computer-generated animation as it is in filming and photography, yet it is extremely difficult to recreate in a realistic manner. Programs created by Cline and Dr. Egbert in the lab, however, simulate the movement of light from various sources in computer-generated scenes, creating a photorealistic image. This "virtual photography" allows animators, graphic artists, and computer scientists to create digital images that look astonishingly real, as if one had captured the scene with a camera rather than with bits and bytes.
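At its simplest, simulating light means computing how much illumination reaches each surface point and how the surface reflects it. The sketch below shows the most basic building block, Lambertian (diffuse) shading, which renderers like those described above extend with bounced, indirect light; the function names and values are illustrative, not the lab's actual code.

```python
# Minimal sketch of diffuse (Lambertian) shading: a surface's brightness
# falls off with the cosine of the angle between its normal and the
# direction to the light. All names and constants are illustrative.

def normalize(v):
    """Scale a 3-vector to unit length."""
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)

def lambert_shade(normal, light_dir, light_intensity, albedo):
    """Reflected brightness at a surface point lit from light_dir."""
    n = normalize(normal)
    l = normalize(light_dir)
    cos_theta = max(0.0, sum(a * b for a, b in zip(n, l)))
    return albedo * light_intensity * cos_theta

# A surface facing the light head-on receives full intensity...
print(lambert_shade((0, 1, 0), (0, 1, 0), 1.0, 0.8))   # 0.8
# ...while one facing away from the light receives none.
print(lambert_shade((0, 1, 0), (0, -1, 0), 1.0, 0.8))  # 0.0
```

A photorealistic renderer repeats a computation like this for every visible point, then adds the light arriving indirectly after bouncing off other surfaces, which is where most of the difficulty (and realism) lies.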
Other research in Dr. Egbert's Advanced 3D Graphics Lab has produced advances in computer animation. Students in the lab have created programs that allow graphic artists to automatically generate texture on a model in minutes, a process that could otherwise take hundreds of hours. One such program was used in BYU's animated short, "Noggin." The texture on Noggin and the other creatures in the film, such as the hairs on their arms and chests, was created through a process known as "hatching." Attempting this level of minute detail by hand would take a team of artists hours; the program used in "Noggin," however, turns this tedious task into a relatively simple automated process.

Other animation-related research in the Advanced 3D Graphics Lab is venturing into the realm of machine learning. Using "cognitive modeling," students can generate animated scenes in which characters move autonomously and make decisions as a human would. Animators can thereby quickly create and control hundreds of characters for group scenes, rather than spending hours painstakingly creating each figure and directing its animation.
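The core idea behind hatching is that stroke density follows surface tone: darker regions get more strokes. The toy sketch below shows only that density rule; real hatching systems also orient strokes along surface curvature, and every name and parameter here is hypothetical rather than taken from the lab's software.

```python
# Toy illustration of the density rule behind "hatching": darker tones
# get more strokes. tone=0.0 is black (densest), tone=1.0 is white (none).
# All names and parameters are hypothetical.

def hatch_lines(tone, width, max_lines=20):
    """Return x-positions of evenly spaced vertical hatch strokes
    across a strip of the given width, based on its tone."""
    count = round((1.0 - tone) * max_lines)
    if count <= 0:
        return []
    spacing = width / count
    # Center each stroke within its slice of the strip.
    return [spacing * (i + 0.5) for i in range(count)]

print(len(hatch_lines(0.0, 10.0)))   # 20 strokes for black
print(len(hatch_lines(0.75, 10.0)))  # 5 strokes for a light tone
print(hatch_lines(1.0, 10.0))        # [] -- white needs no strokes
```

Applied per region of a shaded model, a rule like this turns hours of manual stroke placement into an automatic step.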

Realistic fluid flow simulation, another important feature of many types of computer graphics and animation, is also being tackled by Dr. Egbert and his students in the lab. The motion picture industry, for example, is interested in producing photorealistic computer-generated sequences for scenes requiring water, mudslides, blood, lava flow, honey, or any other fluid desired.

The computer gaming industry is also looking for efficient mechanisms to produce convincing fluid flow that can be easily interacted with in video game scenes featuring, for example, mud bogs, toxic spills, water, and other liquids.

And the need to produce realistic fluid flow in computer graphics extends beyond the entertainment industry. Simulating a toxic spill into a river, lake, or ocean can provide insight into minimizing damage and cost in cleanup efforts. Similarly, hydrologists can use computer generated liquids to simulate the effects of run-off to prevent potentially dangerous fertilizers and chemicals from entering our water supply.

Unfortunately, the methods and algorithms used in the past to represent fluid flow are time-consuming and difficult to produce: hundreds of man-hours are required for a single photorealistic scene. To combat these problems, the lab is conducting research into algorithms and techniques for simulating photorealistic viscous fluid flow. Using a particle-based solution and adaptive time steps, the researchers have produced photorealistic simulations of many types of liquids more quickly and efficiently than was possible using traditional techniques.
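The two ideas named above can be sketched in their simplest form: fluid is represented as particles, and the integration step shrinks whenever particles move fast, so no particle travels more than a fraction of the particle spacing per step (a CFL-style rule). This sketch drops the pressure and viscosity forces a real particle-based solver would compute; every name and constant is illustrative, not the lab's actual algorithm.

```python
# Particles fall under gravity; the time step adapts to the fastest
# particle so motion stays stable. State is a list of (height, velocity)
# pairs. All constants are illustrative.

GRAVITY = -9.8   # m/s^2, acting along the vertical axis
SPACING = 0.1    # nominal inter-particle spacing h
CFL = 0.4        # max fraction of h a particle may travel per step

def adaptive_dt(particles, dt_max=0.01):
    """Largest step that keeps per-step motion under CFL * SPACING."""
    max_speed = max((abs(v) for _, v in particles), default=0.0)
    if max_speed == 0.0:
        return dt_max
    return min(dt_max, CFL * SPACING / max_speed)

def step(particles, dt):
    """Symplectic Euler: update velocity first, then position."""
    return [(y + (v + GRAVITY * dt) * dt, v + GRAVITY * dt)
            for y, v in particles]

# Drop two particles and watch the step size shrink as they speed up.
particles = [(1.0, 0.0), (2.0, 0.0)]
t = 0.0
while t < 0.5:
    dt = adaptive_dt(particles)
    particles = step(particles, dt)
    t += dt
```

The payoff of the adaptive rule is that calm stretches of a simulation take large, cheap steps while violent ones automatically refine, which is one reason a particle-based approach can outrun fixed-step techniques.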

Dr. Egbert and his students are also creating algorithms and techniques for quickly generating and navigating virtual environments. The ability to create photorealistic virtual environments is in high demand as a variety of industries recognize its value. Architectural firms, advertising agencies, and tourist bureaus, wishing to entice potential clients, are interested, as are the computer gaming and motion picture industries. Defense organizations, which are interested in creating accurate virtual environments for training and tactical purposes, are also looking for the technology.

In response, Dr. Egbert and his students have developed algorithms for cross-network rendering of multi-gigabyte environment databases in real time on PC-class machines. In addition, they have developed new techniques to seamlessly morph images together, as well as new culling and level-of-detail algorithms that reduce rendering time and associated costs and improve the quality of the virtual environments themselves.
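Culling and level of detail both cut rendering cost: culling skips objects outside the camera's field of view entirely, and level of detail draws distant objects with coarser meshes. The sketch below uses a simple view cone and distance thresholds as stand-ins; the real algorithms are more sophisticated, and all names and numbers here are illustrative.

```python
# Two classic rendering-cost reductions in miniature: a cone-based
# visibility test and distance-based level-of-detail selection.
# All parameters are illustrative.
import math

def visible(cam_pos, cam_dir, obj_pos, fov_deg=90.0):
    """Cone test: is the object within fov_deg of the (unit-length)
    view direction? Objects failing this test are culled, not drawn."""
    dx = [o - c for o, c in zip(obj_pos, cam_pos)]
    dist = math.dist(obj_pos, cam_pos)
    if dist == 0.0:
        return True
    cos_angle = sum(a * b for a, b in zip(cam_dir, dx)) / dist
    return cos_angle >= math.cos(math.radians(fov_deg / 2))

def lod_level(cam_pos, obj_pos, thresholds=(10.0, 50.0, 200.0)):
    """Pick a mesh resolution: 0 = full detail, larger = coarser."""
    dist = math.dist(obj_pos, cam_pos)
    for level, limit in enumerate(thresholds):
        if dist < limit:
            return level
    return len(thresholds)

cam, forward = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
print(visible(cam, forward, (0.0, 0.0, 30.0)))   # True: straight ahead
print(visible(cam, forward, (0.0, 0.0, -30.0)))  # False: behind camera
print(lod_level(cam, (0.0, 0.0, 30.0)))          # 1: mid-distance mesh
```

In a multi-gigabyte environment database, tests like these run per object per frame; the triangles they avoid drawing are what make real-time navigation on PC-class hardware feasible.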

Current and future work on this project includes mapping and navigating the entire earth and adding environmental conditions such as clouds, snow, rain, and sun. Additional detail comes from local, higher-resolution images, making the system easily adaptable to, and usable by, a broad spectrum of applications.