By Wallace Ravven, UCOP
Monday, April 23, 2012 -- UC San Diego experts in 3-D imaging, led by Calit2 research scientist Jürgen Schulze, refine virtual-reality devices to generate detailed and reliable models of spaces, people and objects. These will be valuable tools to other researchers, including archaeologists, as Wallace Ravven reports for the UC Research website:
In the desert south of the Jordanian capital of Amman, archaeologists aim to learn the social status of workers and craftsmen in different ancient cultures. Was manual labor the work of slaves? Was tool-making a respected profession?
The archaeologists take high-resolution pictures of their excavated sites and fly a weather balloon for aerial views. Still, the maps and images have their limits. Researchers want to develop accurate 3-D models of the sites and objects within them. They are counting on colleagues halfway around the world to ramp up their desert progress.
Experts in three-dimensional imaging at UC San Diego’s California Institute for Telecommunications and Information Technology, or Calit2, already are at work refining virtual-reality devices to generate detailed and reliable 3-D models of spaces, people and objects. Their tools: hand-held, off-the-shelf devices and software that they adapt to the task.
“We are trying to advance imaging technology, but using devices that are already at hand,” says Jürgen Schulze, a Calit2 research scientist and leader of the effort.
Calit2 graduate students can quickly produce 3-D scans of objects with an inexpensive scanning system based on a Microsoft Kinect camera from the Xbox video game console. The advantage for researchers in the field is great portability. But the downside is that the user doesn’t know the exact orientation of each scan, so it’s hard to precisely seam the 3-D images together.
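The article doesn't describe the team's software, but the stitching problem it mentions comes down to this: each scan is a point cloud in the sensor's own coordinate frame, and merging scans requires knowing each scan's pose. A minimal illustrative sketch (all names are hypothetical, and real systems must also estimate the poses, e.g. by tracking the Kinect):

```python
import numpy as np

def merge_scans(scans_with_poses):
    """Merge several 3-D scans into one point cloud in world coordinates.

    scans_with_poses: list of (points, R, t) tuples, where points is an
    (N, 3) array in the sensor's local frame, R is the 3x3 rotation and
    t the 3-vector translation giving the sensor's pose in the world.
    Without R and t for each scan -- the "unknown orientation" problem
    the article describes -- the clouds cannot be seamed together.
    """
    merged = []
    for points, R, t in scans_with_poses:
        # Transform each local point into the shared world frame.
        merged.append(points @ R.T + t)
    return np.vstack(merged)
```

This is why tracking the Kinect's position matters: the virtual-reality system supplies R and t for each scan, turning independent clouds into one consistent model.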
“We need our virtual reality system to track the Kinect’s position,” Schulze says. “In the future, we want to enable archaeologists to use tools such as this outside, and in unfamiliar sites.”
The UC San Diego team wants to modify the tracking ability of the instruments so they can generate much higher resolution images of objects and refine their 3-D accuracy. As with Kinect, the ideal devices would be hand-held.
Now, where can you find such a device? The team is thinking of using smartphones and tablets, but they’re making them smarter. They’re developing algorithms to enhance the smartphone’s geographic positioning ability. The goal is to pinpoint both the user and objects being studied on site down to a scale of inches.
“We’re combining a variety of sensors so the system can be handheld,” Schulze says. The approach uses a smartphone’s accelerometer, gyroscope and GPS capabilities, and analyzes pictures taken by the phone’s camera from different positions or of different parts of an object or site.
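The article doesn't detail the team's fusion algorithms, but a standard textbook way to combine a gyroscope (smooth but drifting) with an accelerometer (noisy but drift-free) is a complementary filter. A minimal sketch of one update step, with all parameter names and the 0.98 weighting chosen purely for illustration:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update of a complementary filter for orientation estimation.

    angle:       current tilt estimate in radians
    gyro_rate:   angular velocity from the gyroscope (rad/s);
                 integrating it is smooth but drifts over time
    accel_angle: tilt inferred from the accelerometer (radians);
                 noisy per-sample but anchored to gravity
    alpha:       weight favoring the integrated gyro estimate
    """
    # Trust the gyro for short-term motion, the accelerometer long-term.
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle
```

Run repeatedly, the estimate follows quick rotations via the gyroscope while the small accelerometer term slowly pulls out accumulated drift; GPS and camera images can correct position the same way they correct orientation here.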
One key advantage of the new technology is that scans can be examined on a computer in real time, allowing users to fill in “blank spots” without having to make a return visit.
“We want to leverage imaging technology so that scientists can immediately verify the data they capture,” Schulze says.
Some of the new capabilities have already been adopted by architects and structural engineers, Schulze says. The research already enables real-time 3-D views and measurements of simulated shock waves generated by earthquakes.
The field is developing quickly. The Calit2 team is advancing the technology beyond what is commercially available. Much of the Calit2 work is privately funded, with the university owning the intellectual property for algorithms and devices. Funders get the right of first refusal to license the algorithms, Schulze says.
In the StarCAVE, one of Schulze’s students is working on a virtual-reality user interface based on Kinect to allow scientists to view complex data in more intuitive ways. Some day, Schulze says, this interface might make its way back into the living rooms and entertainment rooms where Kinect began.