By Daniel Kane, (858) 534-3262
San Diego, CA, September 1, 2006 -- For a group of engineers and computer scientists at UCSD, trips to a campus convenience store are crucial for more than the caffeine and sugar necessary to fuel the next breakthrough.
The convenience store itself provides an important testing environment for the development of a seeing, reading and navigating tool that will allow the visually impaired to shop for groceries independently.
Serge Belongie, a professor in the Computer Science and Engineering department at UCSD's Jacobs School of Engineering and a Calit2 collaborator, leads the research team. The multidisciplinary team includes Calit2 researcher John Miller and others at the Jacobs School, Calit2 and the University of Kentucky.
If the researchers receive the necessary funding and their prototypes perform as expected, portable devices that “see” and help the visually impaired navigate the environment and find objects and locations could be available for use in a wide range of settings, including airports and bookstores.
The project, however, is still in its early phases. In the next few months, the team will find out whether it will be awarded a three-year, $1.35 million grant from the National Institute on Disability and Rehabilitation Research to continue the project.
This summer, Belongie has been spreading the word about the computer vision project.
“People have been very positive. They are happy to see computer vision applications being designed to help people,” Belongie said.
How can an object the size of a cell phone help someone who has vision problems shop for groceries?
First, the person using the device will create a shopping list at home using software for the visually impaired. Images of the products on the list, along with their barcodes, will then be downloaded to a handheld device called a MoZi box.
Next, the MoZi box will help the person navigate their way from home to the grocery store. A group at the University of Kentucky is developing the outdoor navigational aspects of the technology.
If Tide laundry detergent were on the list, the MoZi box would alert you if you walked past the aisle labeled “laundry, household cleaning, dish detergent.”
The MoZi box isn’t exactly reading the aisle labels. Instead, it spots words in the scene and runs them through a spell-check to pick out the ones relevant to grocery shopping.
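One way to sketch this filtering step (the article does not describe the actual MoZi algorithm, and the grocery vocabulary below is a hypothetical stand-in) is to fuzzy-match each word the camera finds against a small lexicon of grocery terms:

```python
import difflib

# Hypothetical lexicon of grocery-relevant aisle words; the real
# MoZi vocabulary is not specified in the article.
GROCERY_LEXICON = ["laundry", "household", "cleaning", "dish",
                   "detergent", "cereal", "snacks", "produce", "dairy"]

def filter_relevant(ocr_words, cutoff=0.8):
    """Keep only words that spell-check to a grocery term,
    correcting small OCR errors along the way."""
    matches = []
    for word in ocr_words:
        close = difflib.get_close_matches(word.lower(), GROCERY_LEXICON,
                                          n=1, cutoff=cutoff)
        if close:
            matches.append(close[0])
    return matches

# "laundrv" and "detergant" are simulated OCR misreads;
# "exit" is irrelevant signage and gets dropped.
print(filter_relevant(["laundrv", "detergant", "exit"]))
```

This prints `['laundry', 'detergent']`: misread words snap to their nearest grocery term, while text that matches nothing in the lexicon is discarded.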
Once you’re in the right aisle, the MoZi box scans the aisle for objects that look like specific products on your list.
It is unlikely that MoZi’s view of the box of laundry detergent on the shelf would be exactly the same as a picture of the product from your computer. To improve the chances that the MoZi box will identify the product you want, the researchers find as many different pictures of the items on the grocery list as possible and then crunch and stretch the images. These collections of images provide the MoZi box with the information necessary to do what the human visual system does exceptionally well: recognize objects even though they might be slightly different from how you remember them.
For example, a bag of your favorite cookies looks like your favorite cookies, even if the bag is crumpled or sitting at an unexpected angle.
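The "crunch and stretch" idea can be sketched as generating rescaled variants of a reference product image to match against. The toy version below (an assumption for illustration; the article gives no implementation details) represents an image as a 2-D grid of pixel values and resizes it with nearest-neighbor sampling:

```python
def rescale(image, new_h, new_w):
    """Nearest-neighbor rescale of a 2-D list of pixel values,
    used here to generate 'crunched' and 'stretched' variants
    of a reference product image."""
    old_h, old_w = len(image), len(image[0])
    return [[image[i * old_h // new_h][j * old_w // new_w]
             for j in range(new_w)]
            for i in range(new_h)]

# A tiny 2x2 stand-in for a product photo.
ref = [[0, 1],
       [2, 3]]

# Distorted variants of the same reference image: stretched wide,
# stretched tall, and enlarged.
variants = [rescale(ref, h, w) for h, w in [(2, 4), (4, 2), (3, 3)]]
```

Matching a shelf image against many such variants, rather than one canonical photo, makes recognition more tolerant of the odd angles and crumpled packaging the next paragraph describes.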
Once the MoZi box has found an image that looks like a match, it will scan the barcode and verify the identity of the object.
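The article doesn't say which barcode format the device reads; assuming UPC-A, the 12-digit code standard on US grocery products, the verification step can be sketched as a check-digit test: three times the sum of the digits in odd positions, plus the digits in even positions, must be a multiple of 10.

```python
def upc_a_valid(code):
    """Verify a 12-digit UPC-A barcode via its check digit:
    3 * (digits in odd positions) + (digits in even positions)
    + check digit must be divisible by 10."""
    if len(code) != 12 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    total = (3 * sum(digits[0:11:2])   # positions 1, 3, ..., 11
             + sum(digits[1:11:2])     # positions 2, 4, ..., 10
             + digits[11])             # check digit
    return total % 10 == 0

print(upc_a_valid("036000291452"))  # a well-known valid UPC-A example
```

A scan whose checksum fails is rejected, so a misread barcode won't falsely confirm a product match.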
When you’re done shopping, the MoZi box will help you get to the register and pay, with the assurance that you are handing the cashier a five-dollar bill and not a fifty.
And speaking of money, even if the researchers’ grant application is not funded, “We’ll just keep trying. There is a lot of good will here. We’ll keep the project going on a shoestring until we get real funding,” Belongie said.