Urban Location Recognition on Mobile Device
The goal of this project is to let a mobile device (such as a smartphone)
estimate its own position inside a city by analyzing what it sees with the
built-in camera. The main motivation is to use this as a part of a larger
augmented reality application where a user can look "through" the mobile
device and see useful information superimposed onto the view.
The main ingredients we use are:
- Large database of images collected by a capturing vehicle driving systematically through all streets (similar to Google's Street View)
- Coarse building geometry (footprints and building heights), which allows generating gravity-aligned orthophotos
- Vanishing point detector to rectify query images
- Non-rotation-invariant feature descriptors for increased discriminative power ("upright" SIFT)
- Quantization using a Vocabulary Tree and indexing using Inverted Files for fast retrieval (see the retrieval sketch after this list)
- A specialized geometric verification scheme that needs only one feature correspondence, for improved robustness and speed (see the verification sketch after this list)
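
The following is a minimal sketch of the retrieval step, assuming a vocabulary tree built by hierarchical k-means and a simple inverted-file scoring. The class name, the parameters `branch_factor` and `depth`, and the unweighted term-count scoring (rather than the full TF-IDF scheme) are illustrative assumptions, not the exact implementation used in this project.

```python
# Hypothetical vocabulary-tree quantization + inverted-file retrieval sketch.
from collections import defaultdict
import numpy as np
from sklearn.cluster import KMeans


class VocabularyTree:
    def __init__(self, branch_factor=10, depth=4):
        self.k = branch_factor
        self.depth = depth
        self.nodes = {}                      # node id -> fitted KMeans
        self.inverted = defaultdict(list)    # visual word -> [(image_id, count)]
        self.doc_norms = defaultdict(float)  # per-image normalization

    def fit(self, descriptors):
        """Hierarchical k-means on a training set of (upright) SIFT descriptors."""
        self._split(np.asarray(descriptors), node=0, level=0)

    def _split(self, descs, node, level):
        if level == self.depth or len(descs) < self.k:
            return
        km = KMeans(n_clusters=self.k, n_init=4).fit(descs)
        self.nodes[node] = km
        for c in range(self.k):
            child = node * self.k + c + 1
            self._split(descs[km.labels_ == c], child, level + 1)

    def quantize(self, desc):
        """Descend the tree greedily; return the leaf (visual word) id."""
        node, level = 0, 0
        while node in self.nodes and level < self.depth:
            c = int(self.nodes[node].predict(desc.reshape(1, -1))[0])
            node = node * self.k + c + 1
            level += 1
        return node

    def add_image(self, image_id, descriptors):
        """Index a database image in the inverted files."""
        words = defaultdict(int)
        for d in descriptors:
            words[self.quantize(d)] += 1
        for w, n in words.items():
            self.inverted[w].append((image_id, n))
            self.doc_norms[image_id] += n * n

    def query(self, descriptors, top_k=10):
        """Score only database images that share visual words with the query."""
        scores = defaultdict(float)
        for d in descriptors:
            for image_id, n in self.inverted[self.quantize(d)]:
                scores[image_id] += n / np.sqrt(self.doc_norms[image_id])
        return sorted(scores.items(), key=lambda s: -s[1])[:top_k]
```

Because scoring touches only the inverted files of the visual words present in the query, retrieval cost grows with the number of query features rather than with the size of the database.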
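
The next sketch illustrates why a single correspondence suffices for geometric verification once images are gravity-rectified and features are upright: one match (position plus scale) already fixes a scale-and-translation model, and every other tentative match simply votes for or against it. The tolerances and the data layout are illustrative assumptions, not the project's exact parameters.

```python
# Hypothetical one-correspondence geometric verification sketch.
import numpy as np

def verify_one_point(matches, position_tol=20.0, scale_tol=0.3):
    """
    matches: list of ((xq, yq, sq), (xd, yd, sd)) tuples, i.e. query and
             database keypoint positions and scales of tentative matches.
    Returns the largest inlier count over all single-match hypotheses.
    """
    best_inliers = 0
    for (xq, yq, sq), (xd, yd, sd) in matches:
        # One upright match hypothesizes scale + translation (rotation is
        # already fixed by the gravity-aligned rectification).
        scale = sd / sq
        tx, ty = xd - scale * xq, yd - scale * yq
        inliers = 0
        for (xq2, yq2, sq2), (xd2, yd2, sd2) in matches:
            # A match is an inlier if it agrees with the hypothesized model.
            pred_x, pred_y = scale * xq2 + tx, scale * yq2 + ty
            if (abs(pred_x - xd2) < position_tol and
                    abs(pred_y - yd2) < position_tol and
                    abs(np.log(sd2 / sq2) - np.log(scale)) < scale_tol):
                inliers += 1
        best_inliers = max(best_inliers, inliers)
    return best_inliers
```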
Publications
Poster
Data Sets
This is a low-resolution version of the cellphone query set used in the ECCV paper:
The full database and query set used in the CVPR paper are available here:
www.nn4d.com/sanfranciscolandmark
Software
Acknowledgments
We would like to thank Friedrich Fraundorfer for many valuable discussions,
as well as Ramakrishna Vedantham and Sam Tsai for help with the software and
database infrastructure.
We would also like to thank the following people from Navteq: Bob Fernekes,
Jeff Bach, and Alwar Narayanan; and from Earthmine: John Ristevski and Anthony Fassero.
Contact