The goal of this project is to develop a system that allows users to interactively build 3D models of real-world environments using mobile devices such as smartphones and tablets. Given the rapid progress in mobile technology, this methodology bears tremendous potential, offering unprecedented interactivity and mobility. In particular, the feedback provided to the user in the course of reconstruction can guide their movements so that a 3D model of the desired quality and completeness is obtained. Additionally, suitable high-resolution still images could be stored and used in an offline refinement stage.
Mobile devices have previously been employed in distributed systems for live 3D modeling, where the smartphone or tablet serves merely to provide visual feedback to the user while all demanding computations are performed on a remote server. In contrast, our efforts are focused on building a complete reconstruction pipeline capable of operating entirely on-device. This poses strict requirements on the efficiency of the developed algorithms, which we address by leveraging all computational resources available on board, such as multiple CPU cores and the GPU. Furthermore, we exploit the inertial sensors that modern mobile devices are equipped with, both to support camera motion tracking and to estimate the metric scale of the captured scene.
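To make the scale estimation idea concrete, the following is a minimal sketch, assuming time-aligned, gravity-compensated accelerometer samples and up-to-scale camera positions from visual tracking; the function names and the simple integration scheme are illustrative assumptions, not the actual pipeline. Metric displacements, obtained by double-integrating the accelerations over a short window, are aligned to the corresponding visual displacements via a least-squares fit for a single scale factor.

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch (hypothetical names): recover the metric scale that
// maps the up-to-scale visual trajectory onto displacements obtained from
// the accelerometer. Both sequences are assumed time-aligned and sampled
// at the same instants, with gravity already removed from the accelerations.

struct Vec3 { double x, y, z; };

static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}
static double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Double-integrate accelerations (m/s^2) over a short window to obtain
// metric positions; dt is the IMU sampling period in seconds.
std::vector<Vec3> integrateDisplacements(const std::vector<Vec3>& accel,
                                         double dt) {
    std::vector<Vec3> pos(accel.size(), Vec3{0, 0, 0});
    Vec3 v{0, 0, 0}, p{0, 0, 0};
    for (std::size_t i = 0; i < accel.size(); ++i) {
        v.x += accel[i].x * dt; v.y += accel[i].y * dt; v.z += accel[i].z * dt;
        p.x += v.x * dt;        p.y += v.y * dt;        p.z += v.z * dt;
        pos[i] = p;
    }
    return pos;
}

// Least-squares scale s minimizing sum || s * d_vis_i - d_imu_i ||^2 over
// per-frame displacement vectors, giving s = sum(d_vis . d_imu) / sum(d_vis . d_vis).
double estimateMetricScale(const std::vector<Vec3>& visPos,   // up-to-scale
                           const std::vector<Vec3>& imuPos) { // metric
    double num = 0.0, den = 0.0;
    for (std::size_t i = 1; i < visPos.size() && i < imuPos.size(); ++i) {
        const Vec3 dv = sub(visPos[i], visPos[i - 1]);  // visual displacement
        const Vec3 di = sub(imuPos[i], imuPos[i - 1]);  // inertial displacement
        num += dot(dv, di);
        den += dot(dv, dv);
    }
    return den > 0.0 ? num / den : 0.0;
}
```

Since double-integrated accelerometer data drifts quickly, a practical system would restrict the integration to short windows of sufficient motion and fuse many such estimates, for example in a filter, rather than rely on a single fit.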