||Zollmann Stefanie, Reitmayr Gerhard
||ISMAR Workshop on Visualization in Mixed Reality Environments, 2011
||A convincing combination of virtual and real data in an augmented reality (AR) application requires detailed model information about the real-world scene. In many situations extensive model data is not available, while sparse representations such as outlines on a map exist. In this paper, we discuss the idea of using such sparse 3D model data to perform automatic image segmentation and infer a dense depth map of an environment. We project the 3D model of known landmarks, such as points and lines from GIS databases, into a registered image and initialize 2D image segmentation at the projected locations of these sparse models. In doing so, we combine shape information, the semantics given by the database, and the visual appearance in the referenced image. The resulting depth information of objects in the surrounding scene can be used in different applications, including occlusion handling, realistic shadow and lighting effects, label placement, phantom geometries for interaction with the real scene, and 3D modeling.