3D layout propagation to improve object recognition in egocentric videos

Conference Paper

Abstract

Intelligent systems need complex and detailed models of their environment to perform more sophisticated tasks, such as assisting the user. Vision sensors provide rich information and are broadly used to obtain these models; for example, indoor scene modeling from monocular images has been widely studied. A common initial step in these settings is the estimation of the 3D layout of the scene. While most previous approaches obtain the scene layout from a single image, this work presents a novel approach to estimate the initial layout and addresses the problem of how to propagate it along the video sequence.

Conference Name

European Conference on Computer Vision Workshops (ECCVW)

Year of Publication

2014

Publisher

Springer