Camera Orientation Estimation in Natural Scenes Using Semantic Cues

Jan Brejcha,
Martin Čadík


Camera orientation estimation in natural scenes has recently been approached by several methods, most of which match a single modality -- edges or horizon lines -- against 3D digital elevation models. In contrast to previous work, our new image-to-model matching scheme fuses multiple modalities and is designed to be naturally extensible with additional cues. In this paper, we use semantic segments and edges. To our knowledge, we are the first to use semantic segments jointly with edges for alignment with a digital elevation model. We show that high-level features such as semantic segments complement the low-level edge information and together help to estimate the camera orientation more robustly than methods relying solely on edges or horizon lines. In a series of experiments, we show that segment boundaries tend to be imprecise and that the important information for matching is encoded in a segment's area and coarse shape. Intuitively, semantic segments encode low-frequency information, whereas edges encode high frequencies. Our experiments confirm that the two cues are complementary: used together, they make camera orientation estimation more reliable. We demonstrate that our method combining semantic and edge features reaches state-of-the-art performance on three datasets.
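To give a rough flavor of how such a multi-modal fusion could be set up, the sketch below scores a candidate orientation (rendered from a digital elevation model) against a query photo by combining a simple edge-overlap score with a per-class segment IoU. This is an illustrative toy, not the paper's actual matching scheme: all function names, the scoring formulas, and the fusion weights are hypothetical assumptions.

```python
import numpy as np

def edge_score(query_edges, rendered_edges):
    # Low-level cue (hypothetical): fraction of rendered edge pixels
    # that coincide with edges detected in the query photo.
    overlap = np.logical_and(query_edges, rendered_edges).sum()
    return overlap / max(int(rendered_edges.sum()), 1)

def segment_score(query_seg, rendered_seg, classes):
    # High-level cue (hypothetical): mean intersection-over-union per
    # semantic class, which rewards matching segment area and coarse
    # shape rather than exact boundary placement.
    ious = []
    for c in classes:
        q, r = query_seg == c, rendered_seg == c
        union = np.logical_or(q, r).sum()
        if union:
            ious.append(np.logical_and(q, r).sum() / union)
    return float(np.mean(ious)) if ious else 0.0

def fused_score(query_edges, rendered_edges, query_seg, rendered_seg,
                classes, w_edge=0.5, w_seg=0.5):
    # Late fusion of both cues; the equal weights are illustrative only.
    return (w_edge * edge_score(query_edges, rendered_edges)
            + w_seg * segment_score(query_seg, rendered_seg, classes))
```

In use, one would evaluate `fused_score` for each candidate camera orientation rendered from the elevation model and keep the best-scoring one; the point of the sketch is only that the low-frequency segment cue and the high-frequency edge cue enter the ranking jointly.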


Coming soon