Authors: Oyebode, Kazeem; Du, Shengzhi; Van Wyk, Barend Jacobus; Djouani, Karim
Date available: 2025-01-23
Date issued: 2019-07-01
ISSN: 2169-3536
DOI: https://doi.org/10.1109/ACCESS.2019.2920686
URI: https://hdl.handle.net/20.500.14519/1225
Pages: 79783-79790
Language: en
License: Attribution-NonCommercial-ShareAlike 4.0 International (http://creativecommons.org/licenses/by-nc-sa/4.0/)
Keywords: Bayesian reasoning; Image recognition; Image localization; Convolutional neural network
Type: Article
Title: A sample-free Bayesian-like model for indoor environment recognition

Abstract: Visual localization of indoor environments enables an autonomous system to recognize its current location and environment using sensors such as a camera. This paper proposes a method for visual recognition of indoor environments that leverages existing object detection, ontology, a Bayesian-like framework, and the speeded-up robust features (SURF) algorithm. Objects detected in such an environment are fed into a Bayesian-like framework for domain recognition. Finally, SURF localizes the predicted environment. One objective of the proposed model is to eliminate the image-based training phase required by traditional place recognition algorithms: the model does not rely on any visual information about the environment for training. Experiments carried out on two publicly available datasets show promising results.
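The recognition step described in the abstract (detected objects fed into a Bayesian-like framework to infer the environment) could be sketched roughly as follows. This is an illustrative naive-Bayes-style scorer, not the paper's actual model; the `ONTOLOGY` table of per-environment object likelihoods and the `recognize` function are hypothetical stand-ins for the paper's object-environment ontology:

```python
from math import log

# Hypothetical P(object | environment) values standing in for an
# object-environment ontology; all numbers are illustrative assumptions.
ONTOLOGY = {
    "kitchen": {"refrigerator": 0.9, "sink": 0.8, "monitor": 0.05, "bed": 0.01},
    "office":  {"refrigerator": 0.1, "sink": 0.05, "monitor": 0.9, "bed": 0.02},
    "bedroom": {"refrigerator": 0.05, "sink": 0.02, "monitor": 0.2, "bed": 0.95},
}

def recognize(detected_objects, prior=None):
    """Rank candidate environments by a posterior-like log score
    accumulated over the objects an off-the-shelf detector reports."""
    scores = {}
    for env, likelihoods in ONTOLOGY.items():
        score = log(prior[env]) if prior else 0.0
        for obj in detected_objects:
            # Small floor avoids log(0) for objects missing from the ontology.
            score += log(likelihoods.get(obj, 1e-3))
        scores[env] = score
    # Highest score first; scores[0][0] is the predicted environment.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

Because the scores come from a fixed ontology rather than learned image features, no image-based training phase is needed, which mirrors the "sample-free" objective stated in the abstract; the final SURF-based localization step within the predicted environment is not modeled here.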