Visually impaired people struggle to live without assistance or to face everyday tasks alone, especially those who cannot afford extra assistive equipment. They usually receive assistance from either a human helper or a wearable device. The first places a burden on the helper, while the second adds a financial burden without reducing the difficulty of identifying objects. Smartphones are accessible to almost everyone and are equipped with accessibility features, including sensors, that can help both visually impaired and sighted people. Thus, this paper proposes an approach using a Convolutional Neural Network (CNN), speech recognition, and smartphone camera calibration, aiming to facilitate indoor guidance for visually impaired people. The smartphone's camera acts as the user's eyes. A pre-trained CNN model is used for object detection, and the distance to objects is calculated to guide the user in the right direction and to warn them of obstacles. Speech recognition serves as the communication channel between the visually impaired user and the smartphone. The proposed approach also supports object personalization, which helps distinguish the user's items from other items found in the room. To evaluate the personalized object detection, a customized dataset is created for two objects. The experimental results indicate accuracies of 92% and 87% for the two objects, respectively. We also compare the detected distances of the two objects against their real distances, achieving error ratios of 0.05 and 0.08, respectively.
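The camera-based distance estimation mentioned above can be sketched with the standard pinhole-camera similar-triangles relation (distance = focal length × real object height / object height in pixels). This is a minimal illustrative sketch, not the paper's actual calibration procedure; the focal length and object dimensions below are assumed values for demonstration only.

```python
def estimate_distance(focal_length_px: float,
                      real_height_m: float,
                      bbox_height_px: float) -> float:
    """Pinhole-camera distance estimate: distance = f * H / h.

    focal_length_px -- camera focal length in pixels (from calibration)
    real_height_m   -- known real-world height of the detected object, in meters
    bbox_height_px  -- height of the object's bounding box in the image, in pixels
    """
    if bbox_height_px <= 0:
        raise ValueError("bounding-box height must be positive")
    return focal_length_px * real_height_m / bbox_height_px

# Example with assumed values: a 0.5 m tall object spanning 200 px,
# captured by a camera with a 1000 px focal length, is about 2.5 m away.
print(estimate_distance(1000.0, 0.5, 200.0))
```

In practice, the focal length would come from a one-time smartphone camera calibration, and the bounding-box height from the CNN detector's output.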