TSU

Helper application for the visually impaired

Author: Aleksandre Kakhetelidze
Keywords: accessible, mobile, object recognition, stable
Annotation:

The project primarily involves one category of machine learning, computer vision, specifically object recognition and classification, and the integration of this technology into a mobile application to create an assistive application for the visually impaired. The purpose of the mobile application is to help a visually impaired user recognize, through the phone camera, objects specific to Georgia, in particular GEL banknotes. To achieve this, we used two Google frameworks: ML Kit and TensorFlow Lite. With TensorFlow Lite we can create machine learning models from a dataset of many images of a specific object. With ML Kit we then process the camera frames and pass them to the TensorFlow Lite model to obtain the name of the object (as voice or text) and its location on the screen (coordinates, or a rectangular box drawn over the object). Using the mobile application is simple: the user only needs to point the camera at the GEL banknote (the application itself can be navigated through the system's TalkBack functionality), and the application will announce which object is visible on the screen.
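As a minimal sketch of how such a pipeline can be wired together on Android, the following Kotlin code uses ML Kit's custom object detection API with a TensorFlow Lite model bundled in the app's assets. The file name gel_banknotes.tflite, the BanknoteDetector class, and the result callback are illustrative assumptions, not the project's actual code; they only show how a camera frame is turned into a label and a bounding box as described above.

```kotlin
import android.graphics.Rect
import android.media.Image
import com.google.mlkit.common.model.LocalModel
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.objects.ObjectDetection
import com.google.mlkit.vision.objects.ObjectDetector
import com.google.mlkit.vision.objects.custom.CustomObjectDetectorOptions

class BanknoteDetector {
    // Load the custom TensorFlow Lite model bundled in the app's assets folder.
    private val localModel = LocalModel.Builder()
        .setAssetFilePath("gel_banknotes.tflite") // assumed file name
        .build()

    // STREAM_MODE targets live camera frames; classification supplies the
    // banknote's label, while the detector supplies its bounding box.
    private val options = CustomObjectDetectorOptions.Builder(localModel)
        .setDetectorMode(CustomObjectDetectorOptions.STREAM_MODE)
        .enableClassification()
        .setClassificationConfidenceThreshold(0.5f)
        .setMaxPerObjectLabelCount(1)
        .build()

    private val detector: ObjectDetector = ObjectDetection.getClient(options)

    // Called once per camera frame, e.g. from a CameraX ImageAnalysis.Analyzer.
    fun analyze(
        mediaImage: Image,
        rotationDegrees: Int,
        onResult: (label: String, box: Rect) -> Unit
    ) {
        val image = InputImage.fromMediaImage(mediaImage, rotationDegrees)
        detector.process(image)
            .addOnSuccessListener { objects ->
                for (obj in objects) {
                    val label = obj.labels.firstOrNull()?.text ?: "unknown"
                    onResult(label, obj.boundingBox)
                }
            }
            .addOnFailureListener { /* skip this frame on failure */ }
    }
}
```

In this sketch, the label delivered to the callback would be spoken aloud (for example with Android's TextToSpeech or announced through TalkBack), and the bounding box would be used to draw the rectangle over the detected banknote on screen.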


