I. Introduction
In recent years, machine reading has advanced from a research vision to a practical reality. Nevertheless, people with visual impairments still face a wide range of difficulties in accessing printed text with current technology, including problems of layout handling, accuracy, flexibility, and efficiency. This study presents a compact assistive technique that reads paper-printed text aloud for visually impaired users and for travellers. The proposed system embeds an image-acquisition pipeline on a Raspberry Pi board. Its design is informed by preliminary studies with visually impaired participants, and it is small and portable, allowing it to be operated with minimal preparation.

In the proposed fully integrated system, a camera serves as the input device and captures the printed document for digitisation. Speech is one of the most effective ways for people to communicate, so the extracted content is delivered audibly. Optical character recognition (OCR) is applied to extract text from the captured image; OCR converts scanned or printed text, as well as typed or handwritten input, into editable text for further processing. Speech synthesis is the artificial production of human speech. A text-to-speech (TTS) synthesiser is a software component that can read arbitrary text aloud, whether that text was entered directly by an operator or produced by the OCR stage.

The device was implemented and tested on the Raspberry Pi platform. The Raspberry Pi is a low-cost single-board computer commonly used as an embedded system, which helps keep the overall design simple as the application grows. The software is written in Python. The printed text is manually centred in the Pi camera's field of view; the system then captures an image after a 5-second delay, which gives the user time to re-centre the camera if it has accidentally been defocused. After the delay, the Raspberry Pi processes the captured image, and the recognised text is spoken through headphones or a speaker connected to the board's 3.5 mm audio jack.
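The capture, delay, OCR, and TTS stages described above can be summarised in a short Python sketch. The sketch is illustrative only: the paper does not name specific libraries, so the use of picamera for image capture, pytesseract for OCR, and pyttsx3 for speech output, as well as the file name capture.jpg, are assumptions made here for demonstration.

```python
# Minimal sketch of the capture -> OCR -> TTS pipeline described above.
# Library choices (picamera, pytesseract, pyttsx3) are assumptions for
# illustration; the paper does not specify the packages used.
import time

from picamera import PiCamera   # Pi camera interface (assumed)
from PIL import Image
import pytesseract              # OCR wrapper around Tesseract (assumed)
import pyttsx3                  # offline text-to-speech engine (assumed)

IMAGE_PATH = "capture.jpg"      # hypothetical file name


def capture_image(path: str) -> None:
    """Capture a still image after a 5-second delay for centring/focusing."""
    with PiCamera() as camera:
        camera.start_preview()
        time.sleep(5)           # delay so the camera can be re-centred if defocused
        camera.capture(path)
        camera.stop_preview()


def recognise_text(path: str) -> str:
    """Run OCR on the captured image and return the extracted text."""
    return pytesseract.image_to_string(Image.open(path))


def speak(text: str) -> None:
    """Read the recognised text aloud over the 3.5 mm audio output."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()


if __name__ == "__main__":
    capture_image(IMAGE_PATH)
    text = recognise_text(IMAGE_PATH)
    if text.strip():
        speak(text)
```

An offline TTS engine is used in the sketch so that the device does not depend on network access; a command-line synthesiser such as espeak could be invoked instead with the same overall structure.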