
Febina Fathima V

Communication is an integral part of human relationships, but mute individuals often struggle to express their thoughts because accessible means of communication are scarce. This project presents a wearable device that supports mute individuals in communicating by measuring jaw-muscle activity at the submental triangle via electromyography (EMG) and converting the signals into text output. The device records the muscle activity involved in speech articulation and analyzes it with machine learning algorithms trained to identify the vowels (a, e, i, o, u); any other detected input is output as a blank to preserve clarity. Its low-cost, lightweight, and compact design offers a convenient alternative to heavy, bulky assistive technologies. By integrating real-time signal processing and pattern recognition, the device achieves high accuracy and fast response. Future developments could include hands-free network search functionality and bone conduction technology for enhanced accessibility. By combining AI, biosignal processing, and assistive technology, this project provides a novel and effective communication aid for speech-impaired individuals.
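
To illustrate the kind of pipeline the abstract describes, the following is a minimal sketch of classifying an EMG window into one of the five vowels or a blank. It is not the authors' implementation: the feature set (mean absolute value, RMS, zero crossings, waveform length), the random-forest classifier, the 200-sample window length, and the 0.6 confidence threshold for emitting a blank are all assumed for illustration, and the training data below is synthetic.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

VOWELS = ["a", "e", "i", "o", "u"]

def extract_features(window):
    # Simple time-domain EMG features computed over one signal window (assumed feature set).
    mav = np.mean(np.abs(window))                                    # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))                              # root mean square
    zc = np.sum(np.signbit(window[:-1]) != np.signbit(window[1:]))   # zero crossings
    wl = np.sum(np.abs(np.diff(window)))                             # waveform length
    return np.array([mav, rms, zc, wl])

def classify_window(model, window, threshold=0.6):
    # Return a vowel label, or a blank when the classifier is not confident enough.
    feats = extract_features(window).reshape(1, -1)
    probs = model.predict_proba(feats)[0]
    best = np.argmax(probs)
    return model.classes_[best] if probs[best] >= threshold else " "

# Illustrative training on synthetic data, standing in for labelled EMG windows.
rng = np.random.default_rng(0)
windows = rng.normal(size=(500, 200))                # 500 windows of 200 samples each (assumed)
labels = rng.choice(VOWELS, size=500)                # hypothetical vowel labels
features = np.vstack([extract_features(w) for w in windows])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(features, labels)

print(classify_window(model, rng.normal(size=200)))  # classify a new window

In a real-time setting, the same windowing, feature extraction, and confidence-thresholded classification would run continuously on the streamed EMG signal, appending each recognized vowel (or blank) to the text output.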