Inspiration

Having a deaf mentor and seeing the inefficiencies in how we communicated with them, we turned to technology to narrow the gap between the hearing impaired and the hearing.

What it does

T-meet converts sign language into text and audio, and speech into text. It can be integrated into any video chat, telehealth service, language-learning platform, or translation software to help create a more inclusive society.

How we built it

- Teachable Machine for training the sign-recognition model
- Python with OpenCV and TensorFlow for grabbing video frames and passing them through our trained model
- Speech recognition for transcribing audio into text
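The vision pipeline above can be sketched as follows: grab webcam frames with OpenCV and classify each one with a Keras model exported from Teachable Machine. This is a hedged sketch, not our exact code; `MODEL_PATH`, `LABELS`, and the function names are illustrative placeholders.

```python
# Sketch of the sign-to-text loop: OpenCV frame capture -> Teachable
# Machine (Keras) model -> predicted label overlaid on the video.
# MODEL_PATH and LABELS are placeholders, not the project's real values.
import numpy as np

LABELS = ["hello", "thank you", "yes", "no"]  # placeholder sign classes
MODEL_PATH = "keras_model.h5"  # placeholder Teachable Machine export

def decode_prediction(probs, labels=LABELS):
    """Map a vector of class probabilities to its most likely label."""
    return labels[int(np.argmax(probs))]

def run_sign_to_text():
    # Heavy imports kept local so decode_prediction stays dependency-free.
    import cv2
    import tensorflow as tf

    model = tf.keras.models.load_model(MODEL_PATH)
    cap = cv2.VideoCapture(0)  # default webcam
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Teachable Machine image models expect 224x224 RGB scaled to [-1, 1].
        img = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
        batch = (img.astype(np.float32) / 127.5 - 1.0)[np.newaxis, ...]
        probs = model.predict(batch, verbose=0)[0]
        # Overlay the recognized sign as text on the live frame.
        cv2.putText(frame, decode_prediction(probs), (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
        cv2.imshow("T-meet", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
    cap.release()
    cv2.destroyAllWindows()
```

Calling `run_sign_to_text()` opens the webcam and overlays the predicted sign; the recognized text can then be passed to a text-to-speech engine for the audio output.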
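The speech-to-text half can be sketched with Python's SpeechRecognition package. The function names and the choice of the free Google Web Speech recognizer are assumptions for illustration, not our exact implementation.

```python
# Hedged sketch of the speech-to-text side, assuming the Python
# SpeechRecognition package. Names here are illustrative.
import textwrap

def to_caption_lines(transcript: str, width: int = 40) -> list:
    """Wrap a transcript into short lines suitable for on-screen captions."""
    return textwrap.wrap(transcript, width=width)

def transcribe_once() -> str:
    # Heavy import kept local; requires a microphone plus the
    # SpeechRecognition and PyAudio packages.
    import speech_recognition as sr

    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # calibrate to room noise
        audio = recognizer.listen(source)
    try:
        # Free Google Web Speech API, convenient for hackathon demos.
        return recognizer.recognize_google(audio)
    except sr.UnknownValueError:
        return ""  # speech was unintelligible
```

`to_caption_lines(transcribe_once())` yields short caption lines that can be rendered directly onto the video feed.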

Challenges we ran into

- Training on a substantial amount of data
- Working with TensorFlow
- Manipulating OpenCV output for an aesthetic UI

Accomplishments that we're proud of

- Working collaboratively and keeping our tech aligned with the problem we're trying to solve
- Successfully completing our tasks

What we learned

Effective time management, team collaboration, and OpenCV manipulation

What's next for T-Meet

We plan to incorporate our software into mobile video-calling apps like FaceTime.

Built With

Python, OpenCV, TensorFlow, Teachable Machine
