Inspiration
We wanted to build a platform that automatically transcribes your meetings into text, because in the past our meeting minutes were often not accurate enough.
What it does
Dictate transcribes audio from a conversation onto the screen and into a file, giving users instant meeting minutes. It works especially well in interview scenarios, where a question-and-answer format is easy to display.
How we built it
The main chat uses the Web Speech API built into WebKit, which lets us translate our voice into text. By combining this with WebRTC, we can not only convert voice to text but also stream video and audio between participants. Socket.IO then manages the different "rooms" for one-to-one meetings, conferencing, or other uses.
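A minimal sketch of how that pipeline can be wired together: a pure helper pulls finalized phrases out of a speech-recognition result, and a browser-side function (our assumption of the wiring, not the app's exact code; `startDictation`, the `'transcript'` event name, and the `socket` client are hypothetical names) forwards each phrase to everyone in the same Socket.IO room.

```javascript
// Pull the finalized phrases out of a SpeechRecognitionEvent-shaped object.
// Interim (still-changing) results are skipped so only settled text is sent.
function finalPhrases(event) {
  const phrases = [];
  for (let i = event.resultIndex; i < event.results.length; i++) {
    const result = event.results[i];
    if (result.isFinal) phrases.push(result[0].transcript);
  }
  return phrases;
}

// Browser-only wiring (assumes a connected Socket.IO client `socket` and a
// room name the user has already joined). webkitSpeechRecognition is the
// WebKit-prefixed constructor mentioned above.
function startDictation(socket, room) {
  const Recognition =
    window.SpeechRecognition || window.webkitSpeechRecognition;
  const recognition = new Recognition();
  recognition.continuous = true;      // keep listening across pauses
  recognition.interimResults = true;  // partial text arrives as the user speaks

  recognition.onresult = (event) => {
    for (const text of finalPhrases(event)) {
      // Broadcast each finalized phrase to the other meeting participants.
      socket.emit('transcript', { room, text });
    }
  };
  recognition.start();
  return recognition;
}
```

On the server, a matching handler would rebroadcast `'transcript'` events to the room with `io.to(room).emit(...)`, which is how one speaker's text reaches every other screen in the call.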
Challenges we ran into
We had a tough time setting up AWS, and WebRTC took the longest to set up of all the libraries and APIs we used. Detecting punctuation and sentence endings was also really hard for us, and it is something we would like to improve in the future.
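To give a feel for why punctuation is hard: speech recognition returns bare word streams, so even a first-pass heuristic has to guess where sentences end. The sketch below (our illustration, not the app's actual logic; `punctuate` is a hypothetical helper) treats each finalized phrase as one sentence, which already fails on questions and multi-clause utterances.

```javascript
// Naive punctuation heuristic: assume each finalized phrase is a complete
// sentence, capitalize it, and append a period unless terminal punctuation is
// already present. It cannot detect questions or clause breaks, which is the
// open problem described above.
function punctuate(phrase) {
  const trimmed = phrase.trim();
  if (trimmed === '') return '';
  const capitalized = trimmed[0].toUpperCase() + trimmed.slice(1);
  return /[.?!]$/.test(capitalized) ? capitalized : capitalized + '.';
}
```

A real fix would need either a punctuation-restoration model or prosody cues (pause length, pitch), neither of which the Web Speech API exposes directly.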
Accomplishments that we're proud of
We are proud that we were able to display mostly correct text on a user's screen during a live call. Since WebRTC was new to all of us, it was fun experimenting with it to get it working.
What we learned
We learned how to use WebRTC along with sockets and Node.js, and we also got hands-on experience setting up AWS.
What's next for Dictate
We would like to continue developing Dictate, finishing what we couldn't complete here and expanding it beyond what it currently does. This includes implementing different chats, adding more user options, and possibly recording and other features we would consider valuable.