Inspiration

The inspiration behind MedVise stemmed from the realization that people with non-critical health concerns often wait too long for immediate, reliable first-aid advice. Minor injuries such as bruises, cuts, or sprains, though not life-threatening, can cause discomfort and worry. Traditional healthcare settings rarely prioritize these issues, leading individuals to seek information online, which can be overwhelming and confusing. Our goal was to bridge this gap by creating an open-source medical assistant that offers personalized, timely, and accurate first-aid advice for minor health concerns.

What it does

MedVise is a healthcare chat assistant app designed to address the lack of immediate medical advice for non-critical injuries. The app offers a multimodal and multilingual interface, allowing users to upload images or provide audio descriptions of their symptoms; this inclusivity caters to diverse user preferences and makes healthcare information accessible across language barriers. MedVise leverages open-source and openly available models, including SeamlessM4T V2 for automatic speech recognition and text translation, BGE v1.5 Large for retrieval augmented generation, and a GPT vision-language model for analyzing uploaded images. The project is anchored by a finetuned GPT-4 large language model that produces comprehensive and accurate responses.
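The flow above can be sketched end to end with each model call stubbed out. This is a hypothetical illustration, not MedVise's actual code: every function name and canned return value here is a placeholder for the real SeamlessM4T V2, BGE v1.5 Large + FAISS, GPT-Vision, and finetuned GPT-4 calls.

```python
def transcribe(audio_bytes):
    # Stub for SeamlessM4T V2 automatic speech recognition.
    return "me duele el tobillo"

def translate(text, target="en"):
    # Stub for SeamlessM4T V2 text translation (Spanish -> English here).
    return "my ankle hurts" if target == "en" else text

def describe_image(image_bytes):
    # Stub for the GPT vision-language model's description of an uploaded photo.
    return "mild swelling around the ankle, no open wound"

def retrieve_guidance(query):
    # Stub for BGE v1.5 Large + FAISS retrieval over the knowledge base.
    return "Sprain: rest, ice, compression, and elevation (RICE)."

def answer(query, guidance, image_description):
    # Stub for the finetuned GPT-4 call that composes the final reply.
    return (f"Question: {query}\nImage: {image_description}\n"
            f"Guidance: {guidance}")

def medvise(audio_bytes=None, text=None, image_bytes=None):
    """Route multimodal input through the stages described above."""
    query = translate(transcribe(audio_bytes)) if audio_bytes else translate(text)
    image_description = describe_image(image_bytes) if image_bytes else "no image provided"
    return answer(query, retrieve_guidance(query), image_description)
```

Audio input is transcribed and translated into an English query; an image, if present, is turned into a text description; both then flow into retrieval and the final language-model answer.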

How we built it

Our approach was centered on utilizing open-source and openly available models. The project incorporates the following components:

1. Automatic Speech Recognition: the SeamlessM4T V2 Large model enables users to input audio instead of text.

2. Text Translation: the same SeamlessM4T V2 Large model provides multilingual support.

3. Retrieval Augmented Generation: the BGE v1.5 Large model, indexed using FAISS, provides factual suggestions grounded in a doctor-vetted knowledge base.

4. Vision Language Model: a GPT vision-language model generates descriptions of uploaded images.

5. Large Language Model: a finetuned GPT-4 model reasons over user queries, retrieved suggestions, and image descriptions to generate suggestions.
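The retrieval step (component 3) can be illustrated with a minimal, runnable sketch: embed the query and each knowledge-base entry, rank entries by cosine similarity, and splice the best match into the LLM prompt. The tiny bag-of-words encoder and linear scan below stand in for BGE v1.5 Large embeddings and a FAISS index (e.g. `faiss.IndexFlatIP` over L2-normalized vectors), and the knowledge-base entries are illustrative placeholders, not medical advice.

```python
import math
import re

# Illustrative stand-in for the doctor-vetted knowledge base.
kb = [
    "Bruise: apply a cold compress for 10 to 15 minutes to reduce swelling.",
    "Minor cut: rinse with clean water, apply gentle pressure, then cover with a sterile bandage.",
    "Sprain: for a twisted ankle, rest, ice, compression, and elevation (RICE) for the first 48 hours.",
]

def embed(text):
    """Toy stand-in for a dense encoder: a bag-of-words frequency dict."""
    vec = {}
    for tok in re.findall(r"[a-z0-9]+", text.lower()):
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    """Return the k knowledge-base entries most similar to the query."""
    q = embed(query)
    ranked = sorted(kb, key=lambda entry: cosine(q, embed(entry)), reverse=True)
    return ranked[:k]

query = "I twisted my ankle and it is swollen"
best = retrieve(query)[0]
# The retrieved passage is spliced into the prompt sent to the language model.
prompt = f"User question: {query}\nRelevant first-aid guidance: {best}\nAnswer:"
```

In production the same shape holds, but the encoder returns dense vectors and FAISS replaces the linear scan so search stays fast as the knowledge base grows.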

Challenges we ran into

Creating an open-source medical assistant posed several challenges. Integrating the various models so they interoperate smoothly required meticulous effort, and handling multilingual support and indexing the knowledge base for semantic search were particularly difficult. Additionally, finetuning GPT-4 to provide accurate and reliable medical advice took substantial experimentation.

Accomplishments that we're proud of

We are proud to have built MedVise, an open-source medical assistant that fills the gap in immediate first-aid advice for minor injuries. Our key accomplishment is a comprehensive system that combines several models to deliver accurate, personalized, and timely medical guidance, while its multilingual and multimodal features enhance accessibility and the user experience.

What we learned

Through the development of MedVise, we learned the importance of open-source models in creating inclusive and accessible solutions. Integrating different models to cater to diverse user needs taught us valuable lessons in collaboration and system architecture. The challenges encountered provided opportunities for learning and growth, contributing to our understanding of creating effective healthcare solutions.

What's next for MedVise

The future of MedVise involves continuous improvement and expansion. We aim to enhance the knowledge base, incorporate more languages, and fine-tune models for increased accuracy. User feedback will play a crucial role in refining the system further. Ultimately, MedVise seeks to promote self-care practices, reduce unnecessary doctor visits, and contribute to increased health awareness among diverse communities.
