# AI Model Comparison

The AI Model Comparison project is a web application built with Streamlit that allows users to compare the outputs of different AI models. It provides a simple, interactive interface for benchmarking and evaluating the performance of AI models on specific queries or data.

## Features
- Select and compare outputs from multiple AI models side by side.
- Calculate similarity scores between benchmark queries and model outputs.
- Provide recommendations based on the relevance of the outputs.
- Easy-to-use interface with text input and dynamic result display.
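The similarity scoring and recommendation features described above could be implemented along these lines. This is only a sketch using `difflib.SequenceMatcher` as the metric; the function names and the metric itself are assumptions, not necessarily what `app.py` uses:

```python
from difflib import SequenceMatcher

def similarity_score(benchmark: str, output: str) -> float:
    """Return a 0-1 ratio of how closely a model's output matches the benchmark."""
    return SequenceMatcher(None, benchmark.lower(), output.lower()).ratio()

def recommend(scores: dict) -> str:
    """Recommend the model whose output scored highest against the benchmark."""
    best = max(scores, key=scores.get)
    return f"Most relevant output: {best} (score {scores[best]:.2f})"

scores = {
    "model-a": similarity_score("capital of France", "The capital of France is Paris."),
    "model-b": similarity_score("capital of France", "France is in Europe."),
}
print(recommend(scores))
```

An embedding-based cosine similarity would usually rank semantic relevance better than character-level matching, but `SequenceMatcher` keeps the sketch dependency-free.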
## Installation

1. Clone the repository:

   ```shell
   git clone https://github.com/Suriyakumarvijayanayagam/ai-model-comparison.git
   ```

2. Navigate to the project directory:

   ```shell
   cd ai-model-comparison
   ```

3. Install the required packages:

   ```shell
   pip install -r requirements.txt
   ```
## Usage

1. Run the Streamlit app:

   ```shell
   streamlit run app.py
   ```

2. Open the app in your web browser at http://localhost:8502.
3. Enter your benchmark query or data in the text input.
4. Select the AI models you want to compare from the model selection checkboxes.
5. View the comparison results, similarity scores, and recommendations displayed in the app.
6. Explore different benchmark queries and AI models to evaluate their performance.
## Configuration

- Add or remove AI models in the `models` dictionary in `app.py`.
- Customize the styling and layout of the app using HTML and CSS in `app.py`.
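The `models` dictionary mentioned above might be structured like this. Everything here is a hypothetical placeholder (the entry names and callables are illustrative, not the app's real models):

```python
# Hypothetical sketch of a models registry, as might appear in app.py.
# Each entry maps a display name to a callable that takes a query
# string and returns that model's text output.

def echo_model(query: str) -> str:
    # Placeholder model that simply echoes the query back.
    return f"Echo: {query}"

def reversed_model(query: str) -> str:
    # Placeholder model that reverses the query text.
    return query[::-1]

models = {
    "Echo Model": echo_model,
    "Reversed Model": reversed_model,
}

# The app can then iterate over the user-selected models and
# collect each model's output for the benchmark query:
outputs = {name: fn("benchmark query") for name, fn in models.items()}
print(outputs)
```

Adding a new model is then a matter of defining one callable and registering it under a display name; removing one means deleting its entry.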
## License

This project is licensed under the MIT License. See the LICENSE file for details.
## Acknowledgements

This project uses the Streamlit framework for building the web application. Visit the Streamlit documentation for more information.
## Contributing

Contributions are welcome! If you have a suggestion, enhancement, or bug fix, please open an issue or submit a pull request.