This project demonstrates the use of Explainable AI (XAI) techniques for analyzing and detecting breast cancer. By pairing predictive models with explanation algorithms, it aims to make machine learning predictions more transparent and actionable for medical professionals.
- Explainability with SHAP, Banzhaf, and Remove-Individual algorithms: Uses Shapley values, the Banzhaf power index, and individual feature-removal methods to explain model predictions.
- Breast Cancer Detection: Implements predictive models for breast cancer diagnosis using well-established datasets.
- Interpretation and Analysis: Provides detailed visualizations and explanations to help users understand the model's behavior and important features.
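To make the three explanation methods concrete, here is a minimal sketch that computes exact Shapley values and Banzhaf indices for a handful of features on scikit-learn's bundled Wisconsin breast cancer dataset. The value of a feature coalition is taken to be the test accuracy of a logistic regression trained on just those features; this toy value function, the 4-feature subset, and the 0.5 empty-coalition baseline are illustrative assumptions, not the project's actual pipeline.

```python
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Keep a small feature subset so exact coalition enumeration stays cheap.
X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)[:, :4]  # first 4 features for illustration
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def value(coalition):
    """Value of a feature coalition: test accuracy of a model trained
    on just those features (0.5 chance baseline for the empty set)."""
    if not coalition:
        return 0.5
    cols = list(coalition)
    model = LogisticRegression(max_iter=5000).fit(X_tr[:, cols], y_tr)
    return model.score(X_te[:, cols], y_te)

n = X.shape[1]
shapley = np.zeros(n)
banzhaf = np.zeros(n)
for i in range(n):
    others = [f for f in range(n) if f != i]
    for k in range(n):
        for S in combinations(others, k):
            gain = value(S + (i,)) - value(S)
            # Shapley: weight each coalition by |S|! (n-|S|-1)! / n!
            shapley[i] += factorial(k) * factorial(n - k - 1) / factorial(n) * gain
            # Banzhaf: uniform weight over the 2^(n-1) coalitions
            banzhaf[i] += gain / 2 ** (n - 1)

for i in range(n):
    print(f"feature {i}: Shapley={shapley[i]:+.4f}  Banzhaf={banzhaf[i]:+.4f}")
```

Exact enumeration scales as 2^n, which is why the sketch limits itself to four features; the SHAP library used by the project relies on sampling and model-specific approximations to handle the full feature set.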
Early and accurate detection of breast cancer can significantly improve patient outcomes. This project provides:
- Transparency in Decision-Making: Enables medical professionals to understand why a model makes a certain prediction, building trust in AI systems.
- Feature Importance Analysis: Highlights the most critical factors influencing predictions, guiding clinicians toward key diagnostic indicators.
- Accountability: Ensures that AI-driven decisions in healthcare are understandable and justifiable.
- Adoption: Facilitates wider acceptance of AI tools by addressing the "black-box" nature of machine learning models.
- SHAP: For generating Shapley-value explanations.
- Banzhaf: For computing Banzhaf power indices.
- Remove Explanations Library: For individual feature removal impact analysis.
- Scikit-learn: For machine learning models.
- Matplotlib/Seaborn: For visualizations.
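The individual feature-removal analysis listed above can be sketched with scikit-learn alone: retrain the model with each feature removed in turn and record the resulting drop in test accuracy. The logistic regression model and train/test split here are illustrative stand-ins, not the project's configuration.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
X_tr, X_te, y_tr, y_te = train_test_split(X, data.target, random_state=0)

# Baseline: accuracy with all features available.
base = LogisticRegression(max_iter=5000).fit(X_tr, y_tr).score(X_te, y_te)

# Remove-individual analysis: retrain without each feature and
# record the accuracy drop its removal causes.
impact = []
for i in range(X.shape[1]):
    keep = [j for j in range(X.shape[1]) if j != i]
    acc = LogisticRegression(max_iter=5000).fit(X_tr[:, keep], y_tr).score(X_te[:, keep], y_te)
    impact.append(base - acc)

# Report the five features whose removal hurts accuracy the most.
for i in np.argsort(impact)[::-1][:5]:
    print(f"{data.feature_names[i]}: accuracy drop {impact[i]:+.4f}")
```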
Among the feature selection methods compared, SHAP (Top 10) achieved the lowest cross-entropy loss, demonstrating its effectiveness at identifying the features most relevant to prediction.
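The comparison above can be sketched as follows: rank the features by importance, keep the top 10, and compare the cross-entropy (log loss) of a model trained on that subset against the full-feature model. As a self-contained stand-in for the project's SHAP-based ranking, this sketch uses scikit-learn's permutation importance, another removal-style importance measure; the model choice and split are also assumptions.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Cross-entropy with all 30 features.
full = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
loss_full = log_loss(y_te, full.predict_proba(X_te))

# Rank features (permutation importance as a stand-in for the
# project's SHAP ranking) and keep the top 10.
imp = permutation_importance(full, X_te, y_te, n_repeats=10, random_state=0)
top10 = np.argsort(imp.importances_mean)[::-1][:10]

# Cross-entropy with only the top-10 subset.
reduced = LogisticRegression(max_iter=5000).fit(X_tr[:, top10], y_tr)
loss_top10 = log_loss(y_te, reduced.predict_proba(X_te[:, top10]))

print(f"cross-entropy (all 30 features): {loss_full:.4f}")
print(f"cross-entropy (top 10 features): {loss_top10:.4f}")
```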
A sample of features, ranked by Shapley value, indicating how effective each feature is for diagnosing breast cancer based on gene type.

