Inspiration

We aim to inspire others to build upon the foundation of using AI to prevent errors in code development, thereby enhancing software quality and reliability. Our inspiration stems from real-world incidents like the CrowdStrike OTA update outage, which highlighted the need for robust automated code review systems.

What It Does

CodeSentinel automates the process of identifying and flagging faulty code for review. Built in Python, it pulls incoming code submissions and subjects each one to a comprehensive scan by a large language model (LLM). The scan detects anomalies that could affect the quality and functionality of the code before it undergoes final quality assurance, checking for issues such as null-pointer dereferences, logic errors, and other inconsistencies.
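A minimal sketch of that scan step, with the LLM reply stubbed out. The helper names (`build_review_prompt`, `parse_findings`) and the JSON reply format are illustrative assumptions, not the shipped implementation:

```python
import json
import textwrap

def build_review_prompt(filename: str, code: str) -> str:
    """Assemble the review prompt sent to the LLM for one submission."""
    return textwrap.dedent(f"""\
        Review the following code for null-pointer dereferences, logic
        errors, and other inconsistencies. Respond with a JSON list of
        objects with "line", "severity", and "description" fields.

        File: {filename}
        {code}""")

def parse_findings(raw_response: str) -> list[dict]:
    """Parse the model's JSON reply into a list of findings."""
    try:
        findings = json.loads(raw_response)
    except json.JSONDecodeError:
        # Treat an unparseable reply as a finding so a human still looks at it.
        return [{"line": 0, "severity": "unknown",
                 "description": "model reply was not valid JSON"}]
    return findings if isinstance(findings, list) else []

# Stubbed model reply, standing in for the real LLM call:
reply = '[{"line": 12, "severity": "high", "description": "possible null pointer"}]'
findings = parse_findings(reply)
print(len(findings))  # 1
```

A submission is flagged for review whenever `parse_findings` returns a non-empty list.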

How We Built It

We built CodeSentinel using Python as our primary programming language. The system integrates with an LLM, orchestrated by LangChain, to perform detailed scans of incoming code submissions. These scans are crucial for identifying potential issues early in the development process, allowing teams to address them promptly. Additionally, we used privately hosted LLMs on AWS so that proprietary code stays within our own environment, addressing intellectual-property concerns.
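The chain described above (submission → prompt → LLM → verdict) can be sketched without the LangChain or AWS dependencies by injecting a stub model; `scan_submission`, `ScanResult`, and the "OK"-based verdict rule are illustrative assumptions, not the actual codebase:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ScanResult:
    filename: str
    flagged: bool
    notes: str

def scan_submission(filename: str, code: str,
                    model: Callable[[str], str]) -> ScanResult:
    """Run one submission through the prompt -> model -> decision chain.

    `model` stands in for the LangChain-orchestrated, AWS-hosted LLM;
    it is injected so the pipeline can be exercised without network access.
    """
    prompt = f"Scan {filename} for defects:\n{code}"
    notes = model(prompt)
    # Flag unless the model explicitly reports the code is OK.
    return ScanResult(filename=filename, flagged="OK" not in notes, notes=notes)

# Stub model that always reports a problem:
result = scan_submission("app.py", "x = None\nx.close()",
                         model=lambda p: "line 2: attribute access on None")
print(result.flagged)  # True
```

Injecting the model as a callable keeps the orchestration logic testable independently of the hosted LLM.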

Challenges We Ran Into

  • Integration Complexity: Ensuring seamless integration between Python scripts and the LLM posed initial challenges.
  • Performance Optimization: Fine-tuning the scanning process to balance thoroughness with efficiency required iterative adjustments.

Accomplishments That We're Proud Of

  • Automated Fault Detection: Developing a robust system that reliably identifies code anomalies without manual intervention.
  • Streamlined Workflow: Creating a streamlined workflow that enhances development efficiency by catching issues early.

What We Learned

  • AI Integration: Deepened understanding of integrating AI capabilities into software development workflows.
  • Python Development: Enhanced proficiency in Python scripting for automation and data processing tasks.
  • Quality Assurance: Gained insights into improving code quality through automated detection and preemptive measures.

What's Next for CodeSentinel

  • Enhanced AI Capabilities: Continuously improving the LLM to detect more sophisticated code issues.
  • Feedback Mechanisms: Implementing mechanisms to provide developers with actionable feedback based on scan results.
  • Expanded Integration: Integrating CodeSentinel more deeply into CI/CD pipelines for seamless code quality assurance.

Idea

  • Over-the-Air Update Screening with GPT: This feature will pull the contents of each push (including data, config files, change logs, and updates) and scan them for inconsistencies or errors in the code. It will generate a detailed bug report and concern list, pinpointing problematic code lines and files, for human review.
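One way this screening step could be shaped, with a trivial rule standing in for the GPT call; the `push` dictionary format, the report fields, and the `screen_push` name are all hypothetical:

```python
import json

def screen_push(push: dict[str, str]) -> str:
    """Collect every artifact in an OTA push and emit a bug-report skeleton.

    `push` maps artifact names (code, config files, change logs) to their
    text contents -- an assumed format, for illustration only.
    """
    report = {"artifacts_reviewed": sorted(push), "concerns": []}
    for name, text in push.items():
        # The real screening would send each artifact to the LLM; here a
        # trivial stand-in rule flags empty artifacts as suspicious.
        if not text.strip():
            report["concerns"].append(
                {"file": name, "line": None, "issue": "artifact is empty"})
    return json.dumps(report, indent=2)

print(screen_push({"driver.cfg": "", "CHANGELOG": "v1.2: fix parser"}))
```

The JSON output doubles as the concern list handed to a human reviewer.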

Features

  • Automated checks for null-pointer dereferences, logic errors, and other critical issues.
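As a complement to the LLM pass, a deterministic pre-filter can catch the most obvious null-dereference pattern. This `ast`-based sketch is illustrative only (it ignores reassignment and scoping), not the production check:

```python
import ast

def find_none_dereferences(source: str) -> list[int]:
    """Return line numbers where a name assigned the literal None is
    later dereferenced with attribute access. Deliberately simplistic:
    a crude pre-filter run before the LLM scan."""
    tree = ast.parse(source)
    none_names: set[str] = set()
    hits: list[int] = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Assign)
                and isinstance(node.value, ast.Constant)
                and node.value.value is None):
            for target in node.targets:
                if isinstance(target, ast.Name):
                    none_names.add(target.id)
        elif (isinstance(node, ast.Attribute)
                and isinstance(node.value, ast.Name)
                and node.value.id in none_names):
            hits.append(node.lineno)
    return hits

print(find_none_dereferences("conn = None\nconn.close()"))  # [2]
```

Cheap static checks like this reduce how many submissions need a full LLM review.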

Tools

  • LangChain: For LLM orchestration.
  • AWS Bedrock: For LLM fine-tuning.
  • Python: For tying everything together.
  • Flask/Django: For server hosting and threading.
  • HTML/CSS: For the web dashboard.
