Inspiration

As organizations increasingly adopt multi-cloud strategies (AWS, Azure, GCP), managing security policies consistently across platforms becomes highly complex, and misconfigurations and policy drift often lead to security breaches. We wanted to build an AI-driven solution that not only synthesizes and validates multi-cloud policies but also provides real-time insights and attack path analysis, using Graph RAG and LLM-powered security verification.

What it does

CloudSentry is an AI-powered multi-cloud security assistant that:

  • Analyzes multi-cloud environments by ingesting cloud configurations from AWS, Azure, and GCP.
  • Uses Graph-Based RAG (Retrieval-Augmented Generation) to fetch relevant cloud relationships stored in ArangoDB.
  • Generates security insights via LLMs to identify misconfigurations and potential attack paths.
  • Verifies policies formally using SMT solvers (Z3) or model checkers (TLA+, Alloy).
  • Provides an interactive UI in Streamlit where users can query security policies, validate configurations, and interact with an AI security agent.

How we built it

We followed a modular approach to build CloudSentry:

  1. Cloud Data Ingestion:
    • Extracted cloud environment data from AWS using CloudGoat.
    • Stored extracted resources in Neo4j and synchronized them to ArangoDB.
  2. Graph-Based RAG for Security Insights:
    • Persisted graph relationships in ArangoDB.
    • Indexed data for fast retrieval using vector search.
  3. LLM Integration in Streamlit:
    • Built a Streamlit-based LLM agent to interact with cloud security policies.
    • Used RAG to inject contextual security knowledge into LLM prompts.
    • Integrated OpenAI API for generating responses.
  4. Formal Policy Verification:
    • Translated security policies into SMT formulas (Z3) for validation.
    • Implemented automated counterexample generation for policy violations.
    • Connected results back to LLM for an iterative policy refinement process.
  5. Multipage Streamlit UI:
    • Page 1: Graph RAG-based attack path visualization.
    • Page 2: Cloud policy validation and formal verification.
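To make step 1 concrete, here is a minimal sketch of normalizing extracted cloud resources into ArangoDB-style vertex and edge documents. The resource shapes and the `resources` collection name are illustrative assumptions, not CloudSentry's actual schema:

```python
# Sketch of the ingestion step: flatten extracted cloud resources into
# graph documents. Field names and the "resources" collection are
# hypothetical examples, not the project's real schema.

def to_graph_docs(resources):
    """Convert a flat resource list into vertex and edge documents."""
    vertices, edges = [], []
    for r in resources:
        vertices.append({"_key": r["id"], "type": r["type"], "cloud": r["cloud"]})
        for target in r.get("attached_to", []):
            edges.append({
                "_from": f"resources/{r['id']}",   # ArangoDB edge convention
                "_to": f"resources/{target}",
                "relation": "ATTACHED_TO",
            })
    return vertices, edges

sample = [
    {"id": "role-admin", "type": "IAM::Role", "cloud": "aws",
     "attached_to": ["policy-star"]},
    {"id": "policy-star", "type": "IAM::Policy", "cloud": "aws"},
]
vertices, edges = to_graph_docs(sample)
```

In practice the same transform would run over resources pulled from each provider's APIs before they are written to Neo4j and synchronized to ArangoDB.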
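Steps 2 and 3 can be sketched as a retrieval-plus-prompt-assembly pass. The AQL traversal, graph name, and relation labels below are assumptions for illustration; with python-arango the query would run via `db.aql.execute(...)`, and here a hard-coded result stands in for the database call:

```python
# Sketch of Graph RAG: an AQL traversal (hypothetical graph/relation names)
# retrieves attack paths, which are injected into the LLM prompt as context.

ATTACK_PATH_AQL = """
FOR v, e, p IN 1..4 OUTBOUND @start GRAPH 'cloud_graph'
    FILTER e.relation IN ['ATTACHED_TO', 'CAN_ASSUME', 'HAS_ACCESS']
    RETURN CONCAT_SEPARATOR(' -> ', p.vertices[*]._key)
"""

def build_prompt(question, retrieved_paths):
    """Assemble a RAG prompt: retrieved graph context first, then the query."""
    context = "\n".join(f"- {p}" for p in retrieved_paths)
    return (
        "You are a cloud security analyst. Use only the attack paths below.\n"
        f"Attack paths:\n{context}\n\n"
        f"Question: {question}"
    )

# Stand-in for a real db.aql.execute(ATTACK_PATH_AQL, bind_vars=...) result:
paths = ["role-admin -> policy-star -> s3-bucket-public"]
prompt = build_prompt("Which roles can reach public S3 buckets?", paths)
```

The assembled prompt is then sent to the OpenAI API; grounding the model in retrieved paths is what keeps its answers tied to the actual environment.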
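Step 4's counterexample generation can be illustrated with a dependency-free stand-in: exhaustively checking a tiny boolean policy model for violating assignments. The real pipeline encodes such constraints as Z3 SMT formulas; the variable and rule names here are invented for the example:

```python
from itertools import product

# Toy model checker standing in for the Z3 step: enumerate every assignment
# the environment allows and look for one that violates the policy.
# Variables and the policy rule are illustrative, not CloudSentry's.

VARS = ["bucket_public", "bucket_encrypted", "mfa_enabled"]

def policy_holds(m):
    """Policy: public buckets must be encrypted, and MFA must be enabled."""
    return (not m["bucket_public"] or m["bucket_encrypted"]) and m["mfa_enabled"]

def find_counterexample(env_constraints):
    """Return a violating assignment (like a Z3 model), or None if the policy holds."""
    for values in product([False, True], repeat=len(VARS)):
        m = dict(zip(VARS, values))
        if env_constraints(m) and not policy_holds(m):
            return m
    return None

# Environment where MFA is known to be disabled:
cex = find_counterexample(lambda m: not m["mfa_enabled"])
# A non-None cex is the concrete violation fed back to the LLM for refinement.
```

A Z3 version would replace the enumeration with `Solver.check()` and `Solver.model()`, which scales to policies far beyond brute force.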

Challenges we ran into

  • Efficiently ingesting cloud security data: Mapping AWS, Azure, and GCP configurations into a unified graph model was challenging.
  • Optimizing Graph-Based RAG retrieval: Ensuring the most relevant attack paths were retrieved for LLM context injection.
  • Reducing LLM hallucinations: Fine-tuning prompt engineering to ensure security recommendations were accurate.
  • Integrating formal verification into an AI pipeline: Converting security policies into SMT-solvable formulas while keeping real-time performance.

Accomplishments that we're proud of

  • Successfully implemented Graph RAG with ArangoDB for security context retrieval.
  • Developed an interactive LLM security agent that dynamically assists in policy validation.
  • Designed an automated attack path detection system leveraging AI.
  • Integrated formal verification (Z3/TLA+) into the security validation pipeline.

What we learned

  • Graph databases (ArangoDB) are powerful for security intelligence and RAG-based applications.
  • Effective LLM prompt engineering improves response accuracy in security applications.
  • Combining AI with formal methods (Z3, Alloy) enhances policy validation reliability.
  • Multi-cloud security requires automation; manual checks do not scale.

What's next for CloudSentry

  • Deepen the RAG integration with ArangoDB and harden the basic LLM API.
  • Expand formal policy verification coverage with Z3/TLA+.
  • Enhance multi-cloud attack path visualization.
  • Add AI-powered automated correction of cloud security policies.
