AgileGPT: Autonomous Agile Intelligence

  • This project implements a secure, scalable autonomous agile platform built on the tech stack below.
  • Next.js 14, FastAPI, PostgreSQL/SQLite, Docker, and AI analysis with Sonar.
  • It simulates a real-world agile workflow in which daily logs are processed asynchronously by an LLM to detect burnout risks in cross-functional teams.
  • These insights are visualized in real time, giving managers actionable data for team management.

Architecture Overview

The system is built using a modern three-tier monolithic architecture (Client-Server-Database) with a specialized service layer for the AI logic. Note that the service layer is implemented as a standalone module, ai_service.py.

  • Frontend (Next.js 14): A responsive user interface for both the employee and the manager, including daily check-ins and data visualization built with Tailwind CSS and Recharts.
  • Backend (FastAPI): A high-performance API that handles authentication, data validation, and the core business logic for automating KPI workflows.
  • AI Service Layer: A dedicated module that analyzes text logs for stress indicators and calculates risk scores in real time.
  • Database (PostgreSQL/SQLite): Provides persistent storage for user profiles, organizations, and daily logs.
  • Authentication: A secure OAuth2-with-JWT implementation for role-based access control (RBAC), scoped per organization and per role.
  • Containerization: Docker containerizes the entire system for consistent deployment; hosting on a Render + Vercel deployment stack is planned.

Architecture Diagram

The system follows a linear flow from the frontend client to the backend to persistent storage, all integrated through secure API endpoints.

```mermaid
graph LR
    subgraph Client ["Frontend (Next.js 14)"]
        UI[User Interface]
        AuthUI[Login / Signup]
        DashUI[Dashboard]
        Axios[Axios Interceptor]
    end

    subgraph Server ["Backend (FastAPI)"]
        API[Main Entry Point]

        subgraph Routers ["API Routes"]
            AuthR[Auth Router]
            LogsR[Logs Router]
            AnR[Analytics Router]
        end

        subgraph Logic ["Service Layer"]
            AI[AI Agent Simulation]
            JWT[JWT Auth Handler]
        end

        ORM[SQLAlchemy ORM]
    end

    subgraph Database ["Persistence Layer"]
        DB[(SQLite / PostgreSQL)]
    end

    %% Connections
    UI --> AuthUI & DashUI
    AuthUI & DashUI -->|HTTP Requests| Axios
    Axios -->|REST API / JSON| API

    API --> AuthR & LogsR & AnR

    %% Logic Flows
    AuthR -->|Verify Creds| JWT
    LogsR -->|Submit Log Data| AI
    AI -->|Return Risk Score| LogsR

    %% Database Interaction
    JWT & LogsR & AnR -->|Query/Commit| ORM
    ORM -->|Read/Write| DB
```

Module Workflows

  • The system comprises three primary modules.
  • The sequence diagrams below highlight the internal workings of these modules.

1. Authentication Module

  • Handles user registration (creating a new organization based on the user's form input).
  • A user can sign up as either an employee or a manager.
  • Because Role-Based Access Control is implemented, only managers within an organization can view that organization's employee pulses and analyze burnout.
  • Ensures secure login via JWT generation.
```mermaid
sequenceDiagram
   autonumber

   participant User
   participant FE as Frontend<br/>(Next.js)
   participant API as Backend API<br/>(Auth Router)
   participant Utils as Auth Utils<br/>(Hash · JWT)
   participant DB as Database

   Note over User,DB: User Signup & Login Flow

   %% Signup
   User->>FE: Enter signup details
   FE->>API: POST /auth/signup
   API->>DB: Check email existence

   alt Email already registered
       DB-->>API: User found
       API-->>FE: 400 Bad Request
   else New user
       API->>DB: Validate / create organization
       API->>Utils: Hash password
       Utils-->>API: Hashed password
       API->>DB: Persist user
       DB-->>API: Success
       API-->>FE: 201 Created
   end

   %% Login
   User->>FE: Enter credentials
   FE->>API: POST /auth/login
   API->>DB: Retrieve user by email
   API->>Utils: Verify password

   alt Credentials valid
       Utils-->>API: Verified
       API->>Utils: Generate JWT
       Utils-->>API: Access token
       API-->>FE: Token + user role
       FE->>FE: Store token
   else Invalid credentials
       API-->>FE: 401 Unauthorized
   end
```
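The "Generate JWT" step above can be sketched in plain Python. A real deployment would use a library such as python-jose or PyJWT; this stdlib-only sketch (with hypothetical helper names `create_access_token` and `decode_token`) merely illustrates the HS256 signing that the Auth Utils participant performs, using the same SECRET_KEY and expiry settings as the .env file described later:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET_KEY = "change-me"            # assumption: mirrors SECRET_KEY from .env
ACCESS_TOKEN_EXPIRE_MINUTES = 60    # mirrors ACCESS_TOKEN_EXPIRE_MINUTES

def _b64(data: bytes) -> str:
    """URL-safe base64 without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def create_access_token(sub: str, role: str) -> str:
    """Build a signed HS256 JWT carrying the user id and role claims."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps({
        "sub": sub,
        "role": role,
        "exp": int(time.time()) + ACCESS_TOKEN_EXPIRE_MINUTES * 60,
    }).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64(hmac.new(SECRET_KEY.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def decode_token(token: str) -> dict:
    """Verify the signature and expiry, then return the claims."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64(hmac.new(SECRET_KEY.encode(), signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    claims = json.loads(base64.urlsafe_b64decode(payload + "=" * (-len(payload) % 4)))
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims
```

The frontend stores the returned token and the Axios interceptor attaches it to subsequent requests.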

Screenshots: Login and Signup pages.

2. Daily Pulse & AI Analysis Module

  • The pulse module captures key burnout signals from the employee.

  • After starting a pulse, the employee can also view their pulse history.

  • The log is sent as JSON to the LLM, which draws inferences about employee burnout.

  • The LLM responds with a burnout score and an analysis to assist the manager, covered in depth further down in the README.

```mermaid
sequenceDiagram
   autonumber

   participant Emp as Employee
   participant FE as Frontend<br/>(Pulse Page)
   participant API as Backend API<br/>(Logs Router)
   participant AI as AI Service
   participant DB as Database

   Note over Emp,DB: Daily Pulse Submission Flow

   Emp->>FE: Enter mood (1–10), tasks, blockers
   FE->>API: POST /logs

   Note right of API: AI Analysis
   API->>AI: analyze_log_with_ai(mood, blockers)
   AI->>AI: Keyword evaluation
   AI->>AI: Risk score calculation
   AI-->>API: Risk score + analysis

   API->>DB: Save daily log (with AI data)
   DB-->>API: Success

   API-->>FE: 201 Created
   FE->>FE: Redirect to dashboard
```
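The "Keyword evaluation" and "Risk score calculation" steps can be sketched as follows. This is a simplified, hypothetical stand-in for `analyze_log_with_ai` (the real service calls Sonar); the keyword list and weights are illustrative assumptions, not the shipped logic:

```python
# Hypothetical keyword weights; the real service derives its analysis from Sonar.
STRESS_KEYWORDS = {"blocked": 20, "fail": 20, "overwhelmed": 25, "deadline": 10}

def analyze_log_with_ai(mood: int, blockers: str) -> dict:
    """Return a 0-100 risk score plus a short analysis string."""
    score = max(0, (5 - mood) * 10)          # mood is 1-10; low mood raises risk
    text = blockers.lower()
    hits = [kw for kw in STRESS_KEYWORDS if kw in text]
    score = min(score + sum(STRESS_KEYWORDS[kw] for kw in hits), 100)
    analysis = ("High burnout risk: " + ", ".join(hits)) if score > 50 \
        else "Risk within normal range"
    return {"risk_score": score, "analysis": analysis}
```

With this scoring, a low-mood log whose blockers mention "blocked" or "fail" lands above the high-risk threshold, which is exactly what the usage-verification steps later in this README exercise.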

Employee Dashboard

  • The employee dashboard shows a history of the pulses submitted by the employee.

3. Analytics and Dashboard

  • The "Dashboard" in question is the manager's dashboard.

  • Only managers from the relevant organization have access to these analytics.

  • Raw logs are used to create trend data for the charts.

  • Employees with a high burnout risk are added to the Action Needed tab, which carries an AI-powered advisory analysis of their burnout.

  • The team mood (average-based), high-burnout-risk employees, and total pulses are all displayed to the manager.

  • Trend lines based on the risk assessment and a pie chart are also displayed.

```mermaid
sequenceDiagram
   autonumber
   participant PM as Product Manager
   participant FE as Frontend<br/>(Dashboard)
   participant API as Backend API<br/>(Analytics Router)
   participant DB as Database

   Note over PM,DB: Command Center Dashboard Load

   PM->>FE: Open /dashboard
   FE->>FE: Verify role = PM
   FE->>API: GET /analytics/dashboard

   %% Data aggregation
   API->>DB: Fetch logs (by organization)
   DB-->>API: Raw log records

   API->>API: Compute average mood
   API->>API: Identify high-risk users (>50)
   API->>API: Aggregate logs by date
   API->>API: Build sentiment distribution

   API-->>FE: Dashboard data (trends, risks, stats)

   %% Rendering
   FE->>FE: Render trend chart
   FE->>FE: Render sentiment breakdown
   FE->>FE: Render risk alerts
   FE-->>PM: Display dashboard
```
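The aggregation steps in the diagram (average mood, high-risk users above 50, grouping logs by date) can be sketched with a `defaultdict`, the same grouping technique described under the engineering-challenges section. Field names such as `"date"`, `"mood"`, `"risk_score"`, and `"user"` are illustrative assumptions, not necessarily the real schema:

```python
from collections import defaultdict
from statistics import mean

def build_dashboard(logs: list[dict]) -> dict:
    """Aggregate raw log records into the stats the manager dashboard shows."""
    by_date = defaultdict(list)              # group logs by their date string
    for log in logs:
        by_date[log["date"]].append(log)

    # Per-day average risk score, ordered by date, feeds the trend chart.
    risk_trend = {
        date: round(mean(l["risk_score"] for l in day_logs), 1)
        for date, day_logs in sorted(by_date.items())
    }
    return {
        "average_mood": round(mean(l["mood"] for l in logs), 1),
        "high_risk_users": sorted({l["user"] for l in logs if l["risk_score"] > 50}),
        "total_pulses": len(logs),
        "risk_trend": risk_trend,
    }
```

The frontend then only has to render this payload: trend lines from `risk_trend`, the alert panel from `high_risk_users`, and the headline stats directly.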

Manager Dashboard

  • The manager dashboard is what the Product Manager sees after logging in.
  • Note how the LLM sometimes emits a tag such as "moderate" wrapped in Markdown bold markers, expecting it to appear bold to the end user. Such outputs are now filtered out under the Output Contract section of ai_service.py.
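The clean-up of stray Markdown markers described above can be sketched as a small post-processing step. This is an illustrative guess at what the Output Contract in ai_service.py does, not the exact shipped code:

```python
import re

def enforce_output_contract(llm_text: str) -> str:
    """Strip Markdown bold/italic markers (e.g. **moderate**) that the LLM
    emits expecting rich rendering, so the dashboard shows plain text."""
    text = re.sub(r"\*{1,2}([^*]+)\*{1,2}", r"\1", llm_text)   # **x** / *x* -> x
    text = re.sub(r"_{1,2}([^_]+)_{1,2}", r"\1", text)         # __x__ / _x_ -> x
    return text.strip()
```

Running every LLM response through a normalizer like this keeps the advisory text model-agnostic, which also matters if a different LLM is swapped in later.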

Getting Started

  • Make sure Docker Desktop is installed on your system and running in the background.
  • git should be configured.

1. Structuring your .env file

  • Place this file in the backend directory, or update docker-compose.yml to point to wherever the .env lives.
  • Feel free to try LLMs of your choice and see how their outputs differ.
DATABASE_URL=<postgres_conn_string>        # for local development use sqlite:///./your_db_name.db and remember the name, e.g. agile_brain.db
SECRET_KEY=<secret_key_of_your_choice>     # used only for JWT signing
ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=60
API_KEY=pplx-<your_perplexity_API_key>
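For reference, the backend presumably loads these values with a standard tool such as python-dotenv or pydantic-settings; a minimal stdlib-only sketch of parsing such a file (illustrative only) looks like this:

```python
def parse_env(text: str) -> dict:
    """Tiny .env parser: KEY=VALUE lines; '#' comments and blanks ignored.
    Illustrative only; use python-dotenv or pydantic-settings in practice."""
    env = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # drop inline comments
        if not line or "=" not in line:
            continue
        key, value = line.split("=", 1)
        env[key.strip()] = value.strip()
    return env

sample = """
DATABASE_URL=sqlite:///./agile_brain.db
SECRET_KEY=change-me                      # only used for JWT signing
ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=60
"""
config = parse_env(sample)
```

Whatever loader is used, the five keys above are the full contract between the .env file and the backend.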

2. Start the infrastructure

docker compose up -d --build

3. Verify the deployment

  • Frontend runs on: http://localhost:3000
  • Backend: open http://localhost:8000/docs to verify that the API is running.

4. Usage & Verification

  • Employee Flow:

    • Sign up as a new user with the role "Employee".
    • Navigate to "Start Daily Pulse".
    • Submit a log with low mood (e.g., 2) and a blocker containing the word "blocked" or "fail".
    • Verify that the log is saved in your history.
  • Manager Flow:

    • Sign up as a new user with the role "Product Manager" (use the same Organization name as the employee).
    • The dashboard should automatically load the "Command Center".
    • Verify that the employee's high-risk log appears in the "AI Risk Detection" panel and the "Team Velocity vs. Stress" chart updates.

5. Debugging

  • If any issues persist, inspect the container logs, along with the other debugging tools used during development:
     docker logs agilegpt-backend-1
     docker logs agilegpt-frontend-1
  • Access the database directly:
      docker exec -it agilegpt-db-1 sqlite3 agile_brain.db
      # Then run SQL: SELECT * FROM daily_logs;
  • Rebuild the containers:
      docker compose down
      docker compose up -d --build

A few engineering challenges and their solutions

  • Time-Series Aggregation: Implemented a defaultdict(list) in the backend analytics.py to group logs by date string. Without this grouping, the line charts could not be rendered.
  • Role-Based Access Control (RBAC): The main idea was to develop a system that works for multiple organizations and multiple Agile teams across the world, so RBAC was implemented with both a ManagerView and an EmployeeView per organization.
  • AI API Integration: Achieved end-to-end integration with Sonar and used it to derive inferences; simple API calls sufficed in this case.
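The RBAC rule above reduces to a simple guard. The role and field names below are assumptions for illustration (the real implementation is presumably a FastAPI dependency on the decoded JWT):

```python
from dataclasses import dataclass

@dataclass
class User:
    email: str
    role: str            # "employee" or "product_manager" (names assumed)
    organization: str

def can_view_analytics(viewer: User, target: User) -> bool:
    """Managers may view pulses only for employees in their own organization."""
    return (viewer.role == "product_manager"
            and viewer.organization == target.organization)
```

Centralizing the check in one predicate keeps the ManagerView and EmployeeView consistent and makes the organization boundary easy to test.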

Key Takeaways

  • The decoupled frontend and backend allow for independent scaling and development.
  • The SaaS provides immediate value to the manager and helps with important decision-making.
  • The model currently uses Sonar, which can be swapped based on the business use case, maintaining extensibility.

Future Work and Development

  • This serves as a foundation for a SaaS that businesses can actually use with their Agile teams, or even within their CRM platforms.
  • Further integration with Jira and GitHub to analyze commits and tickets would enable more comprehensive evaluations of employee burnout within an organization.
  • Future work includes introducing RAG to the current model, and ideally a fully local offline model that doesn't require expensive API requests to an LLM.
  • Feel free to submit PRs if this looks cool to you!

About

A business-oriented SaaS helping cross-functional teams reduce burnout in Agile environments.
