
BI Command Center

Powered by CData Connect AI — a natural-language BI dashboard that unifies data across any combination of enterprise sources (CRM, ERP, Support, HRIS, Product Usage, Marketing) and lets business users query all of them in plain English.



What It Does

BI Command Center sits on top of CData Connect AI. You connect your existing data sources through CData (Salesforce, HubSpot, QuickBooks, Zendesk, Snowflake, Google Sheets — anything CData supports), map them to entity categories in this app, then ask business questions in plain English.

The app automatically identifies which databases to query, writes the SQL, executes it across all sources in parallel, joins the results, and produces an executive-style summary.

Example questions you can ask:

  • "Show me all enterprise accounts with open support tickets and outstanding invoices"
  • "Which customers have high product usage but haven't renewed their contract?"
  • "What's our revenue by region compared to support ticket volume?"
  • "List employees who manage accounts with churn risk"

Architecture

See ARCHITECTURE.md for the full system diagram, query execution flow, and security model.

flowchart TB
    subgraph BROWSER["🌐  Browser  (Client)"]
        direction LR
        UI["Login · Dashboard · Settings"]
        LS["🔒 LocalStorage — encrypted per user<br/>CData Connect AI credentials · LLM key · mappings · table selections"]
    end

    subgraph SERVER["⚙️  Next.js API Server"]
        direction LR
        QUERY["POST /api/query"]
        OTHERS["GET /api/sources<br/>GET /api/cdata/catalogs  ·  schemas"]
    end

    subgraph LLM_BOX["🤖  LLM Provider"]
        direction LR
        PLAN["Query Planner<br/>(schema → SQL plan)"]
        SYNTH["Result Synthesizer<br/>(rows → insights)"]
    end

    subgraph CDATA["☁️  CData Connect AI"]
        direction TB
        META["REST Metadata API<br/>catalogs / schemas / tables / columns"]
        MCP["MCP Endpoint — SQL execution"]
        subgraph SOURCES["Connected Data Sources"]
            direction LR
            CRM_N["🔵 CRM"]
            ERP_N["🟢 ERP"]
            SUP_N["🟡 Support"]
            HR_N["🩷 HRIS"]
            USG_N["🟣 Usage"]
            MKT_N["🟠 Marketing"]
        end
    end

    BROWSER -- "HTTPS" --> SERVER
    SERVER -- "① schema" --> META
    SERVER -- "② plan" --> PLAN
    SERVER -- "③ SQL" --> MCP
    SERVER -- "④ synthesize" --> SYNTH
    META --- SOURCES
    MCP --- SOURCES

    style BROWSER fill:#0f172a,stroke:#3b82f6,color:#dbeafe
    style UI fill:#1e3a5f,stroke:#3b82f6,color:#bfdbfe
    style LS fill:#1e293b,stroke:#475569,color:#94a3b8
    style SERVER fill:#1e1b4b,stroke:#6366f1,color:#e0e7ff
    style QUERY fill:#312e81,stroke:#6366f1,color:#c7d2fe
    style OTHERS fill:#312e81,stroke:#6366f1,color:#c7d2fe
    style LLM_BOX fill:#042f2e,stroke:#0d9488,color:#ccfbf1
    style PLAN fill:#0f3d3b,stroke:#14b8a6,color:#99f6e4
    style SYNTH fill:#0f3d3b,stroke:#14b8a6,color:#99f6e4
    style CDATA fill:#052e16,stroke:#16a34a,color:#dcfce7
    style META fill:#14532d,stroke:#22c55e,color:#bbf7d0
    style MCP fill:#14532d,stroke:#22c55e,color:#bbf7d0
    style SOURCES fill:#052e16,stroke:#15803d,color:#d1fae5
    style CRM_N fill:#1e3a8a,stroke:#3b82f6,color:#bfdbfe
    style ERP_N fill:#14532d,stroke:#22c55e,color:#bbf7d0
    style SUP_N fill:#78350f,stroke:#d97706,color:#fde68a
    style HR_N fill:#831843,stroke:#db2777,color:#fbcfe8
    style USG_N fill:#4c1d95,stroke:#7c3aed,color:#ddd6fe
    style MKT_N fill:#7c2d12,stroke:#ea580c,color:#fed7aa

What Uses the LLM vs. Direct API Calls

LLM (Grok, Gemini, OpenAI, Mistral, or DeepSeek — your choice)

| Step | What the LLM does |
| --- | --- |
| Query Planning | Reads your question plus the live schema context (table names, column names) for all mapped sources and produces a structured query plan: which entities to query, what SQL to run against each, and how to join the results |
| Result Synthesis | After SQL execution, reads the raw result rows and writes a plain-English summary with key insights and suggested follow-up questions |

The LLM never connects to your database directly. It only sees metadata (table/column names) during planning, and the final result rows during synthesis.
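To make the exposure boundary concrete, here is a hypothetical sketch of the metadata-only context a planner prompt might carry. The interface and function names are illustrative assumptions, not the app's actual types; the point is that only table and column names appear, never rows or credentials:

```typescript
// Illustrative shape of the schema context handed to the query planner.
interface SchemaContext {
  entity: string;                                  // e.g. "CRM"
  tables: { name: string; columns: string[] }[];   // names only, no data
}

// Renders the metadata-only context string for the LLM prompt.
// No result rows, credentials, or connection strings are included.
function renderSchemaContext(contexts: SchemaContext[]): string {
  return contexts
    .map(c =>
      `[${c.entity}]\n` +
      c.tables.map(t => `${t.name}(${t.columns.join(", ")})`).join("\n")
    )
    .join("\n\n");
}
```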

Direct API Calls (CData Connect AI — no LLM involved)

| Operation | API Used | Triggered When |
| --- | --- | --- |
| List available connections | CData REST Metadata API | Settings → "List Connections" |
| List schemas per connection | CData REST Metadata API | Settings → expand a connection |
| Get table count per entity | CData REST Metadata API | Dashboard sidebar loads |
| Fetch table list per entity | CData REST Metadata API | Expand a source card |
| Preview table contents | CData REST Query API | Click the eye icon on a table |
| Fetch column names (schema context) | CData REST Metadata API | On each query (cached 5 min) |
| Execute SQL across sources | CData MCP Endpoint | Every natural-language query |
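The five-minute schema-context cache mentioned above can be sketched as a minimal TTL cache keyed by the user, mappings, and selections combination. This is an illustrative sketch, not the app's actual cache implementation:

```typescript
// Minimal TTL cache sketch for the 5-minute schema-context cache.
// Keys would combine user + mappings + selections, as described above.
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();
  constructor(private ttlMs: number) {}

  get(key: string): V | undefined {
    const hit = this.store.get(key);
    if (!hit) return undefined;
    if (Date.now() > hit.expires) {
      this.store.delete(key);   // stale entry: evict and report a miss
      return undefined;
    }
    return hit.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
  }
}
```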

Key Features

Multi-Source Natural Language Queries

Ask a single question spanning multiple enterprise systems. The app queries all relevant sources in parallel and joins the results automatically.

Per-User Profiles with Full Isolation

Multiple users can share one browser. Each user's credentials, LLM key, connection mappings, and table selections live in separate localStorage namespaces keyed by their profile UUID. Switching users resets all state instantly — no page refresh required.
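A per-profile namespace scheme like the one described might look like the following sketch. The "bicc:" prefix and function names are hypothetical, chosen only to illustrate how UUID-keyed namespaces keep profiles from colliding:

```typescript
// Hypothetical per-profile key scheme: every stored item is prefixed
// with the profile UUID, so users sharing a browser never read each
// other's entries.
function namespacedKey(profileId: string, key: string): string {
  return `bicc:${profileId}:${key}`;
}

// Clearing a profile touches only the keys in that profile's namespace.
function keysForProfile(allKeys: string[], profileId: string): string[] {
  return allKeys.filter(k => k.startsWith(`bicc:${profileId}:`));
}
```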

Dynamic Connection Mapping (No Hardcoding)

Zero hardcoded database connections. In Settings you browse your CData Connect AI catalogs, expand connections to see schemas, and assign each schema to one of six entity categories. Mappings are stored locally and sent to the server on every request as a base64-encoded header.
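The base64-encoded mappings header could be produced and consumed as in this server-side (Node) sketch; a browser client would use btoa/atob instead of Buffer. Type and function names are illustrative:

```typescript
// Sketch of serializing mappings into the X-Connection-Mappings header.
// Base64 keeps the JSON header-safe; the server simply reverses it.
type EntityMappings = Record<string, string[]>;  // entity -> ["connection.schema", ...]

function encodeMappingsHeader(mappings: EntityMappings): string {
  return Buffer.from(JSON.stringify(mappings), "utf8").toString("base64");
}

function decodeMappingsHeader(header: string): EntityMappings {
  return JSON.parse(Buffer.from(header, "base64").toString("utf8"));
}
```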

Table Selection (Query Focus)

Pin specific tables within each source card. When tables are selected, the LLM only receives schema context for those tables — faster queries and more accurate SQL. Selecting nothing defaults to using all tables (up to 30 per connection).
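The selection rule above (pinned tables win; an empty selection falls back to everything, capped at 30) can be sketched as a single filter. This is an assumed helper, not the app's actual code:

```typescript
// Sketch of narrowing schema context to pinned tables.
// An empty selection means "use all tables", capped per connection.
function tablesForContext(allTables: string[], selected: string[], cap = 30): string[] {
  const pool = selected.length > 0
    ? allTables.filter(t => selected.includes(t))
    : allTables;
  return pool.slice(0, cap);
}
```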

Encrypted Credential Storage

CData Connect AI credentials and LLM API keys are encrypted with AES-GCM (key derived via PBKDF2 from the user's PAT) before being written to localStorage. They are never stored in plaintext. They are passed to the server only as HTTP request headers, per request.

Results Table + Charts

Query results are displayed in a sortable data table and an auto-generated charts view (bar, line, or pie — chosen automatically based on data shape).
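A chart-picking heuristic "based on data shape" might look like this hypothetical sketch; the actual rules the app uses are not documented here, so these conditions are assumptions:

```typescript
// Hypothetical heuristic for auto-picking a chart type from result shape.
type ChartKind = "bar" | "line" | "pie";

function pickChart(rows: Record<string, unknown>[]): ChartKind {
  if (rows.length === 0) return "bar";
  const keys = Object.keys(rows[0]);
  const hasDate = keys.some(k => /date|month|time/i.test(k));
  if (hasDate) return "line";          // time dimension -> trend line
  if (rows.length <= 6) return "pie";  // few categories -> share-of-whole
  return "bar";                        // default: categorical comparison
}
```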

Token Usage Tracking

The Settings page shows live LLM token consumption: session tokens (since last login) and all-time totals, split by prompt vs. completion, with cost estimates per provider.

Save Button State Awareness

Save buttons in Settings are only active when unsaved changes exist. When configuration is up to date the button shows "✓ Saved" and is disabled — preventing accidental duplicate saves.


Entity Categories

The app organises all data sources into six fixed categories:

| Category | Colour | Typical Sources | Cache TTL |
| --- | --- | --- | --- |
| CRM | Blue #3B82F6 | Salesforce, HubSpot, Dynamics 365 | 1 hour |
| ERP | Green #10B981 | QuickBooks, NetSuite, SAP | 24 hours |
| Support | Amber #F59E0B | Zendesk, Freshdesk, Jira Service Management | 15 minutes |
| Usage | Purple #8B5CF6 | Snowflake, BigQuery, custom DBs | 4 hours |
| HRIS | Pink #EC4899 | Workday, ADP | 24 hours |
| Marketing | Orange #F97316 | Marketo, HubSpot, Google Analytics | 1 hour |

One entity category can map to multiple CData connections/schemas. The same connection can be assigned to multiple entity categories.
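That many-to-many relationship can be expressed with a simple structure like the sketch below; the type names and the "connection.schema" identifier format are illustrative assumptions:

```typescript
type Entity = "CRM" | "ERP" | "Support" | "Usage" | "HRIS" | "Marketing";
// entity -> list of "connection.schema" identifiers (format illustrative)
type Mappings = Partial<Record<Entity, string[]>>;

// Reverse lookup: every entity category a given schema is assigned to.
// A schema appearing under several entities is what makes this many-to-many.
function entitiesForSchema(mappings: Mappings, schema: string): Entity[] {
  return (Object.keys(mappings) as Entity[]).filter(
    e => (mappings[e] ?? []).includes(schema)
  );
}
```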


Setup

Prerequisites

  • Node.js 18+
  • A CData Connect AI account with at least one connection configured
  • An API key for one LLM provider: Grok (xAI), Gemini (Google), OpenAI, Mistral, or DeepSeek

1. Clone and install

git clone https://github.com/dibyendud-cdata/bi-command-center.git
cd bi-command-center
npm install

2. Configure environment

cp .env.local.template .env.local

Only CDATA_MCP_ENDPOINT is strictly required in .env.local; the remaining entries have sensible defaults:

CDATA_MCP_ENDPOINT=https://mcp.cloud.cdata.com/mcp
NEXT_PUBLIC_APP_NAME=BI Command Center
NODE_ENV=development

Important: Do not put your CData email, PAT, or LLM API key in .env.local. These are entered by each user through the app UI and stored encrypted in their browser. The server is stateless with respect to credentials.

3. Run

npm run dev

Open http://localhost:3001.


First-Time App Setup

Step 1 — Create a profile

On the login screen, enter your name, select a role, and enter your CData Username (Email) and Password (PAT). Click "Start".

Step 2 — Configure your LLM

Go to Settings → LLM Configuration. Select your provider (Grok, Gemini, OpenAI, Mistral, or DeepSeek), choose a model, and paste your API key. Click Save LLM Config.

Step 3 — Map your connections

Go to Settings → CData Connect AI Configuration. Click List Connections to fetch your available CData catalogs. Expand any connection to lazy-load its schemas. For each schema, click the entity buttons (CRM, ERP, Support, etc.) to assign it to one or more categories. Click Save Mappings.

Step 4 — Query

Return to the dashboard. Your source cards in the sidebar will show the mapped connections and table counts. Type a question in the search bar or click a Quick Query Template.


How a Query Works (Step by Step)

User types a question in the search bar
        │
        ▼
POST /api/query
  Headers: X-CData-Email, X-CData-PAT, X-LLM-Provider, X-LLM-Key, X-LLM-Model,
           X-Connection-Mappings (base64), X-Table-Selections (base64)
        │
        ├── 1. buildSchemaContext()
        │       └── CData REST API: getTables() + getColumns() for each mapped entity
        │           • Filtered to selected tables if any pinned (otherwise up to 30 per connection)
        │           • Result cached for 5 minutes per user+mappings+selections combination
        │
        ├── 2. LLM planQuery(question, schemaContext)
        │       └── Returns JSON query plan:
        │           { sourcesNeeded, queries[{source, sql, canParallel}], joinStrategy }
        │
        ├── 3. executor.execute(plan, credentials, mappings)
        │       ├── Parallel queries → CData MCP endpoint (SQL execution)
        │       ├── Sequential queries → CData MCP endpoint
        │       └── In-memory LEFT/INNER JOIN of result sets
        │
        └── 4. LLM synthesizeResults(question, results)
                └── Returns { summary, insights[], followUpQuestions[] }

Response → { plan, results, synthesis, tokenUsage }
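The in-memory join in step 3 can be sketched as a hash-based LEFT JOIN over the per-source result sets. Key names and the column-clash rule here are illustrative assumptions, not the executor's documented behavior:

```typescript
// Sketch of the in-memory LEFT JOIN step applied after the per-source
// SQL results come back from the MCP endpoint.
type Row = Record<string, unknown>;

function leftJoin(left: Row[], right: Row[], leftKey: string, rightKey: string): Row[] {
  // Index the right side once so the join runs in O(n + m).
  const index = new Map<unknown, Row[]>();
  for (const r of right) {
    const k = r[rightKey];
    const bucket = index.get(k) ?? [];
    bucket.push(r);
    index.set(k, bucket);
  }
  return left.flatMap(l => {
    const matches = index.get(l[leftKey]);
    if (!matches) return [l];                   // LEFT JOIN keeps unmatched rows
    return matches.map(r => ({ ...r, ...l })); // left side wins on column clashes
  });
}
```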

Project Structure

bi-command-center/
├── app/
│   ├── api/
│   │   ├── cdata/
│   │   │   ├── catalogs/route.ts     # List CData connections
│   │   │   ├── schemas/route.ts      # List schemas per connection
│   │   │   └── test/route.ts         # Connection health check
│   │   ├── sources/
│   │   │   ├── route.ts              # Source card status + table counts
│   │   │   └── [sourceId]/
│   │   │       ├── tables/route.ts           # Table list per entity
│   │   │       ├── tables/[tableName]/route.ts # Table data preview
│   │   │       └── preview/route.ts          # Quick preview
│   │   └── query/route.ts            # Main NL query endpoint
│   ├── settings/page.tsx             # Settings page
│   ├── page.tsx                      # Dashboard
│   ├── layout.tsx                    # Root layout + providers
│   └── globals.css
├── components/
│   ├── auth/
│   │   ├── LoginScreen.tsx
│   │   └── ProfileMenu.tsx
│   ├── query/
│   │   ├── SearchBar.tsx
│   │   ├── TemplateCards.tsx
│   │   ├── ResultsTable.tsx
│   │   ├── ResultsVisualizer.tsx
│   │   └── QueryPlan.tsx
│   ├── sources/
│   │   └── SourceList.tsx            # Source cards with table selection
│   ├── logo.tsx                      # SVG app logo
│   └── ui/                           # shadcn/ui components + toaster
├── contexts/
│   ├── auth-context.tsx              # Profile + credential management
│   ├── llm-config-context.tsx        # LLM settings + token tracking
│   └── mappings-context.tsx          # Connection mappings + table selections
├── lib/
│   ├── auth/storage.ts               # Encrypted profile localStorage
│   ├── cdata/
│   │   ├── client.ts                 # CData MCP query client
│   │   ├── metadata-client.ts        # CData REST metadata client
│   │   └── cache.ts                  # Query result cache
│   ├── llm/
│   │   └── config-storage.ts         # Encrypted LLM config localStorage
│   ├── mappings/
│   │   └── storage.ts                # Mappings + selections localStorage
│   ├── query/
│   │   └── executor.ts               # SQL execution + result join engine
│   └── llm-client.ts                 # LLM provider abstraction
├── types/
│   ├── sources.ts                    # DataSourceType, EntityMappings, etc.
│   └── queries.ts                    # QueryPlan, QueryResults, etc.
└── .env.local.template               # Environment variable reference

Security Model

| Concern | How It's Handled |
| --- | --- |
| Credential storage | AES-GCM encryption in localStorage; key derived from PAT via PBKDF2 |
| Credential transmission | HTTP headers per request only; never in URL or body |
| Server-side credential state | None — server is fully stateless; credentials are not stored or cached |
| Multi-user isolation | All localStorage keys namespaced by profile UUID |
| Env file safety | .gitignore excludes all *.local, *.env, and *.bak files |
| LLM data exposure | LLM sees only table/column names (metadata) and final result rows — never raw DB connections |

Troubleshooting

CData Connection Issues

  • Ensure your Username (Email) and Password (PAT) are correct in your profile
  • Verify your connection appears in the CData Connect AI dashboard
  • Go to Settings → expand a connection → assign schemas to entity categories

LLM Issues

  • Confirm your API key is saved (Settings → LLM Configuration → Save LLM Config)
  • Check that the correct provider is selected for your key
  • If queries time out, try a faster model (e.g. grok-3-fast, gemini-2.0-flash)

Query Returns Wrong Results

  • Check which tables are selected in the source cards (empty = all tables)
  • Try pinning only the relevant tables for your question
  • The schema context is cached for 5 minutes — after remapping connections, wait or restart the server

Tech Stack

| Layer | Technology |
| --- | --- |
| Framework | Next.js 15 (App Router) |
| Language | TypeScript 5 |
| Styling | Tailwind CSS + shadcn/ui |
| Data connectivity | CData Connect AI (REST metadata API + MCP query endpoint) |
| LLM providers | Grok (xAI), Gemini (Google), OpenAI, Mistral, DeepSeek |
| Charts | Recharts |
| State management | React Context + encrypted localStorage |
| Cryptography | Web Crypto API — AES-GCM encryption, PBKDF2 key derivation |

License

Private — all rights reserved.
