AI Configuration

Configure AI models to enable AI-powered features such as AI search and data governance checks.

Overview

Entropy Data works fully without any AI model. All core functionality operates independently of AI. AI features are optional enhancements that can be enabled or disabled at any time.

When enabled, AI powers the following features:

  • Search: An improved search experience with natural language search across data products, contracts, teams, and definitions
  • Data Governance AI: Automated policy compliance checks on data products and contracts
  • Access Request Evaluation: AI analyzes access requests against policies and data contract terms
  • Data Contract Assistant: Conversational assistance and inline AI features in the new ODCS Data Contract Editor

See AI Use Cases for detailed descriptions and planned features.

Disabled

AI is fully optional.

To keep AI features off, navigate to Organization Settings → AI and ensure that AI is disabled.

Managed (Cloud only)

The Cloud version includes an optional managed AI model. No configuration is required; AI features work out of the box. You can still bring your own model if preferred.

Bring Your Own Model

For both cloud and self-hosted deployments, you can configure your own AI model to enable AI features while keeping full control over costs and privacy. You need an OpenAI API-compatible deployment (which most AI providers and gateways support).

Plan for a tokens-per-minute quota of at least 100,000.

Navigate to Organization Settings → AI to configure your AI model.

OpenAI

  • Base URL: https://api.openai.com/
  • API Key: Your OpenAI API key from platform.openai.com
  • Model: gpt-4o (recommended)

The Base URL should contain everything before /v1/chat/completions, including any path prefix if you use a proxy or gateway.
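As a sanity check, the full chat completions URL can be derived from the Base URL as sketched below (a minimal shell illustration using the default value from the table above):

```shell
# The configured Base URL, with or without a trailing slash
BASE_URL="https://api.openai.com/"

# The full endpoint is the Base URL plus /v1/chat/completions
ENDPOINT="${BASE_URL%/}/v1/chat/completions"
echo "$ENDPOINT"   # https://api.openai.com/v1/chat/completions
```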

See OpenAI API documentation for more details.

Azure OpenAI

You can deploy OpenAI models in Azure AI Foundry. Note: For EU customers, we recommend deploying in the Sweden region.

You can also use the model-router as a model.

  • Endpoint: https://{your-resource-name}.openai.azure.com/
  • API Key: Your Azure OpenAI API key
  • Chat Deployment Name: Your deployed model name (e.g., gpt-4o)

The Endpoint must follow the format https://{your-resource-name}.openai.azure.com/. Any path after the base URL will be stripped on save.
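For reference, the underlying Azure OpenAI request URL is built from the Endpoint and the Chat Deployment Name. The sketch below assumes the standard Azure OpenAI REST path and an example API version; the resource name is a hypothetical placeholder:

```shell
RESOURCE="my-resource"     # hypothetical Azure resource name
DEPLOYMENT="gpt-4o"        # your Chat Deployment Name
API_VERSION="2024-02-01"   # example API version

ENDPOINT="https://${RESOURCE}.openai.azure.com/"
URL="${ENDPOINT}openai/deployments/${DEPLOYMENT}/chat/completions?api-version=${API_VERSION}"
echo "$URL"
```

This is why only the base Endpoint is stored: the deployment-specific path is appended per request.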

See Azure OpenAI documentation for more details.

Google Gemini

Gemini models can be used through their OpenAI-compatible API:

  • Base URL: https://generativelanguage.googleapis.com/v1beta/openai/
  • API Key: Your API key from aistudio.google.com → Get API key
  • Model: gemini-2.5-pro

Gemini 3 is not yet supported.

See Google Gemini API documentation for more details.

Anthropic Claude

  • API Key: Your Anthropic API key from console.anthropic.com
  • Model: claude-sonnet-4-6 (recommended)
  • Host URL: Leave empty for Anthropic's default API

For Azure-hosted Anthropic models, set the Host URL and include the /anthropic path prefix (e.g., https://my-azure-foundry-name.services.ai.azure.com/anthropic/).

Note: Anthropic does not offer embedding models. If you use features that require embeddings (e.g., legacy search), you need to configure an additional embeddings model from a provider that supports them, such as OpenAI or Azure OpenAI.

See Anthropic API documentation for more details.

Other OpenAI-Compatible Endpoints

Any OpenAI-compatible API endpoint can be used, including:

  • Self-hosted models (vLLM, Ollama, etc.)
  • AI Gateways (Portkey, LiteLLM, etc.)
  • Other cloud providers with OpenAI-compatible APIs

Configure the base URL, API key, and model name according to your provider's documentation. The Base URL should contain everything before /v1/chat/completions, including any path prefix.
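The same rule applies when a gateway or proxy adds a path prefix. For example (the gateway host and prefix below are hypothetical):

```shell
# A gateway that proxies OpenAI-compatible traffic under a path prefix
BASE_URL="https://gateway.example.com/openai-proxy/"
echo "${BASE_URL%/}/v1/chat/completions"
# https://gateway.example.com/openai-proxy/v1/chat/completions
```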

Custom Headers

For AI Gateways or providers that require custom headers (such as Portkey), you can add custom headers in the configuration. This is useful for routing, tracing, or authentication purposes.
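As an illustration, a gateway request with custom headers might look like the curl call below. The gateway host and the x-gateway-route and x-trace-id header names are hypothetical placeholders; use the header names your gateway (e.g., Portkey) documents:

```shell
# Hypothetical gateway endpoint with custom routing and tracing headers
curl "https://gateway.example.com/v1/chat/completions" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -H "x-gateway-route: my-route" \
  -H "x-trace-id: 12345" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "ping"}]}'
```

Whatever headers you add in the configuration are sent with every AI request in the same way.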

Environment Variables (self-hosted)

After configuring your AI model, enable specific features:

  • APPLICATION_ACCESSREQUEST_AI_ENABLED: Set to true to enable AI governance checks on access requests
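For example, in a self-hosted deployment you might set the variable in the service environment (a minimal shell sketch; the same value can be set in your container or compose configuration):

```shell
# Enable AI-based governance checks on access requests
export APPLICATION_ACCESSREQUEST_AI_ENABLED=true
echo "$APPLICATION_ACCESSREQUEST_AI_ENABLED"   # true
```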

See Configuration for more details on environment variables.