Documentation

Overview

NavyAI gives you one API key and one base URL for chat, responses, images, embeddings, speech, moderation, and model discovery. The public API is built to feel familiar whether you come from OpenAI-style clients, Anthropic-style clients, or your own direct HTTP calls.

Base URL

https://api.navy

What you can do

  • Send OpenAI-style chat requests through POST /v1/chat/completions
  • Send Anthropic-style requests through POST /v1/messages
  • Use the newer OpenAI-style Responses flow through POST /v1/responses
  • Generate embeddings through POST /v1/embeddings
  • Generate images and videos through POST /v1/images/generations
  • Poll long-running image and video jobs through GET /v1/images/generations/:id
  • Convert speech to text with POST /v1/audio/transcriptions
  • Convert text to speech with POST /v1/audio/speech
  • Moderate content with POST /v1/moderations
  • Inspect supported models with GET /v1/models
  • Check account usage with GET /v1/usage

Why teams use it

  • One key across multiple model providers
  • OpenAI-compatible and Anthropic-compatible request formats
  • Premium and plan-gated models exposed through the same surface
  • Streaming support across text endpoints
  • Built-in model discovery so apps can render available models dynamically
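Model discovery lets a client populate its model picker at runtime instead of hard-coding IDs. Assuming GET /v1/models returns the OpenAI-compatible "list" shape (an assumption worth verifying against a live response; the model IDs below are illustrative), extracting the available IDs might look like:

```python
import json

# A sample response in the OpenAI-compatible "list" shape (illustrative, not real output).
sample = json.dumps({
    "object": "list",
    "data": [
        {"id": "gpt-5", "object": "model"},
        {"id": "claude-sonnet", "object": "model"},
    ],
})

def model_ids(raw: str) -> list[str]:
    """Extract model IDs from a /v1/models response body."""
    return [m["id"] for m in json.loads(raw)["data"]]

print(model_ids(sample))  # → ['gpt-5', 'claude-sonnet']
```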

Quick start

1. Get your API key — Sign in and generate a key from the Dashboard.

2. Set the base URL — Point any OpenAI-compatible SDK at https://api.navy/v1.

3. Make your first request:

curl -X POST https://api.navy/v1/chat/completions \
  -H "Authorization: Bearer sk-navy-YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5",
    "messages": [
      {"role": "user", "content": "Write a one-line launch announcement."}
    ]
  }'
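Assuming the response follows the OpenAI chat-completion schema (the reply lives at choices[0].message.content), pulling the text out in Python might look like this. The sample body is illustrative, not captured API output:

```python
import json

# Illustrative response body in the OpenAI-compatible shape (not real API output).
raw = json.dumps({
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "model": "gpt-5",
    "choices": [
        {"index": 0, "message": {"role": "assistant", "content": "We just launched!"}}
    ],
})

def first_reply(body: str) -> str:
    """Extract the assistant's text from a chat-completion response."""
    return json.loads(body)["choices"][0]["message"]["content"]

print(first_reply(raw))  # → We just launched!
```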