# llm-sdk-go

Communicate with any LLM provider using a single, unified interface.
Switch between OpenAI, Anthropic, DeepSeek, Mistral, Ollama, and more without changing your code.
## Installation

```bash
go get github.com/code-koan/llm-sdk-go
```
## Quick Start

```bash
# Set up your API key(s)
export OPENAI_API_KEY="YOUR_KEY_HERE" # or ANTHROPIC_API_KEY, etc.
```

```go
package main

import (
"context"
"fmt"
"log"
llmsdk "github.com/code-koan/llm-sdk-go"
"github.com/code-koan/llm-sdk-go/providers/openai"
)
func main() {
ctx := context.Background()
provider, err := openai.New()
if err != nil {
log.Fatal(err)
}
response, err := provider.Completion(ctx, llmsdk.CompletionParams{
Model: "gpt-4o-mini",
Messages: []llmsdk.Message{
{Role: llmsdk.RoleUser, Content: "Hello!"},
},
})
if err != nil {
log.Fatal(err)
}
fmt.Println(response.Choices[0].Message.Content)
}
```

That's it! To switch providers, change the import and constructor (e.g., `anthropic.New()` instead of `openai.New()`).
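For example, the same program pointed at Anthropic might look like the sketch below, assuming the `anthropic` package mirrors the `openai` constructor and reads `ANTHROPIC_API_KEY` from the environment:

```go
import (
    llmsdk "github.com/code-koan/llm-sdk-go"
    "github.com/code-koan/llm-sdk-go/providers/anthropic"
)

// Assumed to read ANTHROPIC_API_KEY, mirroring openai.New() above.
provider, err := anthropic.New()
if err != nil {
    log.Fatal(err)
}

response, err := provider.Completion(ctx, llmsdk.CompletionParams{
    Model: "claude-sonnet-4-20250514", // pick any Anthropic model ID
    Messages: []llmsdk.Message{
        {Role: llmsdk.RoleUser, Content: "Hello!"},
    },
})
```

Everything after the constructor is identical to the OpenAI version.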
## Requirements

- Go 1.25 or newer
- API keys for whichever LLM providers you want to use
## Setup

Import the main package and the providers you need:

```go
import (
llmsdk "github.com/code-koan/llm-sdk-go"
"github.com/code-koan/llm-sdk-go/providers/openai" // OpenAI
"github.com/code-koan/llm-sdk-go/providers/anthropic" // Anthropic
)
```

See our list of supported providers to choose which ones you need.
Set environment variables for your chosen providers:
```bash
export OPENAI_API_KEY="your-key-here"
export ANTHROPIC_API_KEY="your-key-here"
export DEEPSEEK_API_KEY="your-key-here"
# ... etc.
```

Alternatively, pass API keys directly in your code using options:
```go
provider, err := openai.New(llmsdk.WithAPIKey("your-key-here"))
```

## Features

- Simple, unified interface - Same types and patterns across all providers
- Idiomatic Go - Follows Go conventions with proper error handling and context support (see the timeout sketch after this list)
- Leverages official provider SDKs - Uses `github.com/openai/openai-go` and `github.com/anthropics/anthropic-sdk-go`
- Type-safe - Full type definitions for all request and response types
- Streaming support - Channel-based streaming that's natural in Go
- Battle-tested patterns - Proven unified interface design across multiple LLM providers
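Because every call takes a `context.Context`, standard Go cancellation patterns apply. A minimal sketch using only the standard library and the `Completion` call shown elsewhere in this README:

```go
// Cancel the request automatically if it takes longer than 10 seconds.
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
defer cancel()

response, err := provider.Completion(ctx, llmsdk.CompletionParams{
    Model:    "gpt-4o-mini",
    Messages: []llmsdk.Message{{Role: llmsdk.RoleUser, Content: "Hello!"}},
})
if err != nil {
    log.Fatal(err) // a deadline hit typically surfaces here as a context error
}
fmt.Println(response.Choices[0].Message.Content)
```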
## Usage

Create a provider instance and use it for requests:

```go
import (
"context"
"fmt"
"log"
llmsdk "github.com/code-koan/llm-sdk-go"
"github.com/code-koan/llm-sdk-go/providers/openai"
)
// Create provider once, reuse for multiple requests.
provider, err := openai.New(llmsdk.WithAPIKey("your-api-key"))
if err != nil {
log.Fatal(err)
}
ctx := context.Background()
response, err := provider.Completion(ctx, llmsdk.CompletionParams{
Model: "gpt-4o-mini",
Messages: []llmsdk.Message{
{Role: llmsdk.RoleUser, Content: "Hello!"},
},
})
if err != nil {
log.Fatal(err)
}
fmt.Println(response.Choices[0].Message.Content)
```

Provider instances are reusable and recommended for production applications.
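For example, a single provider can serve a whole batch of requests; a small sketch (the prompts are illustrative):

```go
// One provider, many requests: no per-request construction needed.
prompts := []string{"Explain goroutines in one sentence.", "What does gofmt do?"}
for _, p := range prompts {
    resp, err := provider.Completion(ctx, llmsdk.CompletionParams{
        Model:    "gpt-4o-mini",
        Messages: []llmsdk.Message{{Role: llmsdk.RoleUser, Content: p}},
    })
    if err != nil {
        log.Printf("request failed: %v", err)
        continue
    }
    fmt.Println(resp.Choices[0].Message.Content)
}
```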
### Streaming

Use channels for streaming responses:

```go
chunks, errs := provider.CompletionStream(ctx, llmsdk.CompletionParams{
Model: "gpt-4o-mini",
Messages: []llmsdk.Message{
{Role: llmsdk.RoleUser, Content: "Write a short poem about Go."},
},
})
for chunk := range chunks {
if len(chunk.Choices) > 0 {
fmt.Print(chunk.Choices[0].Delta.Content)
}
}
if err := <-errs; err != nil {
log.Fatal(err)
}
```

### Tool Calling

Define tools and let the model decide when to call them:

```go
response, err := provider.Completion(ctx, llmsdk.CompletionParams{
Model: "gpt-4o-mini",
Messages: []llmsdk.Message{
{Role: llmsdk.RoleUser, Content: "What's the weather in Paris?"},
},
Tools: []llmsdk.Tool{
{
Type: "function",
Function: llmsdk.Function{
Name: "get_weather",
Description: "Get the current weather for a location",
Parameters: map[string]any{
"type": "object",
"properties": map[string]any{
"location": map[string]any{
"type": "string",
"description": "The city name",
},
},
"required": []string{"location"},
},
},
},
},
ToolChoice: "auto",
})
if err != nil {
    log.Fatal(err)
}

// Check for tool calls.
if len(response.Choices[0].Message.ToolCalls) > 0 {
tc := response.Choices[0].Message.ToolCalls[0]
fmt.Printf("Function: %s, Args: %s\n", tc.Function.Name, tc.Function.Arguments)
}
```
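A typical next step is to run the tool yourself and feed the result back so the model can produce a final answer. The sketch below decodes the call's JSON arguments with `encoding/json` and fields shown above; `lookupWeather` is a hypothetical helper standing in for your own implementation:

```go
// Decode the JSON arguments the model supplied for get_weather.
var args struct {
    Location string `json:"location"`
}
if err := json.Unmarshal([]byte(tc.Function.Arguments), &args); err != nil {
    log.Fatal(err)
}

// Execute your own tool implementation.
result := lookupWeather(args.Location) // hypothetical helper

// To complete the exchange, append the assistant's tool-call message and a
// tool-result message to the conversation and call Completion again; the
// exact role and field names for tool results are provider-specific, so
// consult the SDK's tool-calling documentation for that step.
fmt.Println("tool result:", result)
```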
### Reasoning

For models that support extended thinking (like Claude):

```go
response, err := provider.Completion(ctx, llmsdk.CompletionParams{
Model: "claude-sonnet-4-20250514",
Messages: []llmsdk.Message{
{Role: llmsdk.RoleUser, Content: "Solve this step by step: What is 15% of 80?"},
},
ReasoningEffort: llmsdk.ReasoningEffortMedium,
})
if err != nil {
    log.Fatal(err)
}

if response.Choices[0].Message.Reasoning != nil {
fmt.Println("Thinking:", response.Choices[0].Message.Reasoning.Content)
}
fmt.Println("Answer:", response.Choices[0].Message.Content)All provider errors are normalized to common error types:
response, err := provider.Completion(ctx, params)
if err != nil {
switch {
case errors.Is(err, llmsdk.ErrRateLimit):
// Handle rate limiting - maybe retry with backoff.
case errors.Is(err, llmsdk.ErrAuthentication):
// Handle auth errors - check API key.
case errors.Is(err, llmsdk.ErrContextLength):
// Handle context too long - reduce input.
default:
// Handle other errors.
}
}
```
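For the rate-limit branch, here is a minimal retry-with-backoff sketch using only the standard library (`errors`, `time`) and the sentinel above; the attempt count and delays are arbitrary:

```go
// Retry up to three times, sleeping a little longer after each
// rate-limit error; fail fast on anything else.
for attempt := 1; attempt <= 3; attempt++ {
    response, err := provider.Completion(ctx, params)
    if err == nil {
        fmt.Println(response.Choices[0].Message.Content)
        break
    }
    if !errors.Is(err, llmsdk.ErrRateLimit) {
        log.Fatal(err) // not a rate limit; no point retrying
    }
    time.Sleep(time.Duration(attempt) * time.Second) // simple linear backoff
}
```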
You can also use `errors.As` to extract provider-specific details:

```go
var rateLimitErr *llmsdk.RateLimitError
if errors.As(err, &rateLimitErr) {
fmt.Printf("Rate limited by %s: %s\n", rateLimitErr.Provider, rateLimitErr.Message)
}
```

## Models

Each provider uses its own model identifiers. To find available models:
- Check the provider's documentation
- Use the `ListModels` API (if the provider supports it):
```go
provider, _ := openai.New()
models, err := provider.ListModels(ctx)
if err != nil {
    log.Fatal(err)
}

for _, model := range models.Data {
fmt.Println(model.ID)
}
```

## Supported Providers

| Provider | Completion | Streaming | Tools | Reasoning | Embeddings |
|---|---|---|---|---|---|
| Anthropic | ✅ | ✅ | ✅ | ✅ | ❌ |
| DeepSeek | ✅ | ✅ | ✅ | ✅ | ❌ |
| Gemini | ✅ | ✅ | ✅ | ✅ | ✅ |
| Groq | ✅ | ✅ | ✅ | ❌ | ❌ |
| llama.cpp | ✅ | ✅ | ✅ | ❌ | ✅ |
| Llamafile | ✅ | ✅ | ✅ | ❌ | ✅ |
| Mistral | ✅ | ✅ | ✅ | ✅ | ✅ |
| Ollama | ✅ | ✅ | ✅ | ✅ | ✅ |
| OpenAI | ✅ | ✅ | ✅ | ✅ | ✅ |
| z.ai | ✅ | ✅ | ✅ | ✅ | ❌ |
More providers coming soon! See docs/providers.md for the full list.
## Documentation

- Quickstart Guide - Get up and running quickly
- Supported Providers - List of all supported LLM providers
- API Reference - Complete API documentation
- Examples - Code examples for common use cases
## Contributing

We welcome contributions from developers of all skill levels! Please see our Contributing Guide or open an issue to discuss changes.
## License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.