Onboard: add Ollama auth flow and improve model defaults #41529

Merged
steipete merged 2 commits into openclaw:main from BruceMacD:brucemacd/ollama-onboarding on Mar 11, 2026

Conversation

@BruceMacD
Contributor

Summary

  • Problem: Ollama has no onboarding path. Users must configure authentication through Ollama externally.
  • Why it matters: Using Ollama cloud models requires auth, but this is not obvious from the onboarding flow.
  • What changed: Added Ollama as a first-class onboarding provider with Cloud+Local / Local-only mode, browser-based Cloud sign-in via /api/me, smart model suggestions per mode, auto-pull of missing models, and graceful fallback. Extracted a shared ollama-models.ts module to avoid duplicating the existing model-selection logic for Ollama.
  • What did NOT change (scope boundary): No changes to existing provider flows, model picker logic, gateway configuration, or Ollama streaming runtime behavior. models-config.providers.discovery.ts was only refactored to import shared constants from the new module.
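The per-mode suggestion behavior described above can be sketched roughly as follows. This is an illustrative sketch only: `suggestModels`, `isCloudModel`, and the mode strings are assumptions, not the actual exports of `ollama-models.ts`.

```typescript
// Illustrative sketch only: names and shapes here are assumptions,
// not the real exports of ollama-models.ts.
type OllamaMode = "cloud+local" | "local";

// Ollama cloud models carry a ":cloud" tag suffix.
function isCloudModel(model: string): boolean {
  return model.endsWith(":cloud");
}

// Local-only mode hides cloud models; Cloud+Local mode surfaces cloud
// models first so the suggested defaults lead the picker.
function suggestModels(mode: OllamaMode, available: string[]): string[] {
  if (mode === "local") {
    return available.filter((m) => !isCloudModel(m));
  }
  return [...available].sort(
    (a, b) => Number(isCloudModel(b)) - Number(isCloudModel(a)),
  );
}
```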

Change Type (select all)

  • Bug fix
  • Feature
  • Refactor
  • Docs
  • Security hardening
  • Chore/infra

Scope (select all touched areas)

  • Gateway / orchestration
  • Skills / tool execution
  • Auth / tokens
  • Memory / storage
  • Integrations
  • API / contracts
  • UI / DX
  • CI/CD / infra

Linked Issue/PR

User-visible / Behavior Changes

  • New "Ollama (Cloud and local open models)" option in the onboarding provider picker:
◆  Model/auth provider
│  ○ OpenAI
│  ○ Anthropic
│  ○ Chutes
│  ○ vLLM
│  ● Ollama (Cloud and local open models)
│  ○ MiniMax
│  ○ Moonshot AI (Kimi K2.5)
  • Ollama base URL prompt and Cloud+Local / Local mode selection:
◆  Ollama base URL
│  http://127.0.0.1:11434
│
◆  Ollama mode
│  ● Cloud + Local (Ollama cloud models + local models)
│  ○ Local
  • Cloud mode triggers browser sign-in when not authenticated:
◇  Ollama Cloud ──────────────────────────────────╮
│                                                  │
│  Sign in to Ollama Cloud:                        │
│  https://ollama.com/connect?name=...&key=...     │
│                                                  │
├──────────────────────────────────────────────────╯
│
◇  Have you signed in?
│  Yes
  • Model picker shows cloud + local models with suggested defaults first:
◆  Default model
│  ● Keep current (default: ollama/kimi-k2.5:cloud)
│  ○ Enter model manually
│  ○ ollama/glm-4.7-flash:latest
│  ○ ollama/glm-5:cloud
│  ○ ollama/kimi-k2.5:cloud
│  ○ ollama/minimax-m2.5:cloud
│  ○ ollama/qwen3.5:35b
  • Auto-pulls selected model if not available locally
  • Non-interactive mode (--auth-choice ollama) supported for CI/automation
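The auto-pull decision in the list above can be sketched as a small predicate. The helper name and the ":latest" normalization are assumptions for illustration, not the PR's actual implementation.

```typescript
// Hypothetical helper: decide whether onboarding should auto-pull the
// selected model. Assumes local tags come from GET /api/tags and that an
// untagged name defaults to ":latest", as the Ollama CLI does.
function needsPull(selected: string, localTags: string[]): boolean {
  const normalize = (m: string) => (m.includes(":") ? m : `${m}:latest`);
  return !localTags.map(normalize).includes(normalize(selected));
}
```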

Security Impact (required)

  • New permissions/capabilities? No
  • Secrets/tokens handling changed? No. A placeholder ollama-local credential is stored in auth-profiles for the Ollama provider (the same pattern as other local providers); actual auth happens in the external Ollama server.
  • New/changed network calls? Yes — fetches /api/tags, /api/me, and /api/pull from the user-configured Ollama base URL during onboarding
  • Command/tool execution surface changed? No
  • Data access scope changed? No
  • All network calls are to the user's own Ollama instance (localhost by default) or Ollama Cloud (ollama.com) when the user explicitly selects Cloud+Local mode and authenticates via browser. No new external services contacted without user action.

Repro + Verification

Environment

  • OS: macOS
  • Runtime/container: Node 22+
  • Model/provider: Ollama (local + cloud)
  • Integration/channel (if any): N/A
  • Relevant config (redacted): Default onboarding config

Steps

  1. Run openclaw onboard
  2. Select "Ollama" as provider
  3. Accept default base URL, select Cloud+Local or Local mode
  4. Complete sign-in if Cloud+Local, select a model

Expected

  • Ollama provider configured in config.yaml with discovered models
  • Selected model auto-pulled if not available locally
  • Auth profile stored

Actual

  • All of the above works as expected

Evidence

  • Failing test/log before + passing after
  • Trace/log snippets
  • Screenshot/recording (terminal walkthrough above)
  • Perf numbers (if relevant)

New tests: ollama-setup.test.ts (13 tests), auth-choice.apply.ollama.test.ts (2 tests), auth-choice-options.test.ts (5 tests)

Human Verification (required)

  • Verified scenarios: Full interactive onboarding with Ollama Cloud+Local mode, browser sign-in, model selection
  • Edge cases checked: Ollama not reachable, cloud auth rejection, model not available locally

Compatibility / Migration

  • Backward compatible? Yes
  • Config/env changes? No
  • Migration needed? No

Failure Recovery (if this breaks)

  • How to disable/revert this change quickly: Revert commit; Ollama option disappears from onboarding, existing configs unaffected
  • Files/config to restore: None, no existing config format changes

Risks and Mitigations

  • Risk: Ollama Cloud /api/me endpoint behavior could change
    • Mitigation: On failure/unreachable, checkOllamaCloudAuth returns signedIn: true (assumes no cloud auth needed), so onboarding degrades gracefully
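The degradation path described in this mitigation could look roughly like the sketch below. The real checkOllamaCloudAuth lives in the PR; this shape, and the injectable fetch used to keep the probe testable without a live server, are assumptions.

```typescript
// Sketch of the graceful-degradation behavior described above: a
// reachable /api/me decides signed-in state; an unreachable endpoint is
// treated as "no cloud auth needed". Shape is an assumption, not the
// PR's actual implementation.
async function checkOllamaCloudAuth(
  baseUrl: string,
  fetchFn: typeof fetch = fetch,
): Promise<{ signedIn: boolean }> {
  try {
    const res = await fetchFn(new URL("/api/me", baseUrl).toString());
    return { signedIn: res.ok };
  } catch {
    // Endpoint unreachable or /api/me unsupported: assume no cloud auth
    // is needed so onboarding degrades gracefully.
    return { signedIn: true };
  }
}
```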

@openclaw-barnacle bot added the commands (Command implementations), agents (Agent runtime and tooling), and size: XL labels on Mar 9, 2026
@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 3ae2525b75

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

@greptile-apps
Contributor

greptile-apps bot commented Mar 9, 2026

Greptile Summary

This PR adds Ollama as a first-class onboarding provider, introducing interactive and non-interactive setup flows with Cloud+Local and Local modes, browser-based sign-in via /api/me, auto-pull of missing models, and smart model suggestions. It also extracts a new shared ollama-models.ts module, eliminating duplication of constants and utilities.

Finding: Auth credential should be stored after interactive prompts complete. Currently storeOllamaCredential() is called at line 312, before mode selection and cloud sign-in flows. If the user cancels at either prompt, an orphaned ollama:default auth profile remains persisted. Recommend deferring this call until after all user interactions succeed.

Confidence Score: 4/5

  • Safe to merge with one minor fix: defer auth credential storage until after interactive prompts complete.
  • The PR implements a well-structured Ollama onboarding flow with solid test coverage. The shared module extraction eliminates duplication cleanly. One low-risk finding: a credential is stored before the user completes all interactive steps, which could leave an orphaned auth profile if they cancel. This is easily fixed by moving one function call, and does not impact functionality. No security or backward compatibility concerns.
  • src/commands/ollama-setup.ts (defer credential storage until after all prompts complete)

Last reviewed commit: 3ae2525
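The ordering fix Greptile recommends can be sketched as follows. Every name here (runOllamaSetup, the prompt shape, storeOllamaCredential's signature) is hypothetical; the point is only that the credential is persisted after the last prompt succeeds, so a cancel leaves nothing behind.

```typescript
// Hypothetical flow illustrating the suggested ordering: the credential
// is persisted only after every interactive prompt succeeds, so a
// cancel leaves no orphaned ollama:default profile behind.
type Mode = "cloud+local" | "local";

async function runOllamaSetup(
  prompts: {
    pickMode: () => Promise<Mode | null>; // null = user cancelled
    confirmSignIn: () => Promise<boolean>;
  },
  storeOllamaCredential: () => void,
): Promise<boolean> {
  const mode = await prompts.pickMode();
  if (mode === null) return false; // cancelled: store nothing
  if (mode === "cloud+local" && !(await prompts.confirmSignIn())) {
    return false; // sign-in aborted: store nothing
  }
  storeOllamaCredential(); // all prompts done: safe to persist
  return true;
}
```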

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 7c285db220

@BruceMacD force-pushed the brucemacd/ollama-onboarding branch from 7c285db to a627efc on March 10, 2026 at 17:48
@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: a627efc4ec

@BruceMacD force-pushed the brucemacd/ollama-onboarding branch from a627efc to 10a39c7 on March 10, 2026 at 18:17
@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 10a39c722e

@BruceMacD force-pushed the brucemacd/ollama-onboarding branch from 10a39c7 to 9034b4d on March 10, 2026 at 19:53
@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9034b4db77

@BruceMacD force-pushed the brucemacd/ollama-onboarding branch from 9034b4d to fa744d1 on March 10, 2026 at 21:45
@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: fa744d12bc

@jmorganca
Contributor

Hi OpenClaw maintainer team, the Ollama team and I have worked together on this PR to improve the built-in Ollama provider functionality. We're very open to feedback and can make adjustments as needed. We'd love to help OpenClaw users more easily connect to (increasingly capable) local and larger hosted open models (Ollama's cloud models don't store prompts or outputs, and data is never used for training).

@steipete
Contributor

  1. Medium: src/wizard/onboarding.ts:475 unconditionally calls ensureOllamaModelPulled(), and src/commands/ollama-setup.ts:485 treats anything missing from /api/tags as needing /api/pull. That is wrong for cloud models chosen in "Cloud + Local" mode. Ollama's current docs say local API requests for cloud models are auto-authenticated after sign-in, and cloud models are available immediately without pulling. In this PR, selecting kimi-k2.5:cloud or glm-5:cloud will always go down the pull path because those models will not appear in local /api/tags, so a valid cloud-only setup can fail or at least block on an unnecessary download step. Sources: https://docs.ollama.com/api/authentication, https://docs.ollama.com/api/anthropic-compatibility
  2. Medium: src/commands/ollama-setup.ts:19 hardcodes the cloud suggestions as ["kimi-k2.5:cloud", "glm-5:cloud", "qwen3.5:35b"]. That omits minimax-m2.5:cloud entirely and mixes a local model into the cloud recommendation list, so the PR's own example picker is not actually producible from this implementation. It also diverges from Ollama's current OpenClaw integration page, which recommends minimax-m2.5:cloud for cloud and glm-4.7-flash for local. Source: https://docs.ollama.com/integrations/openclaw
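A minimal sketch of the fix implied by point 1, skipping the pull path for ":cloud" models entirely. The helper name is hypothetical; this is not the PR's code.

```typescript
// Hypothetical fix sketch for point 1: never route ":cloud" models
// through /api/pull, since cloud models are served remotely after
// sign-in and never appear in local /api/tags.
function shouldPullModel(selected: string, localTags: string[]): boolean {
  if (selected.endsWith(":cloud")) return false; // cloud model: no local pull
  return !localTags.includes(selected);          // local model: pull if missing
}
```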

BruceMacD and others added 2 commits March 11, 2026 14:50
Add Ollama as an auth provider in onboarding with Cloud + Local mode
selection, browser-based sign-in via /api/me, smart model suggestions
per mode, and graceful fallback when the default model is unavailable.

- Extract shared ollama-models.ts
- Auto-pull missing models during onboarding
- Non-interactive mode support for CI/automation

Closes openclaw#8239
Closes openclaw#3494

Co-Authored-By: Jeffrey Morgan <[email protected]>
@steipete force-pushed the brucemacd/ollama-onboarding branch from fa744d1 to 3ed1cf1 on March 11, 2026 at 14:52
@steipete steipete merged commit 1435fce into openclaw:main Mar 11, 2026
10 checks passed
@steipete
Contributor

Landed via temp rebase onto main.

  • Gate: PATH="/opt/homebrew/opt/node@22/bin:$PATH" pnpm exec oxfmt --check src/commands/ollama-setup.ts src/commands/ollama-setup.test.ts CHANGELOG.md; PATH="/opt/homebrew/opt/node@22/bin:$PATH" pnpm test src/commands/ollama-setup.test.ts src/commands/auth-choice-options.test.ts src/commands/auth-choice.apply.ollama.test.ts src/commands/auth-choice.test.ts
  • Local full gate note: pnpm build and src/plugins/loader.test.ts still fail on this machine due to pre-existing @mariozechner/pi-ai/oauth resolution / wrapper-signature issues unrelated to this PR (#41529).
  • Land commit: 3ed1cf1
  • Merge commit: 1435fce

Thanks @BruceMacD!

@chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 3ed1cf1e46

Comment on lines +76 to +77
if (!response.ok) {
  return { reachable: true, models: [] };
}


P2: Treat non-2xx Ollama tag responses as unreachable

fetchOllamaModels currently reports reachable: true for any non-OK /api/tags response, which lets onboarding proceed as if the Ollama endpoint is healthy even when it is returning 401/404/500. In the new non-interactive Ollama flow, that can produce a "successful" config (notably when a :cloud model is requested and no pull is attempted) and defer the real failure until first model invocation. Returning an unreachable/error state for non-2xx responses would fail fast and prevent persisting broken Ollama defaults.
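The suggested fail-fast behavior could look roughly like the sketch below. The /api/tags response shape and the injectable fetch are assumptions for illustration.

```typescript
// Sketch of the suggested fix: non-2xx /api/tags responses are reported
// as unreachable instead of "reachable with zero models", so onboarding
// fails fast rather than persisting broken Ollama defaults. The
// response shape assumed here is { models: [{ name: string }] }.
async function fetchOllamaModels(
  baseUrl: string,
  fetchFn: typeof fetch = fetch,
): Promise<{ reachable: boolean; models: string[] }> {
  try {
    const res = await fetchFn(new URL("/api/tags", baseUrl).toString());
    if (!res.ok) {
      return { reachable: false, models: [] }; // 401/404/500: fail fast
    }
    const body = (await res.json()) as { models?: { name: string }[] };
    return { reachable: true, models: (body.models ?? []).map((m) => m.name) };
  } catch {
    return { reachable: false, models: [] }; // network error: unreachable
  }
}
```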


mrosmarin added a commit to mrosmarin/openclaw that referenced this pull request Mar 11, 2026
* main:
  fix(agents): check billing errors before context overflow heuristics (openclaw#40409)
  fix(config): add missing editMessage and createForumTopic to Telegram actions schema (openclaw#35498)
  fix(signal): add missing accountUuid to Zod config schema (openclaw#35578)
  fix(voice-call): add speed and instructions to OpenAI TTS config schema (openclaw#39226)
  fix(telegram): clear stale retain before transient final fallback (openclaw#41763)
  Fix env proxy bootstrap for model traffic (openclaw#43248)
  fix: tighten Ollama onboarding cloud handling (openclaw#41529) (thanks @BruceMacD)
  Onboard: add Ollama auth flow and improve model defaults
dhoman pushed a commit to dhoman/chrono-claw that referenced this pull request Mar 11, 2026

Labels

  • agents: Agent runtime and tooling
  • commands: Command implementations
  • size: XL

Projects

None yet

Development

Successfully merging this pull request may close these issues.

  • Feature request: Add Ollama as a local model provider in QuickStart
  • [Feature]: Configure Ollama from onboard

3 participants