English documentation. For Korean documentation, please see README.ko.md.
zai-deep-research is a generic Agent Skills-compatible deep research skill whose hard requirements are:
- z.ai Coding Plan access
- four configured z.ai MCP servers:
- `web-search-zai`
- `web-reader-zai`
- `vision-zai`
- `zread`
The skill itself is not tied to one AI coding product. It is intended to work across Agent Skills-compatible clients, while the bundled Python launcher provides backend adapters for codex, claude, opencode, and gemini.
Without z.ai Coding Plan access and those four MCP servers, this repository is not useful in practice.
| Client | Skill package | `scripts/run.py` launcher | Notes |
|---|---|---|---|
| codex | Supported | Supported | One supported backend, not the identity of the skill |
| claude | Supported | Supported | Launcher uses non-interactive print mode |
| opencode | Supported | Supported | Launcher uses `opencode run` |
| gemini | Supported | Supported | Launcher uses headless prompt mode |
Runtime behavior can differ slightly by client because each CLI exposes different non-interactive and MCP interfaces. The external contract stays the same: validate prerequisites, gather evidence iteratively, and produce a final Markdown report.
The skill coordinates four prompt templates under agents/:
- `planner` refines the request, decides whether clarification is necessary, and selects the MCPs that matter.
- `researcher` gathers evidence through the configured z.ai MCP servers.
- `summarizer` turns each research pass into a concise iteration summary and proposes the next queries.
- `synthesizer` writes the final Markdown report.
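The staged loop can be sketched in plain Python. This is a hypothetical illustration of the control flow only, not the actual implementation in `scripts/run.py`; the stage callables and the field names `clarification_questions`, `queries`, and `next_queries` are assumptions made for the sketch.

```python
# Hypothetical sketch of the four-stage research loop. The real logic lives in
# scripts/run.py and the prompt templates under agents/; all function and field
# names here are illustrative assumptions.

def run_research(request, stages, max_iterations=3):
    """stages: dict with 'planner', 'researcher', 'summarizer', 'synthesizer' callables."""
    plan = stages["planner"](request)
    if plan.get("clarification_questions"):
        # Mirrors the launcher's clarification_required outcome.
        return {"status": "clarification_required",
                "questions": plan["clarification_questions"]}

    summaries, queries = [], plan["queries"]
    for _ in range(max_iterations):
        evidence = stages["researcher"](queries)        # z.ai MCP calls happen here
        summary = stages["summarizer"](evidence)        # condense one research pass
        summaries.append(summary["text"])
        queries = summary.get("next_queries") or []
        if not queries:                                 # no open questions remain
            break
    report = stages["synthesizer"](request, summaries)  # final Markdown report
    return {"status": "success", "report": report}
```

The real launcher layers preflight validation, state persistence, and per-stage timeouts on top of this basic shape.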
The optional launcher lives in `zai-deep-research/scripts/run.py`. It:
- auto-detects or accepts an explicit client backend
- validates that the required MCP names are configured in that client
- runs the four stages iteratively
- stores runtime state under `./.zai-deep-research` by default
- writes the final report under `./research/` by default
Please configure the four z.ai MCP servers in your client first. The names must match exactly unless you override them in `config.json`:
| Required name | z.ai service |
|---|---|
| `vision-zai` | Vision MCP Server |
| `web-search-zai` | Web Search MCP Server |
| `web-reader-zai` | Web Content Reading |
| `zread` | Zread MCP Server |
Each client has its own MCP configuration format. What matters for this skill is the server name and the client’s ability to expose MCP tools at runtime.
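As one illustration, a Claude Code-style `.mcp.json` entry could look like the fragment below. The `command` and package names are invented placeholders, not real z.ai endpoints; consult the z.ai documentation for the actual server commands or URLs. Only the top-level server names must match the table above.

```json
{
  "mcpServers": {
    "web-search-zai": {
      "command": "npx",
      "args": ["-y", "placeholder-zai-web-search-mcp"]
    },
    "web-reader-zai": {
      "command": "npx",
      "args": ["-y", "placeholder-zai-web-reader-mcp"]
    }
  }
}
```

Other clients (codex, opencode, gemini) use their own file formats, but the same rule applies: the configured server name is what the launcher checks.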
By default, the installer now asks target-by-target whether to install to:
- the shared Agent Skills path (`.agents`)
- Codex
- OpenCode
- Gemini
- Claude Code

`.agents` is not a separate AI product. In this repository it means the shared Agent Skills location used as a portable install target across compatible clients.
Example:
```sh
sh zai-deep-research/scripts/install.sh --source-dir ./zai-deep-research --scope user
```

Use `--layout` when you want a single non-interactive install target:

- `shared`: `~/.agents/skills` or `./.agents/skills`
- `codex`: `~/.codex/skills`
- `opencode`: `~/.config/opencode/skills`
- `gemini`: `~/.gemini/skills`
- `claude`: `~/.claude/skills`
If you already cloned this repository:
```sh
sh zai-deep-research/scripts/install.sh --source-dir ./zai-deep-research --layout shared --scope user
sh zai-deep-research/scripts/install.sh --source-dir ./zai-deep-research --layout codex
sh zai-deep-research/scripts/install.sh --source-dir ./zai-deep-research --layout opencode
sh zai-deep-research/scripts/install.sh --source-dir ./zai-deep-research --layout claude
sh zai-deep-research/scripts/install.sh --source-dir ./zai-deep-research --scope project --dry-run
```

If you want a `curl | sh` flow:
```sh
curl -fsSL https://raw.githubusercontent.com/studiojin-dev/zai-deep-research-skill/main/zai-deep-research/scripts/install.sh | sh -s -- --scope user
curl -fsSL https://raw.githubusercontent.com/studiojin-dev/zai-deep-research-skill/main/zai-deep-research/scripts/install.sh | sh -s -- --scope project
```

Use `--dry-run` whenever you want to confirm the resolved source and destination before copying files.
Use `--force` only when you intentionally want to replace an existing installation.
Native client layouts are user-scope only. `--scope project` is supported only for `--layout shared`.
If you installed into a shared path, run commands from the installed skill directory instead of the repository checkout:
- user install: `~/.agents/skills/zai-deep-research`
- workspace install: `./.agents/skills/zai-deep-research`
The examples below still use repository-relative paths so they also work before installation or when testing from a clone.
Always validate before first use:
```sh
python zai-deep-research/scripts/run.py --validate --client codex
python zai-deep-research/scripts/run.py --validate --client claude
python zai-deep-research/scripts/run.py --validate --client opencode
python zai-deep-research/scripts/run.py --validate --client gemini
python zai-deep-research/scripts/run.py --validate --client codex --json
```

If `--client auto` is ambiguous because multiple supported CLIs are installed, rerun with an explicit backend.
`codex mcp list` can include both local-command and remote-URL tables. The launcher parses both correctly and prints the detected MCP names in text mode, which makes false negatives easier to diagnose.
Copy the example config if you need to change storage paths, MCP names, or the default launcher backend:
```sh
cp zai-deep-research/assets/config.example.json zai-deep-research/config.json
```

If you are already running from an installed shared path, use the installed paths instead:

```sh
cp ~/.agents/skills/zai-deep-research/assets/config.example.json ~/.agents/skills/zai-deep-research/config.json
python ~/.agents/skills/zai-deep-research/scripts/run.py --validate --client codex
```

Important config fields:
- `runtime.client`: default launcher backend (`auto`, `codex`, `claude`, `opencode`, `gemini`)
- `memory_db_path`: SQLite database for iteration summaries, reports, artifacts, and lexical memory indexes
- `data_dir`: base directory for runtime state
Relative storage paths resolve from the current working directory.
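A minimal `config.json` using the documented fields might look like the fragment below. The exact nesting is an assumption made for illustration; copy `assets/config.example.json` rather than writing the file from scratch.

```json
{
  "runtime": { "client": "codex" },
  "memory_db_path": "./.zai-deep-research/memory.sqlite",
  "data_dir": "./.zai-deep-research"
}
```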
```sh
python zai-deep-research/scripts/run.py "Compare the latest open-source browser automation MCP servers" --client codex
python zai-deep-research/scripts/run.py "Assess the risks of vendor lock-in for model gateways" --client claude --output-dir ./research
python zai-deep-research/scripts/run.py "Analyze pricing changes" --client opencode --config ./zai-deep-research/config.json
python zai-deep-research/scripts/run.py "Review the latest changes in model gateway pricing" --client gemini --max-iterations 3
python zai-deep-research/scripts/run.py "Compare the latest open-source browser automation MCP servers" --client codex --json
```

If a backend is especially slow in your environment, increase the per-stage timeout with:

```sh
ZAI_DEEP_RESEARCH_COMMAND_TIMEOUT_SECONDS=600 python zai-deep-research/scripts/run.py "Compare the latest open-source browser automation MCP servers" --client codex
```

When the selected backend is codex, the launcher forces sub-runs to use `reasoning_effort="medium"` so the skill does not inherit an excessively slow global `xhigh` setting.
If the launcher detects broken remote MCP transports during a codex preflight probe, it automatically excludes those MCPs for the current run instead of waiting for the main researcher turn to hang. In text mode the launcher prints "Configured MCPs", "Active MCPs for this run", and "Disabled MCPs for this run" before research starts. In JSON mode the same state is returned as `configured_mcp_names`, `active_mcp_names`, and `disabled_mcp_names`.
During execution, the text launcher also prints step status lines for preflight, planner, researcher, summarizer, synthesizer, and finalize. Long-running codex steps emit periodic heartbeat lines so you can distinguish "still working" from "stalled".
Use `--json` when you need a stable interface for automation, eval harnesses, or wrapper scripts. The payload is opt-in so existing text-mode workflows remain unchanged.

- `--validate --json` returns validation status, configured MCP names, missing MCPs, lexical memory availability, and duration.
- Normal `--json` runs return a `success`, `clarification_required`, or `error` status plus client, session id, report path, iteration count, clarification questions, duration, and best-effort token usage.
- Successful or clarification JSON results also include `configured_mcp_names`, `active_mcp_names`, and `disabled_mcp_names`, so wrappers can see the preflight MCP state without parsing text.
- Runtime JSON results also include `step_events`, `run_summary`, and `final_decision`, so callers can tell whether the run completed normally, completed with skipped iterations, or aborted.
- When clarification is required, the launcher exits with code `2`, leaves `report_path` empty, and returns the blocking questions in `clarification_questions`.
- `token_usage` may be `null` when the selected backend does not expose stable usage metadata.
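A wrapper can branch on the exit code and the documented `status` field. This sketch assumes only the payload fields described above; the helper name itself is hypothetical.

```python
# Hypothetical helper mapping a launcher run's exit code plus parsed --json
# payload to a wrapper-level decision. The field names (status, report_path,
# clarification_questions) come from the documented JSON contract.

def interpret_run(exit_code, payload):
    status = payload.get("status")
    if exit_code == 2 or status == "clarification_required":
        # Exit code 2 means the planner is blocked on questions.
        return ("ask_user", payload.get("clarification_questions", []))
    if status == "success":
        return ("read_report", payload.get("report_path"))
    return ("fail", status)
```

A caller would run the launcher with `subprocess.run(..., capture_output=True)`, parse stdout with `json.loads`, and pass both values to this helper.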
If SQLite FTS5 is unavailable in the active Python runtime, validation fails because lexical memory depends on it.
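You can probe the FTS5 requirement directly in the same Python runtime before running the launcher; this quick check is independent of the skill itself:

```python
import sqlite3

def fts5_available() -> bool:
    """Return True if the active Python's SQLite build includes the FTS5 module."""
    try:
        with sqlite3.connect(":memory:") as conn:
            # Creating an FTS5 virtual table fails if the module is absent.
            conn.execute("CREATE VIRTUAL TABLE probe USING fts5(content)")
        return True
    except sqlite3.OperationalError:
        return False

print(fts5_available())
```

If this prints `False`, install a Python build whose SQLite includes FTS5 before retrying validation.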
During the compatibility grace period, the launcher accepts legacy vector config keys and exposes a deprecated validation alias for wrappers that have not migrated yet.
| Old surface | New canonical surface | Current behavior | Removal |
|---|---|---|---|
| `vector_memory_available` | `lexical_memory_available` | Both appear in validation JSON; `vector_memory_available` is deprecated | Next major release |
| `storage.vector_index_path` | none | Accepted and ignored with a validation warning | Next major release |
| `storage.vector_metadata_path` | none | Accepted and ignored with a validation warning | Next major release |
Wrapper guidance:
- Read `lexical_memory_available` going forward.
- Delete `vector_index_path` and `vector_metadata_path` from saved configs; only `memory_db_path` is used now.
- Treat `vector_memory_available` as a temporary alias only.
- All `--validate --json` responses keep the same validation payload shape, even when config loading or backend selection fails before full validation.
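Migrating a saved config can be done mechanically. The `storage.*` nesting below is taken from the deprecation table; everything else about the file layout is an assumption for illustration.

```python
# Strip the deprecated vector keys (storage.vector_index_path and
# storage.vector_metadata_path) from a saved config dict, leaving all
# other fields untouched. Purely a sketch of the migration guidance.

def migrate_config(config: dict) -> dict:
    migrated = dict(config)
    storage = dict(migrated.get("storage", {}))
    for key in ("vector_index_path", "vector_metadata_path"):
        storage.pop(key, None)
    if storage:
        migrated["storage"] = storage
    elif "storage" in migrated:
        # Drop the section entirely if only deprecated keys remained.
        del migrated["storage"]
    return migrated
```

Run this once over each saved `config.json` and rewrite the file; subsequent validations then stop emitting the deprecation warnings.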
The repository now ships a codex-first, web-centric eval suite under `zai-deep-research/evals/evals.json`.
- Snapshot the current skill before making changes:

  ```sh
  python zai-deep-research/scripts/eval.py snapshot --dest ./.zai-deep-research-evals/skill-snapshot
  ```

- Run the full current-vs-old comparison:

  ```sh
  python zai-deep-research/scripts/eval.py run --client codex --baseline-skill ./.zai-deep-research-evals/skill-snapshot
  ```

By default, artifacts are written under `./.zai-deep-research-evals/iteration-N/`. Override the root with `--workspace` if you want a different location.
Each eval stores:
- `outputs/` for generated markdown reports
- `result.json` for raw launcher output and stderr
- `timing.json` for duration and best-effort token counts
- `grading.json` for automated assertion results
Each iteration also writes:
- `benchmark.json` for aggregated pass-rate, runtime, and token summaries
- `feedback.json` as a human-review stub for comments that automation cannot grade
`benchmark.json` keeps token statistics as `null` when the backend does not expose token usage.
Read `zai-deep-research/references/EVALS.md` for the full workflow and benchmark interpretation guidance.
Lexical memory now uses SQLite FTS5 inside the same `memory.sqlite` database. No FAISS or embedding packages are required.
If you want to verify that your Python runtime supports it, rerun:
```sh
python zai-deep-research/scripts/run.py --validate --client codex
```

By default, runtime data is stored under `./.zai-deep-research` in the current working directory.
For example, if you run the launcher from `~/realrepo`, the default storage path becomes:

```
~/realrepo/.zai-deep-research/memory.sqlite
```
The final Markdown report is written to `./research/` in the current working directory by default. If you prefer a different directory, pass `--output-dir`.
```
zai-deep-research/
├── SKILL.md
├── agents/
├── assets/
├── evals/
├── references/
└── scripts/
```
- `SKILL.md`: the portable skill contract
- `agents/`: prompt templates for the four research stages
- `evals/`: committed eval definitions used by `scripts/eval.py`
- `references/CONFIG.md`: config and backend selection details
- `references/EVALS.md`: benchmark workflow, workspace layout, and human-review guidance
- `references/CLIENTS.md`: client-specific launcher and troubleshooting notes
- `scripts/`: launcher, installer, eval harness, and runtime helpers