
zai-deep-research

This is the English documentation. For Korean, see README.ko.md.

Overview

zai-deep-research is a generic Agent Skills-compatible deep research skill whose hard requirements are:

  • z.ai Coding Plan access
  • four configured z.ai MCP servers:
    • web-search-zai
    • web-reader-zai
    • vision-zai
    • zread

The skill itself is not tied to one AI coding product. It is intended to work across Agent Skills-compatible clients, while the bundled Python launcher provides backend adapters for codex, claude, opencode, and gemini.

Without z.ai Coding Plan access and those four MCP servers, this repository is not useful in practice.

Support Matrix

| Client | Skill package | scripts/run.py launcher | Notes |
| --- | --- | --- | --- |
| codex | Supported | Supported | One supported backend, not the identity of the skill |
| claude | Supported | Supported | Launcher uses non-interactive print mode |
| opencode | Supported | Supported | Launcher uses opencode run |
| gemini | Supported | Supported | Launcher uses headless prompt mode |

Runtime behavior can differ slightly by client because each CLI exposes different non-interactive and MCP interfaces. The external contract stays the same: validate prerequisites, gather evidence iteratively, and produce a final Markdown report.

How It Works

The skill coordinates four prompt templates under agents/:

  • planner refines the request, decides whether clarification is necessary, and selects the MCPs that matter.
  • researcher gathers evidence through the configured z.ai MCP servers.
  • summarizer turns each research pass into a concise iteration summary and proposes the next queries.
  • synthesizer writes the final markdown report.

The optional launcher lives in zai-deep-research/scripts/run.py. It:

  • auto-detects or accepts an explicit client backend
  • validates that the required MCP names are configured in that client
  • runs the four stages iteratively
  • stores runtime state under ./.zai-deep-research by default
  • writes the final report under ./research/ by default
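The stage flow above can be sketched as follows. This is a hypothetical simplification: `run_stage` stands in for invoking the selected backend CLI with one of the `agents/` prompt templates, and the real launcher additionally does MCP preflight, state persistence, and heartbeat reporting.

```python
# Hypothetical sketch of the launcher's stage loop; `run_stage` stands in
# for invoking the selected backend CLI with one of the agents/ templates.
def run_research(run_stage, request, max_iterations=3):
    plan = run_stage("planner", request)
    if plan.get("clarification_questions"):
        return plan  # blocked: surface the questions instead of researching
    summaries = []
    for _ in range(max_iterations):
        evidence = run_stage("researcher", plan)
        summary = run_stage("summarizer", evidence)  # condense + next queries
        summaries.append(summary)
        if summary.get("done"):
            break
    return run_stage("synthesizer", summaries)  # final Markdown report
```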

Before You Install

Please configure the four z.ai MCP servers in your client first. The names must match exactly unless you override them in config.json:

| Required name | z.ai service |
| --- | --- |
| vision-zai | Vision MCP Server |
| web-search-zai | Web Search MCP Server |
| web-reader-zai | Web Content Reading |
| zread | Zread MCP Server |

Each client has its own MCP configuration format. What matters for this skill is the server name and the client’s ability to expose MCP tools at runtime.
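Whatever the client's configuration format, the check reduces to set membership over server names. A minimal sketch (the constant mirrors the table above; the real validation lives in scripts/run.py):

```python
# Sketch: check that a client's configured MCP server names cover the four
# required names (overridable via config.json if you rename them).
REQUIRED_MCPS = {"web-search-zai", "web-reader-zai", "vision-zai", "zread"}

def missing_mcps(configured_names):
    """Return required MCP names absent from the client configuration."""
    return sorted(REQUIRED_MCPS - set(configured_names))

print(missing_mcps(["web-search-zai", "zread"]))  # → ['vision-zai', 'web-reader-zai']
```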

Installation

Interactive installer

By default, the installer now asks target-by-target whether to install to:

  • shared Agent Skills path (.agents)
  • Codex
  • OpenCode
  • Gemini
  • Claude Code

.agents is not a separate AI product. In this repository it means the shared Agent Skills location used as a portable install target across compatible clients.

Example:

```sh
sh zai-deep-research/scripts/install.sh --source-dir ./zai-deep-research --scope user
```

Explicit single-target install

Use --layout when you want a single non-interactive install target:

  • shared: ~/.agents/skills or ./.agents/skills
  • codex: ~/.codex/skills
  • opencode: ~/.config/opencode/skills
  • gemini: ~/.gemini/skills
  • claude: ~/.claude/skills

If you already cloned this repository:

```sh
sh zai-deep-research/scripts/install.sh --source-dir ./zai-deep-research --layout shared --scope user
sh zai-deep-research/scripts/install.sh --source-dir ./zai-deep-research --layout codex
sh zai-deep-research/scripts/install.sh --source-dir ./zai-deep-research --layout opencode
sh zai-deep-research/scripts/install.sh --source-dir ./zai-deep-research --layout claude
sh zai-deep-research/scripts/install.sh --source-dir ./zai-deep-research --scope project --dry-run
```

If you want a curl | sh flow:

```sh
curl -fsSL https://raw.githubusercontent.com/studiojin-dev/zai-deep-research-skill/main/zai-deep-research/scripts/install.sh | sh -s -- --scope user
curl -fsSL https://raw.githubusercontent.com/studiojin-dev/zai-deep-research-skill/main/zai-deep-research/scripts/install.sh | sh -s -- --scope project
```

Use --dry-run whenever you want to confirm the resolved source and destination before copying files. Use --force only when you intentionally want to replace an existing installation.

Native client layouts are user-scope only. --scope project is supported only for --layout shared.

After Installation

If you installed into a shared path, run commands from the installed skill directory instead of the repository checkout:

  • user install: ~/.agents/skills/zai-deep-research
  • workspace install: ./.agents/skills/zai-deep-research

The examples below still use repository-relative paths so they also work before installation or when testing from a clone.

Validate the selected client

Always validate before first use:

```sh
python zai-deep-research/scripts/run.py --validate --client codex
python zai-deep-research/scripts/run.py --validate --client claude
python zai-deep-research/scripts/run.py --validate --client opencode
python zai-deep-research/scripts/run.py --validate --client gemini
python zai-deep-research/scripts/run.py --validate --client codex --json
```

If --client auto is ambiguous because multiple supported CLIs are installed, rerun with an explicit backend.
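Auto-detection amounts to looking for exactly one supported CLI; the sketch below shows the assumed behavior (the launcher's actual detection logic may differ in detail):

```python
# Assumed sketch of --client auto: accept exactly one supported CLI on
# PATH; zero or several is ambiguous and needs an explicit --client.
import shutil

SUPPORTED = ("codex", "claude", "opencode", "gemini")

def detect_client(which=shutil.which):
    found = [name for name in SUPPORTED if which(name)]
    if len(found) != 1:
        raise SystemExit(f"ambiguous backends {found}; pass --client explicitly")
    return found[0]
```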

The output of codex mcp list can include both local-command and remote-URL tables. The launcher now parses both correctly and prints the detected MCP names in text mode, which makes false negatives easier to diagnose.

Configure storage or default client

Copy the example config if you need to change storage paths, MCP names, or the default launcher backend:

```sh
cp zai-deep-research/assets/config.example.json zai-deep-research/config.json
```

If you are already running from an installed shared path, use the installed paths instead:

```sh
cp ~/.agents/skills/zai-deep-research/assets/config.example.json ~/.agents/skills/zai-deep-research/config.json
python ~/.agents/skills/zai-deep-research/scripts/run.py --validate --client codex
```

Important config fields:

  • runtime.client: default launcher backend (auto, codex, claude, opencode, gemini)
  • memory_db_path: SQLite database for iteration summaries, reports, artifacts, and lexical memory indexes
  • data_dir: base directory for runtime state

Relative storage paths resolve from the current working directory.
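In other words, resolution follows the working directory you launch from, not the skill's install location. A sketch of that rule:

```python
# Relative storage paths (data_dir, memory_db_path) resolve from the
# current working directory, not the skill directory.
from pathlib import Path

def resolve_storage(path_str):
    path = Path(path_str).expanduser()
    return path if path.is_absolute() else Path.cwd() / path
```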

Run the launcher

```sh
python zai-deep-research/scripts/run.py "Compare the latest open-source browser automation MCP servers" --client codex
python zai-deep-research/scripts/run.py "Assess the risks of vendor lock-in for model gateways" --client claude --output-dir ./research
python zai-deep-research/scripts/run.py "Analyze pricing changes" --client opencode --config ./zai-deep-research/config.json
python zai-deep-research/scripts/run.py "Review the latest changes in model gateway pricing" --client gemini --max-iterations 3
python zai-deep-research/scripts/run.py "Compare the latest open-source browser automation MCP servers" --client codex --json
```

If a backend is especially slow in your environment, increase the per-stage timeout with:

```sh
ZAI_DEEP_RESEARCH_COMMAND_TIMEOUT_SECONDS=600 python zai-deep-research/scripts/run.py "Compare the latest open-source browser automation MCP servers" --client codex
```
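Parsing of the variable presumably looks like the sketch below; note that the 300-second fallback here is an illustrative assumption, not the launcher's documented default.

```python
# Sketch: read the per-stage timeout override from the environment. The
# 300-second fallback is an illustrative assumption, not a documented default.
import os

def command_timeout(env=os.environ, fallback=300.0):
    raw = env.get("ZAI_DEEP_RESEARCH_COMMAND_TIMEOUT_SECONDS")
    return float(raw) if raw else fallback
```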

When the selected backend is codex, the launcher now forces sub-runs to use reasoning_effort="medium" so the skill does not inherit an excessively slow global xhigh setting.

If the launcher detects broken remote MCP transports during a codex preflight probe, it automatically excludes those MCPs for the current run instead of waiting for the main researcher turn to hang. In text mode the launcher now prints Configured MCPs, Active MCPs for this run, and Disabled MCPs for this run before research starts. In JSON mode the same state is returned as configured_mcp_names, active_mcp_names, and disabled_mcp_names.

During execution, the text launcher also prints step status lines for preflight, planner, researcher, summarizer, synthesizer, and finalize. Long-running codex steps emit periodic heartbeat lines so you can distinguish "still working" from "stalled".

Machine-readable launcher output

Use --json when you need a stable interface for automation, eval harnesses, or wrapper scripts. The payload is opt-in so existing text-mode workflows remain unchanged.

  • --validate --json returns validation status, configured MCP names, missing MCPs, lexical memory availability, and duration.
  • normal --json runs return success, clarification_required, or error status plus client, session id, report path, iteration count, clarification questions, duration, and best-effort token usage.
  • successful or clarification JSON results also include configured_mcp_names, active_mcp_names, and disabled_mcp_names so wrappers can see the preflight MCP state without parsing text.
  • runtime JSON results also include step_events, run_summary, and final_decision so callers can tell whether the run completed normally, completed with skipped iterations, or aborted.
  • when clarification is required, the launcher exits with code 2, leaves report_path empty, and returns the blocking questions in clarification_questions.
  • token_usage may be null when the selected backend does not expose stable usage metadata.
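A wrapper can branch on these fields without parsing text. The minimal sketch below uses only the statuses, fields, and exit code documented above; `classify_result` itself is a hypothetical helper name:

```python
# Sketch: interpret a launcher --json result. Exit code 2 signals a
# clarification stop; "success" carries a report path; anything else errors.
def classify_result(returncode, payload):
    if returncode == 2:
        return ("ask_user", payload.get("clarification_questions", []))
    if payload.get("status") == "success":
        return ("report", payload.get("report_path"))
    return ("error", payload.get("status"))
```

Feed it json.loads(proc.stdout) and proc.returncode from a subprocess.run of the launcher.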

If SQLite FTS5 is unavailable in the active Python runtime, validation fails because lexical memory depends on it.

Compatibility migration

During the compatibility grace period, the launcher accepts legacy vector config keys and exposes a deprecated validation alias for wrappers that have not migrated yet.

| Old surface | New canonical surface | Current behavior | Removal |
| --- | --- | --- | --- |
| vector_memory_available | lexical_memory_available | Both appear in validation JSON; vector_memory_available is deprecated | Next major release |
| storage.vector_index_path | none | Accepted and ignored with a validation warning | Next major release |
| storage.vector_metadata_path | none | Accepted and ignored with a validation warning | Next major release |

Wrapper guidance:

  • Read lexical_memory_available going forward.
  • Delete vector_index_path and vector_metadata_path from saved configs; only memory_db_path is used now.
  • Treat vector_memory_available as a temporary alias only.
  • All --validate --json responses keep the same validation payload shape, even when config loading or backend selection fails before full validation.
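A wrapper can stay compatible with both payload shapes during the grace period; a sketch:

```python
# Sketch: prefer the canonical key, fall back to the deprecated alias so
# the wrapper keeps working across the migration grace period.
def lexical_memory_available(validation):
    if "lexical_memory_available" in validation:
        return validation["lexical_memory_available"]
    return validation.get("vector_memory_available", False)
```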

Local eval workflow

The repository now ships a codex-first, web-centric eval suite under zai-deep-research/evals/evals.json.

  1. Snapshot the current skill before making changes:

```sh
python zai-deep-research/scripts/eval.py snapshot --dest ./.zai-deep-research-evals/skill-snapshot
```

  2. Run the full current-vs-old comparison:

```sh
python zai-deep-research/scripts/eval.py run --client codex --baseline-skill ./.zai-deep-research-evals/skill-snapshot
```

By default, artifacts are written under ./.zai-deep-research-evals/iteration-N/. Override the root with --workspace if you want a different location.

Each eval stores:

  • outputs/ for generated markdown reports
  • result.json for raw launcher output and stderr
  • timing.json for duration and best-effort token counts
  • grading.json for automated assertion results

Each iteration also writes:

  • benchmark.json for aggregated pass-rate, runtime, and token summaries
  • feedback.json as a human-review stub for comments that automation cannot grade

benchmark.json keeps token statistics as null when the backend does not expose token usage.

Read zai-deep-research/references/EVALS.md for the full workflow and benchmark interpretation guidance.

Lexical memory

Lexical memory now uses SQLite FTS5 inside the same memory.sqlite database. No FAISS or embedding packages are required.
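You can probe FTS5 support directly before involving the launcher; a small self-contained check that mirrors what validation tests:

```python
# Probe whether the active Python's sqlite3 build includes FTS5, which
# lexical memory requires. Creating an FTS5 virtual table fails otherwise.
import sqlite3

def has_fts5():
    try:
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE VIRTUAL TABLE probe USING fts5(body)")
        conn.close()
        return True
    except sqlite3.OperationalError:
        return False
```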

If you want to verify that your Python runtime supports it, rerun:

```sh
python zai-deep-research/scripts/run.py --validate --client codex
```

Data Storage

By default, runtime data is stored under ./.zai-deep-research in the current working directory.

For example, if you run the launcher from ~/realrepo, the default memory database path becomes:

  • ~/realrepo/.zai-deep-research/memory.sqlite

The final Markdown report is written to ./research/ in the current working directory by default. If you prefer a different directory, pass --output-dir.

Repository Structure

```
zai-deep-research/
├── SKILL.md
├── agents/
├── assets/
├── evals/
├── references/
└── scripts/
```
  • SKILL.md: the portable skill contract
  • agents/: prompt templates for the four research stages
  • evals/: committed eval definitions used by scripts/eval.py
  • references/CONFIG.md: config and backend selection details
  • references/EVALS.md: benchmark workflow, workspace layout, and human-review guidance
  • references/CLIENTS.md: client-specific launcher and troubleshooting notes
  • scripts/: launcher, installer, eval harness, and runtime helpers

About

A deep research skill for agent CLIs using the z.ai Coding Plan.
