A Claude Code skill plugin for the Fred Hutchinson Cancer Center HPC cluster (Gizmo). It provides contextual, accurate guidance on job submission, storage, software modules, GPU computing, monitoring, and more, loaded on demand as you work.
Claude Code loads skills based on what you're doing. Ask about submitting a Slurm job, and it loads fh.slurm. Ask about GPU availability, and it loads fh.gpu and fh.monitoring. Each skill is a focused document covering one topic with commands, examples, pitfalls, and references.
34 skills (33 under the fh.* namespace, plus lab-specific conventions under setty.*) cover the full surface area of the Fred Hutch HPC:
| Skill | Description |
|---|---|
| `fh.access` | SSH, NoMachine, VPN, Open OnDemand, session persistence |
| `fh.alphafold` | AlphaFold 3 on the chorus GPU partition |
| `fh.aws-access` | AWS account access, SSO login, S3, Batch, cost management |
| `fh.cloud` | AWS cloud computing (Batch, WDL/PROOF, Nextflow, CloudShell) |
| `fh.cluster-overview` | Quick reference: partitions, node hardware, GPU inventory, key paths |
| `fh.containers` | Apptainer/Docker containers, digest pinning, multi-stage builds |
| `fh.credentials` | HutchNet ID, Slurm access, GitHub org membership, MFA |
| `fh.cromwell` | Cromwell/WDL workflow execution, Google Batch API, clinical genomics |
| `fh.data-management` | FAIR principles, NIH DMSP, data formats, versioning, metadata standards |
| `fh.databases` | MyDB (Postgres, MariaDB, MongoDB, Neo4j), REDCap, MS SQL |
| `fh.data-transfer` | Motuz, Globus, AWS CLI, Aspera, migration to Economy Cloud |
| `fh.github` | Fred Hutch GitHub org, security policies, version control |
| `fh.gpu` | GPU types (1080 Ti, 2080 Ti, L40S), CUDA, chorus partition |
| `fh.grants` | HPC descriptions and citations for grant applications |
| `fh.interactive-sessions` | grabnode, srun, resource flags, session management |
| `fh.linux-basics` | Essential shell commands for HPC users |
| `fh.modules` | Lmod modules: spider vs avail, hierarchies, when to use alternatives |
| `fh.monitoring` | Grafana/Prometheus queries, Slurm CLI monitoring, dashboards |
| `fh.nextflow` | Nextflow on Gizmo/AWS, profiles, nf-test, container pinning |
| `fh.onboarding` | New user checklist for Fred Hutch computational resources |
| `fh.parallel` | Job arrays, threading, MPI, reproducible parallel RNG |
| `fh.partitions` | Partition specs, decision guide, checkpointing, fair-share |
| `fh.python` | uv, Lmod modules, mamba fallback, Jupyter, dependency management |
| `fh.r` | R/RStudio, renv, Bioconductor, Jupyter R kernel |
| `fh.reproducibility` | Environment pinning, container digests, parallel RNG, agent code risks |
| `fh.slurm` | sbatch, sacct profiling, arrays, dependencies, backfill scheduling |
| `fh.storage` | Overview of all storage tiers (home, fast, scratch, economy) |
| `fh.storage-fast` | /fh/fast/ POSIX storage: paths, quotas, collaboration |
| `fh.storage-s3` | Economy/S3 storage: CLI, boto3, R, sharing, versioning |
| `fh.storage-scratch` | /hpc/temp/, local staging patterns, I/O anti-patterns |
| `fh.testing` | pytest, testthat, nf-test, snapshot testing, CI, practical priorities |
| `fh.vscode-remote` | VS Code remote on compute nodes, Lmod integration |
| `fh.workflows-overview` | Nextflow vs Snakemake vs WDL, portability stack, cloud bursting |
| `setty.plots` | Setty Lab plot aesthetics: matplotlib/seaborn/scanpy styling, Helvetica/Arial, Paired palette, Illustrator handoff, palantir/kompot plot references |
Skill content is distilled from:
Fred Hutch infrastructure:
- SciComp Wiki — the official Fred Hutch Scientific Computing documentation, covering access, storage, software, and large-scale computing
- SciComp Resource Library — 45+ tutorials and how-to guides
- SciComp Pathways — step-by-step workflows for common tasks
- Live cluster probing — partition specs, module versions, mount points, and environment variables verified directly on Gizmo
- Grafana — dashboard catalog and Prometheus query patterns for cluster monitoring
Setty Lab conventions:
- Setty Lab Wiki — lab-specific guidelines on compute resources, plot aesthetics, and research practices (source for the `setty.*` namespace)
HPC best practices:
- NERSC, TACC, Sigma2, Yale YCRC, Harvard FASRC — job scheduling, resource profiling, I/O patterns, fair-share
- Slurm documentation — fair tree algorithm, multifactor priority, job arrays
- LUMI Lmod tutorials — hierarchical module systems
Reproducibility and scientific rigor:
- Ziemann, Poulain, Bora (2023). "Five pillars of computational reproducibility." Briefings in Bioinformatics
- Vangala et al. (2026). "AI-Generated Code Is Not Reproducible (Yet)." arXiv
- Edmonds et al. (2022). "Software testing in microbial bioinformatics." Microbial Genomics
- Sandve et al. (2013). "Ten Simple Rules for Reproducible Computational Research." PLOS Comp Bio
Workflows and portability:
- "Empowering bioinformatics communities with Nextflow and nf-core." Genome Biology (2025)
- "nf-test: Improving pipeline reliability." GigaScience (2025)
- "Applying FAIR Principles to Computational Workflows." Nature Scientific Data (2025)
Data management:
- NIH Data Management and Sharing Policy — DMSP requirements
- FAIR Principles — Findable, Accessible, Interoperable, Reusable
- Apache Parquet, Zarr, TileDB-SOMA — data format recommendations
Where the wiki and the live cluster disagree, we trust the cluster. Deviations are documented in shared/reports/validation-agent.md.
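The live cluster probing mentioned above comes down to standard Slurm and Lmod queries. A sketch of the kind of checks involved, assuming a Slurm cluster with Lmod (partition and module names are illustrative):

```shell
# Partition limits and defaults as configured on the scheduler
scontrol show partition chorus

# Per-node CPU, memory, and GPU inventory (%G shows the GRES column)
sinfo -N -o "%N %c %m %G %P"

# Module versions actually installed; spider searches the full Lmod hierarchy
module spider Python
```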
Every skill upholds these values:
- Scientific accuracy — commands, paths, and configurations are verified against the live cluster. No fabrication.
- Reproducibility — skills encourage versioned environments (modules, containers, conda envs), explicit resource requests, and documented workflows.
- Fair resource usage — skills teach users to request only what they need, use appropriate partitions, and release resources when done. An idle grabnode session holds resources someone else could use.
- Cooperation — the cluster is shared infrastructure. Skills promote `--nice` for non-urgent work, checking cluster load before large submissions, and respecting SciComp policies.
- Security — skills never expose credentials, enforce proper access methods, and flag PHI/PII handling requirements.
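The cooperation habits above can be made concrete with standard Slurm commands; a sketch (the partition name and nice value are illustrative, not site policy):

```shell
# Check cluster load before a large submission
sinfo -o "%P %a %D %t"                 # partition, availability, node count, state
squeue -p campus -t PENDING | wc -l    # rough queue depth on one partition

# Deprioritize non-urgent work so interactive users schedule first
sbatch --nice=100 my_analysis.sh
```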
Cloning into ~/.claude/ ensures skills are accessible inside the agent_sandbox, which mounts ~/.claude as writable by default. No extra sandbox configuration needed.
```bash
git clone git@github.com:settylab/fh-hpc-skills.git ~/.claude/fh-hpc-skills

# Symlink into the Claude Code config directory (respects a custom CLAUDE_CONFIG_DIR)
SKILLS_DIR="${CLAUDE_CONFIG_DIR:-$HOME/.claude}/skills"
mkdir -p "$SKILLS_DIR"
for skill in ~/.claude/fh-hpc-skills/skills/fh.*/; do
  # -n (no-dereference) prevents nesting links inside existing symlinks on re-run
  ln -sfn "${skill%/}" "$SKILLS_DIR/$(basename "$skill")"
done
```

If you don't have SSH keys configured for GitHub, use HTTPS instead:

```bash
git clone https://github.com/settylab/fh-hpc-skills.git ~/.claude/fh-hpc-skills
```

To install a single skill, copy it instead of symlinking:

```bash
SKILLS_DIR="${CLAUDE_CONFIG_DIR:-$HOME/.claude}/skills"
mkdir -p "$SKILLS_DIR"
cp -r ~/.claude/fh-hpc-skills/skills/fh.slurm "$SKILLS_DIR/"
```

Verify that the skills are in place:

```bash
SKILLS_DIR="${CLAUDE_CONFIG_DIR:-$HOME/.claude}/skills"
ls "$SKILLS_DIR"/fh.*/SKILL.md
```

Skills become available immediately in your next Claude Code session. No restart is required.
Skills load automatically based on context. Just ask naturally:
> How do I submit a GPU job?
→ loads fh.slurm, fh.gpu, fh.partitions
> What storage should I use for intermediate files?
→ loads fh.storage, fh.storage-scratch
> How busy is the cluster right now?
→ loads fh.monitoring
> I'm new here, where do I start?
→ loads fh.onboarding
You can also invoke skills directly with slash commands if configured:
> /fh.slurm
> /fh.monitoring
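To illustrate the sort of concrete guidance a question like "How do I submit a GPU job?" pulls in, here is a hedged sketch of a GPU batch script; the partition comes from the skill table above, but the GRES syntax, module name, and resource values are illustrative and should be verified against the loaded skills:

```shell
#!/bin/bash
#SBATCH --job-name=gpu-demo
#SBATCH --partition=chorus       # GPU partition named in the skill table
#SBATCH --gpus=1                 # or a typed request, e.g. --gres=gpu:l40s:1 (illustrative)
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=02:00:00

module load CUDA                 # exact module name varies; check `module spider CUDA`
nvidia-smi                       # confirm the allocated GPU before real work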
```
skills/               # The deliverable: 34 Claude Code skills
docs/wiki-raw/        # Raw fetched wiki content
docs/wiki-distilled/  # Structured knowledge extracted from wiki + live cluster
shared/reports/       # Agent work reports and validation results
shared/lockfiles/     # Agent coordination (empty when complete)
templates/            # Skill and agent instruction templates
sources.yml           # Wiki URL manifest
```
To update a skill:
- Check the SciComp Wiki for the latest documentation
- Verify against the live cluster (paths, modules, partitions may change)
- Edit `skills/<name>/SKILL.md` directly
- Ensure the `description:` frontmatter is specific enough for accurate skill loading
- Run the validation agent to check for inconsistencies
To add a new skill:
- Create `skills/fh.<name>/SKILL.md` with frontmatter and a TRIGGER line
- Keep it focused on one topic — if it exceeds ~150 lines, consider splitting
- Cross-reference related skills rather than duplicating content
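A new skill file follows the shape described above; a hypothetical skeleton (the exact frontmatter fields beyond `description:` and the TRIGGER line format are assumptions — mirror an existing skill such as `skills/fh.slurm/SKILL.md` for the authoritative layout):

```markdown
---
name: fh.example
description: Specific, concrete summary so Claude Code loads this skill at the right moment
---

TRIGGER: questions about <topic>, e.g. "how do I ...?"

# fh.example

One focused topic: commands, examples, pitfalls, references.
Cross-reference related skills (fh.slurm, fh.partitions) instead of duplicating them.
```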
A few items could not be verified from the CLI and remain based on wiki documentation: the Open OnDemand URL and available apps, current SciComp Slack channel names, SMB/NFS desktop mount paths, and the AWS SSO browser flow.
Internal use at Fred Hutchinson Cancer Center.