State of Mind for BCI,
on your device.
60+ brain metrics computed on-device. GPU-accelerated neural embeddings. Vector similarity search. Automatic sleep staging. A full WebSocket API.
No cloud. No accounts. No telemetry.
Download NeuroSkill™
See it in action
Watch a walkthrough of Skill — from connecting the Muse headset to searching neural embeddings.
This demo shows the complete workflow: Bluetooth connection, live dashboard with 60+ metrics, embedding search, session comparison, Brain Nebula™, sleep staging, and the WebSocket API.
256 Hz
Sample rate
~4 Hz
Metric updates
5 s
Epoch window
32-D
Embeddings
M=16
HNSW index
60+
Metrics
9
API commands
5
Languages
From headset to insight
in four steps
Connect
Power on your Muse or OpenBCI device. Skill discovers it automatically — Muse 2/S over BLE (256 Hz, 4 channels), or any OpenBCI board via BLE, USB serial, WiFi Shield, or UDP (4–24 channels, 125–1000 Hz).
Observe
60+ metrics appear instantly — live waveforms with spectrograms, 5-band power bars, FAA gauge, brain scores, composite indices, PPG vitals, IMU head pose, blink & clench detection, consciousness metrics, and headache/migraine research correlates — all updated at ~4 Hz.
Record & Label
Raw EEG is saved to CSV continuously. Add free-text labels to annotate moments. Run calibration tasks to collect labeled training data. All stored locally in open formats.
Search & Compare
The ZUNA encoder creates 32-D embeddings every 5 seconds. Query the HNSW index to find similar brain states across days. Compare sessions side-by-side with Brain Nebula™.
First data in 60 seconds
Three steps from box to live EEG. No accounts, no configuration beyond Bluetooth.
Download & install
macOS .dmg or Linux .deb / .AppImage. Open, drag to Applications, launch. No account. No licence key. No internet after install.
Pair your Muse
Power on your headset (hold the button until you feel a vibration). The Setup Wizard scans for Bluetooth, connects automatically, and walks you through electrode fit with live signal-quality indicators on all four channels.
See your brain live
The moment signal quality turns green on all channels, the dashboard fills in — waveforms, spectrograms, band powers, and 60+ metrics updating at ~4 Hz. Your session is already being recorded to CSV.
What happens in your first session
- 0:00 App opens, Setup Wizard launches
- 0:30 Bluetooth paired, headset connected
- 1:00 Signal quality green on all 4 channels
- 1:30 First metrics appear on the dashboard
- 5:00 Run Calibration to establish your baseline
- 30:00 Sleep staging becomes available
Free · GPL-3.0 licence · No account required
Start exploring
your brain
Power on your Muse, launch Skill, and see 60+ metrics in seconds. No account. No setup beyond Bluetooth.
Muse 2 · Muse S · OpenBCI Ganglion / Cyton / Cyton+Daisy / Galea · macOS 12+ · Linux · Windows · GPL-3.0 licence
Everything runs locally
Built with Rust, Svelte 5, wgpu, and Tauri 2. No cloud dependencies. No internet required after install.
GPU Signal Processing
Overlap-save filtering via wgpu compute shaders. Configurable notch (50/60 Hz + harmonics), high-pass, and low-pass filters. 512-sample Hann-windowed FFT at ~4 Hz update rate. ~125 ms total pipeline latency from electrode to screen.
ZUNA Neural Embeddings
A transformer encoder runs on your GPU via wgpu, converting every 5-second EEG epoch into a 32-dimensional embedding vector. Model weights loaded from local HuggingFace cache. Zero external API calls — everything runs locally.
Vector Similarity Search
HNSW index (M=16, ef=200, cosine distance) enables approximate nearest-neighbour search across your entire recording history. One daily index file. Query thousands of embeddings in milliseconds to find moments when your brain was in the same state.
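As a rough reference for what the index answers, a brute-force cosine search over 32-D vectors looks like this. The HNSW index returns (approximately) the same neighbours, but in sub-linear rather than linear time; the data and function names below are illustrative, not the app's API:

```python
import numpy as np

# Exact cosine nearest-neighbour over toy 32-D embeddings. An HNSW index
# approximates this result without scanning every vector.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 32))          # one vector per 5-second epoch
unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)

def nearest(query, k=5):
    """Return indices and cosine distances of the k closest stored vectors."""
    q = query / np.linalg.norm(query)
    dist = 1.0 - unit @ q                         # cosine distance = 1 - cosine similarity
    idx = np.argsort(dist)[:k]
    return idx, dist[idx]

idx, dist = nearest(embeddings[42])
assert idx[0] == 42 and dist[0] < 1e-9            # a vector is nearest to itself
```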
Automatic Sleep Staging
Sessions ≥ 30 minutes are classified into Wake / N1 / N2 / N3 / REM using delta/theta/alpha/beta power ratios per AASM guidelines. Staircase hypnograms with per-stage duration, percentages, and side-by-side comparison.
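To give a flavour of how band-power ratios map to stages, here is a deliberately simplified rule-of-thumb in Python. The thresholds are illustrative assumptions, not the app's classifier, and REM is omitted because this toy version uses band powers alone:

```python
# Toy stage guesser from relative band powers (fractions summing to ~1).
# Threshold values are made up for demonstration purposes only.
def guess_stage(delta, theta, alpha, beta):
    if delta > 0.5:
        return "N3"        # slow-wave sleep: delta-dominant EEG
    if beta > 0.3 or alpha > 0.35:
        return "Wake"      # alert (beta) or relaxed-wakeful (alpha) EEG
    if theta > 0.35:
        return "N1"        # light sleep: theta-dominant EEG
    return "N2"            # default mid-sleep stage in this toy model

assert guess_stage(0.6, 0.2, 0.1, 0.1) == "N3"
assert guess_stage(0.2, 0.2, 0.2, 0.4) == "Wake"
```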
Brain Nebula™
Project embeddings into interactive 3D space with Three.js. Similar brain states cluster together, different states separate. Click labelled points to trace temporal connections. Colour by date, session, or label.
Session Comparison
Pick any two sessions: side-by-side band powers with deltas, all 60+ metrics, FAA, sleep hypnograms, time-series charts (bands, scores, PPG, artifacts, pose), and Brain Nebula™ distribution overlay.
WebSocket API
Local JSON WebSocket server starts on every launch. 3 outbound events (eeg-bands at ~4 Hz, muse-status at ~1 Hz, label-created) and 9 inbound commands. Discoverable via mDNS. Connect from Python, Node.js, Unity, or any language.
On-Device LLM Chat
A built-in chat window powered by a local language model running entirely on your GPU — no internet required. Download any GGUF model (Qwen, Llama, Mistral…) from within Settings → LLM and start chatting. Reasoning models show their full thinking chain. The server exposes an OpenAI-compatible REST API on the same port as the WebSocket, so any OpenAI-compatible client can connect. Vision models can also analyse images — attach a screenshot or diagram and the model will describe, translate, or reason about it. Requires the llm Cargo feature.
EEG-Triggered Do Not Disturb
Automatically activates macOS Do Not Disturb (or any Focus mode — Work, Sleep, Driving…) when your EEG engagement score stays above a configurable threshold for a sustained period. Set the focus threshold (0–100, derived from EEG β/(α+θ)), the minimum sustained duration before DND activates (30 s – 5 min), the exit delay before DND clears once focus drops, and a lookback window to prevent brief dips from toggling DND off. An exit notification fires when focus mode deactivates automatically. Status shows live: current engagement, system DND state, and whether the timer is counting. Requires macOS 12+.
Command Palette & Shortcuts
Press ⌘K (Ctrl+K) anywhere to open a fuzzy-searchable command palette. Global keyboard shortcuts (⌘⇧O, ⌘⇧M) work even when the app is in the background. All windows accessible from tray menu, palette, or shortcut.
Multilingual (5 Languages)
Full UI translated into English, Deutsch, Français, Українська, and עברית — including all metric descriptions, help text, tooltips, and error messages. RTL layout support for Hebrew.
Accessible Design
6 colour themes including 3 colorblind-safe palettes (deuteranopia, protanopia, tritanopia). Configurable font sizes. High-contrast mode. All interactive elements have keyboard navigation and screen reader labels.
100% Local & Private
No cloud, no accounts, no telemetry, no analytics. All data stored locally in CSV, SQLite, and HNSW formats. The only optional network request is a manual update check (Ed25519 signature verified).
Auto-Updating & Open Formats
Built-in Tauri updater with Ed25519 signature verification. All data in standard open formats — CSV for raw EEG, SQLite for embeddings and metrics, HNSW binary for vector indices. Export, copy, or analyse with any tool.
How Skill compares
An honest side-by-side with three tools you may already know.
Muse Direct streams raw OSC data from Muse headsets. BrainFlow is a hardware-agnostic library for 40+ boards. OpenBCI GUI is built around OpenBCI hardware. NeuroSkill™ now supports the full OpenBCI board family alongside Muse — embeddings, search, sleep staging, and session comparison, all in one app.
| Feature | NeuroSkill™ | Muse Direct | BrainFlow | OpenBCI GUI |
|---|---|---|---|---|
| Analysis & Insights | | | | |
| Real-time metrics | 60+ — spectral, complexity, connectivity, PPG/HRV, artifacts | ~8 — band powers, blink, jaw, eye movement | Band powers via SDK; no built-in dashboard | ~15 — band powers, FFT, BIS index |
| GPU signal processing | wgpu compute shaders, ~125 ms end-to-end | — | — | — |
| Neural embeddings | ZUNA encoder, 32-D, on-device GPU | — | — | — |
| Vector similarity search | HNSW index, cosine distance, cross-session | — | — | — |
| Automatic sleep staging | Wake / N1 / N2 / N3 / REM per AASM | — | — | — |
| Session comparison (A/B) | Metrics, sleep hypnograms, Brain Nebula™ overlay | — | — | — |
| Integration | | | | |
| Developer API | JSON WebSocket, mDNS, 9 commands + CLI | OSC streaming | Native library — Python, C++, Java, C#, R… | LSL / OSC / serial streaming |
| Data export | CSV · SQLite · HNSW binary | CSV · OSC stream | Raw samples to file or stream | CSV · BDF / EDF |
| Supported hardware | Muse 2 · Muse S · OpenBCI Ganglion · Cyton · Cyton+Daisy · Galea | All Muse models | 40+ boards (Muse, OpenBCI, Emotiv, Neurosity…) | OpenBCI Cyton, Ganglion (+ BrainFlow boards) |
| User Interface | | | | |
| Auto device connect & reconnect | BLE scan at launch, silent retry with countdown timer | Auto-pairs to last-known device | — Library — connection is caller's responsibility | Manual board selection required each launch |
| Zero-config first session | Open app → pair → live data. No config files. | | — Requires code; no standalone app | Driver/port setup often required |
| Smart session auto-range | Compare/sleep/search auto-select sessions; rerun: line for reproducibility | — | — | — |
| Interactive 3D data visualisation | Brain Nebula™ point cloud, 3D electrode head, animated hypnogram | — 2D waveforms and bars only | — No built-in visualisation | 2D time-series and FFT plots |
| Inline metric explanations | Every metric shows formula, range, and DOI-linked reference | — | — | Basic labels; no formula or citation |
| Platform & Values | | | | |
| Desktop app | macOS · Linux · Windows (Tauri 2) | macOS & iOS only | — Library — bring your own UI | macOS · Linux · Windows (Java) |
| No cloud · fully local | No accounts, no telemetry, no internet | |||
| Open source | GPL-3.0 | — | MIT | MIT |
| Price | Free | Free | Free | Free |
Competitor information is based on public documentation as of early 2026. This is our view — verify independently for current accuracy. Every tool has different strengths; choose what fits your workflow.
Interface design
Autonomous. Versatile. Human.
Most EEG tools make you configure everything. NeuroSkill™ acts like a knowledgeable assistant — making sensible decisions on your behalf while staying completely transparent about what it's doing and why.
Autonomous
NeuroSkill™ acts on your behalf without asking. The headset re-pairs silently after dropout. Compare, Search, and Sleep auto-select the most relevant sessions. The CLI discovers the app over mDNS — no IP address, no port memorisation.
- BLE auto-reconnect with visible countdown
- Smart defaults in every time-range command
- mDNS + lsof fallback discovery
- Rerun: line printed for reproducibility
Versatile
Eight undockable panels, a WebSocket API, and a full CLI — use exactly as much or as little as your workflow needs. A meditator just looks at one gauge. A researcher pipes JSON to Python. A developer builds a custom dashboard on top. Same app, completely different experience.
- 8 undockable UI panels (resize, reorder, pin)
- WebSocket API for any language or tool
- TypeScript CLI for shell scripting
- Brain Nebula™ · hypnogram · live waveforms · gauges
Human
You should never need a manual to use your own brain data. Every metric shows its formula, typical range, and a link to the source paper. Signal quality is colour-coded before you start. Scores are 0–100 with plain-English labels, not raw µV² values.
- Inline formula + DOI-linked reference per metric
- Per-channel signal quality before recording starts
- 0–100 composite scores with plain-English labels
- No configuration files ever required
Know your data is clean
Real-time per-channel signal quality monitoring. See exactly which electrodes have good skin contact, which are noisy, and when to adjust your headset.
Continuous Monitoring
Signal quality is recomputed every epoch (2.5 s) from rolling RMS windows on the raw EEG. You always know the state of each electrode — no guessing.
4-Channel Breakdown
Individual quality indicators for TP9, AF7, AF8, and TP10. Quickly identify which electrode needs repositioning instead of seeing a single overall score.
RMS-Based Detection
Quality is derived from the root-mean-square amplitude of each channel. Abnormally high RMS flags movement artifact or poor contact; abnormally low flags a disconnected sensor.
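A toy version of that check fits in a few lines. The amplitude thresholds here are illustrative assumptions, not the app's calibrated values:

```python
import numpy as np

# Classify one channel's epoch by RMS amplitude in microvolts.
# The plausible clean-EEG range below is an assumption for illustration.
GOOD_UV = (2.0, 80.0)

def channel_quality(samples_uv):
    rms = float(np.sqrt(np.mean(np.square(samples_uv))))
    if rms < GOOD_UV[0]:
        return "disconnected"   # near-flat line: sensor not reading
    if rms > GOOD_UV[1]:
        return "poor"           # movement artifact or bad skin contact
    return "good"

assert channel_quality(np.full(640, 0.1)) == "disconnected"
assert channel_quality(20 * np.sin(np.linspace(0, 50, 640))) == "good"
```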
Automatic Artifact Flagging
Epochs with poor signal are automatically flagged. Downstream metrics, embeddings, and sleep staging can exclude flagged epochs for cleaner analysis.
Quality over API
Per-channel quality status is included in the WebSocket status command response. Build external dashboards or trigger alerts when signal degrades.
Quality History Chart
A dedicated signal quality chart shows how electrode contact evolves over the session. Spot recurring drops from headband slippage or movement.
60+ metrics every epoch
Spectral, temporal, connectivity, complexity, cardiovascular, and behavioral — computed every 2.5–5 seconds, stored with each embedding, and streamed over WebSocket.
Band Powers
Spectral power in the five canonical EEG frequency bands, computed via 512-sample Hann-windowed FFT (Welch method) at ~4 Hz. Reported as both absolute (µV²/Hz) and relative (fraction of total) power.
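The same computation can be sketched with SciPy. The band edges below are common conventions; the app's exact cut-offs may differ:

```python
import numpy as np
from scipy.signal import welch

FS = 256                      # Muse sample rate (Hz)
BANDS = {                     # common EEG band edges (Hz); conventions vary
    "delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
    "beta": (13, 30), "gamma": (30, 50),
}

def band_powers(x):
    """Absolute and relative band power from a 512-sample Hann-windowed Welch PSD."""
    f, psd = welch(x, fs=FS, window="hann", nperseg=512)
    absolute = {}
    for name, (lo, hi) in BANDS.items():
        mask = (f >= lo) & (f < hi)
        absolute[name] = psd[mask].sum()   # summed PSD bins, proportional to power
    total = sum(absolute.values())
    relative = {name: p / total for name, p in absolute.items()}
    return absolute, relative

# 5 s of a 10 Hz test tone plus a little noise: alpha should dominate.
rng = np.random.default_rng(1)
t = np.arange(5 * FS) / FS
_, rel = band_powers(np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size))
assert max(rel, key=rel.get) == "alpha"
```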
Every metric includes an interactive tooltip with its scientific explanation inside the app. All are backed by peer-reviewed references — see the Science References section.
Understanding the sensors
The Muse headband uses 4 dry EEG electrodes at international 10-10 system positions, plus a PPG sensor and 9-axis IMU.
Electrodes
Muse Headset Electrodes
The Muse 2 / Muse S uses 4 EEG channels from the 10-10 system. Click any electrode on the 3D head to explore.
Brain Regions
Additional Sensors
PPG Sensor
Forehead (centre, between AF7/AF8)
Photoplethysmography via infrared and red LEDs. Measures heart rate, HRV (RMSSD, SDNN, pNN50), LF/HF ratio, respiratory rate, SpO₂ estimate, perfusion index, and Baevsky Stress Index.
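For instance, RMSSD, one of the HRV statistics listed above, is simply the root mean square of successive differences between RR intervals. The intervals below are made-up numbers for illustration:

```python
import numpy as np

# RMSSD: root mean square of successive RR-interval differences, in ms.
rr_ms = np.array([812, 798, 825, 840, 810, 795])   # illustrative RR intervals
rmssd = float(np.sqrt(np.mean(np.diff(rr_ms) ** 2)))
assert 21 < rmssd < 22                             # ~21.3 ms for these intervals
```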
9-Axis IMU
Inside the headband pod
Accelerometer + gyroscope + magnetometer. Provides head pitch, roll, stillness index, nod/shake detection. Used for movement artifact flagging.
DRL / REF
Centre forehead
Driven Right Leg and Reference electrodes provide the common reference voltage and active noise cancellation for the four EEG channels.
Placement Tips
- TP9 & TP10: Tuck behind each ear on the mastoid bone. Press firmly for good contact. Brush away any hair covering the sensors.
- AF7 & AF8: Rest on the forehead, just above the eyebrows. Sweep hair aside. Should sit flat against clean, dry skin.
- Signal quality: If a sensor shows 'Poor' or 'No Signal', adjust the headband position slightly. Moisten sensor pads with water or saline for best conductivity.
- Live feedback: The Setup Wizard and Help → Electrodes tab show real-time per-channel signal quality so you can adjust fit before recording.
Dashboard, LLM Chat, Auto-DND — and 6 more
Each opened from the tray menu, command palette (⌘K), or global keyboard shortcuts.
Dashboard
Real-time waveforms, spectrograms, 5-band power bars, FAA gauge, 20+ EEG indices, composite scores, PPG vitals, IMU pose, blink & clench detection, GPU utilisation, device status, and continuous CSV recording.
Settings
Four tabs — Device config and paired devices, Signal Processing (notch 50/60 Hz, bandpass, embedding overlap), Appearance (theme, fonts, 6 colour schemes with 3 colorblind-safe palettes), Shortcuts (global ⌘⇧O / ⌘⇧M configuration).

Search & Compare
Query the HNSW vector index by time range. Results ranked by cosine distance. Brain Nebula™ projects embeddings so similar brain states cluster together. Compare sessions side-by-side with sleep hypnograms.
Calibration
Guided alternating-action task (e.g. eyes open / closed) with configurable duration and timed breaks. Labels saved automatically. Events broadcast via WebSocket for external tool sync.
Label
Quick free-text annotation of the current EEG moment. Saved with exact timestamp to labels.sqlite. Submit with ⌘Enter. Labels appear in search results, Brain Nebula™, and session history.

Setup Wizard
Five-step onboarding: Welcome → Bluetooth scanning → Fit check with live per-channel quality indicators → Optional calibration → Done. Runs on first launch or manually.
LLM Chat
A local chat window backed by any GGUF-format language model running on your GPU — zero cloud, zero API keys. Open Settings → LLM to download a model (Qwen3.5, Llama, Mistral…), pick a quantisation (Q4_0 recommended for speed), and click Chat. Reasoning models display their full step-by-step thinking chain inside a collapsible Thought block before the final answer. The server also exposes an OpenAI-compatible HTTP API on the same port as the WebSocket (http://localhost:8375/v1/chat/completions) so any OpenAI SDK, shell script, or third-party app can connect without code changes. Vision models understand images too — paste or drag a screenshot, diagram, or photo into the chat input and the model will describe, translate, or reason about the visual content, powered by a small multi-modal projector (mmproj) downloaded separately. Conversation history is kept in the session log; start a new conversation with the + button.
Auto Do Not Disturb
Skill reads your live EEG engagement score (derived from the β/(α+θ) ratio, 0–100) and automatically activates macOS Focus mode — Do Not Disturb, Work, Sleep, Driving, or any custom mode you've set up — the moment your brain has been in a focused state long enough to mean it. Configure the focus threshold (default 60), the sustained duration before DND kicks in (30 s – 5 min), the exit delay before DND clears after focus drops (1–60 min), and a lookback window that prevents brief dips from toggling DND off prematurely. An optional exit notification fires when focus mode deactivates automatically. The bottom status bar shows live state: current engagement score, whether DND is active, and the system Focus mode name. All controlled from Settings → Goals. Requires macOS 12 (Monterey) or later.
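The trigger logic described above can be sketched as a small state machine. This is a minimal illustration under assumed parameter names, with the lookback window omitted; it is not the app's implementation:

```python
# Toy sustained-focus trigger: enable DND after `sustain_s` seconds above
# the threshold, clear it after `exit_delay_s` seconds below. Names and
# structure are assumptions for illustration only.
class FocusTrigger:
    def __init__(self, threshold=60, sustain_s=60, exit_delay_s=120):
        self.threshold = threshold
        self.sustain_s = sustain_s
        self.exit_delay_s = exit_delay_s
        self.above_since = None
        self.below_since = None
        self.dnd_on = False

    def update(self, t, engagement):
        """Feed (time in seconds, engagement 0-100); return current DND state."""
        if engagement >= self.threshold:
            self.below_since = None
            if self.above_since is None:
                self.above_since = t
            if not self.dnd_on and t - self.above_since >= self.sustain_s:
                self.dnd_on = True          # focus sustained long enough: enable DND
        else:
            self.above_since = None
            if self.below_since is None:
                self.below_since = t
            if self.dnd_on and t - self.below_since >= self.exit_delay_s:
                self.dnd_on = False         # focus lost long enough: clear DND
        return self.dnd_on

trig = FocusTrigger()
assert trig.update(0, 80) is False     # just crossed the threshold
assert trig.update(60, 80) is True     # sustained 60 s: DND on
assert trig.update(70, 30) is True     # brief dip: exit delay not yet elapsed
assert trig.update(200, 30) is False   # 130 s below: DND cleared
```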
API & CLI
A local WebSocket server and a TypeScript CLI — both auto-discovered via mDNS, zero cloud, zero config.
WebSocket API
A JSON WebSocket server starts on every launch. Discover via mDNS or connect directly.
3 Event Streams
eeg-bands (~4 Hz), muse-status, label-created — all 60+ metrics pushed in real time
9 Commands
status · label · search · sessions · compare · sleep · calibrate · umap · umap_poll
mDNS Discovery
Zero-config LAN discovery · advertised as _skill._tcp · no IP address needed
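In any language, talking to the server amounts to exchanging small JSON documents over the socket. Here is a sketch of plausible message shapes in Python; the field names are assumptions, so consult the full API docs for the real schema:

```python
import json

# Hypothetical shapes for one inbound command and one outbound event.
# Field names are illustrative assumptions, not the documented schema.
command = json.dumps({"command": "status"})          # sent to ws://localhost:8375
event = json.loads('{"type": "eeg-bands", "alpha": 0.42, "beta": 0.31}')

assert event["type"] == "eeg-bands"
assert json.loads(command) == {"command": "status"}
```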
dns-sd -B _skill._tcp → ws://localhost:8375
Command-line Interface
A TypeScript CLI wrapping the full WebSocket API. mDNS auto-discovery, --json mode, live progress bars.
9 Commands
Full coverage of the WebSocket API — search, sleep staging, Brain Nebula™, compare, and more
mDNS Auto-discovery
Finds any NeuroSkill™ instance on the LAN automatically — no IP config, no port flags
--json Mode
Machine-readable output for piping to jq, awk, or any script — pair with cron or CI
Backed by 61 papers
Every metric maps to peer-reviewed literature. This is the complete list — all 61 references from the app's Help → References tab.
Who is Skill for?
Researchers, students, meditators, athletes, BCI developers, clinicians, and anyone curious about their brain — across all ages.
Which devices work?
Skill supports Muse 2, Muse S, and the full OpenBCI board family — Ganglion, Cyton, Cyton+Daisy (all with optional WiFi Shield), and Galea. Use them standalone or side-by-side.
InteraXon Muse Headsets

Muse 2
MU-02 · Supported ✓
4-ch EEG (TP9, AF7, AF8, TP10) · PPG (HR, HRV, SpO₂) · 9-axis IMU. Full feature support including all 60+ metrics, embeddings, sleep staging, and API. Bluetooth LE · 256 Hz.

Muse S (Gen 1)
MU-03 · Supported ✓
Same sensors as Muse 2 in a fabric headband designed for sleep. Identical feature support. Recommended for overnight recordings due to comfort. Bluetooth LE · 256 Hz.

Muse S (Gen 2)
MU-04 · Supported ✓
Updated Muse S with improved PPG sensor. Same BLE protocol and feature support as Gen 1. Bluetooth LE · 256 Hz.
OpenBCI Boards
All OpenBCI boards connect via Settings → OpenBCI. Select your board, set the port or IP, and click Connect. Any board can run alongside a Muse headset simultaneously. The first 4 channels drive the real-time analysis pipeline; all channels are saved to CSV.

Ganglion
BLE · Supported ✓
4-ch EEG · Bluetooth LE · 200 Hz. Most portable OpenBCI board. Connects the same way as Muse — press Connect and NeuroSkill™ scans automatically.

Ganglion + WiFi Shield
WiFi · Supported ✓
4-ch EEG · WiFi Shield · 200 Hz. Replaces BLE with a 2.4 GHz Wi-Fi link. IP auto-discovered via mDNS or entered manually.

Cyton
USB · Supported ✓
8-ch EEG · USB radio dongle · 250 Hz. Channels 1–4 drive the real-time pipeline; all 8 saved to CSV. Auto-detects serial port.

Cyton + WiFi Shield
WiFi · Supported ✓
8-ch EEG · WiFi Shield · 1000 Hz. Highest sample-rate option. Set low-pass filter to ≤ 500 Hz in Signal Processing settings.

Cyton + Daisy
USB · Supported ✓
16-ch EEG · USB radio dongle · 125 Hz. Doubles Cyton channel count. Channels 1–4 drive real-time analysis; all 16 saved to CSV.

Cyton + Daisy + WiFi Shield
WiFi · Supported ✓
16-ch EEG · WiFi Shield · 125 Hz. Full 16-channel recording over Wi-Fi. IP auto-discovered or entered manually.

Galea
UDP · Supported ✓
24-ch biosignals (EEG + EMG + AUX) · UDP · 250 Hz. Research-grade headset. Channels 1–8 EEG → real-time analysis; 9–16 EMG; 17–24 AUX. All 24 saved to CSV.
Not Currently Supported

Muse (2014 original)
MU-01 · Different BLE protocol and electrode configuration. No PPG sensor. The original Muse uses a different data format that NeuroSkill™ does not implement.
Muse 2016
MU-01 rev2 · Transitional hardware with partially updated protocol. BLE advertising differs from Muse 2/S. Not tested.
Want support for another device?
We'd love to add support for more EEG hardware — Emotiv, Neurosity, g.tec, or others. If you have a specific device you'd like to see, tell us and we'll prioritise.
We'll notify you if/when support is added. Your email is only used for this purpose.
Where does it run?
Skill is built with cross-platform technologies (Rust, Tauri 2, wgpu) but has been primarily developed and tested on macOS.
macOS
Primary · Tested ✓
Fully tested on macOS 12 (Monterey) and later. Apple Silicon only (M1/M2/M3/M4). CoreBluetooth provides native Muse connectivity — no drivers needed. GPU acceleration via Metal through wgpu. This is the primary development and testing platform.
Requirements
macOS 12+, Apple Silicon (M1/M2/M3/M4), Bluetooth 4.0+, ~200 MB disk space
Linux
Experimental · Should work — not fully tested
NeuroSkill™ builds and runs on Linux distributions with BlueZ (the standard Linux Bluetooth stack). .deb and .AppImage packages are generated. BLE connectivity uses btleplug via BlueZ/D-Bus. GPU acceleration via Vulkan through wgpu. Community reports welcome — if you encounter issues, please file a bug report.
Requirements
BlueZ 5.50+, Vulkan-capable GPU, systemd (for .deb), glibc 2.31+
Windows
Supported ✓
Builds and runs on Windows 10/11 (x86-64 MSVC). NSIS installer available from the latest release. BLE uses the Windows Runtime Bluetooth API. GPU-accelerated LLM inference via Vulkan, with automatic CPU fallback. All core features — live EEG, band powers, ZUNA embeddings, similarity search, and the WebSocket API — work out of the box.
Requirements
Windows 10 1903+, Bluetooth 4.0+, Vulkan-capable GPU (optional), ~200 MB disk space
Note: macOS is the only platform where Bluetooth connectivity, GPU acceleration, signal processing, and all UI features have been systematically verified. Linux and Windows support relies on the cross-platform abstractions in Tauri 2, wgpu, and btleplug — they should work, but edge cases may exist. If you test on Linux or Windows, please share your experience.
Your brain, your data
Privacy by design. Not an afterthought. Not a toggle. The architecture makes data leakage structurally impossible.
No Cloud
All data stored locally in CSV, SQLite, and HNSW formats. Nothing uploaded anywhere, ever. Your EEG data never leaves your machine.
No Accounts
No sign-up, login, tokens, or user identifiers of any kind. No email collection. No user profiles.
No Telemetry
Zero analytics, crash reports, tracking pixels, or phone-home beacons. No third-party SDKs. No Firebase, Sentry, Mixpanel, or equivalents.
On-Device AI
ZUNA encoder, GPU filtering, FFT, HNSW search — all local CPU/GPU. Model weights cached locally. No OpenAI, no cloud ML, no external APIs.
Local-Only Network
WebSocket server is LAN-only. mDNS is local multicast. Raw EEG samples never broadcast — only derived metrics over the API.
Open Formats
CSV + SQLite + HNSW binary — read, copy, analyse, or delete your data with Python, R, MATLAB, or any standard tool.
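For example, loading an exported raw-EEG CSV takes one pandas call. The column names here are hypothetical; inspect your own export's header first:

```python
import io
import pandas as pd

# A tiny stand-in for an exported raw-EEG CSV. Column names are assumed
# for illustration (timestamp plus the four Muse channels).
csv = io.StringIO(
    "timestamp,TP9,AF7,AF8,TP10\n"
    "0.000,1.2,0.8,0.9,1.1\n"
    "0.004,1.3,0.7,1.0,1.0\n"
)
df = pd.read_csv(csv)
print(df.mean(numeric_only=True))   # per-channel mean amplitude
assert list(df.columns) == ["timestamp", "TP9", "AF7", "AF8", "TP10"]
```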
Updates Only
The only network activity is update checks — manual or via a configurable auto-update schedule. Ed25519 signature verified. You control when (or whether) it happens in Settings.
Full Control
Delete the app data folder and everything is gone. No residual cloud data, no account to close, no data retention policies to worry about.
Transparent about security
Skill is privacy-first but not a security product. Here's exactly what it does and doesn't do to protect your data — no marketing spin.
What's in place
Tauri Sandbox
The app runs in Tauri's security sandbox. The web frontend has no direct filesystem or network access — all privileged operations go through an explicit Rust command allowlist.
Ed25519 Update Signatures
Every update bundle is signed with Ed25519. The app verifies the signature before applying. If the signature doesn't match, the update is rejected. Supply-chain tampering is detected.
No Unsolicited Connections
Zero analytics, crash reports, or CDN assets. The only outbound connection is for update checks — either triggered manually or via the configurable auto-update schedule in Settings. All update bundles are Ed25519-verified before installation.
Open-Source & Auditable
All code is public. The Rust signal processing, the Tauri IPC layer, the Svelte frontend — anyone can inspect, audit, and verify what the app does.
No Auth Tokens or Secrets
No API keys, session tokens, JWTs, or credentials stored on disk. There is nothing for an attacker to exfiltrate that grants access to external services.
What's NOT in place
No Encryption at Rest
Your EEG data, embeddings, metrics, and labels are stored as plain CSV, SQLite, and HNSW binary files. They are not encrypted. Anyone with access to your filesystem can read them.
No App-Level Password
There is no PIN, password, or biometric lock to open Skill. If someone can log into your OS account, they can open the app and see all your data.
No Per-User Profiles
Skill doesn't have multi-user support. All sessions, labels, and settings are shared. If multiple people use the same OS account, they see each other's brain data.
Our recommendation
Use your operating system's built-in protections: enable FileVault (macOS) or LUKS (Linux) for full-disk encryption, set a strong login password, and lock your screen when stepping away. This protects all local data — including Skill's — without any app-level overhead.
Skill is a research tool, not a medical device or a security product. It is designed to keep data local and private by architecture, but it relies on your OS for access control and encryption. If you handle sensitive research data, apply your institution's standard data-protection practices.
Common questions
All 27 questions from the app's Help → FAQ tab.