Cross-platform disk usage and duplicate-file analyzer (Rust + Tauri + React + TypeScript).
- Core library (`cutest-disk-tree`): Rust library crate in `src/lib.rs` (plus `src/db/*`) that implements directory scanning, aggregation, and SQLite persistence. This is the shared core used by both the CLI and the Tauri app.
- CLI (`cutest-disk-tree` binary): Simple command-line entrypoint in `src/main.rs` that calls into the core library to index a single path. Built and run from the repo root with `cargo build` / `cargo run -- <path>`.
- Tauri desktop app (`cutest-disk-tree-tauri`): Tauri host crate in `src-tauri/` that depends on the core library (`cutest-disk-tree = { path = ".." }`) and exposes its functionality as Tauri commands. The UI is a React + TypeScript frontend (see the Tauri section below).
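The core's traversal responsibilities (walk a tree, skip symlinks, aggregate per-folder sizes) can be illustrated with a minimal std-only sketch. The function name `scan` and the `HashMap` of recursive folder sizes are illustrative, not the crate's actual API:

```rust
use std::collections::HashMap;
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

/// Recursively walk `dir`, returning the total size in bytes of the files
/// beneath it and recording each directory's recursive size in `sizes`.
/// Symlinks are skipped, mirroring the traversal rules described above.
fn scan(dir: &Path, sizes: &mut HashMap<PathBuf, u64>) -> io::Result<u64> {
    let mut total = 0u64;
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        let ft = entry.file_type()?; // does not follow symlinks
        if ft.is_symlink() {
            continue; // symlinks are ignored for traversal
        } else if ft.is_dir() {
            total += scan(&entry.path(), sizes)?;
        } else if ft.is_file() {
            total += entry.metadata()?.len();
        }
    }
    sizes.insert(dir.to_path_buf(), total);
    Ok(total)
}

fn main() -> io::Result<()> {
    let root = std::env::args().nth(1).unwrap_or_else(|| ".".into());
    let mut sizes = HashMap::new();
    let total = scan(Path::new(&root), &mut sizes)?;
    println!("{} folders, {} bytes total", sizes.len(), total);
    Ok(())
}
```

The real crate additionally deduplicates hard links and persists results to SQLite; this sketch only shows the walk-and-aggregate shape.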
First-time setup (generates app icons if missing):

```
npm install
node scripts/gen-icon.cjs
```

Run the app:

```
npm run tauri dev
```

- Choose folder to scan: Opens the native directory picker, then runs the indexer.
- Summary: Root path, file entry count, unique files (hard links deduped) and total size.
- Largest folders: Top 100 folders by recursive size.
- Largest files: Top 200 files by size.
- Duplicates: Placeholder (hashing not implemented yet).
- Check for updates: Uses `tauri-plugin-updater`; it fetches `latest.json` from this repo’s releases. For production builds use `./scripts/release.sh`, which sets the updater pubkey from `.tauri-public-key` (see Releasing).
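The "Largest folders" and "Largest files" views above are top-N-by-size selections. A minimal sketch of that pattern with a bounded min-heap; `top_n` is a hypothetical helper, not the app's code:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Keep only the `n` largest (size, name) entries while streaming over
/// arbitrarily many items, using a min-heap bounded at `n` elements.
fn top_n(items: impl IntoIterator<Item = (u64, String)>, n: usize) -> Vec<(u64, String)> {
    let mut heap: BinaryHeap<Reverse<(u64, String)>> = BinaryHeap::new();
    for item in items {
        heap.push(Reverse(item));
        if heap.len() > n {
            heap.pop(); // evict the current smallest entry
        }
    }
    let mut out: Vec<_> = heap.into_iter().map(|Reverse(x)| x).collect();
    out.sort_by(|a, b| b.0.cmp(&a.0)); // largest first
    out
}

fn main() {
    let files = vec![
        (120, "a.log".to_string()),
        (999, "big.iso".to_string()),
        (5, "tiny.txt".to_string()),
        (450, "video.mp4".to_string()),
    ];
    for (size, name) in top_n(files, 2) {
        println!("{size:>6}  {name}");
    }
}
```

The bounded heap keeps memory proportional to N (100 folders / 200 files) rather than to the number of scanned entries.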
Scan results are stored in SQLite in the app data directory (`index.db`). Each scan overwrites data for that root path; you can re-scan to refresh.
The Tauri host writes a `debug.log` file on startup. By default it lives next to `index.db` in the app data directory (see table below), but you can override the location with an environment variable loaded from `.env`:
- `CUTE_DISK_TREE_DEBUG_LOG_PATH`: absolute path to the `debug.log` file that the app should use.
Example `.env` in this repo (use forward slashes so backslashes are not treated as escapes; adjust path as needed):

```
CUTE_DISK_TREE_DEBUG_LOG_PATH=C:/Users/kamme/Desktop/repos/cute-tools/cutest-disk-tree/debug.log
```

On each app start, the file at `CUTE_DISK_TREE_DEBUG_LOG_PATH` (or the default path) is truncated and rewritten with a `=== starting cutest disk tree ===` header so each run gets a fresh log.
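That truncate-and-rewrite behavior can be sketched with a truncating open. `init_debug_log` is a hypothetical stand-in for what the Tauri host does on startup, assuming the env-override-then-default resolution described above:

```rust
use std::env;
use std::fs::OpenOptions;
use std::io::{self, Write};
use std::path::{Path, PathBuf};

/// Resolve the debug log path (env override first, then a default next to
/// the database) and truncate/rewrite it with the startup header.
fn init_debug_log(default_dir: &Path) -> io::Result<PathBuf> {
    let path = env::var("CUTE_DISK_TREE_DEBUG_LOG_PATH")
        .map(PathBuf::from)
        .unwrap_or_else(|_| default_dir.join("debug.log"));
    let mut f = OpenOptions::new()
        .create(true)
        .write(true)
        .truncate(true) // wipe the previous run's log
        .open(&path)?;
    writeln!(f, "=== starting cutest disk tree ===")?;
    Ok(path)
}

fn main() -> io::Result<()> {
    // For the demo, fall back to the system temp dir instead of the app data dir.
    let path = init_debug_log(&env::temp_dir())?;
    println!("debug log at {}", path.display());
    Ok(())
}
```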
Database location (app identifier `com.cutest.disk-tree`):
| OS | Path |
|---|---|
| Windows | `%APPDATA%\com.cutest.disk-tree\index.db` (e.g. `C:\Users\<You>\AppData\Roaming\com.cutest.disk-tree\index.db`) |
| macOS | `~/Library/Application Support/com.cutest.disk-tree/index.db` |
| Linux | `~/.local/share/com.cutest.disk-tree/index.db` (or `$XDG_DATA_HOME/com.cutest.disk-tree/index.db` if set) |
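For the Linux row, the XDG fallback logic looks roughly like this. Illustrative only: the app gets this path from Tauri's app-data-dir resolution, not hand-rolled code:

```rust
use std::env;
use std::path::PathBuf;

/// Resolve the Linux database path per the XDG Base Directory convention:
/// $XDG_DATA_HOME if set and non-empty, otherwise ~/.local/share.
/// Returns None if neither XDG_DATA_HOME nor HOME is available.
fn linux_db_path() -> Option<PathBuf> {
    let base = match env::var("XDG_DATA_HOME") {
        Ok(v) if !v.is_empty() => PathBuf::from(v),
        _ => PathBuf::from(env::var("HOME").ok()?).join(".local/share"),
    };
    Some(base.join("com.cutest.disk-tree").join("index.db"))
}

fn main() {
    match linux_db_path() {
        Some(p) => println!("{}", p.display()),
        None => eprintln!("neither XDG_DATA_HOME nor HOME is set"),
    }
}
```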
Build for production:

```
npm run tauri build
```

Releases are published to Odin94/cutest-disk-tree. The in-app “Check for updates” uses `latest.json` from the latest release.
- Generate a key pair (keep the private key secret and backed up):

  ```
  npm run tauri signer generate -w ~/.tauri/cutest-disk-tree.key
  ```

- Save the private key as `.tauri-private-key` and the public key as `.tauri-public-key` in the repo root (base64-encoded). The release script reads these and sets `plugins.updater.pubkey` in `tauri.conf.json` at build time. Keep `.tauri-private-key` out of version control (it is in `.gitignore`).
- Bump `version` in `src-tauri/tauri.conf.json` (and `package.json` if you use it for display).
- Either:
  - Build and release in one go: Run `./scripts/build-all-platforms.sh` (Bash). It will confirm pre-steps, run a signed build, then run the release script to produce `release-out/v<VERSION>/latest.json` and `manifest.txt`.
  - Or build yourself, then release: Run a signed build, then run `./scripts/release.sh`. It will patch the updater config, then generate `latest.json` and `manifest.txt` from the existing bundle. It does not upload anything; it prints the manual GitHub steps at the end.
- Follow the printed steps: create the release and tag on GitHub, upload the files listed in `manifest.txt` (installers, `.sig` files, and `latest.json`), then publish.
Multi-platform: The script produces a `latest.json` for the platform you built on. For updates on multiple OSes, run the build (and release script) on each OS and merge the `platforms` blocks from each `latest.json` into one, then upload that single `latest.json` to the release.
If you prefer not to use the script:

1. Set the signing key (PowerShell):

   ```powershell
   $env:TAURI_SIGNING_PRIVATE_KEY="<path-or-content-of-private-key>"
   ```

   On macOS/Linux use `export TAURI_SIGNING_PRIVATE_KEY="..."` instead.
2. Run a signed build: `npm run tauri build`.
3. Open Releases → “Draft a new release”.
4. Create a tag (e.g. v0.1.1).
5. Upload the built artifacts from src-tauri/target/release/bundle/ and add latest.json as described in the script’s output (or in the “Check for updates” paragraph below).
6. Publish the release.
After that, existing installs will see the update when users click “Check for updates”.
The benchmark binary compares the three search-indexing strategies (suffix array, SQLite, compressed-text LZ4) against each other using the path set in `CUTE_DISK_TREE_SCAN_PATH` in `.env`. It runs 3 iterations and prints a detailed report covering per-step build times, per-query find times, averages, and worst-case figures.
Build (release mode is required for meaningful numbers):

```
cargo build --bin benchmark --release
```

Run and save output to a timestamped file:

```sh
# bash / MSYS / Git Bash
mkdir -p benchmark
cargo run --bin benchmark --release 2>&1 | tee "./benchmark/$(date +%Y-%m-%dT%H-%M-%S).txt"
```

```powershell
# PowerShell
New-Item -ItemType Directory -Force benchmark | Out-Null
cargo run --bin benchmark --release 2>&1 | Tee-Object -FilePath "benchmark/$(Get-Date -Format 'yyyy-MM-ddTHH-mm-ss').txt"
```

Results are written to `./benchmark/<timestamp>.txt`. The `benchmark/` directory is git-ignored so you can accumulate runs locally without committing them.
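The shape of the report (per-iteration times plus average and worst case) can be sketched with `std::time::Instant`. The `time_iterations` helper and the stand-in workload are illustrative, not the benchmark binary's code:

```rust
use std::time::{Duration, Instant};

/// Run `f` for `iters` iterations and return the per-iteration times,
/// their average, and the worst (slowest) case.
fn time_iterations(iters: usize, mut f: impl FnMut()) -> (Vec<Duration>, Duration, Duration) {
    assert!(iters > 0);
    let mut times = Vec::with_capacity(iters);
    for _ in 0..iters {
        let start = Instant::now();
        f();
        times.push(start.elapsed());
    }
    let total: Duration = times.iter().sum();
    let avg = total / iters as u32;
    let worst = *times.iter().max().unwrap();
    (times, avg, worst)
}

fn main() {
    // Stand-in workload; the real benchmark times index builds and queries.
    let (times, avg, worst) = time_iterations(3, || {
        let v: Vec<u64> = (0..100_000).collect();
        std::hint::black_box(v.iter().sum::<u64>());
    });
    for (i, t) in times.iter().enumerate() {
        println!("iteration {}: {:?}", i + 1, t);
    }
    println!("avg {:?}, worst {:?}", avg, worst);
}
```

Release mode matters for the same reason `black_box` does: without it, the optimizer can skew or remove the measured work entirely.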
See `ignored/benchmarks.md` for a full explanation of what the benchmark measures, the fairness decisions behind it, and analysis of how each strategy could be improved.
Walk a directory tree, aggregate folder sizes, and avoid double-counting hard links.
Build (requires Rust):

```
cargo build
```

Run:

```
cargo run -- <path>
# or
cargo run -- .
```

- Symlinks: Not followed (ignored for traversal).
- Hard links: Counted once per (device, inode) on Unix; per (volume, file id) on Windows.
- Output: Total file entries, unique file count/size, and top 20 folders by recursive size.
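On Unix, the once-per-`(device, inode)` counting can be sketched with `std::os::unix::fs::MetadataExt`. `unique_size` is a hypothetical helper, not the crate's implementation (and the Windows `(volume, file id)` variant needs different APIs):

```rust
use std::collections::HashSet;
use std::fs;
use std::io;
use std::os::unix::fs::MetadataExt;
use std::path::Path;

/// Add up file sizes, counting each (device, inode) pair only once so a
/// file reachable through several hard links is not double-counted.
fn unique_size(paths: &[&Path], seen: &mut HashSet<(u64, u64)>) -> io::Result<u64> {
    let mut total = 0;
    for p in paths {
        let meta = fs::symlink_metadata(p)?; // do not follow symlinks
        if meta.is_file() && seen.insert((meta.dev(), meta.ino())) {
            total += meta.len();
        }
    }
    Ok(total)
}

fn main() -> io::Result<()> {
    // Create a file and a hard link to it; only one copy should be counted.
    let dir = std::env::temp_dir();
    let a = dir.join("hl-demo-a");
    let b = dir.join("hl-demo-b");
    fs::write(&a, b"hello")?;
    let _ = fs::remove_file(&b); // hard_link fails if the target exists
    fs::hard_link(&a, &b)?;
    let mut seen = HashSet::new();
    let total = unique_size(&[&a, &b], &mut seen)?;
    println!("unique bytes: {total}"); // counts 5 bytes, not 10
    Ok(())
}
```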