
Vim/hyp 184 cli hs follow command for websocket streams #74

Open
vimmotions wants to merge 91 commits into main from
vim/hyp-184-cli-hs-follow-command-for-websocket-streams

Conversation

@vimmotions
Contributor

hs stream — Live WebSocket Stream CLI + Interactive TUI

Summary

Adds a new hs stream command that connects to a deployed stack's WebSocket and streams live entity data to the terminal. This is the CLI equivalent of what the TypeScript SDK does programmatically — but with filtering, recording, time-travel, and an interactive TUI for exploration.

This was the highest-leverage missing UX feature: users could deploy stacks with hs up but had no way to observe live stream data without writing code.

What's new

Core streaming (hs stream <View> --url <wss://...>)

  • Connects to any deployed stack's WebSocket endpoint
  • Subscribes to a view using the standard Entity/mode syntax (e.g. PumpfunToken/list)
  • Outputs merged entity state as NDJSON (one JSON object per line) to stdout
  • --raw mode outputs unmerged WebSocket frames directly
  • Pipe-friendly: works with | jq, | head, | grep, etc.
  • URL resolution: --url explicit, --stack from hyperstack.toml, or auto-match from config
  • Subscription controls: --key, --take, --skip, --no-snapshot, --after

Filtering & triggers

  • --where DSL with 10 operators: =, !=, >, >=, <, <=, ~regex, !~regex, ? (exists), !? (not exists)
  • Dot-path support for nested fields: --where "info.symbol=TRUMP"
  • Multiple --where flags are ANDed together
  • --first exits immediately after the first entity matches the filter
  • --select projects specific fields: --select "info.name,info.symbol"
  • --ops filters by operation type: --ops upsert,patch
  • --count shows a running update count on stderr
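The operator handling described above has one subtlety worth showing: two-character operators must be tried before their one-character prefixes, and the split must happen at the first operator occurrence so values may themselves contain operator characters. The sketch below is an illustrative re-implementation under those rules, not the actual filter.rs code; `parse_where` and `Op` are hypothetical names.

```rust
// Illustrative sketch of splitting a --where expression into (path, op, value).
#[derive(Clone, Copy, Debug, PartialEq)]
enum Op { Eq, Ne, Gt, Ge, Lt, Le, Regex, NotRegex, Exists, NotExists }

fn parse_where(expr: &str) -> Option<(String, Op, String)> {
    // Suffix operators first: "path?" (exists) and "path!?" (not exists).
    if let Some(path) = expr.strip_suffix("!?") {
        return Some((path.to_string(), Op::NotExists, String::new()));
    }
    if let Some(path) = expr.strip_suffix('?') {
        return Some((path.to_string(), Op::Exists, String::new()));
    }
    // Two-char operators are listed before their one-char prefixes so that
    // "age>=18" parses as >= rather than > with value "=18".
    let ops: &[(&str, Op)] = &[
        ("!=", Op::Ne), (">=", Op::Ge), ("<=", Op::Le), ("!~", Op::NotRegex),
        ("=", Op::Eq), (">", Op::Gt), ("<", Op::Lt), ("~", Op::Regex),
    ];
    // Split on the EARLIEST occurrence of any operator, preferring the
    // longer token on a tie, so values may contain operator characters
    // ("name=a=b" parses as path "name", value "a=b").
    let mut best: Option<(usize, &str, Op)> = None;
    for &(tok, op) in ops {
        if let Some(i) = expr.find(tok) {
            match best {
                Some((bi, bt, _)) if bi < i || (bi == i && bt.len() >= tok.len()) => {}
                _ => best = Some((i, tok, op)),
            }
        }
    }
    let (i, tok, op) = best?;
    let path = &expr[..i];
    if path.is_empty() {
        return None; // "=value" has no field name; a clear error upstream
    }
    Some((path.to_string(), op, expr[i + tok.len()..].to_string()))
}

fn main() {
    assert_eq!(
        parse_where("age>=18"),
        Some(("age".to_string(), Op::Ge, "18".to_string()))
    );
    assert_eq!(
        parse_where("name=a=b"),
        Some(("name".to_string(), Op::Eq, "a=b".to_string()))
    );
    assert_eq!(
        parse_where("info.name!?"),
        Some(("info.name".to_string(), Op::NotExists, String::new()))
    );
    assert_eq!(parse_where("=value"), None);
}
```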

Agent-friendly output

  • --no-dna outputs NO_DNA v1 envelope format with lifecycle events (connected, snapshot_complete, entity_update, disconnected)
  • Every TUI interaction has a non-interactive CLI equivalent for agent consumption

Recording & replay

  • --save snapshot.json records all frames with timestamps
  • --duration 30 auto-stops recording after N seconds
  • --load snapshot.json replays through the same merge/filter pipeline (no WebSocket needed)

Entity history & time-travel

  • EntityStore tracks per-entity history with a ring buffer (default 1000 entries)
  • --history --key <key> outputs full update history as JSON
  • --at N --key <key> shows entity state at a specific point in history
  • --diff --key <key> shows field-level changes between updates
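The ring-buffer and `--at N` indexing semantics above (0 = latest, oldest entries evicted at the cap) can be sketched with a std-only `VecDeque`. `History` is a hypothetical stand-in for the real EntityStore, shown only to illustrate eviction and indexing.

```rust
use std::collections::VecDeque;

// Illustrative bounded history: pushing past the cap evicts the oldest
// entry, and at(0) addresses the most recent version.
struct History<T> {
    buf: VecDeque<T>,
    cap: usize,
}

impl<T> History<T> {
    fn new(cap: usize) -> Self {
        History { buf: VecDeque::with_capacity(cap), cap }
    }

    fn push(&mut self, v: T) {
        if self.buf.len() == self.cap {
            self.buf.pop_front(); // evict oldest once the cap is reached
        }
        self.buf.push_back(v);
    }

    /// n = 0 is the latest version, n = 1 the one before, and so on,
    /// mirroring the documented --at N convention (0 = latest).
    fn at(&self, n: usize) -> Option<&T> {
        self.buf.len().checked_sub(n + 1).and_then(|i| self.buf.get(i))
    }
}

fn main() {
    let mut h = History::new(3);
    for v in 1..=5 {
        h.push(v); // the 4th and 5th push evict 1 and 2
    }
    assert_eq!(h.at(0), Some(&5));
    assert_eq!(h.at(2), Some(&3));
    assert_eq!(h.at(3), None); // evicted
}
```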

Interactive TUI (--tui)

Behind --features tui (ratatui + crossterm):

  • Split-pane layout: entity list (30%) + JSON detail view (70%)
  • JSON syntax coloring: keys in cyan, strings green, numbers magenta, booleans yellow
  • Deep search (/): filters entities by searching all values in the JSON tree, not just keys
  • Time-travel: h/l step through entity version history, with diff view toggle (d)
  • Vim motions: j/k, G/gg, Ctrl+d/Ctrl+u, n (next match), number prefixes (10j)
  • Pause/resume (p): freeze the stream while exploring
  • Save snapshot (s): dump current recording to JSON file
  • Raw frame toggle (r)
  • Auto-scrolling list: selection always stays visible in both directions

SDK changes

  • deep_merge_with_append made public for reuse
  • parse_frame, parse_snapshot_entities, try_parse_subscribed_frame, ClientMessage, SnapshotEntity exported from hyperstack-sdk

Usage examples

# Stream all entities as NDJSON
hs stream PumpfunToken/list --url wss://my-stack.stack.usehyperstack.com

# Find first token with a specific symbol
hs stream PumpfunToken/list --url wss://... --where "info.symbol=TRUMP" --first

# Record 30 seconds, replay later in TUI
hs stream PumpfunToken/list --url wss://... --save capture.json --duration 30
hs stream --load capture.json --tui

# Interactive exploration
hs stream PumpfunToken/list --url wss://... --tui

# Pipe to jq
hs stream PumpfunToken/list --url wss://... --raw | jq '.data[0].data.info.name'

New files

cli/src/commands/stream/
├── mod.rs          # Command args, URL resolution, entry point
├── client.rs       # WebSocket connection, frame processing, replay
├── filter.rs       # --where DSL parser & evaluator (9 unit tests)
├── output.rs       # NDJSON, NO_DNA, raw formatters
├── snapshot.rs     # --save/--load file I/O
├── store.rs        # EntityStore with history ring buffer (5 unit tests)
└── tui/
    ├── mod.rs      # Terminal setup, event loop, key dispatch
    ├── app.rs      # App state, vim motions, entity management
    └── ui.rs       # ratatui layout, JSON coloring, widgets

New dependencies (cli)

  • hyperstack-sdk — reuse Frame types, subscription protocol, merge logic
  • tokio, futures-util, tokio-tungstenite — async WebSocket client
  • ratatui, crossterm — TUI (optional, behind tui feature flag)

Tests

  • 14 unit tests: 9 for filter DSL (all operators, nested paths, type coercion), 5 for EntityStore (merge, patch, history, diff)
  • Manually tested against my live stack deployment with PumpfunToken/list view

…ase 1)

Core streaming MVP: connect to a deployed stack's WebSocket, subscribe
to a view (e.g. OreRound/latest), and stream entity data as NDJSON to
stdout. Supports --raw mode for raw frames and merged entity output
(default). Resolves WebSocket URL from --url, --stack, or hyperstack.toml.

Also exports parse_frame, parse_snapshot_entities, ClientMessage, and
deep_merge_with_append from hyperstack-sdk for reuse.
…s stream (Phase 2)

- Filter DSL via --where: =, !=, >, >=, <, <=, ~regex, !~regex, ?, !?
  with dot-path support for nested fields (e.g. --where "user.age>18")
- --first exits after first entity matches filter criteria
- --select projects specific fields (comma-separated dot paths)
- --ops filters by operation type (upsert, patch, delete)
- --no-dna outputs NO_DNA v1 agent-friendly envelope format with
  lifecycle events (connected, snapshot_complete, entity_update, disconnected)
- --count shows running update count on stderr
- 9 unit tests for filter parsing and evaluation
…hase 3)

- --save <file> records all raw frames with timestamps to a JSON file
- --duration <secs> auto-stops recording after N seconds
- --load <file> replays a saved snapshot through the same merge/filter
  pipeline (no WebSocket connection needed)
- Snapshot format includes metadata (view, url, captured_at, duration)
…ase 4)

EntityStore tracks full entity state + per-entity history ring buffer
(default 1000 entries). Supports:
- --history: outputs full update history for --key entity as JSON array
- --at N: shows entity state at specific history index (0 = latest)
- --diff: shows field-level diff (added/changed/removed) between updates
  with raw patch data when available

These flags provide non-interactive agent equivalents of the TUI time
travel feature. 5 unit tests for store operations and diffing.
Behind --features tui flag, `hs stream --tui` launches a ratatui-based
terminal UI with:
- Split-pane layout: entity list (30%) + detail view (70%)
- Entity navigation (j/k, arrows), detail focus (Enter/Esc)
- Time travel through entity history (h/l, Home/End)
- Diff view toggle (d) showing field-level changes
- JSON syntax coloring in detail panel
- Pause/resume live updates (p)
- Save snapshot to file (s)
- Entity key filtering (/)
- Raw frame toggle (r)
- Status bar with keybinding hints
- Timeline bar showing history position

Dependencies: ratatui 0.29, crossterm 0.28 (optional)
Without the tui feature, --tui prints an error with install instructions.

The server sends subscribed acknowledgments as binary frames with a
different shape (no `entity` field), causing parse_frame to fail.
Now falls back to try_parse_subscribed_frame before warning, so real
parse errors are still surfaced while subscribed frames are handled
cleanly. Re-exports try_parse_subscribed_frame from hyperstack-sdk.

Previously, keys like r, d, s, h, l etc. were matched as TUI commands
before checking if filter input was active, making it impossible to
type those characters in the filter. Now checks filter_input_active
first and routes all Char keys to the filter text input.

The / filter now recursively searches all string, number, and boolean
values in each entity's JSON data. Typing "test" matches any entity
where any field value contains "test" (case-insensitive), not just
entities whose key contains the search term.

When typing a filter that reduces the entity list, the selection now
clamps to stay within the filtered results. Navigation (j/k) also
bounds against the filtered count instead of the full entity list.

Uses ratatui's ListState with the selected index so the list widget
automatically scrolls to keep the highlighted entity in view when
navigating past the visible area. Also shows filter text and
filtered/total count in the title when a filter is active.

- gg: jump to top of list
- G: jump to bottom of list
- Ctrl+d / Ctrl+u: half-page down/up
- n: jump to next filter match (wraps around)
- Number prefixes: e.g. 10j moves down 10, 5k moves up 5,
  3Ctrl+d moves 3 half-pages down
- Esc clears any pending count/g prefix
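The count-prefix behavior listed above can be sketched as a small state machine: digits accumulate into a pending count that multiplies the next motion, and Esc clears it. `Motions` is an illustrative stand-in, not the actual app.rs code (which also handles the g prefix, Ctrl combos, and so on).

```rust
// Illustrative vim-style count prefix handling.
struct Motions {
    pending: usize, // accumulated digit prefix; 0 means "no prefix"
    cursor: usize,
    max: usize,
}

impl Motions {
    fn key(&mut self, c: char) {
        match c {
            '0'..='9' => {
                // Cap the accumulator so "99999999999j" cannot overflow.
                self.pending =
                    (self.pending * 10 + (c as usize - '0' as usize)).min(99_999);
            }
            'j' => {
                let n = self.take();
                self.cursor = (self.cursor + n).min(self.max);
            }
            'k' => {
                let n = self.take();
                self.cursor = self.cursor.saturating_sub(n);
            }
            '\x1b' => self.pending = 0, // Esc clears any pending count
            _ => {}
        }
    }

    fn take(&mut self) -> usize {
        let n = if self.pending == 0 { 1 } else { self.pending };
        self.pending = 0; // a motion always consumes the prefix
        n
    }
}

fn main() {
    let mut m = Motions { pending: 0, cursor: 0, max: 500 };
    for c in "10j".chars() { m.key(c); }
    assert_eq!(m.cursor, 10);
    for c in "5k".chars() { m.key(c); }
    assert_eq!(m.cursor, 5);
    m.key('k');
    assert_eq!(m.cursor, 4); // bare motion defaults to count 1
}
```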

ListState was being recreated fresh each frame, losing the scroll
offset. Now stored in App and synced with selected_index on every
action, so ratatui properly auto-scrolls the entity list in both
directions — pressing k after scrolling past the bottom now scrolls
back up as expected.

The timeline bar now shows two distinct pieces of info:
- Row position: "Row 1/513" (your position in the entity list)
- Entity version: "version 1/2" (update history for the selected entity)

Previously only showed "update N/N" which was confusing because it
looked like it referred to list position rather than entity history.
The entity count is already shown in the list panel title, no need
to repeat it in the header bar.
@vimmotions vimmotions requested a review from adiman9 March 23, 2026 17:15

@greptile-apps

greptile-apps bot commented Mar 23, 2026

Greptile Summary

This PR adds the hs stream CLI command — a WebSocket streaming client that delivers live entity updates from a deployed Hyperstack stack directly to the terminal. It fills the gap between hs up (deploy) and any programmatic observation of the data, covering NDJSON piped output, a full --where DSL, recording/replay, entity history, and an optional ratatui TUI behind a feature flag.

Key changes:

  • 7 new source files under cli/src/commands/stream/ implementing the command, WebSocket client, filter DSL (10 operators, dot-paths, 9 tests), output formatters (NDJSON / NO_DNA / raw), snapshot recorder/player, and EntityStore with ring-buffer history (5 tests).
  • 3 TUI files (behind --features tui) providing a split-pane entity explorer with vim motions, deep JSON search, time-travel history, and diff view.
  • SDK visibility changes: deep_merge_with_append, frame parsing helpers, SnapshotEntity, and ClientMessage are now pub-exported for CLI reuse.
  • New CLI dependencies: tokio, futures-util, tokio-tungstenite, and optionally ratatui/crossterm.

Issues found:

  • SnapshotPlayer::load reads the entire snapshot file into a String before deserialising. The save path explicitly streams to avoid this overhead, but the load path does not — for captures near the 100k-frame limit this can cause significant memory pressure. Using serde_json::from_reader over a BufReader would be consistent.
  • --raw combined with field-level --where predicates silently produces no matches for snapshot batch frames (since frame.data is a JSON array, not an entity object). The risk is documented in a code comment but not surfaced to the user; an early warning would prevent confusing empty output.
  • --load is not declared as conflicts_with = "save" in clap, so --load file.json --save new.json is silently accepted and performs a replay-and-re-record with no documentation or feedback about the outcome.
  • StdoutWriter::writeln flushes the BufWriter on every call, negating its batching benefit for high-throughput streams. For pipe-friendliness this is intentional, but keeping the explicit flush while removing BufWriter would make the intent clearer.

Confidence Score: 4/5

  • Safe to merge; the snapshot load memory issue and the --raw/--where silent mismatch are worth addressing but are not blocking correctness.
  • The core streaming logic, filter DSL, entity store, and TUI event loop are all well-implemented and covered by unit tests. The identified issues are medium-severity quality/UX concerns rather than data-loss or crash bugs. No security, concurrency, or data-integrity problems were found.
  • cli/src/commands/stream/snapshot.rs deserves attention for the load-path memory usage; cli/src/commands/stream/client.rs for the --raw + --where silent-mismatch UX gap.

Important Files Changed

Filename Overview
cli/src/commands/stream/client.rs Core WebSocket client with frame processing, ping/heartbeat, --first/--ops/--where handling, and snapshot recording. Logic is sound; no reconnect logic is present (acceptable for v1).
cli/src/commands/stream/filter.rs Well-structured DSL parser for --where with 10 operators; two-char operators are correctly prioritised; 9 unit tests cover all operators and edge cases including precedence.
cli/src/commands/stream/snapshot.rs Save path streams serialisation correctly, but load path reads the entire file as a String before parsing — inconsistent with the stated memory goal and problematic for large captures.
cli/src/commands/stream/store.rs EntityStore with ring-buffer history (1000 entries/entity); at/diff/history semantics are correct and consistent; 5 unit tests pass all scenarios including tombstone delete.
cli/src/commands/stream/output.rs Clean output abstraction with NDJSON, NO_DNA, raw, and count modes. StdoutWriter flushes after every writeln which defeats BufWriter buffering, though this is intentional for pipe-friendliness.
cli/src/commands/stream/mod.rs Command entry point with URL resolution (explicit --url → --stack → auto-match → single-stack fallback) and TUI/flag compatibility checks. --load does not conflict with --save in clap declarations.
cli/src/commands/stream/tui/app.rs App state and vim motions are well-implemented; filtered key cache strategy is efficient; raw frame buffer capped at 1000. Blocking file I/O during SaveSnapshot is acknowledged in a comment.
cli/src/commands/stream/tui/mod.rs TUI event loop with dedicated WS reader task, 10k-entry channel, dropped-frame counter, and panic hook that restores terminal state. Graceful shutdown signals the WS task with a 2-second timeout.
cli/src/commands/stream/tui/ui.rs ratatui layout with JSON syntax colouring, timeline bar, and status line. colorize_json_line uses "": " as separator (correct for serde_json pretty output). truncate_key handles multi-byte chars via char_indices.
rust/hyperstack-sdk/src/store.rs Single-line visibility change: deep_merge_with_append promoted from private to pub. No logic changes.
rust/hyperstack-sdk/src/lib.rs Exports parse_frame, parse_snapshot_entities, try_parse_subscribed_frame, SnapshotEntity, deep_merge_with_append, and ClientMessage for CLI reuse. No logic changes.

Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant CLI as hs stream (mod.rs)
    participant Client as client.rs
    participant WS as WebSocket Server
    participant Store as EntityStore
    participant Output as StdoutWriter

    User->>CLI: hs stream View/mode --url wss://...
    CLI->>CLI: resolve_url() / validate_ws_url()
    CLI->>Client: stream(url, view, args)
    Client->>WS: connect_async(url)
    WS-->>Client: WebSocket handshake
    Client->>WS: Subscribe(view, key, take, skip, after)

    loop Frame loop (tokio::select!)
        WS-->>Client: Binary/Text Frame
        Client->>Client: parse_frame()
        alt Snapshot frame
            Client->>Store: upsert(key, data, "snapshot")
            Client->>Client: filter.matches(data)?
            Client->>Output: emit_entity / NO_DNA / raw
        else Upsert/Create
            Client->>Store: upsert(key, data, op)
            Client->>Output: emit_entity
        else Patch
            Client->>Store: patch(key, data, append_paths)
            Client->>Client: deep_merge_with_append()
            Client->>Output: emit_entity (merged state)
        else Delete
            Client->>Store: delete(key)
            Client->>Output: print_delete
        end
        Client->>Client: --first? → break
        Client->>Client: --duration elapsed? → break
        User-->>Client: Ctrl+C → break
    end

    Client->>Client: SnapshotRecorder.save() if --save
    Client->>Client: output_history_if_requested() if --history/--at/--diff
```

Comments Outside Diff (4)

  1. cli/src/commands/stream/snapshot.rs, line 2003-2007 (link)

    Load reads entire file into memory — inconsistent with save's streaming approach

    save() explicitly streams frames one-by-one to avoid "holding the entire JSON in memory," but load() reads the whole file into a String before deserialising. For captures at the MAX_FRAMES limit (100,000 frames), each frame can be several KB, making the in-memory String potentially hundreds of MB before serde even starts parsing — the same OOM risk the save path was designed to avoid.

    A serde_json::from_reader over a BufReader<File> would be consistent and avoid the intermediate allocation:

```rust
pub fn load(path: &str) -> Result<Self> {
    let file = fs::File::open(path)
        .with_context(|| format!("Failed to open snapshot file: {}", path))?;
    let reader = io::BufReader::new(file);
    let file: SnapshotFile = serde_json::from_reader(reader)
        .with_context(|| format!("Failed to parse snapshot file: {}", path))?;
    // ...
}
```
  2. cli/src/commands/stream/client.rs, line 892-909 (link)

    --raw + field-level --where silently produces no matches on snapshot frames

    When --raw is active and --where includes field-level predicates (e.g. --where "info.symbol=TRUMP"), the filter runs against frame.data which is a JSON array for snapshot frames — field lookups on an array always return None, so every snapshot frame is silently dropped. Users piping --raw --where "field=value" during startup will see no output until the first live upsert/patch frame arrives, with no indication why.

    A warning on entry when --raw and non-empty --where are both set would surface this early:

```rust
if let OutputMode::Raw = state.output_mode {
    if !state.filter.is_empty() {
        eprintln!(
            "Warning: --where filters in --raw mode match against raw frame.data. \
             Snapshot frames contain entity arrays, so field-level filters \
             (e.g. --where \"info.name=X\") will not match them. \
             Omit --raw for per-entity field filtering."
        );
    }
    // ...
}
```
  3. cli/src/commands/stream/mod.rs, line 1519-1521 (link)

    --load does not conflict with --save in clap declarations

    --load is declared as conflicting with url, tui, and duration, but not with save. This means hs stream --load capture.json --save new_capture.json is silently accepted and will replay from capture.json while re-serialising every frame to new_capture.json — an action with a non-obvious outcome (it works, but the URL in the new snapshot's header comes from the original file's metadata, not a live connection).

If replay+re-save is intentional, it should be documented. If not, adding conflicts_with = "save" to the load argument's clap declaration would prevent the confusion.

  4. cli/src/commands/stream/output.rs, line 1747-1751 (link)

    BufWriter is flushed after every write, negating its batching benefit

    BufWriter is designed to coalesce many small writes into fewer, larger syscalls. Calling flush() after every writeln forces an immediate syscall on each line, leaving BufWriter acting only as a thin wrapper rather than a buffer. For high-throughput streams this can be measurable.

For pipe-friendly streaming output the intent is correct (consumers like jq need data promptly), but the flush should be removed from writeln and instead rely on BufWriter's auto-flush on drop (already present in the Drop impl), or flush only periodically. If per-line flushing is required, BufWriter can be removed entirely to make the intent explicit.

Reviews (30): Last reviewed commit: "fix: flush stdout per write, streaming s..."

Previously the deadline was checked at the top of the loop before
entering select!, so on quiet streams the actual stop time could
overshoot by up to the 30-second ping interval. Now uses a sleep
future as a select! arm that fires exactly when the duration expires.
…lisions

Previously --select "a.id,b.id" would output {"id": <b's value>},
silently overwriting a.id. Now uses the full dot-path as the output
key: {"a.id": 1, "b.id": 2}. Single-segment paths are unchanged
(--select "name" still outputs {"name": ...}).

Adds test for the collision case.
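The collision-safe projection described above can be sketched as follows. The flat path-to-value map is a simplification (the real code walks nested serde_json values), and `project` is a hypothetical name.

```rust
use std::collections::BTreeMap;

// Illustrative --select projection: the output key is the full dot-path,
// so "a.id,b.id" yields {"a.id": 1, "b.id": 2} instead of both paths
// silently writing to "id".
fn project(entity: &BTreeMap<&str, i64>, select: &str) -> BTreeMap<String, i64> {
    let mut out = BTreeMap::new();
    for path in select.split(',').map(str::trim) {
        if let Some(v) = entity.get(path) {
            out.insert(path.to_string(), *v); // full dot-path as output key
        }
    }
    out
}

fn main() {
    let entity = BTreeMap::from([("a.id", 1), ("b.id", 2), ("name", 7)]);
    let out = project(&entity, "a.id,b.id");
    assert_eq!(out.get("a.id"), Some(&1));
    assert_eq!(out.get("b.id"), Some(&2)); // no silent overwrite
    // Single-segment paths are unchanged:
    assert_eq!(project(&entity, "name").get("name"), Some(&7));
}
```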

raw_frames were only collected when show_raw was active, and were
never read by any rendering code. Now:
- Always collects raw frames (so toggling on shows recent data)
- selected_entity_data() checks show_raw first and returns the most
  recent raw WebSocket frame for the selected entity key

If the TUI panics, disable_raw_mode and LeaveAlternateScreen never
executed, leaving the user's terminal in raw mode (unusable until
running 'reset'). Now installs a panic hook before entering raw mode
that restores the terminal state before re-invoking the original hook.

entity_keys.contains() was O(n) per frame, degrading with thousands
of entities. Now maintains a parallel HashSet<String> for O(1)
membership checks. HashSet::insert returns false if already present,
so it doubles as the contains check. Delete also removes from the set.
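The parallel-set trick above can be sketched with std collections; `Keys` is illustrative, not the actual TUI code. Inserts stay O(1) on the hot path, while the O(n) Vec retain only runs on the rarer delete.

```rust
use std::collections::HashSet;

// Ordered Vec for display, HashSet for O(1) membership.
// HashSet::insert returns false when the key is already present,
// so it doubles as the contains check.
struct Keys {
    ordered: Vec<String>,
    seen: HashSet<String>,
}

impl Keys {
    fn add(&mut self, key: &str) {
        if self.seen.insert(key.to_string()) {
            self.ordered.push(key.to_string()); // first sighting only
        }
    }

    fn remove(&mut self, key: &str) {
        if self.seen.remove(key) {
            self.ordered.retain(|k| k.as_str() != key);
        }
    }
}

fn main() {
    let mut keys = Keys { ordered: Vec::new(), seen: HashSet::new() };
    keys.add("a");
    keys.add("b");
    keys.add("a"); // duplicate rejected via the insert return value
    assert_eq!(keys.ordered, vec!["a", "b"]);
    keys.remove("a");
    assert!(!keys.seen.contains("a"));
    assert_eq!(keys.ordered, vec!["b"]);
}
```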

- Use strip_suffix instead of manual string slicing for ? and !? suffix
- Use is_some_and/is_none_or instead of map_or for Option comparisons
- Use direct == Some() comparison instead of map_or for equality check

The snapshot_complete detection and NO_DNA event emission only existed
in the Message::Text branch. Since the server primarily sends binary
frames, consumers relying on the NO_DNA snapshot_complete lifecycle
event would never see it. Now mirrors the same tracking logic in the
binary frame branch.

Byte-index slicing panics when the cut point lands in the middle of a
multi-byte codepoint (emoji, CJK characters). Now uses char_indices
to find safe byte boundaries for truncation.
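A minimal std-only sketch of that boundary-safe truncation follows; `truncate_key` here is illustrative, not the actual ui.rs code. Slicing `&s[..n]` panics when byte n is not a char boundary, so the loop keeps the largest char start index that fits.

```rust
// Truncate to at most max_bytes without splitting a multi-byte codepoint.
fn truncate_key(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    let mut end = 0;
    for (i, _) in s.char_indices() {
        if i > max_bytes {
            break;
        }
        end = i; // last char boundary at or before the limit
    }
    // Slicing at a char's start index excludes that char, so the kept
    // prefix never exceeds max_bytes and never splits a codepoint.
    &s[..end]
}

fn main() {
    assert_eq!(truncate_key("hello", 10), "hello");
    assert_eq!(truncate_key("hello", 3), "hel");
    assert_eq!(truncate_key("日本語", 4), "日"); // 3-byte chars; no panic
}
```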
…nfigured

Previously fell through to the first stack with any URL, silently
connecting to an unrelated stack. Now only auto-selects when there is
exactly one stack with a URL (unambiguous), and prints which stack was
chosen. With multiple stacks, requires explicit --url or --stack.

Previously --first only triggered when a --where filter was present,
silently running forever without one. Now --first always exits after
the first output: with --where it exits on first match, without
--where it exits after the first frame (raw) or entity (merged).

Adds comments explaining that two-char operators are checked before
single-char to avoid misparsing, and that the split is on the first
operator occurrence so values may contain operator characters
(e.g. --where "name=a=b" works correctly).
… in --ops

- TUI snapshot duration_ms now computed from first/last frame ts instead
  of recorder start_time which was near-zero
- --ops upsert now also matches create frames (semantically identical)
- Document panic hook limitation and history() ordering convention
- Reset pending_g after GotoTop (vim gg required double-press again)
- Delete retains entity history with tombstone for post-stream analysis
- SnapshotRecorder capped at 100k frames with warning
- entity_count updated after snapshot loop, not per-iteration

Ctrl+D, Ctrl+U, etc. were being inserted as literal characters in the
filter text. Now Ctrl+U clears the filter line, Ctrl+W deletes the
last word, and other control/alt combos are ignored.
…rite

- --ops create now correctly matches Create frames (normalized to upsert)
- entity_count updated per-entity in snapshot loop so NoDna events
  have accurate counts
- record_with_ts respects MAX_FRAMES cap
- --load conflicts_with tui at Clap level (cleaner error)
- Snapshot writes use tmp+rename for atomicity
…r errors

- TUI WS task handles subscribed frames via try_parse_subscribed_frame
  fallback (previously silently dropped)
- Snapshot tmp file placed in same directory as target (avoids EXDEV)
- build_state called before connect_async for fast-fail on bad args
- Empty field name in --where (e.g. "=value") now returns clear error
…lisions

- Float equality uses string match first, then epsilon comparison
  (avoids f64 rounding issues for decimals like 1.1)
- visible_rows subtracts 5 not 6 (header+timeline+status+2 borders)
- Remove unreachable --load/--tui runtime check (clap handles it)
- Snapshot filenames include milliseconds to prevent same-second collisions

The warning fired on every call after MAX_FRAMES since len never
exceeded the cap. Now uses a limit_warned flag to print once.
…ove premature log

- Delete of entity before cursor now shifts selected_index back to
  preserve which entity the user was looking at
- --raw and --no-dna with --tui now bail with clear messages
- Removed premature "Subscribing to..." printed before connection
…source

- Moved connected event from build_state to after connect_async succeeds,
  so failed connections don't emit a connected event with no matching
  disconnected
- Replay connected event includes "source": "replay" so consumers can
  distinguish live vs replay
- --load now conflicts_with --duration at clap level
- Bail on --where/--select/--ops/--first with --tui (previously ignored)
- TUI detects WebSocket disconnect and shows DISCONNECTED in header
- Float equality uses exact bitwise comparison after string match
  (relative epsilon was too loose for large numbers)
- Duration expiry sends WebSocket close frame before breaking
- finalize_count() clears the overwriting \r count line before post-
  stream messages (prevents garbled terminal output)
- Snapshot write removes existing destination before rename (Windows
  compatibility where fs::rename fails if target exists)
- Document that NoDna snapshot entity_count is a running tally
- Document silent delete filter drop for unseen entities
- Remove-before-rename only runs on Windows (POSIX rename overwrites)
- Propagate remove_file errors instead of silently swallowing them
- Clean up tmp file on rename failure before propagating error
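The float-equality rule from the list above (string match first, then exact bitwise comparison, since a relative epsilon is too loose for large numbers) can be sketched as below. `float_eq` is a hypothetical name for illustration, not the actual filter code.

```rust
// Compare a user-supplied literal against a JSON number.
fn float_eq(user_literal: &str, value: f64) -> bool {
    // String match first: "1.1" equals 1.1f64's shortest display form
    // even though 1.1 is not exactly representable in binary.
    if user_literal == value.to_string() {
        return true;
    }
    match user_literal.parse::<f64>() {
        // Exact bitwise comparison: both sides went through the same
        // str-to-f64 rounding, so identical decimals yield identical bits,
        // and large numbers are not falsely matched by a loose epsilon.
        Ok(parsed) => parsed.to_bits() == value.to_bits(),
        Err(_) => false,
    }
}

fn main() {
    assert!(float_eq("1.1", 1.1));
    assert!(float_eq("18", 18.0));
    assert!(!float_eq("1.1000001", 1.1));
}
```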
…okups

- Output functions now use a shared BufWriter<Stdout> held in StreamState
  instead of acquiring/releasing stdout lock per call
- Text WebSocket frames parsed once directly to Frame instead of double-
  parsing (Value then Frame)
- diff_at stores entry in local variable instead of 3 redundant gets
- TUI channel buffer increased to 10k for large snapshot batches
- Document that filter cache invalidation is per-tick not per-frame
…emove dead flush

- Replay snapshot_complete now checks received_snapshot (consistent
  with live stream path)
- Document --select flattening behavior in help text
- Document compute_diff as shallow top-level only
- Document colorize_json_line serde_json assumption
- Confirm --first semantics with comment
- Remove unused StdoutWriter::flush (Drop impl suffices)
…g_count cap

- StdoutWriter flushes after each writeln (prevents delays on low-
  throughput streams)
- JSON key coloring uses "\": " to avoid matching colons inside keys
- Snapshot serialization streams to file via BufWriter instead of
  building full JSON string in memory
- Document --ops snapshot as valid value
- Cap pending_count at 99999 to prevent usize overflow