Oracle Database Performance Analysis Tool
Parse · Analyze · Visualize · Consult AI
- What is JAS-MIN?
- Key Features
- Architecture Overview
- Installation
- Quick Start
- Usage Reference
- Statistical Algorithms
- AI Model Integration
- Output Structure
- Environment Variables
- CLI Reference
- Further Reading
- Authors
- License
JAS-MIN (JSON AWR & Statspack Miner) is a high-performance Oracle Database performance analysis tool written in Rust. It parses hundreds of AWR (.html) and STATSPACK (.txt) reports, converts them into structured JSON, and runs a comprehensive suite of statistical and numerical analyses focused on DB Time decomposition.
Instead of manually combing through verbose report files, JAS-MIN produces a single interactive HTML dashboard with Plotly-based visualizations, statistical summaries, anomaly detection, correlation analysis, multi-model gradient regression with multicollinearity diagnostics, and optional AI-generated interpretations.
The tool is named after Jasmin Fluri, one of the SOUC founders; the name was coined when the tool was first introduced at SOUC Database Circle 2024.
| Category | Capabilities |
|---|---|
| Parsing | Parallel parsing of AWR (.html) and STATSPACK (.txt) report directories into a unified JSON format. Supports Oracle 11g through 23ai report formats. |
| Visualization | Interactive Plotly HTML dashboards: time-series, heatmaps, histograms, box plots for wait events, SQL statistics, Load Profile, I/O stats, Instance Efficiency, Latch Activity, Segment Statistics. |
| Anomaly Detection | Median Absolute Deviation (MAD) with configurable thresholds and sliding window across wait events, SQL elapsed times, Load Profile, Instance Statistics, Dictionary Cache, Library Cache, Latch Activity, and Time Model. |
| Correlation | Pearson correlation between DB Time and every instance statistic, wait event, and SQL, with Bonferroni-corrected significance thresholds. |
| Gradient Analysis | Four-model regression suite (Ridge, Elastic Net, Huber, Quantile-95) to determine which wait events, statistics, and SQL statements most influence DB Time and DB CPU changes. Includes signed impact scores preserving directionality. |
| Multicollinearity Diagnostics | Variance Inflation Factor (VIF) computation for all predictors, automatic detection of collinear groups, and combined group impact calculation resolving cases where individual impacts are suppressed by multicollinearity. |
| Cross-Model Triangulation | Automated classification of bottlenecks by cross-referencing all four regression models (CONFIRMED_BOTTLENECK, TAIL_RISK, OUTLIER_DRIVEN, etc.). |
| AI Integration | One-shot analysis via OpenAI, Google Gemini, or OpenRouter; modular multi-step pipeline for smaller-context models; local model support (LM Studio, Ollama); interactive backend assistant chat. |
| Security | Three-tier security model controlling exposure of object names, SQL text, and other sensitive data in the JSON output. |
| Parallelism | Rayon-based parallel file parsing and anomaly detection with configurable thread count. |
┌─────────────────────────────────────────────────────────────────┐
│ AWR / STATSPACK Reports │
│ (.html files / .txt files) │
└─────────────────┬───────────────────────────────────────────────┘
│ parallel parsing (rayon)
▼
┌─────────────────────────────────────────────────────────────────┐
│ AWRSCollection (JSON) │
│ ┌──────────┐ ┌───────────┐ ┌──────────┐ ┌──────────────────┐ │
│ │SnapInfo │ │LoadProfile│ │WaitEvents│ │ SQL Elapsed/CPU │ │
│ │HostCPU │ │TimeModel │ │ FG / BG │ │ IO/Gets/Reads │ │
│ │InstStats │ │Efficiency │ │Histograms│ │ ASH Top Events │ │
│ │IOStats │ │DictCache │ │LibCache │ │ Segment Stats │ │
│ │LatchAct │ │RedoLog │ │WaitClass │ │ Init Parameters │ │
│ └──────────┘ └───────────┘ └──────────┘ └──────────────────┘ │
└─────────────────┬───────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────┐
│ Analysis Engine │
│ │
│ ┌────────────────┐ ┌───────────────┐ ┌───────────────────┐ │
│ │ Peak Detection │ │ MAD Anomaly │ │ Pearson │ │
│ │ (CPU/Time │ │ Detection │ │ Correlation │ │
│ │ Ratio) │ │ (sliding │ │ (Bonferroni │ │
│ │ │ │ window) │ │ corrected) │ │
│ └────────────────┘ └───────────────┘ └───────────────────┘ │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Multi-Model Gradient Regression │ │
│ │ Ridge · Elastic Net · Huber (IRLS) · Quantile-95 (IRLS) │ │
│ │ → VIF Diagnostics · Collinear Group Impact │ │
│ │ → Cross-Model Triangulation │ │
│ └────────────────────────────────────────────────────────────┘ │
└─────────────────┬───────────────────────────────────────────────┘
│
┌───────┴────────┐
▼ ▼
┌──────────────┐ ┌──────────────────┐
│ HTML Report │ │ ReportForAI │
│ (Plotly │ │ (TOON/JSON) │
│ dashboard) │ │ │ │
│ │ │ ▼ │
│ + TXT log │ │ AI Integration │
│ + CSV │ │ (OpenAI/Gemini/ │
│ anomalies │ │ OpenRouter/ │
│ │ │ Local LLM) │
└──────────────┘ └──────────────────┘
- Rust toolchain (1.75+ recommended): rustup.rs
```
git clone https://github.com/ora600pl/jas-min.git
cd jas-min
cargo build --release
```
The binary will be at `./target/release/jas-min`.
- STATSPACK: Use the included `gen_statspack_reps.sh` script.
- AWR: Use `awr-generator.sql` by @flashdba.

You need at least a week of reports (ideally more) for meaningful statistical analysis.
```
jas-min -d ./awr_reports
```
This will:
- Parse all `.html` (AWR) and `.txt` (STATSPACK) files in `./awr_reports/`
- Produce `awr_reports.json` (structured data)
- Produce `awr_reports.txt` (text log)
- Generate an `awr_reports.html_reports/` directory with the interactive HTML dashboard
- Open the main report in your default browser
```
jas-min -j awr_reports.json
```

```
# Using Google Gemini
export GEMINI_API_KEY="your-key"
jas-min -d ./awr_reports --ai google:gemini-2.5-flash:EN

# Using OpenAI
export OPENAI_API_KEY="your-key"
jas-min -d ./awr_reports --ai openai:o3:EN

# Using OpenRouter
export OPENROUTER_API_KEY="your-key"
jas-min -d ./awr_reports --ai openrouter:anthropic/claude-sonnet-4:EN

# Using a local model (LM Studio / Ollama)
export LOCAL_API_KEY="..."
export LOCAL_BASE_URL="http://localhost:1234/v1/chat/completions"
jas-min -d ./awr_reports --ai local:my-model:EN
```

```
# Create .env file with PORT and API keys first
jas-min -d ./awr_reports -b google:gemini-2.5-flash
```

| Flag | Description | Default |
|---|---|---|
| `-d, --directory <DIR>` | Parse all reports in the given directory | — |
| `-j, --json-file <FILE>` | Analyze a previously generated JSON file | — |
| `--file <FILE>` | Parse a single report file and print JSON to stdout | — |
| `-o, --outfile <FILE>` | Write JSON output to a non-default file | `<dirname>.json` |
| `-P, --parallel <N>` | Parallelism level for file parsing | 4 |
| `-q, --quiet` | Suppress terminal output (still writes to the log file) | false |
```
jas-min -d ./reports -s 1000-2000
```

| Flag | Description | Default |
|---|---|---|
| `-s, --snap-range <BEGIN-END>` | Filter analysis to a specific snap ID range | 0-666666666 |
| Level | Flag | Description |
|---|---|---|
| 0 | `-S 0` | Maximum security: no object names, database names, or sensitive data stored |
| 1 | `-S 1` | Stores segment names from Segment Statistics sections |
| 2 | `-S 2` | Stores full SQL text from AWR reports |
| Flag | Description | Default |
|---|---|---|
| `-m, --mad-threshold <FLOAT>` | MAD score threshold for flagging anomalies | 7.0 |
| `-W, --mad-window-size <PCT>` | Sliding window size as a percentage of total probes (100 = global) | 100 |

```
# Use a 10% sliding window with threshold 5
jas-min -d ./reports -W 10 -m 5
```

| Flag | Description | Default |
|---|---|---|
| `-R, --ridge-lambda <FLOAT>` | L2 regularization strength for Ridge regression | 50.0 |
| `-E, --en-lambda <FLOAT>` | Overall regularization strength for Elastic Net | 30.0 |
| `-A, --en-alpha <FLOAT>` | L1/L2 mixing: 1.0 = Lasso (pure L1), 0.0 = Ridge-like (pure L2) | 0.666 |
| `-I, --en-max-iter <N>` | Max iterations for Elastic Net coordinate descent | 5000 |
| `-T, --en-tol <FLOAT>` | Convergence tolerance for Elastic Net | 1e-6 |
| Flag | Description | Default |
|---|---|---|
| `-a, --ai <VENDOR:MODEL:LANG>` | Run AI-powered interpretation after analysis | — |
| `-C, --token-count-factor <N>` | Multiply the base output token count (8192) by this factor | 8 |
| `-B, --tokens-budget <N>` | Token budget for modular LLM analysis | 80000 |
| `-D, --deep-check <N>` | Ask the AI to deep-analyze the top-N snapshots (Gemini only) | 0 |
| `-u, --url-context-file <FILE>` | Provide a URL context file for Gemini's URL context tool | — |

```
# Google Gemini backend
jas-min -d ./reports -b google:gemini-2.5-flash

# OpenAI backend (requires OPENAI_ASST_ID in .env)
jas-min -d ./reports -b openai
```

| Flag | Description |
|---|---|
| `-b, --backend-assistant <TYPE:MODEL>` | Launch the interactive assistant backend (`openai` or `google:model`) |
JAS-MIN identifies performance peaks by computing the ratio $R = \frac{\text{DB CPU}}{\text{DB Time}}$ from the Load Profile section of each snapshot.

- $R \approx 1.0$: CPU-bound workload; sessions spend most of their time on CPU.
- $R < 0.666$ (default threshold): wait-bound workload; significant time is spent on wait events rather than CPU.
- The threshold is configurable via `-t, --time-cpu-ratio`.

When $R$ falls below the threshold (and DB Time passes the `-f, --filter-db-time` filter), that snapshot is marked as a peak period. The top wait events, background events, and SQL statements from each peak snapshot are selected for deeper analysis.
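The rule above can be sketched in a few lines of Rust. `flag_peaks` is a hypothetical helper for illustration, not JAS-MIN's actual API:

```rust
/// Mark snapshots as peak periods: the DB CPU / DB Time ratio falls below
/// the configured threshold (wait-bound) and DB Time passes the filter.
/// Hypothetical helper for illustration, not JAS-MIN's actual code.
pub fn flag_peaks(
    db_time: &[f64],
    db_cpu: &[f64],
    ratio_threshold: f64,
    min_db_time: f64,
) -> Vec<bool> {
    db_time
        .iter()
        .zip(db_cpu)
        .map(|(&t, &c)| t > min_db_time && t > 0.0 && (c / t) < ratio_threshold)
        .collect()
}
```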
MAD is used as a robust anomaly detection method across multiple data domains. Unlike standard deviation, MAD is resistant to outliers, making it ideal for performance data with bursty patterns.
Computation:
Given a time series $X = \{x_1, x_2, \ldots, x_n\}$:

- Compute the median: $\tilde{x} = \text{median}(X)$
- Compute absolute deviations: $d_i = |x_i - \tilde{x}|$
- Compute the MAD: $\text{MAD} = \text{median}(\{d_1, d_2, \ldots, d_n\})$
- Compute the MAD score for each observation: $z_i = \frac{|x_i - \tilde{x}|}{\text{MAD}}$
- Flag as an anomaly if $z_i > \text{threshold}$ (default: 7.0)
Implementation Note: the median is computed using `select_nth_unstable`, which performs a partial selection instead of a full sort of the series.
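Under those definitions, a minimal Rust sketch using the same `select_nth_unstable` selection trick (hypothetical helpers, not the project's actual code):

```rust
/// Median via partial selection (no full sort). Returns the upper median
/// for even-length input, which is adequate for anomaly scoring.
fn median_of(mut values: Vec<f64>) -> f64 {
    let mid = values.len() / 2;
    let (_, m, _) = values.select_nth_unstable_by(mid, |a, b| a.partial_cmp(b).unwrap());
    *m
}

/// MAD score z_i = |x_i - median| / MAD for every observation.
/// A score above the threshold (default 7.0) flags an anomaly.
pub fn mad_scores(series: &[f64]) -> Vec<f64> {
    let med = median_of(series.to_vec());
    let devs: Vec<f64> = series.iter().map(|x| (x - med).abs()).collect();
    let mad = median_of(devs.clone());
    if mad == 0.0 {
        return vec![0.0; series.len()]; // flat series: nothing to flag
    }
    devs.iter().map(|d| d / mad).collect()
}
```

With the default threshold of 7.0, a burst like `100.0` in an otherwise small series scores far above 7 and is flagged, while the surrounding probes score near zero.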
Sliding Window Mode (`-W <PCT>`):

When the window size is less than 100%, JAS-MIN uses a sliding-window approach: for each observation, the median and MAD are computed over a local window of surrounding probes rather than over the entire series, so anomalies are judged against the local baseline instead of the global one.
Applied to:
- Foreground & Background Wait Events (total wait time)
- SQL Elapsed Times
- Load Profile metrics (per-second rates)
- Instance Activity Statistics
- Dictionary Cache (get requests)
- Library Cache (pin requests)
- Latch Activity (get requests)
- Time Model Statistics (time in seconds)
The Pearson correlation coefficient is computed between DB Time and:
- Every foreground/background wait event's total wait time
- Every SQL statement's elapsed time
- Every instance activity statistic
Additionally, JAS-MIN computes pairwise correlations between each SQL's elapsed time and every foreground wait event to identify which wait events most strongly co-occur with specific SQL statements.
NaN Protection: When a statistic or event has zero variance (all values identical), the Pearson formula produces NaN. JAS-MIN guards against this by treating non-finite correlation values as zero.
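The NaN guard can be sketched as follows (a hypothetical `pearson` helper, not JAS-MIN's actual code):

```rust
/// Pearson correlation with a guard: zero-variance inputs make the
/// denominator zero (producing NaN), which is reported as 0.0 instead.
pub fn pearson(x: &[f64], y: &[f64]) -> f64 {
    let n = x.len() as f64;
    let mx = x.iter().sum::<f64>() / n;
    let my = y.iter().sum::<f64>() / n;
    let (mut sxy, mut sxx, mut syy) = (0.0, 0.0, 0.0);
    for (a, b) in x.iter().zip(y) {
        sxy += (a - mx) * (b - my);
        sxx += (a - mx) * (a - mx);
        syy += (b - my) * (b - my);
    }
    let r = sxy / (sxx.sqrt() * syy.sqrt());
    if r.is_finite() { r } else { 0.0 } // zero-variance guard
}
```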
When correlating a large number of instance statistics with DB Time, JAS-MIN applies the Bonferroni correction to control the family-wise error rate:

$\alpha_{\text{corrected}} = \frac{\alpha}{m}$

where $m$ is the number of simultaneous tests. For example, with $\alpha = 0.05$ and 500 statistics tested, the per-test significance threshold becomes 0.0001.
To determine what drives DB Time changes, JAS-MIN computes first-order differences (deltas) of both DB Time and each feature (wait event, statistic, SQL elapsed time), standardizes them, and fits four regression models.
Pre-processing:
- Differencing: compute deltas $\Delta y_t = y_{t+1} - y_t$ (DB Time) and $\Delta x_{j,t} = x_{j,t+1} - x_{j,t}$ (features)
- Target centering: the target variable $\Delta y$ is centered by subtracting its mean (equivalent to fitting an implicit intercept), preventing global trends from leaking into predictor coefficients
- Standardization with Bessel's correction: each feature's deltas are standardized using the sample standard deviation with an $N-1$ denominator: $\hat{x}_{j,t} = \frac{\Delta x_{j,t} - \overline{\Delta x_j}}{s_j}$, where $s_j = \sqrt{\frac{1}{N-1}\sum_{t}(\Delta x_{j,t} - \overline{\Delta x_j})^2}$
- Compute the MAD of the raw deltas for impact scaling
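The differencing and standardization steps can be sketched as (hypothetical helpers, illustrating the pre-processing described above):

```rust
/// First-order differences: delta_t = x_{t+1} - x_t.
pub fn deltas(series: &[f64]) -> Vec<f64> {
    series.windows(2).map(|w| w[1] - w[0]).collect()
}

/// Standardize with Bessel's correction (sample std dev, N-1 denominator).
pub fn standardize(d: &[f64]) -> Vec<f64> {
    let n = d.len() as f64;
    let mean = d.iter().sum::<f64>() / n;
    let var = d.iter().map(|x| (x - mean) * (x - mean)).sum::<f64>() / (n - 1.0);
    let sd = var.sqrt();
    if sd == 0.0 {
        return vec![0.0; d.len()]; // constant deltas carry no signal
    }
    d.iter().map(|x| (x - mean) / sd).collect()
}
```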
Four Regression Models:
| Model | Method | Purpose |
|---|---|---|
| Ridge | Dense linear system solver with partial pivoting + L2 penalty: $\min_\beta \lVert \Delta y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_2^2$ | Dense, stabilized ranking of all contributing factors. Uses partial pivoting for numerical stability under ill-conditioned Gram matrices. |
| Elastic Net | Coordinate descent with L1+L2 penalty: $\min_\beta \tfrac{1}{2N} \lVert \Delta y - X\beta \rVert_2^2 + \lambda \left( \alpha \lVert \beta \rVert_1 + \tfrac{1-\alpha}{2} \lVert \beta \rVert_2^2 \right)$ | Sparse ranking highlighting dominant factors; handles collinearity by zeroing out redundant features. |
| Huber | Iteratively Reweighted Least Squares (IRLS) with Huber loss. The threshold $\delta$ switches the loss from quadratic (small residuals) to linear (large residuals). | Outlier-resistant ranking; downweights extreme snapshots. |
| Quantile 95 | IRLS with asymmetric check-loss function ($\tau = 0.95$) | Models the worst 5% of snapshots (tail risk). |
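The per-model building blocks can be sketched as follows, under the loss definitions above (hypothetical helpers; the actual solver code differs):

```rust
/// Soft-thresholding operator, the core update of Elastic Net
/// coordinate descent: shrinks toward zero and cuts off at +/- gamma.
pub fn soft_threshold(z: f64, gamma: f64) -> f64 {
    z.signum() * (z.abs() - gamma).max(0.0)
}

/// Huber IRLS weight: the quadratic region gets weight 1, the linear
/// region beyond delta gets delta / |r|, downweighting outliers.
pub fn huber_weight(residual: f64, delta: f64) -> f64 {
    if residual.abs() <= delta { 1.0 } else { delta / residual.abs() }
}

/// Check-loss IRLS weight for quantile tau (tau = 0.95 here): positive
/// residuals are weighted tau, negative ones 1 - tau, so the fit is
/// pulled toward the upper tail of the distribution.
pub fn quantile_weight(residual: f64, tau: f64, eps: f64) -> f64 {
    let side = if residual >= 0.0 { tau } else { 1.0 - tau };
    side / residual.abs().max(eps)
}
```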
Impact Score:

$\text{Impact}_j = |\beta_j| \times \text{MAD}(\Delta x_j)$

This quantifies the expected shift in DB Time for a typical perturbation in feature $j$: the standardized coefficient scaled by the feature's typical delta magnitude.

Signed Impact:

$\text{SignedImpact}_j = \beta_j \times \text{MAD}(\Delta x_j)$

This preserves directionality: positive values indicate factors that drive DB Time up (actual bottlenecks); negative values indicate factors associated with DB Time decreases. Rankings are sorted by signed impact descending, so actual bottlenecks appear first. This prevents anti-correlated events (e.g., idle events that increase when DB Time drops) from appearing as top bottlenecks.
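The ranking step can be sketched as follows (hypothetical helper; `beta` would come from any of the four models and `mad` from the raw deltas):

```rust
/// Rank features by signed impact = beta * MAD(raw deltas), descending,
/// so factors that push DB Time up come first. Input tuples are
/// (feature name, coefficient beta, MAD of the feature's raw deltas).
pub fn rank_by_signed_impact(features: &[(&str, f64, f64)]) -> Vec<(String, f64)> {
    let mut ranked: Vec<(String, f64)> = features
        .iter()
        .map(|&(name, beta, mad)| (name.to_string(), beta * mad))
        .collect();
    ranked.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    ranked
}
```

An anti-correlated idle event with a large negative coefficient lands at the bottom of this ranking, not the top, which is exactly the point of keeping the sign.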
Applied to seven gradient sections:
- DB Time vs. Foreground Wait Events (wait seconds)
- DB Time vs. Instance Statistics — Counters
- DB Time vs. Instance Statistics — Volumes (bytes)
- DB Time vs. Instance Statistics — Time metrics
- DB Time vs. SQL Elapsed Time
- DB CPU vs. Instance Statistics — CPU-related
- DB CPU vs. SQL CPU Time
When multiple predictors are highly correlated (e.g., `enq: TX - row lock contention` and `enq: TM - contention`, which often spike together), multivariate regression cannot reliably separate their individual effects. This manifests as individual Impact scores near zero despite the events clearly co-occurring with DB Time spikes.

JAS-MIN computes the Variance Inflation Factor (VIF) for each predictor to diagnose this:

$\text{VIF}_j = \frac{1}{1 - R_j^2}$

where $R_j^2$ is the coefficient of determination obtained by regressing predictor $j$ on all other predictors.
| VIF Range | Interpretation | Action |
|---|---|---|
| 1.0 – 5.0 | Acceptable | Individual coefficients are reliable |
| 5.0 – 10.0 | Moderate collinearity | Coefficients may be unstable; check group impact |
| 10.0 – 100.0 | High collinearity | Individual coefficients unreliable; use group impact |
| > 100.0 | Severe collinearity | Individual Impact is meaningless; only group impact is valid |
VIF diagnostics with interpretation labels (MODERATE_COLLINEARITY, HIGH_COLLINEARITY, SEVERE_COLLINEARITY) are included in the gradient HTML report and in the ReportForAI data sent to AI models.
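For intuition about the thresholds in the table: with exactly two predictors, the auxiliary $R_j^2$ is just the squared pairwise Pearson correlation, so VIF collapses to a closed form (illustrative sketch only; JAS-MIN computes VIF against all predictors jointly):

```rust
/// VIF for one of exactly two predictors: R^2 of one regressed on the
/// other equals the squared Pearson correlation r^2, so
/// VIF = 1 / (1 - r^2). Illustrative shortcut, not the general case.
pub fn vif_two_predictors(r: f64) -> f64 {
    1.0 / (1.0 - r * r)
}
```

Even a pairwise correlation of 0.9 only yields a VIF around 5; the VIF > 100 "severe" regime corresponds to near-perfect correlation (r above roughly 0.995).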
When VIF exceeds 10.0 for multiple predictors, JAS-MIN automatically identifies groups of collinear predictors using greedy pairwise correlation clustering: predictors whose pairwise correlation exceeds the clustering threshold are merged into one group. For each collinear group $G$:

- The raw differenced series of all group members are summed into a single combined signal: $\Delta x_{\text{group},t} = \sum_{j \in G} \Delta x_{j,t}$
- A univariate regression is performed: $\beta_{\text{group}} = \frac{\text{Cov}(\Delta x_{\text{group}}, \Delta y)}{\text{Var}(\Delta x_{\text{group}})}$
- The combined group impact is computed: $|\beta_{\text{group}}| \times \text{MAD}(\Delta x_{\text{group}})$
This resolves the key limitation of multivariate regression when applied to correlated Oracle performance metrics. For example:

> If `enq: TX - row lock contention` and `enq: TM - contention` have individual VIF > 800 and individual Impact ≈ 0 (because Ridge/Huber/EN cannot separate their effects), the collinear group impact may reveal a combined impact of 42.3, correctly representing their joint contribution to DB Time during spikes.

Collinear group impacts are displayed in the gradient HTML report and included in the ReportForAI for AI interpretation.
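The three group-impact steps can be sketched as a single function over the differenced series (hypothetical helper, not the actual implementation):

```rust
/// Combined impact of a collinear group: sum the members' differenced
/// series, fit a univariate slope beta = Cov/Var, then scale by MAD.
pub fn group_impact(members: &[Vec<f64>], dy: &[f64]) -> f64 {
    let n = dy.len();
    // Step 1: sum member delta series into one combined signal.
    let combined: Vec<f64> = (0..n)
        .map(|t| members.iter().map(|m| m[t]).sum())
        .collect();
    let mean = |v: &[f64]| v.iter().sum::<f64>() / v.len() as f64;
    let (mc, my) = (mean(&combined), mean(dy));
    // Step 2: univariate regression beta = Cov(combined, dy) / Var(combined).
    let cov: f64 = combined.iter().zip(dy).map(|(a, b)| (a - mc) * (b - my)).sum();
    let var: f64 = combined.iter().map(|a| (a - mc) * (a - mc)).sum();
    let beta = cov / var;
    // Step 3: combined impact = |beta| * MAD(combined).
    let median = |mut v: Vec<f64>| -> f64 {
        v.sort_by(|a, b| a.partial_cmp(b).unwrap());
        v[v.len() / 2] // upper median, fine for a sketch
    };
    let med = median(combined.clone());
    let mad = median(combined.iter().map(|x| (x - med).abs()).collect());
    beta.abs() * mad
}
```

Two perfectly collinear members whose sum tracks the DB Time deltas produce a clean nonzero group impact even though a multivariate fit could assign each member an impact near zero.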
After fitting all four models, JAS-MIN automatically classifies each feature by checking its presence in the Top-N of each model (positive gradient coefficient and non-zero impact):
| Classification | Models Present | Interpretation | Priority |
|---|---|---|---|
| `CONFIRMED_BOTTLENECK` | All 4 | Systematic, robust bottleneck | CRITICAL |
| `CONFIRMED_BOTTLENECK_EN_COLLINEAR` | Ridge + Huber + Q95 | Bottleneck masked by L1 collinearity | CRITICAL |
| `STRONG_CONTRIBUTOR` | Ridge + EN + Huber | Reliable systematic contributor | MEDIUM |
| `STABLE_CONTRIBUTOR` | Ridge + Huber | Steady background contributor | LOW-MEDIUM |
| `TAIL_RISK` | Q95 only (not Ridge) | Rare catastrophic spikes | HIGH |
| `TAIL_OUTLIER` | Ridge + Q95 (not Huber) | Extreme snapshots that ARE the worst periods | HIGH |
| `OUTLIER_DRIVEN` | Ridge only (not Huber) | Impact from a few extreme snapshots | MEDIUM |
| `SPARSE_DOMINANT` | EN only (not Ridge) | Dominant among a correlated group | MEDIUM |
| `ROBUST_ONLY` | Huber only | Background factor visible only without outliers | LOW |
Integration with VIF: When a predictor is classified as a bottleneck but has VIF > 10, the classification should be interpreted in conjunction with the collinear group impact. The cross-model classification identifies what is important; the VIF and group impact explain how much it truly contributes.
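The triangulation table can be sketched as a match over Top-N membership flags. This is a hypothetical reading of the table; the exact rule precedence in JAS-MIN may differ, and the `UNCLASSIFIED` fallback is an assumption:

```rust
/// Classify a feature from its presence in each model's Top-N
/// (ridge, elastic net, huber, quantile-95). Labels follow the
/// triangulation table; rule order here is illustrative.
pub fn classify(ridge: bool, en: bool, huber: bool, q95: bool) -> &'static str {
    match (ridge, en, huber, q95) {
        (true, true, true, true) => "CONFIRMED_BOTTLENECK",
        (true, false, true, true) => "CONFIRMED_BOTTLENECK_EN_COLLINEAR",
        (true, true, true, false) => "STRONG_CONTRIBUTOR",
        (true, _, true, _) => "STABLE_CONTRIBUTOR",
        (false, _, _, true) => "TAIL_RISK",
        (true, _, false, true) => "TAIL_OUTLIER",
        (true, _, false, false) => "OUTLIER_DRIVEN",
        (false, true, _, _) => "SPARSE_DOMINANT",
        (false, false, true, false) => "ROBUST_ONLY",
        _ => "UNCLASSIFIED", // assumed fallback for empty membership
    }
}
```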
For wait events, SQL statements, and Load Profile metrics, JAS-MIN computes:

- Mean ($\bar{x}$) and standard deviation ($\sigma$)
- Median, Q1, Q3, IQR, lower/upper fences
- Min, max, variance
- Weighted averages for latch contention metrics (weighted by get requests)
| Vendor | Env Variable | Flag Format |
|---|---|---|
| OpenAI | `OPENAI_API_KEY` | `openai:gpt-4-turbo:EN` |
| Google Gemini | `GEMINI_API_KEY` | `google:gemini-2.5-flash:EN` |
| OpenRouter | `OPENROUTER_API_KEY` | `openrouter:anthropic/claude-sonnet-4:EN` |
| OpenRouter (modular) | `OPENROUTER_API_KEY` | `openroutersmall:model-name:EN` |
| Local (LM Studio, Ollama) | `LOCAL_API_KEY`, `LOCAL_BASE_URL` | `local:model-name:EN` |
The language code (EN, PL, etc.) controls the output language of the AI-generated report.
For models with large context windows (Gemini, OpenAI, OpenRouter), JAS-MIN sends the entire ReportForAI structure (serialized as TOON — a compact JSON-like format) along with a comprehensive system prompt to a single API call.
```
jas-min -d ./reports --ai google:gemini-2.5-flash:EN
```
The system prompt includes:
- Complete role description and analytical methodology (6-step reasoning)
- Gradient analysis interpretation rules with cross-model classification table
- VIF diagnostics and collinear group impact interpretation guidelines
- Output structure specification (11-section Markdown report)
- Initialization parameter analysis instructions with source requirements
Output: <logfile>_gemini.md → auto-converted to .html with interlinks.
For models with smaller context windows (openroutersmall: or local:), JAS-MIN uses a modular multi-step pipeline:
- Section Extraction: The ReportForAI is split into ~20 independent sections (Baseline, FG Waits, BG Waits, SQLs, I/O, Latches, 8× Segments, Stats Correlation, Load Profile Anomalies, Anomaly Clusters, 5× Gradients).
- Budget-Aware Trimming: each section is trimmed to fit within the configured `--tokens-budget` using binary search over the number of items.
- Context Capsule: a compact summary (general_data + top spikes) is attached to every section call as a temporal anchor.
- Per-Section Analysis: Each section is sent to the LLM as an independent call with the system prompt + context capsule + section data.
- Composition: All section notes are bundled and sent in a final compose step to produce the unified Markdown report.
```
jas-min -d ./reports --ai local:qwen3-32b:EN -B 60000
```

With `-D <N>`, JAS-MIN asks Gemini to select the top-N most critical snapshots, then sends the full AWR JSON data for each snapshot for deep-dive analysis:
```
jas-min -d ./reports --ai google:gemini-2.5-flash:EN -D 5
```

The assistant mode launches a local HTTP server (Axum) that proxies chat messages between the browser-based JAS-MIN dashboard and the AI backend:
Browser (JAS-MIN HTML) ←→ localhost:<PORT>/api/chat ←→ AI Backend
- OpenAI Backend: Creates a thread with file search (vector store) using the Assistants API v2.
- Gemini Backend: Maintains conversation history with the full report in context.
The assistant is embedded in the main HTML report as a collapsible chat widget.
The ReportForAI JSON/TOON document sent to AI models contains:
| Section | Content |
|---|---|
| `general_data` | MAD/ratio analysis description |
| `top_spikes_marked` | Peak periods with DB Time, DB CPU, ratio |
| `top_foreground_wait_events` | Wait stats, correlations, MAD anomalies, associated tables from SQL text |
| `top_background_wait_events` | Background wait stats and anomalies |
| `top_sqls_by_elapsed_time` | SQL metrics, ASH events, correlations, MAD |
| `io_stats_by_function_summary` | Per-function I/O (LGWR, DBWR, etc.) |
| `latch_activity_summary` | Latch contention metrics |
| `top_10_segments_by_*` | 8 segment ranking sections |
| `instance_stats_pearson_correlation` | Statistics correlated with DB Time |
| `load_profile_anomalies` | Load Profile MAD anomalies |
| `anomaly_clusters` | Temporally grouped cross-domain anomalies |
| `db_time_gradient_*` | 5 gradient sections (DB Time) with VIF diagnostics and collinear group impacts |
| `db_cpu_gradient_*` | 2 gradient sections (DB CPU) with VIF diagnostics and collinear group impacts |
| `initialization_parameters` | Oracle init.ora parameters |
Each gradient section (DbTimeGradientSection) contains:
| Field | Content |
|---|---|
| `settings` | Regularization hyperparameters and unit descriptions |
| `ridge_top` | Top-50 Ridge regression results (event, coefficient, impact) |
| `elastic_net_top` | Top-50 Elastic Net results (non-zero only) |
| `huber_top` | Top-50 Huber robust regression results |
| `quantile95_top` | Top-50 Quantile-95 regression results |
| `cross_model_classifications` | Cross-model triangulation labels and priority |
| `vif_diagnostics` | Predictors with VIF > 5 and interpretation labels |
| `collinear_group_impacts` | Group members, combined coefficient, and combined impact |
- `reasonings.txt`: place it in `$JASMIN_HOME/` or the current directory; its contents are appended to the system prompt as advanced rules.
- URL Context (`-u <file>`): a JSON file mapping event/SQL names to URLs, used with Gemini's URL context tool for grounding responses.
After running JAS-MIN, the following outputs are generated:
<directory>.json # Structured AWR/STATSPACK data
<directory>.txt # Text analysis log
<directory>.html_reports/
├── jasmin_main.html # Main interactive dashboard
├── fg/ # Foreground wait event detail pages
│ └── fg_<event_name>.html
├── bg/ # Background wait event detail pages
│ └── bg_<event_name>.html
├── sqlid/ # SQL statement detail pages
│ └── sqlid_<sql_id>.html
├── stats/
│ ├── statistics_corr.html # Instance statistics correlation table
│ ├── gradient.html # DB Time gradient analysis (with VIF & groups)
│ ├── gradient_cpu.html # DB CPU gradient analysis (with VIF & groups)
│ ├── global_statistics.json # Load Profile summary statistics
│ ├── jasmin_highlight.html # Load Profile box plots
│ └── inst_stat_<name>.html # Individual statistic detail pages
├── iostats/ # I/O statistics by function
│ ├── iostats_zMAIN.html
│ └── iostats_<function>.html
├── latches/
│ └── latchstats_activity.html # Latch activity summary table
├── segstats/ # Segment statistics tables
│ └── segstats_<stat_name>.html
└── jasmin/anomalies/ # Anomaly CSV exports
├── anomalies_reference.csv
└── <snap_id>.csv
report_for_ai.toon # TOON-encoded data for AI consumption
Create a .env file in $JASMIN_HOME or the current directory:
# AI API Keys (set the ones you use)
OPENAI_API_KEY=sk-...
GEMINI_API_KEY=AI...
OPENROUTER_API_KEY=sk-or-...
# Local model configuration
LOCAL_API_KEY=lm-studio
LOCAL_BASE_URL=http://localhost:1234/v1/chat/completions
LOCAL_MODEL=my-local-model
# OpenAI custom endpoint (optional)
OPENAI_URL=https://api.openai.com/
# Backend assistant
PORT=3000
OPENAI_ASST_ID=asst_...   # Required for OpenAI assistant backend

Set `JASMIN_HOME` to load `.env` and `reasonings.txt` from a centralized location:
export JASMIN_HOME=/path/to/jasmin_home

jas-min [OPTIONS]
Options:
--file <FILE> Parse a single text or HTML file
-d, --directory <DIR> Parse whole directory of files
-o, --outfile <FILE> Write output to non-default file
-t, --time-cpu-ratio <FLOAT> DB CPU / DB Time ratio threshold [default: 0.666]
-f, --filter-db-time <FLOAT> Filter only DB Time > this value [default: 0.0]
-i, --id-sqls <SQL_IDS> Include specific SQL_IDs (comma-separated)
-j, --json-file <FILE> Analyze a previously generated JSON file
-s, --snap-range <BEGIN-END> Filter snap ID range [default: 0-666666666]
-q, --quiet Suppress terminal output
-a, --ai <VENDOR:MODEL:LANG> AI model for interpretation
-C, --token-count-factor <N> Output token multiplier [default: 8]
-b, --backend-assistant <TYPE> Launch backend assistant (openai | google:model)
-m, --mad-threshold <FLOAT> MAD anomaly threshold [default: 7.0]
-W, --mad-window-size <PCT> MAD sliding window size (% of probes) [default: 100]
-P, --parallel <N> Parallelism level [default: 4]
-S, --security-level <N> Security level: 0, 1, or 2 [default: 0]
-u, --url-context-file <FILE> URL context file for Gemini
-D, --deep-check <N> Deep-analyze top-N snapshots [default: 0]
-B, --tokens-budget <N> Token budget for modular LLM [default: 80000]
-R, --ridge-lambda <FLOAT> Ridge L2 regularization [default: 50.0]
-E, --en-lambda <FLOAT> Elastic Net regularization [default: 30.0]
-A, --en-alpha <FLOAT> Elastic Net L1/L2 mix [default: 0.666]
-I, --en-max-iter <N> Elastic Net max iterations [default: 5000]
-T, --en-tol <FLOAT> Elastic Net convergence tolerance [default: 1e-6]
-h, --help Print help
-V, --version Print version
- JAS-MIN Introduction (blog.ora-600.pl)
- JAS-MIN and AI (blog.ora-600.pl)
- JAS-MIN Part 1 — Digging Deep into AWR & STATSPACK (struktuur.pl)
- Kamil Stawiarski — [email protected] · blog.ora-600.pl
- Radosław Kut — [email protected] · blog.struktuur.pl
Built by ORA-600 | Database Whisperers 🍺
See repository for license details.
If you need expert Oracle performance tuning, reach out to ora-600.pl
