chore: automate speckit pipeline and remove clarify anomaly #66

brandon-fox wants to merge 7 commits into main from
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the efficiency and consistency of the speckit pipeline by automating its execution across all relevant specification directories and removing an unnecessary step. The changes ensure that all specifications are processed uniformly and reduce manual intervention in triggering workflows.

Highlights
Changelog
Ignored Files
Activity
2 issues found across 2 files
<file name="trigger-spec-pipelines.ps1">
<violation number="1" location="trigger-spec-pipelines.ps1:4">
P2: Filter `"0*"` doesn't match the comment's stated intent ("directories that start with a number"). It silently excludes any future directory starting with a non-zero digit (e.g. `100-*`). With 47 features already, this ceiling is foreseeable.</violation>
<violation number="2" location="trigger-spec-pipelines.ps1:9">
P2: Failed `gh workflow run` calls are silently ignored, yet the script unconditionally reports success with "All pipelines triggered." Consider checking `$LASTEXITCODE` after each `gh` call and tracking/reporting failures.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```powershell
foreach ($spec in $specDirs) {
    if (Test-Path "$spec\spec.md") {
        Write-Host "Triggering speckit-pipeline for $spec..."
        gh workflow run speckit-pipeline.yml -f feature_dir="$spec"
```
P2: Failed gh workflow run calls are silently ignored, yet the script unconditionally reports success with "All pipelines triggered." Consider checking $LASTEXITCODE after each gh call and tracking/reporting failures.
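A minimal sketch of the suggested failure tracking (hypothetical variable names; assumes the `gh` CLI is on `PATH` and sets `$LASTEXITCODE` on failure):

```powershell
$failed = @()
foreach ($spec in $specDirs) {
    if (Test-Path "$spec\spec.md") {
        Write-Host "Triggering speckit-pipeline for $spec..."
        gh workflow run speckit-pipeline.yml -f feature_dir="$spec"
        # Record any dispatch that exited non-zero instead of silently continuing
        if ($LASTEXITCODE -ne 0) { $failed += $spec }
        Start-Sleep -Seconds 2
    }
}
if ($failed.Count -gt 0) {
    Write-Warning "Failed to trigger: $($failed -join ', ')"
    exit 1
}
Write-Host "All pipelines triggered."
```

This way the final success message and the script's exit code both reflect what actually happened.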
```powershell
$ErrorActionPreference = "Continue"

# Find all directories that start with a number (e.g., 001-ocr-parser)
$specDirs = Get-ChildItem -Directory -Filter "0*" | Select-Object -ExpandProperty Name
```
P2: Filter "0*" doesn't match the comment's stated intent ("directories that start with a number"). It silently excludes any future directory starting with a non-zero digit (e.g. 100-*). With 47 features already, this ceiling is foreseeable.
Code Review
This pull request introduces a PowerShell script to automate the triggering of the speckit-pipeline workflow for all feature specification directories. The script is a good step towards automation. My review includes suggestions to improve its robustness, performance, and adherence to PowerShell best practices. Specifically, I've recommended using a more precise filter for identifying spec directories, leveraging parallel execution for better performance in line with the 'Async-First' style guide principle, and replacing Write-Host with a more flexible output stream.
Note: Security Review did not run due to the size of the PR.
```powershell
$ErrorActionPreference = "Continue"

# Find all directories that start with a number (e.g., 001-ocr-parser)
$specDirs = Get-ChildItem -Directory -Filter "0*" | Select-Object -ExpandProperty Name
```
The current filter -Filter "0*" is not very specific and could inadvertently match directories that are not feature specifications (e.g., a temporary directory named 0-archive). Based on the repository's README.md, feature directories follow a NNN-feature-name pattern. Using a more precise regex filter with Where-Object would make the script more robust.
```powershell
$specDirs = Get-ChildItem -Directory | Where-Object { $_.Name -match '^\d{3}-.+' } | Select-Object -ExpandProperty Name
```
```powershell
foreach ($spec in $specDirs) {
    if (Test-Path "$spec\spec.md") {
        Write-Host "Triggering speckit-pipeline for $spec..."
        gh workflow run speckit-pipeline.yml -f feature_dir="$spec"
        # Small delay to prevent API rate limiting
        Start-Sleep -Seconds 2
    } else {
        Write-Host "Skipping $spec (No spec.md found)."
    }
}
```
The script processes each specification directory sequentially, with a 2-second pause between each one. With many specs, this can become very slow. For example, with nearly 50 specs, the script would take over 1.5 minutes just in sleep time.
This approach also doesn't align with the "Async-First" principle from the repository's style guide. A more efficient and modern PowerShell approach is to use a parallel foreach loop (ForEach-Object -Parallel), available in PowerShell 7+. This would trigger the workflows concurrently, significantly speeding up execution. You can control the degree of parallelism with the -ThrottleLimit parameter to manage API rate limiting more effectively than a static sleep.
```powershell
$specDirs | ForEach-Object -Parallel {
    $spec = $_
    if (Test-Path "$spec\spec.md") {
        Write-Host "Triggering speckit-pipeline for $spec..."
        gh workflow run speckit-pipeline.yml -f feature_dir="$spec"
    } else {
        Write-Host "Skipping $spec (No spec.md found)."
    }
} -ThrottleLimit 5  # Adjust throttle limit as needed to avoid API rate limiting
```
References
- The style guide mandates an 'Async-First' approach for I/O operations (REPO:35). The current sequential loop with `Start-Sleep` is inefficient. Using parallel execution aligns better with this principle.
```powershell
foreach ($spec in $specDirs) {
    if (Test-Path "$spec\spec.md") {
        Write-Host "Triggering speckit-pipeline for $spec..."
```
Using Write-Host is generally discouraged in PowerShell scripts as it writes directly to the console host, and its output cannot be captured or redirected. For informational messages, it's better to use Write-Information. This makes the script more versatile and allows callers to control output streams. This feedback also applies to the Write-Host calls on lines 13 and 17.
```suggestion
Write-Information "Triggering speckit-pipeline for $spec..."
```
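One caveat with this suggestion: `Write-Information` output is hidden by default, because `$InformationPreference` defaults to `SilentlyContinue`. A minimal sketch of opting in:

```powershell
# Callers must opt in to see the messages, either per-session...
$InformationPreference = 'Continue'
Write-Information "Triggering speckit-pipeline for $spec..."

# ...or per-invocation when calling the script:
# .\trigger-spec-pipelines.ps1 -InformationAction Continue
```

The per-invocation form only works if the script declares `[CmdletBinding()]`, so a `param()` block may need to be added alongside this change.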
2 issues found across 2 files (changes from recent commits).
<file name="trigger-spec-pipelines.ps1">
<violation number="1" location="trigger-spec-pipelines.ps1:9">
P1: Hardcoded `--ref chore/pipeline-automation` looks like a development/testing leftover. After this PR merges and the branch is deleted, all workflow dispatches from this script will fail. Remove the `--ref` flag so the workflow runs against the repository's default branch.</violation>
</file>
<file name=".github/workflows/speckit-pipeline.yml">
<violation number="1" location=".github/workflows/speckit-pipeline.yml:16">
P2: For `push`-triggered runs, `inputs.feature_dir` is always empty, so `github.run_id` is used — making every concurrency group unique and removing queuing protection. Two near-simultaneous pushes changing specs can now race on the PR-creation step. Consider falling back to a stable value like `github.sha` or keeping the original `github.ref` for push events:

```yaml
group: speckit-pipeline-${{ github.ref }}-${{ inputs.feature_dir || 'push' }}
```

This preserves per-feature parallelism for workflow_dispatch while still serializing push-triggered runs on the same branch.
</violation>
</file>
```powershell
foreach ($spec in $specDirs) {
    if (Test-Path "$spec\spec.md") {
        Write-Host "Triggering speckit-pipeline for $spec..."
        gh workflow run speckit-pipeline.yml --ref chore/pipeline-automation -f feature_dir="$spec"
```
P1: Hardcoded --ref chore/pipeline-automation looks like a development/testing leftover. After this PR merges and the branch is deleted, all workflow dispatches from this script will fail. Remove the --ref flag so the workflow runs against the repository's default branch.
```diff
- gh workflow run speckit-pipeline.yml --ref chore/pipeline-automation -f feature_dir="$spec"
+ gh workflow run speckit-pipeline.yml -f feature_dir="$spec"
```
```diff
 concurrency:
-  group: speckit-pipeline-${{ github.ref }}
+  group: speckit-pipeline-${{ github.ref }}-${{ inputs.feature_dir || github.run_id }}
```
P2: For push-triggered runs, inputs.feature_dir is always empty, so github.run_id is used, making every concurrency group unique and removing queuing protection. Two near-simultaneous pushes changing specs can now race on the PR-creation step. Consider falling back to a stable value like github.sha or keeping the original github.ref for push events:

```yaml
group: speckit-pipeline-${{ github.ref }}-${{ inputs.feature_dir || 'push' }}
```

This preserves per-feature parallelism for workflow_dispatch while still serializing push-triggered runs on the same branch.
```suggestion
group: speckit-pipeline-${{ github.ref }}-${{ inputs.feature_dir || 'push' }}
```
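For reference, a sketch of the full concurrency block under this suggestion (field names per GitHub Actions workflow syntax; `cancel-in-progress: false` carried over from the existing workflow):

```yaml
concurrency:
  # Per-feature groups for workflow_dispatch; a single stable 'push' group per ref otherwise
  group: speckit-pipeline-${{ github.ref }}-${{ inputs.feature_dir || 'push' }}
  cancel-in-progress: false
```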
2 issues found across 50 files (changes from recent commits).
<file name="GEMINI.md">
<violation number="1" location="GEMINI.md:1">
P2: A UTF-8 BOM character (`\uFEFF`) was introduced at the start of the file. UTF-8 files don't need a BOM, and it can cause the first heading to be silently misinterpreted by markdown parsers and CI lint checks.</violation>
</file>
<file name="044-mcts-foundation/research.md">
<violation number="1" location="044-mcts-foundation/research.md:1">
P2: A UTF-8 BOM (U+FEFF) was introduced at the start of the file. This invisible character precedes the `#` heading and can cause markdown parsers and linting tools (e.g., markdownlint MD041) to fail to recognize the first-line heading.</violation>
</file>
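A hedged sketch of one way to strip the BOM from the two flagged files (assumes PowerShell 7+, where `-Encoding utf8` writes UTF-8 without a BOM; file list taken from the violations above):

```powershell
foreach ($file in 'GEMINI.md', '044-mcts-foundation/research.md') {
    # Read the whole file as one string, drop a leading U+FEFF if present,
    # and write it back BOM-free. -NoNewline avoids appending an extra newline.
    $text = Get-Content -Raw -Path $file
    Set-Content -Path $file -Value $text.TrimStart([char]0xFEFF) -Encoding utf8 -NoNewline
}
```

On Windows PowerShell 5.1 the `utf8` encoding still emits a BOM, so this fix should be run under `pwsh`.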
```diff
@@ -1,13 +1,15 @@
-# specs Development Guidelines
+# specs Development Guidelines
```

(The removed and added lines differ only by an invisible leading U+FEFF BOM.)
P2: A UTF-8 BOM character (\uFEFF) was introduced at the start of the file. UTF-8 files don't need a BOM, and it can cause the first heading to be silently misinterpreted by markdown parsers and CI lint checks.
```suggestion
# specs Development Guidelines
```
```diff
@@ -1,4 +1,4 @@
-# Phase 0: Research & Architecture Decisions (FEAT-044)
+# Phase 0: Research & Architecture Decisions (FEAT-044)
```

(The removed and added lines differ only by an invisible leading U+FEFF BOM.)
P2: A UTF-8 BOM (U+FEFF) was introduced at the start of the file. This invisible character precedes the # heading and can cause markdown parsers and linting tools (e.g., markdownlint MD041) to fail to recognize the first-line heading.
```suggestion
# Phase 0: Research & Architecture Decisions (FEAT-044)
```
🔍 Specification Analysis Report:

| ID | Category | Severity | Location | Summary | Recommendation |
|---|---|---|---|---|---|
| D1 | Duplication | 🔴 CRITICAL | vindicta-engine/src/vindicta_engine/mcts/config.py | Missing `arena_capacity` parameter in `ArenaAllocator` class constructor, potentially leading to incorrect memory allocation. | Add `arena_capacity` argument to the `ArenaAllocator` class constructor, defaulting to 1024 MB. |
| A2 | Ambiguity | 🟠 HIGH | vindicta-engine/src/vindicta_engine/mcts/arena.py | Imprecise unit testing for `ArenaAllocator` in vindicta-engine/tests/mcts/test_arena.py, potentially leading to incorrect memory allocation and arena capacity. | Implement more thorough unit tests for the `ArenaAllocator` class, including coverage of various input scenarios. |
| U3 | Underspecification | 🟡 MEDIUM | vindicta-engine/src/vindicta_foundation/models/mcts.py | Lack of explicit implementation details in `GameState` and `SearchNode`, leading to potential confusion regarding data flow and search logic. | Expand state model specifications for `GameState` and `SearchNode`, clarifying data structures, operations, and expected behavior. |
| I4 | Inconsistency | 🟠 HIGH | vindicta-engine/src/vindicta_foundation/models/__init__.py | Ambiguous handling of `unit_state_flags` in the `GameState` model, leading to potential inconsistencies in game state representation. | Standardize `unit_state_flags` logic for consistency and prevent conflicting values. |
| C5 | Coverage Gap | 🟡 MEDIUM | vindicta-engine/tests/features/arena_limits.feature | Incomplete BDD feature file for arena_limits, lacking detailed descriptions of specific test cases and expected outcomes. | Expand the arena_limits feature file to include more detailed test scenarios, covering various arena configurations and edge cases. |
⛔ CRITICAL issues detected. These MUST be resolved before implementation.
Generated by speckit-analyze using Ollama (gemma2:2b)
Created Issues

- [044-mcts-foundation] Duplication: Missing 'arena_capacity' parameter in ArenaAllocator class constructor, potentia #67
- [044-mcts-foundation] Ambiguity: Imprecise unit testing for ArenaAllocator in `vindicta-engine/tests/mcts/test_ar #68
- [044-mcts-foundation] Inconsistency: Ambiguous handling of `unit_state_flags` in the `GameState` model, leading to po #69



Removes the Clarify step from the speckit pipeline, as `extended_clarifications.md` is an anomaly. Adds a trigger script, `trigger-spec-pipelines.ps1`, to execute the pipeline for all spec directories.