chore: automate speckit pipeline and remove clarify anomaly #66

Open
brandon-fox wants to merge 7 commits into main from chore/pipeline-automation

Conversation

@brandon-fox
Member

Removes the Clarify step from the speckit pipeline, as extended_clarifications.md is an anomaly. Adds a trigger script trigger-spec-pipelines.ps1 to execute the pipeline for all spec directories.
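For orientation, a sketch of what the trigger script plausibly looks like, reconstructed from the snippets quoted in the review threads of this PR (the final status message is inferred from a reviewer quote; the actual file may differ):

```powershell
$ErrorActionPreference = "Continue"

# Find all spec directories that start with a number (e.g., 001-ocr-parser)
$specDirs = Get-ChildItem -Directory -Filter "0*" | Select-Object -ExpandProperty Name

foreach ($spec in $specDirs) {
    if (Test-Path "$spec\spec.md") {
        Write-Host "Triggering speckit-pipeline for $spec..."
        gh workflow run speckit-pipeline.yml -f feature_dir="$spec"
        # Small delay to prevent API rate limiting
        Start-Sleep -Seconds 2
    } else {
        Write-Host "Skipping $spec (No spec.md found)."
    }
}
Write-Host "All pipelines triggered."
```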

@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the efficiency and consistency of the speckit pipeline by automating its execution across all relevant specification directories and removing an unnecessary step. The changes ensure that all specifications are processed uniformly and reduce manual intervention in triggering workflows.

Highlights

  • Pipeline Automation: Introduced a new PowerShell script, "trigger-spec-pipelines.ps1", to automate the execution of the "speckit-pipeline.yml" workflow for all specification directories. This script identifies directories starting with a number (e.g., "001-ocr-parser") and triggers the GitHub Actions workflow for each that contains a "spec.md" file, incorporating a delay to prevent API rate limiting.
  • Speckit Pipeline Refinement: Removed the 'Clarify' step from the speckit pipeline, as "extended_clarifications.md" was identified as an anomaly, streamlining the overall workflow.
Changelog
  • trigger-spec-pipelines.ps1
    • Added a new PowerShell script to automate the triggering of speckit pipelines for all spec directories.
Ignored Files
  • Ignored by pattern: .github/workflows/** (1)
    • .github/workflows/speckit-pipeline.yml
Activity
  • No specific activity (comments, reviews, or progress updates) has been recorded for this pull request yet.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@cubic-dev-ai cubic-dev-ai bot left a comment


2 issues found across 2 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="trigger-spec-pipelines.ps1">

<violation number="1" location="trigger-spec-pipelines.ps1:4">
P2: Filter `"0*"` doesn't match the comment's stated intent ("directories that start with a number"). It silently excludes any future directory starting with a non-zero digit (e.g. `100-*`). With 47 features already, this ceiling is foreseeable.</violation>

<violation number="2" location="trigger-spec-pipelines.ps1:9">
P2: Failed `gh workflow run` calls are silently ignored, yet the script unconditionally reports success with "All pipelines triggered." Consider checking `$LASTEXITCODE` after each `gh` call and tracking/reporting failures.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Comment thread: trigger-spec-pipelines.ps1 (Outdated)

```powershell
foreach ($spec in $specDirs) {
    if (Test-Path "$spec\spec.md") {
        Write-Host "Triggering speckit-pipeline for $spec..."
        gh workflow run speckit-pipeline.yml -f feature_dir="$spec"
```

@cubic-dev-ai cubic-dev-ai bot Mar 8, 2026


P2: Failed gh workflow run calls are silently ignored, yet the script unconditionally reports success with "All pipelines triggered." Consider checking $LASTEXITCODE after each gh call and tracking/reporting failures.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At trigger-spec-pipelines.ps1, line 9:

<comment>Failed `gh workflow run` calls are silently ignored, yet the script unconditionally reports success with "All pipelines triggered." Consider checking `$LASTEXITCODE` after each `gh` call and tracking/reporting failures.</comment>

<file context>
@@ -0,0 +1,17 @@
+foreach ($spec in $specDirs) {
+    if (Test-Path "$spec\spec.md") {
+        Write-Host "Triggering speckit-pipeline for $spec..."
+        gh workflow run speckit-pipeline.yml -f feature_dir="$spec"
+        # Small delay to prevent API rate limiting
+        Start-Sleep -Seconds 2
</file context>
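One way to act on this suggestion, sketched against the loop quoted above; the `$failed` accumulator is illustrative and not part of the PR:

```powershell
$failed = @()  # illustrative: collect specs whose workflow dispatch failed
foreach ($spec in $specDirs) {
    if (Test-Path "$spec\spec.md") {
        Write-Host "Triggering speckit-pipeline for $spec..."
        gh workflow run speckit-pipeline.yml -f feature_dir="$spec"
        # gh returns a non-zero exit code on failure; record it instead of ignoring it
        if ($LASTEXITCODE -ne 0) { $failed += $spec }
        Start-Sleep -Seconds 2
    }
}
if ($failed.Count -gt 0) {
    Write-Warning "Failed to trigger: $($failed -join ', ')"
    exit 1
}
Write-Host "All pipelines triggered."
```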

```powershell
$ErrorActionPreference = "Continue"

# Find all directories that start with a number (e.g., 001-ocr-parser)
$specDirs = Get-ChildItem -Directory -Filter "0*" | Select-Object -ExpandProperty Name
```

@cubic-dev-ai cubic-dev-ai bot Mar 8, 2026


P2: Filter "0*" doesn't match the comment's stated intent ("directories that start with a number"). It silently excludes any future directory starting with a non-zero digit (e.g. 100-*). With 47 features already, this ceiling is foreseeable.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At trigger-spec-pipelines.ps1, line 4:

<comment>Filter `"0*"` doesn't match the comment's stated intent ("directories that start with a number"). It silently excludes any future directory starting with a non-zero digit (e.g. `100-*`). With 47 features already, this ceiling is foreseeable.</comment>

<file context>
@@ -0,0 +1,17 @@
+$ErrorActionPreference = "Continue"
+
+# Find all directories that start with a number (e.g., 001-ocr-parser)
+$specDirs = Get-ChildItem -Directory -Filter "0*" | Select-Object -ExpandProperty Name
+
+foreach ($spec in $specDirs) {
</file context>

Copy link
Copy Markdown

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a PowerShell script to automate the triggering of the speckit-pipeline workflow for all feature specification directories. The script is a good step towards automation. My review includes suggestions to improve its robustness, performance, and adherence to PowerShell best practices. Specifically, I've recommended using a more precise filter for identifying spec directories, leveraging parallel execution for better performance in line with the 'Async-First' style guide principle, and replacing Write-Host with a more flexible output stream.

Note: Security Review did not run due to the size of the PR.

```powershell
$ErrorActionPreference = "Continue"

# Find all directories that start with a number (e.g., 001-ocr-parser)
$specDirs = Get-ChildItem -Directory -Filter "0*" | Select-Object -ExpandProperty Name
```


medium

The current filter -Filter "0*" is not very specific and could inadvertently match directories that are not feature specifications (e.g., a temporary directory named 0-archive). Based on the repository's README.md, feature directories follow a NNN-feature-name pattern. Using a more precise regex filter with Where-Object would make the script more robust.

$specDirs = Get-ChildItem -Directory | Where-Object { $_.Name -match '^\d{3}-.+' } | Select-Object -ExpandProperty Name

Comment on lines +6 to +15
```powershell
foreach ($spec in $specDirs) {
    if (Test-Path "$spec\spec.md") {
        Write-Host "Triggering speckit-pipeline for $spec..."
        gh workflow run speckit-pipeline.yml -f feature_dir="$spec"
        # Small delay to prevent API rate limiting
        Start-Sleep -Seconds 2
    } else {
        Write-Host "Skipping $spec (No spec.md found)."
    }
}
```


medium

The script processes each specification directory sequentially, with a 2-second pause between each one. With many specs, this can become very slow. For example, with nearly 50 specs, the script would take over 1.5 minutes just in sleep time.

This approach also doesn't align with the "Async-First" principle from the repository's style guide. A more efficient and modern PowerShell approach is to use a parallel foreach loop (ForEach-Object -Parallel), available in PowerShell 7+. This would trigger the workflows concurrently, significantly speeding up execution. You can control the degree of parallelism with the -ThrottleLimit parameter to manage API rate limiting more effectively than a static sleep.

```powershell
$specDirs | ForEach-Object -Parallel {
    $spec = $_
    if (Test-Path "$spec\spec.md") {
        Write-Host "Triggering speckit-pipeline for $spec..."
        gh workflow run speckit-pipeline.yml -f feature_dir="$spec"
    } else {
        Write-Host "Skipping $spec (No spec.md found)."
    }
} -ThrottleLimit 5 # Adjust throttle limit as needed to avoid API rate limiting
```
References
  1. The style guide mandates an 'Async-First' approach for I/O operations (REPO:35). The current sequential loop with Start-Sleep is inefficient. Using parallel execution aligns better with this principle. (link)


foreach ($spec in $specDirs) {
if (Test-Path "$spec\spec.md") {
Write-Host "Triggering speckit-pipeline for $spec..."
Copy link
Copy Markdown

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

medium

Using Write-Host is generally discouraged in PowerShell scripts as it writes directly to the console host, and its output cannot be captured or redirected. For informational messages, it's better to use Write-Information. This makes the script more versatile and allows callers to control output streams. This feedback also applies to the Write-Host calls on lines 13 and 17.

```powershell
Write-Information "Triggering speckit-pipeline for $spec..."
```
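One caveat with this suggestion: `Write-Information` output is hidden by default (`$InformationPreference` defaults to `SilentlyContinue`), so a script switching from `Write-Host` would go silent unless the stream is enabled. A minimal sketch:

```powershell
# Enable the information stream for this scope; callers can alternatively pass
# -InformationAction Continue when invoking the script.
$InformationPreference = 'Continue'
Write-Information "Triggering speckit-pipeline for $spec..."
```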


@cubic-dev-ai cubic-dev-ai bot left a comment


2 issues found across 2 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="trigger-spec-pipelines.ps1">

<violation number="1" location="trigger-spec-pipelines.ps1:9">
P1: Hardcoded `--ref chore/pipeline-automation` looks like a development/testing leftover. After this PR merges and the branch is deleted, all workflow dispatches from this script will fail. Remove the `--ref` flag so the workflow runs against the repository's default branch.</violation>
</file>

<file name=".github/workflows/speckit-pipeline.yml">

<violation number="1" location=".github/workflows/speckit-pipeline.yml:16">
P2: For `push`-triggered runs, `inputs.feature_dir` is always empty, so `github.run_id` is used — making every concurrency group unique and removing queuing protection. Two near-simultaneous pushes changing specs can now race on the PR-creation step. Consider falling back to a stable value like `github.sha` or keeping the original `github.ref` for push events:

```yaml
group: speckit-pipeline-${{ github.ref }}-${{ inputs.feature_dir || 'push' }}
```

This preserves per-feature parallelism for workflow_dispatch while still serializing push-triggered runs on the same branch.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

```powershell
foreach ($spec in $specDirs) {
    if (Test-Path "$spec\spec.md") {
        Write-Host "Triggering speckit-pipeline for $spec..."
        gh workflow run speckit-pipeline.yml --ref chore/pipeline-automation -f feature_dir="$spec"
```

@cubic-dev-ai cubic-dev-ai bot Mar 14, 2026


P1: Hardcoded --ref chore/pipeline-automation looks like a development/testing leftover. After this PR merges and the branch is deleted, all workflow dispatches from this script will fail. Remove the --ref flag so the workflow runs against the repository's default branch.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At trigger-spec-pipelines.ps1, line 9:

<comment>Hardcoded `--ref chore/pipeline-automation` looks like a development/testing leftover. After this PR merges and the branch is deleted, all workflow dispatches from this script will fail. Remove the `--ref` flag so the workflow runs against the repository's default branch.</comment>

<file context>
@@ -6,9 +6,9 @@ $specDirs = Get-ChildItem -Directory -Filter "0*" | Select-Object -ExpandPropert
     if (Test-Path "$spec\spec.md") {
         Write-Host "Triggering speckit-pipeline for $spec..."
-        gh workflow run speckit-pipeline.yml -f feature_dir="$spec"
+        gh workflow run speckit-pipeline.yml --ref chore/pipeline-automation -f feature_dir="$spec"
         # Small delay to prevent API rate limiting
-        Start-Sleep -Seconds 2
</file context>
Suggested change:
```diff
-gh workflow run speckit-pipeline.yml --ref chore/pipeline-automation -f feature_dir="$spec"
+gh workflow run speckit-pipeline.yml -f feature_dir="$spec"
```


```diff
 concurrency:
-  group: speckit-pipeline-${{ github.ref }}
+  group: speckit-pipeline-${{ github.ref }}-${{ inputs.feature_dir || github.run_id }}
```

@cubic-dev-ai cubic-dev-ai bot Mar 14, 2026


P2: For push-triggered runs, inputs.feature_dir is always empty, so github.run_id is used — making every concurrency group unique and removing queuing protection. Two near-simultaneous pushes changing specs can now race on the PR-creation step. Consider falling back to a stable value like github.sha or keeping the original github.ref for push events:

```yaml
group: speckit-pipeline-${{ github.ref }}-${{ inputs.feature_dir || 'push' }}
```

This preserves per-feature parallelism for workflow_dispatch while still serializing push-triggered runs on the same branch.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At .github/workflows/speckit-pipeline.yml, line 16:

<comment>For `push`-triggered runs, `inputs.feature_dir` is always empty, so `github.run_id` is used — making every concurrency group unique and removing queuing protection. Two near-simultaneous pushes changing specs can now race on the PR-creation step. Consider falling back to a stable value like `github.sha` or keeping the original `github.ref` for push events:

```yaml
group: speckit-pipeline-${{ github.ref }}-${{ inputs.feature_dir || 'push' }}
```

This preserves per-feature parallelism for workflow_dispatch while still serializing push-triggered runs on the same branch.</comment>

<file context>
@@ -13,7 +13,7 @@ on:

 concurrency:
-  group: speckit-pipeline-${{ github.ref }}
+  group: speckit-pipeline-${{ github.ref }}-${{ inputs.feature_dir || github.run_id }}
   cancel-in-progress: false
</file context>

Suggested change:
```yaml
  group: speckit-pipeline-${{ github.ref }}-${{ inputs.feature_dir || 'push' }}
```


@cubic-dev-ai cubic-dev-ai bot left a comment


2 issues found across 50 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="GEMINI.md">

<violation number="1" location="GEMINI.md:1">
P2: A UTF-8 BOM character (`\uFEFF`) was introduced at the start of the file. UTF-8 files don't need a BOM, and it can cause the first heading to be silently misinterpreted by markdown parsers and CI lint checks.</violation>
</file>

<file name="044-mcts-foundation/research.md">

<violation number="1" location="044-mcts-foundation/research.md:1">
P2: A UTF-8 BOM (U+FEFF) was introduced at the start of the file. This invisible character precedes the `#` heading and can cause markdown parsers and linting tools (e.g., markdownlint MD041) to fail to recognize the first-line heading.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Comment thread: GEMINI.md

```diff
@@ -1,13 +1,15 @@
-# specs Development Guidelines
+# specs Development Guidelines
```

(The `+` line begins with an invisible U+FEFF BOM; the two lines are otherwise identical.)

@cubic-dev-ai cubic-dev-ai bot Mar 14, 2026


P2: A UTF-8 BOM character (\uFEFF) was introduced at the start of the file. UTF-8 files don't need a BOM, and it can cause the first heading to be silently misinterpreted by markdown parsers and CI lint checks.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At GEMINI.md, line 1:

<comment>A UTF-8 BOM character (`\uFEFF`) was introduced at the start of the file. UTF-8 files don't need a BOM, and it can cause the first heading to be silently misinterpreted by markdown parsers and CI lint checks.</comment>

<file context>
@@ -1,13 +1,15 @@
-# specs Development Guidelines
+# specs Development Guidelines

 Auto-generated from feature specifications. Last updated: 2026-03-01
</file context>

Suggested change (removes the invisible leading U+FEFF; the two lines are otherwise identical):
```diff
-# specs Development Guidelines
+# specs Development Guidelines
```
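If the BOM does need stripping by hand, a PowerShell sketch (the file path matches the file under review; the rewrite uses a BOM-less UTF-8 encoder, which works on both Windows PowerShell 5.1 and PowerShell 7+):

```powershell
$path = "GEMINI.md"
$text = Get-Content -Path $path -Raw
# Drop a leading U+FEFF if present, then write back as UTF-8 without a BOM
$text = $text.TrimStart([char]0xFEFF)
[System.IO.File]::WriteAllText($path, $text, [System.Text.UTF8Encoding]::new($false))
```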

Comment thread: 044-mcts-foundation/research.md (Outdated)

```diff
@@ -1,4 +1,4 @@
-# Phase 0: Research & Architecture Decisions (FEAT-044)
+# Phase 0: Research & Architecture Decisions (FEAT-044)
```

(The `+` line begins with an invisible U+FEFF BOM; the two lines are otherwise identical.)

@cubic-dev-ai cubic-dev-ai bot Mar 14, 2026


P2: A UTF-8 BOM (U+FEFF) was introduced at the start of the file. This invisible character precedes the # heading and can cause markdown parsers and linting tools (e.g., markdownlint MD041) to fail to recognize the first-line heading.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At 044-mcts-foundation/research.md, line 1:

<comment>A UTF-8 BOM (U+FEFF) was introduced at the start of the file. This invisible character precedes the `#` heading and can cause markdown parsers and linting tools (e.g., markdownlint MD041) to fail to recognize the first-line heading.</comment>

<file context>
@@ -1,4 +1,4 @@
-# Phase 0: Research & Architecture Decisions (FEAT-044)
+# Phase 0: Research & Architecture Decisions (FEAT-044)

 ## 1. MCTS Arena Allocator in Python
</file context>

Suggested change (removes the invisible leading U+FEFF; the two lines are otherwise identical):
```diff
-# Phase 0: Research & Architecture Decisions (FEAT-044)
+# Phase 0: Research & Architecture Decisions (FEAT-044)
```

@sonarqubecloud

@github-actions
Contributor

🔍 Specification Analysis Report: 044-mcts-foundation

Total Findings: 5 (🔴 1 Critical, 🟠 2 High, 🟡 2 Medium, 🟢 0 Low)

| ID | Category | Severity | Location | Summary | Recommendation |
| --- | --- | --- | --- | --- | --- |
| D1 | Duplication | 🔴 CRITICAL | vindicta-engine/src/vindicta_engine/mcts/config.py | Missing 'arena_capacity' parameter in ArenaAllocator class constructor, potentially leading to incorrect memory allocation. | Add 'arena_capacity' argument to ArenaAllocator class constructor, defaulting to 1024MB. |
| A2 | Ambiguity | 🟠 HIGH | vindicta-engine/src/vindicta_engine/mcts/arena.py | Imprecise unit testing for ArenaAllocator in vindicta-engine/tests/mcts/test_arena.py, potentially leading to incorrect memory allocation and arena capacity. | Implement more thorough unit tests for ArenaAllocator class, including coverage of various input scenarios. |
| U3 | Underspecification | 🟡 MEDIUM | vindicta-engine/src/vindicta_foundation/models/mcts.py | Lack of explicit implementation details in GameState and SearchNode, leading to potential confusion regarding data flow and search logic. | Expand state model specifications for GameState and SearchNode, clarifying data structures, operations, and expected behavior. |
| I4 | Inconsistency | 🟠 HIGH | vindicta-engine/src/vindicta_foundation/models/init.py | Ambiguous handling of unit_state_flags in the GameState model, leading to potential inconsistencies in game state representation. | Standardize unit_state_flags logic for consistency and prevent conflicting values. |
| C5 | Coverage Gap | 🟡 MEDIUM | vindicta-engine/tests/features/arena_limits.feature | Incomplete BDD feature file for arena_limits, lacking detailed descriptions of specific test cases and expected outcomes. | Expand the arena_limits feature file to include more detailed test scenarios, covering various arena configurations and edge cases. |

CRITICAL issues detected. These MUST be resolved before implementation.


Generated by speckit-analyze using Ollama (gemma2:2b)

Created Issues
