From 9bedb84245e7bd627b6b7bb2e87045bdbc200882 Mon Sep 17 00:00:00 2001
From: Omar Santos
Date: Mon, 27 Oct 2025 10:45:08 -0700
Subject: [PATCH 1/3] Update README.md

---
 README.md | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/README.md b/README.md
index 22467be..3b465c5 100644
--- a/README.md
+++ b/README.md
@@ -1,2 +1,27 @@
 # .github
 Central repository for managing common GitHub configurations and contributing guidelines across all projects.
+---
+
+[Project CodeGuard](https://github.com/project-codeguard/rules) is an open-source, model-agnostic security framework that embeds secure-by-default practices into AI coding agent workflows. It provides comprehensive security rules that guide AI assistants to generate more secure code automatically.
+
+## Why Project CodeGuard?
+
+AI coding agents are transforming software engineering, but the speed they enable can also introduce security vulnerabilities. Is your AI coding agent implementation:
+
+- ❌ Skipping input validation
+- ❌ Hardcoding secrets and credentials
+- ❌ Using weak cryptographic algorithms
+- ❌ Relying on unsafe functions
+- ❌ Missing authentication/authorization checks
+- ❌ Overlooking other security best practices
+
+Project CodeGuard solves this by embedding security best practices directly into AI coding agent workflows.
+
+**Before, During, and After Code Generation.**
+
+Project CodeGuard can be used **before**, **during**, and **after** code generation. Its rules can guide the AI agent planning phase and initial specification-driven engineering tasks, prevent vulnerabilities from being introduced during code generation, and support automated code-review AI agents.
+
+For example, a rule focused on input validation could work at multiple stages: it might suggest secure input handling patterns during code generation, flag potentially unsafe user or AI agent input processing in real-time and then validate that proper sanitization and validation logic is present in the final code. Another rule targeting secret management could prevent hardcoded credentials from being generated, alert developers when sensitive data patterns are detected, and verify that secrets are properly externalized using secure configuration management.
+
+This multi-stage methodology ensures that security considerations are woven throughout the development process rather than being an afterthought, creating multiple layers of protection while maintaining the speed and productivity that make AI coding tools so valuable.
+

From f6bc8e045ed03b2ca8cd33d588d4effdd4925701 Mon Sep 17 00:00:00 2001
From: Omar Santos
Date: Mon, 27 Oct 2025 10:50:18 -0700
Subject: [PATCH 2/3] Creating `.github` profile README.md

---
 profile/README.md | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)
 create mode 100644 profile/README.md

diff --git a/profile/README.md b/profile/README.md
new file mode 100644
index 0000000..0cbf17c
--- /dev/null
+++ b/profile/README.md
@@ -0,0 +1,20 @@
+# Project CodeGuard
+
+[Project CodeGuard](https://github.com/project-codeguard/rules) is an open-source, model-agnostic security framework that embeds secure-by-default practices into AI coding agent workflows. It provides comprehensive security rules that guide AI assistants to generate more secure code automatically.
+
+## Why Project CodeGuard?
+
+AI coding agents are transforming software engineering, but the speed they enable can also introduce security vulnerabilities. Is your AI coding agent implementation introducing security vulnerabilities?
+
+[Project CodeGuard](https://github.com/project-codeguard/rules) solves this by embedding security best practices directly into AI coding agent workflows.
+
+👉 Access the [Project CodeGuard Rules here](https://github.com/project-codeguard/rules)
+
+## Before, During, and After Code Generation
+
+[Project CodeGuard](https://github.com/project-codeguard/rules) can be used **before**, **during**, and **after** code generation. Its rules can guide the AI agent planning phase and initial specification-driven engineering tasks, prevent vulnerabilities from being introduced during code generation, and support automated code-review AI agents.
+
+For example, a rule focused on input validation could work at multiple stages. It might suggest secure input handling patterns during code generation, flag potentially unsafe user or AI agent input processing in real-time, and then validate that proper sanitization and validation logic is present in the final code. Another rule targeting secret management could prevent hardcoded credentials from being generated, alert developers when sensitive data patterns are detected, and verify that secrets are properly externalized using secure configuration management.
+
+This multi-stage methodology ensures that security considerations are woven throughout the development process rather than being an afterthought, creating multiple layers of protection while maintaining the speed and productivity that make AI coding tools so valuable.
+

From b9342961fd9777371d0453a551851f64ae3a05b3 Mon Sep 17 00:00:00 2001
From: Omar Santos
Date: Mon, 27 Oct 2025 10:50:52 -0700
Subject: [PATCH 3/3] Update README.md

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 3b465c5..813614b 100644
--- a/README.md
+++ b/README.md
@@ -21,7 +21,7 @@ Project CodeGuard solves this by embedding security best practices directly into
 
 Project CodeGuard can be used **before**, **during**, and **after** code generation. Its rules can guide the AI agent planning phase and initial specification-driven engineering tasks, prevent vulnerabilities from being introduced during code generation, and support automated code-review AI agents.
 
-For example, a rule focused on input validation could work at multiple stages: it might suggest secure input handling patterns during code generation, flag potentially unsafe user or AI agent input processing in real-time and then validate that proper sanitization and validation logic is present in the final code. Another rule targeting secret management could prevent hardcoded credentials from being generated, alert developers when sensitive data patterns are detected, and verify that secrets are properly externalized using secure configuration management.
+For example, a rule focused on input validation could work at multiple stages. It might suggest secure input handling patterns during code generation, flag potentially unsafe user or AI agent input processing in real-time, and then validate that proper sanitization and validation logic is present in the final code. Another rule targeting secret management could prevent hardcoded credentials from being generated, alert developers when sensitive data patterns are detected, and verify that secrets are properly externalized using secure configuration management.
 
 This multi-stage methodology ensures that security considerations are woven throughout the development process rather than being an afterthought, creating multiple layers of protection while maintaining the speed and productivity that make AI coding tools so valuable.
 