Continuous AI for accessibility: How GitHub transforms feedback into inclusion
The GitHub Blog, Thu, 12 Mar 2026
https://github.blog/ai-and-ml/github-copilot/continuous-ai-for-accessibility-how-github-transforms-feedback-into-inclusion/

AI automates triage for accessibility feedback, allowing us to focus on fixing barriers—turning a chaotic backlog into continuous, rapid resolutions.


For years, accessibility feedback at GitHub didn’t have a clear place to go.

Unlike typical product feedback, accessibility issues don’t belong to any single team—they cut across the entire ecosystem. For example, a screen reader user might report a broken workflow that touches navigation, authentication, and settings. A keyboard-only user might hit a trap in a shared component used across dozens of pages. A low vision user might flag a color contrast issue that affects every surface using a shared design element. No single team owns any of these problems—but every one of them blocks a real person.

These reports require coordination that our existing processes weren’t originally built for. Feedback was often scattered across backlogs, bugs lingered without owners, and users who followed up were met with silence. Improvements were often promised for a mythical “phase two” that rarely materialized.

We knew we needed to change this. But before we could build something better, we had to lay the groundwork—centralizing scattered reports, creating templates, and triaging years of backlog. Only once we had that foundation in place could we ask: How can AI make this easier?

The answer was an internal workflow, powered by GitHub Actions, GitHub Copilot, and GitHub Models, that ensures every piece of user and customer feedback becomes a tracked, prioritized issue. When someone reports an accessibility barrier, their feedback is captured, reviewed, and followed through until it’s addressed. We didn’t want AI to replace human judgment—we wanted it to handle repetitive work so humans could focus on fixing the software.

This is how we went from chaos to a system where every piece of accessibility feedback is tracked, prioritized, and acted on—not eventually, but continuously.

Accessibility as a living system

Continuous AI for accessibility weaves inclusion into the fabric of software development. It’s not a single product or a one-time audit—it’s a living methodology that combines automation, artificial intelligence, and human expertise.

This philosophy connects directly to our support for the 2025 Global Accessibility Awareness Day (GAAD) pledge: strengthening accessibility across the open source ecosystem by ensuring user and customer feedback is routed to the right teams and translated into meaningful platform improvements.

The most important breakthroughs rarely come from code scanners—they come from listening to real people. But listening at scale is hard, which is why we needed technology to help amplify those voices. We built a feedback workflow that functions less like a static ticketing system and more like a dynamic engine—leveraging GitHub products to clarify, structure, and track user and customer feedback, turning it into implementation-ready solutions.

Designing for people first

Before jumping into solutions, we stepped back to understand who this system needed to serve:

  • Issue submitters: Community managers, support agents, and sales reps submit issues on behalf of users and customers. They aren’t always accessibility experts, so they need a system that guides them and teaches accessibility concepts in the flow of work.
  • Accessibility and service teams: Engineers and designers responsible for fixes need structured, actionable data—reproducible steps, WCAG mapping, severity scores, and clear ownership.
  • Program and product managers: Leadership needs visibility into pain points by category, trends, and progress over time to allocate resources strategically.

With these personas in mind, we knew we wanted to 1) treat feedback as data flowing through a pipeline and 2) build a system able to evolve with us.

How feedback flows

With that foundation set, we built an architecture around an event-driven pattern, where each step triggers a GitHub Action that orchestrates what comes next—ensuring consistent handling no matter where the feedback originates. We built this system largely by hand starting in mid-2024. Today, tools like Agentic Workflows let you create GitHub Actions using natural language—meaning this kind of system could be built in a fraction of the time.

The workflow reacts to key events: Issue creation launches GitHub Copilot analysis via the GitHub Models API, status changes initiate hand-offs between teams, and resolution triggers submitter follow-up with the user. Every Action can also be triggered manually or re-run as needed—automation covers the common path, while humans can step in at any point.

Feedback isn’t just captured—it continuously flows through the right channels, providing visibility, structure, and actionability at every stage.


A left-to-right flowchart showing the seven steps of the feedback workflow in sequence: Intake, Copilot Analysis, Submitter Review, Accessibility Team Review, Link Audits, Close Loop, and Improvement. Feedback loops show that Submitter Review can re-run Copilot Analysis, Close Loop can return to Accessibility Team Review, and Improvement feeds updated prompts back to Copilot Analysis.

1. Actioning intake

Feedback can come from anywhere—support tickets, social media posts, email, direct outreach—but most users choose the GitHub accessibility discussion board. It’s where they can work together and build community around shared experiences. Today, 90% of the accessibility feedback flows through that single channel. Because posts are public, other users can confirm the problem, add context, or suggest workarounds—so issues often arrive with richer detail than a support ticket ever could. Regardless of the source, every piece of feedback gets acknowledged within five business days, and even feedback we can’t act on gets a response pointing to helpful resources.

When feedback requires action from internal teams, a team member manually creates a tracking issue using our custom accessibility feedback issue template. Issue templates are pre-defined forms that standardize how information is collected when opening a new issue. The template captures the initial context—what the user reported, where it came from, and which components are involved—so nothing is lost between intake and triage.
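GitHub issue templates of this kind are defined as YAML issue forms under .github/ISSUE_TEMPLATE/. A template in this spirit might look like the sketch below; the specific field names are illustrative assumptions, not GitHub’s internal template:

```yaml
# .github/ISSUE_TEMPLATE/accessibility-feedback.yml
# Illustrative sketch of an accessibility feedback issue form.
# Field ids and labels are assumptions, not GitHub's internal template.
name: Accessibility feedback
description: Track an accessibility barrier reported by a user or customer
labels: ["accessibility", "feedback"]
body:
  - type: textarea
    id: report
    attributes:
      label: What did the user report?
      description: Describe the barrier in the user's own words where possible.
    validations:
      required: true
  - type: input
    id: source
    attributes:
      label: Where did the feedback come from?
      placeholder: Discussion board, support ticket, social media, email...
  - type: input
    id: components
    attributes:
      label: Affected components or pages
```

Standardizing intake like this is what makes the downstream automation possible: every issue arrives with the same fields in the same places.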

This is where automation kicks in. Creating the issue triggers a GitHub Action that engages GitHub Copilot, and a second Action adds the issue to a project board, providing a centralized view of current status, surfacing trends, and helping identify emerging needs.
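The event-driven pattern maps naturally onto workflow triggers. A minimal sketch of such a trigger is below; the job contents are placeholders, not GitHub’s internal workflows:

```yaml
# Illustrative sketch: fire on issue creation (or relabeling), with a
# manual escape hatch via workflow_dispatch. Step bodies are placeholders.
name: Accessibility feedback triage
on:
  issues:
    types: [opened, labeled]
  workflow_dispatch: {}
jobs:
  analyze:
    if: contains(github.event.issue.labels.*.name, 'accessibility')
    runs-on: ubuntu-latest
    permissions:
      issues: write
    steps:
      - name: Run Copilot analysis
        run: echo "call the GitHub Models API with the issue body here"
      - name: Add issue to project board
        run: echo "add the issue to the tracking project here"
```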

A left-to-right flowchart where user or customer feedback enters through Discussion Board, Support Ticket, Social Media, Email, or Direct Outreach, moves to an Acknowledge and Validate step, branches at a validity decision, and either proceeds to Create Issue or loops back through Request More Details to the user.

2. GitHub Copilot analysis

With the tracking issue created, a GitHub Action workflow programmatically calls the GitHub Models API to analyze the report. We chose stored prompts over model fine-tuning so that anyone on the team can update the AI’s behavior through a pull request—no retraining pipeline, no specialized ML knowledge required.
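In practice, the stored prompt and the issue text are assembled into an ordinary chat-completions payload. A minimal sketch follows; the model id is an illustrative assumption, and endpoint details are omitted:

```python
import json

def build_triage_request(issue_title, issue_body, stored_prompt,
                         model="openai/gpt-4o"):
    """Assemble a chat-completions payload from a stored prompt.
    Because the prompt lives in a versioned file, changing triage
    behavior is just a pull request; no retraining is involved."""
    return {
        "model": model,  # illustrative model id
        "messages": [
            {"role": "system", "content": stored_prompt},
            {"role": "user",
             "content": f"Title: {issue_title}\n\n{issue_body}"},
        ],
    }

# The payload would then be POSTed to the inference endpoint with an
# authorization token; the serialization is shown, the network call is not.
body = json.dumps(build_triage_request(
    "Focus indicator missing", "Tab order skips the save button",
    "You are an accessibility triage assistant."))
```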

We configured GitHub Copilot using custom instructions developed by our accessibility subject matter experts. Our prompt serves two roles: triage analysis, which classifies issues by WCAG violation, severity, and affected user group, and accessibility coaching, where GitHub Copilot acts as a subject-matter expert to help teams write and review accessible code.

These instruction files point to our accessibility policies, component library, and internal documentation that details how we interpret and apply WCAG success criteria. When our standards evolve, the team updates the markdown and instruction files via pull request—the AI’s behavior changes with the next run, not the next training cycle. For a detailed walkthrough of this approach, see our guide on optimizing GitHub Copilot custom instructions for accessibility.

The automation works in two steps. First, an Action fires on issue creation and triggers GitHub Copilot to analyze the report. GitHub Copilot populates approximately 80% of the issue’s metadata automatically—over 40 data points including issue type, user segment, original source, affected components, and enough context to understand the user’s experience. The remaining 20% requires manual input from the team member. GitHub Copilot then posts a comment on the issue containing:

  • A summary of the problem and user impact
  • Suggested WCAG success criteria for potential violations
  • Severity level (sev1 through sev4, where sev1 is critical)
  • Impacted user groups (screen reader users, keyboard users, low vision users, etc.)
  • Recommended team assignment (design, engineering, or both)
  • A checklist of low-barrier accessibility tests so the submitter can verify the issue

Then a second Action fires on that comment, parses the response, applies labels based on the severity GitHub Copilot assigned, updates the issue’s status on the project board, and assigns it to the submitter for review.
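The parsing step in that second Action can be sketched as a small function. The comment format assumed here (lines like "Severity: sev2" and "Teams: design, engineering") is an illustration; the real workflow parses Copilot’s structured analysis comment:

```python
import re

def parse_analysis(comment: str) -> dict:
    """Extract labels from a Copilot analysis comment.
    The line format is an assumed convention, not GitHub's actual one."""
    labels = []
    severity = re.search(r"severity:\s*(sev[1-4])", comment, re.I)
    if severity:
        labels.append(severity.group(1).lower())
    teams = re.search(r"teams?:\s*([^\n]+)", comment, re.I)
    if teams:
        labels += [t.strip().lower() for t in teams.group(1).split(",")]
    return {"labels": labels}
```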

If GitHub Copilot’s analysis seems off, anyone can flag it by opening an issue describing what it got wrong and what it should have said—feeding directly into our continuous improvement process.

A left-to-right flowchart where a newly created issue triggers Action 1, which feeds the report along with custom instructions and WCAG documentation into Copilot Analysis. Copilot posts a comment with its findings, then Action 2 parses that comment and branches into four parallel outcomes: applying labels, applying metadata, adding to the project board, and assigning the submitter.

3. Submitter review

Before we act on GitHub Copilot’s recommendations, two layers of review happen—starting with the issue submitter.

The submitter attempts to replicate the problem the user reported. The checklist GitHub Copilot provides in its comment guides our community managers, support agents, and sales reps through expert-level testing procedures—no accessibility expertise required. Each item includes plain-language explanations, step-by-step instructions, and links to tools and documentation.

Example questions include:

  • Can you navigate the page using only a keyboard? Press “Tab” to move through interactive elements. Can you reach all buttons, links, and form fields? Can you see where your focus is at all times?
  • Do images have descriptive alt text? Right-click an image and select “Inspect” to view the markup. Does the alt attribute describe the image’s purpose, or is it a generic file name?
  • Are interactive elements clearly labeled? Using a screen reader, navigate to a button or link. Is its purpose announced clearly? Alternatively, review the accessibility tree in your browser’s developer tools to inspect how elements are exposed to assistive technologies.

If the submitter can replicate the problem, they mark the issue as reviewed, which triggers the next GitHub Action. If they can’t reproduce it, they reach out to the user for more details. Once new information arrives, the submitter can re-run the GitHub Copilot analysis—either by manually triggering the Action from the Actions tab or by removing and re-adding the relevant label to kick it off automatically. AI provides the draft, but humans provide the verification.

A left-to-right flowchart where the submitter receives the issue with Copilot’s checklist, attempts to replicate the problem, and reaches a decision. If replicable, the issue is marked as reviewed and moves to the accessibility team. If not replicable, the submitter contacts the user for more details. When new information arrives, the submitter re-runs Copilot analysis, which loops back to the replication step.

4. Accessibility team review

Once the submitter marks the issue as reviewed, a GitHub Action updates its status on the workflow project board and adds it to a separate accessibility first responder board. This alerts the accessibility team—engineers, designers, champions, testing vendors, and managers—that GitHub Copilot’s analysis is ready for their review.

The team validates GitHub Copilot’s analysis—checking the severity level, WCAG mapping, and category labels—and corrects anything the AI got wrong. When there’s a discrepancy, we assume the human is correct. We log these corrections and use them to refine the prompt files, improving future accuracy.

Once validated, the team determines the resolution approach:

  • Documentation or settings update: Provide the solution directly to the user.
  • Code fix by the accessibility team: Create a pull request directly.
  • Service team needed: Assign the issue to the appropriate service team and track it through resolution.

With a path forward set, the team marks the issue as triaged. An Action then reassigns it to the submitter, who communicates the plan to the user—letting them know what’s being done and what to expect.

A left-to-right flowchart where a reviewed issue triggers an Action that updates the project board and adds it to the first responder board. The accessibility team validates Copilot’s analysis, logs any corrections, then determines a resolution: provide documentation, create a code fix, or assign to a service team. All three paths converge at marking the issue as triaged, which triggers an Action that reassigns it to the submitter to communicate the plan to the user.

5. Linking to audits

As part of the review process, the team connects user and customer feedback to our formal accessibility audit system.

Roughly 75–80% of the time, reported issues correspond to something we already know about from internal audits. Instead of creating duplicates, we find the existing internal audit issue and add a customer-reported label. This lets us prioritize based on real-world impact—a sev2 issue might technically be less critical than a sev1, but if multiple users are reporting it, we bump up its priority.
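That escalation rule can be expressed as a small function. The three-report threshold below is illustrative, not GitHub’s actual policy:

```python
def effective_priority(severity: str, customer_reports: int) -> str:
    """Bump severity one level when multiple customers report the same
    barrier. The threshold (3 reports) is an illustrative assumption."""
    order = ["sev4", "sev3", "sev2", "sev1"]  # least to most critical
    i = order.index(severity)
    if customer_reports >= 3 and i < len(order) - 1:
        i += 1  # escalate one level, capped at sev1
    return order[i]
```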

If the feedback reveals something new, we create a new audit issue and link it to the tracking issue.

A left-to-right flowchart where the team checks whether an existing audit issue covers the reported problem. If one exists, they link it and add a customer-reported label. If not, they create a new audit issue and link it. Both paths converge at updating priority based on real-world impact.

6. Closing the loop

This is the most critical step for trust. Users who take the time to report accessibility barriers deserve to know their feedback led to action.

Once a resolution path is set, the submitter reaches out to the original user to let them know the plan—what’s being fixed, and what to expect. When the fix ships, the submitter follows up again and asks the user to test it. Because most issues originate from the community discussion board, we post confirmations there for everyone to see.

If the user confirms the fix works, we close the tracking issue. If the fix doesn’t fully address the problem, the submitter gathers more details and the process loops back to the accessibility team review. We don’t close issues until the user confirms the fix works for them.

A left-to-right flowchart where the submitter communicates the resolution plan to the user and monitors until the fix ships. The user is asked to test the fix. If it works, the issue is closed. If it doesn’t, the submitter gathers more details and the process loops back to the accessibility team review.

7. Continuous improvement

The workflow doesn’t end when an issue closes—it feeds back into itself.

When submitters or accessibility team members spot inaccuracies in GitHub Copilot’s output, they open a new issue requesting a review of the results. Every GitHub Copilot analysis comment includes a link to create this issue at the bottom, so the feedback loop is built into the workflow itself. The team reviews the inaccuracy, and the correction becomes a pull request to the custom instruction and prompt files described earlier.

We also automate the integration of new accessibility guidance. A separate GitHub Action scans our internal accessibility guide repository weekly and incorporates changes into GitHub Copilot’s custom instructions automatically.
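A scheduled workflow of this kind might look like the following sketch; the repository paths and script name are placeholders:

```yaml
# Illustrative sketch of the weekly guidance sync. The script and
# paths are placeholders, not GitHub's internal automation.
name: Sync accessibility guidance into Copilot instructions
on:
  schedule:
    - cron: "0 6 * * 1"  # every Monday at 06:00 UTC
  workflow_dispatch: {}
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Pull latest guidance and update instruction files
        run: ./scripts/sync-guidance.sh  # placeholder script
```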

The goal isn’t perfection—it’s continuous improvement. Each quarter, we review accuracy metrics and refine our instructions. These reviews feed into quarterly and fiscal year reports that track resolution times, WCAG failure patterns, and feedback volume trends—giving leadership visibility into both progress and persistent gaps. The system gets smarter over time, and now we have the data to show it.

A left-to-right flowchart with two parallel loops. In the first, an inaccuracy is spotted, a review issue is opened, the team creates a pull request to update the prompt files, and the changes merge to improve future analyses. In the second, a weekly Action scans the accessibility guide repository and auto-updates Copilot's custom instructions. Both loops feed into quarterly reviews that produce fiscal year reports tracking resolution times, WCAG failure patterns, and feedback volume trends.

Impact in numbers

A year ago, nearly half of accessibility feedback sat unresolved for over 300 days. Today, that backlog isn’t just smaller—it’s gone. And the improvements don’t stop there.

  • 89% of issues now close within 90 days (up from 21%)
  • 62% reduction in average resolution time (118 days → 45 days)
  • 70% reduction in manual administrative time
  • 1,150% increase in issues resolved within 30 days (4 → 50 year-over-year)
  • 50% reduction in critical sev1 issues
  • 100% of issues closed within 60 days in our most recent quarter

We track this through automated weekly and quarterly reports generated by GitHub Actions—surfacing which WCAG criteria fail most often and how resolution times trend over time.

Beyond the numbers

A user named James emailed us to report that the GitHub Copilot CLI was inaccessible. Decorative formatting created noise for screen readers, and interactive elements were impossible to navigate.

A team member created a tracking issue. Within moments, GitHub Copilot analyzed the report—mapping James’s description to specific technical concepts, linking to internal documentation, and providing reproduction steps so the submitter could experience the product exactly as James did.

With that context, the team member realized our engineering team had already shipped accessible CLI updates earlier in the year—James simply wasn’t aware.

They replied immediately. His response? “Thanks for pointing out the --screen-reader mode, which I think will help massively.”

Because the AI workflow identified the problem correctly, we turned a frustration into a resolution in hours.

But the most rewarding result isn’t the speed—it’s the feedback from users. Not just that we responded, but that the fixes actually worked for them:

  • “Huge thanks to the team for updating the contributions graph in the high contrast theme. The addition of borders around the grid edges is a small but meaningful improvement. Keep it up!”
  • “Let’s say you want to create several labels for your GitHub-powered workflow: bug, enhancement, dependency updates… But what if you are blind? Before you had only hex codes randomly thrown at you… now it’s fixed, and those colors have meaningful English names. Well done, GitHub!”
  • “This may not be very professional but I literally just screamed! This fix has actually made my day… Before this I was getting my wife to manage the GitHub issues but now I can actually navigate them by myself! It means a lot that I can now be a bit more independent so thank you again.”

That independence is the point. Every workflow, every automation, every review—it all exists so moments like these are the expectation, not the exception.

The bigger picture

Stories like these remind us why the foundation matters. Design annotations, code scanners, accessibility champions, and testing with people with disabilities—these aren’t replaced by AI. They are what make AI-assisted workflows effective. Without that human foundation, AI is just a faster way to miss the point.

We’re still learning, and the system is still evolving. But every piece of feedback teaches us something, and that knowledge now flows continuously back to our team, our users, and the tools we build. 

If you maintain a repository—whether it’s a massive enterprise project or a weekend open-source library—you can build this kind of system today. Start small. Create an issue template for accessibility. Add a .github/copilot-instructions.md file with your team’s accessibility standards. Let AI handle the triage and formatting so your team can focus on what really matters: writing more inclusive code.
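As a starting point, a minimal instructions file might look like this; the standards listed are examples to adapt, not a complete policy:

```markdown
<!-- Example .github/copilot-instructions.md; adapt to your team's standards. -->
# Accessibility standards

- All interactive elements must be reachable and operable by keyboard.
- Images require alt text that describes purpose, not file names.
- Color contrast must meet WCAG 2.2 AA (4.5:1 for normal text).
- When triaging feedback, classify severity from sev1 (critical) to sev4.
```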

And if you hit an accessibility barrier while using GitHub, please share your feedback. It won’t disappear into a backlog. We’re listening—and now we have the system to follow through.

The era of “AI as text” is over. Execution is the new interface.
The GitHub Blog, Tue, 10 Mar 2026
https://github.blog/ai-and-ml/github-copilot/the-era-of-ai-as-text-is-over-execution-is-the-new-interface/

AI is shifting from prompt-response interactions to programmable execution. See how the GitHub Copilot SDK enables agentic workflows directly inside your applications.


Over the past two years, most teams have interacted with AI the same way: provide text input, receive text output, and manually decide what to do next.

But production software doesn’t operate on isolated exchanges. Real systems execute. They plan steps, invoke tools, modify files, recover from errors, and adapt under constraints you define.

As a developer, you’ve gotten used to using GitHub Copilot as your trusted AI in the IDE. But I bet you’ve thought more than once: “Why can’t I use this kind of agentic workflow inside my own apps too?”

Now you can.

The GitHub Copilot SDK makes that execution layer available as a programmable capability inside your software.

Instead of maintaining your own orchestration stack, you can embed the same production-tested planning and execution engine that powers GitHub Copilot CLI directly into your systems.

If your application can trigger logic, it can now trigger agentic execution. This shift changes the architecture of AI-powered systems.

So how does it work? Here are three concrete patterns teams are using to embed agentic execution into real applications.

Pattern #1: Delegate multi-step work to agents

For years, teams have relied on scripts and glue code to automate repetitive tasks. But the moment a workflow depends on context, changes shape mid-run, or requires error recovery, scripts become brittle. You either hard-code edge cases, or start building a homegrown orchestration layer.

With the Copilot SDK, your application can delegate intent rather than encode fixed steps.

For example:

Your app exposes an action like “Prepare this repository for release.”

Instead of defining every step manually, you pass intent and constraints. The agent:

  • Explores the repository
  • Plans required steps
  • Modifies files
  • Runs commands
  • Adapts if something fails

All while operating within defined boundaries.
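Conceptually (this is not the Copilot SDK’s actual API; every name below is illustrative), delegation means handing a loop an intent and letting a planner choose steps until the goal is met:

```python
def run_agent(intent, plan_next, execute, max_steps=10):
    """Conceptual agentic loop: a planner picks the next step from the
    intent and history, a tool layer executes it, and results feed back
    into planning. This is NOT the Copilot SDK API, just its shape."""
    history = []
    while len(history) < max_steps:        # hard boundary on work done
        step = plan_next(intent, history)
        if step is None:                   # planner decides the goal is met
            break
        history.append((step, execute(step)))
    return history

# Toy planner for "prepare this repository for release":
# explore, then modify, then run commands, then stop.
def toy_planner(intent, history):
    steps = ["explore repo", "modify files", "run commands"]
    return steps[len(history)] if len(history) < len(steps) else None

trace = run_agent("prepare release", toy_planner, lambda s: f"done: {s}")
```

The point of the sketch is the inversion: the caller supplies intent and boundaries (here, `max_steps`), and the steps are chosen at runtime rather than hard-coded.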

Why this matters: As systems scale, fixed workflows break down. Agentic execution allows software to adapt while remaining constrained and observable, without rebuilding orchestration from scratch.

View multi-step execution examples →

Pattern #2: Ground execution in structured runtime context

Many teams attempt to push more behavior into prompts. But encoding system logic in text makes workflows harder to test, reason about, and evolve. Over time, prompts become brittle substitutes for structured system integration.

With the Copilot SDK, context becomes structured and composable.

You can:

  • Define domain-specific tools or agent skills
  • Expose tools via Model Context Protocol (MCP)
  • Let the execution engine retrieve context at runtime

Instead of stuffing ownership data, API schemas, or dependency rules into prompts, your agents access those systems directly during planning and execution.

For example, an internal agent might:

  • Query service ownership
  • Pull historical decision records
  • Check dependency graphs
  • Reference internal APIs
  • Act under defined safety constraints
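The idea of exposing domain-specific tools can be sketched with a simple registry. This illustrates the shape of tool definitions an agent could call at runtime, not the MCP wire protocol:

```python
# Conceptual tool registry in the spirit of MCP tool definitions.
# Tool names and data below are illustrative, not a real system.
TOOLS = {}

def tool(name):
    """Register a function as a named, callable tool."""
    def register(fn):
        TOOLS[name] = fn
        return fn
    return register

@tool("service_owner")
def service_owner(service: str) -> str:
    # Illustrative lookup; a real tool would query an ownership system.
    owners = {"billing": "payments-team", "auth": "identity-team"}
    return owners.get(service, "unknown")

@tool("dependency_check")
def dependency_check(service: str) -> list[str]:
    # Illustrative dependency graph fragment.
    deps = {"billing": ["auth", "ledger"]}
    return deps.get(service, [])
```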

Why this matters: Reliable AI workflows depend on structured, permissioned context. MCP provides the plumbing that keeps agentic execution grounded in real tools and real data, without guesswork embedded in prompts.

Pattern #3: Embed execution outside the IDE

Much of today’s AI tooling assumes meaningful work happens inside the IDE. But modern software ecosystems extend far beyond an editor.

Teams want agentic capabilities inside:

  • Desktop applications
  • Internal operational tools
  • Background services
  • SaaS platforms
  • Event-driven systems

With the Copilot SDK, execution becomes an application-layer capability.

Your system can listen for an event—such as a file change, deployment trigger, or user action—and invoke Copilot programmatically.

The planning and execution loop runs inside your product, not in a separate interface or developer tool.
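Wiring agent invocation behind application events might look like the following sketch, where `invoke_agent` is a stand-in for a real SDK call:

```python
# Sketch of embedding agent invocation behind application events.
# `invoke_agent` is a placeholder; here it just records what would
# be dispatched to a real execution engine.
DISPATCHED = []

def invoke_agent(task: str) -> None:
    DISPATCHED.append(task)  # stand-in for a real SDK invocation

HANDLERS = {
    "file_changed":   lambda e: invoke_agent(f"review change in {e['path']}"),
    "deploy_started": lambda e: invoke_agent(f"verify deploy {e['id']}"),
}

def on_event(kind: str, event: dict) -> bool:
    """Route an application event to an agent task, if one applies."""
    handler = HANDLERS.get(kind)
    if handler is None:
        return False  # not every event needs an agent
    handler(event)
    return True
```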

Why this matters: When execution is embedded into your application, AI stops being a helper in a side window and becomes infrastructure. It’s available wherever your software runs, not just inside an IDE or terminal.

Build your first Copilot-powered app →

Execution is the new interface

The shift from “AI as text” to “AI as execution” is architectural. Agentic workflows are programmable planning and execution loops that operate under constraints, integrate with real systems, and adapt at runtime.

The GitHub Copilot SDK makes those execution capabilities accessible as a programmable layer. Teams can focus on defining what their software should accomplish, rather than rebuilding how orchestration works every time they introduce AI.

If your application can trigger logic, it can trigger agentic execution.

Explore the GitHub Copilot SDK →

Under the hood: Security architecture of GitHub Agentic Workflows
The GitHub Blog, Mon, 09 Mar 2026
https://github.blog/ai-and-ml/generative-ai/under-the-hood-security-architecture-of-github-agentic-workflows/

GitHub Agentic Workflows are built with isolation, constrained outputs, and comprehensive logging. Learn how our threat model and security architecture help teams run agents safely in GitHub Actions.


Whether you’re an open-source maintainer or part of an enterprise team, waking up to documentation fixes, new unit tests, and refactoring suggestions can be a true “aha” moment. But automation also raises an important concern: how do you put guardrails on agents that have access to your repository and the internet? Will you find yourself wondering whether your agent relied on documentation from a sketchy website, or pushed a commit containing an API token? What if it decides to add noisy comments to every open issue one day? Automations must be predictable to offer durable value.

But what is the safest way to add agents to existing automations like CI/CD? Agents are non-deterministic: They must consume untrusted inputs, reason over repository state, and make decisions at runtime. Letting agents operate in CI/CD without real-time supervision allows you to scale your software engineering, but it also requires novel guardrails to keep you from creating security problems.

GitHub Agentic Workflows run on top of GitHub Actions. By default, everything in an action runs in the same trust domain. Rogue agents can interfere with MCP servers, access authentication secrets, and make network requests to arbitrary hosts. A buggy or prompt-injected agent with unrestricted access to these resources can act in unexpected and insecure ways.

That’s why security is baked into the architecture of GitHub Agentic Workflows. We treat agent execution as an extension of the CI/CD model—not as a separate runtime. We separate open‑ended authoring from governed execution, then compile a workflow into a GitHub Action with explicit constraints such as permissions, outputs, auditability, and network access.

This post explains how we built Agentic Workflows with security in mind from day one, starting with the threat model and the security architecture that it needs.

Threat model

There are two properties of agentic workflows that change the threat model for automation.

First, agents’ ability to reason over repository state and act autonomously makes them valuable, but it also means they cannot be trusted by default—especially in the presence of untrusted inputs.

Second, GitHub Actions provide a highly permissive execution environment. A shared trust domain is a feature for deterministic automation, enabling broad access, composability, and good performance. But when combined with untrusted agents, having a single trust domain can create a large blast radius if something goes wrong.

Under this model, we assume an agent will try to read and write state that it shouldn’t, communicate over unintended channels, and abuse legitimate channels to perform unwanted actions. By default, GitHub Agentic Workflows run in a strict security mode with this threat model in mind, and their design is guided by four security principles: defense in depth, don’t trust agents with secrets, stage and vet all writes, and log everything.

Defend in depth

GitHub Agentic Workflows provide a layered security architecture consisting of substrate, configuration, and planning layers. Each layer limits the impact of failures above it by enforcing distinct security properties that are consistent with its assumptions.

Diagram of a three-layer system architecture with labeled sections Planning layer, Configuration layer, and Substrate layer. Each layer contains three blue tiles:

  • Planning: Safe Outputs MCP (GitHub write operations), Call filtering (call availability, volume), Output sanitization (secret removal, moderation).
  • Configuration: Compiler (GH AW extension), Firewall policies (allowlist), MCP config (Docker image, auth token).
  • Substrate: Action runner VM (OS, hypervisor), Docker containers (Docker daemon, network), Trusted containers (firewall, MCP gateway, API proxy).

The substrate layer rests on a GitHub Actions runner virtual machine (VM) and several trusted containers that limit the resources an agent can access. Collectively, the substrate layer provides isolation among components, mediation of privileged operations and system calls, and kernel-enforced communication boundaries. These protections hold even if an untrusted user-level component is compromised and executes arbitrary code within its container isolation boundary.

Above the substrate layer is a configuration layer that includes declarative artifacts and the toolchains that interpret them to instantiate a secure system structure and connectivity. The configuration layer dictates which components are loaded, how components are connected, what communication channels are permitted, and what privileges are assigned. Externally minted tokens, such as agent API keys and GitHub access tokens, are critical inputs that bound components’ external effects—configuration controls which tokens are loaded into which containers.

The final layer of defense is the planning layer. The configuration layer dictates which components exist and how they communicate, but it does not dictate which components are active over time. The planning layer’s primary responsibility is to create a staged workflow with explicit data exchanges between stages. The safe outputs subsystem, described in greater detail below, is the primary instance of secure planning.

Don’t trust agents with secrets

From the beginning, we wanted workflow agents to have zero access to secrets. Agentic workflows execute as GitHub Actions, in which components share a single trust domain on top of the runner VM. In that model, sensitive material like agent authentication tokens and MCP server API keys reside in environment variables and configuration files visible to all processes in the VM.

This is dangerous because agents are susceptible to prompt injection: Attackers can craft malicious inputs like web pages or repository issues that trick agents into leaking sensitive information. For example, a prompt-injected agent with access to shell-command tools can read configuration files, SSH keys, Linux /proc state, and workflow logs to discover credentials and other secrets. It can then upload these secrets to the web or encode them within public-facing GitHub objects like repository issues, pull requests, and comments.

Our first mitigation was to isolate the agent in a dedicated container with tightly controlled egress: firewalled internet access, MCP access through a trusted MCP gateway, and LLM API calls through an API proxy. To limit internet access, agentic workflows create a private network between the agent and firewall. The MCP gateway runs in a separate trusted container, launches MCP servers, and has exclusive access to MCP authentication material.

Although agents like Claude, Codex, and Copilot must communicate with an LLM over an authenticated channel, we avoid exposing those tokens directly to the agent’s container. Instead, we place LLM auth tokens in an isolated API proxy and configure agents to route model traffic through that proxy.
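To illustrate the idea, here is a minimal Python sketch of the header rewrite such a proxy might perform. This is hypothetical: the token value and the `build_upstream_headers` function are invented for this example and are not part of the real implementation.

```python
# Hypothetical sketch of an API proxy's header rewrite. The proxy, not the
# agent container, holds the real token, so the agent never sees it.

UPSTREAM_TOKEN = "sk-example"  # held only inside the proxy container


def build_upstream_headers(client_headers: dict) -> dict:
    """Drop any credentials the (untrusted) agent sent, then attach the
    real token so only the proxy-to-LLM leg is authenticated."""
    headers = {
        k: v for k, v in client_headers.items()
        if k.lower() not in ("authorization", "x-api-key")
    }
    headers["Authorization"] = f"Bearer {UPSTREAM_TOKEN}"
    return headers
```

Because the real token lives only in the proxy’s container, a prompt-injected agent that dumps its own environment variables or configuration files finds nothing worth exfiltrating.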

Architecture diagram showing several connected Docker containers. A Codex token connects to an api-proxy container, which connects to an OpenAI service icon. A separate flow shows an agent container (linked to chroot/host) communicating over http to a gh-aw-firewall container, then over http to a gh-aw-mcpg container (linked to Host Docker Socket), then over stdio to a GitHub MCP container (linked to a GitHub PAT). A GitHub icon appears above the GitHub MCP container.

Zero-secret agents involve a fundamental trade-off between security and utility. Coding workloads require broad access to compilers, interpreters, scripts, and repository state, but expanding the in-container setup would duplicate existing Actions provisioning logic and increase the set of network destinations that must be allowed through the firewall.

Instead, we carefully expose host files and executables using container volume mounts and run the agent in a chroot jail. We start by mounting the entire VM host file system read-only at /host. We then overlay selected paths with empty tmpfs layers and launch the agent in a chroot jail rooted at /host. This approach keeps the host-side setup intact while constraining the agent’s writable and discoverable surface to what it needs for its job.

Stage and vet all writes

Prompt-injected agents can still do harm even if they do not have access to secrets. For example, a rogue agent could spam a repository with pointless issues and pull requests to overwhelm repository maintainers, or add objectionable URLs and other content in repository objects.

To prevent this kind of behavior, the agentic workflows compiler decomposes workflows into explicit stages and defines, for each stage:

  • The active components and permissions (read vs. write)
  • The data artifacts emitted by that stage
  • The admissible downstream consumers of those artifacts

While the agent runs, it can read GitHub state through the GitHub MCP server and can only stage its updates through the safe outputs MCP server. Once the agent exits, write operations that have been buffered by the safe outputs MCP server are processed by a suite of safe outputs analyses.

Diagram showing a GitHub-centric workflow with green arrows and two rows of components. At the top, a GitHub icon points down into three boxes: Agent (Untrusted), GitHub MCP (Read-only), and MCP config (Write-buffered). Below are three processing steps labeled Filter operations, Moderate content, and Remove secrets, each marked 'Deterministic analysis.' Green arrows indicate data flow from GitHub into the system, down through configuration to 'Remove secrets,' then left through 'Moderate content' and 'Filter operations,' looping back toward the agent.

First, safe outputs allows workflow authors to specify which write operations an agent can perform. Authors can choose which subset of GitHub updates are allowed, such as creating issues, comments, or pull requests. Second, safe outputs limits the number of updates that are allowed, such as restricting an agent to creating at most three pull requests in a given run. Third, safe outputs analyzes update content to remove unwanted patterns, such as output sanitization to remove URLs. Only artifacts that survive the entire safe outputs pipeline proceed downstream, ensuring that each stage’s side effects are explicit and vetted.
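Those three checks can be illustrated with a minimal Python sketch. Everything here is hypothetical: the operation names, limits, and URL pattern are invented for illustration and are not the real safe outputs code.

```python
import re

# Illustrative sketch of the three safe-output checks: operation filtering,
# volume limiting, and content sanitization. Names and limits are invented.
ALLOWED_OPS = {"create_issue", "create_comment"}      # author-chosen subset
MAX_PER_OP = {"create_issue": 5, "create_comment": 10}
URL_RE = re.compile(r"https?://\S+")


def vet(staged_ops):
    """Return only the buffered write operations that pass all checks."""
    accepted, counts = [], {}
    for op in staged_ops:
        kind = op["kind"]
        if kind not in ALLOWED_OPS:                   # 1. operation filtering
            continue
        counts[kind] = counts.get(kind, 0) + 1
        if counts[kind] > MAX_PER_OP.get(kind, 1):    # 2. volume limiting
            continue
        op = dict(op, body=URL_RE.sub("[link removed]", op["body"]))
        accepted.append(op)                           # 3. content sanitization
    return accepted
```

Only operations that survive every check would ever reach the stage that performs real GitHub writes.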

Log everything

Even with zero secrets and vetted writes, an agent can still transform repository data and invoke tools in unintended ways or try to break out of the constraints that we impose upon it. Agents are determined to accomplish their tasks by any means and have a surprisingly deep toolbox of tricks for doing so. If an agent behaves unexpectedly, post-incident analysis requires visibility into the complete execution path.

Agentic workflows make observability a first-class property of the architecture by logging extensively at each trust boundary. Network and destination-level activity is recorded at the firewall layer; model request/response metadata and authenticated requests are captured by the API proxy; and tool invocations are logged by the MCP gateway and MCP servers. We also add internal instrumentation to the agent container to audit potentially sensitive actions like environment variable accesses. Together, these logs support end-to-end forensic reconstruction, policy validation, and rapid detection of anomalous agent behavior.
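As a rough illustration, a boundary-level audit event might be emitted as a single JSON line so that records from the firewall, API proxy, and MCP gateway can be merged and replayed in order. The field names here are assumptions for this sketch, not the actual log schema.

```python
import json
import time

# Hypothetical shape of a trust-boundary audit record; the schema is
# invented for illustration, not taken from GitHub Agentic Workflows.


def audit_record(boundary: str, action: str, detail: dict) -> str:
    """Serialize one observed event as a JSON line for later merging."""
    return json.dumps({
        "ts": time.time(),
        "boundary": boundary,  # e.g. "firewall", "api-proxy", "mcp-gateway"
        "action": action,
        "detail": detail,
    })
```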

Pervasive logging also lays the foundation for future information-flow controls. Every location where communication can be observed is also a location where it can be mediated. Agentic workflows already support the GitHub MCP server’s lockdown mode, and in the coming months, we’ll introduce additional safety controls that enforce policies across MCP servers based on visibility (public vs. private) and the role of a repository object’s author.

What’s next?

We’d love for you to be involved! Share your thoughts in the Community discussion or join us (and tons of other awesome makers) in the #agentic-workflows channel of the GitHub Next Discord. We look forward to seeing what you build with GitHub Agentic Workflows. Happy automating, and keep an eye out for more updates!

The post Under the hood: Security architecture of GitHub Agentic Workflows appeared first on The GitHub Blog.

60 million Copilot code reviews and counting https://github.blog/ai-and-ml/github-copilot/60-million-copilot-code-reviews-and-counting/ Thu, 05 Mar 2026 20:10:43 +0000 https://github.blog/?p=94298 How Copilot code review helps teams keep up with AI-accelerated code changes.

The post 60 million Copilot code reviews and counting appeared first on The GitHub Blog.


Since our initial launch of Copilot code review (CCR) last April, usage has grown 10X, now accounting for more than one in five code reviews on GitHub.

Behind the scenes, we’ve been running continuous experiments to enhance comment quality. We also moved to an agentic architecture that retrieves repository context and reasons across changes. At every step of the way, we’ve listened to your feedback: your survey answers and even your simple thumbs-up and thumbs-down reactions on comments have helped us identify key issues and iterate on our UX to provide a comprehensive review experience.

Copilot code review handles pull request reviews and summaries, allowing teams to focus on more complex tasks.

Suvarna Rane, Software Development Manager, General Motors

Redefining a “good” code review

As Copilot code review has evolved, so has our definition of a “good code review.” When we started building it in 2024, our goal was simple thoroughness. Since then, we’ve learned that what developers actually value is high-signal feedback that helps them move a pull request forward quickly. Today, Copilot code review leverages the best models, memory, and agentic tool-calling to conduct comprehensive reviews. To get here, we’ve used a continuous evaluation loop to tune the agent’s judgment, focusing on three qualities that shape that experience: accuracy, signal, and speed.

Accuracy

Our aim has been for Copilot code review to deliver sound judgment, prioritizing consequential logic and maintainability issues. We evaluate performance in two ways: through internal testing against known code issues, and through production signals from real pull requests. In production, we track two key indicators:

  • Developer feedback: Thumbs-up and thumbs-down reactions on comments help us understand whether suggestions are helpful.
  • Production signals: We measure whether flagged issues are resolved before merging.

Together, these signals help ensure that Copilot code review surfaces issues that matter, and that faster merges come from confident fixes, not less scrutiny.

Copilot code review comment identifying a missing dependency in a React useCallback hook and suggesting a code change to add handleKeyboardDrag to the dependency array.

Signal

In code review, more comments don’t necessarily mean a better review. Our goal isn’t to maximize comment volume, but to surface issues that actually matter.

A high-signal comment helps a developer understand both the problem and the fix:

Copilot code review comment warning that a retry loop could run indefinitely when an API returns HTTP 429 without a Retry-After header and suggesting adding a retry limit and backoff.

Silence is better than noise. In 71% of the reviews, Copilot code review surfaces actionable feedback. In the remaining 29%, the agent says nothing at all.

As our ability to identify high-signal findings improves, we’re also able to comment more confidently, now averaging about 5.1 comments per review without increasing review churn or lowering our quality threshold.

Speed

In code review, speed matters, but signal matters more. Copilot code review is designed to provide a reliable first pass shortly after a pull request is opened. That being said, meaningful reviews still require analysis. As reasoning capabilities improve, so does the computation required to surface deeper issues.

We treat this as a deliberate trade-off. In one recent change, adopting a more advanced reasoning model improved positive feedback rates by 6%, even though review latency increased by 16%.

For us, that’s the right exchange. A slightly slower review that surfaces real issues is far more valuable than instant feedback that adds noise. We continue to reduce latency wherever possible, but never at the expense of high-signal findings developers can trust.

About the agentic architecture

Given our new definition of “good,” we redeveloped our code review system. Today’s agentic design can retrieve context intelligently and explore the repository to understand logic, architecture, and specific invariants.

This shift alone has driven an initial 8.1% increase in positive feedback.

Here’s why:

  • It catches issues as it reads, not just at the end: Previously, agents waited until the end of a review to finalize results, which often led to “forgetting” early discoveries.
  • It can maintain memory across reviews: Now, every pull request doesn’t need to be an isolated event. If it flags a pattern in one part of the codebase, it can reuse that context in future reviews.
  • It keeps long pull requests reviewable with an explicit plan: It can map out its review strategy ahead of time, significantly improving its performance on long, complex pull requests, where context is easily lost.
  • It reads linked issues and pull requests: That extra context helps it flag subtle gaps. This includes cases where the code looks reasonable in isolation but doesn’t match the project’s requirements.

Making reviews easier to navigate

By iterating on how the agent interacts with pull requests, we’ve reduced noise and made feedback more actionable. Here’s what that means for you.

  • Quickly understand feedback (and the fix) with multi-line comments: We moved away from pinning comments to single lines. By attaching feedback to logical code ranges, Copilot makes it easier to see what it’s referring to and apply the suggested change.
Copilot code review comment on a GitHub Actions workflow identifying a missing use_caches input parameter and suggesting a code change to add the boolean input to the workflow configuration.
  • Keep your pull request timeline readable: Instead of multiple separate comments for the same pattern error, which can be overwhelming, the agent clusters them into a single, cohesive unit to reduce cognitive load.
  • Fix whole classes of issues at once with batch autofixes: Apply suggested fixes in batches, resolving an entire class of logic bugs or style issues at once, rather than context-switching through a dozen individual suggestions.

Take this with you

As AI continues to accelerate software development, it’s more important than ever to help teams review and trust code at scale. Copilot code review helps teams keep pace by surfacing high-signal feedback directly in pull requests, enabling developers to catch issues earlier and merge with greater confidence.

More than 12,000 organizations now run Copilot code review automatically on every pull request. At WEX, this shift toward default AI-assisted reviews has helped scale Copilot adoption across the engineering organization:

Today, two-thirds of developers are using Copilot — including the organization’s most active contributors. WEX has since expanded adoption by making Copilot code review a default across every repository. Developers are also heavily utilizing agent mode and the coding agent to drive autonomy, helping WEX see a huge lift in deployments, with ~30% more code shipped. — WEX customer story

Going forward, we’re focused on deeper personalization and high-fidelity interactivity, refining the agent to learn your team’s unwritten preferences while enabling two-way conversations that let you refine fixes and explore alternatives before merging.

As Copilot capabilities continue to evolve, from coding and planning to review and automation, the goal is simple: help developers move faster while maintaining the trust and quality that great software demands.

Get started today

Copilot code review is a premium feature available with Copilot Pro, Copilot Pro+, Copilot Business, and Copilot Enterprise. See the following resources to:

Already enabled Copilot code review? See these docs to set up automatic Copilot code reviews on every pull request within your repository or organization.

Have thoughts or feedback? Please let us know in our community discussion post.


Scaling AI opportunity across the globe: Learnings from GitHub and Andela https://github.blog/developer-skills/career-growth/scaling-ai-opportunity-across-the-globe-learnings-from-github-and-andela/ Thu, 05 Mar 2026 17:00:00 +0000 https://github.blog/?p=94275 Developers connected to Andela share how they’re learning AI tools inside real production workflows.

The post Scaling AI opportunity across the globe: Learnings from GitHub and Andela appeared first on The GitHub Blog.


Across the globe, developer talent is abundant. But what has been historically inequitable is the access to emerging technologies, mentorship, and enablement when those technologies are reshaping the industry. Developers in regions like Africa, South America, and Southeast Asia can build products at scale, yet access to emerging tools and learning pathways often varies by geography and employer.

Andela is a global talent marketplace built on the belief that where you live should not determine your access to opportunity. Over the past two years, GitHub and Andela have been working together to expand structured AI access across Andela’s 5.5-million-member global talent network. As of now, 3,000 Andela engineers have been trained on GitHub Copilot through Andela’s AI Academy.

Starting in 2024, Andela began rolling out structured AI training to selected developers across Africa and Latin America whose day-to-day work directly involved complex production systems. Instead of treating AI as a standalone experiment, the program integrated Copilot directly into day-to-day development processes—within IDE environments, pull request reviews, and active refactoring work—ensuring it was evaluated under real production constraints.

To understand how the approach worked in practice, we spoke with Andela developers. Below, you’ll learn how they introduced AI into active production systems and identified a model that you can apply, whether you’re experimenting independently or integrating AI tools at work.

The challenges that global developers face today

Developers in regions such as Africa, South America, and Southeast Asia face a distinct set of challenges when it comes to AI skilling and reskilling.

Many developers contend with unreliable connectivity, limited access to high‑performance compute, and the high cost of cloud tools and data—all of which are foundational to learning and practicing modern AI. Training content is often designed for well‑resourced environments, assumes constant internet access, and is rarely localized for language, context, or regional use cases. At the same time, many developers are navigating informal or contract‑based work, leaving little time or financial cushion to invest in reskilling.

Without intentional investment in affordable access, localized learning pathways, and community‑driven ecosystems, the rapid pace of AI advancement risks widening existing inequities—excluding talented developers from across the globe from fully participating in, shaping, and benefiting from the future of AI.

Learning AI inside real work

For most mid-career developers, stepping away from production responsibilities to experiment with AI tools is not realistic. Deadlines continue, systems remain live, and reputations are earned over time. This is why learning has to happen inside real work.

In many organizations, AI tooling is provisioned broadly, and teams are told to experiment. Access is assumed to be enough. But without clarity around which roles benefit most, what jobs are being targeted, and how review standards evolve, adoption can stall or fragment.

Andela took a different approach. Developers were identified based on the relevance of AI to their responsibilities, job profiles were defined, and training programs reflected the actual systems developers were accountable for maintaining.

This is because the team at Andela knows that developers are rarely starting from scratch. More often, they are working inside dense, high-stakes systems where mistakes carry consequences. For many engineers across the globe, access to structured experimentation with emerging tools has not always been guaranteed, which makes learning inside real work both necessary and consequential.

Stephen N’nouka A’ Issah, a React developer from Cameroon who works in Rwanda, assumed early on that AI tools would not perform well under that level of complexity.

I thought it might help with simple things. But I didn’t expect it to work with advanced patterns or legacy code.

Stephen N’nouka A’ Issah, React developer

That skepticism reflected experience. Many developers have seen tools demonstrate well in controlled environments and struggle once deployed in production systems.

Recognizing this reality, Andela chose not to treat AI as a separate discipline or certification exercise removed from day-to-day work. Instead, through its AI Academy, it embedded learning directly into production workflows.

Abraham Omomoh, a learning program manager at Andela, explained the philosophy clearly.

Training has to reflect what developers are actually asked to do at work, not idealized exercises.

Abraham Omomoh, learning program manager at Andela

This way, learning occurs within the same systems developers are already accountable for maintaining.

The first payoff: Faster orientation

One of the earliest benefits developers recognized wasn’t increased output, but faster orientation within unfamiliar systems.

Daniel Nascimento, a senior engineer in Brazil with more than 25 years of experience, described what it’s like to work on legacy code that “nobody wants to touch,” where the real risk isn’t speed so much as unintended consequences.

“The first thing I ask is: what does this project actually do?” he said. “What’s the architecture? What are the weaknesses? What are the strengths?”

To make change safer, he now uses AI tools to generate unit tests before refactoring, creating clearer boundaries for what can be modified without breaking behavior.

Legacy code usually doesn’t have coverage. So I use it to build that coverage first. Then I know what I’m playing with.

Daniel Nascimento, senior engineer

Stephen described a similar pattern when onboarding to unfamiliar systems. In his experience, AI doesn’t replace understanding; it compresses the time it takes to surface intent, architectural patterns, and constraints before making changes. Much of this work involves:

  • Generating tests to understand behavior
  • Drafting refactors to clarify control flow
  • Sketching diagrams to reason about system boundaries

Even then, many suggestions still require cleanup or introduce subtle issues, reinforcing the importance of disciplined reviews.

With AI, confidence compounds

After several weeks of applying AI inside production systems, we could start to measure incremental improvements.

Developers reported:

  • Faster onboarding to unfamiliar systems
  • More confidence taking ownership of ambiguous work
  • Less time spent on setup and more on decisions

Daniel estimated a significant productivity gain, driven largely by working differently.

“Using GitHub Copilot, I boosted my productivity by around 50%,” he said.

But it’s not just speed. It gives me more time to connect with the business and focus on real impact.

Daniel Nascimento

He emphasized that much of that gain came from reducing repetitive overhead rather than replacing core engineering judgment.

For developers who previously lacked structured exposure to AI tooling, that access translated into expanded professional skills. Certifications strengthened their credibility, and AI fluency expanded the scope of work they could take on.

The AI skills gap shows up as access, not ability

This work reinforces a broader pattern: the AI skills gap is, at its core, about structured access to tools, mentorship, and practical enablement.

Developers who adapt faster typically have:

  • Access to modern tools
  • Space to experiment safely
  • Teams aligned on how those tools should be used

Where those conditions exist, learning compounds. Where they don’t, AI impact is limited.

And this also matters for developers across the globe where increased skilling translates to better job and economic opportunities. Koffi Kelvin, an Andela engineer based in Kenya, shared, “GitHub Copilot is a portal that catapulted my professional trajectory into a literal other dimension.”

Between the workflows, security, testing and high-octane pipelines, it’s been less like a career path and more like a rocket launch.

Koffi Kelvin, Andela engineer

Expanding structured access in the Global South isn’t about catching up. Instead, it’s about ensuring that the developers shaping AI-assisted systems reflect the full diversity of global engineering talent.

Everyone benefits with access

When everyone across the globe has structured access to learning, we all benefit from it. AI upskilling is not about chasing hype or predicting the future. It is about learning how to integrate new tools into real systems without stepping away from the job. It allows developers to take on more complex work, contribute more confidently to global teams, and continue building at the edge of modern practice regardless of geography.

When that learning is structured—when access is intentional rather than incidental—it compounds. Sammy Kiogara Mati, an Andela engineer who works on GitHub, shared that “GitHub Copilot has expanded my view of what’s possible for global tech talent.”

AI does not level the playing field on its own. Structured access does.

To start your own AI learning journey,
check out GitHub Learn >


Join or host a GitHub Copilot Dev Days event near you https://github.blog/ai-and-ml/github-copilot/join-or-host-a-github-copilot-dev-days-event-near-you/ Tue, 03 Mar 2026 16:55:00 +0000 https://github.blog/?p=94244 GitHub Copilot Dev Days is a global series of hands-on, in-person, community-led events designed to help developers explore real-world, AI-assisted coding.

The post Join or host a GitHub Copilot Dev Days event near you appeared first on The GitHub Blog.


The way we build software is changing fast. AI is no longer a “someday” tool. It’s reshaping how we plan, write, review, and ship code right now. As products evolve faster than ever, developers are expected to keep up just as quickly. That’s why GitHub Copilot Dev Days exists: for developers to level up together on how they can use AI-assisted coding today.

GitHub Copilot Dev Days is a global series of hands-on, in-person, community-led events designed to help developers explore real-world AI-assisted coding with GitHub Copilot. Join us for the knowledge, stay for the great food, good vibes, and plenty of fun along the way. Find an event near you and register today.

Who is GitHub Copilot Dev Days for?

Anyone and everyone who is looking to improve their development workflow and learn something new! We have events run by and for everyone from professional developers to students. Sessions cover various levels and programming backgrounds.

If it’s your first time trying out AI-assisted development, this event will introduce you to the tools and best practices to succeed from day one. If you’re more advanced, we’re excited to show you the latest tips and tricks to ensure you’re fully up to date.

What to expect from a GitHub Copilot Dev Day

Each event will feature live demos, practical sessions, and interactive workshops with high-quality training content. We will focus on real workflows you can use right away, whether you’re already using Copilot daily or just getting started. Your hosts are development experts: GitHub Stars, Microsoft MVPs, GitHub Campus Experts, Microsoft Student Ambassadors, GitHub and Microsoft employees, to name a few.

We will have training materials covering the GitHub Copilot CLI, Cloud Agent, GitHub Copilot in VS Code, Visual Studio, and other editors, and more! Different events will focus on different topics, so be sure to review the registration page beforehand.

The specific event details will vary, as each community event organizer might tweak the event to fit the interests of their local developer community. Here is a sample agenda:

  • Introductory session: 30-45 minutes on GitHub Copilot.
  • Local community session: 30-45 minutes by a local developer or community leader on relevant topics.
  • Hands-on workshop: 1 hour of coding and practical exercises.

All events are an opportunity to connect with your local developer community, learn something new, and enjoy some snacks and swag!

Events begin in March

Events are now live in cities around the world starting in March. Spots are limited and dates are approaching—now’s the time to grab a seat.

Want to bring GitHub Copilot Dev Days to your user group? Fill out our form.

Find a GitHub Copilot Dev Days event near you and register today >


From idea to pull request: A practical guide to building with GitHub Copilot CLI https://github.blog/ai-and-ml/github-copilot/from-idea-to-pull-request-a-practical-guide-to-building-with-github-copilot-cli/ Fri, 27 Feb 2026 16:00:00 +0000 https://github.blog/?p=94179 A hands-on guide to using GitHub Copilot CLI to move from intent to reviewable changes, and how that work flows naturally into your IDE and GitHub.

The post From idea to pull request: A practical guide to building with GitHub Copilot CLI appeared first on The GitHub Blog.


Most developers already do real work in the terminal.

We initialize projects there, run tests there, debug CI failures there, and make fast, mechanical changes there before anything is ready for review. GitHub Copilot CLI fits into that reality by helping you move from intent to reviewable diffs directly in your terminal—and then carry that work into your editor or pull request.

This blog walks through a practical workflow for using Copilot CLI to create and evolve an application, based on a new GitHub Skills exercise. The Skills exercise provides a guided, hands-on walkthrough; this post focuses on why each step works and when to use it in real projects.

What Copilot CLI is (and is not)

Copilot CLI is a GitHub-aware coding agent in your terminal. You can describe what you want in natural language, use /plan to outline the work before touching code, and then review concrete commands or diffs before anything runs. Copilot may reason internally, but it only executes commands or applies changes after you explicitly approve them. 

In practice, Copilot CLI helps you:

  • Explore a problem based on your intent
  • Propose structured plans using /plan (or you can hit Shift + Tab to enter planning mode), or suggest concrete commands and diffs you can review
  • Generate or modify files
  • Explain failures where they occur

What it does not do:

  • Silently run commands or apply changes without your approval
  • Replace careful design work
  • Eliminate the need for review

You stay in control of what runs, what changes, and what ships.

Step 1: Start with intent, not scaffolding

Instead of starting by choosing a framework or copying a template, start by stating what you want to build.

From an empty directory, run:

copilot
> Create a small web service with a single JSON endpoint and basic tests

If you want to generate a proposal in a single prompt instead of entering interactive mode, you can also run:

copilot -p "Create a small web service with a single JSON endpoint and basic tests"

In the Skills exercise, this pattern is used repeatedly: describe intent first, then decide which suggested commands you actually want to run.

At this stage, Copilot CLI is exploring the problem space. It may:

  • Suggest a stack
  • Outline files
  • Propose setup commands

Nothing runs automatically. You inspect everything before deciding what to execute. This makes the CLI a good place to experiment before committing to a design.

Step 2: Scaffold only what you’re ready to own

Once you see a direction you’re comfortable with, ask Copilot CLI to help scaffold:

> Scaffold this as a minimal Node.js project with a test runner and README

This is where Copilot CLI is most immediately useful. It can:

  • Create directories and config,
  • Wire basic project structure,
  • Generate boilerplate you would otherwise type or copy by hand.

Copilot CLI does not “own” the project structure. It suggests scaffolding based on common conventions, which you should treat as a starting point, not a prescription.

The important constraint is that you’re always responsible for the result. Treat the output like code from a teammate: review it, edit it, or discard it.

Step 3: Iterate at the point of failure

Run your tests directly inside Copilot CLI:

> Run all my tests and make sure they pass

When something fails, ask Copilot about that exact failure in the same session:

> Why are these tests failing?

If you want a concrete proposal instead of an explanation, try:

> Fix this test failure and show the diff

This pattern—run (!command), inspect, ask, review diff—keeps the agent grounded in real output instead of abstract prompts.

💡 Pro tip: In practice, explain is useful when you want understanding, while suggest is better when you want a concrete proposal you can review. Learn more about slash commands in Copilot CLI in our guide.

Step 4: Make mechanical or repo-wide changes

Copilot CLI is also well suited to changes that are easy to describe but tedious to execute:

> Rename all instances of X to Y across the repository and update tests

Because these changes are mechanical and scoped, they’re easy to review and easy to roll back. The CLI gives you a concrete diff instead of a wall of generated text.

Step 5: Move into your editor when you need to start shaping your code

Eventually, speed matters less than precision.

This is the natural handoff point to your editor or IDE, so it can:

  • Reason about edge cases
  • Refine APIs
  • Make design decisions

Copilot works there too, but the key point is why you switch environments. The CLI helps you quickly get to something real. The IDE is where you can shape your code into exactly what you want. 

A good rule of thumb: 

  • CLI: use /plan, generate a /diff, and move quickly with low ceremony
  • IDE: use /IDE when you need to refine logic and make decisions you’ll defend in review
  • GitHub: commit, open a pull request with the command /delegate, and collaborate asynchronously

Step 6: Ship on GitHub

Once the changes look good, commit and open a pull request which you can do through the Copilot CLI in natural language:

> Add and commit all files with applicable, descriptive messages, then push the changes.
> Create a pull request and add Copilot as a reviewer

Now the work becomes durable:

  • Reviewable by teammates
  • Testable in CI
  • Ready for async iteration

This is where Copilot’s value compounds: it works as part of a flow that ends with shipping, rather than as a single surface. The Skills exercise intentionally ends here, because this is where that value becomes durable: in commits, pull requests, and review (not just suggestions).

One workflow, three moments

A helpful mental model for Copilot looks like this:

  • CLI: prove value quickly with low ceremony
  • IDE: shape and refine your code
  • GitHub: review, collaborate, and ship

Copilot CLI is powerful precisely because it fits into this system instead of trying to replace it.

Take this with you

Copilot CLI is most useful when you treat it like a tool for momentum, not a replacement for judgment.

Used well, it helps you move from intent to concrete changes faster: exploring ideas, scaffolding projects, diagnosing failures, and handling mechanical work directly in the terminal. When precision matters, you move into your editor. When the work is ready to share, it lands on GitHub as a pull request—reviewable, testable, and shippable.

That flow matters more than any single command.

If you take one thing away from this guide, it’s this: Copilot works best when it fits naturally into how developers already build software. Start in the CLI to get unstuck or move quickly, slow down in the IDE to make decisions you can stand behind, and rely on GitHub to make the work durable.

Get started with GitHub Copilot CLI or take the Skills course >


]]>
What’s new with GitHub Copilot coding agent https://github.blog/ai-and-ml/github-copilot/whats-new-with-github-copilot-coding-agent/ Thu, 26 Feb 2026 20:47:02 +0000 https://github.blog/?p=94157 GitHub Copilot coding agent now includes a model picker, self-review, built-in security scanning, custom agents, and CLI handoff. Here's what's new and how to use it.

The post What’s new with GitHub Copilot coding agent appeared first on The GitHub Blog.

]]>

You open an issue before lunch. By the time you’re back, there’s a pull request waiting.

That’s what GitHub Copilot coding agent is built for. It works in the background, fixing bugs, adding tests, cleaning up debt, and comes back with a pull request when it’s done. While you’re writing code in your editor with Copilot in real time, the coding agent is handling the work you’ve delegated.

A few recent updates make that handoff more useful. Here’s what shipped and how to start using it.


Choose the right model for each task

The Agents panel now includes a model picker.

Before, every background task ran on a single default model. You couldn’t choose a more capable model for harder work or prioritize speed on routine tasks.

Now you can. Use a faster model for straightforward work like adding unit tests. Upgrade your model for a gnarly refactor or integration tests with real edge cases. If you’d rather not think about it, leave it on auto.

To get started:

  • Open the Agents panel (top-right in GitHub), select your repo, and pick a model.
  • Write a clear prompt and kick off the task.
  • Leave the model on auto if you’d rather let GitHub choose.

Model selection is available for Copilot Pro and Pro+ users now, with support for Business and Enterprise coming soon.

Learn more about model selection with Copilot coding agent. 👉

Pull requests that arrive in better shape

The painful part of reviewing agent output has always been the cleanup. You open the diff and there it is: logic that technically works, but nobody would write it that way.

Copilot coding agent now reviews its own changes using Copilot code review before it opens the pull request. It gets feedback, iterates, and improves the patch. By the time you’re tagged for review, someone already went through it.

In one session, the agent caught that its own string concatenation was overly complex and fixed it before the pull request landed. That kind of thing used to be your problem.

To get started:

  • Assign an issue to Copilot or create a task from the Agents panel.
  • Click into the task to view the logs.
  • See the moments where the agent ran Copilot code review and applied feedback.

Review the pull request when prompted. Copilot requests your review only after it has iterated.

Learn more about Copilot code review + Copilot coding agent. 👉

Security checks that run while the agent works

Just like with human-generated code, AI-generated code can introduce real risks: vulnerable patterns, secrets accidentally committed, dependencies with known CVEs. The difference is it does it faster. And you really don’t want to find that in review.

Copilot coding agent now runs code scanning, secret scanning, and dependency vulnerability checks directly inside its workflow. If a dependency has a known issue, or something looks like a committed API key, it gets flagged before the pull request opens.

Code scanning is normally part of GitHub Advanced Security. With Copilot coding agent, you get it for free.

To get started:

  • Run any task through the Agents panel.
  • Check the session logs as it runs. You’ll see scanning entries as the agent works.
  • Review the pull request. It’s already been through the security filter.

Learn more about security scanning in Copilot coding agent. 👉

Custom agents that follow your team’s process

A short prompt leaves a lot to judgment. And that judgment isn’t always consistent with how your team actually works.

Custom agents let you codify it. Create a file under .github/agents/ and define a specific approach. A performance optimizer agent, for example, can be wired to benchmark first, make the change, then measure the difference before opening a pull request.

In a recent GitHub Checkout demo, that’s exactly what happened. The agent benchmarked a lookup, made a targeted fix, and came back with a 99% improvement on that one function. Small scope, real data, no guessing.

You can share custom agents across your org or enterprise too, so the same process applies everywhere teams are using the coding agent.

To get started:

  • Create an agent file under .github/agents/ in your repo.
  • Open the Agents panel and start a new task.
  • Select your custom agent from the options.
  • Write a prompt scoped to what that agent does.

Learn more about creating custom agents. 👉

Move between cloud and local without losing context

Sometimes you start something in the cloud and want to finish it locally. Sometimes you’re deep in your terminal and want to hand something off without losing your flow. Either way, switching contexts used to mean starting the conversation over.

Now it doesn’t. Pull a cloud session into your terminal and you get the branch, the logs, and the full context. Or press & in the CLI to push work back to the cloud and keep going on your end.

To get started:

  • Start a task with Copilot coding agent and wait for the session to appear.
  • Click “Continue in Copilot CLI” and copy the command.
  • Paste it in your terminal to load the session locally with branch, logs, and context intact.
  • Press the ampersand symbol (&) in the CLI to delegate work back to the cloud and keep going locally.

Learn more about Copilot coding agent + CLI handoff. 👉

What this adds up to

Copilot coding agent has come a long way. Model selection, self-review, security scanning, custom agents, CLI handoff—and that’s just what shipped recently. The team is actively working on private mode, planning before coding, and using the agent for things that don’t even need a pull request, like summarizing issues or generating reports. There’s a lot more coming. Stay tuned.

Share feedback on what ships next in GitHub Community discussions.

Get started with GitHub Copilot coding agent >


]]>
Multi-agent workflows often fail. Here’s how to engineer ones that don’t. https://github.blog/ai-and-ml/generative-ai/multi-agent-workflows-often-fail-heres-how-to-engineer-ones-that-dont/ Tue, 24 Feb 2026 16:00:00 +0000 https://github.blog/?p=94039 Most multi-agent workflow failures come down to missing structure, not model capability. Learn the three engineering patterns that make agent systems reliable.

The post Multi-agent workflows often fail. Here’s how to engineer ones that don’t. appeared first on The GitHub Blog.

]]>

If you’ve built a multi-agent workflow, you’ve probably seen it fail in a way that’s hard to explain.

The system completes, and agents take actions. But somewhere along the way, something subtle goes wrong. You might see an agent close an issue that another agent just opened, or ship a change that fails a downstream check it didn’t know existed.

That’s because the moment agents begin handling related tasks—triaging issues, proposing changes, running checks, and opening pull requests—they start making implicit assumptions about state, ordering, and validation. Without providing explicit instructions, data formats, and interfaces, things won’t go the way you planned. 

Through our work on agentic experiences at GitHub across GitHub Copilot, internal automations, and emerging multi-agent orchestration patterns, we’ve seen multi-agent systems behave much less like chat interfaces and much more like distributed systems.

This post is for engineers building multi-agent systems. We’ll walk through the most common reasons they fail and the engineering patterns that make them more reliable.

1. Natural language is messy. Typed schemas make it reliable.

Multi-agent workflows often fail early because agents exchange messy language or inconsistent JSON. Field names change, data types don’t match, formatting shifts, and nothing enforces consistency.

Just like establishing contracts early in development helps teams collaborate without stepping on each other, typed interfaces and strict schemas add structure at every boundary. Agents pass machine-checkable data, invalid messages fail fast, and downstream steps don’t have to guess what a payload means.

Most teams start by defining the data shape they expect agents to return:

type UserProfile = {
  id: number;
  email: string;
  plan: "free" | "pro" | "enterprise";
};

This changes debugging from “inspect logs and guess” to “this payload violated schema X.” Treat schema violations like contract failures: retry, repair, or escalate before bad state propagates.
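To make that concrete, here is a dependency-free sketch of failing fast at a boundary. It re-declares the UserProfile shape from above; the `isUserProfile` and `acceptPayload` names are illustrative, and a real system would typically reach for a schema library like zod:

```typescript
// Validate an agent payload against the UserProfile shape before it
// crosses a boundary. Helper names here are illustrative.
type UserProfile = {
  id: number;
  email: string;
  plan: "free" | "pro" | "enterprise";
};

function isUserProfile(value: unknown): value is UserProfile {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.id === "number" &&
    typeof v.email === "string" &&
    (v.plan === "free" || v.plan === "pro" || v.plan === "enterprise")
  );
}

// Fail fast: a payload that violates the schema never propagates downstream.
function acceptPayload(raw: string): UserProfile {
  const parsed: unknown = JSON.parse(raw);
  if (!isUserProfile(parsed)) {
    throw new Error("payload violated schema UserProfile");
  }
  return parsed;
}
```

On a violation, the caller can retry the agent, attempt a repair, or escalate, which is exactly the contract-failure handling described above.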

The bottom line: Typed schemas are table stakes in multi-agent workflows. Without them, nothing else works. See how GitHub Models enable structured, repeatable AI workflows in real projects. 👉

2. Vague intent breaks agents. Action schemas make it clear.

Even with typed data, multi-agent workflows still fail because LLMs don’t follow implied intent, only explicit instructions.

“Analyze this issue and help the team take action” sounds clear. But different agents may close, assign, escalate, or do nothing—each reasonable, none automatable.

Action schemas fix this by defining the exact set of allowed actions and their structure. Not every step needs structure, but the outcome must always resolve to a small, explicit set of actions.

Here’s what an action schema might look like:

import { z } from "zod";

const ActionSchema = z.discriminatedUnion("type", [
  z.object({ type: z.literal("request-more-info"), missing: z.array(z.string()) }),
  z.object({ type: z.literal("assign"), assignee: z.string() }),
  z.object({ type: z.literal("close-as-duplicate"), duplicateOf: z.number() }),
  z.object({ type: z.literal("no-action") }),
]);

With this in place, agents must return exactly one valid action. Anything else fails validation and is retried or escalated.
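To show what that enforcement looks like without any schema library, here is a dependency-free sketch. The action names mirror the example above; the `parseAction` helper is illustrative, and a real system would typically use zod or similar:

```typescript
// Dependency-free counterpart to the action schema above: accept exactly
// one valid action, return null for anything else so the caller can retry
// or escalate instead of executing a malformed action.
type Action =
  | { type: "request-more-info"; missing: string[] }
  | { type: "assign"; assignee: string }
  | { type: "close-as-duplicate"; duplicateOf: number }
  | { type: "no-action" };

function parseAction(value: unknown): Action | null {
  if (typeof value !== "object" || value === null) return null;
  const v = value as Record<string, unknown>;
  switch (v.type) {
    case "request-more-info": {
      const missing = v.missing;
      return Array.isArray(missing) && missing.every((m) => typeof m === "string")
        ? { type: "request-more-info", missing: missing as string[] }
        : null;
    }
    case "assign":
      return typeof v.assignee === "string"
        ? { type: "assign", assignee: v.assignee }
        : null;
    case "close-as-duplicate":
      return typeof v.duplicateOf === "number"
        ? { type: "close-as-duplicate", duplicateOf: v.duplicateOf }
        : null;
    case "no-action":
      return { type: "no-action" };
    default:
      // Unknown or missing action type: never execute, retry or escalate.
      return null;
  }
}
```

The key property is the closed set: an agent that invents a new action type, or omits a required field, produces `null` rather than a side effect.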

The bottom line: Most agent failures are action failures. For reducing ambiguity even earlier in the workflow—at the instruction level—this guide to writing effective custom instructions is helpful. 👉

3. Loose interfaces create errors. MCP adds the structure agents need.

Typed schemas, constrained actions, and structured reasoning only work if they’re consistently enforced. Without enforcement, they’re conventions, not guarantees.

Model Context Protocol (MCP) is the enforcement layer that turns these patterns into contracts.

MCP defines explicit input and output schemas for every tool and resource, validating calls before execution.

{
  "name": "create_issue",
  "input_schema": { ... },
  "output_schema": { ... }
}

With MCP, agents can’t invent fields, omit required inputs, or drift across interfaces. Validation happens before execution, which prevents bad state from ever reaching production systems.
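To illustrate the idea, here is a toy, hand-rolled version of pre-execution validation. This is not the MCP SDK; the `create_issue` fields and helper names are made up for the example:

```typescript
// Toy MCP-style gate: check a tool call against a declared input schema
// before the call is allowed to execute. Illustrative only.
type FieldType = "string" | "number";
type InputSchema = Record<string, { type: FieldType; required: boolean }>;

const createIssueSchema: InputSchema = {
  title: { type: "string", required: true },
  body: { type: "string", required: false },
};

function validateCall(schema: InputSchema, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const [name, rule] of Object.entries(schema)) {
    const value = args[name];
    if (value === undefined) {
      if (rule.required) errors.push(`missing required input: ${name}`);
      continue;
    }
    if (typeof value !== rule.type) errors.push(`wrong type for input: ${name}`);
  }
  for (const name of Object.keys(args)) {
    // Agents can't invent fields the schema doesn't declare.
    if (!(name in schema)) errors.push(`undeclared input: ${name}`);
  }
  return errors;
}
```

A non-empty error list means the call is rejected before execution, so bad state never reaches the system behind the tool.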

The bottom line: Schemas define structure whereas action schemas define intent. MCP enforces both. Learn more about how MCP works and why it matters. 👉

Moving forward together

Multi-agent systems work when structure is explicit. When you add typed schemas, constrained actions, and structured interfaces enforced by MCP, agents start behaving like reliable system components.

The shift is simple but powerful: treat agents like code, not chat interfaces.

Learn how MCP enables structured, deterministic agent-tool interactions. 👉


]]>
Automate repository tasks with GitHub Agentic Workflows   https://github.blog/ai-and-ml/automate-repository-tasks-with-github-agentic-workflows/ Fri, 13 Feb 2026 14:00:00 +0000 https://github.blog/?p=93730 Discover GitHub Agentic Workflows, now in technical preview. Build automations using coding agents in GitHub Actions to handle triage, documentation, code quality, and more.

The post Automate repository tasks with GitHub Agentic Workflows   appeared first on The GitHub Blog.

]]>

Imagine visiting your repository in the morning and feeling calm because you see:

  • Issues triaged and labelled
  • CI failures investigated with proposed fixes
  • Documentation updated to reflect recent code changes
  • Two new pull requests that improve testing, awaiting your review

All of it visible, inspectable, and operating within the boundaries you’ve defined.

That’s the future powered by GitHub Agentic Workflows: automated, intent-driven repository workflows that run in GitHub Actions, authored in plain Markdown and executed with coding agents. They’re designed for people working in GitHub, from individuals automating a single repo to teams operating at enterprise or open-source scale.

At GitHub Next, we began GitHub Agentic Workflows as an investigation into a simple question: what does repository automation with strong guardrails look like in the era of AI coding agents? A natural place to start was GitHub Actions, the heart of scalable repository automation on GitHub. By bringing automated coding agents into actions, we can enable their use across millions of repositories, while keeping decisions about when and where to use them in your hands.

GitHub Agentic Workflows are now available in technical preview. In this post, we’ll explain what they are and how they work. We invite you to put them to the test, to explore where repository-level AI automation delivers the most value.

“Home Assistant has thousands of open issues. No human can track what’s trending or which problems affect the most users. I’ve built GitHub Agentic Workflows that analyze issues and surface what matters: that’s the kind of judgment amplification that actually helps maintainers.”
- Franck Nijhof, lead of the Home Assistant project, one of the top projects on GitHub by contributor count

Agentic workflows also allow maintainers and community to experiment with repository automation together.

“Adopting GitHub’s Agentic Workflows has lowered the barrier for experimentation with AI tooling, making it significantly easier for staff, maintainers and newcomers alike. Inside of CNCF, we are benefiting from improved documentation automation along with improving team reporting across the organization. This isn’t just a technical upgrade for our community, it’s part of a cultural shift that empowers our ecosystem to innovate faster with AI and agentic tooling.”
- Chris Aniszczyk, CTO of the Cloud Native Computing Foundation (CNCF), whose mission is to make cloud native computing ubiquitous across the world

Enterprises are seeing similar benefits at scale.

“With GitHub Agentic Workflows, we’re able to expand how we apply agents to real engineering work at scale, including changes that span multiple repositories. The flexibility and built-in controls give us confidence to leverage Agentic Workflows across complex systems at Carvana.”
- Alex Devkar, Senior Vice President, Engineering and Analytics, at Carvana

AI repository automation: A revolution through simplicity 

The concept behind GitHub Agentic Workflows is straightforward: you describe the outcomes you want in plain Markdown, add this as an automated workflow to your repository, and it executes using a coding agent in GitHub Actions.

This brings the power of coding agents into the heart of repository automation. Agentic workflows run as standard GitHub Actions workflows, with added guardrails for sandboxing, permissions, control, and review. When they execute, they can use different coding agent engines—such as Copilot CLI, Claude Code, or OpenAI Codex—depending on your configuration.

The use of GitHub Agentic Workflows makes entirely new categories of repository automation and software engineering possible, in a way that fits naturally with how developer teams already work on GitHub. All of them would be difficult or impossible to accomplish with traditional YAML workflows alone:

  1. Continuous triage: automatically summarize, label, and route new issues.
  2. Continuous documentation: keep READMEs and documentation aligned with code changes.
  3. Continuous code simplification: repeatedly identify code improvements and open pull requests for them.
  4. Continuous test improvement: assess test coverage and add high-value tests.
  5. Continuous quality hygiene: proactively investigate CI failures and propose targeted fixes.
  6. Continuous reporting: create regular reports on repository health, activity, and trends.

These are just a few examples of repository automations that showcase the power of GitHub Agentic Workflows. We call this Continuous AI: the integration of AI into the SDLC, enhancing automation and collaboration similar to continuous integration and continuous deployment (CI/CD) practices.

GitHub Agentic Workflows and Continuous AI are designed to augment existing CI/CD rather than replace it. They do not replace build, test, or release pipelines, and their use cases largely do not overlap with deterministic CI/CD workflows. Agentic workflows run on GitHub Actions because that is where GitHub provides the necessary infrastructure for permissions, logging, auditing, sandboxed execution, and rich repository context.

In our own usage at GitHub Next, we’re finding new uses for agentic workflows nearly every day. Throughout GitHub, teams have been using agentic workflows to create custom tools for themselves in minutes, replacing chores with intelligence or paving the way for humans to get work done by assembling the right information, in the right place, at the right time. A new world of possibilities is opening for teams and enterprises to keep their repositories healthy, navigable, and high-quality.

Let’s talk guardrails and control 

Designing for safety and control is non-negotiable. GitHub Agentic Workflows implements a defense-in-depth security architecture that protects against unintended behaviors and prompt-injection attacks.

Workflows run with read-only permissions by default. Write operations require explicit approval through safe outputs, which map to pre-approved, reviewable GitHub operations such as creating a pull request or adding a comment to an issue. Sandboxed execution, tool allowlisting, and network isolation help ensure that coding agents operate within controlled boundaries.
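As a mental model (a sketch, not gh-aw’s actual implementation), safe outputs behave like an allowlist filter over the operations an agent proposes; the operation names and shapes below are hypothetical:

```typescript
// Sketch of the safe-outputs idea: the agent runs read-only and *proposes*
// operations; only operations on a pre-approved allowlist are applied.
type ProposedOp =
  | { kind: "create-issue"; title: string; labels: string[] }
  | { kind: "add-comment"; issue: number; body: string }
  | { kind: "delete-branch"; name: string }; // proposable, but not allowlisted below

const allowlist = new Set(["create-issue", "add-comment"]);

function filterSafeOutputs(ops: ProposedOp[]): ProposedOp[] {
  // Anything outside the allowlist is dropped before any write happens.
  return ops.filter((op) => allowlist.has(op.kind));
}
```

Because the write path is enumerated up front, a prompt-injected or misbehaving agent can at worst propose an operation that the guardrail silently refuses to apply.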

Guardrails like these make it practical to run agents continuously, not just as one-off experiments. See our security architecture for more details.

One alternative approach to agentic repository automation is to run coding agent CLIs, such as Copilot or Claude, directly inside a standard GitHub Actions YAML workflow. This approach often grants these agents more permission than is required for a specific task. In contrast, GitHub Agentic Workflows run coding agents with read-only access by default and rely on safe outputs for GitHub operations, providing tighter constraints, clearer review points, and stronger overall control.

A simple example: A daily repo report  

Let’s look at an agentic workflow which creates a daily status report for repository maintainers.

In practice, you will usually use AI assistance to create your workflows. The easiest way to do this is with an interactive coding agent. For example, with your favorite coding agent, you can enter this prompt:

Generate a workflow that creates a daily repo status report for a maintainer. Use the instructions at https://github.com/github/gh-aw/blob/main/create.md

The coding agent will interact with you to confirm your specific needs and intent, write the Markdown file, and check its validity. You can then review, refine, and validate the workflow before adding it to your repository.

This will create two files in .github/workflows:

  • daily-repo-status.md (the agentic workflow)  
  • daily-repo-status.lock.yml (the corresponding agentic workflow lock file, which is executed by GitHub Actions) 

The file daily-repo-status.md will look like this: 

--- 
on: 
  schedule: daily 
 
permissions: 
  contents: read 
  issues: read 
  pull-requests: read 
 
safe-outputs: 
  create-issue: 
    title-prefix: "[repo status] " 
    labels: [report] 
 
tools: 
  github: 
---  
 
# Daily Repo Status Report 
 
Create a daily status report for maintainers. 
 
Include 
- Recent repository activity (issues, PRs, discussions, releases, code changes) 
- Progress tracking, goal reminders and highlights 
- Project status and recommendations 
- Actionable next steps for maintainers 
 
Keep it concise and link to the relevant issues/PRs.

This file has two parts: 

  1. Frontmatter (YAML between --- markers) for configuration 
  2. Markdown instructions that describe the job in natural language

The Markdown is the intent, but the trigger, permissions, tools, and allowed outputs are spelled out up front.

If you prefer, you can add the workflow to your repository manually: 

  1. Create the workflow: Add  daily-repo-status.md with the frontmatter and instructions.
  2. Create the lock file:  
    • gh extension install github/gh-aw  
    • gh aw compile
  3. Commit and push: Commit and push files to your repository.
  4. Add any required secrets: For example, add a token or API key for your coding agent.

Once you add this workflow to your repository, it will run automatically or you can trigger it manually using GitHub Actions. When the workflow runs, it creates a status report issue like this:

Screenshot of a GitHub issue titled "Daily Repo Report - February 9, 2026" showing key highlights, including 2 new releases, 1,737 commits from 16 contributors, 100 issues closed with 190 new issues opened, 50 pull requests merged from 93 opened pull requests, and 5 code quality issues opened.

What you can build with GitHub Agentic Workflows 

If you’re looking for further inspiration, Peli’s Agent Factory is a guided tour through a wide range of workflows, with practical patterns you can adapt, remix, and standardize across repos.

A useful mental model: if repetitive work in a repository can be described in words, it might be a good fit for an agentic workflow.

If you’re looking for design patterns, check out ChatOps, DailyOps, DataOps, IssueOps, ProjectOps, MultiRepoOps, and Orchestration.

Uses for agent-assisted repository automation often depend on particular repos and development priorities. Your team’s approach to software development will differ from those of other teams. It pays to be imaginative about how you can use agentic automation to augment your team, in your repositories, toward your goals.

Practical guidance for teams 

Agentic workflows bring a shift in thinking. They work best when you focus on goals and desired outputs rather than perfect prompts. You provide clarity on what success looks like, and allow the workflow to explore how to achieve it. Some boundaries are built into agentic workflows by default, and others are ones you explicitly define. This means the agent can explore and reason, but its conclusions always stay within safe, intentional limits.

You will find that your workflows can range from very general (“Improve the software”) to very specific (“Check that all technical documentation and error messages for this educational software are written in a style suitable for an audience of age 10 or above”). You can choose the level of specificity that’s appropriate for your team.

GitHub Agentic Workflows use coding agents at runtime, which incur billing costs. When using Copilot with default settings, each workflow run typically incurs two premium requests: one for the agentic work and one for a guardrail check through safe outputs. The models used can be configured to help manage these costs. Today, automated uses of Copilot are associated with a user account. For other coding agents, refer to our documentation for details.

Here are a few more tips to help teams get value quickly:

  • Start with low-risk outputs such as comments, drafts, or reports before enabling pull request creation.
  • For coding, start with goal-oriented improvements such as routine refactoring, test coverage, or code simplification rather than feature work.
  • For reports, use instructions that are specific about what “good” looks like, including format, tone, links, and when to stop.
  • Agentic workflows create an agent-only sub-loop that can run autonomously because the agents act within defined boundaries. But it’s important that humans stay in the broader loop of forward progress in the repository, through reports, issues, and pull requests. With GitHub Agentic Workflows, pull requests are never merged automatically, and humans must always review and approve.
  • Treat the workflow Markdown as code. Review changes, keep it small, and evolve it intentionally.

Continuous AI works best when you use it in conjunction with CI/CD. Don’t use agentic workflows as a replacement for GitHub Actions YAML workflows for CI/CD. Instead, they extend continuous automation to more subjective, repetitive tasks that traditional CI/CD struggles to express.

Build the future of automation with us   

GitHub Agentic Workflows are available now in technical preview and are a collaboration between GitHub, Microsoft Research, and Azure Core Upstream. We invite you to try them out and help us shape the future of repository automation.

We’d love for you to be involved! Share your thoughts in the Community discussion, or join us (and tons of other awesome makers) in the #agentic-workflows channel of the GitHub Next Discord. We look forward to seeing what you build with GitHub Agentic Workflows. Happy automating!

Try GitHub Agentic Workflows in a repo today! Install gh-aw, add a starter workflow or create one using AI, and run it. Then, share what you build (and what you want next)!


]]>