From pixels to characters: The engineering behind GitHub Copilot CLI’s animated ASCII banner
https://github.blog/engineering/from-pixels-to-characters-the-engineering-behind-github-copilot-clis-animated-ascii-banner/ (Wed, 28 Jan 2026)

Learn how GitHub built an accessible, multi-terminal-safe ASCII animation for the Copilot CLI using custom tooling, ANSI color roles, and advanced terminal engineering.

The post From pixels to characters: The engineering behind GitHub Copilot CLI’s animated ASCII banner appeared first on The GitHub Blog.


Most people think ASCII art is simple, and a nostalgic remnant of the early internet. But when the GitHub Copilot CLI team asked for a small entrance banner for the new command-line experience, they discovered the opposite: An ASCII animation in a real-world terminal is one of the most constrained UI engineering problems you can take on.

Part of what makes this even more interesting is the moment we’re in. Over the past year, CLIs have seen a surge of investment as AI-assisted and agentic workflows move directly into the terminal. But unlike the web—where design systems, accessibility standards, and rendering models are well-established—the CLI world is still fragmented. Terminals behave differently, have few shared standards, and offer almost no consistent accessibility guidelines. That reality shaped every engineering decision in this project.

Different terminals interpret ANSI color codes differently. Screen readers treat fast-changing characters as noise. Layout engines vary. Buffers flicker. Some users override global colors for accessibility. Others throttle redraw speed. There is no canvas, no compositor, no consistent rendering model, and no standard animation framework.

So when an animated Copilot mascot flew into the terminal, it looked playful. But behind it was serious engineering work: unexpected complexity, a custom design toolchain, and a tight pairing between a designer and a long-time CLI engineer.

That complexity only became fully visible once the system was built. In the end, animating a three-second ASCII banner required over 6,000 lines of TypeScript—most of it dedicated not to visuals, but to handling terminal inconsistencies, accessibility constraints, and maintainable rendering logic.

This is the technical story of how it came together.

Why animated ASCII is a hard engineering problem

Before diving into the build process, it’s worth calling out why this problem space is more advanced than it looks.

Terminals don’t have a canvas

Unlike browsers (DOM), native apps (views), or graphics frameworks (GPU surfaces), terminals treat output as a stream of characters. There’s no native concept of:

  • Frames
  • Sprites
  • Z-index
  • Rasterized pixels
  • Animation tick rates

Because of this, every “frame” has to be manually repainted using cursor movements and redraw commands. There’s no compositor smoothing anything over behind the scenes. Everything is stdout writes + ANSI control sequences.
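As a minimal illustrative sketch (not the Copilot CLI's actual code), a hand-rolled "repaint" really is just concatenating ANSI control sequences with the frame text and writing the result to stdout:

```typescript
// Illustrative helper: build the byte sequence a terminal receives
// for one manual repaint. There is no compositor — just these bytes.
const HIDE_CURSOR = "\x1b[?25l";  // avoid a visible cursor "ghost" mid-paint
const CURSOR_HOME = "\x1b[H";     // move cursor to row 1, column 1
const CLEAR_SCREEN = "\x1b[2J";   // erase the visible screen

function paintFrame(frame: string): string {
  // Hide cursor, home it, clear, then write the frame content.
  return HIDE_CURSOR + CURSOR_HOME + CLEAR_SCREEN + frame;
}

// In a real loop you would call this on every tick:
// process.stdout.write(paintFrame(frames[i]));
```

Everything an animation "engine" would normally abstract away — frame pacing, damage tracking, double buffering — has to be layered on top of writes like this.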

ANSI escape codes are inconsistent, and terminal color is its own engineering challenge

ANSI escape codes like \x1b[35m (bright magenta) or \x1b[H (cursor home) behave differently across terminals—not just in how they render, but in whether they’re supported at all. Some environments (like Windows Command Prompt or older versions of PowerShell) have limited or no ANSI support without extra configuration.

But even in terminals that do support ANSI, the hardest part isn’t the cursor movement. It’s the colors.

When you’re building a CLI, you realistically have three approaches:

  1. Use no color at all. This guarantees broad compatibility, but makes it harder to highlight meaning or guide users’ attention—especially in dense CLI output.
  2. Use richer color modes (3-bit, 4-bit, 8-bit, or truecolor) that aren’t uniformly supported or customizable. This introduces a maintenance headache: Different terminals, themes, and accessibility profiles render the same color codes differently, and users often disagree about what “good” colors look like.
  3. Use a minimal, customizable palette (usually 4-bit colors) that most terminals allow users to override in their preferences. This is the safest path, but it limits how accurately you can represent a brand palette—and it forces you to design for environments with widely varying contrast and theme choices.
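To make the tradeoff concrete, here is a rough sketch (not the team's code) of how the same "magenta" is expressed in each mode. Only the 4-bit form maps to a named slot that most terminals let users re-theme; the 8-bit and truecolor forms pin the output to values the user generally cannot override:

```typescript
const ESC = "\x1b[";
const RESET = `${ESC}0m`;

// 1. 4-bit: one of 16 named slots; users can typically re-theme these.
const magenta4bit = `${ESC}35m`;

// 2. 8-bit: index into a fixed 256-color table; rarely user-customizable.
const magenta8bit = `${ESC}38;5;201m`;

// 3. Truecolor: exact RGB, but support varies widely across terminals.
const magenta24bit = `${ESC}38;2;255;0;255m`;

function colorize(text: string, code: string): string {
  // Wrap text in a color code and reset afterward.
  return `${code}${text}${RESET}`;
}
```

Option 3 in the list above corresponds to always emitting codes like `magenta4bit`, accepting that the exact hue is ultimately the user's choice.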

For the Copilot CLI animation, this meant treating color as a semantic system, not a literal one: Instead of committing to specific RGB values, the team mapped high-level “roles” (eyes, goggles, shadow, border) to ANSI colors that degrade gracefully across different terminals and accessibility settings.

Accessibility is a first-class concern

Terminals are used by developers with a wide range of visual abilities—not just blind users with screen readers, but also low-vision users, color-blind users, and anyone working in high-contrast or customized themes.

That means:

  • Rapid re-renders can create auditory clutter for screen readers
  • Color-based meaning must degrade safely, since bold, dim, or subtle hues may not be perceivable
  • Low-vision users may not see contrast differences that designers expect
  • Animations must be opt-in, not automatic
  • Clearing sequences must avoid confusing assistive technologies

This is also why the Copilot CLI animation ended up behind an opt-in flag early on—accessibility constraints shaped the architecture from the start. 

These constraints guided every decision in the Copilot CLI animation. The banner had to work when colors were overridden, when contrast was limited, and even when the animation itself wasn’t visible.

Ink (React for the terminal) helps, but it’s not an animation engine

Ink lets you build terminal interfaces using React components, but:

  • It re-renders on every state change
  • It doesn’t manage frame deltas
  • It doesn’t synchronize with terminal paint cycles
  • It doesn’t solve flicker or cursor ghosting

Which meant animation logic had to be handcrafted.

Frame-based ASCII animation has no existing workflow for designers

There are tools for ASCII art, but virtually none for:

  • Frame-by-frame editing
  • Multi-color ANSI previews
  • Exporting color roles
  • Generating Ink-ready components
  • Testing contrast and accessibility

Even existing ANSI preview tools don’t simulate how different terminals remap colors or handle cursor updates, which makes accurate design iteration almost impossible without custom tooling. So the team had to build one.

Part 1: A request that didn’t fit any workflow

Cameron Foxly (@cameronfoxly), a brand designer at GitHub with a background in animation, was asked to create a banner for the Copilot CLI.

“Normally, I’d build something in After Effects and hand off assets,” Cameron said. “But engineers didn’t have the time to manually translate animation frames into a CLI. And honestly, I wanted something more fun.”

He’d seen the static ASCII intro in Claude Code and knew Copilot deserved more personality.

The 3D Copilot mascot flying in to reveal the CLI logo felt right. But after attempting to create just one frame manually, the idea quickly ran into reality.

“It was a nightmare,” Cameron said. “If this is going to exist, I need to build my own tool.”

Part 2: Building an ASCII animation editor from scratch

Cameron opened an empty repository in VS Code, and began asking GitHub Copilot for help scaffolding an animation MVP that could:

  • Read text files as frames
  • Render them sequentially
  • Control timing
  • Clear the screen without flicker
  • Add a primitive “UI”

Within an hour, he had a working prototype that was monochrome, but functional.

Simplified early animation loop

Below is a simplified variation of the frame-loop logic Cameron prototyped:

import fs from "fs";
import readline from "readline";

/**
 * Load ASCII frames from a directory.
 */
const frames = fs
  .readdirSync("./frames")
  .filter(f => f.endsWith(".txt"))
  .map(f => fs.readFileSync(`./frames/${f}`, "utf8"));

let current = 0;

function render() {
  // Move cursor to top-left of terminal
  readline.cursorTo(process.stdout, 0, 0);

  // Clear the screen below the cursor
  readline.clearScreenDown(process.stdout);

  // Write the current frame
  process.stdout.write(frames[current]);

  // Advance to next frame
  current = (current + 1) % frames.length;
}

// 75ms = ~13fps. Higher can cause flicker in some terminals.
setInterval(render, 75);

This introduced the first major obstacle: color. The prototype worked in monochrome, but the moment color was added, inconsistencies across terminals—and accessibility constraints—became the dominant engineering problem.

Part 3: ANSI color theory and the real-world limitations

The Copilot brand palette is vibrant and high-contrast, which is great for web but exceptionally challenging for terminals.

ANSI terminals support:

  • 16-color mode (standard)
  • 256-color mode (extended)
  • Sometimes truecolor (“24-bit”) but inconsistently

Even in 256-color mode, terminals remap colors based on:

  • User themes
  • Accessibility settings
  • High-contrast modes
  • Light/dark backgrounds
  • OS-level overrides

Which means you can’t rely on exact hues. You have to design with variability in mind.
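One common way to cope — a heuristic sketch, not the team's actual detection logic — is to probe widely used environment-variable conventions (`NO_COLOR`, `COLORTERM`, `TERM`) before choosing a color mode, and treat the answer as a hint rather than a guarantee:

```typescript
type ColorMode = "none" | "basic" | "256" | "truecolor";

// Heuristic based on common conventions; real terminals vary,
// so the result is a best guess, not ground truth.
function detectColorMode(env: Record<string, string | undefined>): ColorMode {
  if (env.NO_COLOR !== undefined) return "none"; // no-color.org convention
  if (env.COLORTERM === "truecolor" || env.COLORTERM === "24bit") {
    return "truecolor";
  }
  if (env.TERM?.includes("256color")) return "256";
  if (env.TERM && env.TERM !== "dumb") return "basic";
  return "none";
}
```

Even with detection in place, the safest design still assumes the user's theme may remap whatever you emit.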

Cameron needed a way to paint characters with ANSI color roles while previewing how they look in different terminals.

He took a screenshot of the Wikipedia ANSI table, handed it to Copilot, and asked it to scaffold a palette UI for his tool.

Adding a color “brush” tool

A simplified version:

function applyColor(char, color) {
  // Minimal example: real implementation needed support for roles,
  // contrast testing, and multiple ANSI modes.
  const codes = {
    magenta: "\x1b[35m",
    cyan: "\x1b[36m",
    white: "\x1b[37m"
  };

  return `${codes[color]}${char}\x1b[0m`; // Reset after each char
}

This enabled Cameron to paint ANSI-colored ASCII like you would in Photoshop, one character at a time.

But now he had to export it into the real Copilot CLI codebase.

Part 4: Exporting to Ink (React for the terminal)

Ink is a React renderer for building CLIs using JSX components. Instead of writing to the DOM, components render to stdout.

Cameron asked Copilot to help generate an Ink component that would:

  • Accept frames
  • Render them line-by-line
  • Animate them with state updates
  • Integrate cleanly into the CLI codebase

Simplified Ink frame renderer

import React from "react";
import { Box, Text } from "ink";

/**
 * Render a single ASCII frame.
 */
export const CopilotBanner = ({ frame }) => (
  <Box flexDirection="column">
    {frame.split("\n").map((line, i) => (
      <Text key={i}>{line}</Text>
    ))}
  </Box>
);

And a minimal animation wrapper:

export const AnimatedBanner = () => {
  const [i, setI] = React.useState(0);

  React.useEffect(() => {
    const id = setInterval(() => setI(x => (x + 1) % frames.length), 75);
    return () => clearInterval(id);
  }, []);

  return <CopilotBanner frame={frames[i]} />;
};

This gave Cameron the confidence to open a pull request (his first engineering pull request in nine years at GitHub).

“Copilot filled in syntax I didn’t know,” Cameron said. “But I still made all the architectural decisions.”

Now it was time for the engineering team to turn a prototype into something production-worthy.

Part 5: Terminal animation isn’t solved technology

Andy Feller (@andyfeller), a long-time GitHub engineer behind the GitHub CLI, partnered with Cameron to bring the animation into the Copilot CLI codebase.

Unlike browsers—which share rendering engines, accessibility APIs, and standards like WCAG—terminal environments are a patchwork of behaviors inherited from decades-old hardware like the VT100. There’s no DOM, no semantic structure, and only partial agreement on capabilities across terminals. This makes even “simple” UI design problems in the terminal uniquely challenging, especially as AI-driven workflows push CLIs into daily use for more developers.

“There’s no framework for terminal animations,” Andy explained. “We had to figure out how to do this without flickering, without breaking accessibility, and across wildly different terminals.”

Andy broke the engineering challenges into four broad categories:

Challenge 1: From banner to ready without flickering

Most terminals repaint the entire viewport when new content arrives. At the same time, CLIs come with a strict usability expectation: when developers run a command, they want to get to work immediately. Any animation that flickers, blocks input, or lingers too long actively degrades the experience.

This created a core tension the team had to resolve: how to introduce a brief, animated banner without slowing startup, stealing focus, or destabilizing the terminal render loop.

In practice, this was complicated by the fact that terminals behave differently under load. Some:

  • Throttle fast writes
  • Reveal cleared frames momentarily
  • Buffer output differently
  • Repaint the cursor region inconsistently

To avoid flicker while keeping the CLI responsive across popular terminals like iTerm2, Windows Terminal, and VS Code, the team had to carefully coordinate several interdependent concerns:

  • Keeping the animation under three seconds so it never delayed user interaction
  • Separating static and non-static components to minimize unnecessary redraws
  • Initializing MCP servers, custom agents, and user setup without blocking render
  • Working within Ink’s asynchronous re-rendering model

The result was an animation treated as a non-blocking, best-effort enhancement—visible when it could be rendered safely, but never at the expense of startup performance or usability.
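The "best-effort" shape can be sketched roughly like this (function and parameter names here are hypothetical, not the CLI's real API): kick off slow initialization immediately, play the banner only when the environment allows it, and cap the banner's duration so it can never gate readiness:

```typescript
// Hypothetical sketch: run startup work and the banner concurrently,
// and never let the banner extend past a hard deadline.
async function startCli(
  init: () => Promise<void>,       // e.g. MCP servers, agents, user setup
  playBanner: () => Promise<void>, // the animation itself
  canAnimate: boolean,             // TTY? opt-in set? screen reader off?
  maxBannerMs = 3000,
): Promise<void> {
  const initWork = init(); // start initialization first; never block on the banner

  if (canAnimate) {
    const deadline = new Promise<void>(r => setTimeout(r, maxBannerMs));
    await Promise.race([playBanner(), deadline]); // banner ends within the cap
  }

  await initWork; // the CLI is ready as soon as init finishes
}
```

The key property is that the animation is a passenger: if it cannot render safely or runs long, startup proceeds without it.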

Challenge 2: Brand color mapping in ANSI

“ANSI color consistency simply doesn’t exist,” Andy said. 

Most modern terminals support 8-bit color, allowing CLIs to choose from 256 colors. However, how those colors are actually rendered varies widely based on terminal themes, OS settings, and user accessibility overrides. In practice, CLIs can’t rely on exact hues—or even consistent contrast—across environments.

The Copilot banner introduced an additional complexity: although it’s rendered using text characters, the block-letter Copilot logo functions as a graphical object, not readable body text. Under accessibility guidelines, non-text graphical elements have different contrast requirements than text, and they must remain perceivable without relying on fine detail or precise color matching.

To account for this, the team deliberately chose a minimal 4-bit ANSI palette—one of the few color modes most terminals allow users to customize—to ensure the animation remained legible under high-contrast themes, low-vision settings, and color overrides.

This meant the team had to:

  • Treat the Copilot wordmark as non-text graphical content with appropriate contrast requirements
  • Select ANSI color codes that approximate the Copilot palette without relying on exact hues
  • Satisfy WCAG contrast guidance for both text and non-text elements
  • Ensure the animation remained legible in light and dark terminals
  • Degrade gracefully when users override terminal colors for accessibility
  • Test color combinations across multiple terminal emulators and theme configurations

Rather than encoding brand colors directly, the animation maps semantic roles—such as borders, eyes, highlights, and text—to ANSI color slots that terminals can reinterpret safely. This allows the banner to remain recognizable without assuming control over the user’s color environment.

Dark mode version of the GitHub Copilot CLI banner.
Light mode version of the GitHub Copilot CLI banner.

Challenge 3: Making the animation maintainable

Cameron’s prototype was a great starting point for Andy to incorporate into the Copilot CLI, but it wasn’t without its challenges:

  • The banner consisted of ~20 animation frames covering an 11×78 area
  • Each frame contained ~10 animation elements to stylize
  • The text of each frame needed to be separated from the colors involved
  • Each frame mapped hard-coded colors to row and column coordinates
  • Each frame required precise timing to realize Cameron’s vision

First, the animation was broken down into distinct animation elements that could be used to create separate light and dark themes:

type AnimationElements =
    | "block_text"
    | "block_shadow"
    | "border"
    | "eyes"
    | "head"
    | "goggles"
    | "shine"
    | "stars"
    | "text";

type AnimationTheme = Record<AnimationElements, ANSIColors>;

const ANIMATION_ANSI_DARK: AnimationTheme = {
    block_text: "cyan",
    block_shadow: "white",
    border: "white",
    eyes: "greenBright",
    head: "magentaBright",
    goggles: "cyanBright",
    shine: "whiteBright",
    stars: "yellowBright",
    text: "whiteBright",
};

const ANIMATION_ANSI_LIGHT: AnimationTheme = {
    block_text: "blue",
    block_shadow: "blackBright",
    border: "blackBright",
    eyes: "green",
    head: "magenta",
    goggles: "cyan",
    shine: "whiteBright",
    stars: "yellow",
    text: "black",
};

Next, the overall animation and its frames would capture the content, colors, and duration needed to animate the banner:

interface AnimationFrame {
    title: string;
    duration: number;
    content: string;
    colors?: Record<string, AnimationElements>; // Map of "row,col" positions to animation elements
}

interface Animation {
    metadata: {
        id: string;
        name: string;
        description: string;
    };
    frames: AnimationFrame[];
}

Then, each animation frame was captured to separate frame content from stylistic and animation details, resulting in over 6,000 lines of TypeScript to safely animate three seconds of the Copilot logo across terminals with wildly different rendering and accessibility behaviors:

    const frames: AnimationFrame[] = [
        {
            title: "Frame 1",
            duration: 80,
            content: `
┌┐
││







││
└┘`,
            colors: {
                "1,0": "border",
                "1,1": "border",
                "2,0": "border",
                "2,1": "border",
                "10,0": "border",
                "10,1": "border",
                "11,0": "border",
                "11,1": "border",
            },
        },
        {
            title: "Frame 2",
            duration: 80,
            content: `
┌──     ──┐
│         │
 █▄▄▄
 ███▀█
 ███ ▐▌
 ███ ▐▌
   ▀▀█▌
   ▐ ▌
    ▐
│█▄▄▌     │
└▀▀▀    ──┘`,
            colors: {
                "1,0": "border",
                "1,1": "border",
                "1,2": "border",
                "1,8": "border",
                "1,9": "border",
                "1,10": "border",
                "2,0": "border",
                "2,10": "border",
                "3,1": "head",
                "3,2": "head",
                "3,3": "head",
                "3,4": "head",
                "4,1": "head",
                "4,2": "head",
                "4,3": "goggles",
                "4,4": "goggles",
                "4,5": "goggles",
                "5,1": "head",
                "5,2": "goggles",
                "5,3": "goggles",
                "5,5": "goggles",
                "5,6": "goggles",
                "6,1": "head",
                "6,2": "goggles",
                "6,3": "goggles",
                "6,5": "goggles",
                "6,6": "goggles",
                "7,3": "goggles",
                "7,4": "goggles",
                "7,5": "goggles",
                "7,6": "goggles",
                "8,3": "eyes",
                "8,5": "head",
                "9,4": "head",
                "10,0": "border",
                "10,1": "head",
                "10,2": "head",
                "10,3": "head",
                "10,4": "head",
                "10,10": "border",
                "11,0": "border",
                "11,1": "head",
                "11,2": "head",
                "11,3": "head",
                "11,8": "border",
                "11,9": "border",
                "11,10": "border",
            },
        },
        // ...remaining frames omitted for brevity
    ];

Finally, each animation frame is rendered by building segments of text based on consecutive color usage, with the necessary ANSI escape codes:

           {frameContent.map((line, rowIndex) => {
                const truncatedLine = line.length > 80 ? line.substring(0, 80) : line;
                const coloredChars = Array.from(truncatedLine).map((char, colIndex) => {
                    const color = getCharacterColor(rowIndex, colIndex, currentFrame, theme, hasDarkTerminalBackground);
                    return { char, color };
                });

                // Group consecutive characters with the same color
                const segments: Array<{ text: string; color: string }> = [];
                let currentSegment = { text: "", color: coloredChars[0]?.color || theme.COPILOT };

                coloredChars.forEach(({ char, color }) => {
                    if (color === currentSegment.color) {
                        currentSegment.text += char;
                    } else {
                        if (currentSegment.text) segments.push(currentSegment);
                        currentSegment = { text: char, color };
                    }
                });
                if (currentSegment.text) segments.push(currentSegment);

                return (
                    <Text key={rowIndex} wrap="truncate">
                        {segments.map((segment, segIndex) => (
                            <Text key={segIndex} color={segment.color}>
                                {segment.text}
                            </Text>
                        ))}
                    </Text>
                );
            })}

Challenge 4: Accessibility-first design

The engineering team approached the banner with the same philosophy as the GitHub CLI’s accessibility work:

  • Respect global color overrides both in terminal and system preferences
  • After the first use, avoid animations unless explicitly enabled via the Copilot CLI configuration file
  • Minimize ANSI instructions that can confuse assistive tech

“CLI accessibility is under-researched,” Andy noted. “We’ve learned a lot from users who are blind as well as users with low vision, and those lessons shaped this project.”

Because of this, the animation is opt-in and gated behind its own flag—so it’s not something developers see by default. And when developers run the CLI in --screen-reader mode, the banner is automatically skipped so no decorative characters or motion are sent to assistive technologies.
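A minimal sketch of that gate (option names here are hypothetical, for illustration only): decorative animation is only allowed when it is explicitly enabled, the output is an interactive terminal, and no screen reader is in use:

```typescript
interface BannerOptions {
  screenReader: boolean;     // e.g. set by a --screen-reader flag
  animationEnabled: boolean; // opt-in from the CLI configuration
  isTTY: boolean;            // piped output should never receive animation
}

// Hypothetical gate: decide whether any animated, decorative
// output is allowed to reach the terminal at all.
function shouldAnimate(opts: BannerOptions): boolean {
  if (opts.screenReader) return false; // never send motion to assistive tech
  if (!opts.isTTY) return false;       // plain output when piped or redirected
  return opts.animationEnabled;        // otherwise respect the opt-in
}
```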

Part 6: An architecture built to scale

By the end of the refactor, the team had:

  • Frames stored as plain text
  • Animation elements
  • Themes as simple mappings
  • A runtime colorization step
  • Ink-driven timing and rendering
  • A maintainable foundation for future animations

This pattern—storing frames as plain text, layering semantic roles, and applying themes at runtime—isn’t specific to Copilot. It’s a reusable approach for anyone building terminal UIs or animations.
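The pattern can be sketched end to end in a few lines (a simplified, 0-indexed illustration, not the production code): the frame stores only text, a role map stores only positions, and the theme is applied at render time:

```typescript
type Role = "border" | "head" | "eyes";
type Theme = Record<Role, string>; // role -> terminal-safe color name

// Minimal version of the pattern: resolve each character's color
// at runtime by looking up its "row,col" position in the role map.
function colorizeFrame(
  content: string,
  roles: Record<string, Role>, // "row,col" -> semantic role
  theme: Theme,
): Array<{ char: string; color?: string }> {
  return content.split("\n").flatMap((line, row) =>
    Array.from(line).map((char, col) => {
      const role = roles[`${row},${col}`];
      return { char, color: role ? theme[role] : undefined };
    }),
  );
}
```

Because frames are plain text and colors are resolved per theme at runtime, swapping light/dark palettes or adding a new animation never requires touching the frame data itself.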

Part 7: What this project reveals about building for the terminal

A “simple ASCII banner” turned into:

  • A frame-based animation tool that didn’t exist
  • A custom ANSI color palette strategy
  • A new Ink component
  • A maintainable rendering architecture
  • Accessibility-first CLI design choices
  • A designer’s first engineering contribution
  • Real-world testing across diverse terminals
  • Open source contributions from the community

“The most rewarding part was stepping into open source for the first time,” Cameron said. “With Copilot, I was able to build my MVP ASCII animation tool into a full open source app at ascii-motion.app. Someone fixed a typo in my README, and it made my day.”

As Andy pointed out, building accessible experiences for CLIs is still largely unexplored territory and far behind the tooling and standards available for the web.

Today, developers are already contributing to Cameron’s ASCII Motion tool, and the Copilot CLI team can ship new animations without rebuilding the system.

This is what building for the terminal demands: deep understanding of constraints, discipline around accessibility, and the willingness to invent tooling where none exists.

Use GitHub Copilot in your terminal

The GitHub Copilot CLI brings AI-assisted workflows directly into your terminal — including commands for explaining code, generating files, refactoring, testing, and navigating unfamiliar projects.

Try GitHub Copilot CLI >

7 learnings from Anders Hejlsberg: The architect behind C# and TypeScript
https://github.blog/developer-skills/programming-languages-and-frameworks/7-learnings-from-anders-hejlsberg-the-architect-behind-c-and-typescript/ (Tue, 27 Jan 2026)

Anders Hejlsberg shares lessons from C# and TypeScript on fast feedback loops, scaling software, open source visibility, and building tools that last.

The post 7 learnings from Anders Hejlsberg: The architect behind C# and TypeScript appeared first on The GitHub Blog.


Anders Hejlsberg’s work has shaped how millions of developers code. Whether or not you recognize his name, you likely have touched his work: He’s the creator of Turbo Pascal and Delphi, the lead architect of C#, and the designer of TypeScript. 

We sat down with Hejlsberg to discuss his illustrious career and what it’s felt like to watch his innovations stand up to real-world pressure. In a long-form conversation, Hejlsberg reflects on what language design looks like once the initial excitement fades, when performance limits appear, when open source becomes unavoidable, and how AI can impact a tool’s original function.

What emerges is a set of patterns for building systems that survive contact with scale. Here’s what we learned.

Watch the full interview above.

Fast feedback matters more than almost anything else

Hejlsberg’s early instincts were shaped by extreme constraints. In the era of 64KB machines, there was no room for abstraction that did not pull its weight.

“You could keep it all in your head,” he recalls.

When you typed your code, you wanted to run it immediately.

Anders Hejlsberg

Turbo Pascal’s impact did not come from the Pascal language itself. It came from shortening the feedback loop. Edit, compile, run, fail, repeat, without touching disk or waiting for tooling to catch up. That tight loop respected developers’ time and attention.

The same idea shows up decades later in TypeScript, although in a different form. The language itself is only part of the story. Much of TypeScript’s value comes from its tooling: incremental checking, fast partial results, and language services that respond quickly even on large codebases.

The lesson here is not abstract. Developers can apply this directly to how they evaluate and choose tools. Fast feedback changes behavior. When errors surface quickly, developers experiment more, refactor more confidently, and catch problems closer to the moment they are introduced. When feedback is slow or delayed, teams compensate with conventions, workarounds, and process overhead. 

Whether you’re choosing a language, framework, or internal tooling, responsiveness matters. Tools that shorten the distance between writing code and understanding its consequences tend to earn trust. Tools that introduce latency, even if they’re powerful, often get sidelined. 

Scaling software means letting go of personal preferences 

As Hejlsberg moved from largely working alone to leading teams, particularly during the Delphi years, the hardest adjustment wasn’t technical.

It was learning to let go of personal preferences.

You have to accept that things get done differently than you would have preferred. Fixing it would not really change the behavior anyway.

Anders Hejlsberg

That mindset applies well beyond language design. Any system that needs to scale across teams requires a shift from personal taste to shared outcomes. The goal stops being code that looks the way you would write it, and starts being code that many people can understand, maintain, and evolve together. C# did not emerge from a clean-slate ideal. It emerged from conflicting demands. Visual Basic developers wanted approachability, C++ developers wanted power, and Windows demanded pragmatism.

The result was not theoretical purity. It was a language that enough people could use effectively.

Languages do not succeed because they are perfectly designed. They succeed because they accommodate the way teams actually work.

Why TypeScript extended JavaScript instead of replacing it

TypeScript exists because JavaScript succeeded at a scale few languages ever reach. As browsers became the real cross-platform runtime, teams started building applications far larger than dynamic typing comfortably supports.

Early attempts to cope were often extreme. Some teams compiled other languages into JavaScript just to get access to static analysis and refactoring tools.

That approach never sat well with Hejlsberg.

Telling developers to abandon the ecosystem they were already in was not realistic. Creating a brand-new language in 2012 would have required not just a compiler, but years of investment in editors, debuggers, refactoring tools, and community adoption.

Instead, TypeScript took a different path. It extended JavaScript in place, inheriting its flaws while making large-scale development more tractable.

This decision was not ideological, but practical. TypeScript succeeded because it worked with the constraints developers already had, rather than asking them to abandon existing tools, libraries, and mental models. 

The broader lesson is about compromise. Improvements that respect existing workflows tend to spread while improvements that require a wholesale replacement rarely do. In practice, meaningful progress often comes from making the systems you already depend on more capable instead of trying to start over.

Visibility is a part of what makes open source work

TypeScript did not take off immediately. Early releases were nominally open source, but development still happened largely behind closed doors.

That changed in 2014 when the project moved to GitHub and adopted a fully public development process. Features were proposed through pull requests, tradeoffs were discussed in the open, and issues were prioritized based on community feedback.

This shift made decision-making visible. Developers could see not just what shipped, but why certain choices were made and others were not. For the team, it also changed how work was prioritized. Instead of guessing what mattered most, they could look directly at the issues developers cared about.

The most effective open source projects do more than share code. They make decision-making visible so contributors and users can understand how priorities are set, and why tradeoffs are made.

Leaving JavaScript as an implementation language was a necessary break

For many years, TypeScript was self-hosted. The compiler was written in TypeScript and ran as JavaScript. This enabled powerful browser-based tooling and made experimentation easy.

Over time, however, the limitations became clear. JavaScript is single-threaded, has no shared-memory concurrency, and its object model is flexible (but expensive). As TypeScript projects grew, the compiler was leaving a large amount of available compute unused.

The team reached a point where further optimization would not be enough. They needed a different execution model.

The controversial decision was to port the compiler to Go.

This was not a rewrite. The goal was semantic fidelity. The new compiler needed to behave exactly like the old one, including quirks and edge cases. Rust, despite its popularity, would have required significant redesign due to ownership constraints and pervasive cyclic data structures. Go’s garbage collection and structural similarity made it possible to preserve behavior while unlocking performance and concurrency.
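The cyclic shapes at issue are easy to picture: a compiler's syntax tree links children back to their parents, which a garbage collector handles naturally but an ownership model must design around. A hypothetical sketch (node names invented):

```typescript
// AST nodes reference each other in both directions, forming cycles.
// Go's garbage collector traverses such graphs directly; in Rust the
// same shape pushes the design toward arenas, indices, or Rc/RefCell.
interface AstNode {
  kind: string;
  parent?: AstNode;
  children: AstNode[];
}

function addChild(parent: AstNode, kind: string): AstNode {
  const child: AstNode = { kind, parent, children: [] };
  parent.children.push(child);
  return child;
}
```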

The result was substantial performance gains, split between native execution and parallelism. More importantly, the community did not have to relearn the compiler’s behavior.

Sometimes the most responsible choice isn’t the most ambitious one, but instead preserves behavior, minimizes disruption, and removes a hard limit that no amount of incremental optimization can overcome.

In an AI-driven workflow, grounding matters more than generation

Hejlsberg is skeptical of the idea of AI-first programming languages. Models are best at languages they have already seen extensively, which naturally favors mainstream ecosystems like JavaScript, Python, and TypeScript.

But AI does change things when it comes to tooling.

The traditional IDE model assumed a developer writing code and using tools for assistance along the way. Increasingly, that relationship is reversing. AI systems generate code. Developers supervise and correct. Deterministic tools like type checkers and refactoring engines provide guardrails that prevent subtle errors.

In that world, the value of tooling is not creativity. It is accuracy and constraint. Tools need to expose precise semantic information so that AI systems can ask meaningful questions and receive reliable answers.

The risk is not that AI systems will generate bad code. Instead, it’s that they will generate plausible, confident code that lacks enough grounding in the realities of a codebase. 

For developers, this shifts where attention should go. The most valuable tools in an AI-assisted workflow aren’t the ones that generate the most code, but the ones that constrain it correctly. Strong type systems, reliable refactoring tools, and accurate semantic models become essential guardrails. They provide the structure that allows AI output to be reviewed, validated, and corrected efficiently instead of trusted blindly. 
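One concrete way a type system acts as that guardrail is exhaustiveness checking: a discriminated union forces any handler, human- or AI-written, to cover every case or fail to compile. A small sketch (the states here are invented):

```typescript
type ReviewState =
  | { kind: "approved"; reviewer: string }
  | { kind: "changes_requested"; comments: number }
  | { kind: "pending" };

function describe(state: ReviewState): string {
  switch (state.kind) {
    case "approved":
      return "Approved by " + state.reviewer;
    case "changes_requested":
      return state.comments + " comments to address";
    case "pending":
      return "Awaiting review";
    default: {
      // If a new state is added but not handled above, this assignment
      // becomes a compile error rather than a runtime surprise.
      const unreachable: never = state;
      return unreachable;
    }
  }
}
```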

Why open collaboration is critical

Despite the challenges of funding and maintenance, Hejlsberg remains optimistic about open collaboration. One reason is institutional memory. Years of discussion, decisions, and tradeoffs remain searchable and visible.

That history does not disappear into private email threads or internal systems. It remains available to anyone who wants to understand how and why a system evolved.

“We have 12 years of history captured in our project,” he explains. “If someone remembers that a discussion happened, we can usually find it. The context doesn’t disappear into email or private systems.”

That visibility changes how systems evolve. Design debates, rejected ideas, and tradeoffs remain accessible long after individual decisions are made. For developers joining a project later, that shared context often matters as much as the code itself.

A pattern that repeats across decades

Across four decades of language design, the same themes recur:

  • Fast feedback loops matter more than elegance
  • Systems need to accommodate imperfect code written by many people
  • Behavioral compatibility often matters more than architectural purity
  • Visible tradeoffs build trust

These aren’t secondary concerns. They’re fundamental decisions that determine whether a tool can adapt as its audience grows. Moreover, they ground innovation by ensuring new ideas can take root without breaking what already works.

For anyone building tools they want to see endure, those fundamentals matter as much as any breakthrough feature. And that may be the most important lesson of all.

Did you know TypeScript was the top language used in 2025? Read more in the Octoverse report >

The post 7 learnings from Anders Hejlsberg: The architect behind C# and TypeScript appeared first on The GitHub Blog.

]]>
93457
TypeScript’s rise in the AI era: Insights from Lead Architect, Anders Hejlsberg https://github.blog/developer-skills/programming-languages-and-frameworks/typescripts-rise-in-the-ai-era-insights-from-lead-architect-anders-hejlsberg/ Thu, 06 Nov 2025 17:00:00 +0000 https://github.blog/?p=92161 TypeScript just became the most-used language on GitHub. Here’s why, according to its creator.

The post TypeScript’s rise in the AI era: Insights from Lead Architect, Anders Hejlsberg appeared first on The GitHub Blog.

]]>

When Anders Hejlsberg started work on TypeScript in 2012, he wasn’t dreaming up a new language to compete with JavaScript. He was trying to solve a very real problem: JavaScript had become the backbone of the web, but it didn’t scale well for large, multi-developer codebases. Teams were shipping millions of lines of loosely typed code, and the language offered no help when those systems grew too complex to reason about.

What began as a pragmatic fix has since reshaped modern development. In 2025, TypeScript became the most-used language on GitHub, overtaking both JavaScript and Python for the first time. More than a million developers contributed in TypeScript this year alone—a 66% jump, according to Octoverse.

So, how did a typed superset of JavaScript become the dominant language of the AI era? We sat down with Anders to talk about evolution, performance, and why a language built for better human collaboration is now powering machine-assisted coding.

“We thought 25-percent adoption would be a success.”

“When we started the project,” Anders says, “I figured if we got 25-percent of the JavaScript community interested, that’d be a win. But now, seeing how many people rely on it every day … I’m floored. The whole team is.”

Back in 2012, JavaScript was already entrenched. TypeScript’s bet wasn’t to replace it but to make large-scale JavaScript development sane by adding types, tooling, and refactorability to the world’s most permissive language.

It’s the joy of working on something you know is making a difference. We didn’t set out to be everywhere. We just wanted developers to be able to build big systems with confidence.

Anders Hejlsberg, creator of TypeScript

A decade later, that bet became the default. Nearly every modern frontend framework—React, Next.js, Angular, SvelteKit—now scaffolds with TypeScript out of the box. The result: safer codebases, better autocomplete, and fewer 3 a.m. debugging sessions over a rogue undefined.

“The magic was making TypeScript feel like JavaScript, but with superpowers,” Anders says.
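The "rogue undefined" is a concrete example of those superpowers. Under strict null checking, a possibly-missing value cannot be used until the code proves it exists; a minimal sketch (the `Session` shape is invented):

```typescript
interface Session {
  userId: string;
  expiresAt?: number; // may be absent for non-expiring sessions
}

function isExpired(s: Session, now: number): boolean {
  // The compiler treats s.expiresAt as number | undefined, so the
  // comparison is only allowed after the undefined case is ruled out.
  return s.expiresAt !== undefined && s.expiresAt < now;
}
```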

Rewriting the compiler for the future

When TypeScript launched, it was famously self-hosted: written in TypeScript itself. That kept the compiler portable and hackable. But performance eventually became a problem.

“As much as it pained us to give up on self-hosting, we knew we couldn’t squeeze any more performance out of it,” Anders says.

We experimented with C#, with others, and finally chose Go. The performance gain was 10X. Half from being native, half from shared-memory concurrency. You can’t ignore 10X.

The rewrite delivered a compiler that’s faster, leaner, and more scalable for enterprise-scale codebases—but functionally identical to the old one.

On this note, Anders says, “We have a native compiler that’s a carbon copy of the old one down to the quirks. The community doesn’t have to throw anything away.”

This philosophy around preserving behavior while improving performance is one reason developers trust TypeScript. It’s not a clean-slate rewrite every few years; it’s an evolutionary system built to stay compatible.

“Open source is evolution captured in code.”

Anders reflects on open source as an ecosystem that mirrors natural selection.

“Open source was a big experiment,” Anders says. “No one ever really figured out how to fund it—and yet here we are. It’s bigger than ever, and it’s not going away. It’s evolution captured right there in the code.”

This year’s Octoverse data backs him up: developers pushed nearly 1 billion commits in 2025 (+25% YoY), and 1.12 billion of those were to public and open source repositories. That’s an evolutionary record written one pull request at a time.

TypeScript’s own repository, with twelve years of issues, pull requests, and design notes, has become a living archive of language evolution. “We have 12 years of history captured on GitHub,” Anders says. “It’s all searchable. It’s evolution you can grep.”

The AI effect: Why TypeScript is thriving now

One of the most striking data points from Octoverse 2025 is how AI is changing language preferences. Developers are moving toward typed languages that make AI-assisted coding more reliable and maintainable. 

Anders explains why: “AI’s ability to write code in a language is proportional to how much of that language it’s seen. It’s a big regurgitator, with some extrapolation. AI has seen tons of JavaScript, Python, and TypeScript so it’s great at writing them. New languages are actually disadvantaged.”

That data familiarity, combined with TypeScript’s static type system, makes it uniquely fit for an AI-first workflow.

“If you ask AI to translate half a million lines of code, it might hallucinate,” Anders says. “But if you ask it to generate a program that does that translation deterministically, you get a reliable result. That’s the kind of problem types were made for.”

The takeaway: in a world where code is written by both humans and machines, types aren’t bureaucracy. They’re truth checkers.

From IDEs to agents

The rise of large language models is also changing what “developer tools” even mean. IDEs are becoming environments not just for developers, but for agents.

“AI started out as the assistant,” Anders says. “Now it’s doing the work, and you’re supervising. It doesn’t need an IDE the way we do. It needs the services. That’s why all this Model Context Protocol work is exciting.”

The Octoverse report describes this shift as “AI reshaping choices, not just code.” Typed languages like TypeScript give agents the structure they need to refactor safely, answer semantic queries, and reason about codebases in a deterministic way.

“The goal,” Anders adds, “is to box in AI workflows with just enough determinism that they stay useful without going off the rails.”

The language that keeps evolving

From Turbo Pascal to C#, and now TypeScript, Anders’ work spans decades. But what’s striking is his consistency. He builds languages that make complex software simpler to reason about.

There’s nothing more satisfying than working on something that makes a difference. TypeScript keeps changing, but it always comes back to the same thing: helping developers express intent clearly.

That clarity might explain why more than one new developer joined GitHub every second in 2025, and a growing share of them choose to start in TypeScript. 

The story of TypeScript isn’t just about language design; it’s about evolution. A project that began as a pragmatic fix for JavaScript’s scale has become the foundation for how developers—and now AI—write code together.

Read the 2025 Octoverse report or start using GitHub Copilot >

The post TypeScript’s rise in the AI era: Insights from Lead Architect, Anders Hejlsberg appeared first on The GitHub Blog.

]]>
92161
GitHub Copilot tutorial: How to build, test, review, and ship code faster (with real prompts) https://github.blog/ai-and-ml/github-copilot/a-developers-guide-to-writing-debugging-reviewing-and-shipping-code-faster-with-github-copilot/ Wed, 05 Nov 2025 17:00:00 +0000 https://github.blog/?p=92164 How GitHub Copilot works today—including mission control—and how to get the most out of it. Here’s what you need to know.

The post GitHub Copilot tutorial: How to build, test, review, and ship code faster (with real prompts) appeared first on The GitHub Blog.

]]>

If you haven’t used GitHub Copilot since before mission control launched, you haven’t experienced what it can do now.

Copilot used to be an autocomplete tool. Now, it’s a full AI coding assistant that can run multi-step workflows, fix failing tests, review pull requests, and ship code—directly inside VS Code or GitHub.

Back in 2021, Copilot changed how you edited code. Today, with Agent HQ and mission control, it’s changing how you build, review, secure, and ship software.

Here’s one example: 

// Before
"Write tests for this module" = manual setup, fixtures, and edge cases

// Now
Ask Copilot: "Generate Jest tests for userSessionService with cache-enabled branch coverage"
Full test suite + explanations in record time

Under the hood, Copilot runs on multiple models tuned for reasoning, speed, and code understanding. It can see more of your project, generate more accurate results, and move naturally between your editor, terminal, and GitHub.

This guide walks through every part of the new Copilot experience with working examples, best practices, and prompts you can try right now (which you should).

What’s new with Copilot

Larger context + cross-file reasoning (now surfaced through mission control)

Early versions of Copilot saw only what you were typing. Now, it can read across multiple files, helping it understand intent and relationships between modules.

In mission control, ask: “Find every function using outdated crypto libraries and refactor them to the new API. Open a draft PR.”

Copilot can trace patterns across your codebase, make updates, and explain what changed.

You can choose the right model for the job

You can now choose models based on your needs: one optimized for speed when prototyping, another for deeper reasoning during complex refactors.

It goes beyond code completion

Copilot is now a suite of tools built for every step of the workflow:

  • Mission control: Run multi-step tasks, generate tests, and open pull requests.
  • Agent mode: Define the outcome, and Copilot determines the best approach seeking feedback from you as needed, testing its own solutions, and refining its work in real time. 
  • Copilot CLI: Automate and explore your repository directly from the terminal.
  • Coding agent: Offload routine fixes or scaffolding to Copilot.
  • Code review: Let Copilot highlight risky diffs or missing tests before you merge.
  • Scoped agents: Offload routine fixes, refactors, docs, or test generation.

How to use GitHub Copilot (with examples)

Here are actionable items for each mode of Copilot, with code snippets and prompt examples.

Build faster with mission control and agent mode in VS Code

Once you’ve installed the Copilot extension, enable agent mode in settings and open mission control from the sidebar. Start by selecting a workflow (tests, refactor, documentation) or run a custom prompt. 

Prompt pattern:

# Add caching to userSessionService to reduce DB hits

In mission control: “Add a Redis caching layer to userSessionService, generate hit/miss tests, and open a draft PR.”

Copilot will create a new file, update the service, add tests, and open a draft pull request with a summary of changes.

Tip: Write comments that explain why, not just what.

// Cache responses by userId for 30s to reduce DB hits >1000/min

Short, specific comments make Copilot work better.

Break into the terminal with Copilot CLI

Copilot CLI brings the same intelligence to your terminal. To install it, run:

npm install -g @github/copilot-cli
copilot /login

Once installed and authenticated, run:

copilot explain .

You’ll get a structured summary of your repository, dependencies, test coverage, and potential issues.

Here are some common, useful commands:

copilot explain .
copilot fix tests
copilot setup project
copilot edit src/**/*.py

Try this:

After a failing CI run, use the following command to have Copilot locate the issue, explain why it’s failing, and propose a fix for review.

copilot fix tests

Use Copilot code review

Copilot can now review pull requests directly in GitHub—no plugins required. It identifies risky diffs, missing test coverage, and potential bugs. 

Enable Copilot code review via your repository settings to get started. 

When a pull request is created, Copilot can comment on:

  • Missing test coverage
  • Potential bug/edge-case
  • Security vulnerabilities

Here’s an example:

In your pull request chat, try writing:

Summarize the potential risks in this diff and suggest missing test coverage.

Copilot will reply inline with notes you can accept or ignore. It’s not here to merge for you. It’s here to help you think through issues and concepts faster.

Setting up async tasks with Copilot coding agent

Copilot coding agent can take a structured issue, write code, and open a draft pull request—all asynchronously. 

Here’s an example issue: 

### Feature Request: CSV Import for User Sessions  
- File: import_user_sessions.py  
- Parse CSV with headers userId, timestamp, action  
- Validate: action in {login, logout, timeout}  
- Batch size: up to 10k rows  
- On success: append to session table  
- Include: tests, docs, API endpoint

Assign that issue to Copilot. It will clone the repo, implement the feature, and open a draft pull request for your review.

Coding agent is best for:

  • Repetitive refactors
  • Boilerplate or scaffolding
  • Docs and test generation

You always review before merge, but Copilot accelerates everything leading up to it.

Best practices and guardrails

  • Review everything. AI writes code; you approve it. Always check logic, style, docs before you ship.
  • Prompt with context. The better your prompt (why, how, constraints), the better the output.
  • Use small increments. For agent mode or CLI edits, do one module at a time. Avoid “rewrite entire app in one shot.”
  • Keep developers in the loop. Especially for security, architecture, design decisions.
  • Document prompts and decisions. Maintain a log: “Used prompt X, result good/bad, adjustments made”. This helps refine your usage.
  • Build trust slowly. Use Copilot for non-critical paths first (tests, refactors), then expand to core workflows.
  • Keep context limits in mind. Although Copilot handles more context now, extremely large monolithic repos may still expose limitations.

Why this matters

More than 36 million developers joined GitHub this year (that’s more than one every second!), and 80% used Copilot in their first week.

AI-powered coding is no longer experimental. It’s part of the job.

Typed languages like TypeScript and Python dominate GitHub today, and their structure makes them ideal partners for Copilot. Strong types plus smart suggestions equals faster feedback loops and fewer regressions.

And now with mission control, everything’s in one place. You don’t need five AI tools, ten browser tabs, or a separate review bot. Everything happens where you already build software.

Take this with you

If you’ve been waiting to see what Copilot can really do, mission control is the moment. 

With GitHub Copilot—in the editor, in your terminal, in your reviews, and in the background of your team—you’re getting a toolkit designed to help you do real work faster, smarter, and on your terms.

You decide the architecture. You write the tests (or at least the ones you want to write). You merge the pull requests. Copilot helps with boilerplate, scaffolding, and routine tasks so you can keep your focus on the problem that really matters.

Pick one part of your stack this week—tests, docs, refactor—and run it through mission control. See where it saves time, then scale up.

This guide is your map. The tools are in your hands. Now it’s your turn to build.

Start using GitHub Copilot >

The post GitHub Copilot tutorial: How to build, test, review, and ship code faster (with real prompts) appeared first on The GitHub Blog.

]]>
92164
What are AI agents and why do they matter? https://github.blog/ai-and-ml/generative-ai/what-are-ai-agents-and-why-do-they-matter/ Tue, 13 Aug 2024 21:41:48 +0000 https://github.blog/?p=79310 Learn how AI agents and agentic AI systems use generative AI models and large language models to autonomously perform tasks on behalf of end users.

The post What are AI agents and why do they matter? appeared first on The GitHub Blog.

]]>

Imagine a Roomba that only told you your floors were dirty, but didn’t actually clean them for you. Helpful? Debatable. Annoying? Very.

When ChatGPT first arrived, that was about where things stood. It could describe how to solve math problems and discuss theory endlessly, but it couldn’t reliably handle a simple arithmetic question. Connecting it to an external application like an online calculator, however, significantly improved its abilities, just like connecting a Roomba’s sensors to its robot body makes it capable of actually cleaning your floor.

That simple discovery was a precursor to an evolution now occurring in generative AI, where large language models (LLMs) power AI agents that can pursue complex goals with limited direct supervision.

In these systems, the LLM serves as the brain while additional algorithms and tools are layered on top to accomplish key tasks ranging from generating software development plans to booking plane tickets. Proof-of-concepts like AutoGPT offer examples, such as a marketing agent that looks for Reddit comments with questions about a given product and then answers them autonomously. At their best, these agents hold the promise of pursuing complex goals with minimal direct oversight—and that means removing toil and mundane linear tasks while allowing us to focus on higher-level thinking. And when you connect AI agents with other AI agents to make multi-agent systems, like we’re doing with GitHub Copilot Workspace, the realm of possibility grows exponentially.

All this is to say, if you’re a developer you’ll likely start encountering more and more instances of agentic AI in the tools you use (including on GitHub) and in the news you read. So, this feels like as good a time as any to dive into exactly what agentic AI and AI agents are, how they work on a technical level, some of the technical challenges, and what this means for software development.

What are AI agents and agentic AI?

Agentic AI refers to artificial intelligence capable of making decisions, planning, and adapting to new information in real time. AI agents learn and enhance their performance through feedback, utilizing advanced algorithms and sensory inputs to execute tasks and engage with their environments.

According to Lilian Weng, the head of safety systems at OpenAI and their former head of applied AI research, an AI agent features three key characteristics:

  • Planning: an AI agent is capable of creating a step-by-step plan with discrete milestone goals from a prompt while learning from mistakes via a reward system to improve future outputs.
  • Memory: an AI agent combines the ability to use short-term memory to process chat-based prompts and follow-up prompts with longer-term data retention and recall (often via retrieval augmented generation, or RAG).
  • Tool use: an agent can query APIs to request additional information or execute an action based on an end user’s request.
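Those three traits can be reduced to a toy loop: a plan of steps, a transcript serving as short-term memory, and a lookup table of callable tools. This is an illustration only; the tool names and structure are invented, not any particular framework’s API:

```typescript
type Tool = (input: string) => string;

// Stand-ins for the external APIs an agent might call.
const tools: Record<string, Tool> = {
  search: (q) => 'top result for "' + q + '"',
  shout: (s) => s.toUpperCase(),
};

interface Step {
  tool: string;
  input: string;
}

// Execute the plan step by step, recording each observation so later
// steps (or a follow-up prompt) can build on what came before.
function runAgent(plan: Step[]): string[] {
  const memory: string[] = [];
  for (const step of plan) {
    const tool = tools[step.tool];
    const observation = tool ? tool(step.input) : "unknown tool: " + step.tool;
    memory.push(step.tool + " -> " + observation);
  }
  return memory;
}
```

A real agent would let the model choose the next step from the transcript instead of following a fixed plan, but the plan-memory-tools skeleton is the same.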

What are the different types of AI agents?

AI agents range from simple reflex agents to sophisticated learning agents, and each has its strengths and weaknesses.

As this field continues to evolve, more types of AI agents will likely emerge. Whether you’re looking to build your own AI agent or understand a bit more about how GitHub uses AI to improve developer tools, here’s a list of the different types of AI agents you’ll most commonly encounter:

  • Reflex agent. Characteristics: uses a model of the world to make decisions, remembering some past states and acting on both current and past experiences. Example: linting tools like ESLint or Pylint that apply a set of predefined rules to evaluate code.
  • Goal-based agent. Characteristics: achieves specific goals, using its knowledge and the stated goal (or prompt) to make decisions. Example: advanced IDEs with AI-powered code completion, such as GitHub Copilot.
  • Utility-based agent. Characteristics: aims to achieve a goal in the best way possible, as determined by evaluating different possible approaches. Example: tools that prioritize and assign bugs based on severity, impact, and developer workloads.
  • Learning agent. Characteristics: improves performance over time by learning from experiences, combining a learning element that improves the agent’s outputs based on user feedback with a performance element that applies the learned knowledge. Example: code completion tools, such as GitHub Copilot, that improve over time.
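The simplest entry above, a rule-applying reflex agent, fits in a few lines: a fixed condition-action pair with no memory or goal. A toy sketch in the spirit of a linter (the rule and names are invented):

```typescript
interface Finding {
  line: number;
  message: string;
}

// A reflex agent maps a percept (a line of code) straight to an action
// (flag it) through a predefined rule, with no planning or memory.
function lintMaxLength(source: string, max: number): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, i) => {
    if (text.length > max) {
      findings.push({ line: i + 1, message: "line exceeds " + max + " characters" });
    }
  });
  return findings;
}
```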

Common technical challenges with AI agents today

While there’s a lot of promise in agentic AI, there are two core industry-wide technical challenges when developing agentic AI systems today:

  • We can’t deterministically predict what an AI model will say or do next, which makes it challenging to explain how their inputs (the combination of the prompt and the training data used to generate a response) shape their outputs.
  • We don’t have models that can fully explain their outputs, though work is being done to offer greater transparency by enabling them to explain how they arrived at a solution.

As a result, it is difficult to debug agentic systems and to create evaluation frameworks to understand their effectiveness, efficiency, and impact.

AI agents are difficult to debug because they are prone to solving problems in unexpected ways. This is a nuance that has long been known in, of all things, chess, where machines make moves that seem counterintuitive to their human opponents but can win games. The more sophisticated an agent becomes and the longer you expect it to run, the more difficult it is to debug, especially when you consider how quickly a log can grow.

AI agents are also difficult to evaluate in a repeatable way that shows progress without employing artificial constraints. This is especially challenging as the core capabilities of the underlying LLMs continue to rapidly improve, which makes it difficult to know whether your approach has improved results or if it’s simply the underlying model. Developers often encounter problems in choosing the right metrics, benchmarking overall performance against a set heuristic or rubric, and collecting end-user feedback and telemetry to evaluate agent output efficacy.

How we think about AI agents at GitHub

Our focus at GitHub has been to rethink the developer “inner loop” as collaboration with AI. That means AI agents that can reliably build, test, and debug code. It means reducing the energy needed to get started and empowering more people to learn and contribute to code bases. We know that it requires tackling every part of the developer’s day where they run into friction, and that’s where multi-agent systems like Copilot Workspace and code scanning autofix come in.

Earlier this year, we launched a technical preview of Copilot Workspace, our Copilot-native developer environment. It’s a multi-agent system—a network of agents that interact and collaborate to achieve a larger goal. Each agent in a system typically has specialized skills or functions, and they can communicate and coordinate with one another to solve complex problems more efficiently than a single agent could.

For Copilot Workspace, that means a developer can ask Copilot to help create an application, and it will not only generate a software development plan, but also the code, pull requests, and more, needed to achieve that plan.

There’s more in the works to make developers more productive and make their days a little bit (or a lot) better.

Why this matters (and some final thoughts)

There’s a lot of buzz around AI agents—and for good reason. As they continue to evolve, they’ll be able to work together to handle more complex tasks, which means less upfront cost of prompt engineering for users. For developers though, the benefit of AI agents is simple: they can allow developers to focus on higher-value activities.

When you give LLMs access to tools, memory, and plans to create agents, they become a bit like LEGO blocks that you can piece together to create more advanced systems. That’s because, at their best, AI agents are modular, adaptable, interoperable, and scalable, like LEGO blocks. Just as a child can transform a pile of colorful LEGO blocks into anything from a towering castle to a sleek spaceship, developers can use AI agents to build multi-agent systems that promise to revolutionize software development.

At GitHub, we’re excited about what AI agents, agentic AI, and multi-agent systems mean more broadly for software developers. With agentic AI coding tools like Copilot Workspace and code scanning autofix, developers will be able to build software that’s more secure, faster—and that’s just the beginning.

The post What are AI agents and why do they matter? appeared first on The GitHub Blog.

]]>
79310