Gwen Davis, Author at The GitHub Blog
https://github.blog/author/purpledragon85/
Updates, ideas, and inspiration from GitHub to help developers build and design software.

The era of “AI as text” is over. Execution is the new interface.
https://github.blog/ai-and-ml/github-copilot/the-era-of-ai-as-text-is-over-execution-is-the-new-interface/
Tue, 10 Mar 2026 20:16:01 +0000
AI is shifting from prompt-response interactions to programmable execution. See how the GitHub Copilot SDK enables agentic workflows directly inside your applications.

The post The era of “AI as text” is over. Execution is the new interface. appeared first on The GitHub Blog.


Over the past two years, most teams have interacted with AI the same way: provide text input, receive text output, and manually decide what to do next.

But production software doesn’t operate on isolated exchanges. Real systems execute. They plan steps, invoke tools, modify files, recover from errors, and adapt under constraints you define.

As a developer, you’ve gotten used to using GitHub Copilot as your trusted AI in the IDE. But I bet you’ve thought more than once: “Why can’t I use this kind of agentic workflow inside my own apps too?”

Now you can.

The GitHub Copilot SDK makes that execution layer available as a programmable capability inside your software.

Instead of maintaining your own orchestration stack, you can embed the same production-tested planning and execution engine that powers GitHub Copilot CLI directly into your systems.

If your application can trigger logic, it can now trigger agentic execution. This shift changes the architecture of AI-powered systems.

So how does it work? Here are three concrete patterns teams are using to embed agentic execution into real applications.

Pattern #1: Delegate multi-step work to agents

For years, teams have relied on scripts and glue code to automate repetitive tasks. But the moment a workflow depends on context, changes shape mid-run, or requires error recovery, scripts become brittle. You either hard-code edge cases, or start building a homegrown orchestration layer.

With the Copilot SDK, your application can delegate intent rather than encode fixed steps.

For example:

Your app exposes an action like “Prepare this repository for release.”

Instead of defining every step manually, you pass intent and constraints. The agent:

  • Explores the repository
  • Plans required steps
  • Modifies files
  • Runs commands
  • Adapts if something fails

All while operating within defined boundaries.

Why this matters: As systems scale, fixed workflows break down. Agentic execution allows software to adapt while remaining constrained and observable, without rebuilding orchestration from scratch.
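
To make the shape of “delegate intent rather than encode fixed steps” concrete, here’s a minimal TypeScript sketch. Everything in it—the `AgentTask` type and the `delegate` function—is a hypothetical stand-in for illustration, not the actual Copilot SDK API:

```typescript
// Hypothetical shapes for illustration -- not the actual Copilot SDK API.
type AgentTask = {
  intent: string;         // what to accomplish, not how
  constraints: string[];  // boundaries the agent must respect
  allowedTools: string[]; // capabilities it may invoke
};

// Stand-in for handing a task to an execution engine.
function delegate(task: AgentTask): string {
  if (task.constraints.length === 0) {
    throw new Error("Refusing to run an unconstrained task");
  }
  return `Planning "${task.intent}" with ${task.allowedTools.length} tool(s)`;
}

const summary = delegate({
  intent: "Prepare this repository for release",
  constraints: ["no force-push", "tests must pass before commit"],
  allowedTools: ["read_file", "write_file", "run_command"],
});
```

The key design point is that the caller supplies goals and boundaries; the planning of individual steps stays inside the engine.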

View multi-step execution examples →

Pattern #2: Ground execution in structured runtime context

Many teams attempt to push more behavior into prompts. But encoding system logic in text makes workflows harder to test, reason about, and evolve. Over time, prompts become brittle substitutes for structured system integration.

With the Copilot SDK, context becomes structured and composable.

You can:

  • Define domain-specific tools or agent skills
  • Expose tools via Model Context Protocol (MCP)
  • Let the execution engine retrieve context at runtime

Instead of stuffing ownership data, API schemas, or dependency rules into prompts, your agents access those systems directly during planning and execution.

For example, an internal agent might:

  • Query service ownership
  • Pull historical decision records
  • Check dependency graphs
  • Reference internal APIs
  • Act under defined safety constraints

Why this matters: Reliable AI workflows depend on structured, permissioned context. MCP provides the plumbing that keeps agentic execution grounded in real tools and real data, without guesswork embedded in prompts.
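
As a rough illustration of what “structured, permissioned context” means in code, a domain-specific tool boils down to a named, described capability the engine can resolve at runtime. This is a toy registry with invented names and stand-in data; a real deployment would expose the same idea through an MCP server:

```typescript
// Illustrative only: a toy tool registry, not a real MCP server.
type Tool = {
  description: string;
  handler: (args: Record<string, string>) => unknown;
};

const tools: Record<string, Tool> = {
  get_service_owner: {
    description: "Look up the owning team for a service",
    handler: ({ service }) => ({ service, owner: "platform-team" }), // stand-in data
  },
};

// The engine resolves context at runtime instead of reading it from a prompt.
function callTool(name: string, args: Record<string, string>): unknown {
  const tool = tools[name];
  if (!tool) throw new Error(`Unknown tool: ${name}`);
  return tool.handler(args);
}

const owner = callTool("get_service_owner", { service: "billing" });
```

Note that ownership data never has to be pasted into a prompt; the agent fetches it when planning requires it.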

Pattern #3: Embed execution outside the IDE

Much of today’s AI tooling assumes meaningful work happens inside the IDE. But modern software ecosystems extend far beyond an editor.

Teams want agentic capabilities inside:

  • Desktop applications
  • Internal operational tools
  • Background services
  • SaaS platforms
  • Event-driven systems

With the Copilot SDK, execution becomes an application-layer capability.

Your system can listen for an event—such as a file change, deployment trigger, or user action—and invoke Copilot programmatically.

The planning and execution loop runs inside your product, not in a separate interface or developer tool.

Why this matters: When execution is embedded into your application, AI stops being a helper in a side window and becomes infrastructure. It’s available wherever your software runs, not just inside an IDE or terminal.
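
A minimal sketch of that event-driven pattern, where `invokeAgent` is a hypothetical stand-in for a programmatic SDK call rather than a real API:

```typescript
// Sketch: routing application events to agentic execution.
// `invokeAgent` is a hypothetical stand-in for a programmatic SDK call.
type AppEvent = { kind: "file-changed" | "deploy-requested"; payload: string };

function invokeAgent(intent: string): string {
  return `agent started: ${intent}`; // placeholder for real execution
}

function onEvent(event: AppEvent): string {
  switch (event.kind) {
    case "file-changed":
      return invokeAgent(`Review the change to ${event.payload}`);
    case "deploy-requested":
      return invokeAgent(`Run pre-deploy checks for ${event.payload}`);
  }
  throw new Error("unhandled event kind");
}

const message = onEvent({ kind: "deploy-requested", payload: "api-service" });
```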

Build your first Copilot-powered app →

Execution is the new interface

The shift from “AI as text” to “AI as execution” is architectural. Agentic workflows are programmable planning and execution loops that operate under constraints, integrate with real systems, and adapt at runtime.

The GitHub Copilot SDK makes those execution capabilities accessible as a programmable layer. Teams can focus on defining what their software should accomplish, rather than rebuilding how orchestration works every time they introduce AI.

If your application can trigger logic, it can trigger agentic execution.

Explore the GitHub Copilot SDK →

Multi-agent workflows often fail. Here’s how to engineer ones that don’t.
https://github.blog/ai-and-ml/generative-ai/multi-agent-workflows-often-fail-heres-how-to-engineer-ones-that-dont/
Tue, 24 Feb 2026 16:00:00 +0000
Most multi-agent workflow failures come down to missing structure, not model capability. Learn the three engineering patterns that make agent systems reliable.


If you’ve built a multi-agent workflow, you’ve probably seen it fail in a way that’s hard to explain.

The system completes, and agents take actions. But somewhere along the way, something subtle goes wrong. You might see an agent close an issue that another agent just opened, or ship a change that fails a downstream check it didn’t know existed.

That’s because the moment agents begin handling related tasks—triaging issues, proposing changes, running checks, and opening pull requests—they start making implicit assumptions about state, ordering, and validation. Without explicit instructions, data formats, and interfaces, things won’t go the way you planned.

Through our work on agentic experiences at GitHub across GitHub Copilot, internal automations, and emerging multi-agent orchestration patterns, we’ve seen multi-agent systems behave much less like chat interfaces and much more like distributed systems.

This post is for engineers building multi-agent systems. We’ll walk through the most common reasons they fail and the engineering patterns that make them more reliable.

1. Natural language is messy. Typed schemas make it reliable.

Multi-agent workflows often fail early because agents exchange messy language or inconsistent JSON. Field names change, data types don’t match, formatting shifts, and nothing enforces consistency.

Just like establishing contracts early in development helps teams collaborate without stepping on each other, typed interfaces and strict schemas add structure at every boundary. Agents pass machine-checkable data, invalid messages fail fast, and downstream steps don’t have to guess what a payload means.

Most teams start by defining the data shape they expect agents to return:

type UserProfile = {
  id: number;
  email: string;
  plan: "free" | "pro" | "enterprise";
};

This changes debugging from “inspect logs and guess” to “this payload violated schema X.” Treat schema violations like contract failures: retry, repair, or escalate before bad state propagates.
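
Here’s one way to make that boundary check concrete without any schema library; `parseUserProfile` is an illustrative helper, not part of any SDK:

```typescript
type UserProfile = {
  id: number;
  email: string;
  plan: "free" | "pro" | "enterprise";
};

// Validate an untrusted agent payload at the boundary, before it propagates.
function parseUserProfile(payload: unknown): UserProfile {
  const p = payload as Partial<UserProfile>;
  const plans = ["free", "pro", "enterprise"];
  if (
    typeof p?.id !== "number" ||
    typeof p?.email !== "string" ||
    !plans.includes(p?.plan as string)
  ) {
    throw new Error("payload violated schema UserProfile"); // fail fast
  }
  return p as UserProfile;
}

const profile = parseUserProfile({ id: 1, email: "dev@example.com", plan: "pro" });
```

In practice a schema library gives you this for free across every boundary; the point is that an invalid payload throws immediately instead of silently flowing downstream.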

The bottom line: Typed schemas are table stakes in multi-agent workflows. Without them, nothing else works. See how GitHub Models enable structured, repeatable AI workflows in real projects. 👉

2. Vague intent breaks agents. Action schemas make it clear.

Even with typed data, multi-agent workflows still fail because LLMs don’t follow implied intent, only explicit instructions.

“Analyze this issue and help the team take action” sounds clear. But different agents may close, assign, escalate, or do nothing—each reasonable, none automatable.

Action schemas fix this by defining the exact set of allowed actions and their structure. Not every step needs structure, but the outcome must always resolve to a small, explicit set of actions.

Here’s what an action schema might look like:

import { z } from "zod";

const ActionSchema = z.discriminatedUnion("type", [
  z.object({ type: z.literal("request-more-info"), missing: z.array(z.string()) }),
  z.object({ type: z.literal("assign"), assignee: z.string() }),
  z.object({ type: z.literal("close-as-duplicate"), duplicateOf: z.number() }),
  z.object({ type: z.literal("no-action") }),
]);

With this in place, agents must return exactly one valid action. Anything else fails validation and is retried or escalated.
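
For illustration, the same action set can be enforced with a plain discriminated union and a type guard. This is a sketch—per-field checks are elided, which a schema library would handle for you:

```typescript
// The allowed action set as a discriminated union.
type Action =
  | { type: "request-more-info"; missing: string[] }
  | { type: "assign"; assignee: string }
  | { type: "close-as-duplicate"; duplicateOf: number }
  | { type: "no-action" };

// Reject anything outside the allowed action set before acting on it.
function parseAction(raw: unknown): Action {
  const candidate = raw as { type?: string };
  switch (candidate?.type) {
    case "request-more-info":
    case "assign":
    case "close-as-duplicate":
    case "no-action":
      return raw as Action; // per-field checks elided in this sketch
    default:
      throw new Error(`Invalid action: ${JSON.stringify(raw)}`); // retry or escalate
  }
}

const action = parseAction({ type: "assign", assignee: "octocat" });
```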

The bottom line: Most agent failures are action failures. For reducing ambiguity even earlier in the workflow—at the instruction level—this guide to writing effective custom instructions is helpful. 👉

3. Loose interfaces create errors. MCP adds the structure agents need.

Typed schemas, constrained actions, and structured reasoning only work if they’re consistently enforced. Without enforcement, they’re conventions, not guarantees.

Model Context Protocol (MCP) is the enforcement layer that turns these patterns into contracts.

MCP defines explicit input and output schemas for every tool and resource, validating calls before execution.

{
  "name": "create_issue",
  "input_schema": { ... },
  "output_schema": { ... }
}

With MCP, agents can’t invent fields, omit required inputs, or drift across interfaces. Validation happens before execution, which prevents bad state from ever reaching production systems.
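
A simplified sketch of that validate-before-execute flow—toy types for illustration, not the actual MCP SDK surface:

```typescript
// Simplified validate-before-execute loop -- not the real MCP SDK surface.
type ToolDef = {
  name: string;
  requiredInputs: string[];
  execute: (input: Record<string, unknown>) => string;
};

const createIssue: ToolDef = {
  name: "create_issue",
  requiredInputs: ["title", "body"],
  execute: (input) => `created issue: ${input.title}`,
};

function callValidated(tool: ToolDef, input: Record<string, unknown>): string {
  const missing = tool.requiredInputs.filter((key) => !(key in input));
  if (missing.length > 0) {
    // Invalid calls are rejected before any side effects run.
    throw new Error(`${tool.name}: missing required inputs: ${missing.join(", ")}`);
  }
  return tool.execute(input);
}

const created = callValidated(createIssue, { title: "Bug report", body: "Steps to reproduce" });
```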

The bottom line: Typed schemas define structure, action schemas define intent, and MCP enforces both. Learn more about how MCP works and why it matters. 👉

Moving forward together

Multi-agent systems work when structure is explicit. When you add typed schemas, constrained actions, and structured interfaces enforced by MCP, agents start behaving like reliable system components.

The shift is simple but powerful: treat agents like code, not chat interfaces.

Learn how MCP enables structured, deterministic agent-tool interactions. 👉

Speed is nothing without control: How to keep quality high in the AI era
https://github.blog/ai-and-ml/generative-ai/speed-is-nothing-without-control-how-to-keep-quality-high-in-the-ai-era/
Tue, 09 Dec 2025 17:00:00 +0000
AI can help you build faster than ever, but it can also produce bugs, issues, and problems. Use these strategies to keep your speed without losing control of your code.


What’s the point of moving faster if you can’t trust the code you’re shipping?

We’ve all been using AI in our workflows for a while now, and there’s no denying how much faster everyday development has become. Tasks that once took hours now finish in minutes. Entire features come together before you’ve even finished your morning coffee.

But we’ve also experienced the other side of that speed: when AI is used without clear direction or guardrails, it can generate what’s often called AI slop—semi-functional code stitched together without context, quietly piling up bugs, broken imports, and technical debt.

In this new era, being fast isn’t enough. Precision and quality are what set teams apart.

“The best drivers aren’t the ones who simply go the fastest, but the ones who stay smooth and in control at high speed,” said Marcelo Oliveira, GitHub VP of product, at GitHub Universe 2025. “Speed and control aren’t trade-offs. They reinforce each other.”

So how do you get the best of both? How do you move fast and keep your code clean, reliable, and firmly under your direction? Here are three essential strategies:

Tip #1: Treat speed and quality as a package deal 

It’s very easy to accept AI-generated code that appears polished but hides underlying issues. However, speed without quality doesn’t help you ship faster; it just increases the risk of issues compounding down the road. That’s why the teams and organizations that succeed are the ones that pair AI-driven velocity with real guardrails.

And that’s exactly what GitHub Code Quality (currently in public preview) helps you do. GitHub Code Quality is an AI- and CodeQL-powered analysis tool that surfaces maintainability issues, reliability risks, and technical debt across your codebase, right as you work. Here’s how to start using it:

  1. Enable with one click
    Turn it on at the repository level and GitHub will analyze your code using a combination of CodeQL and LLM-based detection. This will give you a clear view of the maintainability and reliability issues in your codebase.
  2. Get automatic fixes inside every pull request
    As soon as you open a pull request, GitHub Code Quality flags unused variables, duplicated logic, runtime errors, and more. Here’s an example of pull request code that “works,” but isn’t production-ready:
// fuelCalculator.js

export function calculateFuelUsage(laps, fuelPerLap) {
  const lastLap = laps[laps.length - 1]; // unused variable

  function totalFuel(laps, fuelPerLap) {
    return laps.length * fuelPerLap;
  }

  // duplicated function
  function totalFuel(laps, fuelPerLap) {
    return laps.length * fuelPerLap;
  }

  return totalFuel(laps, fuelPerLap);
}

GitHub Code Quality responds with AI + CodeQL-powered suggestions, including a one-click fix:

-export function calculateFuelUsage(laps, fuelPerLap) {
-  const lastLap = laps[laps.length - 1]; // unused variable
-
-  function totalFuel(laps, fuelPerLap) {
-    return laps.length * fuelPerLap;
-  }
-
-  // duplicated function
-  function totalFuel(laps, fuelPerLap) {
-    return laps.length * fuelPerLap;
-  }
-
-  return totalFuel(laps, fuelPerLap);
-}
+export function calculateFuelUsage(laps, fuelPerLap) {
+  if (!Array.isArray(laps) || typeof fuelPerLap !== "number") {
+    throw new Error("Invalid input");
+  }
+  return laps.length * fuelPerLap;
+}

No triage or slowdown, just clean, reliable code.

  3. Enforce your quality bar
    Rulesets let you block merges that don’t meet your team’s standards. This keeps quality consistent without relying on reviewer willpower and without killing your velocity.
  4. Reveal (and fix) legacy technical debt
    The AI Findings page highlights issues in files your team is already working in, helping you fix problems while they’re top of mind and reduce context switching.

Bottom line: AI gives you speed. GitHub Code Quality gives you control. Together, they let you move faster and build better without ever trading one for the other.

Learn more about GitHub Code Quality 👉

Tip #2: Be the driver, not the passenger 

AI can generate code quickly, but quality has never come from automation alone. GitHub has always believed in giving you the tools to write your best code—from Copilot in the IDE, to GitHub Copilot code review in pull requests, to GitHub Code Quality—providing visibility into long-standing issues and tech debt, along with actionable fixes to help you address them.

These features give you the power to set direction, standards, and constraints. The clearer your intent, the better AI can support you.

Here’s a simple prompting framework that helps you do just that:

  1. Set the goal, not just the action
    Think of your prompts like giving direction to another engineer: the more clarity you provide, the better the final output. 

Bad prompt:

refactor this file

Better prompt:

refactor this file to improve readability and maintainability while preserving functionality, no breaking changes allowed
  2. Establish constraints
    Examples:
    • “No third-party dependencies”
    • “Must be backwards compatible with v1.7”
    • “Follow existing naming patterns”
  3. Provide reference context
    Link to related files, docs, existing tests, or architectural decisions.
  4. Decide the format of the output
    Pull request, diff, patch, commentary, or code block.

With GitHub Copilot coding agent, you can even assign multi-step tasks like:

Create a new helper function for formatting currency across the app.
- Must handle USD and EUR
- Round up to two decimals
- Add three unit tests
- Do not modify existing price parser
- Return as a pull request

Notice how you remain accountable for the thinking and the agent becomes accountable for the doing.
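
For illustration only, a helper that satisfies those constraints might look something like this—the function name and locale choices are assumptions, not the agent’s actual output:

```typescript
// Hypothetical sketch of the requested helper; details are illustrative.
function formatCurrency(amount: number, currency: "USD" | "EUR"): string {
  const roundedUp = Math.ceil(amount * 100) / 100; // round up to two decimals
  const locale = currency === "USD" ? "en-US" : "de-DE"; // assumed locale mapping
  return new Intl.NumberFormat(locale, { style: "currency", currency }).format(roundedUp);
}
```

Having a rough mental model of the expected result is part of staying accountable for the thinking: you can check the agent’s pull request against it.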

Bottom line: AI accelerates execution, but your clarity—and GitHub’s guardrails—are what turn that acceleration into high-quality software.

Learn more about coding agent 👉

Tip #3: Build visible proof of your thinking, not just your output

As AI takes on more execution work, what sets effective developers apart is how clearly they communicate decisions, trade-offs, and reasoning. It’s no longer enough to write code; you need to show how you think, evaluate, and approach problems across the lifecycle of a feature.

Here’s a best practice to level up your documentation signal: 

  1. Create an issue that captures the why
    Write a brief summary of the problem, what success looks like, constraints, and any risks.
  2. Name your branch clearly and commit thoughtfully
    Use meaningful names and commit messages that narrate your reasoning, not just your keystrokes.
  3. Use Copilot and coding agent to build, then document decisions
    Include short notes on why you chose one approach over another and what alternatives you considered.
  4. Open a pull request with signal-rich context
    Add a short “Why,” “What changed,” and “Trade-offs” section, plus screenshots or test notes.

For example, instead of:

Added dark mode toggle

Try this:

- Added dark mode toggle to improve accessibility and user preference support.
- Chose localStorage for persistence to avoid server dependency.
- Kept styling changes scoped to avoid side effects on existing themes.

Bottom line: Your code shows what you did, but your documentation shows why it matters. In this new AI era, the latter is just as critical as the former.  

Learn more about effective documentation 👉

Moving forward together 

At the end of the day, quality is everything. While AI may accelerate the pace of work, it can also turn that speed on its head if the output isn’t guided with intent. But when you combine AI with clear direction, strong guardrails, and visible thinking, you help your team deliver cleaner, more reliable code at scale—and position your organization to move quickly without compromising on what matters most.

Get started with GitHub Copilot >

The developer role is evolving. Here’s how to stay ahead.
https://github.blog/ai-and-ml/the-developer-role-is-evolving-heres-how-to-stay-ahead/
Mon, 06 Oct 2025 20:12:49 +0000
AI is changing how software gets built. Explore the skills you need to keep up and stand out.

Editor’s note: This piece was originally published in our LinkedIn newsletter, Branching Out_ (this GitHub Blog version adds more detail). Sign up for the newsletter for more career-focused content >

The shift: Robots are coming for your boilerplate code. In fact, AI is on track to write 95% of code within the next 5 years.

The opportunity: Robots are terrible at creativity, collaboration, and big-picture thinking. Which, conveniently, is where the future of development is headed.

We’re entering a new AI-powered era. And the impact is massive: GitHub research projects that by 2030, AI-driven productivity gains could add the equivalent of 15 million more effective developers to the global workforce, unlocking more than $1.5 trillion in economic value.

Knowing how to code will still be important, but your main job as developer will be that of orchestrator, strategist, and collaborator. Your value won’t just hinge on typing speed, but increasingly on your ability to solve, design, and inspire. And while the job market is shifting and opportunities can feel uncertain, these are the kinds of skills that can help you stay resilient and stand out.

“Developers’ roles are evolving from manual coders to orchestrators of AI-driven development ecosystems,” says Ketai Qiu, co-author of From today’s code to tomorrow’s symphony: The AI transformation of developer’s routine by 2030. “The future of programming will be less about writing lines of code and more about defining intent, guiding AI systems, and integrating their outputs into coherent solutions.”

So what does this shift mean for you, and how can you start building these new skills into your workflow today? Let’s dive in:

Skill #1: Make AI coding more reliable by providing better context 

AI is fast, but it isn’t psychic. It can produce code in seconds, yet without understanding what you’re actually building, the results can feel vague or off-target. That’s where context comes in. By giving Copilot the right signals—your intent, your data, and the purpose behind a task—you steer its suggestions toward meaningful, useful output.

Enter GitHub Copilot Spaces: a dedicated environment where you and your team can shape Copilot’s responses with the context that matters most. Spaces let you upload sources (files, repositories, instructions), set intent, and collaborate so Copilot’s answers are accurate, relevant, and tailored to your work. Instead of getting one-size-fits-all suggestions, you get outputs designed for your actual codebase, team practices, and business goals.

✅ Here’s how to set up a GitHub Space:

Step 1: Go to github.com/copilot/spaces and create a new space.

Step 2: Upload context, anything from documentation and sample files to entire repositories.

Step 3: Start a chat: ask Copilot questions that draw on the sources you’ve added. For example, if you simply ask “Generate a SQL query to find active users,” Copilot will guess. If you’ve already shared your schema, the query it produces will be tuned to your actual tables and fields.

💡 Tip: Teams can open organization-wide spaces so everyone benefits from the same shared context instead of working in silos.

Mastering context won’t solve every challenge, but it signals that you know how to direct AI systems, which is an increasingly valued ability in modern development.

🎤 This focus on context isn’t just theory. It’s actually front and center at GitHub Universe 2025 (October 28–29 in San Francisco and online) this year. Join us at the conference to explore:

  • Dawn of the agents: Leveraging AI-powered tools to accelerate software development: In this session, see how intent and context drive better outcomes.
  • From intent to output: Designing with AI agents: Explore how developers are guiding Copilot from single prompts into orchestrated solutions.
[Image: Slide titled “Dawn of the agents: Leveraging AI-powered tools to accelerate software development” with headshots of Maya Ross and Nick Liffen from GitHub.]

Explore the full Universe agenda 👉

Skill #2: Provide insight, judgment, and strategy

AI can generate code, but it can’t replace human insight, creativity, or collaboration. The developers who thrive will be those who blend machine efficiency with human judgment and teamwork. Orchestration will be a primary competitive advantage.

That’s where GitHub Copilot code review comes in, an AI tool that scans pull requests, highlights issues, and suggests improvements automatically, helping teams ship faster with fewer bottlenecks.

✅ Try it today:

Step 1: Request a Copilot review: open an existing pull request and add Copilot as a reviewer.

Step 2: Review Copilot’s feedback: after a moment, it will add comments, suggestions, and inline changes you can commit directly.

Step 3: Refine the review: re-review, thumbs-up/down feedback, or add a .github/copilot-instructions.md file for custom rules.

💡 Tip: Once Copilot provides feedback, you can enable automatic reviews so it checks every pull request by default. Don’t worry, Copilot won’t block merges unless you decide to.

🎤 And if you want to see how global teams are already scaling collaboration with Copilot, you’ll find it at Universe. Join us to attend sessions like: 

  • From pair to peer: The next evolution of Copilot code review: In this session, learn how global teams are using Copilot code review to speed delivery and reduce bottlenecks.
  • Hackable badges. A fan favorite returns. Every in-person attendee gets a programmable badge to code, customize, and showcase, sparking collaboration, creativity, and connection.  
  • Plus: GitHub Expert Center, Career Corner, and Open Source Zone: where hallway conversations turn into lasting collaborations.
[Image: Conference slide titled “From pair to peer: The next evolution of Copilot code review,” with the GitHub Universe 2025 logo and a speaker photo of Elle Shwer of GitHub.]

Plan your Universe experience 👉

Skill #3: Recognize that learning is never done

The half-life of technical skills has always been a reality. But in an AI-powered world, it’s getting shorter. What you know today won’t be enough tomorrow, which is why continuous learning is the key to staying ahead.

✅ Here’s a practical roadmap to grow your AI skill set right inside GitHub (adapted from our blog, Vibe coding: Your roadmap to becoming an AI developer):

Step 1: Learn essential languages and frameworks: start with Python, then expand into Java and C++. Explore frameworks like TensorFlow, PyTorch, and Scikit-learn.

Step 2: Master machine learning basics: deep learning, NLP, and computer vision. Try open source repositories like Awesome Machine Learning, NLTK, or OpenCV.

Step 3: Showcase your skills on GitHub: organize your repository, publish READMEs, contribute to open source, and build a standout profile on GitHub Pages.

Step 4: Get certified in GitHub Copilot: learn the full toolkit, prepare with docs and projects, and earn your Copilot certification badge.

💡 Tip: Don’t just learn AI skills in isolation, show your work. Every repository, contribution, or badge signals to employers that you’re keeping pace with the AI era.

Continuous learning can’t eliminate uncertainty in a shifting job market. But it gives you the best shot at adapting, whether that means advancing in your current role or pivoting into adjacent paths like developer advocacy, architecture, or tooling.

🎤 Staying ahead means investing in continuous learning. And at Universe, you’ll hear directly from the leaders shaping how software gets built. Highlights include:

[Image: Conference slide titled “Flying high with AI: Cathay Pacific on transforming software.”]

See the full speaker lineup 👉

We’re all in this together 

Join us at Universe, where context, collaboration, and creativity come together. See what’s next, and be part of the community that’s building it. You’ll leave with insights, tools, and connections you can put to work right away.

Reserve your seat now 👉

[Image: GitHub Universe 2024 outdoor event with tents, archway, and attendees.]

Junior developers aren’t obsolete: Here’s how to thrive in the age of AI
https://github.blog/ai-and-ml/generative-ai/junior-developers-arent-obsolete-heres-how-to-thrive-in-the-age-of-ai/
Thu, 07 Aug 2025 21:05:55 +0000
The role of junior developer is evolving. If you’re at this stage in your career, here’s how to keep up and stand out.

Editor’s note: This piece was originally published in our LinkedIn newsletter, Branching Out_. Sign up now for more career-focused content > 

Everyone’s talking about it: AI is changing how we work. And nowhere is that more true than in the field of software engineering.

If you’re just getting started as a developer, you might be wondering: is AI ruining my chances of getting a junior-level role? After all, a 2023 study by ServiceNow and Pearson projects that nearly 26% of tasks performed by [current] junior application developers will be augmented or fully automated by 2027.

In a word: No. Quite the contrary. New learners are well positioned to thrive as junior developers because they’re coming into the workforce already savvy with AI tools, which is just what companies need to adapt to the changing ways software is being developed.

Hear more from our CEO, Thomas Dohmke, on The Pragmatic Engineer podcast >

So what does that mean for you? According to Miles Berry, professor of computing education at the University of Roehampton, today’s learners must develop the skills to work with AI rather than worry about being replaced by it. As a junior developer, you need to think critically about the code your AI tool gives you, stay curious when things feel unfamiliar, and collaborate with AI itself in addition to senior team members. 

As Berry puts it:

“Creativity and curiosity are at the heart of what sets us apart from machines.” 

With that in mind, here are five ways to stand out as a junior developer in the AI era:

1. Use AI to learn faster, not just code faster 

Most developers use GitHub Copilot for autocomplete. But if you’re just starting out, you can turn it into something more powerful: a coding coach.

Get Copilot to tutor you

You can set personal instructions so Copilot guides you through concepts instead of handing you full solutions. Here’s how: 

In VS Code, open the Command Palette and run:

> Chat: New Instructions File

Then paste this into the new file:

---
applyTo: "**"
---
I am learning to code. You are to act as a tutor; assume I am a beginning coder. Teach me concepts and best practices, but don’t provide full solutions. Help me understand the approach, and always add: "Always check the correctness of AI-generated responses."

This will apply your tutoring instructions to any file you work on. You can manage or update your instructions anytime from the Chat > Instructions view.

Ask Copilot questions

Open Copilot Chat in VS Code and treat it like your personal coach. Ask it to explain unfamiliar concepts, walk through debugging steps, or break down tricky syntax. You can also prompt it to compare different approaches (“Should I use a for loop or map here?”), explain error messages, or help you write test cases to validate your logic. Every prompt is a learning opportunity and the more specific your question, the better Copilot can guide you.
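
For example, the “for loop or map” question usually comes down to whether you’re transforming data or performing side effects—a distinction Copilot can explain, and one you can verify yourself:

```typescript
const prices = [10, 20, 30];

// map: a pure transformation that produces a new array
const withTax = prices.map((price) => price * 1.1);

// for loop: better when you need side effects, accumulation, or early exit
let total = 0;
for (const price of prices) {
  if (price > 25) break; // early exit is awkward to express with map
  total += price;
}
```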

Practice problem solving without autocomplete 

When you’re learning to code, it can be tempting to rely on autocomplete suggestions. But turning off inline completions — at least temporarily — can help strengthen your problem-solving and critical thinking skills. You’ll still have access to Copilot Chat, so you can ask questions and get help without seeing full solutions too early.

Just keep in mind: This approach slows things down by design. It’s ideal when you’re learning new concepts, not when you’re under time pressure to build or ship something.

To disable Copilot code completion for a project (while keeping chat on), create a folder called .vscode in the root of your project, and add a file named settings.json with this content:

{
  "github.copilot.enable": {
    "*": false
  }
}

This setting disables completions in your current workspace, giving you space to think through solutions before asking Copilot for help.
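The same setting accepts per-language keys, so you can keep completions on where they help and off where you're practicing. For example, this variation (one possible configuration, not the only one) disables completions everywhere except Markdown files:

```json
{
  "github.copilot.enable": {
    "*": false,
    "markdown": true
  }
}
```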

Read our full guide on how to use Copilot as a tutor >

2. Build public projects that showcase your skills (and your AI savvy)

In today’s AI-powered world, highlighting your AI skills can help you stand out to employers. Your side projects are your portfolio, and GitHub gives you the tools to sharpen your skills, collaborate, and showcase your work. Here’s how to get started: In VS Code, open Copilot Chat and type:

/new

Copilot can scaffold a new project inside your editor to help you get started. Once it’s scaffolded, ask Copilot:

“Add the MIT license to this project and publish it as a public project on GitHub.”

Open a terminal in VS Code and run the following commands to push manually (you’ll first need an empty repository on GitHub to use as the remote):

git init && git add . && git commit -m "Initial commit" && git branch -M main && git remote add origin <your-repo-url> && git push -u origin main

Or create a new repo using the GitHub web interface and upload your files.

From there, you can:

  • Track progress with issues, commits, and project boards.
  • Document your journey and milestones in the README.
  • Iterate and improve with feedback and AI assistance.

Read our full guide on prompting Copilot to create and publish new projects and start building your public portfolio > 

3. Level up your developer toolkit with core GitHub workflows

Yes, AI is changing the game, but strong fundamentals still win it. If you’re aiming to level up from student to junior dev, these core workflows are your launchpad:

  • Automate with GitHub Actions. Automating builds and deployments is a best practice for all developers. Use GitHub Actions to build, test, and deploy your projects automatically.
  • Contribute to open source. Join the global developer community by contributing to open source. It’s one of the best ways to learn new skills, grow your resume, and build real-world experience.
  • Collaborate through pull requests. Coding is a team sport. Practice the same pull request workflows used by professional teams: Review others’ code, discuss feedback, and merge with confidence.
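The first bullet is a good place to start. Here's a minimal workflow sketch that runs your tests on every push and pull request; the Node.js setup and npm commands are assumptions, so swap in your own stack's build and test commands:

```yaml
# .github/workflows/ci.yml
name: CI
on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci    # install dependencies
      - run: npm test  # run your test suite
```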

Read our full guide on understanding GitHub workflows >

4. Sharpen your expertise by reviewing code

One of the fastest ways to grow as a developer is to learn from the reviews given by your peers. Every pull request is a chance to get feedback — not just on your code, but on how you think, communicate, and collaborate.

GitHub staff engineer Sarah Vessels has reviewed over 7,000 pull requests. She advises not to be afraid to ask questions during code reviews. If you’re unsure why a suggestion was made, speak up. If something isn’t obvious, clarify. Code review is a conversation, not a test.

Here’s how to make the most of it:

  • Ask questions. Use comments to understand decisions or explore alternate approaches. It shows curiosity and builds shared knowledge.
  • Look for patterns. Repeated suggestions often point to best practices you can internalize and reuse.
  • Take notes. Keep track of feedback you’ve received and how you’ve addressed it, which is great for personal growth and future reference.
  • Be gracious. Say thank you, follow up when you make changes, and acknowledge when a comment helped you see something differently.

Read our full guide on code review best practices and learn how to grow from every review >

5. Debug smarter and faster with AI

Debugging is one of the most time-consuming parts of software development. But with GitHub Copilot, you don’t have to do it alone.

Use Copilot Chat to:

  • Ask “Why is this function throwing an error?” and get real-time explanations.
  • Use /fix to highlight code and generate a potential fix.
  • Run /tests to create test cases and verify your logic.
  • Try /explain on cryptic errors to understand the root cause.

You can even combine commands for deeper debugging — for example, use /explain to understand the problem, then /fix to generate a solution, and /doc to document it for your team.

Read our full guide on how to debug code with Copilot >

The bottom line

Whether you’re writing your first pull request or building your fifth side project, GitHub is the place to sharpen your skills, collaborate in the open, and build a portfolio that gets you hired.

AI may be reshaping the software world, but, with the right tools and mindset, junior developers can thrive. 

Start building on GitHub today >

Learn how to code faster and better with our biweekly developer newsletter.

The post Junior developers aren’t obsolete: Here’s how to thrive in the age of AI appeared first on The GitHub Blog.

]]>
Vibe coding: Your roadmap to becoming an AI developer https://github.blog/ai-and-ml/vibe-coding-your-roadmap-to-becoming-an-ai-developer/ Fri, 16 May 2025 16:00:00 +0000 https://github.blog/?p=87889 Learn how to go from curious coder to AI wizard—with a little help from GitHub.

The post Vibe coding: Your roadmap to becoming an AI developer appeared first on The GitHub Blog.

]]>

Editor’s note: This piece was originally published in our LinkedIn newsletter, Branching Out_. Sign up now for more career-focused content > 

Pop quiz: What do healthcare, self-driving cars, and your next job all have in common? 

If you guessed AI, you were right. And with 80% of developers expected to need at least a fundamental AI skill set by 2027, there’s never been a better time to dive into this field.

This blog will walk you through what you need to know, learn, and build to jump into the world of AI—using the tools and resources you already use on GitHub. 

Let’s dive in.

1. Learn essential programming languages and frameworks 💬

    Mastering the right programming languages and tools is foundational for anyone looking to excel in AI and machine learning development. Here’s a breakdown of the core programming languages to zero in on:

    • Python: Known for its simplicity and extensive library support, Python is the cornerstone of AI and machine learning. Its versatility makes it the preferred language for everything from data preprocessing to deploying AI models. (Fun fact: Python overtook JavaScript as the number one programming language in 2024!)
    • Java: With its scalability and cross-platform capabilities, Java is popular for enterprise-level applications and large-scale AI systems.
    • C++: As one of the fastest programming languages, C++ is often used in performance-critical applications like gaming AI, real-time simulations, and robotics.

    Beyond programming, these frameworks give you the tools to design, train, and deploy intelligent systems across real-world applications:

    • TensorFlow: Developed by Google, TensorFlow is a comprehensive framework that simplifies the process of building, training, and deploying AI models.
    • Keras: Built on top of TensorFlow, Keras is user-friendly and enables quick prototyping.
    • PyTorch: Favored by researchers for its flexibility, PyTorch provides dynamic computation graphs and intuitive debugging tools.
    • Scikit-learn: Ideal for traditional machine learning algorithms, Scikit-learn offers efficient tools for data analysis and modeling.

    Spoiler alert: Did you know you can learn programming languages and AI frameworks right on GitHub? Resources like GitHub Learning Lab, The Algorithms, TensorFlow Tutorials, and PyTorch Examples provide hands-on opportunities to build your skills. Plus, tools like GitHub Copilot provide real-time coding assistance that can help you navigate new languages and frameworks easily while you get up to speed.
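If you want intuition for what these frameworks are doing under the hood, it helps to write the core training loop once by hand. Here's a self-contained sketch (toy data, plain Python, no framework) that fits a line with gradient descent; it's the same loop TensorFlow and PyTorch run for you, with automatic differentiation computing the gradients:

```python
# Fit y = w * x + b to toy data with manual gradient descent.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # points on the line y = 2x + 1

w, b = 0.0, 0.0
lr = 0.05  # learning rate

for _ in range(2000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges to w ≈ 2, b ≈ 1
```

Frameworks add autograd, GPU acceleration, and layers on top, but the update step is the same idea.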


     2. Master machine learning 🤖

    Machine learning (ML) is the driving force behind modern AI, enabling systems to learn from data and improve their performance over time. It bridges the gap between raw data and actionable insights, making ML expertise a must-have if you’re looking for a job in tech. Here are some key subfields to explore:

    • Deep learning: A subset of ML, deep learning uses multi-layered neural networks to analyze complex patterns in large datasets. While neural networks are used across ML, deep learning focuses on deeper architectures and powers advancements like speech recognition, autonomous vehicles, and generative AI models.
    • Natural language processing (NLP): NLP enables machines to understand, interpret, and respond to human language. Applications include chatbots, sentiment analysis, and language translation tools like Google Translate.
    • Computer vision: This field focuses on enabling machines to process and interpret visual information from the world, such as recognizing objects, analyzing images, and even driving cars.

    Luckily, you can explore ML right on GitHub. Start with open source repositories like Awesome Machine Learning for curated tools and tutorials, Keras for deep learning projects, NLTK for natural language processing, and OpenCV for computer vision. Additionally, take on real-world challenges by searching for Kaggle competition solutions on GitHub, or contribute to open source AI projects tagged with “good first issue” to gain hands-on experience. 
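As a taste of what NLP involves at its simplest, here's a toy sentiment scorer built on hand-picked word lists. Real systems learn these weights from data; the word lists and punctuation handling here are invented purely for illustration:

```python
# A toy bag-of-words sentiment scorer: the core idea behind many
# classic NLP baselines (real systems learn the word weights from data).
POSITIVE = {"great", "love", "excellent", "good", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def sentiment(text: str) -> str:
    # Lowercase, split, and strip trailing punctuation so "terrible," matches.
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("I love this great library"))   # positive
print(sentiment("what a terrible, awful bug"))  # negative
```

Libraries like NLTK start from this kind of counting and add tokenization, learned models, and much more.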


    3. Build a GitHub portfolio to showcase your skills 💼

    A strong GitHub portfolio highlights your skills and AI projects, setting you apart in the developer community. Here’s how to optimize yours:

    • Organize your repositories: Use clear names, detailed README files, and instructions for others to replicate your work.
    • Feature your best work: Showcase projects in areas like NLP or computer vision, and use tags to improve discoverability.
    • Create a profile README: Introduce yourself with a professional README that includes your interests, skills, and standout projects.
    • Use GitHub Pages: Build a personal site to host your projects, case studies, or interactive demos.
    • Contribute to open source: Highlight your open source contributions to show your collaboration and technical expertise.

    For detailed guidance, check out the guides on Building Your Stunning GitHub Portfolio and How to Create a GitHub Portfolio.


    4. Get certified in GitHub Copilot 🏅

    Earning a certification in GitHub Copilot showcases your expertise in leveraging AI-powered tools to enhance development workflows. It’s a valuable credential that demonstrates your skills to employers, collaborators, and the broader developer community. Here’s how to get started:

    • Understand GitHub Copilot: GitHub Copilot is an AI-powered coding assistant designed to help you write code faster and more efficiently. Familiarize yourself with its features, such as real-time code suggestions, agent mode in Visual Studio Code, Model Context Protocol (MCP) support, and generating boilerplate code across multiple programming languages.
    • Explore certification options: GitHub offers certification programs through its certification portal. These programs validate your ability to use GitHub tools effectively, including GitHub Copilot. They also cover key topics like AI-powered development, workflow automation, and integration with CI/CD pipelines.
    • Prepare for the exam: Certification exams typically include theoretical and practical components. Prepare by exploring GitHub Copilot’s official documentation, completing hands-on exercises, and working on real-world projects where you utilize GitHub Copilot to solve coding challenges.
    • Earn the badge: Once you complete the exam successfully, you’ll receive a digital badge that you can showcase on LinkedIn, your GitHub profile, or your personal portfolio. This certification will enhance your resume and signal to employers that you’re equipped with cutting-edge AI development tools.

    Check out this LinkedIn guide for tips on becoming a certified code champion with GitHub Copilot. 



    ]]>
    5 GitHub Actions every maintainer needs to know https://github.blog/open-source/maintainers/5-github-actions-every-maintainer-needs-to-know/ Thu, 27 Mar 2025 16:00:46 +0000 https://github.blog/?p=85886 With these actions, you can keep your open source projects organized, minimize repetitive and manual tasks, and focus more on writing code.

    The post 5 GitHub Actions every maintainer needs to know appeared first on The GitHub Blog.

    ]]>

    Maintaining and contributing to open source projects can be rewarding—but it comes with a lot of small, repetitive tasks. The good news? GitHub Actions can automate the more tedious and error-prone parts of maintainership, freeing you up to focus on what matters: building and growing your community. Whether you’ve just launched your project or you’re looking to scale, here are a few of the most helpful actions to help you along your way.

    Pro tip: It’s best practice to audit the source code of any action you use, and pin actions to a full length commit SHA so that you always know what version of the code you’re using.

    Now, let’s get started.

    1. Clean up your backlog with stale

    Managing issues or pull requests can be challenging, especially when users open issues that require additional information to resolve. If they don’t respond with what you need, these issues can pile up and make your backlog look daunting. Stale marks issues or pull requests that lack activity for a set number of days, then closes them after a grace period if there’s still no response, keeping your open issues list nice and tidy.

    👉 Who uses it: DeepSeek-R1, opentelemetry-go, and more.
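Here's a minimal configuration sketch for stale. The day counts and message are placeholders to tune for your project, and the action is shown with a version tag for readability; as noted above, pinning to a full commit SHA is best practice:

```yaml
# .github/workflows/stale.yml
name: Close stale issues
on:
  schedule:
    - cron: "30 1 * * *"  # run once a day

permissions:
  issues: write
  pull-requests: write

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v9
        with:
          days-before-stale: 60  # mark as stale after 60 days of inactivity
          days-before-close: 7   # close 7 days after being marked stale
          stale-issue-message: "This issue is stale because it has been open 60 days with no activity."
```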

    2. Let super-linter sweat the small stuff for you

    It’s awesome when someone takes the time to submit a pull request to your project. It’s not so awesome when you have to manually reject that pull request because of a small mistake. A linter is a tool that helps you enforce best practices and consistent formatting. Super-linter is a collection of linters for a variety of languages that can automate many of the chores associated with code reviews, including enforcing style guidelines, detecting syntax errors, identifying security vulnerabilities, and ensuring code consistency across multiple languages.

    👉 Who uses it: Barman, frankenphp, and more.

    3. Stop repeating yourself with create-or-update-comment

    Repetitive comments for common scenarios can become tedious. Create-or-update-comment offers a reprieve, enabling you to automate tasks, like sending welcome messages to new contributors or providing standardized feedback when linters and other automated processes detect problems.

    👉 Who uses it: woocommerce, lucide, and more.

    4. Create release notes with ease with Release Drafter

    After all the merging, testing, and other work that goes into preparing a release, writing up the release notes is often the last thing you want to do. The good news: Release Drafter automates the process for you. Each time you merge a pull request, it updates a draft text of your release notes, so they’ll be ready when it’s time to publish.

    👉 Who uses it: LightGBM, Mealie, and more.

    5. Stay organized with pull request labeler

    Overwhelmed with PRs? Pull request labeler automatically labels pull requests based on the files or branch modified, helping you triage work and maintain a consistent labeling system.

    👉 Who uses it: Apache Lucene, Marvin, and more.

    Maintaining an open source project is a labor of love, but with the right tools, it doesn’t have to feel overwhelming. These actions are just a few examples of how automation can save time, reduce frustration, and help you focus on writing great code and growing your community.

    Why not give them a try and see how they can transform your open source journey? Your future self (and your contributors) will thank you!

    Find more actions on GitHub Marketplace.


    ]]>
    How to get in the flow while coding (and why it’s important) https://github.blog/developer-skills/career-growth/how-to-get-in-the-flow-while-coding-and-why-its-important/ Mon, 22 Jan 2024 19:46:30 +0000 https://github.blog/?p=76281 Explore what flow state entails, its benefits, and three tips for reaching it the next time you code.

    The post How to get in the flow while coding (and why it’s important) appeared first on The GitHub Blog.

    ]]>

    It’s the dream: your ideas are flowing, time and space fade away, the path ahead of you is clear, you’re moving at the speed of thought, and every click you make is gold.

    This is called being in the flow or flow state. When you’re in the flow, you block out the world, are fully immersed in what you’re doing, and enjoy increased creativity, innovation, and happiness.

    “Being in the flow is magical,” says Jonathan Carter, technical advisor of the CEO at GitHub. “You tell your teammates they can go to lunch and you’ll catch them later—not because you’re a workaholic, but because there’s truly nothing else you’d rather be doing right now.”

    In this blog, we’ll explore what flow state entails, its benefits, and three tips for reaching it the next time you sit down to code. Let’s go.

    What exactly is the flow state?

    The concept of flow state came from positive psychologist Mihaly Csikszentmihalyi and his 1990 book, Flow: The Psychology of Optimal Experience. In it, Csikszentmihalyi describes nine dimensions of flow:

    1. Challenge-skills balance
    2. Total concentration
    3. Clear goals
    4. Immediate feedback
    5. Transformation of time
    6. Feeling intrinsically rewarded
    7. Effortlessness
    8. Loss of self-consciousness
    9. Feeling of total control

    Csikszentmihalyi discovered these dimensions by conducting research to understand how people achieve productivity and happiness. He found that in people’s favorite, most absorbed moments, their thoughts and actions “flowed,” and brought unrivaled motivation, meaning, and creativity.

    “Software has historically been viewed as mathematical or scientific in nature, but I would argue that writing code has more in common with other creative acts,” says Idan Gazit, senior director of research at GitHub. “Whether you’re writing an essay or writing a program, the challenge is getting into the headspace where you can untangle the thing you want to express.”

    What are the benefits of flow state for developers?

    When developers reach that coveted frame of mind, their productivity soars. According to our recent developer productivity research, developers produce higher quality work when they can easily collaborate—a hallmark of flow state—through comments, pull requests, issues, etc. According to the study, developers reported that effective collaboration provides a host of benefits:

    • Improved test coverage
    • Faster, cleaner, more secure code writing
    • Novel, creative solutions
    • Speedier deployments

    On the flip side, when developers can’t freely collaborate, their work suffers (it takes 23 minutes, on average, to get back into the task at hand after an interruption, according to a study from the University of California, Irvine).

    And flow state isn’t important just for individual developers—it helps businesses, too.

    “When it comes to business success, flow state is everything,” says Chris Reddington, senior manager of developer advocacy at GitHub. This is because today’s environments use dozens of languages and often leverage multiple cloud providers, creating pressure, complexity, and distractions. He adds, “The more we can help engineering teams stay in the flow, where they are just focused on solving those bigger problems, the better.”

    Quick tips for developers who want to get and stay in the flow state

    So, how can you achieve flow state during your day-to-day tasks? The following tips should help you reach the flow state and stay there—regardless of industry or where you are in your developer career.

    Tip #1: Optimize your environment

    Creating a distraction-free environment that’s conducive to work can pay huge dividends. Here are some ideas:

    • Block time. Create personal focus events on your calendar where no one can schedule meetings with you.
    • Schedule breaks. Use a timer to give yourself 15-30 minute breaks throughout the workday.
    • Snooze Slack and phone notifications. Be antisocial and make yourself unavailable to the world.
    • Eliminate or reduce multitasking. Being able to do more than one task at a time is a myth, anyway.
    • Invest in headphones. Noise-canceling headphones, in particular, can keep your stress down and your focus high.
    • Get comfortable. Invest in ergonomic office equipment, wear comfortable clothes, and ensure you’ve had enough to eat.
    • Hold on scheduling meetings. If you’re a team leader, be mindful of meeting frequency.
    • Create a pre-flow ritual. Routines like grabbing coffee, checking your messages, and then putting your phone on silent can cue your brain that it’s time to get to work.

    Of course, even with our best attempts, distractions happen. If you need to step away from the task at hand, that’s okay. Gazit also suggests pair programming or solution design to help overcome mental hurdles.

    “It’s a great magic trick,” he says. “Stepping back and talking through the problem with a teammate is often the fastest route to getting unstuck.”

    He also adds that GitHub Copilot can be helpful for this.

    “GitHub Copilot is never busy,” he says. “I’m not distracting it when I put it to work. Debugging with a rubber duck is fantastic, but GitHub Copilot is the rubber duck that talks back. It helps me reason about the solution space and suggests approaches I wouldn’t have considered.”


    Tip #2: Map out your work

    You can also achieve flow state by ensuring you have a clear path for accomplishing your goal.

    Gazit describes how he can get into the flow state when he correctly nails the balance of architectural work. This is especially important when it comes to complex tasks like designing an API, where you first have to build an architecture while considering how it will be used and what kind of load it’ll put on your database.

    “If I do the architectural work well, I can then do the bricklaying with a feeling that I’m super clear on what I’m doing,” he says. “I know exactly where I’m going.”

    Reddington notes that mapping your work and the practice of blocking time, as mentioned above, often go hand in hand.

    “When I block out chunks of time, I can figure out how I’m going to solve the problems I’m tackling appropriately,” he says.

    However, he warns that you’re not always going to fix the things you’re trying to do in the allotted blocked time. But at least you can start mentally organizing.

    Finding the optimal mix of challenge and skill is also important to achieve flow. If something is too easy, you’ll be bored and unsatisfied. If it’s too challenging, on the other hand, you’ll be stressed about not getting it done, which will also keep flow elusive.

    “A good mix can make all the difference,” Reddington says.

    Tip #3: Find joy in the work you’re doing

    You won’t be able to hit flow state if you’re not enjoying yourself.

    “It’s only when you’re not worrying about meetings, or your email, or what you’re going to have for dinner, that you can hit the flow state,” Carter says.

    It’s a similar experience to being entertained.

    “It’s like when you’re reading a book and you just have to finish the next chapter or you’re binging Netflix and you need to see the next episode,” he says. “It’s that same energy.”

    Enjoyable work pertains to teams, too.

    Carter notes that office work enjoyment can be increased by clearly articulating the outcomes you’re trying to accomplish. When a product manager writes a well-articulated issue that clarifies the end result, there’s a higher likelihood that the team will be more motivated to take that work on and do it quickly.

    “They’re not focused on the complexity anymore but on the desire to get there,” he says.

    Similarly, if you’re involved in a project you don’t enjoy, it can be useful to rethink why you’re doing that work in the first place.

    “I find that if I can recreate the mindset of why we should solve the problem, I can bootstrap curiosity and get back into the flow state,” he says.

    The bottom line

    Achieving flow state can significantly boost levels of norepinephrine, dopamine, anandamide, serotonin, and endorphins, increasing feelings of motivation and intrinsic reward, as well as pattern recognition and lateral thinking ability. It’s a win for productivity, well-being, and keeping the intrinsic developer fire strong.

    “With flow state, you’re never at a point where you’re performing the same mechanics twice,” Carter says. “You’re learning in response to what you’re doing. You’re naturally interested, building up an unconscious muscle of curiosity. The learning potential is endless.”

    To learn more about how businesses are incorporating flow state into their processes, read Developer experience: What is it and why should you care? and explore how GitHub can help.


    ]]>
    A developer’s guide to open source LLMs and generative AI https://github.blog/ai-and-ml/llms/a-developers-guide-to-open-source-llms-and-generative-ai/ Thu, 05 Oct 2023 16:00:38 +0000 https://github.blog/?p=74518 Open source generative AI projects are a great way to build new AI-powered features and apps.

    The post A developer’s guide to open source LLMs and generative AI appeared first on The GitHub Blog.

    ]]>

    We all know that AI is changing the world. But what happens when you combine AI with the power of open source?

    Over the past year, there has been an explosion of open source generative AI projects on GitHub: by our count, more than 8,000. They range from commercially backed large language models (LLMs) like Meta’s LLaMA to experimental open source applications.

    These projects offer many benefits to open source developers and the machine learning community—and are a great way to start building new AI-powered features and applications.

    In this article, we’ll explore:

    • The differences between open source LLMs and closed source pre-trained models
    • Best practices for fine-tuning LLMs
    • The open source LLMs available today
    • What the future holds for the rapidly evolving world of generative AI

    Let’s jump in.

    Interested in building with LLMs? Check out our guide on prompt engineering >

    Open source vs. closed source LLMs

    By now, most of us are familiar with LLMs: neural network-based language models trained on vast quantities of data to mimic human behavior by performing various downstream tasks, like question answering, translation, and summarization. LLMs have disrupted the world with the introduction of tools like ChatGPT and GitHub Copilot.

    Open source LLMs differ from their closed counterparts regarding the source code (and sometimes other components, as well). With closed LLMs, the source code—which explains how the model is structured and how the training algorithms work—isn’t published.

    “When you’re doing research, you want access to the source code so you can fine-tune some of the pieces of the algorithm itself,” says Alireza Goudarzi, a senior researcher of machine learning at GitHub. “With closed models, it’s harder to do that.”

    Open source LLMs help the industry at large: because so many people contribute, they can be developed faster than closed models. They can also be more effective for edge cases or specific applications (like local language support), can contain bespoke security controls, and can run on local machines.

    But closed models—often built by larger companies—have advantages, too. For one, they’re embedded in systems with filters for biased information, inappropriate language, and other questionable content. They also frequently have security measures baked in. Plus, they don’t need fine-tuning, a specialized skill set requiring dedicated people and teams.

    “Closed, off-the-shelf LLMs are high quality,” notes Eddie Aftandilian, a principal researcher at GitHub. “They’re often far more accessible to the average developer.”

    How to fine-tune open source LLMs

    Fine-tuning open source models is typically done on a large cloud provider that hosts the LLM, such as AWS, Google Cloud, or Microsoft Azure. Fine-tuning allows you to optimize the model by creating more advanced language interactions in applications like virtual assistants and chatbots. This can improve model accuracy by anywhere from five to 10 percent.

    As for best practices? Goudarzi recommends being careful about data sampling and being clear about the specific needs of the application you’re trying to build. The curated data should match your needs exactly since the models are pre-trained on anything you can find online.

    “You need to emphasize certain things related to your objectives,” he says. “Let’s say you’re trying to create a model to process TV and smart home commands. You’d want to preselect your data to have more of a command form.”

    This will help optimize model efficiency.
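As a toy illustration of that kind of preselection, the sketch below filters a corpus down to command-form sentences before fine-tuning. The verb list and length cutoff are invented for the example, not a standard heuristic:

```python
# Keep only command-style training examples (hypothetical heuristic:
# short sentences that start with an imperative verb).
COMMAND_VERBS = {"turn", "set", "play", "dim", "open", "close", "start", "stop"}

def looks_like_command(text: str) -> bool:
    words = text.lower().split()
    return bool(words) and words[0] in COMMAND_VERBS and len(words) <= 8

corpus = [
    "Turn off the living room lights",
    "The weather was lovely yesterday",
    "Set the thermostat to 68 degrees",
]
curated = [t for t in corpus if looks_like_command(t)]
print(curated)  # only the two command-form sentences survive
```

In practice you'd curate with far richer signals, but the principle is the same: shape the data toward the interactions your application needs.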

    Choosing your model

    Which open source model is best for you? Aftandilian recommends focusing on models’ performance benchmarks against different scenarios, such as reasoning, domain-specific understanding of law or science, and linguistic comprehension.

    However, don’t assume that the benchmark results are correct or meaningful.

    “Rather, ask yourself: how good is this model at a particular task?” he says. “It’s pretty easy to let benchmarks seep into the training set, which can mask a lack of deep understanding, skewed performance, or limited generalization.”

    When this happens, the model is trained on its own evaluation data. “Which would make it look better than it should,” Aftandilian says.

    You should also consider how much the model costs to run and its overall latency rates. A large model, for instance, might be exceptionally powerful. But if it takes minutes to generate responses versus seconds, there may be better options. (For example, the models that power GitHub Copilot in the IDE feature a latency rate of less than ten milliseconds, which is well-suited for getting quick suggestions.)
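When weighing cost and latency, it pays to measure rather than estimate. Here's a minimal timing harness; the generate function is a stand-in (it just sleeps), so swap in your real inference call:

```python
import time

def generate(prompt: str) -> str:
    # Stand-in for a real model call; replace with your inference API.
    time.sleep(0.01)
    return "response to: " + prompt

def measure_latency(fn, prompt: str, runs: int = 20) -> float:
    """Return the median latency in milliseconds over several runs."""
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(prompt)
        times.append((time.perf_counter() - start) * 1000)
    times.sort()
    return times[len(times) // 2]

print(f"median latency: {measure_latency(generate, 'hello'):.1f} ms")
```

Using the median rather than the mean keeps one slow cold-start call from skewing the comparison between models.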

    Supercharge your productivity with our monthly developer newsletter.

    Open source LLMs available today

    There are several open source commercially licensed models available. These include:

    • OpenLLaMA: An open source reproduction of Meta’s LLaMA model, developed by Berkeley AI Research, this project provides permissively licensed models with 3B, 7B, and 13B parameters, trained on one trillion tokens. OpenLLaMA models have been evaluated using the lm-evaluation-harness and perform comparably to the original LLaMA and GPT-J across most tasks. But because the tokenizer merges consecutive spaces, the models aren’t well suited to code generation tasks that depend on whitespace.
    • Falcon-Series: Developed by Abu Dhabi’s Technology Innovation Institute (TII), Falcon-Series consists of two models: Falcon-40B and Falcon-7B. The series has a unique training data pipeline that extracts content with deduplication and filtering from web data. The models also use multi-query attention, which improves the scalability of inference. Falcon can generate human-like text, translate languages, and answer questions.
    • MPT-Series: A set of decoder-only large language models, MPT-Series models have been trained on one trillion tokens spanning code, natural language text, and scientific text. Developed by MosaicML, these models come in two specific versions: MPT-Instruct, designed to be task-oriented, and MPT-Chat, which provides a conversational experience. It’s most suitable for virtual assistants, chatbots, and other interactive user engagement tools.
    • FastChat-T5: A large transformer model with three billion parameters, FastChat-T5 is a chatbot model developed by the FastChat team through fine-tuning the Flan-T5-XL model. Trained on 70,000 user-shared conversations, it generates responses to user inputs autoregressively and is primarily for commercial applications. It’s a strong fit for applications that need language understanding, like virtual assistants, customer support systems, and interactive platforms. 

    The future of open source LLMs

    There’s been a flurry of activity in the open source LLM world.

    “Developers are very active on some of these open source models,” Aftandilian says. “They can optimize performance, explore new use cases, and push for new algorithms and more efficient data.”

    And that’s just the start.

    Meta’s LLaMA model is now available for commercial use, allowing businesses to create their own AI solutions.

    Goudarzi’s team has been thinking about how they can distill open source LLMs and reduce their size. If smaller, the models could be installed on local machines, and you could have your own mini version of GitHub Copilot, for instance. But for now, open source models often need financial support due to their extensive infrastructure and operating costs.

    One thing that surprised Goudarzi: originally, the machine learning community thought that more advanced generative AI would require more advanced algorithms. But that hasn’t been the case.

    “The simple algorithm actually stays the same, regardless of how much it can do,” he says. “Scaling is the only change, which is completely mind-blowing.”

    Who knows how open source LLMs will revolutionize the developer landscape.

    “I’m excited that we’re seeing so many open source LLMs now,” Goudarzi says. “When developers start building with these models, the possibilities are endless.”

    Interested in how generative AI can help optimize your productivity? Read our guide on developer experience >

    The post A developer’s guide to open source LLMs and generative AI appeared first on The GitHub Blog.

    Developer experience: What is it and why should you care? https://github.blog/enterprise-software/collaboration/developer-experience-what-is-it-and-why-should-you-care/ Thu, 08 Jun 2023 19:23:53 +0000 https://github.blog/?p=72278 Explore how investing in a better developer experience frees developers to do what matters most: building great software.

    The post Developer experience: What is it and why should you care? appeared first on The GitHub Blog.


    Developer experience examines how people, processes, and tools affect developers’ ability to work efficiently. Learn more about what developers want in our developer experience survey >

    What do building software and vacuuming your house have in common?

    Jonathan Carter, technical advisor to the CEO at GitHub, used to hate vacuuming. That’s because his vacuum was located on the first floor of his home and bringing it upstairs to the main floor was tedious. But when he realized he could simply keep the vacuum where he needed it, the task wasn’t that hard. Now he vacuums every other day.

    “The same is true with building software,” he says. “When we construct the experience to empower the desired behavior naturally and effortlessly, we get a great outcome.”

    This is what developer experience (DevEx) is about. DevEx—sometimes called DevX or DX—examines how the interplay of developers, processes, and tools positively or negatively affects software development. In this article, we’ll explore the key components of DevEx and how its optimization is integral for business success.

    Let’s jump in.

    Are you a visual learner? 😎 We’ve got you covered.
    Learn about DevEx in our What is DevEx? video.

    What is developer experience?

    DevEx refers to the systems, technology, process, and culture that influence the effectiveness of software development. It looks at all components of a developer’s ecosystem—from environment to workflows to tools—and asks how they are contributing to developer productivity, satisfaction, and operational impact.

    “Building software is like having a giant house of cards in our brains,” says Idan Gazit, senior director of research at GitHub. “Tiny distractions can knock it over in an instant. DevEx is ultimately about how we contend with that house of cards.”

    With DevEx, every aspect of a developer’s journey is questioned.

    “Is the tool making my job harder or easier?” Gazit asks. “Is the environment helping me focus? Is the process eliminating ways in which I can make mistakes? Is the system keeping me in my flow—and confidently enabling me to stack my cards ever higher?”

    Additionally, how developers subjectively feel makes all the difference—which can be gauged by user testing, surveys, and feedback.

    “DevEx puts developers at the center and works to understand how they feel and think about the work that they do,” says Eirini Kalliamvakou, staff researcher at GitHub. Developer sentiment can uncover points of friction and provide the opportunity to find appropriate fixes.

    “You can’t improve the developer experience with developers out of the loop,” she says.

    Importantly, collaboration is the multiplier across the entire DevEx. Developers need to be able to easily communicate and share with each other to do their best work.

    What is the history of developer experience?

    While DevEx might seem like a logical strategy to improve software development, the industry has been slow to apply it.

    Over the past few decades, developers have witnessed an explosion of technologies, open source libraries, package managers, languages, and services—with more tools, APIs, and integrations arriving by the day. The result is an ecosystem where nearly everything developers could want or need is at their fingertips.

    But as the analyst firm RedMonk notes, while developers have access to an exponential amount of technology and DevOps tooling—which has produced a large degree of innovation and competition—they’re on their own figuring out how it all works together. This has led to a fragmented DevEx. It also puts pressure on developers to constantly learn about the latest products (or even just how to connect to the newest API).

    “We need a holistic view of what makes up developers’ workflow,” GitHub’s Kalliamvakou says. “And once we have that, we need to make sure that the experience is collaborative and smooth every step of the way.”

    Why is developer experience important?

    In short, a good DevEx is important because it enables developers to build with more confidence, drive greater impact, and feel satisfied.

    Greg Mondello, director of product at GitHub, says it’s no surprise that DevEx has seen a significant increase in investment over the past five years.

    “In most contexts, software development capacity is the limiting factor for innovation,” he says. “Therefore, improvements to the effectiveness of software development are inherently valuable.”

    Moreover, development is only becoming more complex. Building software today involves many tools, technologies, and services across different providers, which requires developers to manage far more intricate environments.

    At its best, a well-conceived DevEx provides greater consistency across environments, processes, and workflows, while automating the more tedious and manual processes.

    “This enables companies with better DevEx to outperform their competitors, regardless of vertical,” Mondello says.

    The research backs this up.

    According to a report from McKinsey, a better DevEx can lead to extensive benefits for organizations, such as improved employee attraction and retention, enhanced security, and increased developer productivity. As such, DevEx is important for all companies—and not just tech.

    “It doesn’t matter what industry you’re part of or what geography you’re in,” Mondello says. “With better DevEx, you’ll have better business results.”

    And the importance of DevEx will only continue to grow.

    According to a Forrester opportunity snapshot, teams can reduce time to market and grow revenue by creating an easier way for developers to write code, build software, and ship updates to customers. As a result of improving DevEx:

    • 74% of survey respondents said they can drive developer productivity
    • 77% can shorten time to market
    • 85% can impact revenue growth
    • 75% can better attract and retain customers
    • 82% can increase customer satisfaction

    “I find it fascinating how anxious people get sitting at a stoplight,” GitHub’s Carter says. “They’re not there for very long. Yes, it’s a psychological thing that humans don’t like to wait.”

    The same goes for building software.

    “Great DevEx shortens the distance between intention and reality,” he says.

    What makes a good developer experience?

    A good DevEx is where developers “have the info they need and can pivot between focus and collaboration,” Kalliamvakou says. “They can complete tasks with minimal delay.”

    Low friction is important.

    “Or ideally, no friction at all,” she notes.

    Developers experience many types of friction during their end-to-end workflow, especially if they’re using multiple tools. From meetings to requests to many other types of disruptions, developers often have to piece together context from fragmented, out-of-date sources, which hinders their ability to be productive and write high-quality code.

    In the end, collaboration is king.

    “Without collaboration, a good DevEx isn’t possible,” Kalliamvakou says.

    What are key developer experience metrics?

    Unfortunately, there are currently no standardized industry metrics to measure DevEx. However, Mondello says the DevOps Research and Assessment (DORA) framework, which measures an organization’s DevOps performance, can be helpful. Key metrics include:

    • Deployment frequency (DF): how frequently an organization releases new software
    • Lead time for changes (LT): the time taken from when a change is requested or initiated to when it is deployed
    • Mean time to recovery (MTTR): the average time it takes to recover from a failure
    • Change failure rate (CFR): the percentage of changes that result in a failure
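    The four DORA metrics above can be computed from a team's own deployment records. Here's a minimal sketch under assumed inputs: the record format, field names, and dates below are hypothetical, and a real pipeline would pull this data from your deployment tooling.

    ```python
    from datetime import datetime, timedelta

    # Hypothetical deployment log: when a change was requested, when it shipped,
    # whether it failed in production, and how long recovery took if so.
    deployments = [
        {"requested": datetime(2023, 6, 1, 9),  "deployed": datetime(2023, 6, 1, 15), "failed": False, "recovery": None},
        {"requested": datetime(2023, 6, 2, 10), "deployed": datetime(2023, 6, 3, 10), "failed": True,  "recovery": timedelta(hours=2)},
        {"requested": datetime(2023, 6, 5, 8),  "deployed": datetime(2023, 6, 5, 12), "failed": False, "recovery": None},
        {"requested": datetime(2023, 6, 7, 9),  "deployed": datetime(2023, 6, 8, 9),  "failed": True,  "recovery": timedelta(hours=1)},
    ]

    def dora_metrics(deploys, window_days):
        """Compute DF, LT, MTTR, and CFR over a reporting window."""
        failures = [d for d in deploys if d["failed"]]
        lead_times = [d["deployed"] - d["requested"] for d in deploys]
        mttr_hours = (
            sum(f["recovery"].total_seconds() for f in failures) / len(failures) / 3600
            if failures else 0.0
        )
        return {
            "deployment_frequency_per_day": len(deploys) / window_days,
            "lead_time_hours": sum(lt.total_seconds() for lt in lead_times) / len(deploys) / 3600,
            "mttr_hours": mttr_hours,
            "change_failure_rate": len(failures) / len(deploys),
        }

    m = dora_metrics(deployments, window_days=7)
    print(m)
    ```

    With the sample log above, the team ships about four times a week with an average lead time of 14.5 hours, recovers in 1.5 hours on average, and sees half of its changes fail, a change failure rate that would immediately flag where to focus.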

    However, Carter thinks good metrics go beyond DORA. For instance, he believes a great DevEx metric is the time to first contribution for a new hire. A short time signifies that the new developer got all the context they needed and feels empowered by creating value, which is what DevEx is all about.

    “No amount of morale boosting or being friendly makes up for the fact that people want to feel valuable,” Carter says. “Happier developers is the goal. There’s nobody in the world who feels great about opening a pull request that sits in an approval queue for two days.”

    Likewise, Carter says customer response time is a good metric. A strong response time indicates that the team has what they need to move quickly, while feeling empowered and helpful.
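    Carter's time-to-first-contribution metric is easy to compute once you have two dates per developer: their start date and the date of their first merged contribution. The sketch below uses hypothetical names and dates; in practice you'd pull the second date from your version control history.

    ```python
    from datetime import date

    # Hypothetical records: (start date, date of first merged contribution).
    new_hires = {
        "dev_a": (date(2023, 5, 1), date(2023, 5, 4)),
        "dev_b": (date(2023, 5, 8), date(2023, 5, 22)),
    }

    def time_to_first_contribution(hires):
        """Days from start date to first merged contribution, per developer."""
        return {
            name: (first_merge - hired).days
            for name, (hired, first_merge) in hires.items()
        }

    ttfc = time_to_first_contribution(new_hires)
    print(ttfc)
    ```

    Tracked over successive cohorts, a shrinking median here is a direct signal that onboarding docs, environments, and review processes are improving.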

    “The more we can treat developer happiness as a goal, and measure thoughtful signals to make sure we’re doing that, the better,” he says. “This requires addressing culture, tooling, and policies to make sure teams have clarity and autonomy.”

    Kalliamvakou notes that measuring DevEx underscores the need to continually check in with developers and see how they’re feeling. While organizations already know how to capture system performance data to gauge how efficient processes are, most don’t collect developers’ opinions on the systems they’re using.

    “How can we improve developers’ experiences without checking in with developers about what their experience is?” she asks.

    Kalliamvakou says that running periodic surveys is critical. These surveys need to capture developers’ satisfaction—or dissatisfaction—with systems and what it’s like to work with them daily. “Without these surveys, even the most sophisticated telemetry is incomplete and potentially misleading,” she says.

    Kalliamvakou also warns that this work is not optional. “Organizations that are not surveying their developers on productivity, ease of work, etc. will fall behind,” she says.

    What are ways to improve developer experience?

    Companies and development teams should improve their DevEx the same way they improve other product spaces—by using a strategy that includes research, discovery, user testing, and other key design components.

    “Here at GitHub, we are constantly striving to reduce the time it takes for our developers to execute their workflows,” Mondello says, mentioning how GitHub’s invention of the pull request was a pivotal moment in DevEx history. “This means finding ways to make the build process more efficient, optimizing deployment, and tuning tests to execute more effectively.”

    He adds that GitHub plays a leading role in the DevEx space: GitHub is a collaboration company first and foremost, and collaboration is essential to DevEx, especially in the age of AI. As time goes on, collaboration will only become more important, since it’s the only way to ensure that AI-generated code is solid.

    “If you improve your collaboration, you’ll inevitably improve your DevEx,” Mondello says.

    Kalliamvakou also notes that organizations need to understand their current DevEx and the most critical friction points.

    “Is the documentation scattered and do developers have to spend precious energy to understand context?” she asks. “Do the build systems take a long time? Or are they flaky, leaving your developers frustrated by the delays and inconsistent behavior? Worst of all, are your developers unable to focus?”

    Once an organization has done the work to identify friction, it needs to simplify, accelerate, or optimize existing systems and processes.

    “Careful though!” Kalliamvakou says. “Any change will involve tradeoffs, so companies need to monitor if DevEx is actually improving or if friction is actually introduced by an intervention.”

    Cutting down on the number of meetings, for instance, can seem like a great idea for leveling interruptions. But if developers start reporting that their collaboration is poor, you may end up in a worse place than when you started.

    “It’s a lot of work to approach DevEx holistically and effectively,” Kalliamvakou says. This is why many organizations create DevEx teams that are dedicated to understanding, improving, and monitoring it.

    What role does generative AI play in developer experience?

    There is no doubt that generative AI is the future of DevEx, as it enables developers to write high-quality code faster.

    “As models get better and more functionality is built around how developers work, we can expect AI to suggest whole workflows,” Kalliamvakou says, in addition to the code and pull request suggestions that they already provide. “AI could remove major disruptions, delays, and cognitive load that developers previously had to endure.”

    Mondello agrees.

    “Generative AI will unlock the potential for developers to leapfrog large amounts of the software development process,” he says. “Instead of merely focusing on eliminating toil or friction, DevEx will focus on finding ways to enable developers to make large strides in their development workflows.”

    However, with the enablement of faster code, companies will also need to determine ways to speed up their build and test processes and improve their overall pipelines to production.

    Mondello points to the impact that’s being made by GitHub’s generative AI product, GitHub Copilot.

    “We will build upon our success with GitHub Copilot as we shape GitHub Copilot X and bring generative AI to the entire software development lifecycle,” he says.

    The bottom line

    In today’s engineering environments, DevEx is one of the most important aspects to innovating quickly and achieving business goals. Developer happiness and empowerment are critical for software success, regardless of industry or niche—and will only continue to become more important over time.

    Learn more about what developers want in our developer experience survey >
