The latest on open source maintainers - The GitHub Blog
https://github.blog/open-source/maintainers/
Updates, ideas, and inspiration from GitHub to help developers build and design software.

What to expect for open source in 2026
https://github.blog/open-source/maintainers/what-to-expect-for-open-source-in-2026/
Wed, 18 Feb 2026 18:41:42 +0000
Let’s dig into 2025’s open source data on GitHub to see what we can learn about the future.

The post What to expect for open source in 2026 appeared first on The GitHub Blog.


Over the years, and decades, open source has grown and changed along with software development, evolving as its community becomes more global.

But with any growth comes pain points. For open source to continue to thrive, we need to be aware of these challenges and determine how to overcome them.

To that end, let’s take a look at what Octoverse 2025 reveals about the direction open source is taking. Feel free to check out the full Octoverse report, and make your own predictions.

Growth that’s global in scope

In 2025, GitHub saw about 36 million new developers join our community. While that number alone is huge, it’s also important to see where in the world that growth comes from. India added 5.2 million developers, and there was significant growth across Brazil, Indonesia, Japan, and Germany. 

What does this mean? It’s clear that open source is becoming more global. It also means that, for a growing number of projects, the majority of developers live outside the regions where those projects originated. This is a fundamental shift. While there have always been projects with global contributors, it’s now becoming a reality for far more of them.

Given this global scale, open source can’t rely on contributors sharing work hours, communication strategies, cultural expectations, or even language. The projects that are going to thrive are the ones that support the global community.

One of the best ways to do this is through explicit communication maintained in areas like contribution guidelines, codes of conduct, review expectations, and governance documentation. These are essential infrastructure for large projects that want to support this community. Projects that don’t include these guidelines will have trouble scaling as the number of contributors increases across the globe. Those that do provide them will be more resilient, sustainable, and will provide an easier path to onboard new contributors.
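This kind of explicit documentation often starts small. As a sketch (the section headings and file names below are common conventions, not GitHub requirements), a minimal CONTRIBUTING.md might look like:

```markdown
# Contributing

## Before you start
- Search existing issues first; open an issue to discuss non-trivial changes.

## Review expectations
- Reviews happen asynchronously across time zones; please allow a few days.

## Code of conduct
- All participation is governed by CODE_OF_CONDUCT.md.

## Governance
- GOVERNANCE.md describes how maintainers are added and decisions are made.
```

Even a skeleton like this gives globally distributed contributors explicit expectations instead of implied ones.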

The double-edged sword of AI

AI has had a major role in accelerating global participation over 2025. It’s created a pathway that makes it easier for new developers to enter the coding world by dramatically lowering the barrier to entry. It helps contributors understand unfamiliar codebases, draft patches, and even create new projects from scratch. Ultimately, it has helped new developers make their first contributions sooner.

However, it has also created a lot of noise, often called “AI slop”: a large quantity of low-quality—and oftentimes inaccurate—contributions that don’t add value to the project, or that would require so much work to incorporate that it would be faster for a maintainer to implement the solution themselves.

This makes it harder than ever to maintain projects and keep them moving in the intended direction. Auto-generated issues and pull requests increase volume without necessarily increasing quality. As a result, maintainers need to spend more time reviewing contributions from developers with widely varying skill levels. In many cases, the time it takes to review these additional suggestions has grown faster than the number of maintainers.

Even if you remove AI slop from the equation, the sheer volume of contributions has grown, potentially to unmanageable levels. It can feel like a denial of service attack on human attention.

This is why maintainers have been asking: how do you sift through the noise and find the most important contributions? Luckily, we’ve added some tools to help, and a number of open source AI projects are specifically trying to address the AI slop issue. Maintainers have also been using AI defensively: triaging issues, detecting duplicate issues, and handling simple maintenance like labeling. By offloading some of the grunt work, AI gives maintainers more time to focus on the issues that require human judgment and decision making.
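To make the defensive-triage idea concrete, here is a minimal sketch of duplicate-issue detection. It uses a plain string-similarity ratio rather than the embedding models real triage bots typically use, and the issue titles and threshold are hypothetical:

```python
from difflib import SequenceMatcher

def likely_duplicates(new_title, existing_titles, threshold=0.8):
    """Return (title, similarity) pairs for existing issues whose titles
    closely match a new issue's title, most similar first.

    A string ratio is a crude stand-in for embedding similarity, but it
    illustrates how a triage bot can flag probable duplicates for review.
    """
    matches = []
    for title in existing_titles:
        ratio = SequenceMatcher(None, new_title.lower(), title.lower()).ratio()
        if ratio >= threshold:
            matches.append((title, round(ratio, 2)))
    return sorted(matches, key=lambda m: m[1], reverse=True)

# Hypothetical issue titles, for illustration only
existing = [
    "Crash when opening settings on Windows",
    "Add dark mode support",
]
print(likely_duplicates("Crash when opening Settings on windows", existing))
# [('Crash when opening settings on Windows', 1.0)]
```

A bot built on this pattern would comment on (or label) the new issue rather than close it, leaving the final call to a human maintainer.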

Expect the open source projects that continue to expand and grow over the next year to be those that incorporate AI as part of their community infrastructure. To deal with this quantity of information, AI can’t be just a coding assistant. It needs to ease the pressure on maintainers and make their work more scalable.

Record growth is healthy, if it’s planned for

On the surface, record global growth looks like success. But this influx of newer developers can also be a burden. The sheer popularity of projects that cover the basics, such as contributing your first pull request on GitHub, shows that many of these new developers are still early in their open source journey. There’s uncertainty about how to move forward and how to interact with the community, not to mention the challenges of repetitive onboarding questions and duplicate issues.

This results in a growing gap between the number of participants in open source projects and the number of maintainers with a sense of ownership. As new developers grow at record rates, this gap will widen.

The way to address this is going to be less about having individuals serving as mentors—although that will still be important. It will be more about creating durable systems that show organizational maturity. What does this mean? While not an exhaustive list, here are some items:

  • Having a clear, defined path to move from contributor to reviewer to maintainer. Be aware that this can be difficult without a mentor to help guide contributors along this path.
  • Shared governance models that don’t rely on a single timezone or small group of people.
  • Documentation that provides guidance on how to contribute and the goals of the project.

By helping to make sure that the number of maintainers keeps relative pace with the number of contributors, projects will be able to take advantage of the record growth. This does create an additional burden on the current maintainers, but the goal is to invest in a solid foundation that will result in a more stable structure in the future. Projects that don’t do this will have trouble functioning at the increased global scale and might start to stall or see problems like increased technical debt.

But what are people building?

It can’t be denied that AI was a major focus—about 60% of the top growing projects were AI focused. However, there were several that had nothing to do with AI. These projects (e.g., Home Assistant, VS Code, Godot) continue to thrive because they meet real needs and support broad, international communities.

A list of the fastest-growing open source projects by contribution: zen-browser/desktop, cline/cline, vllm-project/vllm, astral-sh/uv, microsoft/vscode, infiniflow/ragflow, sgl-project/sglang, continuedev/continue, comfyanonymous/ComfyUI, and home-assistant/core.

Just as the developer base is growing on a global scale, so is interest in these projects. Projects that support a global community and address its needs are going to remain popular and attract the most support.

This reinforces that open source is becoming a global phenomenon rather than a local one.

What this year will likely hold

Open source in 2026 won’t be defined by a single trend that emerged over 2025. Instead, it will be shaped by how the community responds to the pressures identified over the last year, particularly with the surge in AI and an explosively growing global community.

For developers, this means it’s important to invest in processes as much as code. Open source is scaling in ways that would have been impossible to imagine a decade ago, and the important question going forward isn’t how much it will grow—it’s how you can make that growth sustainable.

Read the full Octoverse report >

Securing the AI software supply chain: Security results across 67 open source projects
https://github.blog/open-source/maintainers/securing-the-ai-software-supply-chain-security-results-across-67-open-source-projects/
Tue, 17 Feb 2026 19:00:00 +0000
Learn how The GitHub Secure Open Source Fund helped 67 critical AI‑stack projects accelerate fixes, strengthen ecosystems, and advance open source resilience.

The post Securing the AI software supply chain: Security results across 67 open source projects appeared first on The GitHub Blog.


Modern software is built on open source projects. In fact, you can trace almost any production system today, including AI, mobile, cloud, and embedded workloads, back to open source components. These components are the invisible infrastructure of software: the download that always works, the library you never question, the build step you haven’t thought about in years, if ever.

A few examples:

  • curl moves data for billions of systems, from package managers to CI pipelines.
  • Python, pandas, and SciPy sit underneath everything from LLM research to ETL workflows and model evaluation.
  • Node.js, LLVM, and Jenkins shape how software is compiled, tested, and shipped across industries.

When these projects are secure, teams can adopt automation, AI‑enhanced tooling, and faster release cycles without adding risk or slowing down development. When they aren’t, the blast radius crosses project boundaries, propagating through registries, clouds, transitive dependencies, and production systems, including AI systems, that react far faster than traditional workflows.

Securing this layer is not only about preventing incidents; it’s about giving developers confidence that the systems they depend on—whether for model training, CI/CD, or core runtime behavior—are operating on hardened, trustworthy foundations. Open source is shared industrial infrastructure that deserves real investment and measurable outcomes.

That is the mission of the GitHub Secure Open Source Fund: to secure open source projects that underpin the digital supply chain, catalyze innovation, and are critical to the modern AI stack. 

We do this by directly linking funding to verified security outcomes and by giving maintainers resources, hands‑on security training, and a security community where they can raise their highest‑risk concerns and get expert feedback. 

Why securing critical open source projects matters 

A single production service can depend on hundreds or even thousands of transitive dependencies. As Log4Shell demonstrated, when one widely used project is compromised, the impact is rarely confined to a single application or company.

Investing in the security of widely used open source projects does three things at once:

  • It reinforces that security is a baseline requirement for modern software, not optional labor.
  • It gives maintainers time, resources, and support to perform proactive security work.
  • It reduces systemic risk across the global software supply chain.

This security work benefits everyone who writes, ships, or operates code, even if they never interact directly with the projects involved. That gap is exactly what the GitHub Secure Open Source Fund was built to close. In Sessions 1 and 2, 71 projects made significant security improvements. In Session 3, 67 open source projects delivered concrete security improvements to reduce systemic risk across the software supply chain.


Session 3, by the numbers

  • 67 projects
  • 98 maintainers
  • $670,000 in non-dilutive funding powered by GitHub Sponsors
  • 99% of projects completed the program with core GitHub security features enabled

Real security results across all sessions:

  • 138 projects
  • 219 maintainers
  • 38 countries represented by participating projects
  • $1.38M in non-dilutive funding powered by GitHub Sponsors
  • 191 new CVEs issued
  • 250+ new secrets prevented from being leaked
  • 600+ leaked secrets detected and resolved
  • Billions of monthly downloads powered by alumni projects

Plus, in just the last 6 months:

  • 500+ CodeQL alerts fixed
  • 66 secrets blocked

Where security work happened in Session 3

Session 3 focused on improving security across the systems developers rely on every day. The projects below are grouped by the role they play in the software ecosystem.

Core programming languages and runtimes 🤖

CPython • Himmelblau • LLVM • Node.js • Rustls

These projects define how software is written and executed. Improvements here flow downstream to entire ecosystems.

This group includes CPython, Node.js, LLVM, Rustls, and related tooling that shapes compilation, execution, and cryptography at scale.

Quote from Node.js: “GitHub SOSF trailblazed critical security knowledge for Open Source in the AI era.”

For example, improvements to CPython directly benefit millions of developers who rely on Python for application development, automation, and AI workloads. LLVM maintainers identified security improvements that complement existing investments and reduce risk across toolchains used throughout the industry.

When language runtimes improve their security posture, everything built on top of them inherits that resilience.

Quote from Python: “This program made it possible to enhance Python's security, directly benefitting millions of developers.”

Web, networking, and core infrastructure libraries 📚

Apache APISIX • curl • evcc • kgateway • Netty • quic-go • urllib3 • Vapor

These projects form the connective tissue of the internet. They handle HTTP, TLS, APIs, and network communication that nearly every application depends on.

This group includes curl, urllib3, Netty, Apache APISIX, quic-go, and related libraries that sit on the hot path of modern software.

Quote from curl: “The program brings together security best practices in a concise, actionable form to give us assurance we're on the right track.”

Build systems, CI/CD, and release tooling 🧰

Apache Airflow • Babel • Foundry • Gitoxide • GoReleaser • Jenkins • Jupyter Docker Stacks • node-lru-cache • oapi-codegen • PyPI / Warehouse • rimraf • webpack

Compromising build tooling compromises the entire supply chain. These projects influence how software is built, tested, packaged, and shipped.

Session 3 included projects such as Jenkins, Apache Airflow, GoReleaser, PyPI Warehouse, webpack, and related automation and release infrastructure.

Maintainers in this category focused on securing workflows that often run with elevated privileges and broad access. Improvements here help prevent tampering before software ever reaches users.

Quote from webpack: “We've greatly enhanced our security to protect web applications against threats.”

Data science, scientific computing, and AI foundations 📊

ACI.dev • ArviZ • CocoIndex • OpenBB Platform • OpenMetadata • OpenSearch • pandas • PyMC • SciPy • TraceRoot

These projects sit at the core of modern data analysis, research, and AI development. They are increasingly embedded in production systems as well as research pipelines.

Projects such as pandas, SciPy, PyMC, ArviZ, and OpenSearch participated in Session 3. Maintainers expanded security coverage across large and complex codebases, often moving from limited scanning to continuous checks on every commit and release.

Many of these projects also engaged deeply with AI-related security topics, reflecting their growing role in AI workflows.

Quote from SciPy: “The program took us from 0 to security scans on every line of code, on every commit, and on every release.”

Developer tools and productivity utilities ⚒️

AssertJ • ArduPilot • AsyncAPI Initiative • Bevy • calibre • DIGIT • fabric.js • ImageMagick • jQuery • jsoup • Mastodon • Mermaid • Mockoon • p5.js • python-benedict • React Starter Kit • Selenium • Sphinx • Spyder • ssh_config • Thunderbird for Android • Two.js • xyflow • Yii framework

These projects shape the day-to-day experience of writing, testing, and maintaining software.

The group includes tools such as Selenium, Sphinx, ImageMagick, calibre, Spyder, and other widely used utilities that appear throughout development and testing environments.

Improving security here reduces the risk that developer tooling becomes an unexpected attack vector, especially in automated or shared environments.

Quote from Mermaid: “We're not just well equipped for security; we're equipped to lift others up with the same knowledge.”

Identity, secrets, and security frameworks 🔒

external-secrets • Helmet.js • Keycloak • Keyshade • Oauth2 (Ruby) • varlock • WebAuthn (Go)

These projects form the backbone of authentication, authorization, secrets management, and secure configuration.

Session 3 participants included projects such as Keycloak, external-secrets, oauth2 libraries, WebAuthn tooling, and related security frameworks.

Maintainers in this group often reported shifting from reactive fixes to systematic threat modeling and long-term security planning, improving trust for every system that depends on them.

Quote from Keyshade: “The GitHub SOSF was invaluable, helping us strengthen our security approach and making us more confident and effective organization-wide.”

Security as shared infrastructure

One of the most durable outcomes of the program was a shift in mindset.

Maintainers moved security from a stretch goal to a core requirement. They shifted from reactive patching to proactive design, and from isolated work to shared practice. Many are now publishing playbooks, sharing incident response exercises, and passing lessons on to their contributor communities.

That is how security scales: one-to-many.

What’s next: Help us make open source more secure 

Securing open source is basic maintenance for the internet. By giving 67 heavily used projects real funding, three focused weeks, and direct help, we watched maintainers ship fixes that now protect millions of builds a day. This training, taught by the GitHub Security Lab and top cybersecurity experts, allows us to go beyond one-on-one education and enable one-to-many impact. 

For example, many maintainers are working to make their playbooks public. The incident-response plans they rehearsed are forkable. The signed releases they now ship flow downstream to every package manager and CI pipeline that depends on them.

Join us in this mission to secure the software supply chain at scale. 

  • Projects and maintainers: Apply now to the GitHub Secure Open Source Fund and help make open source safer for everyone. Session 4 begins April 2026. If you write code, rely on open source, or want the systems you depend on to remain trustworthy, we encourage you to apply.
  • Funding and Ecosystem Partners: Become a Funding or Ecosystem Partner and support a more secure open source future. Join us on this mission to secure the software supply chain at scale!

Thank you to all of our partners

We couldn’t do this without our incredible network of partners. Together, we are helping secure the open source ecosystem for everyone! 

Funding Partners: Alfred P. Sloan Foundation, American Express, Chainguard, Datadog, Herodevs, Kraken, Mayfield, Microsoft, Shopify, Stripe, Superbloom, Vercel, Zerodha, 1Password


Ecosystem Partners: Atlantic Council, Ecosyste.ms, CURIOSS, Digital Data Design Institute Lab for Innovation Science, Digital Infrastructure Insights Fund, Microsoft for Startups, Mozilla, OpenForum Europe, Open Source Collective, OpenUK, Open Technology Fund, OpenSSF, Open Source Initiative, OpenJS Foundation, University of California, OWASP, Santa Cruz OSPO, Sovereign Tech Agency, SustainOSS


Welcome to the Eternal September of open source. Here’s what we plan to do for maintainers.
https://github.blog/open-source/maintainers/welcome-to-the-eternal-september-of-open-source-heres-what-we-plan-to-do-for-maintainers/
Thu, 12 Feb 2026 20:14:11 +0000
Open source is hitting an “Eternal September.” As contribution friction drops, maintainers are adapting with new trust signals, triage approaches, and community-led solutions.

The post Welcome to the Eternal September of open source. Here’s what we plan to do for maintainers. appeared first on The GitHub Blog.


Open collaboration runs on trust. For a long time, that trust was protected by a natural, if imperfect, filter: friction.

If you were on Usenet in 1993, you’ll remember that every September a flood of new university students would arrive online, unfamiliar with the norms, and the community would patiently onboard them. Then mainstream dial-up ISPs became popular and a continuous influx of new users came online. It became the September that never ended.

Today, open source is experiencing its own Eternal September. This time, it’s not just new users. It’s the sheer volume of contributions.

When the cost to contribute drops

In the era of mailing lists, contributing to open source required real effort. You had to subscribe, lurk, understand the culture, format a patch correctly, and explain why it mattered. The effort didn’t guarantee quality, but it filtered for engagement. Most contributions came from someone who had genuinely engaged with the project.

It also excluded people. The barrier to entry was high. Many projects worked hard to lower it in order to make open source more welcoming.

A major shift came with the pull request. Hosting projects on GitHub, using pull requests, and labeling “Good First Issues” reduced the friction needed to contribute. Communities grew and contributions became more accessible.

That was a good thing.

But friction is a balancing act. Too much keeps people and their ideas out; too little can strain the trust open source depends on.

Today, a pull request can be generated in seconds. Generative AI makes it easy for people to produce code, issues, or security reports at scale. The cost to create has dropped, but the cost to review has not.

It’s worth saying: most contributors are acting in good faith. Many want to help projects they care about. Others are motivated by learning, visibility, or the career benefits of contributing to widely used open source. Those incentives aren’t new and they aren’t wrong.

The challenge is what happens when low-quality contributions arrive at scale. When volume accelerates faster than review capacity, even well-intentioned submissions can overwhelm maintainers. And when that happens, trust, the foundation of open collaboration, starts to strain.

The new scale of noise

It is tempting to frame “low-quality contributions” or “AI slop” contributions as a unique recent phenomenon. It isn’t. Maintainers have always dealt with noisy inbound.

  • The Linux kernel operates under a “web of trust” philosophy and formalized its SubmittingPatches guide and introduced the Developer Certificate of Origin (DCO) in 2004 for a reason.
  • Mozilla and GNOME built formal triage systems around the reality that most incoming bug reports needed filtering before maintainers invested deeper time.
  • Automated scanners: Long before GenAI, maintainers dealt with waves of automated security and code quality reports from commercial and open source scanning tools.

The question from maintainers has often been the same: “Are you really trying to help me, or just help yourself?”

Just because a tool—whether a static analyzer or an LLM—makes it easy to generate a report or a fix, it doesn’t mean that contribution is valuable to the project. The ease of creation often shifts a burden onto the maintainer because the benefits are imbalanced: the contributor may get the credit (or the CVE, or the visibility), while the maintainer gets the maintenance burden.

Maintainers are feeling that directly. For example:

  • curl ended its bug bounty program after AI-generated security reports exploded, each taking hours to validate.
  • Projects like Ghostty are moving to invitation-only contribution models, requiring discussion before accepting code contributions.
  • Multiple projects are adopting explicit rules about AI-generated contributions.

These are rational responses to an imbalance.

What we’re doing at GitHub

At GitHub, we aren’t just watching this happen. Maintainer sustainability is foundational to open source, and foundational to us. As the home of open source, we have a responsibility to help you manage what comes through the door.

We are approaching this from multiple angles: shipping immediate relief now, while building toward longer-term, systemic improvements. Some of this is about tooling. Some is about creating clearer signals so maintainers can decide where to spend their limited time.

Features we’ve already shipped

  • Repo-level pull request controls: Gives maintainers the option to limit pull request creation to collaborators or disable pull requests entirely. While the introduction of the pull request was fundamental to the growth of open source, maintainers should have the tools they need to manage their projects.
  • Pinned comments on issues: You can now pin a comment to the top of an issue from the comment menu.
  • Banners to reduce comment noise: Experience fewer unnecessary notifications with a banner that encourages people to react or subscribe instead of leaving noise like “+1” or “same here.”
  • Pull request performance improvements: Pull request diffs have been optimized for greater responsiveness and large pull requests in the new files changed experience respond up to 67% faster.
  • Faster issue navigation: Easier bug triage thanks to significantly improved speeds when browsing and navigating issues as a maintainer.
  • Temporary interaction limits: You can temporarily enforce a period of limited activity for certain users on a public repository.

Plus, coming soon: pull request deletion from the UI. This will let maintainers remove spam or abusive pull requests so repositories stay manageable.

These improvements focus on reducing review overhead.

Exploring next steps

We know that walls don’t build communities. As we explore next steps, our focus is on giving maintainers more control while helping protect what makes open source communities work.

Some of the directions we’re exploring in consultation with maintainers include:

  • Criteria-based gating: Requiring a linked issue before a pull request can be opened, or defining rules that contributions must meet before submission.
  • Improved triage tools: Potentially leveraging automated triage to evaluate contributions against a project’s own guidelines (like CONTRIBUTING.md) and surface which pull requests should get your attention first.
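The criteria-based gating idea above can be sketched in a few lines. This is a hypothetical policy check, not a built-in GitHub feature: it simply looks for GitHub’s closing keywords (“Fixes #123”, “closes owner/repo#45”) in a pull request body.

```python
import re

# GitHub's closing keywords followed by an issue reference. Treating
# their presence as a contribution gate is an illustrative policy only.
ISSUE_REF = re.compile(
    r"\b(?:close[sd]?|fix(?:es|ed)?|resolve[sd]?)\s+(?:[\w.-]+/[\w.-]+)?#\d+",
    re.IGNORECASE,
)

def passes_gate(pr_body: str) -> bool:
    """Return True if the pull request body links an issue."""
    return bool(ISSUE_REF.search(pr_body or ""))

print(passes_gate("Fixes #123: handle empty config files"))  # True
print(passes_gate("General refactoring"))                    # False
```

In practice a check like this might run as a CI step or bot that asks the author to link an issue, rather than silently rejecting the pull request.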

These tools are meant to support decision-making, not replace it. Maintainers should always remain in control.

We are also aware of tradeoffs. Restrictions can disproportionately affect first-time contributors acting in good faith. That’s why these controls are optional and configurable.

The community is building ladders

One of the things I love most about open source is that when the community hits a wall, people build ladders. We’re seeing a lot of that right now.

Maintainers across the ecosystem are experimenting with different approaches. Some projects have moved to invitation-only workflows. Others are building custom GitHub Actions for contributor triage and reputation scoring.

Mitchell Hashimoto’s Vouch project is an interesting example. It implements an explicit trust management system where contributors must be vouched for by trusted maintainers before they can participate. It’s experimental and some aspects will be debated, but it fits a longer lineage, from Advogato’s trust metric to Drupal’s credit system to the Linux kernel’s Signed-off-by chain.

At the same time, many communities are investing heavily in education and onboarding to widen who can contribute while setting clearer expectations. The Python community, for example, emphasizes contributor guides, mentorship, and clearly labeled entry points. Kubernetes pairs strong governance with extensive documentation and contributor education, helping new contributors understand not just how to contribute, but what a useful contribution looks like.

These approaches aren’t mutually exclusive. Education helps good-faith contributors succeed. Guardrails help maintainers manage scale.

There is no single correct solution. That’s why we are excited to see maintainers building tools that match their project’s specific values. The tools communities build around the platform often become the proving ground for what might eventually become features. So we’re paying close attention.

Building community, not just walls

We also need to talk about incentives. If we only build blocks and bans, we create a fortress, not a bazaar.

Right now, the concept of “contribution” on GitHub still leans heavily toward code authorship. WordPress, by contrast, uses manually written “props”: credit given not just for code, but for writing, reproduction steps, user testing, and community support. It recognizes the many forms of contribution that move a project forward.

We want to explore how GitHub can better surface and celebrate those contributions. Someone who has consistently triaged issues or merged documentation PRs has proven they understand your project’s voice. These are trust signals we should be surfacing to help you make decisions faster.

Tell us what you need

We’ve opened a community discussion to gather feedback on the directions we’re exploring: Exploring Solutions to Tackle Low-Quality Contributions on GitHub.

We want to hear from you. Share what is working for your projects, where the gaps are, and what would meaningfully improve your experience maintaining open source.

Open source’s Eternal September is a sign of something worth celebrating: more people want to participate than ever before. The volume of contributions is only going to grow — and that’s a good thing. But just as the early internet evolved its norms and tools to sustain community at scale, open source needs to do the same. Not by raising the drawbridge, but by giving maintainers better signals, better tools, and better ways to channel all that energy into work that moves their projects forward.

Let’s build that together.

5 podcast episodes to help you build with confidence in 2026
https://github.blog/open-source/maintainers/5-podcast-episodes-to-help-you-build-with-confidence-in-2026/
Tue, 23 Dec 2025 00:15:00 +0000
Looking ahead to the New Year? These GitHub Podcast episodes help you cut through the noise and build with more confidence across AI, open source, and developer tools.

The post 5 podcast episodes to help you build with confidence in 2026 appeared first on The GitHub Blog.

]]>

The end of the year creates a rare kind of quiet. It is the kind that makes it easier to reflect on how you have been building, what you have been learning, and what you want to do differently next year. It is also the perfect moment to catch up on the mountain of browser tabs you’ve kept open and podcast episodes you’ve bookmarked. Speaking of podcasts, we have one! (Wow, smooth transition, Cassidy).

If you’re looking to level-up your thinking around AI, open source software sustainability, and the future of software, we have some great conversations you can take on the road with you. 

This year on the GitHub Podcast, we talked with maintainers, educators, data experts, and builders across the open source ecosystem. These conversations were not just about trends or tools. They offered practical ways to think more clearly about where software is headed and how to grow alongside it. If 2026 is about building with more intention, these five episodes are a great place to start.

Understand where AI tooling is actually heading

If this year left you feeling overwhelmed by the pace of change in AI tooling, you are not alone. New models, new agents, and new workflows seemed to appear every week, often without much clarity on how they fit together or which ones would actually last.

Our Unlocking the power of MCP episode slows things down. It introduces the Model Context Protocol (MCP) as a way to make sense of that chaos, explaining how an open standard can help AI systems interact with tools in consistent and transparent ways. Rather than adding to the noise, the conversation gives you a clearer mental model for how modern AI tools are being built and why open standards matter for trust, interoperability, and long-term flexibility. Most importantly, MCP makes building better for everyone. Learn about how the standard works (and you can check out GitHub’s open sourced MCP server, too).

Ship smaller, smarter software—faster

Not every meaningful piece of software needs a pitch deck or a product roadmap. Building tools and the future of DIY development explores a growing shift toward personal, purpose-built tools. These are tools created to solve one specific problem well, often by the people who feel that pain most acutely. Developers and non-developers alike are empowered these days by open source and AI tools to build faster and with less mental overhead. It is a great reminder that modern tooling and AI have lowered the barrier to turning ideas into working software, without stripping away creativity or craftsmanship. After listening to this one, you might just pick up that unused domain name and make something! 😉

Understand what keeps open source sustainable

If you were around the tech scene in 2021, you probably remember the absolute chaos that came with the Log4Shell vulnerability that was exposed in November that year. That vulnerability (and others since then) put a spotlight on the world’s dependence on underfunded open source infrastructure. But money can’t solve all of the world’s problems, unfortunately. From Log4Shell to the Sovereign Tech Fund is a really interesting conversation about why success is not just about funding, but also community health, processes, and communication. By the end, you come away with a deeper appreciation for the invisible labor behind the tools you rely on, and a clearer sense of how individuals, companies, and governments can show up more responsibly.

2025 really has been the year of growth and change across projects. The Octoverse report analyzes the state of open source across 1.12 billion open source contributions, 518 million merged pull requests, 180 million developers… you get the idea, a lot of numbers and a lot of data. TypeScript’s Takeover, AI’s Lift-Off: Inside the 2025 Octoverse Report grounds the conversation in data from GitHub’s Octoverse report, turning billions of contributions into meaningful signals. The discussion helps connect trends like TypeScript’s rise, AI-assisted workflows, and even COBOL’s unexpected resurgence to real decisions developers face: what to learn next, where to invest time, and how to stay adaptable. Rather than predicting the future, it offers something more useful: a clearer picture of the present and how to navigate what comes next.

Understand what privacy-first software looks like in practice

As more everyday devices become connected, it is getting harder to tell where convenience ends and control begins. This episode offers a refreshing counterpoint. Recorded live at GitHub Universe 2025, the conversation with Franck “Frenck” Nijhof explores how Home Assistant has grown into one of the most active open source projects in the world by prioritizing local control, privacy, and long-term sustainability.

Listening to Privacy-First Smart Homes with Frenck from Home Assistant shifts how you think about automation and ownership. You hear how millions of households run smart homes without relying on the cloud, why the Open Home Foundation exists to fight vendor lock-in and e-waste, and how a welcoming community scaled to more than 21,000 contributors. The discussion also opens up what contribution can look like beyond writing code, showing how documentation, testing, and community support play a critical role. It is a reminder that building better technology often starts with clearer values and more inclusive ways to participate. Plus, you get to hear about the weird and wonderful ways people use Home Assistant to power their lives. 

Take this with you

As we look toward 2026, these episodes share a common thread. They encourage building with clarity, curiosity, and care for your tools, your community, and yourself. Whether you are listening while traveling, winding down for the year, or planning what you want to focus on next, we hope these conversations help you start the year feeling more grounded and inspired.

And if you speed through these episodes, don’t worry; we have so many more fantastic episodes from this season. You can listen to every episode of the GitHub Podcast wherever you get your podcasts, or watch them on YouTube. We are excited to see what you build in 2026.

Subscribe to the GitHub Podcast >

The post 5 podcast episodes to help you build with confidence in 2026 appeared first on The GitHub Blog.

]]>
93068
This year’s most influential open source projects https://github.blog/open-source/maintainers/this-years-most-influential-open-source-projects/ Mon, 22 Dec 2025 23:48:52 +0000 https://github.blog/?p=93051 From Appwrite to Zulip, Universe 2025’s Open Source Zone was stacked with standout projects showing just how far open source can go. Meet the maintainers—and if you want to join them in 2026, you can now apply for next year’s cohort.

The post This year’s most influential open source projects appeared first on The GitHub Blog.

]]>

From Appwrite to Zulip, the Open Source Zone at Universe 2025 was stacked with projects that pushed boundaries and turned heads. These twelve open source teams brought the creativity, the engineering craft, and the “I need to try that” demos that make Universe special. Here’s a closer look at what they showcased this year.

If you want to join them in 2026, applications for next year’s Open Source Zone are open now!

Appwrite: Backend made simple

Appwrite is an open source backend platform that helps developers build secure and scalable apps without boilerplate. With APIs for databases, authentication, storage, and more, it’s become a go-to foundation for web and mobile developers who want to ship faster.

Screenshot of Appwrite.

Origin story: Appwrite was created in 2019 by Eldad Fux as a side project, and it quickly grew from a weekend project to one of the fastest-growing developer platforms on GitHub, with over 50,000 stars and hundreds of contributors worldwide. 

Photo of Appwrite's @divanov11 and @stnguyen90 in the Open Source Zone.
Appwrite’s @divanov11 and @stnguyen90 give the Open Source Zone a 👍🏻.

GoReleaser: Effortless release automation for Go

GoReleaser automates packaging, publishing, and distributing Go projects so developers can ship faster with less stress. With strong support from its contributor base, it has become the go-to release engineering tool for Go maintainers who want to focus on building rather than busywork.

🚦 Go go go, GoReleaser: GoReleaser started life in 2015 as a simple release.sh script. Within a year, @caarlos0 rewrote it in Go with YAML configs during his holiday break—instead of, you know, actually taking a holiday. That rewrite became the foundation of what’s now a tool with over 15,000 stars and paying customers worldwide, GitHub included (it’s used for the GitHub CLI).

And can we all just take a minute to applaud the GoReleaser logo?!

A logo of a gopher on a rocket.

💡 Fun fact: one of my colleagues, @ashleymcnamara, has created a secession (that’s the word for a bunch of Gophers—I checked!) of iconic Gopher designs that have become part of Go’s visual culture. If you’ve seen a Gopher sticker at a conference, odds are it came from her repo. Watch out, Ashley. Looks like you have some competition.

Homebrew: The missing package manager for macOS

Speaking of great logos. Homebrew is the de facto package manager for macOS, beloved by developers for making it simple to install, manage, and update software from the command line. From data scientists to DevOps engineers, millions rely on Homebrew every day to bootstrap their environments, automate workflows, and keep projects running smoothly.

Thanks for having us! GitHub Universe was a great opportunity to re-energize by meeting users and fellow maintainers.

Issy Long, Senior Software Engineer & Homebrew Lead Maintainer
Photo of Homebrew at the GitHub Universe Open Source Zone.
Homebrew lead maintainers @p-linnane and @issyl0 were on hand to meet users and answer questions. Cheers! 🍻

Ladybird: A browser for the bold

Ladybird is an ambitious and independent open source browser being built from scratch with performance, security, and privacy in mind. What began as a humble HTML viewer is now evolving into one of the most exciting projects in the browser space, supported by a rapidly growing global community.

Ladybird publishes a monthly update showcasing bug fixes, performance improvements, and feature additions like variable font support and enhanced WebGL support.

💡 Did you know: Ladybird started life in 2018 as a tiny HTML viewer tucked inside the SerenityOS operating system. Fast-forward a few years and it’s grown up into a full-fledged, from-scratch browser with a buzzing open source community—1200 contributors and counting!

Moondream: Tiny AI, big vision

Moondream is an open source visual language model that brings visual intelligence to everyone. With a tiny 1 GB footprint and blazing performance, it runs anywhere from laptops to edge devices without the need for GPUs or complex infrastructure. Developers can caption images, detect objects, follow gaze, read documents, and more using natural language prompts. With more than 6 million downloads and thousands of GitHub stars, Moondream is trusted across industries from healthcare to robotics, making state-of-the-art vision AI as simple as writing a line of code.

Oh My Zsh: Supercharge your shell

Oh My Zsh is a community-driven framework that makes the Zsh shell stylish, powerful, and endlessly customizable. With hundreds of plugins and themes and millions of users, it is one of the most beloved ways to supercharge the command line.

People get really into customizing their prompts—myself included—but GitHub’s @cassidoo raised the bar with her blog post. Safe to say her prompt looks way cooler than mine. For now… 😈

Photo of Oh My Zsh at the GitHub Universe Open Source Zone.
Oh my gosh, it’s the Oh My Zsh creator @robbyrussell and maintainer @carlosala discussing why your shell deserves nice things.

💡 Fun fact: Oh My Zsh started in 2009 as a weekend project by Robby Russell, and it’s now one of the most popular open-source frameworks for managing Zsh configs, with thousands of plugins and themes contributed by the community. <3

OpenCV: The computer vision powerhouse

OpenCV is the most widely used open source computer vision library in the world, powering robotics, medical imaging, and cutting-edge AI research. With a vast community of contributors, it remains the essential toolkit for developers working with images and video.

🧐 Did you know: OpenCV started in 1999 at Intel as a research project and today it powers everything from self-driving cars to Instagram filters, with over 40,000 stars on GitHub and millions of users worldwide!

Open Source Project Security Baseline (OSPSB): Raising the bar

Security isn’t glamorous, but maintaining a healthy open source ecosystem depends on it—and that’s where the Open Source Project Security Baseline (OSPSB) comes in. OSPSB, an initiative from the OpenSSF community, gives maintainers a practical, no-nonsense checklist of what “good security” actually looks like. Instead of vague best practices, it focuses on realistic, minimum requirements that any project can meet, no matter the size of the team.

At Universe 2025, OSPSB resonated with maintainers looking for clarity in a world of shifting threats. The maturity levels and self-assessment tools make it simple to understand where your project is strong, where it needs improvement, and how users can contribute back to security work — a win for the entire ecosystem.

💡 Fun fact: OSPSB is used by hundreds of projects as a self-assessment tool, and it’s supported by the GitHub Secure Open Source Fund to help maintainers keep their software resilient.

The resilience and sustainability of open source is a shared responsibility between maintainers and users. Beyond telling consumers why they should trust your project, Baseline will also tell them where they can contribute to security improvements.

Xavier René-Corail, Senior Director, GitHub Security Research

p5.js and Processing for Creative Coding

p5.js is a beginner-friendly JavaScript library that makes coding accessible for artists, educators, and developers alike. From interactive art to generative visuals, it empowers millions to express ideas through code and brings creative coding into classrooms and communities worldwide.

Processing is an open-source programming environment designed to teach code through visual art and interactive media. Used by artists, educators, and students worldwide, it bridges technology and creativity, making programming accessible, playful, and expressive.

PixiJS: Powering graphics on the web

PixiJS is a powerful HTML5 engine for creating stunning 2D graphics on the web. Built on top of WebGL and WebGPU, it delivers one of the fastest and most flexible rendering experiences available. With an intuitive API, support for custom shaders, advanced text rendering, multi-touch interactivity, and accessibility features, PixiJS empowers developers to craft beautiful, interactive experiences that run smoothly across desktop, mobile, and beyond. With over 46,000 stars on GitHub and adoption by hundreds of global brands, PixiJS has become the go-to toolkit for building games, applications, and large-scale visualizations in the browser.

💡 Fun fact: PixiJS has been around for more than 12 years and has powered everything from hit games like Happy Wheels and Subway Surfers to immersive art installations projected onto city buildings. Developer Simone Seagle used PixiJS to bring The Met’s Open Access artworks to life, animating Kandinsky’s Violett with spring physics and transforming Monet’s water lilies into a swirling, interactive experience.

SparkJS: Splat the limits of 3D

Spark (no, not that one!) is an advanced 3D Gaussian Splatting renderer for THREE.js, letting developers blend cutting-edge research with the most popular JavaScript 3D engine on the web. Portable, fast, and surprisingly lightweight, SparkJS brings real-time splat rendering to almost any device with correct sorting, animation support, and compatibility for major splat formats like .PLY, .SPZ, and .KSPLAT.

What is Gaussian Splatting? Gaussian Splatting is a graphics technique that represents 3D objects as millions of tiny, semi-transparent ellipsoids (“splats”) instead of heavy polygon meshes. It delivers photorealistic detail, smooth surfaces, and fast real-time performance, making it a rising star in computer vision, neural rendering, and now, thanks to Spark, everyday web development.
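The idea behind compositing splats can be sketched in a few lines. This is a purely illustrative toy, not SparkJS’s actual API: the `Splat` fields and `composite` function are hypothetical simplifications of how sorted, semi-transparent splats blend front-to-back.

```python
from dataclasses import dataclass

# Illustrative only: field names are simplified, not taken from SparkJS.
@dataclass
class Splat:
    position: tuple   # 3D center of the ellipsoid
    scale: tuple      # per-axis extent of the ellipsoid
    color: tuple      # RGB, each channel in [0, 1]
    opacity: float    # alpha in [0, 1]

def composite(splats):
    """Front-to-back alpha compositing of depth-sorted splats."""
    color = [0.0, 0.0, 0.0]
    transmittance = 1.0          # how much light still passes through
    for s in splats:             # assumed sorted near-to-far (why sorting matters!)
        weight = s.opacity * transmittance
        for i in range(3):
            color[i] += weight * s.color[i]
        transmittance *= (1.0 - s.opacity)
    return tuple(color)

near = Splat((0, 0, 1), (0.1,) * 3, (1.0, 0.0, 0.0), 0.5)  # half-transparent red
far = Splat((0, 0, 2), (0.1,) * 3, (0.0, 0.0, 1.0), 1.0)   # opaque blue behind it
print(composite([near, far]))  # (0.5, 0.0, 0.5)
```

The sorting assumption in the comment is the same “correct sorting” the paragraph above calls out as essential for real-time splat renderers.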

Zulip: Conversations that scale

Zulip is the open source team chat platform built for thoughtful communication at scale. Unlike traditional chat apps where conversations quickly become noise, Zulip’s unique topic-based threading keeps discussions organized and discoverable, even days later. With integrations, bots, and clients for every platform, Zulip helps distributed teams collaborate without the chaos.

💡 Fun fact: Zulip began as a small startup in 2012, was acquired by Dropbox in 2014, and open sourced in 2015. Today it has over 1500 contributors worldwide, powering communities, classrooms, nonprofits, and companies that need conversations to stay useful.

Photo of Zulip's both in the GitHub Universe Open Source Zone.
From left-to-right, @gnprice, @alya, @timabbott stand at the Zulip booth.

We want to thank the maintainers for participating at GitHub Universe in the Open Source Zone, and for your projects that are making our world turn. You all are what open source is about! <3

Even if you didn’t get to meet these folks at Universe, it’s never too late to check out their work. Or, you can keep powering open source by contributing to or sponsoring a project.

Want to showcase your project at GitHub Universe next year? Apply now! You’ll get two free tickets and a space on the show floor.

The post This year’s most influential open source projects appeared first on The GitHub Blog.

]]>
93051
MCP joins the Linux Foundation: What this means for developers building the next era of AI tools and agents https://github.blog/open-source/maintainers/mcp-joins-the-linux-foundation-what-this-means-for-developers-building-the-next-era-of-ai-tools-and-agents/ Tue, 09 Dec 2025 21:00:13 +0000 https://github.blog/?p=92752 MCP is moving to the Linux Foundation. Here's how that will affect developers.

The post MCP joins the Linux Foundation: What this means for developers building the next era of AI tools and agents appeared first on The GitHub Blog.

]]>

Over the past year, AI development has exploded. More than 1.1 million public GitHub repositories now import an LLM SDK (+178% YoY), and developers created nearly 700,000 new AI repositories, according to this year’s Octoverse report. Agentic tools like vllm, ollama, continue, aider, ragflow, and cline are quickly becoming part of the modern developer stack.

As this ecosystem expands, we’ve seen a growing need to connect models to external tools and systems—securely, consistently, and across platforms. That’s the gap the Model Context Protocol (MCP) has rapidly filled. 

Born as an open source idea inside Anthropic, MCP grew quickly because it was open from the very beginning and designed for the community to extend, adopt, and shape together. That openness is a core reason it became one of the fastest-growing standards in the industry. That also allowed companies like GitHub and Microsoft to join in and help build out the standard.  

Now, Anthropic is donating MCP to the Agentic AI Foundation, which will be managed by the Linux Foundation, and the protocol is entering a new phase of shared stewardship. This will provide developers with a foundation for long-term tooling, production agents, and enterprise systems. This is exciting for those of us who have been involved in the MCP community. And given our long-term support of the Linux Foundation, we are hugely supportive of this move.

The past year has seen incredible growth and change for MCP. I thought it would be great to review how MCP got here, and what its transition to the Linux Foundation means for the next wave of AI development.

Before MCP: Fragmented APIs and brittle integrations

LLMs started as isolated systems. You sent them prompts and got responses back. We would use patterns like retrieval-augmented generation (RAG) to help us bring in data to give more context to the LLM, but that was limited. OpenAI’s introduction of function calling brought about a huge change as, for the first time, you could call any external function. This is what we initially built on top of as part of GitHub Copilot. 

By early 2023, developers were connecting LLMs to external systems through a patchwork of incompatible APIs: bespoke extensions, IDE plugins, and platform-specific agent frameworks, among other things. Every provider had its own integration story, and none of them worked in exactly the same way. 

Nick Cooper, an OpenAI engineer and MCP steering committee member, summarized it plainly: “All the platforms had their own attempts like function calling, plugin APIs, extensions, but they just didn’t get much traction.”

This wasn’t a tooling problem. It was an architecture problem.

Connecting a model to the real-time web, a database, a ticketing system, a search index, or a CI pipeline required bespoke code that often broke with the next model update. Developers had to write deep integration glue one platform at a time.

As David Soria Parra, a senior engineer at Anthropic and one of the original architects of MCP, put it, the industry was running headfirst into an n×m integration problem with too many clients, too many systems, and no shared protocol to connect them.

In practical terms, the n×m integration problem describes a world where every model client (n) must integrate separately with every tool, service, or system developers rely on (m). Five different AI clients talking to ten internal systems means fifty bespoke integrations—each with different semantics, authentication flows, and failure modes. MCP collapses this by defining a single, vendor-neutral protocol that both clients and tools can speak. With something like GitHub Copilot, which connects developers to models from all of the frontier labs, we also need to connect to hundreds of systems as part of the developer platform. This was not just an integration challenge, but an innovation challenge.
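The arithmetic behind that collapse can be made concrete with a tiny sketch (the client and system names are made up for illustration):

```python
# Illustrative arithmetic for the n×m integration problem.
clients = ["client_a", "client_b", "client_c", "client_d", "client_e"]  # n = 5 model clients
systems = [f"system_{i}" for i in range(10)]                            # m = 10 tools/services

# Without a shared protocol: every client needs its own adapter per system.
bespoke_integrations = len(clients) * len(systems)   # n × m

# With a shared protocol like MCP: each side implements the protocol once.
protocol_integrations = len(clients) + len(systems)  # n + m

print(bespoke_integrations)   # 50
print(protocol_integrations)  # 15
```

Adding a sixth client under the bespoke model costs ten new adapters; under a shared protocol it costs one.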

And the absence of a standard wasn’t just inefficient; it slowed real-world adoption. In regulated industries like finance, healthcare, and security, developers needed secure, auditable, cross-platform ways to let models communicate with systems. What they got instead were proprietary plugin ecosystems with unclear trust boundaries.

MCP: A protocol built for how developers work

Across the industry, including at Anthropic, GitHub, Microsoft, and others, engineers kept running into the same wall: reliably connecting models to context and tools. Inside Anthropic, teams noticed that their internal prototypes kept converging on similar patterns for requesting data, invoking tools, and handling long-running tasks. 

Soria Parra described MCP’s origin simply: it was a way to standardize patterns Anthropic engineers were reinventing. MCP distilled those patterns into a protocol designed around communication, or how models and systems talk to each other, request context, and execute tools.

Anthropic’s Jerome Swanwick recalled an early internal hackathon where “every entry was built on MCP … went viral internally.”

That early developer traction became the seed. Once Anthropic released MCP publicly alongside high-quality reference servers, we saw the value immediately, and it was clear that the broader community did too. MCP offered a shared way for models to communicate with external systems, regardless of client, runtime, or vendor.

Why MCP clicked: Built for real developer workflows

When MCP launched, adoption was immediate and unlike any standard I have seen before.

Developers building AI-powered tools and agents had already experienced the pain MCP solved. As Microsoft’s Den Delimarsky, a principal engineer and core MCP steering committee member focused on security and OAuth, said: “It just clicked. I got the problem they were trying to solve; I got why this needs to exist.”

Within weeks, contributors from Anthropic, Microsoft, GitHub, OpenAI, and independent developers began expanding and hardening the protocol. Over the next nine months, the community added:

  • OAuth flows for secure, remote servers
  • Sampling semantics (These help ensure consistent model behavior when tools are invoked or context is requested, giving developers more predictable execution across different MCP clients.)
  • Refined tool schemas
  • Consistent server discovery patterns
  • Expanded reference implementations
  • Improved long-running task support

Long-running task APIs are a critical feature. They allow builds, indexing operations, deployments, and other multi-minute jobs to be tracked predictably, without polling hacks or custom callback channels. This was essential for the long-running AI agent workflows that we now see today.

Delimarsky’s OAuth work also became an inflection point. Prior to it, most MCP servers ran locally, which limited usage in enterprise environments and caused installation friction. OAuth enabled remote MCP servers, unlocking secure, compliant integrations at scale. This shift is what made MCP viable for multi-machine orchestration, shared enterprise services, and non-local infrastructure.

Just as importantly, OAuth gives MCP a familiar and proven security model with no proprietary token formats or ad-hoc trust flows. That makes it significantly easier to adopt inside existing enterprise authentication stacks.

Similarly, the MCP Registry—developed in the open by the MCP community with contributions and tooling support from Anthropic, GitHub, and others—gave developers a discoverability layer and gave enterprises governance control. Toby Padilla, who leads MCP Server and Registry efforts at GitHub, described this as a way to ensure “developers can find high-quality servers, and enterprises can control what their users adopt.”

But no single company drove MCP’s trajectory. What stands out across all my conversations with the community is the sense of shared stewardship.

Cooper articulated it clearly: “I don’t meet with Anthropic, I meet with David. And I don’t meet with Google, I meet with Che.” The work was never about corporate boundaries. It was about the protocol.

This collaborative culture, reminiscent of the early days of the web, is the absolute best of open source. It’s also why, in my opinion, MCP spread so quickly.

Developer momentum: MCP enters the Octoverse

The 2025 Octoverse report, our annual deep dive into open source and public activity on GitHub, highlights an unprecedented surge in AI development:

  • 1.13M public repositories now import an LLM SDK (+178% YoY)
  • 693k new AI repositories were created this year
  • 6M+ monthly commits to AI repositories
  • Tools like vllm, ollama, continue, aider, cline, and ragflow dominated fastest-growing repos
  • Standards are emerging in real time, with MCP alone hitting 37k stars in under eight months

These signals tell a clear story: developers aren’t just experimenting with LLMs; they’re operationalizing them.

With hundreds of thousands of developers building AI agents, local runners, pipelines, and inference stacks, the ecosystem needs consistent ways to connect models to tools, services, and context.

MCP isn’t riding the wave. The protocol aligns with where developers already are and where the ecosystem is heading.

The Linux Foundation move: The protocol becomes infrastructure

As MCP adoption accelerated, the need for neutral governance became unavoidable. Openness is what drove its initial adoption, but that also demands shared stewardship—especially once multiple LLM providers, tool builders, and enterprise teams began depending on the protocol.

By transitioning governance to the Linux Foundation, Anthropic and the MCP steering committee are signaling that MCP has reached the maturity threshold of a true industry standard.

Open, vendor-neutral governance offers everyone:

1. Long-term stability

A protocol is only as strong as its longevity. The Linux Foundation’s backing reduces risk for teams adopting MCP for deep integrations.

2. Equal participation

Whether you’re a cloud provider, startup, or individual maintainer, Linux Foundation governance processes support equal contribution rights and transparent evolution.

3. Compatibility guarantees

As more clients, servers, and agent frameworks rely on MCP, compatibility becomes as important as the protocol itself.

4. The safety of an open standard

In an era where AI is increasingly part of regulated workloads, neutral governance makes MCP a safer bet for enterprises.

MCP is now on the same path as technologies like Kubernetes, SPDX, GraphQL, and the CNCF stack—critical infrastructure maintained in the open.

Taken together, this move aligns with the Agentic AI Foundation’s intention to bring together multiple model providers, platform teams, enterprise tool builders, and independent developers under a shared, neutral process. 

What MCP unlocks for developers today

Developers often ask: “What do I actually get from adopting MCP?”

Here’s the concrete value as I see it:

1. One server, many clients

Expose a tool once. Use it across multiple AI clients, agents, shells, and IDEs.

No more bespoke function-calling adapters per model provider.

2. Predictable, testable tool invocation

MCP’s schemas make tool interaction debuggable and reliable, which is closer to API contracts than prompt engineering.
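To make the contrast with prompt engineering concrete, here is a rough sketch of a schema-driven tool contract. The tool name, fields, and validator below are hypothetical illustrations, not taken from the MCP specification; consult the spec for the actual tool schema format.

```python
# Hypothetical tool definition in a JSON Schema style (illustrative only;
# the real MCP tool schema is defined in the MCP specification).
search_issues_tool = {
    "name": "search_issues",
    "description": "Search a project's issue tracker.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "limit": {"type": "integer"},
        },
        "required": ["query"],
    },
}

def validate_input(tool: dict, args: dict) -> bool:
    """Minimal check that arguments satisfy the tool's declared schema."""
    schema = tool["inputSchema"]
    type_map = {"string": str, "integer": int}
    # Required fields must be present...
    for field in schema.get("required", []):
        if field not in args:
            return False
    # ...and every argument must be declared with the right type.
    for field, value in args.items():
        spec = schema["properties"].get(field)
        if spec is None or not isinstance(value, type_map[spec["type"]]):
            return False
    return True

print(validate_input(search_issues_tool, {"query": "flaky test", "limit": 5}))  # True
print(validate_input(search_issues_tool, {"limit": 5}))                          # False
```

Because the contract is declared up front, a client can reject malformed calls before they ever reach the tool, which is what makes invocation debuggable rather than a matter of hoping the model formatted its request correctly.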

3. A protocol for agent-native workloads

As Octoverse shows, agent workflows are moving into mainstream engineering:

  • 1M+ agent-authored pull requests via GitHub Copilot coding agent alone in the five months since it was released
  • Rapid growth of key AI projects like vllm and ragflow
  • Local inference tools exploding in popularity

Agents need structured ways to call tools and fetch context. MCP provides exactly that.

4. Secure, remote execution

OAuth and remote-server support mean MCP works for:

  • Enterprises
  • Regulated workloads
  • Multi-machine orchestration
  • Shared internal tools

5. A growing ecosystem of servers

With a growing set of community and vendor-maintained MCP servers (and more added weekly), developers can connect to:

  • Issue trackers
  • Code search and repositories
  • Observability systems
  • Internal APIs
  • Cloud services
  • Personal productivity tools

Soria Parra emphasized that MCP isn’t just for LLMs calling tools. It can also invert the workflow by letting developers use a model to understand their own complex systems.

6. It matches how developers already build software

MCP aligns with developer habits:

  • Schema-driven interfaces (JSON Schema–based)
  • Reproducible workflows
  • Containerized infrastructure
  • CI/CD environments
  • Distributed systems
  • Local-first testing

Most developers don’t want magical behavior—they want predictable systems. MCP meets that expectation.

MCP also intentionally mirrors patterns developers already know from API design, distributed systems, and standards evolution—favoring predictable, contract-based interactions over “magical” model behaviors.

What happens next

The Linux Foundation announcement is the beginning of MCP’s next phase, and the move signals:

  • Broader contribution
  • More formal governance
  • Deeper integration into agent frameworks
  • Cross-platform interoperability
  • An expanding ecosystem of servers and clients

Given the global developer growth highlighted in Octoverse—36M new developers on GitHub alone this year—the industry needs shared standards for AI tooling more urgently than ever.

MCP is poised to be part of that future. It’s a stable, open protocol that lets developers build agents, tools, and workflows without vendor lock-in or proprietary extensions.

The next era of software will be shaped not just by models, but by how models interact with systems. MCP is becoming the connective tissue for that interaction.

And with its new home in the Linux Foundation, that future now belongs to the community.

Explore the MCP specification and the GitHub MCP Registry to join the community working on the next phase of the protocol.

The post MCP joins the Linux Foundation: What this means for developers building the next era of AI tools and agents appeared first on The GitHub Blog.

“The local-first rebellion”: How Home Assistant became the most important project in your house https://github.blog/open-source/maintainers/the-local-first-rebellion-how-home-assistant-became-the-most-important-project-in-your-house/ Tue, 02 Dec 2025 17:19:32 +0000 https://github.blog/?p=92596 Learn how one of GitHub’s fastest-growing open source projects is redefining smart homes without the cloud.

The post “The local-first rebellion”: How Home Assistant became the most important project in your house appeared first on The GitHub Blog.

Franck Nijhof—better known as Frenck—is one of those maintainers who ended up at the center of a massive open source project not because he chased the spotlight, but because he helped hold together one of the most active, culturally important, and technically demanding open source ecosystems on the planet. As a lead of Home Assistant and a GitHub Star, Frenck guides the project that didn’t just grow. It exploded.

This year’s Octoverse report confirms it: Home Assistant was one of the fastest-growing open source projects by contributors, ranking alongside AI infrastructure giants like vLLM, Ollama, and Transformers. It also appeared in the top projects attracting first-time contributors, sitting beside massive developer platforms such as VS Code. In a year dominated by AI tooling, agentic workflows, and typed language growth, Home Assistant stood out as something else entirely: an open source system for the physical world that grew at an AI-era pace.

The scale is wild. Home Assistant is now running in more than 2 million households, orchestrating everything from thermostats and door locks to motion sensors and lighting. All on users’ own hardware, not the cloud. The contributor base behind that growth is just as remarkable: 21,000 contributors in a single year, feeding into one of GitHub’s most lively ecosystems at a time when a new developer joins GitHub every second.

In our podcast interview, Frenck explains it almost casually.

Home Assistant is a free and open source home automation platform. It allows you to connect all your devices together, regardless of the brands they’re from… And it runs locally.

Franck Nijhof, lead of Home Assistant

He smiles when he describes just how accessible it is. “Flash Home Assistant to an SD card, put it in, and it will start scanning your home,” he says. 

This is the paradox that makes Home Assistant compelling to developers: it’s simple to use, but technically enormous. A local-first, globally maintained automation engine for the home. And Frenck is one of the people keeping it all running.

The architecture built to tame thousands of device ecosystems

At its core, Home Assistant’s problem is combinatorial explosion. The platform supports “hundreds, thousands of devices… over 3,000 brands,” as Frenck notes. Each one behaves differently, and the only way to normalize them is to build a general-purpose abstraction layer that can survive vendor churn, bad APIs, and inconsistent firmware.

Instead of treating devices as isolated objects behind cloud accounts, everything is represented locally as entities with states and events. A garage door is not just a vendor-specific API; it’s a structured device that exposes capabilities to the automation engine. A thermostat is not a cloud endpoint; it’s a sensor/actuator pair with metadata that can be reasoned about.

That consistency is why people can build wildly advanced automations.

Frenck describes one particularly inventive example: “Some people install weight sensors into their couches so they actually know if you’re sitting down or standing up again. You’re watching a movie, you stand up, and it will pause and then turn on the lights a bit brighter so you can actually see when you get your drink. You get back, sit down, the lights dim, and the movie continues.”

A system that can orchestrate these interactions is fundamentally a distributed event-driven runtime for physical spaces. Home Assistant may look like a dashboard, but under the hood it behaves more like a real-time OS for the home.
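That runtime model can be sketched in a few lines. The toy version below (all names invented, not Home Assistant's real API) shows the couch example as state changes driving subscribed automations:

```python
# A toy event-driven runtime in the spirit of Home Assistant's model:
# devices are entities with state, and automations subscribe to state
# changes. Entity IDs and services here are illustrative only.

class Hub:
    def __init__(self):
        self.states = {}
        self.listeners = []

    def subscribe(self, callback):
        self.listeners.append(callback)

    def set_state(self, entity_id, new_state):
        old = self.states.get(entity_id)
        self.states[entity_id] = new_state
        for cb in self.listeners:
            cb(entity_id, old, new_state)

hub = Hub()
actions = []

def couch_automation(entity_id, old, new):
    # Pause the movie and raise the lights when someone stands up.
    if entity_id == "binary_sensor.couch_occupied" and old == "on" and new == "off":
        actions.append("media_player.pause")
        actions.append("light.living_room.brighten")

hub.subscribe(couch_automation)
hub.set_state("binary_sensor.couch_occupied", "on")   # sit down
hub.set_state("binary_sensor.couch_occupied", "off")  # stand up
print(actions)  # -> ['media_player.pause', 'light.living_room.brighten']
```

The point of the abstraction is that the automation never sees a vendor API, only entity IDs, states, and events.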

Running everything locally is not a feature. It’s a hard constraint. 

Almost every mainstream device manufacturer has pivoted to cloud-centric models. Frenck points out the absurdity:

It’s crazy that we need the internet nowadays to change your thermostat.

The local-first architecture means Home Assistant can run on hardware as small as a Raspberry Pi but must handle workloads that commercial systems offload to the cloud: device discovery, event dispatch, state persistence, automation scheduling, voice pipeline inference (if local), real-time sensor reading, integration updates, and security constraints.

This architecture forces optimizations few consumer systems attempt. If any of this were offloaded to a vendor cloud, the system would be easier to build. But Home Assistant’s philosophy reverses the paradigm: the home is the data center.

Everything from SSD wear leveling on the Pi to MQTT throughput to Zigbee network topologies becomes a software challenge. And because the system must keep working offline, there’s no fallback.

This is engineering with no safety net.

The Open Home Foundation: Governance as a technical requirement

When you build a system that runs in millions of homes, the biggest long-term risk isn’t bugs. It’s ownership.

“It can never be bought, it can never be sold,” Frenck says of Home Assistant’s move to the Open Home Foundation. “We want to protect Home Assistant from the big guys in the end.”

This governance model isn’t philosophical; it is an architectural necessity. If Home Assistant ever became a commercial acquisition, cloud lock-in would follow. APIs would break. Integrations would be deprecated. Automations built over years would collapse.

A list of the fastest-growing open source projects by contributors. home-assistant/core is number 10.

The Foundation encodes three engineering constraints that ripple through every design decision:

  • Privacy: “Local control and privacy first.” All processing must occur on-device.
  • Choice: “You should be able to choose your own devices” and expect them to interoperate.
  • Sustainability: If a vendor kills its cloud service, the device must still work.

Frenck calls out Nest as an example: “If some manufacturer turns off the cloud service… that turns into e-waste.”

This is more than governance; it is technical infrastructure. It dictates API longevity, integration strategy, reverse engineering priorities, and local inference choices. It’s also a blueprint that forces the project to outlive any individual device manufacturer.

The community model that accidentally solved software quality

We don’t build Home Assistant, the community does.

“We cannot build hundreds, thousands of device integrations. I don’t have tens of thousands of devices in my home,” Frenck says.

This is where the project becomes truly unique.

Developers write integrations for devices they personally own. Reviewers test contributions against devices in their own homes. Break something, and you break your own house. Improve something, and you improve your daily life.

“That’s where the quality comes from,” Frenck says. “People run this in their own homes… and they take care that it needs to be good.”

This is the unheard-of secret behind Home Assistant’s engineering velocity. Every contributor has access to production hardware. Every reviewer has a high-stakes environment to protect. No staging environment could replicate millions of real homes, each with its own weird edge cases.

Assist: A local voice assistant built before the AI hype wave

Assist is Home Assistant’s built-in voice assistant, a modular system that lets you control your home using speech without sending audio or transcripts to any cloud provider. As Frenck puts it:

We were building a voice assistant before the AI hype… we want to build something privacy-aware and local.

Rather than copying commercial assistants like Alexa or Google Assistant, Assist takes a two-layer approach that prioritizes determinism, speed, and user choice.

Stage 1: Deterministic, no-AI commands

Assist began with a structured intent engine powered by hand-authored phrases contributed by the community. Commands like “Turn on the kitchen light” or “Turn off the living room fan” are matched directly to known actions without using machine learning at all. This makes them extremely fast, reliable, and fully local. No network calls. No cloud. No model hallucinations. Just direct mapping from phrase to automation.

Stage 2: Optional AI when you want natural language

One of the more unusual parts of Assist is that AI is never mandatory. Frenck emphasizes that developers and users get to choose their inference path: “You can even say you want to connect your own OpenAI account. Or your own Google Gemini account. Or get a Llama running locally in your own home.”

Assist evaluates each command and decides whether it needs AI. If a command is known, it bypasses the model entirely.

“Home Assistant would be like, well, I don’t have to ask AI,” Frenck says. “I know what this is. Let me turn off the lights.”

The system only uses AI when a command requires flexible interpretation, making AI a fallback instead of the foundation.
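In pseudocode terms, the routing is simple. Here's a hedged sketch of the two-stage design, with invented phrases and a stub standing in for whatever model the user has configured:

```python
# Sketch of Assist's two-stage routing: known phrases map straight to
# actions with no model involved; anything else falls back to an
# (optional) AI interpreter. Phrases and entity names are illustrative.

KNOWN_INTENTS = {
    "turn on the kitchen light": ("light.kitchen", "turn_on"),
    "turn off the living room fan": ("fan.living_room", "turn_off"),
}

def ask_ai(command):
    # Stand-in for an optional model call (local Llama, OpenAI, Gemini, ...).
    return ("ai_interpreted", command)

def handle(command):
    intent = KNOWN_INTENTS.get(command.strip().lower())
    if intent is not None:
        return intent           # Stage 1: deterministic, fully local
    return ask_ai(command)      # Stage 2: AI only as a fallback

print(handle("Turn on the kitchen light"))  # -> ('light.kitchen', 'turn_on')
print(handle("make it cozy in here"))       # -> ('ai_interpreted', 'make it cozy in here')
```

Because the deterministic path wins whenever it matches, the common case stays fast and predictable even when a model is configured.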

Open hardware to support the system

To bootstrap development and give contributors a reference device, the team built a fully open source smart speaker—the Voice Assistant Preview Edition.

“We created a small speaker with a microphone array,” Frenck says. “It’s fully open source. The hardware is open source; the software running on it is ESPHome.”

This gives developers a predictable hardware target for building and testing voice features, instead of guessing how different microphones, DSP pipelines, or wake word configurations behave across vendors.

Hardware as a software accelerator

Most open source projects avoid hardware. Home Assistant embraced it out of practical necessity.

“In order to get the software people building the software for hardware, you need to build hardware,” Frenck says.

Home Assistant Green, its prebuilt plug-and-play hub, exists because onboarding requires reliable hardware. The Voice Assistant Preview Edition exists because the voice pipeline needs a known microphone and speaker configuration.

This is a rare pattern: hardware serves as scaffolding for software evolution. It’s akin to building a compiler and then designing a reference CPU so contributors can optimize code paths predictably.

The result is a more stable, more testable, more developer-friendly software ecosystem.

A glimpse into the future: local agents and programmable homes

The trajectory is clear. With local AI models, deterministic automations, and a stateful view of the entire home, the next logical step is agentic behavior that runs entirely offline.

If a couch can trigger a movie automation, and a brewery can run a fermentation pipeline, the home itself becomes programmable. Every sensor is an input. Every device is an actuator. Every automation is a function. The entire house becomes a runtime.

And unlike cloud-bound competitors, Home Assistant’s runtime belongs to the homeowner, not the service provider.

Frenck sums up the ethos: “We give that control to our community.”

Looking to stay one step ahead? Read the latest Octoverse report and consider trying Copilot CLI.

Building beyond the browser: Keeley Hammond on Electron, open source, and the future of maintainership https://github.blog/open-source/maintainers/building-beyond-the-browser-keeley-hammond-on-electron-open-source-and-the-future-of-maintainership/ Thu, 25 Sep 2025 17:05:48 +0000 https://github.blog/?p=91120 Learn what it really takes to sustain one of the web’s most widely used frameworks on this episode of the GitHub Podcast.

The post Building beyond the browser: Keeley Hammond on Electron, open source, and the future of maintainership appeared first on The GitHub Blog.

Every so often, a conversation completely reframes how you see something you thought you understood. That’s what happened when Kedasha Kerr and I sat down with Keeley Hammond, a longtime maintainer of the Electron Project.

Over the past 15 years I’ve been in the open source ecosystem, I’ve watched Electron power more and more of the tools we use daily: VS Code, Slack, Discord. Though I’ve worked on the OpenJS Foundation Board, when I was talking with Keeley for The GitHub Podcast, I realized I’d been missing a crucial part of the story:

Electron allows you to build cross-platform desktop applications using web technology. It’s like React Native or Flutter but for desktop.

Simple, right? But as we dug deeper, what emerged wasn’t just a technical framework discussion. It was a living example of what I’ve been thinking about for years: how to build sustainable pathways to maintainership.

This conversation reminded me why I love this work. It’s not just about the code. It’s about the people, the systems, and the culture we build together.

Listen to the full episode👇

From “newbie questions” to core maintainer

Keeley’s path to maintainership started at InVision, where no one really knew Electron yet. She saw an opening:

I thought, okay, I’ll be the Electron person. I reached out to the maintainers and they were so welcoming. That’s why I’m still here.

That warmth mattered. Instead of being brushed off for asking “basic” questions, Keeley found quick, patient responses and even a private Slack where she could fumble and learn. Years later, she’s paying it forward by helping shape a culture where newcomers feel just as supported.

Misconceptions about Electron

If you’ve ever heard that Electron apps are bloated or slow, Keeley’s take might surprise you:

Bad JavaScript is bad JavaScript no matter where it lives. You’ll see native apps hogging resources too. It’s about how you build.

Electron apps can be slim, fast, and secure. The team backports Chromium changes weekly, maintains three active release lines, and invests heavily in patching vulnerabilities. In other words, Electron takes security and performance as seriously as any native framework.

How governance sustains growth

Projects at Electron’s scale don’t run on passion alone. Keeley described a governance model with seven working groups — covering everything from releases to APIs — that spreads responsibility across maintainers.

Paid contributors from Slack and Microsoft anchor the project, but volunteers remain essential. Electron leans on their expertise in packaging, installers, and ecosystem tools.

As many volunteers as we can hire, we do. When we can’t, we look for ways to support them — funding, travel, resources. Nobody should feel like a second-class citizen.

That intentional balance between corporate support and volunteer energy is part of why Electron continues to thrive.

The systems that sustain maintainers

If there’s one lesson Keeley wanted other maintainers to take away, it’s this: automate the grunt work.

Issue templates that request missing details. Labels and canned responses that keep triage moving. Runbooks that standardize how mentors support new contributors.

Open source is a firehose. Automation frees you up to focus on the harder, human work of debugging, mentoring, and building.

This resonates deeply with what I’ve been advocating for years. The right systems can transform a project from chaos to collaboration. Setting up issue templates or writing runbooks isn’t glamorous work, but it’s the foundation that makes everything else possible. 

AI, spam, and the next challenge

But here’s where things get complicated — and where maintainers need our support more than ever. Keeley flagged a rising problem: AI-generated spam proposals, especially in programs like Google Summer of Code.

We got twice as many proposals this year. A good portion were AI-generated noise. It’s frustrating when you know some contributors put real thought into theirs.

But she also sees potential. Used responsibly, AI helps non-native English speakers communicate more clearly. It can assist with code exploration. The challenge isn’t banning AI in these spaces, it’s creating filters and teachable moments to separate noise from signal.

As fellow host Kedasha put it:

This is a teaching moment. AI can help, but you still need to understand the core problem. Otherwise it’s just a waste of time.

The very human skills of critical thinking, creativity, and resilience matter more than ever with the rise of AI.

What Electron teaches us about open source

After our conversation, I keep thinking about how intentional Electron is about culture. From triage systems to governance groups, from hiring maintainers when possible to sponsoring volunteer contributions when not, everything is designed to keep the community welcoming and sustainable.

We can learn so much from this approach.

Because if projects like Electron show us anything, it’s that successful open source isn’t just about shipping code. It’s about building systems and cultures that make contributing feel worthwhile.

We’re always looking for new contributors and maintainers.

That’s an open invitation.

What you can steal from Electron’s playbook

  • Set up issue templates with auto-responses for missing details.
  • Create runbooks for common interactions (they use Notion).
  • Establish working groups to distribute ownership.
  • Run regular triage meetings (Electron’s releases group meets weekly).
  • Centralize communications (they route all GSoC emails to one Slack channel).
  • Be intentional about culture. Write down how you’ll behave toward contributors.

Looking forward

Electron is powering the apps we use every day. But it goes so much further. It’s also modeling what sustainable open source can look like in a world where the pressures are bigger than ever — spam, scaling, and the constant firehose of contributions.

Keeley’s journey from “newbie” to core maintainer isn’t unique because she’s exceptional (though she is). It’s replicable because Electron built the pathways to make it possible.

My takeaway: The health of open source isn’t measured in lines of code or stars. It’s measured in how well we support the people behind the projects.

Listen to our full conversation with Keeley Hammond on The GitHub Podcast. And don’t miss the next episode by subscribing today!

Building personal apps with open source and AI https://github.blog/open-source/maintainers/building-personal-apps-with-open-source-and-ai/ Fri, 12 Sep 2025 16:00:00 +0000 https://github.blog/?p=90763 Hear about the personal tools we use to improve our workflows (and how to get started building your own) on this episode of the GitHub Podcast.

The post Building personal apps with open source and AI appeared first on The GitHub Blog.

There’s something magical about a tool that does exactly what you need, no matter how small the task. In my work on the GitHub Developer Advocacy team (and honestly just in life), I’ve found that the best solutions are often the simplest ones.

It doesn’t need to be a Swiss Army knife. It could be just like a really good scissors or paring knife.

Sometimes, it’s just taking a manual task and automating it. For example, my cohost Cassidy Williams shares a technical interview question with her newsletter subscribers each week. People submit answers in all sorts of formats — GitHub links, CodePen snippets, tweets, you name it. Gathering all those responses and formatting them for publishing used to be a tedious, manual slog. So she wrote a tiny script that converts these answers into a Markdown list.

I’ve also built a tool to convert CSV to Markdown. It’s not fancy, but it’s saved countless hours and so much mental energy.
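A converter like that really can be tiny. Here's a minimal, stdlib-only sketch of the idea (not my actual tool, and it assumes the first CSV row is a header):

```python
import csv
import io

def csv_to_markdown(csv_text):
    """Convert CSV text into a Markdown table (first row treated as header)."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    lines = [
        "| " + " | ".join(header) + " |",
        "| " + " | ".join("---" for _ in header) + " |",
    ]
    lines += ["| " + " | ".join(row) + " |" for row in body]
    return "\n".join(lines)

print(csv_to_markdown("name,stars\nvllm,1000\nragflow,500"))
```

Using `csv.reader` instead of naive string splitting means quoted fields with commas still come out right.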

These little tools may seem mundane. But their impact is huge. They free us from repetitive tasks, help us focus on what matters, and make our days a little brighter.

Listen to our full discussion on the GitHub Podcast 👇

Open source as a playground

One of the best things about being part of the open source community is knowing you’re never alone in your needs. Chances are, if you have a problem, someone else has faced it too — and maybe even built a solution!

I love browsing GitHub for those “just right” little tools. Sometimes, someone has already built exactly what I need. Other times, I find something close, fork it, and tweak it to fit my workflow. That’s the beauty of open source: It’s a playground for experimentation and sharing.

And when you open source your own creations, you’re not just helping yourself. You’re potentially helping countless others, maybe even inspiring contributions and new features. For example, my to-do app started as a personal project, but once I put it out there, people suggested new ideas, like a resume button for paused tasks. Some, I’ve added. Others, I encourage folks to fork and make their own.

That’s where open source comes in… fork it and use it.

AI as a force multiplier

If open source is the foundation, AI has become the rocket fuel for personal software. Building something just for yourself used to mean wrestling with unfamiliar frameworks or spending hours debugging arcane errors. Now? AI can help you scaffold a project, troubleshoot issues, or even just explain a tricky codebase.

I’ve seen friends who swore off frontend development, intimidated by the learning curve, turn around and build working dashboards in a single evening (with a little help from tools like GitHub Copilot). AI isn’t a replacement for learning, but it’s a facilitator for unblocking ideas and accelerating progress.

Watch me build an app in the demo below 👇

Reducing mental overhead and increasing joy

For me, the biggest benefit of building my own tools isn’t just the time saved, it’s the reduction in mental overhead. When you know that a part of your workflow is handled, you’re free to focus your mind on more creative or meaningful work.

I find it so much more fun to build now because I have my AI sidekick that will tell me where I went wrong or how to fix something that is incorrect. I’m no longer toiling over software that I want to build and crying because I can’t figure out the bugs.

Building personal software and using open source and AI have made building software more enjoyable.

Security, sharing, and growing your tools

Of course, when a tool is just for me, I don’t worry about making it bulletproof. But the moment I open source it and others start using it, security and maintainability become part of the conversation. That’s where community shines! Others may notice issues, suggest improvements, or even take the project in new directions.

I try to be clear in my contributing guidelines: If you want to add a feature that’s not on my personal roadmap, go ahead and fork it! That’s the beauty of open source — it enables everyone to shape software to their own needs.

Building personal tools, sharing them, and watching them grow is one of the most rewarding parts of being a developer. With open source and AI at our fingertips, there’s never been a better time to create the exact solutions you need — and maybe help someone else along the way.

Hear more stories and tips on the GitHub Podcast > 

How GitHub Models can help open source maintainers focus on what matters https://github.blog/open-source/maintainers/how-github-models-can-help-open-source-maintainers-focus-on-what-matters/ Thu, 28 Aug 2025 19:02:44 +0000 https://github.blog/?p=90506 Learn how GitHub Models helps open source maintainers automate repetitive tasks like issue triage, duplicate detection, and contributor onboarding — saving hours each week.

The post How GitHub Models can help open source maintainers focus on what matters appeared first on The GitHub Blog.

Open source runs on passion and persistence. Maintainers are the volunteers who show up to triage issues, review contributions, manage duplicates, and do the quiet work that keeps projects going.

Most don’t plan on becoming community managers. But they built something useful, shared it, and stayed when people started depending on it. That’s how creators become stewards.

But as your project grows, your time to build shrinks. Instead, you’re writing the same “this looks like a duplicate of #1234” comment, asking for missing reproduction steps, and manually labeling issues. It’s necessary work. But it’s not what sparked your love for the project or open source.

That’s why we built GitHub Models: to help you automate the repetitive parts of project management using AI, right where your code lives and in your workflows, so you can focus on what brought you here in the first place. 

What maintainers told us

We surveyed over 500 maintainers of leading open source projects about their AI needs. Here’s what they reported:

  • 60% want help with issue triage — labeling, categorizing, and managing the flow
  • 30% need duplicate detection — finding and linking similar issues automatically
  • 10% want spam protection — filtering out low quality contributions
  • 5% need slop detection — identifying low quality pull requests that add noise

Folks surveyed indicated that they wanted AI to serve as a second pair of eyes and not intervene unless asked. They also said that triaging issues, finding similar issues, and helping write minimal reproductions were top of mind, and for some, clustering issues by topic or feature was the most important concern of all.

How GitHub Models + GitHub Actions = Continuous maintainer support

We’re calling this pattern Continuous AI: using automated AI workflows to enhance collaboration, just like CI/CD transformed testing and deployment. With GitHub Models and GitHub Actions, you can start applying it today.

Here’s how Continuous AI can help maintainers (you!) manage their projects


The following examples are designed for you to easily copy and paste into your project. Make sure GitHub Models is enabled for your repository or organization, and then just copy the YAML into your repo’s .github/workflows directory. Customize these code blocks as needed for your project.

Add permissions: models: read to your workflow YAML, and your action will be able to call models using the built-in GITHUB_TOKEN. No special setup or external keys are required for most projects. 

Automatic issue deduplication

Problem: You wake up to three new issues, two of them are describing the same bug. You copy and paste links, close duplicates, and move on… until it happens again tomorrow.

Solution: Implement GitHub Models and a workflow to automatically check if a new issue is similar to existing ones and post a comment with links.

name: Detect duplicate issues

on:
  issues:
    types: [opened, reopened]

permissions:
  models: read
  issues: write

concurrency:
  group: ${{ github.workflow }}-${{ github.event.issue.number }}
  cancel-in-progress: true

jobs:
  continuous-triage-dedup:
    if: ${{ github.event.issue.user.type != 'Bot' }}
    runs-on: ubuntu-latest
    steps:
      - uses: pelikhan/action-genai-issue-dedup@v0
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          # Optional tuning:
          # labels: "auto"          # compare within matching labels, or "bug,api"
          # count: "20"             # how many recent issues to check
          # since: "90d"            # look back window, supports d/w/m

This keeps your issues organized, reduces triage work, and helps contributors find answers faster. You can adjust labels, count, and since to fine-tune what it compares against.

Issue completeness

Problem: A bug report lands in your repo with no version number, no reproduction steps, and no expected versus actual behavior. You need that information before you can help.

Solution: Automatically detect incomplete issues and ask for the missing details.

name: Issue Completeness Check

on:
  issues:
    types: [opened]

permissions:
  issues: write
  models: read

jobs:
  check-completeness:
    runs-on: ubuntu-latest
    steps:
      - name: Check issue completeness
        uses: actions/ai-inference@v1
        id: ai
        with:
          prompt: |
            Analyze this GitHub issue for completeness. If missing reproduction steps, version info, or expected/actual behavior, respond with a friendly request for the missing info. If complete, say so.
            
            Title: ${{ github.event.issue.title }}
            Body: ${{ github.event.issue.body }}
          system-prompt: You are a helpful assistant that helps analyze GitHub issues for completeness.
          model: openai/gpt-4o-mini
          temperature: 0.2

      - name: Comment on issue
        if: steps.ai.outputs.response != ''
        uses: actions/github-script@v7
        with:
          script: |
            github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: ${{ github.event.issue.number }},
              body: `${{ steps.ai.outputs.response }}`
            })

The bot could respond: “Hi! Thanks for reporting this. To help us investigate, could you please provide: 1) Your Node.js version, 2) Steps to reproduce the issue, 3) What you expected to happen versus what actually happened?”

Or you can take it a step further and ensure the issue is following your contributing guidelines, like ben-balter/ai-community-moderator (MIT License).

Spam and “slop” detection

Problem: You check notifications and find multiple spam pull requests or low effort “fix typo” issues.

Solution: Use AI to flag suspicious or low quality contributions as they come in.

name: Contribution Quality Check

on:
  pull_request:
    types: [opened]
  issues:
    types: [opened]

permissions:
  pull-requests: write
  issues: write
  models: read

jobs:
  quality-check:
    runs-on: ubuntu-latest
    steps:
      - name: Detect spam or low-quality content
        uses: actions/ai-inference@v1
        id: ai
        with:
          prompt: |
            Is this GitHub ${{ github.event_name == 'issues' && 'issue' || 'pull request' }} spam, AI-generated slop, or low quality?
            
            Title: ${{ github.event.issue.title || github.event.pull_request.title }}
            Body: ${{ github.event.issue.body || github.event.pull_request.body }}
            
            Respond with one of: spam, ai-generated, needs-review, or ok
          system-prompt: You detect spam and low-quality contributions. Be conservative - only flag obvious spam or AI slop.
          model: openai/gpt-4o-mini
          temperature: 0.1

      - name: Apply label if needed
        if: steps.ai.outputs.response != 'ok'
        uses: actions/github-script@v7
        with:
          script: |
            const label = `${{ steps.ai.outputs.response }}`;
            const number = ${{ github.event.issue.number || github.event.pull_request.number }};
            
            if (label && label !== 'ok') {
              await github.rest.issues.addLabels({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: number,
                labels: [label]
              });
            }

This workflow auto-screens new issues and pull requests for spam, AI slop, and low-quality content, then auto-labels them based on the LLM’s judgment.
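One caveat worth guarding against: even at a low temperature, a model can wrap its verdict in extra words (“This looks like spam to me”), which would become a malformed label name. A small normalization helper — hypothetical, not part of the workflow above — keeps the output inside a known allowlist before anything touches the API:

```javascript
// Hypothetical helper for the "Apply label if needed" step: coerce the raw
// model response into one of the labels the workflow expects. Any response
// that isn't an exact match falls back to 'needs-review', so a human takes
// a look instead of a junk label being created.
const ALLOWED_VERDICTS = ['spam', 'ai-generated', 'needs-review', 'ok'];

function normalizeVerdict(raw) {
  const verdict = String(raw ?? '').trim().toLowerCase();
  return ALLOWED_VERDICTS.includes(verdict) ? verdict : 'needs-review';
}

console.log(normalizeVerdict('SPAM\n'));           // spam
console.log(normalizeVerdict('Looks fine to me')); // needs-review
```

In the workflow, you would run `steps.ai.outputs.response` through something like this at the top of the `github-script` step, and skip labeling when the result is `ok`.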

Tip: If the repo doesn’t already have spam or needs-review labels, addLabels will create them with default styling. If you want custom colors or descriptions, pre-create them.
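If you’d rather pre-create them, a step like the following could run ahead of the quality check (a sketch — the colors and descriptions are placeholders you’d pick yourself). `github.rest.issues.createLabel` fails with a 422 when the label already exists, so that error is swallowed:

```yaml
      - name: Ensure triage labels exist
        uses: actions/github-script@v7
        with:
          script: |
            // Pre-create the labels this workflow applies so they get
            // deliberate colors instead of addLabels' auto-generated styling.
            const labels = [
              { name: 'spam',         color: 'b60205', description: 'Flagged as likely spam' },
              { name: 'ai-generated', color: 'fbca04', description: 'Suspected AI-generated content' },
              { name: 'needs-review', color: '0e8a16', description: 'Needs a human look' },
            ];
            for (const label of labels) {
              try {
                await github.rest.issues.createLabel({
                  owner: context.repo.owner,
                  repo: context.repo.repo,
                  ...label,
                });
              } catch (error) {
                if (error.status !== 422) throw error; // 422: label already exists
              }
            }
```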

You can also check out these related projects: github/ai-assessment-comment-labeler (MIT license) and github/ai-moderator (MIT license).

Continuous resolver

Problem: Your repo has hundreds of open issues, many of them already fixed or outdated. Closing them manually would take hours.

Solution: Run a scheduled workflow that identifies resolved or no-longer-relevant issues and pull requests, and either comments with context or closes them.

name: Continuous AI Resolver

on:
  schedule:
    - cron: '0 0 * * 0' # Runs every Sunday at midnight UTC
  workflow_dispatch:

permissions:
  issues: write
  pull-requests: write

jobs:
  resolver:
    runs-on: ubuntu-latest
    steps:
      - name: Run resolver
        uses: ashleywolf/continuous-ai-resolver@main
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}

Note: The above code references an existing action in ashleywolf/continuous-ai-resolver (MIT license).

This makes it easier for contributors to find active, relevant work. By automatically identifying and addressing stale issues, you prevent the dreaded “issue pileup” that discourages new contributors and makes it harder to spot actual problems that need attention.

New contributor onboarding

Problem: A first-time contributor opens a pull request, but they’ve missed key steps from your CONTRIBUTING.md.

Solution: Send them a friendly, AI-generated welcome message with links to guidelines and any helpful suggestions.

name: Welcome New Contributors

on:
  pull_request:
    types: [opened]

permissions:
  pull-requests: write
  models: read

jobs:
  welcome:
    runs-on: ubuntu-latest
    if: github.event.pull_request.author_association == 'FIRST_TIME_CONTRIBUTOR'
    steps:
      - name: Generate welcome message
        uses: actions/ai-inference@v1
        id: ai
        with:
          prompt: |
            Write a friendly welcome message for a first-time contributor. Include:
            1. Thank them for their first PR
            2. Mention checking CONTRIBUTING.md
            3. Offer to help if they have questions
            
            Keep it brief and encouraging.
          model: openai/gpt-4o-mini
          temperature: 0.7

      - name: Post welcome comment
        uses: actions/github-script@v7
        with:
          script: |
            const message = `${{ steps.ai.outputs.response }}`;
            await github.rest.issues.createComment({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: ${{ github.event.pull_request.number }},
              body: message
            });

This makes contributors feel welcome and sets them up for success, reducing rework and speeding up merges.

Why these?

These examples hit the biggest pain points we hear from maintainers: triage, deduplication, spam handling, backlog cleanup, and onboarding. They’re quick to try, safe to run, and easy to tweak. Even one can save you hours per month.

Best practices 

  • Start with one workflow and expand from there
  • Keep maintainers in the loop until you trust the automation
  • Customize prompts so the AI matches your project’s tone and style
  • Monitor results and tweak as needed
  • Avoid one-size-fits-all automation, unreviewed changes, or anything that spams your contributors

Get started today

If you’re ready to experiment with AI:

  1. Enable GitHub Models in your repository settings
  2. Start with the playground to test prompts and models
  3. Save working prompts as .prompt.yml files in your repo
  4. Build your first action using the examples above
  5. Share with the community — we’re all learning together!
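As a starting point for step 3, a prompt file might look like the following — a sketch of the `.prompt.yml` format used by GitHub Models, where the file path, name, and template variables here are made up for illustration:

```yaml
# triage.prompt.yml (hypothetical name)
name: Contribution quality check
description: Classifies new issues and PRs as spam, AI slop, or ok
model: openai/gpt-4o-mini
modelParameters:
  temperature: 0.1
messages:
  - role: system
    content: You detect spam and low-quality contributions. Be conservative.
  - role: user
    content: "Title: {{title}}\nBody: {{body}}"
```

Storing prompts as files in the repo lets you review and iterate on them through pull requests, the same way you would any other code.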

The more we share what works, the better these tools will get. If you build something useful, add it to the Continuous AI Awesome List.

If you’re looking for more, join the Maintainer Community >

The post How GitHub Models can help open source maintainers focus on what matters appeared first on The GitHub Blog.
