The latest developer news & insights - The GitHub Blog
https://github.blog/news-insights/
Updates, ideas, and inspiration from GitHub to help developers build and design software.

GitHub availability report: February 2026
https://github.blog/news-insights/company-news/github-availability-report-february-2026/
Thu, 12 Mar 2026 03:23:54 +0000

The post GitHub availability report: February 2026 appeared first on The GitHub Blog.

In February, we experienced six incidents that resulted in degraded performance across GitHub services.

We recognize the impact these outages have had on teams, workflows, and overall confidence in our platform. Earlier today, we released a blog post outlining the root causes of recent incidents and the steps GitHub is taking to make our systems more resilient moving forward. Thank you for your patience as we work through near-term and long-term investments we’re making.

Below, we go over the six major incidents specific to February.

February 02 17:41 UTC (lasting 1 hour and 5 minutes)

From January 31, 2026, 00:30 UTC, to February 2, 2026, 18:00 UTC, the Dependabot service was degraded and failed to create 10% of automated pull requests. This was caused by a cluster failover that left the service connected to a read-only database.

We mitigated the incident by pausing Dependabot queues until traffic was properly routed to healthy clusters. All failed jobs were identified and restarted.

We added new monitors and alerts to reduce our time to detect issues like this and to prevent a recurrence.

February 02 19:03 UTC (lasting 5 hours and 53 minutes)

On February 2, 2026, between 18:35 UTC and 22:20 UTC, GitHub Actions hosted runners and GitHub Codespaces were unavailable, with service degraded until full recovery at 23:10 UTC for standard runners, February 3, 2026, at 00:30 UTC for larger runners, and February 3 at 00:15 UTC for Codespaces. During this time, Actions jobs queued and timed out while waiting to acquire a hosted runner. Other GitHub features that leverage this compute infrastructure were similarly impacted, including Copilot coding agent, Copilot code review, CodeQL, Dependabot, GitHub Enterprise Importer, and GitHub Pages. All regions and runner types were impacted. Codespaces creation and resume operations also failed in all regions. Self-hosted runners for Actions on other providers were not impacted.

This outage was caused by a loss of telemetry that cascaded into security policies being mistakenly applied to backend storage accounts in our underlying compute provider. Those policies blocked access to critical VM metadata, causing all VM create, delete, reimage, and other operations to fail. This was mitigated by rolling back the policy changes, which started at 22:15 UTC. As VMs came back online, our runners worked through the backlog of requests that hadn’t timed out.

We are working with our compute provider to improve our incident response and engagement time, improve early detection, and ensure safe rollout should similar changes occur in the future.

February 09 16:19 UTC (lasting 1 hour and 21 minutes) and February 09 19:01 UTC (lasting 1 hour and 8 minutes)

On February 9, 2026, GitHub experienced two related periods of degraded availability affecting github.com, the GitHub API, GitHub Actions, Git operations, GitHub Copilot, and other services. The first period occurred between 16:12 UTC and 17:39 UTC, and the second between 18:53 UTC and 20:09 UTC. In total, users experienced approximately 2 hours and 43 minutes of degraded service across the two incidents.

During both incidents, users encountered errors loading pages on github.com, failures when pushing or pulling code over HTTPS, failures starting or completing GitHub Actions workflow runs, and errors using GitHub Copilot. Additional services including GitHub Issues, pull requests, webhooks, Dependabot, GitHub Pages, and GitHub Codespaces experienced intermittent errors. SSH-based Git operations were not affected during either incident.

Our investigation determined that both incidents shared the same underlying cause: a configuration change to a user settings caching mechanism caused a large volume of cache rewrites to occur simultaneously. In the first incident, asynchronous rewrites overwhelmed a shared infrastructure component responsible for coordinating background work, which led to cascading failures and connection exhaustion in the service proxying Git operations over HTTPS. We mitigated this incident by disabling async cache rewrites and restarting the affected Git proxy service across multiple datacenters.

The second incident arose when an additional source of cache updates, not addressed by the initial mitigation, introduced a high volume of synchronous writes. This caused replication delays, resulting in a similar cascade of failures and again leading to connection exhaustion in the Git HTTPS proxy. We mitigated by disabling the source of the cache rewrites and again restarting Git proxy.

We are taking the following immediate steps:

  • We optimized the caching mechanism to avoid write amplification and added self-throttling during bulk updates.
  • We are adding safeguards to ensure the caching mechanism responds more quickly to rollbacks and strengthening how changes to these caching systems are planned, validated, and rolled out with additional checks.
  • We are fixing the underlying cause of connection exhaustion in our Git HTTPS proxy layer so the proxy can recover from this failure mode automatically without requiring manual restarts.
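One common way to implement that kind of self-throttling during bulk cache updates (a generic pattern, not necessarily GitHub’s exact fix) is to add jitter to cache TTLs so entries don’t all expire and rewrite at the same moment:

```typescript
// Generic sketch of TTL jitter: spreading cache-entry expirations out so a
// configuration change doesn't trigger a simultaneous rewrite of every entry.
// All names and values here are illustrative, not GitHub internals.
const BASE_TTL_MS = 12 * 60 * 60 * 1000; // 12-hour refresh interval
const JITTER_FRACTION = 0.2;             // expire up to 20% early, at random

function expiryWithJitter(now: number): number {
  // Each entry picks a slightly different lifetime, so the refresh writes
  // from a bulk update are smeared over a window instead of arriving at once.
  const jitter = Math.random() * JITTER_FRACTION * BASE_TTL_MS;
  return now + BASE_TTL_MS - jitter;
}

// Example: every expiry lands in the window (now + 9.6h, now + 12h].
const now = Date.now();
const expiry = expiryWithJitter(now);
console.log(expiry > now + 0.8 * BASE_TTL_MS && expiry <= now + BASE_TTL_MS);
```

The same smoothing idea applies whether the trigger is a TTL change or a mass invalidation: the cheaper fix is usually to spread the rewrites, not to absorb them all at once.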

February 12 07:53 UTC (lasting 2 hours and 3 minutes)

On February 12, 2026, between 00:51 UTC and 09:35 UTC, users attempting to create or resume Codespaces experienced elevated failure rates across Europe, Asia, and Australia, peaking at a 90% failure rate. Impact started in UK South and spread progressively to other regions. US regions were not impacted.

The failures were caused by an authorization claim change in a core networking dependency, which led to codespace pool provisioning failures. Alerts detected the issue but were not assigned the appropriate severity, leading to delayed detection and response. Learning from this, we have improved our validation of changes to this backend service and our monitoring during rollout. We have also updated our alerting thresholds to catch issues before they impact customers and improved our automated failover mechanisms to cover this area.

February 12 10:38 UTC (lasting 34 minutes)

On February 12, 2026, from 09:16 to 11:01 UTC, users attempting to download repository archives (tar.gz/zip) that include Git LFS objects received errors. Standard repository archives without LFS objects were not affected. On average, the archive download error rate was 0.0042% and peaked at 0.0339% of requests to the service. This was caused by the deployment of an incorrect network configuration in the LFS Service that caused service health checks to fail and an internal service to be incorrectly marked as unreachable.

We mitigated the incident by manually applying the corrected network setting. Additional checks for corruption and auto-rollback detection were added to prevent this type of configuration issue.


Follow our status page for real-time updates on status changes and post-incident recaps. To learn more about what we’re working on, check out the engineering section on the GitHub Blog.

Addressing GitHub’s recent availability issues
https://github.blog/news-insights/company-news/addressing-githubs-recent-availability-issues-2/
Wed, 11 Mar 2026 21:41:51 +0000

GitHub recently experienced several availability incidents. We understand the impact these outages have on our customers and are sharing details on the stabilization work we’re prioritizing right now.

The post Addressing GitHub’s recent availability issues appeared first on The GitHub Blog.


Over the past several weeks, GitHub has experienced significant availability and performance issues affecting multiple services. Three of the most significant incidents happened on February 2, February 9, and March 5.

First and foremost, we take responsibility. We have not met our own availability standards, and we know that reliability is foundational to the work you do every day. We understand the impact these outages have had on your teams, your workflows, and your confidence in our platform.

Here, we’ll unpack what’s been causing these incidents and what we’re doing to make our systems more resilient moving forward.

What happened

These incidents have occurred during a period of extremely rapid usage growth across our platform, exposing scaling limitations in parts of our current architecture. Specifically, we’ve found that recent platform instability was primarily driven by rapid load growth, architectural coupling that allowed localized issues to cascade across critical services, and an inability of the system to adequately shed load from misbehaving clients.

Before we cover what we are doing to prevent these issues going forward, it is worth diving into the details of the most impactful incidents.

February 9 incident

On Monday, February 9, we experienced a high‑impact incident due to a core database cluster that supports authentication and user management becoming overloaded. The mistakes that led to the problem were made days and weeks earlier.

In early February, two very popular client-side applications that make a significant number of API calls against our servers were released, with unintentional changes driving a more-than-tenfold increase in the read traffic they generated. Because these applications are updated by users over time, the increase in usage doesn’t become evident right away; it builds gradually as more users upgrade.

On Saturday, February 7, we deployed a new model. While trying to get it to customers as quickly as possible, we shortened the refresh TTL on a cache storing user settings from 12 hours to 2. The change was necessary because limited capacity meant the model was released to a narrower set of customers. At this point, everything was operating normally because weekend load is significantly lower, and we didn’t have sufficiently granular alarms to detect the looming issue.

Three things then compounded on February 9: our regular peak load, many customers updating to the new version of the client apps as they started their week, and another new model release. At this point, the write volume caused by the shortened TTL and the read volume from the client apps combined to overwhelm the database cluster. While the TTL change was quickly identified as a culprit, it took much longer to understand why the read load kept increasing, which prolonged the incident. Further, because of the interaction between different services after the database cluster became overwhelmed, we needed to block the extra load further up the stack, and we didn’t have sufficiently granular switches to block only the offending traffic at that level.

The investigation for the February 9 incident raised a lot of important questions about why the user settings were stored in this particular database cluster and in this particular way. The architecture was originally selected for simplicity at a time when there were very few models and very few governance controls and policies related to those models. But over time, something that was a few bytes per user grew into kilobytes. We didn’t catch how dangerous that was because the load was visible only during new model or policy rollouts and was masked by the TTL. Since this database cluster houses data for authentication and user management, any services that depend on these were impacted.

GitHub Actions incidents on February 2 and March 5

We also had two significant instances where our failover solution was either insufficient or didn’t function correctly:

  • Actions hosted runners had a significant outage on February 2. Most cloud infrastructure issues in this area typically do not cause impact as they occur in a limited number of regions, and we automatically shift traffic to healthy regions. However, in this case, there was a cascading set of events triggered by a telemetry gap that caused existing security policies to be applied to key internal storage accounts affecting all regions. This blocked access to VM metadata on VM creates and halted hosted runner lifecycle operations.
  • Another impactful incident for Actions occurred on March 5. Automated failover has been progressively rolling out across our Redis infrastructure, and on this day, a failover occurred for a Redis cluster used by Actions job orchestration. The failover performed as expected, but a latent configuration issue meant the failover left the cluster in a state with no writable primary. With writes failing and failover not available as a mitigation, we had to correct the state manually to mitigate. This was not an aggressive rollout or missing resiliency mechanism, but rather latent configuration that was only exposed by an event in production infrastructure.

For both of these incidents, the investigations surfaced unexpected single points of failure that we need to protect against, and showed that we need to dry-run failover procedures in production more rigorously.

Across these incidents, contributing factors made the impact much broader and longer-lasting than necessary, including:

  • Insufficient isolation between critical path components in our architecture
  • Inadequate safeguards for load shedding and throttling
  • Gaps in end-to-end validation, in monitoring that surfaces early warning signals, and in partner coordination during incident response

What we are doing now

Our engineering teams are fully engaged in both near-term mitigations and durable longer-term architecture and process investments. We are addressing two common themes: managing rapidly increasing load by focusing on resilience and isolation of critical paths, and preventing localized failures from ever causing broad service degradation.

In the near term, we are prioritizing stabilization work to reduce the likelihood and impact of incidents. This includes:

  1. Redesigning our user cache system, which hosts model policies and more, to accommodate significantly higher volume in a segmented database cluster.
  2. Expediting capacity planning and completing a full audit of fundamental health for critical data and compute infrastructure to address urgent growth.
  3. Further isolating key dependencies so that critical systems like GitHub Actions and Git will not be impacted by shared infrastructure issues, reducing cascade risk. We are doing this through a combination of removing or handling dependency failures where possible and isolating the dependencies themselves.
  4. Protecting downstream components during spikes to prevent cascading failures while prioritizing critical traffic loads.
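The load shedding described in item 4 can be sketched generically. The class below is an illustration of priority-aware shedding under assumed names and thresholds, not GitHub’s actual mechanism:

```typescript
// Illustrative priority-aware load shedder (not GitHub's implementation).
// Critical traffic is admitted up to total capacity; best-effort traffic is
// shed first, once in-flight work approaches a lower ceiling.
type Priority = "critical" | "best-effort";

class LoadShedder {
  private inFlight = 0;
  constructor(
    private readonly capacity: number,        // hard limit for all traffic
    private readonly bestEffortLimit: number, // lower ceiling for low priority
  ) {}

  tryAdmit(priority: Priority): boolean {
    const limit = priority === "critical" ? this.capacity : this.bestEffortLimit;
    if (this.inFlight >= limit) return false; // shed instead of queueing
    this.inFlight++;
    return true;
  }

  release(): void {
    this.inFlight--;
  }
}

// With capacity 10 and a best-effort ceiling of 6, a spike of low-priority
// requests still leaves headroom for critical ones.
const shedder = new LoadShedder(10, 6);
let admitted = 0;
for (let i = 0; i < 8; i++) if (shedder.tryAdmit("best-effort")) admitted++;
console.log(admitted);                      // 6
console.log(shedder.tryAdmit("critical"));  // true
```

The key design point is rejecting early rather than queueing: queues convert overload into latency for everyone, while shedding keeps the critical path responsive.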

In parallel, we are accelerating deeper platform investments to deliver on GitHub’s commitment to supporting sustained, high-rate growth with high availability. These include:

  1. Migrating our infrastructure to Azure to accommodate rapid growth, enabling both vertical scaling within regions and horizontal scaling across regions. In the short term, this provides a hybrid approach for infrastructure resiliency. As of today, 12.5% of all GitHub traffic is served from our Azure Central US region, and we are on track to serve 50% of all GitHub traffic by July. Longer term, this enables simplification of our infrastructure architecture and more global resiliency by adopting managed services.
  2. Breaking apart the monolith into more isolated services and data domains as appropriate, so we can scale independently, enable more isolated change management, and implement localized decisions about shedding traffic when needed.

We are also continuing tactical repair work from every incident.

Our commitment to transparency

We recognize that it’s important to provide you with clear communication and transparency when something goes wrong. We publish summaries of all incidents that result in degraded performance of GitHub services on our status page and in our monthly availability reports. The February report will publish later today with a detailed explanation of incidents that occurred last month, and our March report will publish in April.

Given the scope of recent incidents, we felt it was important to address them with the community today. We know GitHub is critical digital infrastructure, and we are taking urgent action to ensure our platform is available when and where you need it. Thank you for your patience as we strengthen the stability and resilience of the GitHub platform.

How AI is reshaping developer choice (and Octoverse data proves it)
https://github.blog/ai-and-ml/generative-ai/how-ai-is-reshaping-developer-choice-and-octoverse-data-proves-it/
Thu, 19 Feb 2026 17:00:00 +0000

AI is rewiring developer preferences through convenience loops. Octoverse 2025 reveals how AI compatibility is becoming the new standard for technology choice.

The post How AI is reshaping developer choice (and Octoverse data proves it) appeared first on The GitHub Blog.


You know that feeling when a sensory trigger instantly pulls you back to a moment in your life? For me, it’s Icy Hot. One whiff and I’m back to 5 a.m. formation time in the army. My shoulders tense. My body remembers. It’s not logical. It’s just how memory works. We build strong associations between experiences and cues around them. Those patterns get encoded and guide our behavior long after the moment passes.

That same pattern is happening across the software ecosystem as AI becomes a default part of how we build. For example, we form associations between convenience and specific technologies. Those loops influence what developers reach for, what they choose to learn, and ultimately, which technologies gain momentum.

Octoverse 2025 data illustrates this in real time. And it’s not subtle. 

In August 2025, TypeScript surpassed both Python and JavaScript to become the most-used language on GitHub for the first time ever. That’s the headline. But the deeper story is what it signals: AI isn’t just speeding up coding. It’s reshaping which languages, frameworks, and tools developers choose in the first place.

Chart: Top 10 programming languages on GitHub, 2023–2025. TypeScript rises to #1 in 2025, overtaking Python (#2) and JavaScript (#3); the rest of the top 10 are Java, C#, PHP, Shell, C++, HCL, and Go.

The convenience loop is how memory becomes behavior

When a task or process goes smoothly, your brain remembers. Convenience captures attention. Reduced friction becomes a preference—and preferences at scale can shift ecosystems.

Eighty percent of new developers on GitHub use Copilot within their first week. Those early exposures reset the baseline for what “easy” means.

When AI handles boilerplate and error-prone syntax, the penalty for choosing powerful but complex languages disappears. Developers stop avoiding tools with high overhead and start picking based on utility instead. The language adoption data shows this behavioral shift.

The shell scripting trend matters in particular. We didn’t suddenly love Bash. AI absorbed the friction that made shell scripting painful. So now we use the right tool for the job without the usual cost.

This is what Octoverse is really showing us: developer choice is shifting toward technologies that work best with the tools we’re already using.

The technical reason behind the shift

There are concrete, technical reasons AI performs better with strongly typed languages.

Strongly typed languages give AI much clearer constraints. In JavaScript, a variable could be anything. In TypeScript, declaring x: string immediately eliminates all non-string operations. That constraint matters. Constraints help AI generate more reliable, contextually correct code. And developers respond to that reliability.
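A minimal TypeScript example (hypothetical, not from the report) shows how a declared type prunes the space of valid completions:

```typescript
// Hypothetical example: because `x` is declared as string, only string
// operations type-check inside this function, which constrains what an
// AI assistant can plausibly suggest here.
function shout(x: string): string {
  return x.toUpperCase() + "!";
}

// shout(42) would be rejected at compile time:
// "Argument of type 'number' is not assignable to parameter of type 'string'."
console.log(shout("ship it")); // SHIP IT!
```

In plain JavaScript, the same function body would happily accept a number and fail at runtime; the annotation turns that ambiguity into a compile-time constraint.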

That effect compounds when you look at AI model integration across GitHub. Over 1.1 million public repositories now use LLM SDKs. This is mainstream adoption, not fringe experimentation. And it’s concentrating around the languages and frameworks that work best with AI.

Chart: Cumulative count of public projects using generative AI model SDKs, 2021–2025, climbing from near zero to over 1.1 million repositories.

Moving fast without breaking your architecture 

AI tools are amplifying developer productivity in ways we haven’t seen before. The question is how to use them strategically. The teams getting the best results aren’t fighting the convenience loop. They’re designing their workflows to harness it while maintaining the architectural standards that matter.

For developers and teams

Establish patterns before you generate. AI is fantastic at following established patterns, but struggles to invent them cleanly. If you define your first few endpoints or components with strong structure, Copilot will follow those patterns. Good foundations scale. Weak ones get amplified.
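For instance, an illustrative pattern (not from the post): defining one well-typed handler shape early gives an assistant a template to imitate for every endpoint that follows.

```typescript
// Illustrative: a small, explicit handler pattern. Once one endpoint looks
// like this, AI suggestions for the next endpoints tend to mirror the shape.
interface ApiResult<T> {
  ok: boolean;
  data?: T;
  error?: string;
}

function getUserName(users: Map<number, string>, id: number): ApiResult<string> {
  const name = users.get(id);
  // Errors are values, not exceptions: the convention the rest of the
  // codebase (and generated code) will copy.
  return name !== undefined
    ? { ok: true, data: name }
    : { ok: false, error: `user ${id} not found` };
}

const users = new Map([[1, "mona"]]);
console.log(getUserName(users, 1).data); // mona
console.log(getUserName(users, 2).ok);   // false
```

Whether you prefer result objects or exceptions matters less than picking one convention before generation starts; the assistant amplifies whichever it sees.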

Use type systems as guardrails, not crutches. TypeScript reduces errors, but passing type checks isn’t the same as expressing correct business logic. Use types to bound the space of valid code, not as your primary correctness signal.

Test AI-generated code harder, not less. There’s a temptation to trust AI output because it “looks right” and passes initial checks. Resist that. Don’t skip testing.

For engineering leaders

Recognize the velocity jump and prepare for its costs. AI-assisted development often produces a 20–30 percent increase in throughput. That’s a win. But higher throughput means architectural drift can accumulate faster without the right guardrails.

Standardize before you scale. Document patterns. Publish template repositories. Make your architectural decisions explicit. AI tools will mirror whatever structures they see.

Track what AI is generating, not just how much. The Copilot usage metrics dashboard (now in public preview for Enterprise) lets you see beyond acceptance rates. You can track daily and weekly active users, agent adoption percentages, lines of code added and deleted, and language and model usage patterns across your organization. The dashboard answers a critical question: how well are teams using AI? 

Use these metrics to identify patterns. If you’re seeing high agent adoption but code quality issues in certain teams, that’s a signal those teams need better prompt engineering training or stricter review standards. If specific languages or models correlate with higher defect rates, that’s data you can act on. The API provides user-level granularity for deeper analysis, so you can build custom dashboards that track the metrics that matter most to your organization.
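As a hypothetical sketch of what building on that user-level data might look like, the aggregation below assumes a simplified record shape rather than the API’s exact schema:

```typescript
// Hypothetical custom-dashboard building block: roll up daily metric records
// by language. The field names are simplified assumptions, not the exact
// schema returned by the Copilot metrics API.
interface DailyMetric {
  date: string;
  total_active_users: number;
  language: string;
}

function activeUsersByLanguage(days: DailyMetric[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const d of days) {
    totals.set(d.language, (totals.get(d.language) ?? 0) + d.total_active_users);
  }
  return totals;
}

const sample: DailyMetric[] = [
  { date: "2026-02-01", total_active_users: 40, language: "typescript" },
  { date: "2026-02-02", total_active_users: 35, language: "typescript" },
  { date: "2026-02-01", total_active_users: 20, language: "go" },
];
console.log(activeUsersByLanguage(sample).get("typescript")); // 75
```

From a rollup like this, correlating language or model usage with defect rates is a join against your own quality data rather than anything the dashboard provides directly.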

Invest in architectural review capacity. As developers become more productive, senior engineering time becomes more valuable, not less. Someone must ensure the system remains coherent as more code lands faster.

Make architectural decisions explicit and accessible. AI learns from context. ADRs, READMEs, comments, and well-structured repos all help AI generate code aligned with your design principles.

What the Octoverse 2025 findings mean for you

The technology choices you make today are shaped by forces you may not notice: convenience, habit, AI-assisted flow, and how much friction each stack introduces.

💡 Pro tip: Look at the last three technology decisions you made. Language for a new project, framework for a feature, tool for your workflow. How much did AI tooling support factor into those choices? If the answer is “not much,” I’d bet it factored in more than you realized.

AI isn’t just changing how fast we code. It’s reshaping the ecosystem around which tools work best with which languages. Once those patterns set in, reversing them becomes difficult.

If you’re choosing technologies without considering AI compatibility, you’re setting yourself up for future friction. If you’re building languages or frameworks, AI support can’t be an afterthought.

Here’s a challenge

Next time you start a project, notice which technologies feel “natural” to reach for. Notice when AI suggestions feel effortless and when they don’t. Those moments of friction and flow are encoding your future preferences right now.

Are you choosing your tools consciously, or are your tools choosing themselves through the path of least resistance?

We’re all forming our digital “Icy Hot” moments. The trick is being aware of them.

Looking to stay one step ahead? Read the latest Octoverse report and try the Copilot usage metrics dashboard.

What to expect for open source in 2026
https://github.blog/open-source/maintainers/what-to-expect-for-open-source-in-2026/
Wed, 18 Feb 2026 18:41:42 +0000

Let’s dig into 2025’s open source data on GitHub to see what we can learn about the future.

The post What to expect for open source in 2026 appeared first on The GitHub Blog.


Over the years (decades), open source has grown and changed along with software development, evolving as the open source community becomes more global.

But with any growth comes pain points. In order for open source to continue to thrive, it’s important for us to be aware of these challenges and determine how to overcome them.

To that end, let’s take a look at what Octoverse 2025 reveals about the direction open source is taking. Feel free to check out the full Octoverse report, and make your own predictions.

Growth that’s global in scope

In 2025, GitHub saw about 36 million new developers join our community. While that number alone is huge, it’s also important to see where in the world that growth comes from. India added 5.2 million developers, and there was significant growth across Brazil, Indonesia, Japan, and Germany. 

What does this mean? It’s clear that open source is becoming more global than it was before. It also means that oftentimes, the majority of developers live outside the regions where the projects they’re working on originated. This is a fundamental shift. While there have always been projects with global contributors, it’s now starting to become a reality for a greater number of projects.

Given this global scale, open source can’t rely on contributors sharing work hours, communication strategies, cultural expectations, or even language. The projects that are going to thrive are the ones that support the global community.

One of the best ways to do this is through explicit communication maintained in areas like contribution guidelines, codes of conduct, review expectations, and governance documentation. These are essential infrastructure for large projects that want to support this community. Projects that don’t include these guidelines will have trouble scaling as the number of contributors increases across the globe. Those that do provide them will be more resilient, sustainable, and will provide an easier path to onboard new contributors.

The double-edged sword of AI

AI has had a major role in accelerating global participation over 2025. It’s created a pathway that makes it easier for new developers to enter the coding world by dramatically lowering the barrier to entry. It helps contributors understand unfamiliar codebases, draft patches, and even create new projects from scratch. Ultimately, it has helped new developers make their first contributions sooner.

However, it has also created a lot of noise, or what is called “AI slop”: a large quantity of low-quality, and oftentimes inaccurate, contributions that don’t add value to the project, or contributions that would require so much work to incorporate that it would be faster to implement the solution yourself.

This makes it harder than ever to maintain projects and make sure they continue moving forward in the intended direction. Auto-generated issues and pull requests increase volume without always increasing the quality of the project. As a result, maintainers need to spend more time reviewing contributions from developers with vastly variable levels of skill. In a lot of cases, the amount of time it takes to review the additional suggestions has risen faster than the number of maintainers.

Even if you remove AI slop from the equation, the sheer volume of contributions has grown, potentially to unmanageable levels. It can feel like a denial of service attack on human attention.

This is why maintainers have been asking: how do you sift through the noise and find the most important contributions? Luckily, we’ve added some tools to help. There are also a number of open source AI projects specifically trying to address the AI slop issue. In addition, maintainers have been using AI defensively, using it to triage issues, detect duplicate issues, and handle simple maintenance like the labeling of issues. By helping to offload some of the grunt work, it gives maintainers more time to focus on the issues that require human intervention and decision making.

Expect the open source projects that continue to expand and grow over the next year to be those that incorporate AI as part of their community infrastructure. To deal with this quantity of information, AI cannot be just a coding assistant; it needs to ease the pressure of being a maintainer and make that work more scalable.

Record growth is healthy, if it’s planned for

On the surface, record global growth looks like success. But this influx of newer developers can also be a burden. The sheer popularity of projects that cover basics, such as contributing your first pull request to GitHub, shows that a lot of these new developers are very much in their infancy in terms of comfort with open source. There’s uncertainty about how to move forward and how to interact with the community. Not to mention challenges with repetitive onboarding questions and duplicate issues.

This results in a growing gap between the number of participants in open source projects and the number of maintainers with a sense of ownership. As new developers grow at record rates, this gap will widen.

The way to address this is going to be less about having individuals serving as mentors—although that will still be important. It will be more about creating durable systems that show organizational maturity. What does this mean? While not an exhaustive list, here are some items:

  • Having a clear, defined path to move from contributor to reviewer to maintainer. Be aware that this can be difficult without a mentor to help guide contributors along the way.
  • Shared governance models that don’t rely on a single timezone or small group of people.
  • Documentation that provides guidance on how to contribute and the goals of the project.

By helping to make sure that the number of maintainers keeps relative pace with the number of contributors, projects will be able to take advantage of the record growth. This does create an additional burden on the current maintainers, but the goal is to invest in a solid foundation that will result in a more stable structure in the future. Projects that don’t do this will have trouble functioning at the increased global scale and might start to stall or see problems like increased technical debt.

But what are people building?

It can’t be denied that AI was a major focus—about 60% of the top-growing projects were AI focused. However, several had nothing to do with AI. These projects (e.g., Home Assistant, VS Code, Godot) continue to thrive because they meet real needs and support broad, international communities.

A list of the fastest-growing open source projects by contribution: zen-browser/desktop, cline/cline, vllm-project/vllm, astral-sh/uv, microsoft/vscode, infiniflow/ragflow, sgl-project/sglang, continuedev/continue, comfyanonymous/ComfyUI, and home-assistant/core.

Just as the developer base is growing globally, so are the projects that garner the most interest. Projects that support a global community and address its needs will continue to be popular and well supported.

This reinforces how open source has become a global phenomenon rather than a local one.

What this year will likely hold

Open source in 2026 won’t be defined by a single trend that emerged over 2025. Instead, it will be shaped by how the community responds to the pressures identified over the last year, particularly with the surge in AI and an explosively growing global community.

For developers, this means that it’s important to invest in processes as much as code. Open source is scaling in ways that would have been impossible to imagine a decade ago, and the important question going forward isn’t how much it will grow—it’s how can you make that growth sustainable.

Read the full Octoverse report >

The post What to expect for open source in 2026 appeared first on The GitHub Blog.

GitHub availability report: January 2026 https://github.blog/news-insights/company-news/github-availability-report-january-2026/ Wed, 11 Feb 2026 23:12:34 +0000 https://github.blog/?p=93780 In January, we experienced two incidents that resulted in degraded performance across GitHub services.

The post GitHub availability report: January 2026 appeared first on The GitHub Blog.

In January, we experienced two incidents that resulted in degraded performance across GitHub services.

January 13 09:38 UTC (lasting 46 minutes)

On January 13, 2026, from 09:25 to 10:11 UTC, GitHub Copilot experienced a service outage with error rates averaging 18% and peaking at 100%. This impacted chat features across Copilot Chat, VS Code, JetBrains IDEs, and other dependent products. The incident was triggered by a configuration error introduced during a model update and was initially mitigated by rolling back the change. A secondary recovery phase extended until 10:46 UTC due to the upstream provider OpenAI experiencing degraded availability for the GPT‑4.1 model.

We have completed a detailed root‑cause review and are implementing stronger monitors, improved test environments, and tighter configuration safeguards to prevent recurrence and accelerate detection and mitigation of future issues.

January 15 16:56 UTC (lasting 1 hour and 40 minutes)

On January 15, 2026, between 16:40 UTC and 18:20 UTC, we observed increased latency and timeouts across issues, pull requests, notifications, actions, repositories, API, account login, and an internal service, Alive, that powers live updates on GitHub. On average, 1.8% of combined web and API requests failed, briefly peaking at 10% early in the incident. The majority of impact was observed for unauthenticated users, but authenticated users were impacted as well.

This was caused by an infrastructure update to some of our data stores. Upgrading this infrastructure to a new major version resulted in unexpected resource contention, leading to distributed impact in the form of slow queries and increased timeouts across services that depend on these datasets. We mitigated this by rolling back to the previous stable version.

We are working to improve our validation process for these types of upgrades to catch issues that only occur under high load before full release, improve detection time, and reduce mitigation times in the future.

Looking ahead 

Please note that the incidents that occurred on February 9, 2026, will be included in next month’s February Availability Report. In the meantime, you can refer to the incident reports on the GitHub Status site for more details.


Follow our status page for real-time updates on status changes and post-incident recaps. To learn more about what we’re working on, check out the engineering section on the GitHub Blog.


Pick your agent: Use Claude and Codex on Agent HQ  https://github.blog/news-insights/company-news/pick-your-agent-use-claude-and-codex-on-agent-hq/ Wed, 04 Feb 2026 17:00:19 +0000 https://github.blog/?p=93566 Claude by Anthropic and OpenAI Codex are now available in public preview on GitHub and VS Code with a Copilot Pro+ or Copilot Enterprise subscription. Here's what you need to know and how to get started today.

The post Pick your agent: Use Claude and Codex on Agent HQ  appeared first on The GitHub Blog.


Context switching equals friction in software development. Today, we’re removing some of that friction with the latest updates to Agent HQ, which lets you run coding agents from multiple providers directly inside GitHub and your editor, keeping context, history, and review attached to your work.

Copilot Pro+ and Copilot Enterprise users can now run multiple coding agents directly inside GitHub, GitHub Mobile, and Visual Studio Code (with Copilot CLI support coming soon). That means you can use agents like GitHub Copilot, Claude by Anthropic, and OpenAI Codex (the latter two in public preview) today.

With Codex, Claude, and Copilot in Agent HQ, you can move from idea to implementation using different agents for different steps without switching tools or losing context. 

We’re bringing Claude into GitHub to meet developers where they are. With Agent HQ, Claude can commit code and comment on pull requests, enabling teams to iterate and ship faster and with more confidence. Our goal is to give developers the reasoning power they need, right where they need it.

Katelyn Lesse, Head of Platform, Anthropic

From faster code to better decisions 

Agent HQ also lets you compare how different agents approach the same problem. You can assign multiple agents to a task and see how Copilot, Claude, and Codex reason about tradeoffs and arrive at different solutions.

In practice, this helps you surface issues earlier by using agents for different kinds of review:  

  • Architectural guardrails: Ask one or more agents to evaluate modularity and coupling, helping identify changes that could introduce unintended side effects. 
  • Logical pressure testing: Use another agent to hunt for edge cases, async pitfalls, or scale assumptions that could cause problems in production. 
  • Pragmatic implementation: Have a separate agent propose the smallest, backward-compatible change to keep the blast radius of a refactor low.

This way of working shifts your reviews and thinking toward strategy over syntax.

Our collaboration with GitHub has always pushed the frontier of how developers build software. The first Codex model helped power Copilot and inspired a new generation of AI-assisted coding. We share GitHub’s vision of meeting developers wherever they work, and we’re excited to bring Codex to GitHub and VS Code. Codex helps engineers work faster and with greater confidence—and with this integration, millions more developers can now use it directly in their primary workspace, extending the power of Codex everywhere code gets written.

Alexander Embiricos, OpenAI 

Why running agents on GitHub matters 

GitHub is already where code lives, collaboration happens, and decisions are reviewed, governed, and shipped. 

Making coding agents native to that workflow, rather than external tools, makes them even more useful at scale. Instead of copying and pasting context between tools, documents, and threads, all discussion and proposed changes stay attached to the repository itself. 

With Copilot, Claude, and Codex working directly in GitHub and VS Code, you can: 

  • Explore tradeoffs early: Run agents in parallel to surface competing approaches and edge cases before code hardens. 
  • Keep context attached to the work: Agents operate inside your repository, issues, and pull requests instead of starting from stateless prompts. 
  • Avoid new review processes: Agent-generated changes show up as draft pull requests and comments, reviewed the same way you’d review a teammate’s work. 

There are no new dashboards to learn, and no separate AI workflows to manage. Everything runs inside the environments you already use. 

Built for teams, not just individuals 

These workflows don’t just benefit individual developers. Agent HQ gives you org-wide visibility and systematic control over how AI interacts with your codebase: 

  • Agent controls: Manage access and security policies in one place, allowing enterprise admins to define which agents and models are permitted across the organization. 
  • Code quality checks: GitHub Code Quality (in public preview) extends Copilot’s security checks to evaluate the maintainability and reliability impact of changed code, helping ensure “LGTM” reflects long-term code health. 
  • Automated first-pass review: We have integrated a code review step directly into Copilot’s workflow, allowing Copilot to address initial problems before a developer ever sees the code. 
  • Impact metrics: Use the Copilot metrics dashboard (in public preview) to track usage and impact across your entire organization, providing clear traceability for agent-generated work. 
  • Security and auditability: Maintain full control with audit logging and enterprise-grade access management, ensuring agents work with your security posture instead of against it. 

This allows teams to adopt agent-based workflows without sacrificing code quality, accountability, or trust. 

More agents coming soon 

Access to Claude and Codex will soon expand to more Copilot subscription types. In the meantime, we’re actively working with partners, including Google, Cognition, and xAI, to bring more specialized agents into GitHub, VS Code, and Copilot CLI workflows. 

Read the docs to get started >


What the fastest-growing tools reveal about how software is being built https://github.blog/news-insights/octoverse/what-the-fastest-growing-tools-reveal-about-how-software-is-being-built/ Tue, 03 Feb 2026 17:00:00 +0000 https://github.blog/?p=93551 What languages are growing fastest, and why? What about the projects that people are interested in the most? Where are new developers cutting their teeth? Let’s take a look at Octoverse data to find out.

The post What the fastest-growing tools reveal about how software is being built appeared first on The GitHub Blog.


In 2025, software development crossed a quiet threshold. In our latest Octoverse report, we found that the fastest-growing languages, tools, and open source projects on GitHub are no longer about shipping more code. Instead, they’re about reducing friction in a world where AI is helping developers build more, faster.

By looking at some of the areas of fastest growth over the past year, we can see how developers are adapting through: 

  • The programming languages that are growing most in AI-assisted development workflows.
  • The tools that win when speed and reproducibility matter.
  • The areas where new contributors are showing up (and what helps them stick).

Rather than catalog trends, we want to focus on what those signals mean for how software is being built today and what choices you might consider heading into 2026. 

The elephant in the room: TypeScript is the new #1

In August 2025, TypeScript became the most-used language on GitHub, overtaking Python and JavaScript for the first time. Over the past year, TypeScript added more than one million contributors, which was the largest absolute growth of any language on GitHub. 

A chart showing the top 10 programming languages on GitHub from 2023 to 2025. TypeScript rises to #1 in 2025, overtaking Python and JavaScript, which move to #2 and #3 respectively. Other top languages include Java, C#, PHP, Shell, C++, HCL, and Go.

Python also continued to grow rapidly, adding roughly 850,000 contributors (+48.78% YoY), while JavaScript grew more slowly (+24.79%, ~427,000 contributors). TypeScript and Python both significantly outpaced JavaScript in total and percentage growth. 

This shift signals more than a preference change. Typed languages are increasingly becoming the default for new development, particularly as AI-assisted coding becomes routine. Why is that?

In practice, a significant portion of the failures teams encounter with AI-generated code surface as type mismatches, broken contracts, or incorrect assumptions between components. Stronger type systems act as early guardrails: they can help catch errors sooner, reduce review churn, and make AI-generated changes easier to reason about before code reaches production. 

If you’re going to be using AI in your software design, which more and more developers are doing on a daily basis, strongly typed languages are your friend.

Here’s what this means in practice: 

  • If you’re starting a new project today, TypeScript is increasingly becoming the default (especially for teams using AI in daily development).
  • If you’re introducing AI-assisted workflows into an existing JavaScript codebase, adding types may reduce friction more than switching models or tools.
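The guardrail effect described above can be seen in a minimal TypeScript sketch. The interface and function names here are illustrative, not drawn from the report: the point is that a typed contract turns a whole class of AI-generated mistakes into compile-time errors rather than review churn.

```typescript
// Illustrative example: a typed contract between components.
// Names (ReviewComment, formatComment) are hypothetical, chosen for this sketch.

interface ReviewComment {
  path: string;
  line: number; // 1-based line number; a string like "42" would not type-check
  body: string;
}

// The signature documents the contract explicitly. If AI-generated code
// passes { line: "42" } or omits `body`, the compiler rejects the change
// before it ever reaches a human reviewer.
function formatComment(c: ReviewComment): string {
  return `${c.path}:${c.line} ${c.body}`;
}

console.log(formatComment({ path: "src/app.ts", line: 42, body: "nit: rename" }));
```

In an untyped codebase, the same mismatch would only surface at runtime or during review, which is exactly the friction the report describes.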

Python is key for AI

Contributor counts show who is using a language. Repository data shows what that language is being used to build. 

When we look specifically at AI-focused repositories, Python stands apart. As of August 2025, nearly half of all new AI projects on GitHub were built primarily in Python. 

A chart listing the most commonly used programming languages in AI-tagged projects on GitHub in 2025. Python ranks first with 582,000 repositories (+50.7% year over year), followed by JavaScript with 88,000 (+24.8%), TypeScript with 86,000 (+77.9%), Shell with 9,000 (+324%), and C++ with 7,800 (+11%). The chart includes brief descriptions of each language’s role in AI development.

This matters because AI projects now account for a disproportionate share of open source momentum. Six of the ten fastest-growing open source projects by contributors in 2025 were directly focused on AI infrastructure or tooling.

A table listing the fastest-growing open source projects on GitHub in 2025 by contributors. The top ten are zen-browser/desktop, cline/cline, vllm-project/vllm, astral-sh/uv, microsoft/vscode, infiniflow/ragflow, sgl-project/sglang, continuedev/continue, comfyanonymous/ComfyUI, and home-assistant/core. Growth rates range from 2,301% to 6,836%, with most projects marked as AI-focused.

Python’s role here isn’t new, but it is evolving. The data suggests a shift from experimentation toward production-ready AI systems, with Python increasingly anchoring packaging, orchestration, and deployment rather than living only in notebooks. 

Moreover, Python is likely to keep growing in 2026 as AI continues to gain support and spawn additional projects.

Here’s what this means in practice:

  • Python remains the backbone of applied AI work from training and inference to orchestration.
  • Production-focused Python skills such as packaging, typing, CI, and containerization are becoming more important than exploratory scripting alone. 

A deeper look at the top open source projects

Looking across the fastest-growing projects, a clear pattern emerges: developers are optimizing for speed, control, and predictable outcomes. 

Many of the fastest-growing tools emphasize performance and minimalism. Projects like astral-sh/uv, a package and project manager, focus on dramatically faster Python package management. This reflects a growing intolerance for slow feedback loops and non-deterministic environments. 

Having just one of these projects could be an anomaly, but having multiple indicates a clear trend. This trend aligns closely with AI-assisted workflows where iteration speed and reproducibility directly impact developer productivity. 

Here’s what this means in practice: 

  • Fast installs and deterministic builds increasingly matter as much as feature depth.
  • Tools that reduce “works on my machine” moments are winning developer mindshare.

Where first-time open source contributors are showing up

As the developer population grows, understanding where first-time contributors show up (and why) becomes increasingly important. 

A chart showing the open source projects that attracted the most first-time contributors on GitHub in 2025. The top ten are microsoft/vscode, firstcontributions/first-contributions, home-assistant/core, stackblitz/bolt.new, flutter/flutter, zen-browser/desktop, is-a-dev/register, vllm-project/vllm, comfyanonymous/ComfyUI, and ollama/ollama.

Projects like VS Code and First Contributions continued to top the list over the last year, reflecting both the scale of widely used tools and the persistent need for low-friction entry points into open source (notably, we define contributions as any content-generating activity on GitHub).

Despite this growth, basic project governance remains uneven across the ecosystem. README files are common, but contributor guides and codes of conduct are still relatively rare even as first-time contributions increase.

This gap represents one of the highest-leverage improvements maintainers and open source communities can make. The fact that most of the projects on this list have detailed documentation on what the project is and how to contribute shows the importance of this guidance.

Here’s what this means in practice: 

  • Clear documentation lowers the cost of contribution more than new features.
  • Contributor guides and codes of conduct can help convert curiosity into sustained participation.
  • Improving project hygiene is often the fastest way to grow a contributor base.

Putting it all together

Taken together, these trends point to a shift in what developers value and how they choose tools. 

AI is no longer a separate category of development. It’s shaping the languages teams use, which tools gain traction, and which projects attract contributors. 

Typed languages like TypeScript are becoming the default for reliability at scale, while Python remains central to AI-driven systems as they move from prototypes into production. 

Across the ecosystem, developers are rewarding tools that minimize friction with faster feedback loops, reproducible environments, and clearer contribution paths.

Developers and teams that optimize for speed, clarity, and reliability are shaping how software is being built.

As a reminder, you can check out the full 2025 Octoverse report for more information and make your own conclusions. There’s a lot of good data in there, and we’re just scratching the surface of what you can learn from it.


Year recap and future goals for the GitHub Innovation Graph https://github.blog/news-insights/policy-news-and-insights/year-recap-and-future-goals-for-the-github-innovation-graph/ Wed, 28 Jan 2026 16:00:00 +0000 https://github.blog/?p=93489 Discover the latest trends and insights on public software development activity on GitHub with data from the Innovation Graph through Q3 2025.

The post Year recap and future goals for the GitHub Innovation Graph appeared first on The GitHub Blog.


Today’s data release marks our second full year of regular releases since the launch of the GitHub Innovation Graph. The Innovation Graph serves as a stable, regularly updated source for aggregated statistics on public software development activity around the world, informing public policy, strengthening research, guiding funding decisions, and equipping organizations with the evidence needed to build secure and resilient AI systems.  

Updated bar chart races

With our new data release, we’ve updated the bar chart race videos on the git pushes, repositories, developers, and organizations global metrics pages.

Let’s take a look back at some of the progress the Innovation Graph has helped drive. 

Academic papers

One of the most rewarding aspects of the past year has been seeing the growing range of research questions addressed with Innovation Graph data. Recent papers have explored everything from global collaboration networks to the institutional foundations of digital capabilities.

These studies showcase how network analysis techniques can be applied to Innovation Graph data, in addition to earlier work we referenced last year linking open source to economic value, innovation measurement, labor markets, and AI-driven productivity through other methodologies.

Historical Institutions and Modern Digital Capabilities: New Evidence from GitHub in Africa

Research by an economist at the Federal Reserve Board uses GitHub data to examine how the density of Protestant mission stations correlates with present-day participation in digital production across African countries.

The Structure of Cross-National Collaboration in Open-Source Software Development

Researchers from MIT, Carnegie Mellon, and the University of Chicago analyze international collaboration patterns in the Innovation Graph’s economy collaborators dataset, shedding light on how common colonial histories influence modern software development collaboration activities.

  • Xu, Henry, et al. “The Structure of Cross-National Collaboration in Open-Source Software Development,” (November 10, 2025). Available at doi.org/10.1145/3746252.3761237.
  • Replication package available at https://github.com/hehao98/github-innovation-graph.  

Small-World Phenomenon of Global Open-Source Software Collaboration on GitHub

A social network analysis by researchers at Midwestern State University and Tarleton State University highlights the tightly connected, small-world structure of global OSS collaboration.

  • Zhang, Guoying, et al. “Small-World Phenomenon of Global Open-Source Software Collaboration on Github: A Social Network Analysis.” Journal of Global Information Management Vol. 33, No. 1 (2025). Available at doi.org/10.4018/JGIM.387412. 

The Software Complexity of Nations

These researchers extend economic complexity measures into the digital economy by leveraging the geographic distribution of programming languages in open source software, showing that software economic complexity predicts GDP, income inequality, and emissions, findings with important policy implications.

Conferences

The Innovation Graph and related GitHub datasets were featured prominently in academic and policy discussions at a wide range of venues, including:

News publications

We were also encouraged to see Innovation Graph data referenced in major international reporting. In 2025, two pieces in The Economist drew on GitHub data examining China’s approach to open technology (June 17, 2025) and India’s potential role as a distinctive kind of AI superpower (September 18, 2025). Coverage like this reinforces the role that data on open source activity can play in understanding geopolitical and economic shifts.

Reports

Once again, Innovation Graph data contributed to several flagship reports, including:

We continue to value these opportunities to support macro-level measurement efforts, and we’re equally excited by complementary work that dives deeper into regional, institutional, and community-level dynamics.

Moving forward

As we move through 2026, we’re grateful for the community that has formed around the Innovation Graph, and we’re looking forward to building the next chapter together. Our focus will be on deepening collaboration, welcoming new perspectives, and creating clearer pathways for people to apply the Innovation Graph data in their own contexts, from strategy and research to product development and policy.


7 learnings from Anders Hejlsberg: The architect behind C# and TypeScript https://github.blog/developer-skills/programming-languages-and-frameworks/7-learnings-from-anders-hejlsberg-the-architect-behind-c-and-typescript/ Tue, 27 Jan 2026 17:17:28 +0000 https://github.blog/?p=93457 Anders Hejlsberg shares lessons from C# and TypeScript on fast feedback loops, scaling software, open source visibility, and building tools that last.

The post 7 learnings from Anders Hejlsberg: The architect behind C# and TypeScript appeared first on The GitHub Blog.


Anders Hejlsberg’s work has shaped how millions of developers code. Whether or not you recognize his name, you likely have touched his work: He’s the creator of Turbo Pascal and Delphi, the lead architect of C#, and the designer of TypeScript. 

We sat down with Hejlsberg to discuss his illustrious career and what it’s felt like to watch his innovations stand up to real-world pressure. In a long-form conversation, Hejlsberg reflects on what language design looks like once the initial excitement fades, when performance limits appear, when open source becomes unavoidable, and how AI can impact a tool’s original function.

What emerges is a set of patterns for building systems that survive contact with scale. Here’s what we learned.

Watch the full interview above.

Fast feedback matters more than almost anything else

Hejlsberg’s early instincts were shaped by extreme constraints. In the era of 64KB machines, there was no room for abstraction that did not pull its weight.

“You could keep it all in your head,” he recalls.

When you typed your code, you wanted to run it immediately.

Anders Hejlsberg

Turbo Pascal’s impact did not come from the Pascal language itself. It came from shortening the feedback loop. Edit, compile, run, fail, repeat, without touching disk or waiting for tooling to catch up. That tight loop respected developers’ time and attention.

The same idea shows up decades later in TypeScript, although in a different form. The language itself is only part of the story. Much of TypeScript’s value comes from its tooling: incremental checking, fast partial results, and language services that respond quickly even on large codebases.

The lesson here is not abstract. Developers can apply this directly to how they evaluate and choose tools. Fast feedback changes behavior. When errors surface quickly, developers experiment more, refactor more confidently, and catch problems closer to the moment they are introduced. When feedback is slow or delayed, teams compensate with conventions, workarounds, and process overhead. 

Whether you’re choosing a language, framework, or internal tooling, responsiveness matters. Tools that shorten the distance between writing code and understanding its consequences tend to earn trust. Tools that introduce latency, even if they’re powerful, often get sidelined. 

Scaling software means letting go of personal preferences 

As Hejlsberg moved from largely working alone to leading teams, particularly during the Delphi years, the hardest adjustment wasn’t technical.

It was learning to let go of personal preferences.

You have to accept that things get done differently than you would have preferred. Fixing it would not really change the behavior anyway.

Anders Hejlsberg

That mindset applies well beyond language design. Any system that needs to scale across teams requires a shift from personal taste to shared outcomes. The goal stops being code that looks the way you would write it, and starts being code that many people can understand, maintain, and evolve together.

C# did not emerge from a clean-slate ideal. It emerged from conflicting demands: Visual Basic developers wanted approachability, C++ developers wanted power, and Windows demanded pragmatism.

The result was not theoretical purity. It was a language that enough people could use effectively.

Languages do not succeed because they are perfectly designed. They succeed because they accommodate the way teams actually work.

Why TypeScript extended JavaScript instead of replacing it

TypeScript exists because JavaScript succeeded at a scale few languages ever reach. As browsers became the real cross-platform runtime, teams started building applications far larger than dynamic typing comfortably supports.

Early attempts to cope were often extreme. Some teams compiled other languages into JavaScript just to get access to static analysis and refactoring tools.

That approach never sat well with Hejlsberg.

Telling developers to abandon the ecosystem they were already in was not realistic. Creating a brand-new language in 2012 would have required not just a compiler, but years of investment in editors, debuggers, refactoring tools, and community adoption.

Instead, TypeScript took a different path. It extended JavaScript in place, inheriting its flaws while making large-scale development more tractable.

This decision was not ideological, but practical. TypeScript succeeded because it worked with the constraints developers already had, rather than asking them to abandon existing tools, libraries, and mental models. 

The broader lesson is about compromise. Improvements that respect existing workflows tend to spread while improvements that require a wholesale replacement rarely do. In practice, meaningful progress often comes from making the systems you already depend on more capable instead of trying to start over.

Visibility is a part of what makes open source work

TypeScript did not take off immediately. Early releases were nominally open source, but development still happened largely behind closed doors.

That changed in 2014 when the project moved to GitHub and adopted a fully public development process. Features were proposed through pull requests, tradeoffs were discussed in the open, and issues were prioritized based on community feedback.

This shift made decision-making visible. Developers could see not just what shipped, but why certain choices were made and others were not. For the team, it also changed how work was prioritized. Instead of guessing what mattered most, they could look directly at the issues developers cared about.

The most effective open source projects do more than share code. They make decision-making visible so contributors and users can understand how priorities are set, and why tradeoffs are made.

Leaving JavaScript as an implementation language was a necessary break

For many years, TypeScript was self-hosted. The compiler was written in TypeScript and ran as JavaScript. This enabled powerful browser-based tooling and made experimentation easy.

Over time, however, the limitations became clear. JavaScript is single-threaded, has no shared-memory concurrency, and its object model is flexible (but expensive). As TypeScript projects grew, the compiler was leaving a large amount of available compute unused.

The team reached a point where further optimization would not be enough. They needed a different execution model.

The controversial decision was to port the compiler to Go.

This was not a rewrite. The goal was semantic fidelity. The new compiler needed to behave exactly like the old one, including quirks and edge cases. Rust, despite its popularity, would have required significant redesign due to ownership constraints and pervasive cyclic data structures. Go’s garbage collection and structural similarity made it possible to preserve behavior while unlocking performance and concurrency.

The result was substantial performance gains, split between native execution and parallelism. More importantly, the community did not have to relearn the compiler’s behavior.
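The parallelism half of that win is something JavaScript's single-threaded execution model simply cannot express. As a minimal sketch (all names here are illustrative, and a toy substring count stands in for real type checking), goroutines let per-file work fan out across cores while writing results into shared memory:

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// checkFile is a stand-in for real per-file checking; here a "diagnostic"
// is just an occurrence of the substring "any" in the file name.
func checkFile(name string) int {
	return strings.Count(name, "any")
}

func main() {
	files := []string{"any.ts", "main.ts", "company.ts"}
	counts := make([]int, len(files))

	var wg sync.WaitGroup
	for i, f := range files {
		wg.Add(1)
		go func(i int, f string) {
			defer wg.Done()
			counts[i] = checkFile(f) // each goroutine writes only its own slot
		}(i, f)
	}
	wg.Wait() // all files checked in parallel; results land in shared memory

	total := 0
	for _, c := range counts {
		total += c
	}
	fmt.Println("diagnostics:", total) // prints "diagnostics: 2"
}
```

The pattern — shared slices plus goroutines with no copying or message serialization — is exactly the shared-memory concurrency that a JavaScript host would have to approximate with workers and structured cloning.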

Sometimes the most responsible choice isn’t the most ambitious one, but instead preserves behavior, minimizes disruption, and removes a hard limit that no amount of incremental optimization can overcome.

In an AI-driven workflow, grounding matters more than generation

Hejlsberg is skeptical of the idea of AI-first programming languages. Models are best at languages they have already seen extensively, which naturally favors mainstream ecosystems like JavaScript, Python, and TypeScript.

But AI does change things when it comes to tooling.

The traditional IDE model assumed a developer writing code and using tools for assistance along the way. Increasingly, that relationship is reversing. AI systems generate code. Developers supervise and correct. Deterministic tools like type checkers and refactoring engines provide guardrails that prevent subtle errors.

In that world, the value of tooling is not creativity. It is accuracy and constraint. Tools need to expose precise semantic information so that AI systems can ask meaningful questions and receive reliable answers.

The risk is not that AI systems will generate bad code. It's that they will generate plausible, confident code without enough grounding in the realities of a particular codebase.

For developers, this shifts where attention should go. The most valuable tools in an AI-assisted workflow aren’t the ones that generate the most code, but the ones that constrain it correctly. Strong type systems, reliable refactoring tools, and accurate semantic models become essential guardrails. They provide the structure that allows AI output to be reviewed, validated, and corrected efficiently instead of trusted blindly. 
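As a concrete, purely illustrative sketch of such a guardrail (all names here are hypothetical), wrapping an identifier in a distinct type means the compiler, not a reviewer, rejects any call site — human- or AI-written — that passes an unvalidated raw string:

```go
package main

import (
	"fmt"
	"regexp"
)

// UserID wraps its value in a struct so a raw string can never be passed
// where a validated ID is required; the type checker enforces the boundary.
type UserID struct{ v string }

var idPattern = regexp.MustCompile(`^u_[0-9a-f]{8}$`)

// ParseUserID is the only way to construct a UserID.
func ParseUserID(raw string) (UserID, error) {
	if !idPattern.MatchString(raw) {
		return UserID{}, fmt.Errorf("invalid user id: %q", raw)
	}
	return UserID{raw}, nil
}

func displayName(id UserID) string {
	return "user:" + id.v
}

func main() {
	id, err := ParseUserID("u_deadbeef")
	if err != nil {
		panic(err)
	}
	fmt.Println(displayName(id)) // prints "user:u_deadbeef"

	// displayName("u_deadbeef") would not compile: the checker rejects the
	// plausible-looking call deterministically, before any review happens.
}
```

This is the "accuracy and constraint" role in miniature: the rule is mechanical, so generated code that violates it fails fast instead of surviving until a human happens to notice.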

Why open collaboration is critical

Despite the challenges of funding and maintenance, Hejlsberg remains optimistic about open collaboration. One reason is institutional memory: years of discussion, decisions, and tradeoffs remain searchable and visible rather than disappearing into private email threads or internal systems.

“We have 12 years of history captured in our project,” he explains. “If someone remembers that a discussion happened, we can usually find it. The context doesn’t disappear into email or private systems.”

That visibility changes how systems evolve. Design debates, rejected ideas, and tradeoffs remain accessible long after individual decisions are made. For developers joining a project later, that shared context often matters as much as the code itself.

A pattern that repeats across decades

Across four decades of language design, the same themes recur:

  • Fast feedback loops matter more than elegance
  • Systems need to accommodate imperfect code written by many people
  • Behavioral compatibility often matters more than architectural purity
  • Visible tradeoffs build trust

These aren’t secondary concerns. They’re fundamental decisions that determine whether a tool can adapt as its audience grows. Moreover, they ground innovation by ensuring new ideas can take root without breaking what already works.

For anyone building tools they want to see endure, those fundamentals matter as much as any breakthrough feature. And that may be the most important lesson of all.

Did you know TypeScript was the top language used in 2025? Read more in the Octoverse report >

The post 7 learnings from Anders Hejlsberg: The architect behind C# and TypeScript appeared first on The GitHub Blog.

Help shape the future of open source in Europe https://github.blog/news-insights/policy-news-and-insights/help-shape-the-future-of-open-source-in-europe/ Tue, 27 Jan 2026 14:16:04 +0000 https://github.blog/?p=93481 Read GitHub’s position on the European Open Digital Ecosystem Strategy and learn how to participate.


At GitHub, we believe that open source is a primary driver of innovation, security, and economic competitiveness. The European Union is currently at a pivotal moment in defining how it supports this ecosystem, and it wants to hear from you, the builders.

The European Commission is planning to adopt an open source strategy called “Towards European Open Digital Ecosystems”. This initiative is not about passing new laws; instead, the EU is looking to develop a strategic framework and funding measures to help the EU open source sector scale up and become more competitive. This effort aims to strengthen the EU’s technological sovereignty by supporting open source software and hardware across critical sectors like AI, cloud computing, and cybersecurity.

We’ve been advocating for this kind of support for a long time. For instance, we previously highlighted the need for a European Sovereign Tech Fund to invest in the maintenance of critical basic open source technologies such as libraries or programming languages. This new strategy is a chance to turn those kinds of ideas into official EU policy.

You can read GitHub’s response to the European Commission here. Brand new data from the GitHub Innovation Graph shows that the EU is a global open source powerhouse: there are now almost 25 million EU developers on GitHub, who made over 155 million contributions to public projects in the last year alone.

The EU wants to help European companies turn open source projects into successful businesses, which is an admirable goal with plenty of opportunities to achieve it. For example, the EU can create better conditions for open source businesses by making it easier for them to participate in public procurement and access the growth capital they need to turn great code into sustainable products. By supporting the business models and infrastructure that surround it, the EU can turn its massive developer talent into long-term economic leadership.

It is important to understand, though, that not all open source projects can be turned into commercial products—and that commercialization is not every developer’s goal. A successful EU open source policy should also support the long-term sustainability of non-commercially produced open source components that benefit us all.

That is why the European Commission needs to hear the full spectrum of experiences from the community—from individual maintainers, startups, companies, and researchers. Over 900 people have already shared their views, and we encourage you to join them. The European Commission is specifically looking for responses covering these five topics:

  1. Strengths and weaknesses: What is standing in the way of open source adoption and sustainable open source contributions in the EU?
  2. Added value: How does open source benefit the public and private sectors?
  3. Concrete actions: What should the EU do to support open source?
  4. Priority areas: Which technologies (e.g., AI, IoT, or Cloud) should be the focus?
  5. Sector impact: In which industries (e.g., automotive or manufacturing) could open source increase competitiveness and cybersecurity?

How to participate

The “Call for Evidence” is your opportunity to help shape the future tech policy of the EU. It only takes a few minutes to provide your perspective. Submit your feedback by February 3 (midnight CET). Your voice is essential to ensuring that the next generation of European digital policy is built with the needs of real developers in mind.

At GitHub Developer Policy, we are always open to feedback from developers. Please do not hesitate to contact us as well.

The post Help shape the future of open source in Europe appeared first on The GitHub Blog.
