The latest Octoverse findings - The GitHub Blog
https://github.blog/news-insights/octoverse/
Updates, ideas, and inspiration from GitHub to help developers build and design software.

How AI is reshaping developer choice (and Octoverse data proves it)
https://github.blog/ai-and-ml/generative-ai/how-ai-is-reshaping-developer-choice-and-octoverse-data-proves-it/
Thu, 19 Feb 2026 17:00:00 +0000
AI is rewiring developer preferences through convenience loops. Octoverse 2025 reveals how AI compatibility is becoming the new standard for technology choice.

The post How AI is reshaping developer choice (and Octoverse data proves it) appeared first on The GitHub Blog.


You know that feeling when a sensory trigger instantly pulls you back to a moment in your life? For me, it’s Icy Hot. One whiff and I’m back to 5 a.m. formation time in the army. My shoulders tense. My body remembers. It’s not logical. It’s just how memory works. We build strong associations between experiences and cues around them. Those patterns get encoded and guide our behavior long after the moment passes.

That same pattern is happening across the software ecosystem as AI becomes a default part of how we build. For example, we form associations between convenience and specific technologies. Those loops influence what developers reach for, what they choose to learn, and ultimately, which technologies gain momentum.

Octoverse 2025 data illustrates this in real time. And it’s not subtle. 

In August 2025, TypeScript surpassed both Python and JavaScript to become the most-used language on GitHub for the first time ever. That’s the headline. But the deeper story is what it signals: AI isn’t just speeding up coding. It’s reshaping which languages, frameworks, and tools developers choose in the first place.

[Chart: top 10 programming languages on GitHub, 2023–2025. TypeScript rises to #1 in 2025, overtaking Python (#2) and JavaScript (#3); Java, C#, PHP, Shell, C++, HCL, and Go round out the top 10.]

The convenience loop is how memory becomes behavior

When a task or process goes smoothly, your brain remembers. Convenience captures attention. Reduced friction becomes a preference—and preferences at scale can shift ecosystems.

Eighty percent of new developers on GitHub use Copilot within their first week. Those early exposures reset the baseline for what “easy” means.

When AI handles boilerplate and error-prone syntax, the penalty for choosing powerful but complex languages disappears. Developers stop avoiding tools with high overhead and start picking based on utility instead. Shell’s sharp rise in the language adoption data shows this behavioral shift.

That shift matters. We didn’t suddenly fall in love with Bash. AI absorbed the friction that made shell scripting painful. So now we use the right tool for the job without the usual cost.

This is what Octoverse is really showing us: developer choice is shifting toward technologies that work best with the tools we’re already using.

The technical reason behind the shift

There are concrete, technical reasons AI performs better with strongly typed languages.

Strongly typed languages give AI much clearer constraints. In JavaScript, a variable could be anything. In TypeScript, declaring x: string immediately eliminates all non-string operations. That constraint matters. Constraints help AI generate more reliable, contextually correct code. And developers respond to that reliability.
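To make that concrete, here’s a minimal TypeScript sketch (the function and names are illustrative, not from the report) of how a single annotation bounds what code is valid:

```typescript
// Illustrative only: a type annotation constrains the space of valid code.

function shout(x: string): string {
  // Because x is declared as a string, only string operations type-check here.
  return x.toUpperCase() + "!";
}

// The compiler rejects invalid uses before the code ever runs:
// shout(42);                // error: type 'number' is not assignable to 'string'
// shout("hi").toFixed(2);   // error: 'toFixed' does not exist on type 'string'

console.log(shout("typescript")); // TYPESCRIPT!
```

The narrower the type, the less room an AI suggestion has to drift from a valid program.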

That effect compounds when you look at AI model integration across GitHub. Over 1.1 million public repositories now use LLM SDKs. This is mainstream adoption, not fringe experimentation. And it’s concentrating around the languages and frameworks that work best with AI.

[Chart: cumulative count of public projects using generative AI model SDKs, 2021–2025, climbing from near zero to over 1.1 million repositories.]

Moving fast without breaking your architecture 

AI tools are amplifying developer productivity in ways we haven’t seen before. The question is how to use them strategically. The teams getting the best results aren’t fighting the convenience loop. They’re designing their workflows to harness it while maintaining the architectural standards that matter.

For developers and teams

Establish patterns before you generate. AI is fantastic at following established patterns, but struggles to invent them cleanly. If you define your first few endpoints or components with strong structure, Copilot will follow those patterns. Good foundations scale. Weak ones get amplified.
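As a sketch of what “strong structure” can look like, here is a hypothetical result-shaped handler pattern (all names invented for illustration); once the first function establishes the contract, generated follow-ups tend to mirror it:

```typescript
// Hypothetical pattern: every handler returns the same Result shape,
// so success and failure are handled uniformly across endpoints.

type Result<T> = { ok: true; value: T } | { ok: false; error: string };

function getUser(id: number): Result<{ id: number; name: string }> {
  if (id <= 0) return { ok: false, error: "invalid id" };
  return { ok: true, value: { id, name: "Ada" } }; // placeholder data
}

// A later, AI-suggested endpoint is likely to follow the established shape:
function getRepo(id: number): Result<{ id: number; stars: number }> {
  if (id <= 0) return { ok: false, error: "invalid id" };
  return { ok: true, value: { id, stars: 1200 } }; // placeholder data
}
```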

Use type systems as guardrails, not crutches. TypeScript reduces errors, but passing type checks isn’t the same as expressing correct business logic. Use types to bound the space of valid code, not as your primary correctness signal.
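A tiny invented illustration: both versions below satisfy the same signature, so the type checker alone cannot tell you which one encodes the business rule correctly:

```typescript
// Both implementations type-check; only one is correct business logic.

function applyDiscount(price: number, rate: number): number {
  // A plausible generated version that passes the type checker
  // but returns the discount amount instead of the discounted price:
  // return price * rate;
  return price * (1 - rate); // correct: the price after the discount
}

console.log(applyDiscount(100, 0.5)); // 50
```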

Test AI-generated code harder, not less. There’s a temptation to trust AI output because it “looks right” and passes initial checks. Resist that. Don’t skip testing.
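One cheap habit is probing generated helpers at the edges, not just the happy path. A sketch, with a hypothetical AI-generated parser:

```typescript
// parseTags stands in for an AI-generated helper. The happy path
// "looks right" either way; the edge cases are where bugs hide.

function parseTags(input: string): string[] {
  return input
    .split(",")
    .map((tag) => tag.trim())
    .filter((tag) => tag.length > 0); // drop this filter and "a,,b" yields ""
}

console.log(parseTags("ai, octoverse")); // ["ai", "octoverse"]
console.log(parseTags("a,,b"));          // ["a", "b"]
console.log(parseTags(""));              // []
```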

For engineering leaders

Recognize the velocity jump and prepare for its costs. AI-assisted development often produces a 20–30 percent increase in throughput. That’s a win. But higher throughput means architectural drift can accumulate faster without the right guardrails.

Standardize before you scale. Document patterns. Publish template repositories. Make your architectural decisions explicit. AI tools will mirror whatever structures they see.

Track what AI is generating, not just how much. The Copilot usage metrics dashboard (now in public preview for Enterprise) lets you see beyond acceptance rates. You can track daily and weekly active users, agent adoption percentages, lines of code added and deleted, and language and model usage patterns across your organization. The dashboard answers a critical question: how well are teams using AI? 

Use these metrics to identify patterns. If you’re seeing high agent adoption but code quality issues in certain teams, that’s a signal those teams need better prompt engineering training or stricter review standards. If specific languages or models correlate with higher defect rates, that’s data you can act on. The API provides user-level granularity for deeper analysis, so you can build custom dashboards that track the metrics that matter most to your organization.

Invest in architectural review capacity. As developers become more productive, senior engineering time becomes more valuable, not less. Someone must ensure the system remains coherent as more code lands faster.

Make architectural decisions explicit and accessible. AI learns from context. ADRs, READMEs, comments, and well-structured repos all help AI generate code aligned with your design principles.

What the Octoverse 2025 findings mean for you

The technology choices you make today are shaped by forces you may not notice: convenience, habit, AI-assisted flow, and how much friction each stack introduces.

💡 Pro tip: Look at the last three technology decisions you made. Language for a new project, framework for a feature, tool for your workflow. How much did AI tooling support factor into those choices? If the answer is “not much,” I’d bet it factored in more than you realized.

AI isn’t just changing how fast we code. It’s reshaping the ecosystem around which tools work best with which languages. Once those patterns set in, reversing them becomes difficult.

If you’re choosing technologies without considering AI compatibility, you’re setting yourself up for future friction. If you’re building languages or frameworks, AI support can’t be an afterthought.

Here’s a challenge

Next time you start a project, notice which technologies feel “natural” to reach for. Notice when AI suggestions feel effortless and when they don’t. Those moments of friction and flow are encoding your future preferences right now.

Are you choosing your tools consciously, or are your tools choosing themselves through the path of least resistance?

We’re all forming our digital “Icy Hot” moments. The trick is being aware of them.

Looking to stay one step ahead? Read the latest Octoverse report and try the Copilot usage metrics dashboard.

What to expect for open source in 2026
https://github.blog/open-source/maintainers/what-to-expect-for-open-source-in-2026/
Wed, 18 Feb 2026 18:41:42 +0000
Let’s dig into 2025’s open source data on GitHub to see what we can learn about the future.


Over the years (decades, even), open source has grown and changed along with software development, evolving as its community has become more global.

But growth brings pain points. For open source to continue to thrive, it’s important for us to be aware of these challenges and determine how to overcome them.

To that end, let’s take a look at what Octoverse 2025 reveals about the direction open source is taking. Feel free to check out the full Octoverse report, and make your own predictions.

Growth that’s global in scope

In 2025, GitHub saw about 36 million new developers join our community. While that number alone is huge, it’s also important to see where in the world that growth comes from. India added 5.2 million developers, and there was significant growth across Brazil, Indonesia, Japan, and Germany. 

What does this mean? It’s clear that open source is becoming more global than it was before. It also means that oftentimes, the majority of developers live outside the regions where the projects they’re working on originated. This is a fundamental shift. While there have always been projects with global contributors, it’s now starting to become a reality for a greater number of projects.

Given this global scale, open source can’t rely on contributors sharing work hours, communication strategies, cultural expectations, or even language. The projects that are going to thrive are the ones that support the global community.

One of the best ways to do this is through explicit communication, maintained in artifacts like contribution guidelines, codes of conduct, review expectations, and governance documentation. These are essential infrastructure for large projects that want to support a global community. Projects that don’t provide them will have trouble scaling as contributors multiply across the globe; those that do will be more resilient and sustainable, and will offer an easier path for onboarding new contributors.

The double-edged sword of AI

AI has had a major role in accelerating global participation over 2025. It’s created a pathway that makes it easier for new developers to enter the coding world by dramatically lowering the barrier to entry. It helps contributors understand unfamiliar codebases, draft patches, and even create new projects from scratch. Ultimately, it has helped new developers make their first contributions sooner.

However, it has also created a lot of noise, or what’s called “AI slop”: a large quantity of low-quality, and oftentimes inaccurate, contributions that don’t add value to the project, or that would require so much work to incorporate that it would be faster to implement the solution yourself.

This makes it harder than ever to maintain projects and make sure they continue moving forward in the intended direction. Auto-generated issues and pull requests increase volume without always increasing the quality of the project. As a result, maintainers need to spend more time reviewing contributions from developers with vastly variable levels of skill. In a lot of cases, the amount of time it takes to review the additional suggestions has risen faster than the number of maintainers.

Even if you remove AI slop from the equation, the sheer volume of contributions has grown, potentially to unmanageable levels. It can feel like a denial of service attack on human attention.

This is why maintainers have been asking: how do you sift through the noise and find the most important contributions? Luckily, we’ve added some tools to help, and a number of open source AI projects are specifically trying to address the AI slop issue. Maintainers have also been using AI defensively: to triage issues, detect duplicates, and handle simple maintenance like labeling. By offloading some of the grunt work, AI gives maintainers more time to focus on the issues that require human intervention and decision making.

Expect the open source projects that continue to expand and grow over the next year to be those that incorporate AI as part of their community infrastructure. To deal with this quantity of information, AI cannot be just a coding assistant. It needs to ease the pressure on maintainers and make their work more scalable.

Record growth is healthy, if it’s planned for

On the surface, record global growth looks like success. But this influx of newer developers can also be a burden. The sheer popularity of projects that cover basics, such as contributing your first pull request to GitHub, shows that a lot of these new developers are very much in their infancy in terms of comfort with open source. There’s uncertainty about how to move forward and how to interact with the community. Not to mention challenges with repetitive onboarding questions and duplicate issues.

This results in a growing gap between the number of participants in open source projects and the number of maintainers with a sense of ownership. As new developers grow at record rates, this gap will widen.

The way to address this is going to be less about individuals serving as mentors (although that will still be important) and more about creating durable systems that show organizational maturity. What does this mean? While not an exhaustive list, here are some items:

  • Having a clear, defined path to move from contributor to reviewer to maintainer. Be aware that this can be difficult without a mentor to help guide along this path.
  • Shared governance models that don’t rely on a single timezone or small group of people.
  • Documentation that provides guidance on how to contribute and the goals of the project.

By helping to make sure that the number of maintainers keeps relative pace with the number of contributors, projects will be able to take advantage of the record growth. This does create an additional burden on the current maintainers, but the goal is to invest in a solid foundation that will result in a more stable structure in the future. Projects that don’t do this will have trouble functioning at the increased global scale and might start to stall or see problems like increased technical debt.

But what are people building?

It can’t be denied that AI was a major focus—about 60% of the top growing projects were AI focused. However, there were several that had nothing to do with AI. These projects (e.g., Home Assistant, VS Code, Godot) continue to thrive because they meet real needs and support broad, international communities.

[List: fastest-growing open source projects by contribution: zen-browser/desktop, cline/cline, vllm-project/vllm, astral-sh/uv, microsoft/vscode, infiniflow/ragflow, sgl-project/sglang, continuedev/continue, comfyanonymous/ComfyUI, and home-assistant/core.]

Just as the developer space is growing on a global scale, so are the projects that garner the most interest. Projects that support a global community and address its needs are going to continue to be popular and well supported.

All of this reinforces that open source is becoming a global phenomenon rather than a local one.

What this year will likely hold

Open source in 2026 won’t be defined by a single trend that emerged over 2025. Instead, it will be shaped by how the community responds to the pressures identified over the last year, particularly with the surge in AI and an explosively growing global community.

For developers, this means it’s important to invest in processes as much as code. Open source is scaling in ways that would have been impossible to imagine a decade ago, and the important question going forward isn’t how much it will grow, but how you can make that growth sustainable.

Read the full Octoverse report >

What the fastest-growing tools reveal about how software is being built
https://github.blog/news-insights/octoverse/what-the-fastest-growing-tools-reveal-about-how-software-is-being-built/
Tue, 03 Feb 2026 17:00:00 +0000
What languages are growing fastest, and why? What about the projects that people are interested in the most? Where are new developers cutting their teeth? Let’s take a look at Octoverse data to find out.


In 2025, software development crossed a quiet threshold. In our latest Octoverse report, we found that the fastest-growing languages, tools, and open source projects on GitHub are no longer about shipping more code. Instead, they’re about reducing friction in a world where AI is helping developers build more, faster.

By looking at some of the areas of fastest growth over the past year, we can see how developers are adapting through: 

  • The programming languages that are growing most in AI-assisted development workflows.
  • The tools that win when speed and reproducibility matter.
  • The areas where new contributors are showing up (and what helps them stick).

Rather than catalog trends, we want to focus on what those signals mean for how software is being built today and what choices you might consider heading into 2026. 

The elephant in the room: TypeScript is the new #1

In August 2025, TypeScript became the most-used language on GitHub, overtaking Python and JavaScript for the first time. Over the past year, TypeScript added more than one million contributors, which was the largest absolute growth of any language on GitHub. 

[Chart: top 10 programming languages on GitHub, 2023–2025. TypeScript rises to #1 in 2025, overtaking Python (#2) and JavaScript (#3); Java, C#, PHP, Shell, C++, HCL, and Go round out the top 10.]

Python also continued to grow rapidly, adding roughly 850,000 contributors (+48.78% YoY), while JavaScript grew more slowly (+24.79%, ~427,000 contributors). Both TypeScript and Python significantly outpaced JavaScript in total and percentage growth.

This shift signals more than a preference change. Typed languages are increasingly becoming the default for new development, particularly as AI-assisted coding becomes routine. Why is that?

In practice, a significant portion of the failures teams encounter with AI-generated code surface as type mismatches, broken contracts, or incorrect assumptions between components. Stronger type systems act as early guardrails: they can help catch errors sooner, reduce review churn, and make AI-generated changes easier to reason about before code reaches production. 

If you’re going to be using AI in your software design, which more and more developers are doing on a daily basis, strongly typed languages are your friend.
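As a small, invented example of that guardrail effect: with a typed contract, a generated call that breaks the contract fails at compile time rather than in review or production.

```typescript
// Illustrative contract: the Deployment shape is the agreement between components.

interface Deployment {
  service: string;
  replicas: number;
}

function scale(d: Deployment, replicas: number): Deployment {
  return { ...d, replicas };
}

// Generated calls with broken contracts are rejected before they run:
// scale({ service: "api" }, 3);                  // error: property 'replicas' is missing
// scale({ service: "api", replicas: "2" }, 3);   // error: 'string' is not a 'number'

const next = scale({ service: "api", replicas: 1 }, 3);
console.log(next.replicas); // 3
```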

Here’s what this means in practice: 

  • If you’re starting a new project today, TypeScript is increasingly becoming the default (especially for teams using AI in daily development).
  • If you’re introducing AI-assisted workflows into an existing JavaScript codebase, adding types may reduce friction more than switching models or tools.

Python is key for AI

Contributor counts show who is using a language. Repository data shows what that language is being used to build. 

When we look specifically at AI-focused repositories, Python stands apart. As of August 2025, nearly half of all new AI projects on GitHub were built primarily in Python. 

[Chart: most commonly used languages in AI-tagged projects on GitHub in 2025. Python ranks first with 582,000 repositories (+50.7% YoY), followed by JavaScript with 88,000 (+24.8%), TypeScript with 86,000 (+77.9%), Shell with 9,000 (+324%), and C++ with 7,800 (+11%).]

This matters because AI projects now account for a disproportionate share of open source momentum. Six of the ten fastest-growing open source projects by contributors in 2025 were directly focused on AI infrastructure or tooling.

[Table: fastest-growing open source projects on GitHub in 2025 by contributors: zen-browser/desktop, cline/cline, vllm-project/vllm, astral-sh/uv, microsoft/vscode, infiniflow/ragflow, sgl-project/sglang, continuedev/continue, comfyanonymous/ComfyUI, and home-assistant/core. Growth rates range from 2,301% to 6,836%, with most projects AI-focused.]

Python’s role here isn’t new, but it is evolving. The data suggests a shift from experimentation toward production-ready AI systems, with Python increasingly anchoring packaging, orchestration, and deployment rather than living only in notebooks. 

Moreover, Python is likely to continue growing in 2026, as AI gains further support and spawns additional projects.

Here’s what this means in practice:

  • Python remains the backbone of applied AI work from training and inference to orchestration.
  • Production-focused Python skills such as packaging, typing, CI, and containerization are becoming more important than exploratory scripting alone. 

A deeper look at the top open source projects

Looking across the fastest-growing projects, a clear pattern emerges: developers are optimizing for speed, control, and predictable outcomes. 

Many of the fastest-growing tools emphasize performance and minimalism. Projects like astral-sh/uv, a package and project manager, focus on dramatically faster Python package management. This reflects a growing intolerance for slow feedback loops and non-deterministic environments. 

One such project could be an anomaly; several of them indicate a clear trend. That trend aligns closely with AI-assisted workflows, where iteration speed and reproducibility directly impact developer productivity.

Here’s what this means in practice: 

  • Fast installs and deterministic builds increasingly matter as much as feature depth.
  • Tools that reduce “works on my machine” moments are winning developer mindshare.

Where first-time open source contributors are showing up

As the developer population grows, understanding where first-time contributors show up (and why) becomes increasingly important. 

[Chart: open source projects that attracted the most first-time contributors on GitHub in 2025: microsoft/vscode, firstcontributions/first-contributions, home-assistant/core, stackblitz/bolt.new, flutter/flutter, zen-browser/desktop, is-a-dev/register, vllm-project/vllm, comfyanonymous/ComfyUI, and ollama/ollama.]

Projects like VS Code and First Contributions continued to top the list over the last year, reflecting both the scale of widely used tools and the persistent need for low-friction entry points into open source (notably, we define contributions as any content-generating activity on GitHub).

Despite this growth, basic project governance remains uneven across the ecosystem. README files are common, but contributor guides and codes of conduct are still relatively rare even as first-time contributions increase.

This gap represents one of the highest-leverage improvements maintainers and open source communities can make. The fact that most of the projects on this list have detailed documentation on what the project is and how to contribute shows the importance of this guidance.

Here’s what this means in practice: 

  • Clear documentation lowers the cost of contribution more than new features.
  • Contributor guides and codes of conduct can help convert curiosity into sustained participation.
  • Improving project hygiene is often the fastest way to grow a contributor base.

Putting it all together

Taken together, these trends point to a shift in what developers value and how they choose tools. 

AI is no longer a separate category of development. It’s shaping the languages teams use, which tools gain traction, and which projects attract contributors. 

Typed languages like TypeScript are becoming the default for reliability at scale, while Python remains central to AI-driven systems as they move from prototypes into production. 

Across the ecosystem, developers are rewarding tools that minimize friction with faster feedback loops, reproducible environments, and clearer contribution paths.

Developers and teams that optimize for speed, clarity, and reliability are shaping how software is being built.

As a reminder, you can check out the full 2025 Octoverse report for more information and make your own conclusions. There’s a lot of good data in there, and we’re just scratching the surface of what you can learn from it.

7 learnings from Anders Hejlsberg: The architect behind C# and TypeScript
https://github.blog/developer-skills/programming-languages-and-frameworks/7-learnings-from-anders-hejlsberg-the-architect-behind-c-and-typescript/
Tue, 27 Jan 2026 17:17:28 +0000
Anders Hejlsberg shares lessons from C# and TypeScript on fast feedback loops, scaling software, open source visibility, and building tools that last.


Anders Hejlsberg’s work has shaped how millions of developers code. Whether or not you recognize his name, you likely have touched his work: He’s the creator of Turbo Pascal and Delphi, the lead architect of C#, and the designer of TypeScript. 

We sat down with Hejlsberg to discuss his illustrious career and what it’s felt like to watch his innovations stand up to real-world pressure. In a long-form conversation, Hejlsberg reflects on what language design looks like once the initial excitement fades, when performance limits appear, when open source becomes unavoidable, and how AI can impact a tool’s original function.

What emerges is a set of patterns for building systems that survive contact with scale. Here’s what we learned.

Watch the full interview above.

Fast feedback matters more than almost anything else

Hejlsberg’s early instincts were shaped by extreme constraints. In the era of 64KB machines, there was no room for abstraction that did not pull its weight.

“You could keep it all in your head,” he recalls.

When you typed your code, you wanted to run it immediately.

Anders Hejlsberg

Turbo Pascal’s impact did not come from the Pascal language itself. It came from shortening the feedback loop. Edit, compile, run, fail, repeat, without touching disk or waiting for tooling to catch up. That tight loop respected developers’ time and attention.

The same idea shows up decades later in TypeScript, although in a different form. The language itself is only part of the story. Much of TypeScript’s value comes from its tooling: incremental checking, fast partial results, and language services that respond quickly even on large codebases.

The lesson here is not abstract. Developers can apply this directly to how they evaluate and choose tools. Fast feedback changes behavior. When errors surface quickly, developers experiment more, refactor more confidently, and catch problems closer to the moment they are introduced. When feedback is slow or delayed, teams compensate with conventions, workarounds, and process overhead. 

Whether you’re choosing a language, framework, or internal tooling, responsiveness matters. Tools that shorten the distance between writing code and understanding its consequences tend to earn trust. Tools that introduce latency, even if they’re powerful, often get sidelined. 

Scaling software means letting go of personal preferences 

As Hejlsberg moved from largely working alone to leading teams, particularly during the Delphi years, the hardest adjustment wasn’t technical.

It was learning to let go of personal preferences.

You have to accept that things get done differently than you would have preferred. Fixing it would not really change the behavior anyway.

Anders Hejlsberg

That mindset applies well beyond language design. Any system that needs to scale across teams requires a shift from personal taste to shared outcomes. The goal stops being code that looks the way you would write it, and starts being code that many people can understand, maintain, and evolve together. C# did not emerge from a clean-slate ideal. It emerged from conflicting demands. Visual Basic developers wanted approachability, C++ developers wanted power, and Windows demanded pragmatism.

The result was not theoretical purity. It was a language that enough people could use effectively.

Languages do not succeed because they are perfectly designed. They succeed because they accommodate the way teams actually work.

Why TypeScript extended JavaScript instead of replacing it

TypeScript exists because JavaScript succeeded at a scale few languages ever reach. As browsers became the real cross-platform runtime, teams started building applications far larger than dynamic typing comfortably supports.

Early attempts to cope were often extreme. Some teams compiled other languages into JavaScript just to get access to static analysis and refactoring tools.

That approach never sat well with Hejlsberg.

Telling developers to abandon the ecosystem they were already in was not realistic. Creating a brand-new language in 2012 would have required not just a compiler, but years of investment in editors, debuggers, refactoring tools, and community adoption.

Instead, TypeScript took a different path. It extended JavaScript in place, inheriting its flaws while making large-scale development more tractable.

This decision was not ideological, but practical. TypeScript succeeded because it worked with the constraints developers already had, rather than asking them to abandon existing tools, libraries, and mental models. 

The broader lesson is about compromise. Improvements that respect existing workflows tend to spread while improvements that require a wholesale replacement rarely do. In practice, meaningful progress often comes from making the systems you already depend on more capable instead of trying to start over.

Visibility is a part of what makes open source work

TypeScript did not take off immediately. Early releases were nominally open source, but development still happened largely behind closed doors.

That changed in 2014 when the project moved to GitHub and adopted a fully public development process. Features were proposed through pull requests, tradeoffs were discussed in the open, and issues were prioritized based on community feedback.

This shift made decision-making visible. Developers could see not just what shipped, but why certain choices were made and others were not. For the team, it also changed how work was prioritized. Instead of guessing what mattered most, they could look directly at the issues developers cared about.

The most effective open source projects do more than share code. They make decision-making visible so contributors and users can understand how priorities are set, and why tradeoffs are made.

Leaving JavaScript as an implementation language was a necessary break

For many years, TypeScript was self-hosted. The compiler was written in TypeScript and ran as JavaScript. This enabled powerful browser-based tooling and made experimentation easy.

Over time, however, the limitations became clear. JavaScript is single-threaded, has no shared-memory concurrency, and its object model is flexible but expensive. As TypeScript projects grew, the compiler was leaving a large amount of available compute unused.

The team reached a point where further optimization would not be enough. They needed a different execution model.

The controversial decision was to port the compiler to Go.

This was not a rewrite. The goal was semantic fidelity. The new compiler needed to behave exactly like the old one, including quirks and edge cases. Rust, despite its popularity, would have required significant redesign due to ownership constraints and pervasive cyclic data structures. Go’s garbage collection and structural similarity made it possible to preserve behavior while unlocking performance and concurrency.

The result was substantial performance gains, split between native execution and parallelism. More importantly, the community did not have to relearn the compiler’s behavior.

Sometimes the most responsible choice isn’t the most ambitious one, but instead preserves behavior, minimizes disruption, and removes a hard limit that no amount of incremental optimization can overcome.

In an AI-driven workflow, grounding matters more than generation

Hejlsberg is skeptical of the idea of AI-first programming languages. Models are best at languages they have already seen extensively, which naturally favors mainstream ecosystems like JavaScript, Python, and TypeScript.

But AI does change things when it comes to tooling.

The traditional IDE model assumed a developer writing code and using tools for assistance along the way. Increasingly, that relationship is reversing. AI systems generate code. Developers supervise and correct. Deterministic tools like type checkers and refactoring engines provide guardrails that prevent subtle errors.

In that world, the value of tooling is not creativity. It is accuracy and constraint. Tools need to expose precise semantic information so that AI systems can ask meaningful questions and receive reliable answers.

The risk is not that AI systems will generate bad code. Instead, it’s that they will generate plausible, confident code that lacks enough grounding in the realities of a codebase. 

For developers, this shifts where attention should go. The most valuable tools in an AI-assisted workflow aren’t the ones that generate the most code, but the ones that constrain it correctly. Strong type systems, reliable refactoring tools, and accurate semantic models become essential guardrails. They provide the structure that allows AI output to be reviewed, validated, and corrected efficiently instead of trusted blindly. 
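
To make that concrete, here is a minimal TypeScript sketch (the `applyDiscount` function and its types are invented for illustration): a strict signature gives both the reviewer and an AI tool a precise contract, so a whole class of plausible-but-wrong call sites never survives the type check.

```typescript
// A precise signature encodes the contract an AI-generated caller must meet.
type Cents = number; // prices are integers in cents, never fractional dollars

interface Order {
  id: string;
  totalCents: Cents;
}

function applyDiscount(order: Order, percentOff: number): Order {
  if (percentOff < 0 || percentOff > 100) {
    throw new RangeError(`percentOff out of range: ${percentOff}`);
  }
  // Round to whole cents so money never becomes a fractional value.
  const cents = Math.round(order.totalCents * (1 - percentOff / 100));
  return { ...order, totalCents: cents };
}

// Plausible AI-generated call sites the type checker rejects before review:
//   applyDiscount({ id: 1, totalCents: 500 }, 10);  // id must be a string
//   applyDiscount({ id: "a1", total: 500 }, 10);    // no property named 'total'

const discounted = applyDiscount({ id: "a1", totalCents: 500 }, 10);
console.log(discounted.totalCents); // 450
```

The runtime range check covers what the type system cannot express; the structural checks on `Order` come for free, which is exactly the kind of guardrail that makes AI output cheap to review.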

Why open collaboration is critical

Despite the challenges of funding and maintenance, Hejlsberg remains optimistic about open collaboration. One reason is institutional memory: years of discussion, decisions, and tradeoffs remain searchable and visible.

That history does not disappear into private email threads or internal systems. It remains available to anyone who wants to understand how and why a system evolved.

“We have 12 years of history captured in our project,” he explains. “If someone remembers that a discussion happened, we can usually find it. The context doesn’t disappear into email or private systems.”

That visibility changes how systems evolve. Design debates, rejected ideas, and tradeoffs remain accessible long after individual decisions are made. For developers joining a project later, that shared context often matters as much as the code itself.

A pattern that repeats across decades

Across four decades of language design, the same themes recur:

  • Fast feedback loops matter more than elegance
  • Systems need to accommodate imperfect code written by many people
  • Behavioral compatibility often matters more than architectural purity
  • Visible tradeoffs build trust

These aren’t secondary concerns. They’re fundamental decisions that determine whether a tool can adapt as its audience grows. Moreover, they ground innovation by ensuring new ideas can take root without breaking what already works.

For anyone building tools they want to see endure, those fundamentals matter as much as any breakthrough feature. And that may be the most important lesson of all.

Did you know TypeScript was the top language used in 2025? Read more in the Octoverse report >

The post 7 learnings from Anders Hejlsberg: The architect behind C# and TypeScript appeared first on The GitHub Blog.

Why AI is pushing developers toward typed languages https://github.blog/ai-and-ml/llms/why-ai-is-pushing-developers-toward-typed-languages/ Thu, 08 Jan 2026 22:25:54 +0000 https://github.blog/?p=93132 AI is settling the “typed vs. untyped” debate by turning type systems into the safety net for code you didn’t write yourself.

The post Why AI is pushing developers toward typed languages appeared first on The GitHub Blog.


It’s a tale as old as time: tabs vs. spaces, dark mode vs. light mode, typed languages vs. untyped languages. It all depends!

But as developers use AI tools, not only are they choosing more popular libraries and languages (which are better represented in models’ training data), they are also reaching for tools that reduce risk. When code comes not just from developers but also from their AI tools, reliability becomes a much bigger part of the equation.

Typed vs. untyped

Dynamic languages like Python and JavaScript make it easy to move quickly when building, and developers who argue for those languages push for the speed and flexibility they provide. But that agility lacks the safety net you get with typed languages.

Untyped code is not gone, and it can still be great. Personally, I love that I can just write code on an average side project without defining every aspect of everything. But when you don’t control every line of code, subtle errors can pass unchecked. That’s when the type-driven safety net becomes a lot more appealing, and even necessary. AI just increases the volume of “code you didn’t personally write,” which raises the stakes.

Type systems fill a unique role: surfacing ambiguous logic and mismatches between expected inputs and outputs. They ensure that code from any source conforms to project standards. They’ve basically become a shared contract between developers, frameworks, and the AI tools that are generating more and more scaffolding and boilerplate for developers.
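
As a small, hypothetical example of that contract (the `parseCount` function below is made up for this sketch), an explicit return type stops an implementation from quietly drifting to a different output type:

```typescript
// The annotation is the contract: parseCount must hand back a number.
function parseCount(raw: string): number {
  const n = Number.parseInt(raw, 10);
  // Without the annotation, an edit could quietly `return raw` on bad input,
  // widening the return type to string | number. With it, the compiler
  // rejects that edit, forcing the failure to be handled explicitly.
  if (Number.isNaN(n)) {
    throw new TypeError(`not a count: ${JSON.stringify(raw)}`);
  }
  return n;
}

console.log(parseCount("42") + 1); // 43, numeric addition -- never "421"
```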

With AI tools and agents producing larger volumes of code and features than ever, it only makes sense that reliability is more critical. And… that is where typed languages win the debate. Not because untyped languages are “bad,” but because types catch the exact class of surprises that AI-generated code can sometimes introduce.

Is type safety that big of a deal?

Yes!

Next question.

But actually though, a 2025 academic study found that a whopping 94% of LLM-generated compilation errors were type-check failures. Imagine how much your projects would improve if 94% of your failures went away! Your life would be better. Your skin would clear. You’d get taller. Or at least you’d have fewer “why does this return a string now?” debugging sessions.

What Octoverse 2025 says about the rise of typed languages

Octoverse 2025 confirmed it: TypeScript is now the most used language on GitHub, overtaking both Python and JavaScript as of August 2025. TypeScript grew by over 1 million contributors in 2025 (+66% YoY, Aug ‘25 vs. Aug ‘24) with an estimated 2.6 million developers total. This was driven, in part, by frameworks that scaffold projects in TypeScript by default (like Astro, Next.js, and Angular). But the report also found correlative evidence that TypeScript’s rise got a boost from AI-assisted development.

That means AI is influencing not only how fast code is written, but which languages and tools developers use. And typed ecosystems are benefiting too, because they help AI slot new code into existing projects without breaking assumptions. 

It’s not just TypeScript. Other typed languages are growing fast, too! 

Luau, Roblox’s gradually typed scripting language, saw >194% YoY growth. Typst, often compared to LaTeX but with functional design and strong typing, saw >108% YoY growth. Even older languages like Java, C++, and C# saw more growth than ever in this year’s report.

That means gradual typing, optional typing, and strong typing are all seeing momentum—and each offers different levels of guardrails depending on what you’re building and how much you want AI to automate.  
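
As a rough sketch of what gradual typing looks like in TypeScript itself (the `Config` shape is invented for illustration), untyped data can enter as `unknown` and pick up exactly as much typing as you choose to enforce:

```typescript
// Untyped input (say, parsed JSON) arrives as `unknown`, not `any`,
// so it cannot be used until its shape is actually checked.
interface Config {
  retries: number;
  verbose: boolean;
}

// A type guard: returns true only if `value` matches the Config shape,
// and tells the compiler so via the `value is Config` predicate.
function isConfig(value: unknown): value is Config {
  return (
    typeof value === "object" &&
    value !== null &&
    typeof (value as Config).retries === "number" &&
    typeof (value as Config).verbose === "boolean"
  );
}

const raw: unknown = JSON.parse('{"retries": 3, "verbose": false}');

if (isConfig(raw)) {
  // Inside this branch, the compiler narrows `raw` to Config.
  console.log(raw.retries); // 3
}
```

The guard is the dial: leave data as `unknown` where the stakes are low, and tighten it into a full interface where an AI tool (or a teammate) needs guardrails.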

Where do we go from here?

Type systems don’t replace dynamic languages. But, they have become a common safety feature for developers working with and alongside AI coding tools for a reason. As we see AI-assisted development and agent development increase in popularity, we can expect type systems to become even more central to how we build and ship reliable software.

Static types help ensure that code is more trustworthy and more maintainable. They give developers a shared, predictable structure. That reduction in surprises means you can be in the flow (pun intended!) more.

Looking to stay one step ahead? Read the latest Octoverse report and try Copilot CLI.

The new identity of a developer: What changes and what doesn’t in the AI era https://github.blog/news-insights/octoverse/the-new-identity-of-a-developer-what-changes-and-what-doesnt-in-the-ai-era/ Mon, 08 Dec 2025 18:15:17 +0000 https://github.blog/?p=92692 Discover how advanced AI users are redefining software development—shifting from code producers to strategic orchestrators—through delegation, verification, and a new era of AI-fluent engineering.

The post The new identity of a developer: What changes and what doesn’t in the AI era appeared first on The GitHub Blog.


For the past four years, the conversation about AI and software development has moved faster than most people can track. Every week, there is a new tool, a new benchmark, a new paper, or a new claim about what AI will or won’t replace. There is certainly noise, but even if sometimes data seems inconclusive or contradictory, we still know more now than three years ago about AI adoption. 

With four years of AI adoption under our belt, we are also able to start seeing the shift in what it means to be a software developer. I lead key research initiatives at GitHub where I focus especially on understanding developers’ behavior, sentiment, and motivations. The time we are in with AI is pivotal, and I interview developers regularly to capture their current perspective. Most recently I conducted interviews to understand how developers see their identity, work, and preferences change as they work more closely than ever with AI.

The TL;DR? The developers who have gone furthest with AI are working differently. They describe their role less as “code producer” and more as “creative director of code,” where the core skill is not implementation, but orchestration and verification. Let’s dive in for the more detailed findings, alongside key stats from the 2025 Octoverse report.

2023: Curiosity, hesitation, and identity questions

Two years ago, we interviewed developers to understand their openness to having AI more deeply integrated into their workflow. At the time, code completions had become mainstream and agents were only a whisper in the AI space. Back then, we found developers eager to get AI’s help with complex tasks, not just filling in boilerplate code. Developers were most interested in: 

  1. Summaries and explanations to speed up how they make sense of code related to their task, and 
  2. AI-suggested plans of action that reduce activation energy. In contrast, developers wanted AI to stay at arm’s length (at least) on decision-making and on generating code that implements whole tasks.

The explanation of that qualitative trend from 2023 is important. At the time, AI was seen as still unreliable for large implementations. But there was more to the rationale. Developers were reluctant to cede implementation because it was core to their identity. 

That was our baseline in 2023, which we documented in a research-focused blog. Since then, developers’ relationship with AI has changed (and continues to evolve), making each view a snapshot. That makes it critical to update our understanding as the tools have evolved and developer behavior has consequently changed. 

One of the interviewees in 2023 wrapped their hesitation in a question: “If I’m not writing the code, what am I doing?” 

That question has been important to answer since then, especially as we hear future-looking statements about AI writing 90% of code. If we don’t describe what developers do if/when AI does the bulk of implementation, why would they ever be interested in embracing AI meaningfully for their work?   

2025: Fluency, delegation, and a new center of gravity

Fast forward to this year: we interviewed developers again, and this time, we focused on advanced users of AI. This was, in part, because we found a growing number of influential developer blogs focused on agentic workflows. They described sophisticated setups over time, and signaled optimism around coding with and delegating to AI (see here, here, and here for just a few examples). It was important to capture that rationale, assess if/how it’s shared by more AI-experienced developers, and understand what fuels it.

The developers we spoke with described their agentic workflows and how they reached AI fluency: relentless trial-and-error and pushing themselves to use AI tools every day for everything.

That was their method for gaining confidence in their AI strategy, from identifying which tools would be helpful for which task to prompting and iterating effectively.

The tools did not feel magical or intuitive all the time, but their determination eventually led them to make more informed decisions: for example, when to work synchronously with an agent, when to have multiple agents working in parallel, or when to prompt an AI tool to “interview” them for more information (and how to check what it understands). None of these AI strategists started out that way. Most of them started as AI skeptics or timid explorers. As we synthesized the interviewees’ reported experiences, we identified that they matured in their knowledge and use of AI moving from Skeptic, to Explorer, to Collaborator, to Strategist.

Two figures illustrate the four stages. Stage 1, “AI Skeptic”: low tolerance for iteration and errors. Stage 2, “AI Explorer”: uses AI for quick wins and gradually builds trust. Stage 3, “AI Collaborator”: co-creates with AI and iterates frequently. Stage 4, “AI Strategist”: plans, orchestrates, and verifies work with high iteration tolerance and multi-agent workflows.

Each stage came with a better understanding of capabilities and limitations, and different expectations around speed, iteration, and accuracy. A developer who used AI to co-create solutions (a stage we call “AI Collaborator”) knew to expect back-and-forth iteration with an agent. But when they were using exclusively code completions or boilerplate snippets (probably as an “AI Skeptic”), they expected low-latency, one-shot success—or they quickly reverted to doing things without AI.

Interestingly, each stage in a developer’s comfort with AI had a matching evolution in the tools and workflows they felt ready to adopt: completions, then a chat and copy-paste workflow, then AI-enabled IDEs, and then multi-agent workflows. The advanced AI users we interviewed used several AI tools and agents in parallel, relying on a self-configured AI stack. 

What looks from the outside like “new features” was, from the inside, a gradual widening of what developers were willing to delegate. By the time they reached that latest stage, the nature of their work had changed in a noticeable way. 

At the Strategist stage, their development sessions looked very different from the days when they worked with AI as autocomplete. They focused less on writing code in the traditional sense, and more on defining intent, guiding agents, resolving ambiguity, and validating correctness. Delegation and verification became the primary activities as they felt the center of gravity shift in their work.

This transition is the identity shift. Developers who once wondered, “If I’m not the one writing the code, what am I doing?” now answer that question in practice: they set direction, constraints, architecture, and standards. They shift—using a phrase we first heard in interviews—from code producers to creative directors of code. And, crucially, the developers who reach this level do not describe it as a loss of craft but as a reinvention of it.

This shift is hard to perceive until you experience it. The path to becoming AI-fluent is paved with trial and error, frustration, a gradual buildup of trust, and aha moments when steps start working as intended and become workflows. Most of the interviewees told us that their sentiment toward the future of software development changed only after they saw the shift in their own work. What once felt like an existential threat began to feel like a strategic advantage. Their outlook became more optimistic as they learned how to use AI tools with confidence and agency.

Ecosystem signals that reinforce the shift

These interviews are early signals about the impact of AI in developer workflows from the most advanced users. But we are already seeing their practices diffuse outward. Developers are beginning to make different choices because AI is present in the workflow and they assume it will increase. Choices about abstractions, code style, testing strategy, and even programming languages are shifting as developers adjust to a world where delegation is normal and verification is foundational.

The 2025 Octoverse report captured one striking example of this shift: TypeScript became the #1 programming language by monthly contributors on GitHub in August 2025. Many factors influence language popularity, but this particular rise says something important about the developer–AI relationship. TypeScript brings clarity and structure to a codebase, expresses intent more explicitly, and provides a type system that helps both developers and AI reason about correctness. In interviews, developers mentioned needing to give AI more guardrails and more context to reduce ambiguity and make verification easier. When AI writes large proportions of code, languages that enforce structure and surface errors early become a strategic choice. The shift toward TypeScript is a way of choosing languages that make delegation safer.

We also saw another telling signal in Octoverse: 80% of new developers on GitHub in 2025 used Copilot within their first week. This signals that developers are getting their first contact with AI at the beginning of their journey. If early contact brings early confidence, we may see developers reach more advanced stages of AI maturity sooner than previous cohorts did.

Another compelling data point was shared at GitHub Universe this year: within the first five months of the release of Copilot coding agent, GitHub’s autonomous agent that can generate pull requests, issues, or tasks, developers used it to merge more than 1 million pull requests.

Each one of those pull requests represents a small story of delegation and verification. A developer had to imagine the change, articulate intent, decompose the task, provide context, and set boundaries. They had to review, test, and validate the output before merging. Seen collectively, these pull requests are a measure of developers stepping into a new role. They show developers trying AI with increasingly meaningful units of work, and they show developers gradually building trust while taking responsibility for ensuring those units are correct.

That brings us to the natural next question: what skills support this new identity and role?

The skills that support developers’ new role

As delegation and verification become the focus, the skills developers rely on shift upward. The work moves from implementation to three layers of focus, described below: understanding the work, directing the work, and verifying the work. Across interviews, developers consistently described strengths in all three layers as essential to working confidently with AI.

1. Understanding the work

These skills help developers determine what needs to be built, why, and how to shape the problem before any code comes into play.

AI fluency

Developers need an intuitive grasp of how different AI systems behave: what they are good at, where they fail, how much context they require, and how to adjust workflows as capabilities evolve. This fluency comes from repeated use, experimentation, and pattern recognition. With increased AI fluency, developers are able to compose their AI stack: the tools and features they use for different projects and tasks, or in parallel configurations for end-to-end workflows.

Fundamentals

Even as AI takes on more implementation, deep technical understanding remains essential for developers. Knowledge of algorithms, data structures, and system behavior enables developers to evaluate complex output, diagnose hidden issues, and determine whether an AI-generated solution is sound.

Product understanding

Developers will need to increasingly think at the level of outcomes and systems, not snippets. This includes understanding user needs, defining requirements clearly, and reasoning about how a change affects the product as a whole. Framing work from an outcome perspective ensures what is delegated to AI aligns with the actual goals of the feature or system.

2. Directing the work 

These skills enable developers to guide AI systems, tools, and agents so that the work moves forward effectively and safely.

Delegation and agent orchestration

Effective delegation requires clear problem framing, breaking work into meaningful units, providing the right context, articulating constraints, and setting success criteria. Advanced developers also decide when to collaborate interactively with an agent, versus running tasks independently in the background. Strong communication—precise, thorough, and structured—turns delegation into a repeatable practice.

Developer–AI collaboration

Synchronous collaboration with agents depends on tight, iterative loops: setting stopping points, giving corrective feedback, asking agents to self-critique, or prompting them to ask clarifying questions. Some developers described instructing agents to interview them first, as a way to build shared understanding before generating any code.

Architecture and systems design

As AI handles more low-level code generation, architecture becomes even more important. Developers design the scaffolding around the work: system boundaries, patterns, data flow, and component interactions. Clear architecture gives agents a safer, more structured environment and makes integration more reliable.

3. Verifying the work 

This skill category is becoming the defining center of the developer role: ensuring correct, high-quality outputs.

Verification and quality control

AI-generated output requires rigorous scrutiny. Developers validate behavior through reviews, tests, security checks, and assumption checking. Many reported spending more time verifying work than generating it, and feeling this was the right distribution of effort. Strong verification practices are what make larger-scale delegation possible, and allow developers to gradually trust agents with meaningful units of work.

Verification was always a step in the process, usually at the end. In AI-supported workflows, it becomes a continuous practice.

Annie Vella, a Distinguished Engineer at Westpac NZ, recently wrote an exceptional post about how the software engineer role changes with AI and the new competency map for engineers building systems with LLMs and agents. Annie’s experience (and research) share many similarities with the findings from our interviews with advanced AI users. A worthy read! 

This year’s snapshot 

What started as curiosity has now become preparedness. Developers see their profession changing in real time. They believe that AI will continue to evolve rapidly, and that the pace of change will not slow. Many are adapting to the change by building AI fluency, practicing confident orchestration of tools and agents, and treating delegation and verification as core parts of their craft. They see these skills as a competitive advantage, one that will define the next era of software development.

The value of a developer is shifting toward judgment, architecture, reasoning, and responsibility for outcomes, moving their work up the ladder of abstraction. As we build tools to support developers and look to measure AI’s impact, it’s important that our perspective matches the evolution of their work and identity.

How to track the evolving landscape

There is no single source of truth for how AI is changing software development, but there are reliable signals:

  • Large-scale data reports (such as GitHub’s Octoverse report) show macro-level adoption and behavior patterns.
  • Longitudinal industry studies (e.g., DORA) reveal where productivity gains stall or compound.
  • Field research and developer interviews help us interpret big data correctly and identify trends before we see them at scale.
  • Open source activity is a leading indicator of what workflows developers adopt first, before enterprises (you can track open source activity on GitHub in the Innovation Graph, a public data resource we update quarterly).

Who did we interview?

In this round of interviews, we recruited 22 US-based participants working full time as software engineers. We used the Respondent.io platform for recruitment, and there was no requirement for interviewees to be GitHub users. Participants were selected based on a screener that assessed the depth and breadth of their AI use. We included only those who used AI for more than half of their coding work, used at least four AI tools from the thirteen we listed, and indicated experience with all of the advanced AI-assisted development activities included in the screener.  

Participants worked in organizations of various sizes (55% in large or extra-large enterprises, 41% in small- or medium-sized enterprises, and 4% in a startup). Finally, we recruited participants across the spectrum of years of professional experience (14% had 0-5 years of experience, 41% had 6-10 years, 27% had 11-15 years, and 18% had over 16 years of experience).

We are grateful to all the developers who participated in the interviews for their invaluable input.

“The local-first rebellion”: How Home Assistant became the most important project in your house https://github.blog/open-source/maintainers/the-local-first-rebellion-how-home-assistant-became-the-most-important-project-in-your-house/ Tue, 02 Dec 2025 17:19:32 +0000 https://github.blog/?p=92596 Learn how one of GitHub’s fastest-growing open source projects is redefining smart homes without the cloud.

The post “The local-first rebellion”: How Home Assistant became the most important project in your house appeared first on The GitHub Blog.


Franck Nijhof—better known as Frenck—is one of those maintainers who ended up at the center of a massive open source project not because he chased the spotlight, but because he helped hold together one of the most active, culturally important, and technically demanding open source ecosystems on the planet. As a lead of Home Assistant and a GitHub Star, Frenck guides the project that didn’t just grow. It exploded.

This year’s Octoverse report confirms it: Home Assistant was one of the fastest-growing open source projects by contributors, ranking alongside AI infrastructure giants like vLLM, Ollama, and Transformers. It also appeared in the top projects attracting first-time contributors, sitting beside massive developer platforms such as VS Code. In a year dominated by AI tooling, agentic workflows, and typed language growth, Home Assistant stood out as something else entirely: an open source system for the physical world that grew at an AI-era pace.

The scale is wild. Home Assistant is now running in more than 2 million households, orchestrating everything from thermostats and door locks to motion sensors and lighting. All on users’ own hardware, not the cloud. The contributor base behind that growth is just as remarkable: 21,000 contributors in a single year, feeding into one of GitHub’s most lively ecosystems at a time when a new developer joins GitHub every second.

In our podcast interview, Frenck explains it almost casually.

Home Assistant is a free and open source home automation platform. It allows you to connect all your devices together, regardless of the brands they’re from… And it runs locally.

Franck Nijhof, lead of Home Assistant

He smiles when he describes just how accessible it is. “Flash Home Assistant to an SD card, put it in, and it will start scanning your home,” he says. 

This is the paradox that makes Home Assistant compelling to developers: it’s simple to use, but technically enormous. A local-first, globally maintained automation engine for the home. And Frenck is one of the people keeping it all running.

The architecture built to tame thousands of device ecosystems

At its core, Home Assistant’s problem is combinatorial explosion. The platform supports “hundreds, thousands of devices… over 3,000 brands,” as Frenck notes. Each one behaves differently, and the only way to normalize them is to build a general-purpose abstraction layer that can survive vendor churn, bad APIs, and inconsistent firmware.

Instead of treating devices as isolated objects behind cloud accounts, everything is represented locally as entities with states and events. A garage door is not just a vendor-specific API; it’s a structured device that exposes capabilities to the automation engine. A thermostat is not a cloud endpoint; it’s a sensor/actuator pair with metadata that can be reasoned about.

That consistency is why people can build wildly advanced automations.

Frenck describes one particularly inventive example: “Some people install weight sensors into their couches so they actually know if you’re sitting down or standing up again. You’re watching a movie, you stand up, and it will pause and then turn on the lights a bit brighter so you can actually see when you get your drink. You get back, sit down, the lights dim, and the movie continues.”

A system that can orchestrate these interactions is fundamentally a distributed event-driven runtime for physical spaces. Home Assistant may look like a dashboard, but under the hood it behaves more like a real-time OS for the home.

Running everything locally is not a feature. It’s a hard constraint. 

Almost every mainstream device manufacturer has pivoted to cloud-centric models. Frenck points out the absurdity:

It’s crazy that we need the internet nowadays to change your thermostat.

The local-first architecture means Home Assistant can run on hardware as small as a Raspberry Pi but must handle workloads that commercial systems offload to the cloud: device discovery, event dispatch, state persistence, automation scheduling, voice pipeline inference (if local), real-time sensor reading, integration updates, and security constraints.

This architecture forces optimizations few consumer systems attempt. If any of this were offloaded to a vendor cloud, the system would be easier to build. But Home Assistant’s philosophy reverses the paradigm: the home is the data center.

Everything from SSD wear leveling on the Pi to MQTT throughput to Zigbee network topologies becomes a software challenge. And because the system must keep working offline, there’s no fallback.

This is engineering with no safety net.

The open home foundation: governance as a technical requirement

When you build a system that runs in millions of homes, the biggest long-term risk isn’t bugs. It’s ownership.

“It can never be bought, it can never be sold,” Frenck says of Home Assistant’s move to the Open Home Foundation. “We want to protect Home Assistant from the big guys in the end.”

This governance model isn’t philosophical; it is an architectural necessity. If Home Assistant ever became a commercial acquisition, cloud lock-in would follow. APIs would break. Integrations would be deprecated. Automations built over years would collapse.

A list of the fastest-growing open source projects by contributors. home-assistant/core is number 10.

The Foundation encodes three engineering constraints that ripple through every design decision:

  • Privacy: “Local control and privacy first.” All processing must occur on-device.
  • Choice: “You should be able to choose your own devices” and expect them to interoperate.
  • Sustainability: If a vendor kills its cloud service, the device must still work.

Frenck calls out Nest as an example: “If some manufacturer turns off the cloud service… that turns into e-waste.”

This is more than governance; it is technical infrastructure. It dictates API longevity, integration strategy, reverse engineering priorities, and local inference choices. It’s also a blueprint that forces the project to outlive any individual device manufacturer.

The community model that accidentally solved software quality

We don’t build Home Assistant, the community does.

“We cannot build hundreds, thousands of device integrations. I don’t have tens of thousands of devices in my home,” Frenck says.

This is where the project becomes truly unique.

Developers write integrations for devices they personally own. Reviewers test contributions against devices in their own homes. Break something, and you break your own house. Improve something, and you improve your daily life.

“That’s where the quality comes from,” Frenck says. “People run this in their own homes… and they take care that it needs to be good.”

This is the secret behind Home Assistant’s engineering velocity. Every contributor has access to production hardware. Every reviewer has a high-stakes environment to protect. No staging environment could replicate millions of real homes, each with its own weird edge cases.

Assist: A local voice assistant built before the AI hype wave

Assist is Home Assistant’s built-in voice assistant, a modular system that lets you control your home using speech without sending audio or transcripts to any cloud provider. As Frenck puts it:

We were building a voice assistant before the AI hype… we want to build something privacy-aware and local.

Rather than copying commercial assistants like Alexa or Google Assistant, Assist takes a two-layer approach that prioritizes determinism, speed, and user choice.

Stage 1: Deterministic, no-AI commands

Assist began with a structured intent engine powered by hand-authored phrases contributed by the community. Commands like “Turn on the kitchen light” or “Turn off the living room fan” are matched directly to known actions without using machine learning at all. This makes them extremely fast, reliable, and fully local. No network calls. No cloud. No model hallucinations. Just direct mapping from phrase to automation.

Stage 2: Optional AI when you want natural language

One of the more unusual parts of Assist is that AI is never mandatory. Frenck emphasizes that developers and users get to choose their inference path: “You can even say you want to connect your own OpenAI account. Or your own Google Gemini account. Or get a Llama running locally in your own home.”

Assist evaluates each command and decides whether it needs AI. If a command is known, it bypasses the model entirely.

“Home Assistant would be like, well, I don’t have to ask AI,” Frenck says. “I know what this is. Let me turn off the lights.”

The system only uses AI when a command requires flexible interpretation, making AI a fallback instead of the foundation.
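A toy version of that routing logic might look like this (hypothetical intent table and fallback function, not Assist’s actual pipeline):

```python
# Stage 1: deterministic phrase -> action mapping, no ML involved.
KNOWN_INTENTS = {
    "turn on the kitchen light": ("light.kitchen", "turn_on"),
    "turn off the living room fan": ("fan.living_room", "turn_off"),
}

def ask_llm(command: str) -> str:
    # Stage 2 stand-in: only reached when no deterministic match exists.
    # In practice this could be a local Llama, OpenAI, or Gemini backend.
    return f"LLM interpreting: {command!r}"

def handle(command: str) -> str:
    key = command.strip().lower().rstrip(".!?")
    if key in KNOWN_INTENTS:               # fast, fully local path
        entity, action = KNOWN_INTENTS[key]
        return f"{action} -> {entity}"
    return ask_llm(command)                # AI as fallback, not foundation

print(handle("Turn on the kitchen light"))  # turn_on -> light.kitchen
print(handle("Make it cozy in here"))       # falls through to the LLM
```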

Open hardware to support the system

To bootstrap development and give contributors a reference device, the team built a fully open source smart speaker—the Voice Assistant Preview Edition.

“We created a small speaker with a microphone array,” Frenck says. “It’s fully open source. The hardware is open source; the software running on it is ESPHome.”

This gives developers a predictable hardware target for building and testing voice features, instead of guessing how different microphones, DSP pipelines, or wake word configurations behave across vendors.

Hardware as a software accelerator

Most open source projects avoid hardware. Home Assistant embraced it out of practical necessity.

“In order to get the software people building the software for hardware, you need to build hardware,” Frenck says.

Home Assistant Green, its prebuilt plug-and-play hub, exists because onboarding requires reliable hardware. The Voice Assistant Preview Edition exists because the voice pipeline needs a known microphone and speaker configuration.

This is a rare pattern: hardware serves as scaffolding for software evolution. It’s akin to building a compiler and then designing a reference CPU so contributors can optimize code paths predictably.

The result is a more stable, more testable, more developer-friendly software ecosystem.

A glimpse into the future: local agents and programmable homes

The trajectory is clear. With local AI models, deterministic automations, and a stateful view of the entire home, the next logical step is agentic behavior that runs entirely offline.

If a couch can trigger a movie automation, and a brewery can run a fermentation pipeline, the home itself becomes programmable. Every sensor is an input. Every device is an actuator. Every automation is a function. The entire house becomes a runtime.

And unlike cloud-bound competitors, Home Assistant’s runtime belongs to the homeowner, not the service provider.

Frenck sums up the ethos: “We give that control to our community.”

Looking to stay one step ahead? Read the latest Octoverse report and consider trying Copilot CLI.

The post “The local-first rebellion”: How Home Assistant became the most important project in your house appeared first on The GitHub Blog.

Why developers still flock to Python: Guido van Rossum on readability, AI, and the future of programming https://github.blog/developer-skills/programming-languages-and-frameworks/why-developers-still-flock-to-python-guido-van-rossum-on-readability-ai-and-the-future-of-programming/ Tue, 25 Nov 2025 17:00:00 +0000 https://github.blog/?p=92561 Discover how Python changed developer culture—and see why it keeps evolving.

The post Why developers still flock to Python: Guido van Rossum on readability, AI, and the future of programming appeared first on The GitHub Blog.


When we shared this year’s Octoverse data with Guido van Rossum, the creator of Python, his first reaction was genuine surprise.

While TypeScript overtook Python to become the most used language on GitHub as of August 2025 (marking the biggest language shift in more than a decade), Python still grew 49% year over year in 2025, and remains the default language of AI, science, and education for developers across the world. 

“I was very surprised by that number,” Guido told us, noting how this result is different from other popularity trackers like the TIOBE Index.

To learn more, we sat down with Guido for a candid conversation about Python’s roots, its ever-expanding reach, and the choices—both big and small—that have helped turn a one-time “hobby project” into the foundation for the next generation of developers and technologies.

Watch the full interview above. 👆

The origins of Python

For Guido, Python began as a tool to solve the very real (and very painful) gap between C’s complexity and the limitations of shell scripting.

I wanted something that was much safer than C, and that took care of memory allocation, and of all the out of bounds indexing stuff, but was still an actual programming language. That was my starting point.

Guido van Rossum, creator of Python

He was working on a novel operating system, and the only available language was C. 

“In C, even the simplest utility that reads two lines from input becomes an exercise in managing buffer overflows and memory allocation,” he says. 

Shell scripts weren’t expressive enough, and C was too brittle. Building utilities for a new operating system showed just how much friction existed in the developer workflow at the time. 

Guido wanted to create a language that served as a practical tool between the pain of C and the limits of shell scripting. That led to Python, which he designed to take care of the tough parts and let programmers focus on what matters.

Python’s core DNA—clarity, friendliness, and minimal friction—was baked in from the beginning, too. It’s strangely fitting that a language that started as such a practical project now sits at the center of open source, AI, data science, and enterprise AI.

Monty Python and the language’s personality

Unlike other programming languages named for ancient philosophers or stitched-together acronyms, Python’s namesake comes from Monty Python’s Flying Circus.

“I wanted to express a little irreverence,” Guido says. “A slight note of discord in the staid world of computer languages.” 

The name “Python” wasn’t a joke—it was a design choice, and a hint that programming doesn’t have to feel solemn or elitist.  

That sense of fun and accessibility has become as valuable to Python’s brand as its syntax. Ask practically anyone who’s learned to code with Python, and they’ll talk about its readability, its welcoming error messages, and the breadth of community resources that flatten that first steep climb.

If you wrote something in Python last week and, six months from now, you’re reading that code, it’s still clear. Python’s clarity and user friendliness compared to Perl was definitely one of the reasons why Python took over Perl in the early aughts.

Python and AI: ecosystem gravity and the NumPy to ML to LLM pipeline

Python’s influence in AI isn’t accidental. It’s a signal of the broader ecosystem compounding on itself. Today, some of the world’s fastest-growing AI infrastructure is built in Python, such as PyTorch and Hugging Face Transformers.

So, why Python? Guido credits the ecosystem around Python as the primary cause: after all, once a particular language has some use and seems to be a good solution, it sparks an avalanche of new software in that language, so it can take advantage of what already exists.

Moreover, he points to key Python projects: 

  • NumPy: foundational numerical arrays
  • pandas: easier data manipulation
  • PyTorch: machine learning at scale
  • Local model runners and LLM agents: today’s frontier, with projects like Ollama leading the charge

The people now writing things for AI are familiar with Python because they started out in machine learning.

Python isn’t just the language of AI. It enabled AI to become what it is today. 

That’s due, in part, to the language’s ability to evolve without sacrificing approachability. From optional static typing to a treasure trove of open source packages, Python adapts to the needs of cutting-edge fields without leaving beginners behind.

Does Python need stronger typing in the LLM era? Guido says no. 

With AI generating more Python than ever, the natural question is: does Python need stricter typing? 

Guido’s answer was immediate: “I don’t think we need to panic and start doing a bunch of things that might make things easier for AI.” 

He believes Python’s optional typing system—while imperfect—is “plenty.”

AI should adapt to us, not the other way around.

He also offered a key insight: The biggest issue isn’t Python typing, but the training data. 

“Most tutorials don’t teach static typing,” he says. “AI models don’t see enough annotated Python.”

But LLMs can improve. “If I ask an AI to add a type annotation,” he says, “it usually researches it and gets it right.” 
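That gradual model is easy to see: annotations can be layered onto existing code without changing what it does at runtime (a hypothetical example, not from the interview):

```python
# Untyped Python: perfectly valid, and how most tutorials teach it.
def average(values):
    return sum(values) / len(values)

# The same function with optional annotations: identical at runtime,
# but now a checker (or an AI assistant) can verify calls against it.
def average_typed(values: list[float]) -> float:
    return sum(values) / len(values)

print(average([1, 2, 3]))         # 2.0
print(average_typed([1.0, 2.0]))  # 1.5
```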

This reveals a philosophy that permeates the language: Python is for developers first and foremost. AI should always meet developers where they are. 

Democratizing development, one developer-friendly error message at a time 

We asked Guido why Python remains one of the most popular first programming languages. 

His explanation is simple and powerful: “There aren’t that many things you can do wrong that produce core dumps or incorrect magical results.” 

Python tells you what went wrong, and where. And Guido sees the downstream effect constantly: “A very common theme in fan mail is: Python made my career. Without it, I wouldn’t have gotten into software at all.” 

That’s not sentimentality. It’s user research. Python is approachable because it’s designed for developers who are learning, tinkering, and exploring. 

It’s also deeply global. 

This year’s Octoverse report showed that India alone added 5M+ developers in 2025, in a year where we saw more than one developer a second join GitHub. A number of these new developers come from non-traditional education paths. 

Guido saw this coming: “A lot of Python users and contributors do not have a computer science education … because their day jobs require skills that go beyond spreadsheets.” 

The clear syntax provides a natural entry point for first-time coders and tinkerers. As we’ve seen on GitHub, the language has been a launchpad not just for CS graduates, but for scientists in Brazil, aspiring AI developers in India, and anyone looking for the shortest path from idea to implementation.

Whitespace complaints: Guido’s other inbox

Python famously uses indentation for grouping. Most developers love this. But some really don’t. 

Guido still receives personal emails complaining. 

“Everyone else thinks that’s Python’s best feature,” he says. “But there is a small group of people who are unhappy with the use of indentation or whitespaces.” 

It’s charming, relatable, and deeply on brand. 

Stability without stagnation: soft keywords and backwards compatibility

Maintaining Python’s momentum hasn’t meant standing still. Guido and the core dev team are laser-focused on backward compatibility, carefully weighing every new feature against decades of existing code.

For every new feature, we have to very carefully consider: is this breaking existing code?

Sometimes, the best ideas grow from constraints.

For instance, Python’s soft keywords (words that act as keywords only in specific contexts and remain valid identifiers everywhere else) are a recent design choice that lets the team introduce new syntax without breaking old programs. It’s a subtle but powerful engineering choice that keeps enterprises on solid ground while still allowing the language to evolve. 

This caution, often misinterpreted as reluctance, is exactly why Python has remained stable across three decades. 

For maintainers, the lessons are clear: learn widely, solve for yourself, invite input, and iterate. Python’s journey proves that what starts as a line of code to solve your own problem can become a bridge to millions of developers around the world.

Designed for developers. Ready for whatever comes next. 

Python’s future remains bright because its values align with how developers actually learn and build: 

  • Readability
  • Approachability 
  • Stability
  • A touch of irreverence

As AI continues to influence software development—and Octoverse shows that 80% of new developers on GitHub use GitHub Copilot in their first week—Python’s clarity matters more than ever. 

And as the next generation begins coding with AI, Python will be there to help turn ideas into implementations.

Looking to stay one step ahead? Read the latest Octoverse report and try Copilot CLI.

TypeScript, Python, and the AI feedback loop changing software development https://github.blog/news-insights/octoverse/typescript-python-and-the-ai-feedback-loop-changing-software-development/ Thu, 13 Nov 2025 16:00:00 +0000 https://github.blog/?p=92271 An interview with the leader of GitHub Next, Idan Gazit, on TypeScript, Python, and what comes next.

The post TypeScript, Python, and the AI feedback loop changing software development appeared first on The GitHub Blog.


When people talk about AI and software development, the focus usually lands on productivity: faster pull requests, fewer boilerplate chores, auto-generated tests, autocomplete that feels psychic. But according to Idan Gazit, who leads GitHub Next—the team behind Copilot and GitHub’s long-range R&D—that’s the shallow end of the change curve.

The deeper shift is happening before a single line of code is written.

“AI isn’t just changing how we write code,” Gazit says. “It’s starting to change what we choose to build with in the first place.”

That shift is already visible in this year’s Octoverse report. In 2025, TypeScript overtook both JavaScript and Python as the most-used language on GitHub—a 66% year-over-year surge and the biggest language movement in more than a decade. 

But the story isn’t “TypeScript beats Python.” It’s that AI is beginning to shape language trends from the inside out.

The last generation of change was about where code runs: cloud, containers, CI/CD, open source ecosystems. The next one is about what code is made of, and why those choices suddenly have different stakes.

TypeScript passed Python. But the real story is why.

Developers don’t usually switch languages just for philosophical reasons—they switch when something makes their work meaningfully faster, simpler, or less risky. And increasingly, what feels “easier” is tied to how well AI tools will support their work with that language.

“Statically typed languages give you guardrails,” Gazit says. “If an AI tool is going to generate code for me, I want a fast way to know whether that code is correct. Explicit types give me that safety net.”

Typed languages reduce hallucination surface area. They also give models more structure to reason about during generation. That’s not a theoretical benefit. It’s now a behavioral signal in the data:

  • AI models tend to perform better on languages which expose information about correctness, like a type system
  • Developers using AI tools are more likely to adopt typed languages for new projects
  • The more teams rely on AI assistance, the more language choice becomes an AI-compatibility decision, not merely a personal preference
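Python’s optional types illustrate the same guardrail idea: explicit structure gives a fast, mechanical way to check generated code (hypothetical types and data; a checker such as mypy or pyright would flag a mistyped field statically):

```python
from typing import TypedDict

# Explicit structure: every commit record must carry these fields,
# so a generated call that invents a field is caught before review.
class Commit(TypedDict):
    sha: str
    additions: int
    deletions: int

def churn(commits: list[Commit]) -> int:
    # Total lines touched across the history.
    return sum(c["additions"] + c["deletions"] for c in commits)

history: list[Commit] = [
    {"sha": "a1b2c3", "additions": 10, "deletions": 4},
    {"sha": "d4e5f6", "additions": 2, "deletions": 7},
]
print(churn(history))  # 23
```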

That shift sets up a feedback loop.

AI assistance is a new consideration when developers are selecting languages and frameworks

AI models are strongest at authoring code in popular languages: TypeScript, Python, Java, and Go, just to name a few.

“If the model has seen a trillion examples of TypeScript and only thousands of Haskell, it’s just going to be better at TypeScript,” Gazit says. “That changes the incentive before you even start coding.”

If an AI tool is going to generate code for me, I want a fast way to know whether that code is correct. Explicit types give me that safety net.

Idan Gazit, head of GitHub Next

Before AI, picking a language was a tradeoff between runtime, library ecosystem, and personal fluency. After AI, a new constraint appears: How much lift will the model give me if I choose this language?

“Python is the dominant language for machine learning, data science, and model training,” Gazit says. “Why would I not choose the one that already has the most robust frameworks and libraries? Those are wheels I don’t need to reinvent.” So TypeScript isn’t winning against Python; each is winning in the situations where it is the right tool for the job, and where AI makes it more valuable.

The surprise winners of the AI era: the “duct tape” languages

One of the most unexpected signals in the Octoverse data wasn’t about TypeScript or Python—it was about Bash.

Shell scripting saw +206% year-over-year growth in AI-generated projects. So what gives?

Because AI makes painful languages tolerable.

“Very few developers love writing Bash,” Gazit says. “But everybody needs it. It’s the duct tape of software. And now that I can ask an agent to write the unpleasant parts for me, I can use the right tool for the job without weighing that tradeoff.”

If AI automates the drudgery layer of programming, the question stops being “Is this language enjoyable?” and becomes “Should I consider using it when I don’t have to write the code myself?”

Very few developers love writing Bash. But everybody needs it. It’s the duct tape of software. And now that I can ask an agent to write the unpleasant parts for me, I can use the right tool for the job without weighing that tradeoff.

Enterprises aren’t asking “Should we adopt AI?” anymore. They’re asking “What happens after we do?”

“A lot of enterprises have been sitting on the sidelines, waiting to see when the water is warm enough to jump in,” Gazit says. “Now they’re seeing the value: junior developers ramp faster, and senior developers spend less time on toil and more on architecture.”

That creates second-order effects:

| Before AI | After AI |
| --- | --- |
| Skill measured by lines of code | Skill measured by validation, architecture, debugging |
| Juniors slow to ship | Juniors ship faster than seniors can review |
| Senior devs write the hardest code | Senior devs now judge the hardest code |
| Tooling was mostly a matter of taste—IDEs, linters, build setups, etc. | Tooling now defines the surface area AI can operate on: the wrong stack can block or limit agentic assistance |

Typed languages accelerate this shift—the stronger the safety rails, the more work can be handed to automation.

The next horizon: when language stops being a constraint

Today, language choice matters because runtimes are still fragmented. Browsers require JavaScript. Models need Python. Firmware expects C.

But that’s already eroding.

“WebAssembly is starting to change the rules,” Gazit says. “If any language can target Wasm and run everywhere, that removes one key consideration when picking your stack.”

Combine that with AI-generated code, and you get a plausible future:

  • Developer writes in Rust (or Go, or Python)
  • AI generates code in that language
  • Compiler targets Wasm
  • The same code runs on web, edge, cloud, local sandbox

That’s not a TypeScript-wins future. It’s a portability-wins future, and a natural extension to the rise of containerization over the last decade as a means of packaging and running software.

Languages may end up competing less on syntax, and more on ecosystem leverage: package depth, tooling maturity, model familiarity, debugging ergonomics. We’re not fully in that world yet, but the early signals from AI-driven tooling to Wasm-powered portability suggest it’s coming faster than most teams expect.

What developers should actually take from this

This isn’t a “learn TypeScript now” blog (there are enough of those out there, to be sure). 

Here are the signals that matter:

| Shift | What it really means |
| --- | --- |
| Typed languages rising | AI benefits from structure |
| Python stays dominant in AI | Ecosystems outlast language/framework fashions |
| Shell scripts up +206% | AI removes pain barriers, not just productivity barriers |
| Enterprises adopting AI fast | The definition of “senior engineer” is changing next |
| WebAssembly maturing | Language loyalty gets replaced by language interoperability |

The takeaway isn’t about switching stacks. It’s about optimizing for leverage, not loyalty.

The languages and tools that survive the next decade won’t be the ones developers love most—they’ll be the ones that give developers and machines the most shared advantage.

Looking to stay one step ahead? 

Read the latest Octoverse report and consider trying Copilot CLI.

More resources:

What 986 million code pushes say about the developer workflow in 2025 https://github.blog/news-insights/octoverse/what-986-million-code-pushes-say-about-the-developer-workflow-in-2025/ Fri, 07 Nov 2025 16:00:00 +0000 https://github.blog/?p=92246 Nearly a billion commits later, the way we ship code has changed for good. Here’s what the 2025 Octoverse data says about how devs really work now.

The post What 986 million code pushes say about the developer workflow in 2025 appeared first on The GitHub Blog.


If you’re building software today, you’ve probably noticed that it’s like… really fast now. And that’s the thing: it’s not just that we code faster. It’s how we code, review, and ship that has changed (and is changing).

You might have seen the Octoverse 2025 report, but in case you haven’t, the stats are pretty wild: developers created 230+ repositories per minute and pushed 986 million commits last year. Almost a billion commits! With a b!

Because developers (and teams of developers) are moving faster overall, they’re making different choices. And when they move faster, their workflows change, too.

Iteration is the default state

What’s really interesting is that this doesn’t feel like a temporary spike. It feels like an actual long-term shift in iteration. The days of shipping big releases once per quarter are rapidly going away.

Developers are pushing constantly, not just when things are “ready.” Smaller and more frequent commits are becoming more of the norm. Personally, I love that. Nobody wants to review a gigantic, 1000-line pull request all the time (only to inevitably plop in a “LGTM” as their eyes glaze over). It’s still more code, shipped faster, but in smaller bursts. 

The new normal is lightweight commits. You fix a bug, write a small feature, adjust some configurations, and… push. The shift we’re seeing is that things continue to move, not that things are “done” in huge chunks, because “done”-ness is temporary!

“~~Art~~ Code is never finished, only ~~abandoned~~ iterated upon.”

~~Leonardo Da Vinci~~ Cassidy, as well as most developers at this point

And devs know that shipping constantly is about reducing risk, too. Small, frequent changes are easier to debug, and easier to roll back if things go wrong. You don’t have to sift through a month’s worth of changes to get something fixed. This cycle changes how teams think about quality, about communication, and even hiring. If your team is still moving at a pace where they wait weeks to ship something, your team honestly isn’t working like a lot of the world is anymore.

Shipping looks different now

Because we’re iterating differently, we’re shipping differently. In practice, that looks like:

  • More feature flags: Feature flags used to be “for A/B testing and maybe the spooky experimental feature.” Now they’re core to how we ship incomplete work safely. Feature flags are everywhere and let teams ship code behind a toggle. You can push that “maybe” feature to prod, see how it behaves, and then turn it off instantly if something goes sideways. Teams don’t have to hold up releases to finish edge cases. And feature flags are more a part of main workflows now instead of an afterthought.
  • CI/CD runs everything: Every push sets off a chain of events: tests, builds, artifact generations, security scans… if it passes, it deploys. Developers expect pipelines to kick in automatically, and manual deploys are more and more rare.
  • Smaller, focused pull requests: Pull requests simply aren’t novels anymore. We’re seeing more short, readable pull requests with a single purpose. It’s easier and faster to review, and that mental overhead alone increases speed and saves us some brain cells.
  • Tests drive momentum: Developers used 11.5 billion GitHub Actions minutes running tests last year (a 35% increase! That’s with a b! Again!). With all this automation, we’re seeing unit tests, integration tests, end-to-end tests, and all the tests becoming more and more necessary because automation is how we keep up with the new pace.
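The first pattern above, feature flags, can be sketched in a few lines (a hypothetical toggle via an environment variable; real systems use a flag service with per-user targeting):

```python
import os

# Ship code to production behind a toggle that can be flipped off
# instantly if something goes sideways, without a redeploy.
def flag_enabled(name: str, default: bool = False) -> bool:
    value = os.environ.get(f"FLAG_{name.upper()}")
    if value is None:
        return default
    return value.lower() in ("1", "true", "on", "yes")

def render_checkout() -> str:
    # The "maybe" feature lives in prod but only runs when toggled on.
    if flag_enabled("new_checkout"):
        return "new checkout flow"
    return "classic checkout flow"
```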

How teams communicate should also change

We know it’s a fact now that developer workflows have changed, but I personally think that communication around development should also follow suit.

This is how I envision that future:

  • Standups are shorter (or async).
  • Status updates live in issues (and “where code lives,” not meetings).
  • “Blocked because the pull request isn’t reviewed yet” is no longer acceptable.
  • Hiring shifts toward people who can ship fast and communicate clearly.

Yes, the code got faster, so developers have to move faster as well.

Developers are still part of the engineering-speed equation. But developer workflows should never be what slows things down.

Take this with you

It’ll be interesting to see what developer workflows in 2026 look like after such rapid changes in 2025.

I think “AI fatigue” is incredibly real (and valid) and we’ll see many tools fall by the wayside, of course, as the natural productivity enhancers succeed and the ones that add friction go away. But I also think new standards and tooling will emerge as our new “baseline” for our ever-changing success metrics.

In the future, specs and code will live closer together (Markdown-to-code workflows are only going to grow). That will mean more communication across teams, and perhaps even more documentation overall. And we’ll continue to see more and more constant and collaborative shipping (even from companies that are still slow to adopt AI tooling) because it’s necessary.

This year, we’ve seen a lot of growth across the board in terms of pull requests, projects overall, contributions, and so on… so perhaps we’ll see some stabilization? 

But, of course, the only constant is change.

Looking to stay one step ahead? 

Read the latest Octoverse report and consider trying Copilot CLI.
