The latest on programming languages - The GitHub Blog
Updates, ideas, and inspiration from GitHub to help developers build and design software.

7 learnings from Anders Hejlsberg: The architect behind C# and TypeScript
https://github.blog/developer-skills/programming-languages-and-frameworks/7-learnings-from-anders-hejlsberg-the-architect-behind-c-and-typescript/
Tue, 27 Jan 2026
Anders Hejlsberg shares lessons from C# and TypeScript on fast feedback loops, scaling software, open source visibility, and building tools that last.

Anders Hejlsberg’s work has shaped how millions of developers code. Whether or not you recognize his name, you’ve likely touched his work: he’s the creator of Turbo Pascal and Delphi, the lead architect of C#, and the designer of TypeScript.

We sat down with Hejlsberg to discuss his illustrious career and what it has felt like to watch his innovations stand up to real-world pressure. In a long-form conversation, Hejlsberg reflects on what language design looks like once the initial excitement fades, when performance limits appear, when open source becomes unavoidable, and how AI can impact a tool’s original function.

What emerges is a set of patterns for building systems that survive contact with scale. Here’s what we learned.

Fast feedback matters more than almost anything else

Hejlsberg’s early instincts were shaped by extreme constraints. In the era of 64KB machines, there was no room for abstraction that did not pull its weight.

“You could keep it all in your head,” he recalls.

When you typed your code, you wanted to run it immediately.

Anders Hejlsberg

Turbo Pascal’s impact did not come from the Pascal language itself. It came from shortening the feedback loop. Edit, compile, run, fail, repeat, without touching disk or waiting for tooling to catch up. That tight loop respected developers’ time and attention.

The same idea shows up decades later in TypeScript, although in a different form. The language itself is only part of the story. Much of TypeScript’s value comes from its tooling: incremental checking, fast partial results, and language services that respond quickly even on large codebases.
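That responsiveness is something teams can opt into explicitly. As a small illustrative sketch (the compiler options are real, but the project layout around them is assumed), a tsconfig.json can enable incremental checking so repeated runs only recheck what changed:

```json
{
  "compilerOptions": {
    "incremental": true,
    "tsBuildInfoFile": "./.tsbuildinfo",
    "noEmit": true,
    "strict": true
  }
}
```

Pairing this with `tsc --watch` keeps the edit-check loop tight even on large codebases.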

The lesson here is not abstract. Developers can apply this directly to how they evaluate and choose tools. Fast feedback changes behavior. When errors surface quickly, developers experiment more, refactor more confidently, and catch problems closer to the moment they are introduced. When feedback is slow or delayed, teams compensate with conventions, workarounds, and process overhead. 

Whether you’re choosing a language, framework, or internal tooling, responsiveness matters. Tools that shorten the distance between writing code and understanding its consequences tend to earn trust. Tools that introduce latency, even if they’re powerful, often get sidelined. 

Scaling software means letting go of personal preferences 

As Hejlsberg moved from largely working alone to leading teams, particularly during the Delphi years, the hardest adjustment wasn’t technical.

It was learning to let go of personal preferences.

You have to accept that things get done differently than you would have preferred. Fixing it would not really change the behavior anyway.

Anders Hejlsberg

That mindset applies well beyond language design. Any system that needs to scale across teams requires a shift from personal taste to shared outcomes. The goal stops being code that looks the way you would write it, and starts being code that many people can understand, maintain, and evolve together. C# did not emerge from a clean-slate ideal. It emerged from conflicting demands. Visual Basic developers wanted approachability, C++ developers wanted power, and Windows demanded pragmatism.

The result was not theoretical purity. It was a language that enough people could use effectively.

Languages do not succeed because they are perfectly designed. They succeed because they accommodate the way teams actually work.

Why TypeScript extended JavaScript instead of replacing it

TypeScript exists because JavaScript succeeded at a scale few languages ever reach. As browsers became the real cross-platform runtime, teams started building applications far larger than dynamic typing comfortably supports.

Early attempts to cope were often extreme. Some teams compiled other languages into JavaScript just to get access to static analysis and refactoring tools.

That approach never sat well with Hejlsberg.

Telling developers to abandon the ecosystem they were already in was not realistic. Creating a brand-new language in 2012 would have required not just a compiler, but years of investment in editors, debuggers, refactoring tools, and community adoption.

Instead, TypeScript took a different path. It extended JavaScript in place, inheriting its flaws while making large-scale development more tractable.

This decision was not ideological, but practical. TypeScript succeeded because it worked with the constraints developers already had, rather than asking them to abandon existing tools, libraries, and mental models. 

The broader lesson is about compromise. Improvements that respect existing workflows tend to spread while improvements that require a wholesale replacement rarely do. In practice, meaningful progress often comes from making the systems you already depend on more capable instead of trying to start over.

Visibility is a part of what makes open source work

TypeScript did not take off immediately. Early releases were nominally open source, but development still happened largely behind closed doors.

That changed in 2014 when the project moved to GitHub and adopted a fully public development process. Features were proposed through pull requests, tradeoffs were discussed in the open, and issues were prioritized based on community feedback.

This shift made decision-making visible. Developers could see not just what shipped, but why certain choices were made and others were not. For the team, it also changed how work was prioritized. Instead of guessing what mattered most, they could look directly at the issues developers cared about.

The most effective open source projects do more than share code. They make decision-making visible so contributors and users can understand how priorities are set, and why tradeoffs are made.

Leaving JavaScript as an implementation language was a necessary break

For many years, TypeScript was self-hosted. The compiler was written in TypeScript and ran as JavaScript. This enabled powerful browser-based tooling and made experimentation easy.

Over time, however, the limitations became clear. JavaScript is single-threaded, has no shared-memory concurrency, and its object model is flexible (but expensive). As TypeScript projects grew, the compiler was leaving a large amount of available compute unused.

The team reached a point where further optimization would not be enough. They needed a different execution model.

The controversial decision was to port the compiler to Go.

This was a port, not a redesign: the goal was semantic fidelity. The new compiler needed to behave exactly like the old one, including quirks and edge cases. Rust, despite its popularity, would have required significant restructuring due to ownership constraints and the compiler’s pervasive cyclic data structures. Go’s garbage collection and structural similarity made it possible to preserve behavior while unlocking performance and concurrency.
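To see why cyclic data structures mattered, consider a minimal, hypothetical AST node shape (invented for illustration, not the actual compiler’s types): every node points down to its children and back up to its parent, which is trivial under garbage collection but fights Rust’s ownership rules.

```typescript
// Hypothetical minimal AST node, sketching the parent/child cycles a
// compiler builds constantly. Under a garbage collector (TypeScript, Go)
// the back-edge is free; under Rust's ownership model it forces
// Rc/RefCell, raw pointers, or an arena-based redesign.
interface AstNode {
  kind: string;
  parent?: AstNode;   // back-edge: creates the cycle
  children: AstNode[];
}

const file: AstNode = { kind: "SourceFile", children: [] };
const fn: AstNode = { kind: "FunctionDeclaration", parent: file, children: [] };
file.children.push(fn);

// Walking up and down the tree is straightforward.
console.log(fn.parent?.kind); // "SourceFile"
```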

The result was substantial performance gains, split between native execution and parallelism. More importantly, the community did not have to relearn the compiler’s behavior.

Sometimes the most responsible choice isn’t the most ambitious one, but instead preserves behavior, minimizes disruption, and removes a hard limit that no amount of incremental optimization can overcome.

In an AI-driven workflow, grounding matters more than generation

Hejlsberg is skeptical of the idea of AI-first programming languages. Models are best at languages they have already seen extensively, which naturally favors mainstream ecosystems like JavaScript, Python, and TypeScript.

But AI does change things when it comes to tooling.

The traditional IDE model assumed a developer writing code and using tools for assistance along the way. Increasingly, that relationship is reversing. AI systems generate code. Developers supervise and correct. Deterministic tools like type checkers and refactoring engines provide guardrails that prevent subtle errors.

In that world, the value of tooling is not creativity. It is accuracy and constraint. Tools need to expose precise semantic information so that AI systems can ask meaningful questions and receive reliable answers.

The risk is not that AI systems will generate bad code. Instead, it’s that they will generate plausible, confident code that lacks enough grounding in the realities of a codebase. 

For developers, this shifts where attention should go. The most valuable tools in an AI-assisted workflow aren’t the ones that generate the most code, but the ones that constrain it correctly. Strong type systems, reliable refactoring tools, and accurate semantic models become essential guardrails. They provide the structure that allows AI output to be reviewed, validated, and corrected efficiently instead of trusted blindly. 
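One concrete form such a guardrail takes in TypeScript is exhaustiveness checking. In this illustrative sketch (the `Shape` union and `area` function are invented), if generated code adds a new variant but misses a case, the `never` assignment stops compiling:

```typescript
// A discriminated union plus a `never` check: the compiler, not a code
// review, guarantees every variant is handled.
type Shape =
  | { kind: "circle"; radius: number }
  | { kind: "rect"; width: number; height: number };

function area(s: Shape): number {
  switch (s.kind) {
    case "circle":
      return Math.PI * s.radius ** 2;
    case "rect":
      return s.width * s.height;
    default: {
      // Adding a new Shape variant makes this assignment a type error,
      // so forgotten cases surface at compile time.
      const unhandled: never = s;
      return unhandled;
    }
  }
}

console.log(area({ kind: "rect", width: 3, height: 4 })); // 12
```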

Why open collaboration is critical

Despite the challenges of funding and maintenance, Hejlsberg remains optimistic about open collaboration. A big reason is institutional memory: years of discussion, decisions, and tradeoffs remain searchable and visible, available to anyone who wants to understand how and why a system evolved, rather than disappearing into private email threads or internal systems.

“We have 12 years of history captured in our project,” he explains. “If someone remembers that a discussion happened, we can usually find it. The context doesn’t disappear into email or private systems.”

That visibility changes how systems evolve. Design debates, rejected ideas, and tradeoffs remain accessible long after individual decisions are made. For developers joining a project later, that shared context often matters as much as the code itself.

A pattern that repeats across decades

Across four decades of language design, the same themes recur:

  • Fast feedback loops matter more than elegance
  • Systems need to accommodate imperfect code written by many people
  • Behavioral compatibility often matters more than architectural purity
  • Visible tradeoffs build trust

These aren’t secondary concerns. They’re fundamental decisions that determine whether a tool can adapt as its audience grows. Moreover, they ground innovation by ensuring new ideas can take root without breaking what already works.

For anyone building tools they want to see endure, those fundamentals matter as much as any breakthrough feature. And that may be the most important lesson of all.

Did you know TypeScript was the top language used in 2025? Read more in the Octoverse report >

Why AI is pushing developers toward typed languages
https://github.blog/ai-and-ml/llms/why-ai-is-pushing-developers-toward-typed-languages/
Thu, 08 Jan 2026
AI is settling the “typed vs. untyped” debate by turning type systems into the safety net for code you didn’t write yourself.

It’s a tale as old as time: tabs vs. spaces, dark mode vs. light mode, typed languages vs. untyped languages. It all depends!

But as developers use AI tools, they’re not only choosing more popular libraries and languages (the ones models have seen most in training), they’re also reaching for tools that reduce risk. When code comes not just from developers but also from their AI tools, reliability becomes a much bigger part of the equation.

Typed vs. untyped

Dynamic languages like Python and JavaScript make it easy to move quickly when building, and developers who argue for those languages push for the speed and flexibility they provide. But that agility lacks the safety net you get with typed languages.

Untyped code isn’t going anywhere, and it can still be great. Personally, I love that I can just write code on an average side project without defining every aspect of every object. But when you don’t control every line of code, subtle errors can slip through unchecked. That’s when the type-driven safety net becomes a lot more appealing, and even necessary. AI just increases the volume of “code you didn’t personally write,” which raises the stakes.

Type systems fill a unique role: they surface ambiguous logic and mismatches between expected inputs and outputs. They ensure that code from any source conforms to project standards. They’ve essentially become a shared contract between developers, frameworks, and the AI tools that generate more and more scaffolding and boilerplate for developers.
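As a small sketch of that contract idea (the `Invoice` shape here is invented for illustration), an annotated signature tells every caller, human or AI, exactly what shapes are acceptable:

```typescript
// The interface is the contract; the compiler enforces it for any
// caller, whether the call site was written by a person or a model.
interface Invoice {
  id: string;
  amountCents: number;
}

function totalCents(invoices: Invoice[]): number {
  return invoices.reduce((sum, inv) => sum + inv.amountCents, 0);
}

console.log(totalCents([{ id: "a1", amountCents: 1250 }])); // 1250

// A generated call site that passes a string amount would be rejected
// at compile time rather than misbehaving at runtime:
// totalCents([{ id: "a2", amountCents: "1250" }]); // type error
```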

With AI tools and agents producing larger volumes of code and features than ever, it only makes sense that reliability is more critical. And… that is where typed languages win the debate. Not because untyped languages are “bad,” but because types catch the exact class of surprises that AI-generated code can sometimes introduce.

Is type safety that big of a deal?

Yes!

Next question.

But actually, though: a 2025 academic study found that a whopping 94% of LLM-generated compilation errors were type-check failures. Imagine catching 94% of those failures at compile time, before they ever reach production! Your life would be better. Your skin would clear. You’d get taller. Or at least you’d have fewer “why does this return a string now?” debugging sessions.
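The classic instance of that failure class is worth a minimal sketch: JavaScript’s `+` happily concatenates when one operand is a string, while an annotated signature turns the same mistake into a compile-time error.

```typescript
// Untyped: "+" silently switches from addition to concatenation.
function addLoose(a: any, b: any) {
  return a + b;
}

// Typed: mismatched arguments never reach runtime.
function addStrict(a: number, b: number): number {
  return a + b;
}

console.log(addLoose("1", 2)); // "12" (plausible-looking, but wrong)
console.log(addStrict(1, 2));  // 3
// addStrict("1", 2);          // rejected by the compiler
```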

What Octoverse 2025 says about the rise of typed languages

Octoverse 2025 confirmed it: TypeScript is now the most used language on GitHub, overtaking both Python and JavaScript as of August 2025. TypeScript grew by over 1 million contributors in 2025 (+66% YoY, Aug ‘25 vs. Aug ‘24) with an estimated 2.6 million developers total. This was driven, in part, by frameworks that scaffold projects in TypeScript by default (like Astro, Next.js, and Angular). But the report also found correlative evidence that TypeScript’s rise got a boost from AI-assisted development.

That means AI is influencing not only how fast code is written, but which languages and tools developers use. And typed ecosystems are benefiting too, because they help AI slot new code into existing projects without breaking assumptions. 

It’s not just TypeScript. Other typed languages are growing fast, too! 

  • Luau, Roblox’s gradually typed scripting language, saw >194% YoY growth.
  • Typst, often compared to LaTeX but with functional design and strong typing, saw >108% YoY growth.
  • Even established languages like Java, C++, and C# saw more growth than ever in this year’s report.

That means gradual typing, optional typing, and strong typing are all seeing momentum—and each offers different levels of guardrails depending on what you’re building and how much you want AI to automate.  

Where do we go from here?

Type systems don’t replace dynamic languages. But, they have become a common safety feature for developers working with and alongside AI coding tools for a reason. As we see AI-assisted development and agent development increase in popularity, we can expect type systems to become even more central to how we build and ship reliable software.

Static types help ensure that code is more trustworthy and more maintainable. They give developers a shared, predictable structure. That reduction in surprises means you can be in the flow (pun intended!) more.

Looking to stay one step ahead? Read the latest Octoverse report and try Copilot CLI.

Why developers still flock to Python: Guido van Rossum on readability, AI, and the future of programming
https://github.blog/developer-skills/programming-languages-and-frameworks/why-developers-still-flock-to-python-guido-van-rossum-on-readability-ai-and-the-future-of-programming/
Tue, 25 Nov 2025
Discover how Python changed developer culture—and see why it keeps evolving.

When we shared this year’s Octoverse data with Guido van Rossum, the creator of Python, his first reaction was genuine surprise.

While TypeScript overtook Python to become the most used language on GitHub as of August 2025 (marking the biggest language shift in more than a decade), Python still grew 49% year over year in 2025, and remains the default language of AI, science, and education for developers across the world. 

“I was very surprised by that number,” Guido told us, noting how this result is different from other popularity trackers like the TIOBE Index.

To learn more, we sat down with Guido for a candid conversation about Python’s roots, its ever-expanding reach, and the choices—both big and small—that have helped turn a one-time “hobby project” into the foundation for the next generation of developers and technologies.

The origins of Python

For Guido, Python began as a tool to solve the very real (and very painful) gap between C’s complexity and the limitations of shell scripting.

I wanted something that was much safer than C, and that took care of memory allocation, and of all the out of bounds indexing stuff, but was still an actual programming language. That was my starting point.

Guido van Rossum, creator of Python

He was working on a novel operating system, and the only available language was C. 

“In C, even the simplest utility that reads two lines from input becomes an exercise in managing buffer overflows and memory allocation,” he says. 

Shell scripts weren’t expressive enough, and C was too brittle. Building utilities for a new operating system showed just how much friction existed in the developer workflow at the time. 

Guido wanted to create a language that served as a practical tool between the pain of C and the limits of shell scripting. That led to Python, which he designed to take care of the tough parts and let programmers focus on what matters.

Python’s core DNA—clarity, friendliness, and minimal friction—was baked in from the beginning, too. It’s strangely fitting that a language that started as such a practical project now sits at the center of open source, AI, data science, and enterprise AI.

Monty Python and the language’s personality

Unlike other programming languages named for ancient philosophers or stitched-together acronyms, Python’s namesake comes from Monty Python’s Flying Circus.

“I wanted to express a little irreverence,” Guido says. “A slight note of discord in the staid world of computer languages.” 

The name “Python” wasn’t a joke—it was a design choice, and a hint that programming doesn’t have to feel solemn or elitist.  

That sense of fun and accessibility has become as valuable to Python’s brand as its syntax. Ask practically anyone who’s learned to code with Python, and they’ll talk about its readability, its welcoming error messages, and the breadth of community resources that flatten that first steep climb.

If you wrote something in Python last week and, six months from now, you’re reading that code, it’s still clear. Python’s clarity and user friendliness compared to Perl was definitely one of the reasons why Python took over Perl in the early aughts.

Python and AI: ecosystem gravity and the NumPy to ML to LLM pipeline

Python’s influence in AI isn’t accidental. It’s a signal of the broader ecosystem compounding on itself. Today, some of the world’s fastest-growing AI infrastructure is built in Python, such as PyTorch and Hugging Face Transformers.

So, why Python? Guido credits the ecosystem as the primary cause: once a language gains some use and proves itself a good solution, it sparks an avalanche of new software in that language, with each new project taking advantage of what already exists.

Moreover, he points to key Python projects: 

  • NumPy: foundational numerical arrays
  • pandas: easier data manipulation
  • PyTorch: machine learning at scale
  • Local model runners and LLM agents: today’s frontier, with projects like ollama leading the charge

The people now writing things for AI are familiar with Python because they started out in machine learning.

Python isn’t just the language of AI. It enabled AI to become what it is today. 

That’s due, in part, to the language’s ability to evolve without sacrificing approachability. From optional static typing to a treasure trove of open source packages, Python adapts to the needs of cutting-edge fields without leaving beginners behind.

Does Python need stronger typing in the LLM era? Guido says no. 

With AI generating more Python than ever, the natural question is: does Python need stricter typing? 

Guido’s answer was immediate: “I don’t think we need to panic and start doing a bunch of things that might make things easier for AI.” 

He believes Python’s optional typing system—while imperfect—is “plenty.”

AI should adapt to us, not the other way around.

He also offered a key insight: The biggest issue isn’t Python typing, but the training data. 

“Most tutorials don’t teach static typing,” he says. “AI models don’t see enough annotated Python.”

But LLMs can improve. “If I ask an AI to add a type annotation,” he says, “it usually researches it and gets it right.” 

This reveals a philosophy that permeates the language: Python is for developers first and foremost. AI should always meet developers where they are. 

Democratizing development, one developer-friendly error message at a time 

We asked why Python remains one of the most popular first programming languages. 

His explanation is simple and powerful: “There aren’t that many things you can do wrong that produce core dumps or incorrect magical results.” 

Python tells you what went wrong, and where. And Guido sees the downstream effect constantly: “A very common theme in fan mail is: Python made my career. Without it, I wouldn’t have gotten into software at all.” 

That’s not sentimentality. It’s user research. Python is approachable because it’s designed for developers who are learning, tinkering, and exploring. 

It’s also deeply global. 

This year’s Octoverse report showed that India alone added 5M+ developers in 2025, in a year where we saw more than one developer a second join GitHub. A number of these new developers come from non-traditional education paths. 

Guido saw this coming: “A lot of Python users and contributors do not have a computer science education … because their day jobs require skills that go beyond spreadsheets.” 

The clear syntax provides a natural entry point for first-time coders and tinkerers. As we’ve seen on GitHub, the language has been a launchpad not just for CS graduates, but for scientists in Brazil, aspiring AI developers in India, and anyone looking for the shortest path from idea to implementation.

Whitespace complaints: Guido’s other inbox

Python famously uses indentation for grouping. Most developers love this. But some really don’t. 

Guido still receives personal emails complaining. 

“Everyone else thinks that’s Python’s best feature,” he says. “But there is a small group of people who are unhappy with the use of indentation or whitespaces.” 

It’s charming, relatable, and deeply on brand. 

Stability without stagnation: soft keywords and backwards compatibility

Maintaining Python’s momentum hasn’t meant standing still. Guido and the core dev team are laser-focused on backward compatibility, carefully weighing every new feature against decades of existing code.

For every new feature, we have to very carefully consider: is this breaking existing code?

Sometimes, the best ideas grow from constraints.

For instance, Python’s soft keywords (identifiers such as match that act as keywords only in specific contexts) are a recent architectural decision that lets the team introduce new syntax without breaking old programs that already use those names as variables. It’s a subtle but powerful engineering choice that keeps enterprises on solid ground while still allowing the language to evolve.

This caution, often misinterpreted as reluctance, is exactly why Python has remained stable across three decades. 

For maintainers, the lessons are clear: learn widely, solve for yourself, invite input, and iterate. Python’s journey proves that what starts as a line of code to solve your own problem can become a bridge to millions of developers around the world.

Designed for developers. Ready for whatever comes next. 

Python’s future remains bright because its values align with how developers actually learn and build: 

  • Readability
  • Approachability 
  • Stability
  • A touch of irreverence

As AI continues to influence software development—and Octoverse shows that 80% of new developers on GitHub use GitHub Copilot in their first week—Python’s clarity matters more than ever. 

And as the next generation begins coding with AI, Python will be there to help turn ideas into implementations.

Looking to stay one step ahead? Read the latest Octoverse report and try Copilot CLI.

TypeScript’s rise in the AI era: Insights from Lead Architect, Anders Hejlsberg
https://github.blog/developer-skills/programming-languages-and-frameworks/typescripts-rise-in-the-ai-era-insights-from-lead-architect-anders-hejlsberg/
Thu, 06 Nov 2025
TypeScript just became the most-used language on GitHub. Here’s why, according to its creator.

When Anders Hejlsberg started work on TypeScript in 2012, he wasn’t dreaming up a new language to compete with JavaScript. He was trying to solve a very real problem: JavaScript had become the backbone of the web, but it didn’t scale well for large, multi-developer codebases. Teams were shipping millions of lines of loosely typed code, and the language offered no help when those systems grew too complex to reason about.

What began as a pragmatic fix has since reshaped modern development. In 2025, TypeScript became the most-used language on GitHub, overtaking both JavaScript and Python for the first time. More than a million developers contributed in TypeScript this year alone—a 66% jump, according to Octoverse.

So, how did a typed superset of JavaScript become the dominant language of the AI era? We sat down with Anders to talk about evolution, performance, and why a language built for better human collaboration is now powering machine-assisted coding.

“We thought 25-percent adoption would be a success.”

“When we started the project,” Anders says, “I figured if we got 25-percent of the JavaScript community interested, that’d be a win. But now, seeing how many people rely on it every day … I’m floored. The whole team is.”

Back in 2012, JavaScript was already entrenched. TypeScript’s bet wasn’t to replace it but to make large-scale JavaScript development sane by adding types, tooling, and refactorability to the world’s most permissive language.

It’s the joy of working on something you know is making a difference. We didn’t set out to be everywhere. We just wanted developers to be able to build big systems with confidence.

Anders Hejlsberg, creator of TypeScript

A decade later, that bet became the default. Nearly every modern frontend framework—React, Next.js, Angular, SvelteKit—now scaffolds with TypeScript out of the box. The result: safer codebases, better autocomplete, and fewer 3 a.m. debugging sessions over a rogue undefined.
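The “rogue undefined” case is worth making concrete. Under strict null checks (a sketch; the user list here is invented), a possibly-missing value must be handled before use:

```typescript
// Array.prototype.find returns `string | undefined`; with strict null
// checks, the compiler refuses to let the undefined case go unhandled.
function findUser(users: string[], name: string): string | undefined {
  return users.find((u) => u === name);
}

const user = findUser(["ada", "grace"], "linus");

// user.toUpperCase();            // compile error: possibly undefined
console.log(user ?? "not found"); // safe fallback instead of a crash
```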

“The magic was making TypeScript feel like JavaScript, but with superpowers,” Anders says.

Rewriting the compiler for the future

When TypeScript launched, it was famously self-hosted: written in TypeScript itself. That kept the compiler portable and hackable. But performance eventually became a problem.

“As much as it pained us to give up on self-hosting, we knew we couldn’t squeeze any more performance out of it,” Anders says.

We experimented with C#, with others, and finally chose Go. The performance gain was 10X. Half from being native, half from shared-memory concurrency. You can’t ignore 10X.

The rewrite delivered a compiler that’s faster, leaner, and more scalable for enterprise-scale codebases—but functionally identical to the old one.

On this note, Anders says, “We have a native compiler that’s a carbon copy of the old one down to the quirks. The community doesn’t have to throw anything away.”

This philosophy around preserving behavior while improving performance is one reason developers trust TypeScript. It’s not a clean-slate rewrite every few years; it’s an evolutionary system built to stay compatible.

“Open source is evolution captured in code.”

Anders reflects on open source as an ecosystem that mirrors natural selection.

“Open source was a big experiment,” Anders says. “No one ever really figured out how to fund it—and yet here we are. It’s bigger than ever, and it’s not going away. It’s evolution captured right there in the code.”

This year’s Octoverse data backs him up: developers pushed nearly 1 billion commits to public and open source repositories in 2025 (+25% YoY). That’s an evolutionary record written one pull request at a time.

TypeScript’s own repository, with twelve years of issues, pull requests, and design notes, has become a living archive of language evolution. “We have 12 years of history captured on GitHub,” Anders says. “It’s all searchable. It’s evolution you can grep.”

The AI effect: Why TypeScript is thriving now

One of the most striking data points from Octoverse 2025 is how AI is changing language preferences. Developers are moving toward typed languages that make AI-assisted coding more reliable and maintainable. 

Anders explains why: “AI’s ability to write code in a language is proportional to how much of that language it’s seen. It’s a big regurgitator, with some extrapolation. AI has seen tons of JavaScript, Python, and TypeScript so it’s great at writing them. New languages are actually disadvantaged.”

That data familiarity, combined with TypeScript’s static type system, makes it uniquely fit for an AI-first workflow.

“If you ask AI to translate half a million lines of code, it might hallucinate,” Anders says. “But if you ask it to generate a program that does that translation deterministically, you get a reliable result. That’s the kind of problem types were made for.”
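The pattern generalizes: instead of letting a model edit half a million lines directly, have it emit one small deterministic program and apply that mechanically. A toy sketch (`renameIdentifier` is a hypothetical helper; a real tool would operate on the compiler’s AST):

```typescript
// Deterministic transform: the model writes this once; the machine
// applies it identically to every file, with no chance of hallucinating
// on file 4,317.
function renameIdentifier(source: string, from: string, to: string): string {
  // Word-boundary match is a simplification; an AST-based rename would
  // be semantically exact.
  return source.replace(new RegExp(`\\b${from}\\b`, "g"), to);
}

console.log(renameIdentifier("let foo = 1; foo += barfoo;", "foo", "count"));
// "let count = 1; count += barfoo;"
```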

The takeaway: in a world where code is written by both humans and machines, types aren’t bureaucracy. They’re truth checkers.

From IDEs to agents

The rise of large language models is also changing what “developer tools” even mean. IDEs are becoming environments not just for developers, but for agents.

AI started out as the assistant. Now it’s doing the work, and you’re supervising. It doesn’t need an IDE the way we do. It needs the services. That’s why all this Model Context Protocol work is exciting.

“AI started out as the assistant,” Anders says. “Now it’s doing the work, and you’re supervising. It doesn’t need an IDE the way we do. It needs the services. That’s why all this Model Context Protocol work is exciting.”

The Octoverse report describes this shift as “AI reshaping choices, not just code.” Typed languages like TypeScript give agents the structure they need to refactor safely, answer semantic queries, and reason about codebases in a deterministic way.

“The goal,” Anders adds, “is to box in AI workflows with just enough determinism that they stay useful without going off the rails.”

The language that keeps evolving

From Turbo Pascal to C#, and now TypeScript, Anders’ work spans decades. But what’s striking is his consistency. He builds languages that make complex software simpler to reason about.

There’s nothing more satisfying than working on something that makes a difference. TypeScript keeps changing, but it always comes back to the same thing: helping developers express intent clearly.

That clarity might explain why more than one new developer joined GitHub every second in 2025, and a growing share of them choose to start in TypeScript. 

The story of TypeScript isn’t just about language design; it’s about evolution. A project that began as a pragmatic fix for JavaScript’s scale has become the foundation for how developers—and now AI—write code together.

Read the 2025 Octoverse report or start using GitHub Copilot >

The post TypeScript’s rise in the AI era: Insights from Lead Architect, Anders Hejlsberg appeared first on The GitHub Blog.

]]>
92161
Why Java endures: The foundation of modern enterprise development https://github.blog/developer-skills/why-java-endures-the-foundation-of-modern-enterprise-development/ Tue, 11 Mar 2025 16:00:01 +0000 https://github.blog/?p=83142 For 30 years, Java has been a cornerstone of enterprise software development. Here’s why—and how to learn Java.

The post Why Java endures: The foundation of modern enterprise development appeared first on The GitHub Blog.

]]>

Here’s a true story: I learned Java after pretending to be an Android developer when I first started out in software development. While doing that, I quickly learned something important: Java isn’t just a convenient entry point into tech, it’s a strategic career choice.

That’s why I want to examine what makes Java so interesting—and how you can either get started or brush up on your skills. After all, when you are trying to break into tech, especially when you’re coming from a non-traditional background like me (I began my career as an Army soldier and construction manager), you need a language that’s both learnable and employable. And, in my experience, knowing your way around Java opens doors that might otherwise stay closed to a newcomer.

So, let’s jump in.

What is Java? And what’s the difference between Java and JavaScript?

When we think about the programming languages that form the backbone of enterprise software development, Java is one of the first names that comes to mind. At its core, Java is a versatile, object-oriented programming language, rooted in its “write once, run anywhere” (WORA) strength. Thanks to the Java Virtual Machine (JVM), it powers the foundation for scalable, secure applications across nearly every industry.

Despite sharing part of its name with JavaScript, Java serves a distinctly different purpose. JavaScript is a lightweight language used predominantly for front-end web development, backend work, and full-stack applications. Meanwhile, Java is a general purpose language that is used for everything from backend development to desktop applications, mobile apps, and large-scale enterprise applications.

Today, Java powers everything from your Spotify playlist in Android mobile to the Cash App transaction to split your dinner check. In short, Java powers massive systems that process millions of data transactions every day.

And let’s clear something up: if you’re picturing verbose enterprise code running on dusty servers, it’s time for an update. Java has evolved from its enterprise roots, where it was largely used for powering payroll systems and customer databases, into a versatile platform driving everything from backend services to gaming to AI-powered applications such as Netflix and LinkedIn.

With a history spanning three decades, Java has proven itself as a reliable, versatile, and constantly evolving language that continues to be a top choice for developers around the world.

So, what is it about Java that has not only endured but thrived in an environment full of constant change?

To understand this resilience and versatility, let’s take a step back and look at where Java came from and how it has grown over the years.

The birth of Java: write once, run anywhere

Java’s story begins back in 1991 when a team of engineers at Sun Microsystems, led by James Gosling, set out to create a language for interactive television. First developed under the name Oak, the language that would be renamed Java was designed to be simple, robust, and platform independent, using a virtual machine—later called a Java Virtual Machine, or JVM—to run code anywhere.

While that original project didn’t quite pan out, it laid the groundwork for something much bigger.

In 1995, Java 1.0 was officially released, and with it came a revolutionary promise: “Write Once, Run Anywhere” (WORA). This principle meant Java code could be written once and then run on any device that supports the Java platform, without needing to be recompiled for each operating system.

To fully appreciate the impact of this shift, it’s helpful to look back at the state of software development in the mid ‘90s.

If you were a developer back then and you wanted your application to run on multiple operating systems (Windows, Mac, and Unix), you needed to write separate versions for each one. This meant learning the unique APIs, libraries, and quirks of each operating system—which was even more fun then than it is now (seriously, it was a slog). Updating and fixing had to be done platform by platform in a tedious grind like a messy game of Whac-A-Mole. One fix here, another patch there, and plenty of ways for things to go wrong.

This fragmented approach was a significant barrier to building software for a wide, diverse audience. For enterprises looking to deploy applications across multiple environments, Java became an incredibly attractive choice. With the emergence of enterprise-focused frameworks like J2EE and Spring, coupled with strong corporate backing from Sun Microsystems and later Oracle, it provided the reliability and consistency that businesses needed.

From complex to clear: Java’s evolution for new developers

After 30 years, you might just say that Java has come a long way from its early days.

Let’s take the release of Java 23 in September 2024 as an example. This update brought a host of new features and enhancements designed to make developers’ lives easier while keeping Java aligned with the needs of modern development. This included enhancements such as:

  • Primitive types in patterns, instanceof, and switch (Preview - JEP 455), which allows primitive types like int and double to be used seamlessly with pattern matching and switch statements, simplifying code and reducing workarounds
  • Markdown documentation comments (JEP 467), which lets developers write Java Docs using Markdown syntax to create more readable documentation directly in the source code

One of the most notable changes was the simplification of the language’s entry point for new developers. The classic “Hello, World!” program, which is often the first thing a developer writes when learning a new language, was streamlined to just a few lines of code. The traditional version required understanding several complex concepts right from the start.

Need proof? Here’s the old way:


public class HelloWorld { // A class declaration that must match the file name public static void main(String[] args) { // The program's entry point System.out.println("Hello, World!"); // The actual operation we want } }

Each line introduced multiple new concepts—public classes, static methods, command-line arguments—before achieving the simple goal of just displaying some text.

In contrast, Java 23 streamlines the syntax to its essential requirements:

void main() {
    System.out.println("Hello, World!");
}

Now, you might be thinking, “That’s not a big change!” But think about it from a beginner’s perspective. When you’re just starting out with programming, even small bits of boilerplate code can be confusing and overwhelming. You’re trying to understand new concepts and syntax, and every extra line of code is one more thing to decipher.

But Java isn’t just for beginners. Java 23, for instance, brings powerful new features for more advanced uses, such as improved pattern matching and the evolution of record classes.

With the evolution of record classes, Java facilitates the implementation of modern design patterns by providing concise and immutable data structures like lists or sets. These collections can’t be changed once created, which means there are no accidental edits. This makes them ideal for building scalable microservices or event-driven architectures, where data integrity and consistency are essential.

On the other hand, the enhanced pattern matching supports primitive types, streamlining complex data handling while boosting performance by eliminating boxing overhead. This is especially critical for high-performance systems, such as financial platforms or data pipelines, where efficiency is not only key but required. Put another way, it’s the difference between a system that thrives under pressure and one that buckles.

Here’s an example of how pattern matching has been enhanced in Java 23:


switch (value) { case int i when i > 0 -> "Positive"; case int i when i < 0 -> "Negative"; case int i when i == 0 -> "Zero"; default -> "Not a number"; }

In this switch expression, the code checks both the type of the value and its specific value in one go. In older versions of Java, we had to use a series of if-else statements or a switch statement with multiple cases for each condition. But with this new syntax, the intent of the code is clear, and the logic is neatly encapsulated.

The Java ecosystem: Building blocks for modern innovation

It’s hard to talk about Java’s impact on modern software development without turning to Minecraft, one of the world’s most successful video games with over 10 million players worldwide, which was originally built entirely in Java. While Java is often associated with enterprise computing, the original Minecraft: Java Edition demonstrates how the language’s core strengths translate across different domains.

Consider how Minecraft generates its seemingly infinite worlds. The game engine uses Java’s object-oriented architecture to manage countless blocks, each with its own properties and behaviors. This mirrors how enterprise systems handle millions of business objects or how AI applications process vast datasets. Each block in Minecraft is essentially an object instance managed by Java’s efficient memory system—the same system that helps large-scale enterprise applications maintain performance under heavy loads.

Java’s enduring success is due in large part to the comprehensive ecosystem that has evolved around it after decades of enterprise use. The ecosystem builds on a powerful universal foundation: the Java Class Library (JCL). Think of JCL as a shared toolbox that every Java developer can rely on, whether they’re building fraud detection algorithms in São Paulo or recommendation engines in San Francisco. Just as the coordinated universal time (UTC) synchronizes clocks globally, Java developers everywhere work with these same tested, reliable building blocks.

This standardization has sparked a thriving global community that has built an expansive collection of open source tools on top of Java’s foundation. When developers face a new challenge, chances are the Java ecosystem already has a proven solution ready to use. For instance, when building enterprise applications, developers can leverage Spring’s dependency injection or Hibernate’s ORM capabilities, both of which extend the JCL’s fundamental database connectivity features.

Spring Boot relies heavily on JCL to supercharge Spring, making it faster to build production ready applications. Here’s a spring boot example of how modern Java applications pull on the larger Java ecosystem to handle the common scenario of generating and sending personalized email-based notifications:

@Service
public class SmartEmailService extends BaseNotificationService {
    private final EmailClient emailClient;

    @Autowired
    public SmartEmailService(UserRepository users, AIModelClient aiModel, EmailClient emailClient) {
        super(users, aiModel);
        this.emailClient = Objects.requireNonNull(emailClient, "EmailClient cannot be null");
    }

    @Override
    public String generatePersonalizedMessage(String userId) {
        User user = users.findById(userId)
                .orElseThrow(() -> new UserNotFoundException("User not found: " + userId));
        UserEngagement engagement = getUserEngagement(userId);

        var content = aiModel.generateContent(
                user.getPreferences(),
                engagement.getInteractionHistory(),
                engagement.getResponseRates()
        );
        return content != null ? content : "Personalized message unavailable";
    }

    @Override
    public void send(String userId, String baseMessage) {
        User user = users.findById(userId)
                .orElseThrow(() -> new UserNotFoundException("User not found: " + userId));
        String personalizedMessage = generatePersonalizedMessage(userId);
        String finalMessage = combineMessages(baseMessage, personalizedMessage);

        emailClient.sendEmail(user.getEmail(), finalMessage);
    }

    private String combineMessages(String base, String personalized) {
        String trimmedBase = base != null ? base.trim() : "";
        String trimmedPersonalized = personalized != null ? personalized.trim() : "";
        if (trimmedBase.isEmpty()) return trimmedPersonalized;
        if (trimmedPersonalized.isEmpty()) return trimmedBase;
        return trimmedBase + "\n\n" + trimmedPersonalized;
    }
}

This code example shows a few core Java principles: interfaces define clear contracts, inheritance enables code reuse, and polymorphism allows for flexible implementations. Spring’s integration demonstrates how Java’s ecosystem offers solutions for common enterprise needs from dependency management to application configuration.

By combining standardized foundations with powerful frameworks, Java allows developers to focus on solving business problems instead of reinventing basic infrastructure. This efficiency is part of the reason why Java remains the backbone of mission-critical systems across diverse industries.

AI ready: Java’s role in AI

While Python may dominate AI research headlines, Java’s robust ecosystem and reliability make it well suited for deploying AI solutions at scale in production environments. For instance, Uber’s Michelangelo machine learning platform extensively uses Java in its production infrastructure to serve real-time predictions, such as Uber Eats delivery times or ride demand forecasts across millions of requests daily, showcasing Java’s ability to handle high-throughput, low-latency workloads seamlessly.

Moreover, through frameworks like Deeplearning4j, LangChain4J, and integrations with tools like TensorFlow, organizations can enhance their existing Java systems with AI capabilities rather than rebuilding from scratch.

Consider a bank’s fraud detection system or an ecommerce platform’s recommendation engine: these are typically Java-based applications, and some organizations are experimenting with adding AI features to them. Rather than rewriting these critical systems in Python, which is a common language in AI, companies can use Java’s AI libraries to add intelligence, which helps avoid costly rebuilds while maintaining the security, reliability, and performance of the systems at hand.

Learning Java: A strategic path to career growth in software development

For many developers—myself included—learning Java isn’t just a hobby, but a strategic career move. As one of the most in-demand languages in the job market, Java can open doors to a wide range of opportunities from developing Android apps to building financial trading systems to crafting large-scale web applications to building the next version of Minecraft (goals, am I right?).

But learning any new language can be daunting—especially one with as much depth and breadth as Java. Fortunately, the resources for learning Java have never been better. Educators like Barry Burd, a professor at Drew University, are using Java’s new features, such as Records and Sealed Classes, to make the learning process smoother and more intuitive. Records eliminate the boilerplate of data classes, helping students focus on concepts rather than syntax, while Sealed Classes provide clear, enforceable hierarchies that make inheritance more understandable.

“I’ve been revising my introductory Java book using Java 23’s Implicitly Declared Classes preview features, and as an author and educator, these features make my work much easier,” Burd said in an interview with The New Stack. “Much of the verbose code in previous editions has gone by the wayside, which helps students concentrate on essential logic.”

The rise of online learning platforms, coding bootcamps, and AI developer tools like GitHub Copilot has also made it easier to pick up practical Java skills. With GitHub Copilot Free, for instance, you can ask questions about a codebase via Copilot Chat and get detailed explanations about key topics—or try writing code yourself and use Copilot’s suggestions to learn how Java works on the ground.

The future of Java: innovation meets stability

More than any specific feature or update, Java stands apart is its unwavering commitment to its core principles. Java has always been about empowering developers to write code that is robust, scalable, and maintainable.

Looking for hands-on practice? Check out beginner-friendly Java OSS projects like Exercism Java Track (coding exercises) and Strongbox (an artifact manager). These projects on GitHub offer approachable codebases and opportunities to learn core Java skills while contributing to real software.

If you’re developing enterprise-level systems or just starting in software development, Java provides a clear and rewarding path for growth (and did we mention it’s one of the most in-demand languages for professional developers?). Ready to add your code to it?

Start learning Java with GitHub Copilot Free
Our free version of Copilot is included by default in personal GitHub accounts.
Start using GitHub Copilot >

P.S. Not sure how to get the most out of Copilot’s options? Check out our Copilot Chat Cookbook with a collection of sample prompts covering common coding scenarios.

The post Why Java endures: The foundation of modern enterprise development appeared first on The GitHub Blog.

]]>
83142
Introducing Annotated Logger: A Python package to aid in adding metadata to logs https://github.blog/developer-skills/programming-languages-and-frameworks/introducing-annotated-logger-a-python-package-to-aid-in-adding-metadata-to-logs/ Thu, 19 Dec 2024 17:00:28 +0000 https://github.blog/?p=81705 We’re open sourcing Annotated Logger, a Python package that helps make logs searchable with consistent metadata.

The post Introducing Annotated Logger: A Python package to aid in adding metadata to logs appeared first on The GitHub Blog.

]]>

What it is

Annotated Logger is a Python package that allows you to decorate functions and classes, which then log when complete and can request a customized logger object, which has additional fields pre-added. GitHub’s Vulnerability Management team created this tool to make it easier to find and filter logs in Splunk.

Why we made it

We have several Python projects that have grown in complexity over the years and have used Splunk to ingest and search those logs. We have always sent our logs in via JSON, which makes it easy to add in extra fields. However, there were a number of fields, like what Git branch was deployed, that we also wanted to send, plus, there were fields, like the CVE name of the vulnerability being processed, that we wanted to add for messages in a given function. Both are possible with the base Python logger, but it’s a lot of manual work repeating the same thing over and over, or building and managing a dictionary of extra fields that are included in every log message.

The Annotated Logger started out as a simple decorator in one of our repositories, but was extracted into a package in its own right as we started to use it in all of our projects. As we’ve continued to use it, its features have grown and been updated.

How and why to use it

Now that I’ve gotten a bit of the backstory out of the way, here’s what it does, why you should use it, and how to configure it for your specific needs. At its simplest, you decorate a function with @annotate_logs() and it will “just work.” If you’d like to dive right in and poke around, the example folder contains examples that fully exercise the features.

@annotate_logs()
def foo():
    return True
>>> foo()
{"created": 1733176439.5067494, "levelname": "DEBUG", "name": "annotated_logger.8fcd85f5-d47f-4925-8d3f-935d45ceeefc", "message": "start", "action": "__main__:foo", "annotated": true}
{"created": 1733176439.506998, "levelname": "INFO", "name": "annotated_logger.8fcd85f5-d47f-4925-8d3f-935d45ceeefc", "message": "success", "action": "__main__:foo", "success": true, "run_time": "0.0", "annotated": true}
True

Here is a more complete example that makes use of a number of the features. Make sure to install the package: pip install annotated-logger first.

import os
from annotated_logger import AnnotatedLogger
al = AnnotatedLogger(
    name="annotated_logger.example",
    annotations={"branch": os.environ.get("BRANCH", "unknown-branch")}
)
annotate_logs = al.annotate_logs

@annotate_logs()
def split_username(annotated_logger, username):
    annotated_logger.annotate(username=username)
    annotated_logger.info("This is a very important message!", extra={"important": True})
    return list(username)
>>> split_username("crimsonknave")
{"created": 1733349907.7293086, "levelname": "DEBUG", "name": "annotated_logger.example.c499f318-e54b-4f54-9030-a83607fa8519", "message": "start", "action": "__main__:split_username", "branch": "unknown-branch", "annotated": true}
{"created": 1733349907.7296104, "levelname": "INFO", "name": "annotated_logger.example.c499f318-e54b-4f54-9030-a83607fa8519", "message": "This is a very important message!", "important": true, "action": "__main__:split_username", "branch": "unknown-branch", "username": "crimsonknave", "annotated": true}
{"created": 1733349907.729843, "levelname": "INFO", "name": "annotated_logger.example.c499f318-e54b-4f54-9030-a83607fa8519", "message": "success", "action": "__main__:split_username", "branch": "unknown-branch", "username": "crimsonknave", "success": true, "run_time": "0.0", "count": 12, "annotated": true}
['c', 'r', 'i', 'm', 's', 'o', 'n', 'k', 'n', 'a', 'v', 'e']
>>>
>>> split_username(1)
{"created": 1733349913.719831, "levelname": "DEBUG", "name": "annotated_logger.example.1c354f32-dc76-4a6a-8082-751106213cbd", "message": "start", "action": "__main__:split_username", "branch": "unknown-branch", "annotated": true}
{"created": 1733349913.719936, "levelname": "INFO", "name": "annotated_logger.example.1c354f32-dc76-4a6a-8082-751106213cbd", "message": "This is a very important message!", "important": true, "action": "__main__:split_username", "branch": "unknown-branch", "username": 1, "annotated": true}
{"created": 1733349913.7200255, "levelname": "ERROR", "name": "annotated_logger.example.1c354f32-dc76-4a6a-8082-751106213cbd", "message": "Uncaught Exception in logged function", "exc_info": "Traceback (most recent call last):\n  File \"/home/crimsonknave/code/annotated-logger/annotated_logger/__init__.py\", line 758, in wrap_function\n  result = wrapped(*new_args, **new_kwargs)  # pyright: ignore[reportCallIssue]\n  File \"<stdin>\", line 5, in split_username\nTypeError: 'int' object is not iterable", "action": "__main__:split_username", "branch": "unknown-branch", "username": 1, "success": false, "exception_title": "'int' object is not iterable", "annotated": true}
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<makefun-gen-0>", line 2, in split_username
  File "/home/crimsonknave/code/annotated-logger/annotated_logger/__init__.py", line 758, in wrap_function
    result = wrapped(*new_args, **new_kwargs)  # pyright: ignore[reportCallIssue]
  File "<stdin>", line 5, in split_username
TypeError: 'int' object is not iterable

There are a few things going on in this example. Let’s break it down piece by piece.

  • The Annotated Logger requires a small amount of setup to use; specifically, you need to instantiate an instance of the AnnotatedLogger class. This class contains all of the configuration for the loggers.
    • Here we set the name of the logger. (You will need to update the logging config if your name does not start with annotated_logger or there will be nothing configured to log your messages.)
    • We also set a branch annotation that will be sent with all log messages.
  • After that, we create an alias for the decorator. You don’t have to do this, but I find it’s easier to read than @al.annotate_logs().
  • Now, we decorate and define our method, but this time we’re going to ask the decorator to provide us with a logger object, annotated_logger. This annotated_logger variable can be used just like a standard logger object but has some extra features.
    • This annotated_logger argument is added by the decorator before calling the decorated method.
    • The signature of the decorated method is adjusted so that it does not have an annotated_logger parameter (see how it’s called with just name).
    • There are optional parameters to the decorator that allow type hints to correctly parse the modified signature.
  • We make use of one of those features right away by calling the annotate method, which will add whatever kwargs we pass to the extra field of all log messages that use the logger.
    • Any field added as an annotation will be included in each subsequent log message that uses that logger.
    • You can override an annotation by annotating again with the same name
  • At last, we send a log message! In this message we also pass in a field that’s only for that log message, in the same way you would when using logger.
  • In the second call, we passed an int to the name field and list threw an exception.
    • This exception is logged automatically and then re-raised.
    • This makes it much easier to know if/when a method ended (unless the process was killed).

Let’s break down each of the fields in the log message:

Field Source Description
created logging Standard Logging field.
levelname logging Standard Logging field.
name annotated_logger Logger name (set via class instantiation).
message logging Standard Logging field for log content.
action annotated_logger Method name the logger was created for.
branch AnnotatedLogger() Set from the configuration’s branch annotation.
annotated annotated_logger Boolean indicating if the message was sent via Annotated Logger.
important annotated_logger.info Annotation set for a specific log message.
username annotated_logger.annotate Annotation set by user.
success annotated_logger Indicates if the method completed successfully (True/False).
run_time annotated_logger Duration of the method execution.
count annotated_logger Length of the return value (if applicable).

The success, run_time and count fields are added automatically to the message (“success”) that is logged after a decorated method is completed without an exception being raised.

Under the covers

How it’s implemented

The Annotated Logger interacts with Logging via two main classes: AnnotatedAdapter and AnnotatedFilter. AnnotatedAdapter is a subclass of logging.LoggerAdapter and is what all annotated_logger arguments are instances of. AnnotatedFilter is a subclass of logging.Filter and is where the annotations are actually injected into the log messages. As a user outside of config and plugins, the only part of the code you will only interact with are AnnotatedAdapter in methods and the decorator itself. Each instance of the AnnotatedAdapter class has an AnnotatedFilter instance—the AnnotatedAdapter.annotate method passes those annotations on to the filter where they are stored. When a message is logged, that filter will calculate all the annotations it should have and then update the existing LogRecord object with those annotations.

Because each invocation of a method gets its own AnnotatedAdapter object it also has its own AnnotatedFilter object. This ensures that there is no leaking of annotations from one method call to another.

Type hinting

The Annotated Logger is fully type hinted internally and fully supports type hinting of decorated methods. But a little bit of additional detail is required in the decorator invocation. The annotate_logs method takes a number of optional arguments. For type hinting, _typing_self, _typing_requested, _typing_class and provided are relevant. The three arguments that start with _typing have no impact on the behavior of the decorator and are only used in method signature overrides for type hinting. Setting provided to True tells the decorator that the annotated_logger should not be created and will be provided by the caller (thus the signature shouldn’t be altered).

_typing_self defaults to True as that is how most of my code is written. provided, _typing_class and _typing_requested default to False.

class Example:
    @annotate_logs(_typing_requested=True)
    def foo(self, annotated_logger):
        ...

e = Example()
e.foo()

Plugins

There are a number of plugins that come packaged with the Annotated Logger. Plugins allow for the user to hook into two places: when an exception is caught by the decorator and when logging a message. You can create your own plugin by creating a class that defines the filter and uncaught_exception methods (or inherits from annotated_logger.plugins.BasePlugin which provides noop methods for both).

The filter method of a plugin is called when a message is being logged. Plugins are called in the order they are set in the config. They are called by the AnnotatedFilter object of the AnnotatedAdapter and work like any logging.Filter. They take a record argument which is a logging.LogRecord object. They can manipulate that record in any way they want and those modifications will persist. Additionally, just like any logging filter, they can stop a message from being logged by returning False.

The uncaught_exception method of a plugin is called when the decorator catches an exception in the decorated method. It takes two arguments, exception and logger. The logger argument is the annotated_logger for the decorated method. This allows the plugin to annotate the log message stating that there was an uncaught exception that is about to be logged once the plugins have all processed their uncaught_exception methods.

Here is an example of a simple plugin. The plugin inherits from the BasePlugin, which isn't strictly needed here since it implements both filter and uncaught_exception; if it didn't, inheriting from the BasePlugin would let it fall back to the default noop methods. The plugin defines an __init__ so that it can take and store arguments. The filter and uncaught_exception methods end up with the same result (flagged=True is set if a word matches), but they get there slightly differently. filter is called while a given log message is being processed, so it adds the annotation directly to that record. uncaught_exception is called when an exception is raised and not caught during the execution of the decorated method; since there is no specific log record to interact with, it sets the annotation on the logger instead. The only difference in outcome would appear if another plugin emitted a log message during its uncaught_exception method after FlagWordPlugin ran: that additional log message would also have flagged=True on it.

from annotated_logger.plugins import BasePlugin

class FlagWordPlugin(BasePlugin):
    """Plugin that flags any log message/exception that contains a word in a list."""
    def __init__(self, *wordlist):
        """Save the wordlist."""
        self.wordlist = wordlist

    def filter(self, record):
        """Add annotation if the message contains words in the wordlist."""
        for word in self.wordlist:
            if word in record.msg:
                record.flagged = True
        return True

    def uncaught_exception(self, exception, logger):
        """Add annotation if exception title contains words in the wordlist."""
        for word in self.wordlist:
            if word in str(exception):
                logger.annotate(flagged=True)


AnnotatedLogger(plugins=[FlagWordPlugin("danger", "Will Robinson")])

Plugins are stored in a list and the order they are added can matter. The BasePlugin is always the first plugin in the list; any that are set in configuration are added after it.

When a log message is being sent, the filter methods of each plugin are called in the order they appear in the list. Because filter methods often modify the record directly, one filter can break another: for example, one filter might remove or rename a field that a later filter uses. Conversely, a filter might expect another to have added or altered a field before it runs, and would fail if it were placed ahead of that filter. Finally, just like in the logging module, a filter method can stop a log from being emitted by returning False. As soon as a filter does so, processing ends and any plugins later in the list will not have their filter methods called.
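Plugin filter methods follow the same contract as standard logging filters, so the ordering mechanics can be sketched with plain stdlib logging (the two filter classes below are illustrative, not part of the Annotated Logger):

```python
import logging

class AddHostFilter(logging.Filter):
    """First in the list: annotates every record with a host field."""
    def filter(self, record):
        record.host = "web-1"
        return True  # keep processing

class DropDebugFilter(logging.Filter):
    """Second in the list: returning False stops the record entirely,
    so any filters after this one never run for suppressed records."""
    def filter(self, record):
        return record.levelno >= logging.INFO

handler = logging.StreamHandler()
handler.addFilter(AddHostFilter())   # order matters: this runs first
handler.addFilter(DropDebugFilter())

logger = logging.getLogger("ordering-demo")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.debug("suppressed by DropDebugFilter")
logger.info("emitted, with record.host set to web-1")
```

If DropDebugFilter were placed first and relied on record.host existing, it would fail with an AttributeError, which is exactly the ordering hazard described above.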

If the decorated method raises an exception that is not caught, the plugins again execute in order. The most common interaction is plugins attempting to set or modify the same annotation. The BasePlugin and RequestsPlugin both set the exception_title annotation; since the BasePlugin is always first, the title it sets is overridden. Another interaction is one plugin setting an annotation before or after another plugin that emits a log message or sends data to a third party. In both of those cases, the order determines whether the annotation is present.

Plugins that come with the Annotated Logger:

  • GitHubActionsPlugin: Emit log messages at or above a given level in GitHub Actions notation (notice::) as well.
  • NameAdjusterPlugin: Add a prefix or postfix to a field name to avoid collisions in your log processing software (source is a reserved field in Splunk, for example, but we often include it as a field and it's simply hidden).
  • RemoverPlugin: Remove a field, for example to exclude password or key fields, or to drop fields like taskName that are set when running async but not sync.
  • NestedRemoverPlugin: Remove a field no matter how deep in a dictionary it is.
  • RenamerPlugin: Rename one field to another (if you don't like levelname and want level, this is how you do that).
  • RequestsPlugin: Add a title and status code to the annotations if the exception inherits from requests.exceptions.HTTPError.
  • RuntimeAnnotationsPlugin: Set dynamic annotations.

dictconfig

When adding the Annotated Logger to an existing project, or one that uses other packages that log messages (flask, django, and so on), you can configure all of the Annotated Logger via dictConfig by supplying a dictConfig compliant dictionary as the config argument when initializing the Annotated Logger class. If, instead, you wish to do this yourself you can pass config=False and reference annotated_logger.DEFAULT_LOGGING_CONFIG to obtain the config that is used when none is provided and alter/extract as needed.

There is one special case where the Annotated Logger will modify the config passed to it: if there is a filter named annotated_filter that entry will be replaced with a reference to a filter that is created by the instance of the Annotated Logger that’s being created. This allows any annotations or other options set to be applied to messages that use that filter. You can instead create a filter that uses the AnnotatedFilter class, but it won’t have any of the config the rest of your logs have.
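As a rough sketch, a config with the special annotated_filter entry might look like this. The handler and logger names here are assumptions for illustration; refer to annotated_logger.DEFAULT_LOGGING_CONFIG for the real baseline.

```python
# Illustrative dictConfig sketch. The empty "annotated_filter" entry is the
# placeholder that the Annotated Logger replaces with its own filter instance;
# every other name here is an assumption, not required by the package.
LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "filters": {
        "annotated_filter": {},  # replaced at init by the Annotated Logger
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "filters": ["annotated_filter"],
        },
    },
    "root": {"handlers": ["console"], "level": "INFO"},
}

# AnnotatedLogger(config=LOGGING_CONFIG)  # calls logging.config.dictConfig for you
```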

Notes

dictConfig only partly works when merging dictionaries: I have found that some parts of the config are not overwritten, while other parts lose their references. So I would encourage you to build up a single logging config for everything and call it exactly once. If you pass config, the Annotated Logger will call logging.config.dictConfig on it after it has had the chance to add to or adjust the config.

The logging_config.py example has a much more detailed breakdown and set of examples.

Pytest mock

Included with the package is a pytest mock to assist in testing for logged messages. I know there are strong opinions about testing log messages, and I don't suggest doing it extensively or frequently, but sometimes it's the easiest way to check a loop, or a log message is tied to an alert and its formatting matters. In these cases, you can ask for the annotated_logger_mock fixture, which will intercept, record, and forward all log messages.

def test_logs(annotated_logger_mock):
    with pytest.raises(KeyError):
        complicated_method()
    annotated_logger_mock.assert_logged(
        "ERROR",  # Log level
        "That's not the right key",  # Log message
        present={"success": False, "key": "bad-key"},  # annotations and their values that are required
        absent=["fake-annotations"],  # annotations that are forbidden
        count=1  # Number of times log messages should match
    )

The assert_logged method makes use of pychoir for flexible matching. None of the parameters are required, so feel free to use whichever makes sense. Below is a breakdown of the default and valid values for each parameter.

  • level: Log level to check (e.g., “ERROR”). Default: matches anything. Valid values: a string or string-based matcher.
  • message: Log message to check. Default: matches anything. Valid values: a string or string-based matcher.
  • present: Annotations and the values they must have in the log. Default: empty dictionary. Valid values: a dictionary with string keys and any values.
  • absent: Annotations that must not be present in the log. Default: empty set. Valid values: ALL, a set, or a list of strings.
  • count: Number of times the log message should match. Default: any positive integer. Valid values: an integer or integer-based matcher.

The present key is often what makes the mock truly useful. It allows you to require the things you care about and ignore the things you don’t care about. For example, nobody wants their tests to fail because the run_time of a method went from 0.0 to 0.1 or fail because the hostname is different on different test machines. But both of those are useful things to have in the logs. This mock should replace everything you use the caplog fixture for and more.

Other features

Class decorators and persist

Classes can be decorated with @annotate_logs as well. These classes will have an annotated_logger attribute added after the init (I was unable to get it to work inside the __init__). Any decorated methods of that class will have an annotated_logger that’s based on the class logger. Calls to annotate that pass persist=True will set the annotations on the class Annotated Logger and so subsequent calls of any decorated method of that instance will have those annotations. The class instance’s annotated_logger will also have an annotation of class specifying which class the logs are coming from.

Iterators

The Annotated Logger also supports logging iterations over an enumerable object. annotated_logger.iterator will log the start, each step of the iteration, and the completion of the iteration. This can be useful for pagination in an API if your results object is enumerable: logging each time a page is fetched tells you whether a request is hanging or there are simply many pages, instead of sitting for a long time with no indication either way.

By default the iterator method will log the value of each iteration, but this can be disabled by setting value=False. You can also specify the level to log the iterations at if you don’t want the default of info.

Provided

Because each decorated method gets its own annotated_logger, calls to other methods will not carry any annotations from the caller. Instead of simply passing the annotated_logger object to the method being called, you can specify provided=True in the decorator invocation. This does two things. First, the method won't have an annotated_logger created and passed automatically; instead, its first argument must be an existing annotated_logger, which is used as the basis for the annotated_logger object created for the function. Second, it adds a subaction annotation set to the decorated function's name, while the action annotation is preserved from the method that called and provided the annotated_logger. Annotations are not persisted from a method decorated with provided=True back to its caller, unless the calling method's class was decorated and the annotation was set with persist=True; in that case the annotation is set on the instance's annotated_logger and shared with all methods, as is normal for decorated classes.

The most common use of this is with private methods, especially ones created during a refactor to extract some self contained logic. But other uses are for common methods that are called from a number of different places.

Split messages

Long messages wreak havoc on log parsing tools. I’ve encountered cases where the HTML of a 500 error page was too long for Splunk to parse, causing the entire log entry to be discarded and its annotations to go unprocessed. Setting max_length when configuring the Annotated Logger will break long messages into multiple log messages each annotated with split=True, split_complete=False, message_parts=# and message_part=#. The last part of the long message will have split_complete=True when it is logged.

Only messages can be split like this; annotations will not trigger the splitting. However, a plugin could truncate any values with a length over a certain size.
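As an illustration of the annotations described above, splitting works roughly like this (a standalone sketch, not the package's actual implementation):

```python
def split_message(message, max_length):
    """Break a long message into chunks annotated like the Annotated Logger's
    split messages: split, split_complete, message_parts, message_part."""
    chunks = [message[i:i + max_length] for i in range(0, len(message), max_length)] or [""]
    total = len(chunks)
    return [
        {
            "message": chunk,
            "split": True,
            "split_complete": part == total - 1,  # True only on the last part
            "message_parts": total,
            "message_part": part,
        }
        for part, chunk in enumerate(chunks)
    ]

parts = split_message("x" * 250, max_length=100)
# 3 parts: two full 100-character chunks and a final 50-character chunk
```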

Pre/Post hooks

You can register hooks that are executed before and after the decorated method is called. The pre_call and post_call parameters of the decorator take a reference to a function; that function is called right before (or right after) the decorated method, with the same arguments the decorated method is called with. This allows the hooks to add annotations and/or log anything that is desired (assuming the decorated function requested an annotated_logger).

Examples of this would be having a set of annotations that annotate fields on a model and a pre_call that sets them in a standard way. Or a post_call that logs if the function left a model in an unsaved state.
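The general decorator pattern behind pre_call and post_call can be sketched like this (a simplified stand-in, not the package's code; the hook signatures mirror the description above):

```python
import functools

def with_hooks(pre_call=None, post_call=None):
    """Simplified sketch of a decorator that runs hooks around a function,
    passing the hooks the same arguments the decorated function receives."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if pre_call is not None:
                pre_call(*args, **kwargs)
            result = func(*args, **kwargs)
            if post_call is not None:
                post_call(*args, **kwargs)
            return result
        return wrapper
    return decorator

calls = []

def log_start(x):
    calls.append(f"start:{x}")

def log_end(x):
    calls.append(f"end:{x}")

@with_hooks(pre_call=log_start, post_call=log_end)
def double(x):
    return x * 2

double(3)  # calls == ["start:3", "end:3"]
```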

Runtime annotations

Most annotations are static, but sometimes you need something that’s dynamic. These are achieved via the RuntimeAnnotationsPlugin in the Annotated Logger config. The RuntimeAnnotationsPlugin takes a dict of names and references to functions. These functions will be called and passed the log record when the plugin’s filter method is invoked just before the log message is emitted. Whatever is returned by the function will be set as the value of the annotation of the log message currently being logged.

A common use case is to annotate a request/correlation id, which identifies all of the log messages that were part of a given API request. For Django, one way to do this is via django-guid.
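The underlying idea can be approximated with a plain logging filter (the class and its signature here are illustrative; the real RuntimeAnnotationsPlugin may differ):

```python
import logging
import time

class RuntimeAnnotationsFilter(logging.Filter):
    """Call each registered function per record and attach its result."""
    def __init__(self, annotations):
        super().__init__()
        self.annotations = annotations  # dict of annotation name -> function

    def filter(self, record):
        for name, func in self.annotations.items():
            setattr(record, name, func(record))
        return True

runtime = RuntimeAnnotationsFilter({
    "logged_at": lambda record: time.time(),
    "correlation_id": lambda record: "req-123",  # e.g. pulled from request context
})
```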

Tips, tricks and gotchas

  • When using the decorator in more than one file, it’s useful to do all of the configuration in a file like log.py. That allows you to from project.log import annotate_logs everywhere you want to use it and you know it’s all configured and everything will be using the same setup.
  • Namespacing your loggers helps when there are two projects that both use the Annotated Logger (a package and a service that uses the package). If you are setting anything via dictConfig you will want to have a single config that has everything for all Annotated Loggers.
  • In addition to setting a correlation id for the API request being processed, passing the correlation id of the caller and then annotating that will allow you to trace from the logs of service A to the specific logs in Service B that relate to a call made by service A.
  • Plugins are very flexible. For example:
    • Send every exception log message to a service like Sentry.
    • Suppress logs from another package, like Django, that you don’t want to see (assuming you’ve configured Django’s logs to use a filter for your Annotated Logger).
    • Add annotations for extra information about specific types of exceptions (see the RequestsPlugin).
    • Set run time annotations on a subset of messages (instead of all messages with RuntimeAnnotationsPlugin)

Questions, feedback and requests

We’d love to hear any questions, comments or requests you might have in an issue. Pull requests welcome as well!

The post Introducing Annotated Logger: A Python package to aid in adding metadata to logs appeared first on The GitHub Blog.

Boost your CLI skills with GitHub Copilot https://github.blog/developer-skills/programming-languages-and-frameworks/boost-your-cli-skills-with-github-copilot/ Thu, 26 Sep 2024 15:54:15 +0000 https://github.blog/?p=79913 Want to know how to take your terminal skills to the next level? Whether you’re starting out, or looking for more advanced commands, GitHub Copilot can help us explain and suggest the commands we are looking for.

The post Boost your CLI skills with GitHub Copilot appeared first on The GitHub Blog.


Working with the command line is something many developers love. Even though we love it, there are times when it can be really frustrating. What’s the command for switching to a branch? How do I fix merge conflicts? Do I have the correct credentials or permissions for my file or directory?

In our recent blogs, we showed you some of the top Git commands and useful commands for the GitHub CLI. However, there are hundreds, if not thousands, of terminal-based commands, and knowing them all would be difficult. We could search for the correct command in a browser but at the cost of breaking our flow and maybe still not finding exactly what we need.

In the previous blog, we showed you how to use --help to receive some helpful suggestions about which commands to use, but this is usually a basic list. Instead, wouldn’t it be great if we could have a conversation with our terminal and ask which commands to use? This is where GitHub Copilot in the CLI comes into play.

GitHub Copilot in the CLI

Many developers are loving GitHub Copilot Chat, and the time-saving benefits and productivity gains that come with it. So, we thought, “Why not bring GitHub Copilot to the command line?” With GitHub Copilot in the CLI, we can ask questions to help us with our terminal tasks, whether they are Git-related commands, GitHub, or even generic terminal commands.

If this sounds like something you want to try, then read on. We’ve also left you with some challenges for you to try yourself.

Getting started

To get started, you’ll need to make sure you have the GitHub CLI installed on your Windows, Mac, or Linux machine, and an active subscription of GitHub Copilot. If your Copilot subscription is part of a Business or Enterprise license, you’ll need to ensure your organization or enterprise’s policy allows you to use “Copilot in the CLI:”

Screenshot showing Copilot policies with four policies listed: Copilot in the CLI, Copilot Chat in the IDE, Copilot Chat in GitHub Copilot, Suggestions matching public code (duplication detection filter). The first three are set to enabled and the final one is set to allowed.

You can find these Copilot settings by clicking your profile icon in the top right-hand corner on github.com → Settings → Copilot.

Ensure you’re authenticated in your terminal with GitHub by using gh auth login. You can follow the guide in our CLI Docs to ensure you authenticate correctly.

Copilot in the CLI is a GitHub CLI extension. Thus, you have to install it by typing gh extension install github/gh-copilot into your terminal of choice. Since I’m using Windows, all the examples you see here will be Windows PowerShell:

Now that you have Copilot in your CLI, you can use gh copilot to help you find information you are looking for. Let’s look at some of the most common things you can do with Copilot.

Have GitHub Copilot explain computer science concepts

GitHub Copilot is your friend when it comes to using the terminal, regardless of how familiar you are. Copilot can help explain something by using gh copilot explain, followed by a natural language statement of what you want to know more about. As an example, you might like to know how to roll back a commit:

You can receive help from Copilot when you don’t understand exactly what a particular command does. For example, a teammate recently passed me the script npx sirv-cli . to run in my terminal as part of a project we were working on. If I want to better understand what this command does, I can ask Copilot:

TRY IT: Ask Copilot to explain the difference between Git and GitHub.

If you get stuck, you can type gh copilot --help to see a list of commands and examples for how to use Copilot in the CLI:

GitHub Copilot can suggest commands

Explaining concepts is good for understanding and knowledge. When we want to execute a command, however, the explain command might not be enough. Instead, we can use the suggest command to have GitHub Copilot suggest an appropriate command to execute. When it comes to suggesting commands based on your questions, Copilot will follow up with another question, such as “What kind of command can I help you with?” with three options for you to choose from:

What kind of command can I help you with? [use arrows to move, type to filter] generic shell command, gh command, git command

The user can choose between:

  • generic shell command (terminal command)
  • gh command (GitHub CLI command)
  • git command

Copilot can then provide a suggestion based on the type of command you want to use. Let’s dive into each of the three command types.

Generic shell commands

There are hundreds of terminal specific commands we can execute. When asking GitHub Copilot for a suggested answer, we’ll need to select generic shell command from the drop down. As an example, let’s ask Copilot how to kill a process if we’re listening on a specific port:

Along the way, we are answering the questions Copilot is providing to us to help refine the prompt. At the end of each suggestion, we are able to have Copilot explain further, execute the command, copy the command, revise the command, or exit the current question.

TRY IT: Ask Copilot how to list only *.jpg files from all subfolders of a directory.
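For reference, the kind of command Copilot typically suggests for this challenge is a standard find invocation:

```shell
# List only .jpg files from all subfolders of the current directory
find . -type f -name "*.jpg"
```

find recurses into subdirectories by default; -type f restricts the matches to regular files so directories named like images don't appear.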

Git commands

In our recent blog, we went through some of the main Git commands every developer should know. Instead of having to search for a specific command, you can ask GitHub Copilot for help directly from the command line.

For Git commands, select git command from the Copilot drop down. If we wanted to know which branch we were on before making a commit, we could ask Copilot to suggest the best way to achieve this:

In this example, you can see I first ask to have the answer explained, and then execute the command to see that we are currently working on the main branch.

What if we accidentally checked our new changes onto the main branch, but we actually want them on a new branch? We can ask Copilot how would we go about fixing this:

Remember to also check the responses Copilot gives you. In the above example, Copilot gave me a very long answer to my rather long question, and I didn't get exactly what I needed. So, I chose to revise the question. That still wasn't quite right, so let's revise it again and ask to add the changes to a new branch instead:

Now, I have three steps to execute in order to create a new branch, reset the previous branch, and then switch to the new branch. From there, I can make a new commit.

Once we’ve made a commit, we can ask how to update the previous commit message:

Now, we can change the previous commit message.

TRY IT: Ask Copilot how to merge a pull request as an admin.

GitHub commands

In our last blog, we showed you useful GitHub commands for using the GitHub CLI. Now, let's ask GitHub Copilot when we get stuck. When we want to ask Copilot about GitHub-specific commands, choose the gh command option from the drop down menu in the terminal.

Let’s look at diffs, the difference between two files or two commits on GitHub. We can ask Copilot how to view these differences. Here, I’m asking Copilot from the terminal in VS Code, and it’s providing me with suggestions for the question:

Here, I didn’t specify in the prompt whether it was a GitHub command. By choosing gh command, Copilot knows I am looking for a GitHub-specific command, and therefore shows me the command for showing a difference of the pull request number we selected.

Now, let’s see if there are any pull requests or issues from this repository that are assigned to me:

Copilot tells me there are none assigned to me from this repository—winning!

Let’s put a few Git and GitHub commands together and ask how to open a pull request from a branch using a gh command. First, let’s ask Copilot to commit all my changes to a new branch, and then ensure I’m on the correct branch. After switching to the correct branch, we can ask Copilot how to open a pull request from the branch we are on:

Remember (again) to check the responses we are given. In this example, Copilot gave us the command gh pr create --base <base> --head <head> --title "<title>" --body "<pull_request_body>", which provides all the flags. If we just use gh pr create, then we are guided through the pull request process. We can follow the prompts within the command line and ask Copilot along the way for help if we get stuck. I created a draft pull request so I can work on it with my team further before converting it to an open pull request.

By answering the questions Copilot gives us, such as “What kind of command” is this, and selecting the correct option, we can have Copilot successfully execute a command. In this case, we have committed our code to a new branch, navigated to the correct branch, and opened the pull request as a draft.

TRY IT: Ask Copilot how to create a new release and edit the contents.

Working with aliases

All this typing of gh copilot suggest has got me thinking, “there’s got to be a faster way to use GitHub Copilot,” and there is. We can use the prebuilt ghcs alias to have “Copilot suggest” a command for us. We’ll need to configure GitHub Copilot in the CLI before we can use these aliases. There are also flags like --target (or -t for short), which let us specify a target for the suggestion, such as shell, gh, or git. In this way, we can make our conversation with Copilot much faster. To learn more about the commands and flags available, you can use --help with any Copilot command or either of the ghce and ghcs aliases.

Each system configures these aliases differently. Check out the Copilot Docs and video for how to configure aliases for you.

TRY IT: Configure Copilot in the CLI with aliases for even fewer keystrokes.

Using GitHub Copilot CLI

When it comes to using GitHub Copilot in the CLI, the question you ask (also called the prompt) is really important for receiving an answer that is correct for your situation. Unlike GitHub Copilot in your editor, Copilot in the CLI doesn’t have as much context to draw from. You’ll need to ensure the prompt you write is succinct and captures the question you want to ask. If you want some tips on writing good questions, check out our guide on prompt engineering. You can always revise your question to get the answer you are looking for.

This has been a brief introduction on using Copilot from the command line. Now, you’re ready to give our “try it” examples a go. When you try these out, share your results in this discussion so we can see the answers Copilot gives you and discuss them together.

The post Boost your CLI skills with GitHub Copilot appeared first on The GitHub Blog.

How to use AI coding tools to learn a new programming language https://github.blog/developer-skills/programming-languages-and-frameworks/how-to-use-ai-coding-tools-to-learn-a-new-programming-language/ Wed, 07 Aug 2024 16:00:30 +0000 https://github.blog/?p=79185 Explore how AI coding tools like GitHub Copilot can accelerate your journey to learn new programming languages.

The post How to use AI coding tools to learn a new programming language appeared first on The GitHub Blog.


The days of the single-language developer are fading. While companies like Shutterstock built empires on a single language (Perl in their case), the landscape has shifted. Today’s developers are expected to navigate a diverse technological landscape and start building projects that require proficiency in a range of languages and frameworks. This increased demand for versatility can feel quite overwhelming.

Thankfully, AI coding tools like GitHub Copilot, cursor.sh, and phind are emerging to empower developers in their learning journeys. These broadly available and adaptable tools offer real-time assistance and personalized guidance, making learning new languages faster and more efficient for developers of all experience levels.

In this post, we will:

  • Hear from some developers both in and outside of GitHub who have harnessed AI coding tools to upskill and learn new languages.
  • Provide practical tips to maximize your language-learning experience.
  • Take a look at some of the benefits you’ll see by using AI coding tools to learn a new programming language.

Comic about programming languages.

Real developer journeys: Learning new languages with AI 📚

Meet Kedasha Kerr and Alessio Fiorentino. Kedasha is one of GitHub’s own developer advocates, and Alessio is a DevOps architect and GitHub user.

As a seasoned JavaScript developer, Kedasha embarked on a machine learning course to deepen her understanding of AI. However, the course curriculum required her to learn Python, a completely new language for her. To combat the learning curve, Kedasha turned to GitHub Copilot’s chat functionality. This feature allowed her to interact with the AI in a conversational manner to ask questions about Python syntax, best practices, and even seek help with specific coding challenges she had for coursework.

“I had dabbled with Python, but I had never used it seriously or built anything with it. So, I used Copilot to help me, especially with conditionals. I would see an example from the professor, then I would go to Copilot and ask for it to explain conditionals to me as if I was a high school student with zero coding experience. I even asked it to draw a diagram to show me how data flows in Python, and Copilot coupled with Mermaid literally walked me through the explanation,” Kedasha explains.

“I used these tools to explain context and help me visualize the things that I was trying to learn. And if I was ever totally confused, I would just pop problems into the chat interface and ask it to break things down for me step by step. Honestly, it’s been a great learning buddy,” she adds.

Similarly, Alessio sought out AI coding tools to help him learn Rust for a new project he was working on. Here’s what Alessio had to say about using AI to assist his learning journey:

Headshot photograph of Alessio Fiorentino, DevOps Architect
“One of the key benefits of using AI is that it helps me learn and write better Rust code. Rust is a powerful language that provides full control over the execution flow, but it has many nuances and requires a different way of thinking, especially for those who started with Python or JavaScript. AI assists me in navigating these complexities and ensures that I write efficient and idiomatic Rust code.

One of the standout features of AI is its ability to help me get straight to the problem without the need for trial and error in finding the right search terms. By providing context through prompts, AI delivers focused and relevant results.

In addition to Rust, AI aids me in working with frameworks that I’m less familiar with. For example, it provides in-depth guidance when I’m using FastAPI for backend development or Svelte for frontend development. This saves me a lot of time and effort in understanding and implementing these frameworks effectively.

While I believe in the importance of reading official documentation to gain a solid foundation, AI coding tools become incredibly valuable when tackling more complex and nuanced problems. It’s like a ‘training on the job 2.0’ experience, where you start with a little initial knowledge but are rapidly accelerated in becoming more productive with the assistance of AI.”

Though we acknowledge that these are individual experiences, they showcase the power of AI coding tools in language acquisition. AI coding tools helped both Kedasha and Alessio by acting as a personalized learning companion, while offering contextual guidance and reducing the time spent on tedious tasks. This anecdotal evidence hopefully can serve as inspiration for other developers, as well as pave the way for further research into the measurable impact of AI on the learning process.

Practical tips and tricks 🧑‍🏫

We gathered a few valuable tips for you to keep in your back pocket as you start learning with AI, but before we jump into those, we want you to keep this in mind:

AI tools are assistants, not replacements. They can suggest code, catch errors, and provide explanations, but you still need to understand the core concepts of the language. Don’t solely rely on AI-generated code—sometimes the suggestions are wrong. It’s important to always analyze outputs, understand why it works, and learn the underlying principles of the specific language.

  1. Optimize your learning environment. This begins with exploring different AI coding tools and finding one that suits your learning style and the language you want to learn. It’s also important to supplement these tools with traditional learning practices, such as online tutorials, textbooks, or video courses. These can provide a more structured learning path, as well as more in-depth explanations for certain concepts.

  2. Don’t be afraid to experiment! Use the AI as a safety net—try different approaches to problems and see how the AI reacts so you can learn both from successes and errors. For example, let the AI suggest code snippets, but try to actively think about the suggestions and why they will (or won’t) work. You can also practice with error correction by letting the AI highlight errors and using them as learning opportunities to identify and rectify mistakes in your code.

  3. Be specific and give context. When you’re learning a new programming language with AI coding tools, providing context is crucial for two main reasons:

    • Improved accuracy and relevance. AI tools rely on context to understand your intent and the problem you’re trying to solve.
    • Deeper understanding and skill development. AI suggestions based on context can lead you to explore the underlying reasons why certain code works the way it does. This deeper understanding goes beyond rote memorization and promotes long-term knowledge retention.

    It’s best to treat prompts as discrete and atomized tasks to get to an end result. Not only will this help you build better prompts for the AI, but it will also help you better articulate what you are trying to achieve.

  4. Reach out to developers in the community. Developers who are actively experimenting with AI or are fluent in the language you’re trying to learn will provide more help than this blog post ever could. GitHub Community discussions are a great place to find folks with similar interests or answers to questions on numerous topics. For example, you could check out the Copilot discussion to learn more about Copilot to see if it’s the right fit for your toolkit to learn a new language!

Benefits of hurdling the programming language barrier with AI 🚧

Now that you’ve explored how to use AI to learn a new programming language, here are some of the benefits you can expect to take advantage of:

  • Real-time assistance and feedback. These AI-powered coding tools leverage machine learning algorithms trained on vast public code repositories to offer functionalities, such as context-aware code completion, syntax and logic error detection, and immediate access to relevant documentation and code examples. This not only accelerates the writing process; it also helps identify errors and potential improvements to help developers grasp the nuances of a new language. Plus, some tools (like Copilot) employ a chat interface which helps you ask specific questions about a new language you’re learning.

Asking GitHub Copilot for assistance writing a Hello World program in Python and Rust.

  • Adaptive learning pathways. Everyone learns at different paces in different ways—some folks are autodidacts and others thrive with more personal instruction. Luckily, AI tools aren’t one-size-fits-all solutions when it comes to helping you learn a new programming language. They can assess your individual skill level and adjust their assistance accordingly. This personalized approach ensures you’re challenged appropriately, neither overwhelmed nor underwhelmed, leading to a more efficient and engaging learning experience. Here’s an example of how:

Scenario: you’re a beginner learning Python and you want to write a simple script to calculate the area of a rectangle.

Initial stage:

  • Low confidence: the AI initially detects you’re a beginner based on factors like the simplicity of your code and the use of basic syntax.
  • High level of assistance: after recognizing your inexperience, the AI offers extensive help:
    • Contextual code completion: as you type area( for a function call, the AI suggests the complete call area(length, width).
    • Example code: the AI displays a code snippet demonstrating how to use the area function with sample values.

As you progress:

  • Gradual decrease in assistance: as you write code for calculating length and width, and then call the area function with those variables, the AI observes your growing understanding.
  • Reduced completion: the AI might only suggest function names or variable names after you start typing, requiring you to fill in the details.

  • Focus on best practices: the AI might highlight areas for improvement, suggesting ways to make your code more efficient or readable.
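To make the scenario concrete, here is what the finished script might look like once the assistance has tapered off. The function name and sample values are illustrative, not the output of any specific tool:

```python
def area(length, width):
    """Return the area of a rectangle given its side lengths."""
    return length * width

# Sample values, like the ones the AI's example code might suggest
length = 5
width = 3
print(area(length, width))  # prints 15
```

A beginner would see the whole function suggested up front; as your skill grows, the tool might only complete names like `area` and leave the body to you.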

  • More time on your hands. By offering code completion and syntax checks, AI tools free up valuable time. For example, studies show 55% faster task completion with Copilot’s predictive text feature. This allows you to focus on understanding the core concepts of the language, experiment with new functionalities, and build more complex projects, all of which are crucial for solidifying your programming knowledge.
  • Opportunities to upskill. The world of tech is constantly evolving, and new languages emerge all the time. A prime example is Mojo, a new language launched in May 2023 that combines Python-like syntax with the performance of C++. With AI assistance, upskilling with a new language becomes a more achievable and less daunting prospect. Whether you need to learn Python for data science or Java for Android development, AI tools can equip you with the necessary foundational knowledge and accelerate your journey towards becoming a more versatile developer.

Take this with you 🤝

The landscape of software development is evolving, and AI coding tools are at the forefront of this transformation. With these tools, you can explore your own programming language interests, streamline your skill acquisition journey, and ultimately feel empowered to stay competitive in your career. Plus, the opportunities they offer to democratize access to programming knowledge and accelerate the growth of skilled developers everywhere is pretty exciting.

Check out this blog post to learn more exciting ways you can use Copilot with your projects.

If you’d like to try out GitHub Copilot as an aid to your learning journey, start your free trial here.

The post How to use AI coding tools to learn a new programming language appeared first on The GitHub Blog.

What is Git? Our beginner’s guide to version control https://github.blog/developer-skills/programming-languages-and-frameworks/what-is-git-our-beginners-guide-to-version-control/ Mon, 27 May 2024 14:25:55 +0000 https://github.blog/?p=78178 Let’s get you started on your Git journey with basic concepts to know, plus a step-by-step on how to install and configure the most widely used version control system in the world.

The post What is Git? Our beginner’s guide to version control appeared first on The GitHub Blog.


If you’re new to software development, welcome! We’re so glad you’re here. You probably have a lot of questions and we’re excited to help you navigate them all.

Today, we’re going to dive into the basics of Git: what it is, why it’s important, how you can install and configure it, plus some basic concepts to get you started.

Here’s the deal: Git is the most widely used version control system (VCS) in the world—and version control is a system that tracks changes to files over a period of time.

Let’s use your resume as an example. You’ve probably had several iterations of your resume over the course of your career. On your computer, you probably have separate files labeled resume, resumev2, resumev4, etc. But with version control, you can keep just one main resume file because the version control system (Git) tracks all the changes for you. So, you can have one file where you’re able to see its history, previous versions, and all the changes you’ve made over time.

Git concepts every new dev must know. The defined terms include repository, branching, pulling, pushing, committing, merging, dev environment, and rebasing.

Terms to know

  • Working directory: this is where you make changes to your files. It’s like your workspace, holding the current state of your project that Git hasn’t yet been told to track.
  • Staging area: also called the index, this is where you prepare changes before committing them. It’s like a draft space, allowing you to review and adjust changes before they become part of the project’s history.
  • Local repository: your local repository is your project’s history stored on your computer. It includes all the commits and branches and acts as a personal record of your project’s changes.
  • Remote repository: a remote repository is a version of your project hosted on the internet or network. It allows multiple people to collaborate by pushing to and pulling from this shared resource.
  • Branches: branches are parallel versions of your project. They allow you to work on different features or fixes independently without affecting the main project until you’re ready to merge them back.
  • Pull request: a pull request is a way to propose changes from one branch to another. It’s a request to review, discuss, and possibly merge the changes into the target branch, and is often used in team collaborations.
  • Merge: merging is the process of integrating changes from one branch into another. It combines the histories of both branches, creating a single, unified history.

How do I install Git?

First things first: to use Git, you’ll need to download it on your machine.

Installing Git on macOS

While macOS comes with a preinstalled version of Git, you’ll still want to download it to ensure you have the most up-to-date version.

  1. Get instructions: go to git-scm.com/downloads then click macOS.
  2. Install Homebrew, which makes it easy to install software on your machine. Copy the command /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)", open your terminal, paste the command, and hit enter. This will take a few moments to run.
  3. Once Homebrew is installed, return to the download page. Open your terminal and paste the command brew install git to run the installer. When it completes successfully, you have Git on your machine!
  4. Open up your terminal and run the command git and you should see a list of all the commands available.

Installing Git on Windows 11

If you’re using a Windows machine, click on the Windows icon on the download page. This will give you the most recent version of Git.

Once you have that folder on your machine, double-click it and follow the onscreen wizard prompts:

  1. Click the “Next” button to accept the terms, confirm the location to save Git, and keep the default selections.
  2. Reset the default branch name to “main” as that’s the new convention.
  3. Click the “Next” button to accept the recommended path, the bundled OpenSSH program, and for all the other options.
  4. Click the “install” button.
  5. Once Git is installed on the machine, click the “Finish” button and open your terminal.
  6. Run the command git and you should see a list of all the commands available.

Now, you’re ready to start configuring and using it!

How to configure Git on your machine

Now that you’ve got Git on your machine, it’s time to get it set up. Here’s how to do it.

  1. Open your terminal and run git config --global user.name "FIRST_NAME LAST_NAME", then git config --global user.email "[email protected]". This tells Git who made a change and gives you credit for the work that was done.
  2. If you run git config, you can see all the other configuration options that are available, but you don’t need to worry about those right now.
  3. For now, you can check who Git thinks you are by running git config --list, which will return the configuration options you just set.
  4. Hit Q on your keyboard to exit this screen.
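Putting those configuration steps together, a minimal session looks like this. The name and email below are placeholders — substitute your own:

```shell
# Tell Git who you are (placeholder values -- substitute your own)
git config --global user.name "FIRST_NAME LAST_NAME"
git config --global user.email "you@example.com"

# Read the values back to confirm they were saved
git config --global user.name
git config --global user.email
```

You only need to do this once per machine; every repository you work in will pick up these global values unless you override them locally.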

Basic terminal and Git commands

Now that you have a basic config set up, let’s go over some of the most basic terminal and Git commands, so you can start using the tool.

Creating a new folder

  1. Open up your terminal and type mkdir git-practice to create a new folder, then type cd git-practice to go into the folder you just created.
  2. Open the folder in your code editor.

Creating a new file

  1. In your terminal, type touch hello.md to create a new markdown file in your folder.
  2. Navigate to your code editor and you’ll see that file right there: hello.md.

Initialize Git or create a new Git repository

  1. Go back to your terminal and run git init in this folder. This is the first command you run in a new project. It allows Git to start tracking your changes.
  2. Run git status and you’ll see that it’s currently tracking the empty hello.md file. This will show your changes and whether they have been staged. It will also show which files are being tracked by Git.

Add changes from the working directory to your staging area

  1. Run git add . and then git status, and you’ll see the tracked file hello.md shown in a different color, which indicates that it’s now in the staging area.
  2. Go back to your code editor and type the following: # I’m learning to use Git!
  3. Save the file and return to your terminal. When you type git status, it will show that there’s been a change to the hello.md file.

Commit changes

  1. Run git commit -m 'initial commit'. This command allows you to save your changes with a message attached.

One thing to note is that you’ll be using git status, git add, and git commit very often during your time using Git, so it’s important to practice these commands. You can see a list of all the available commands by running git in your terminal, or you can check out our docs to see a list of commands and how to use them.
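Run end to end from an empty scratch directory, the whole sequence above looks like this. The identity values are placeholders so the commit succeeds in a fresh environment — use your own:

```shell
mkdir git-practice && cd git-practice         # create and enter a new folder
touch hello.md                                # create an empty markdown file
git init                                      # start tracking this folder with Git
git config user.name "Example Dev"            # placeholder identity for this repo
git config user.email "dev@example.com"
echo "# I'm learning to use Git!" > hello.md  # add a line of content
git status                                    # shows hello.md waiting to be staged
git add .                                     # stage the change
git commit -m 'initial commit'                # save it with a message
git log --oneline                             # one line per commit
```

After the final command you should see a single entry in the log: your initial commit.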

Are Git and GitHub the same?

Nope! Git is a version control system that tracks file changes and GitHub is a platform that allows developers to collaborate and store their code in the cloud. Think of it this way: Git is responsible for everything GitHub-related that happens locally on your computer. They both work together to make building, scaling, securing, and storing software easier.

More resources to keep practicing your Git skills:

If you have questions or feedback, pop it in the GitHub Community thread and we’ll be sure to respond!

10 unexpected ways to use GitHub Copilot https://github.blog/developer-skills/programming-languages-and-frameworks/10-unexpected-ways-to-use-github-copilot/ Mon, 22 Jan 2024 17:30:07 +0000 https://github.blog/?p=76298 GitHub Copilot is widely known for its code generation feature. Learn how the AI assistant’s abilities can extend beyond just code generation.

The post 10 unexpected ways to use GitHub Copilot appeared first on The GitHub Blog.

Writing code is more than just writing code. There are commit messages to write, CLI commands to execute, and obscure syntax to try to remember. While you’ve probably used GitHub Copilot to support your coding, did you know it can also support your other workloads?

GitHub Copilot is widely known for its ability to help developers write code in their IDE. Today, I want to show you how the AI assistant’s abilities can extend beyond just code generation. In this post, we’ll explore 10 use cases where GitHub Copilot can help reduce friction during your developer workflow. This includes pull requests, working from the command line, debugging CI/CD workflows, and much more!

Let’s get into it.

1. Run terminal commands from GitHub Copilot Chat

If you ever forget how to run a particular command when you’re working in VS Code, GitHub Copilot Chat is here to help! With the new @terminal agent in VS Code, you can ask GitHub Copilot how to run a particular command. Once it generates a response, you can then click the “Insert into Terminal” button to run the suggested command.

Let me show you what I mean:

The @terminal agent in VS Code also has context about the integrated shell terminal, so it can help you even further.

2. Write pull request summaries (Copilot Enterprise feature only)

We’ve all been there: you make a sizable pull request with tons of files and hundreds of changes, and it can be hard to remember every little detail you’ve implemented or changed.

Yet it’s an important part of collaborating with other engineers/developers on my team. After all, if I don’t give them a summary of my proposed changes, I’m not giving them the full context they need to provide an effective review. Thankfully, GitHub Copilot is now integrated into pull requests! This means, with the assistance of AI, you can generate a detailed pull request summary of the changes you made in your files.

Let’s look at how you can generate these summaries:

Now, isn’t that grand! All you have to do is go in and edit what was generated and you have a great, detailed explanation of all the changes you’ve made—with links to changed files!

Note: You will need a Copilot Enterprise plan (which requires GitHub Enterprise Cloud) to use PR summaries. Learn more about this enterprise feature by reading our documentation.

3. Generate commit messages

I came across this one recently while making changes in VS Code. GitHub Copilot can help you generate commit messages right in your IDE. If you click on the source control button, you’ll notice a sparkle in the message input box.

Click on those sparkles and voilà, commit messages are generated on your behalf:

I thought this was a pretty nifty feature of GitHub Copilot in VS Code and Visual Studio.

4. Get help in the terminal with GitHub Copilot in the CLI

Another way to get help with terminal commands is to use GitHub Copilot in the CLI. This is an extension to the GitHub CLI that helps you with general shell commands, Git commands, and gh CLI commands.

GitHub Copilot in the CLI is a game-changer that is super useful for reminding you of commands, teaching you new commands, or explaining random commands you come across online.

Learn how to get started with GitHub Copilot in the CLI by reading this post!

5. Talk to your repositories on GitHub.com (Copilot Enterprise feature only)

If you’ve ever gone to a new repository and have no idea what’s happening even though the README is there, you can now use GitHub Copilot Chat to explain the repository to you, right on GitHub.com. Just click on the Copilot icon in the top right corner of the repository and ask whatever you want to know about that repository.

On GitHub.com you can ask Copilot general software related questions, questions about the context of your project, questions about a specific file, or specified lines of code within a file.

Note: You will need a Copilot Enterprise plan (which requires GitHub Enterprise Cloud) to use GitHub Copilot Chat in repositories on GitHub.com. Learn more about this enterprise feature by reading our documentation.

6. Fix code inline

Did you know that in addition to asking for suggestions with comments, you can get help with your code inline? Just highlight the code you want to fix, right click, and select “Fix using Copilot.” Copilot will then provide you with a suggested fix for your code.

This is great to have for those small little fixes we sometimes need right in our current files.

7. Bulk close 1000+ GitHub Issues

My team and I had a use case where we needed to close over 1,600 invalid GitHub Issues submitted to one of our repositories. I created a custom GitHub Action that automatically closed all 1,600+ issues and implemented the solution with GitHub Copilot.

GitHub Copilot Chat helped me to create the GitHub Action, and also helped me implement the closeIssue() function very quickly by leveraging Octokit to grab all the issues that needed to be closed.

Example of a closeissues.js script generated by GitHub Copilot

You can read all about how I bulk closed 1000+ GitHub issues in this blog post, but just know that with GitHub Copilot Chat, we went from having 1,600+ open issues, to a measly 64 in a matter of minutes.

8. Generate documentation for your code

We all love documenting our code, but just in case some of us need a little help writing documentation, GitHub Copilot is here to help!

Regardless of your language, you can quickly generate documentation following language-specific formats—docstrings for Python, JSDoc for JavaScript, or Javadoc for Java.
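For a sense of what that output looks like, here is a Python docstring in the conventional Google style for a small made-up function — an illustration of the format, not the output of any particular tool:

```python
def celsius_to_fahrenheit(celsius):
    """Convert a temperature from Celsius to Fahrenheit.

    Args:
        celsius (float): Temperature in degrees Celsius.

    Returns:
        float: The equivalent temperature in degrees Fahrenheit.
    """
    return celsius * 9 / 5 + 32
```

Generated or not, a docstring like this is what editors and tools such as `help()` surface when someone hovers over or inspects your function, which is why it pays to keep them accurate.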

9. Get help with error messages in your terminal

Error messages can often be confusing. With GitHub Copilot in your IDE, you can now get help with error messages right in the terminal. Just highlight the error message, right click, and select “Explain with Copilot.” GitHub Copilot will then provide you with a description of the error and a suggested fix.

You can also bring error messages from your browser console into Copilot Chat so it can explain those messages to you as well with the /explain slash command.

10. Debug your GitHub Actions workflow

Whenever I have a speaking engagement, I like to create my slides using Slidev, an open source presentation slide builder for developers. I enjoy using it because I can create my slides in Markdown and still make them look splashy! Take a look at this one for example!

Anyway, there was a point in time where I had an issue with deploying my slides to GitHub Pages and I just couldn’t figure out what the issue was. So, of course, I turned to my trusty assistant—GitHub Copilot Chat that helped me debug my way through deploying my slides.

Conversation between GitHub Copilot Chat and developer to debug a GitHub Actions workflow

Read more about how I debugged my deployment workflow with GitHub Copilot Chat here.

GitHub Copilot goes beyond code completion

As you see above, GitHub Copilot extends far beyond your editor and code completion. It is truly evolving to be one of the best tools you can have in your developer toolkit. I’m still learning and discovering new ways to integrate GitHub Copilot into my daily workflow and I hope you give some of the above a chance!

Be sure to sign up for GitHub Copilot if you haven’t tried it out yet and stay up to date with all that’s happening by subscribing to our developer newsletter for more tips, technical guides, and best practices! You can also drop me a note on X if you have any questions, @itsthatladydev.

Until next time, happy coding!
