<div id="blog"><div id="content">
<div class="Article" data-slug="/blog/type-construction-and-cycle-detection">
<h1 class="small"><a href="/blog/">The Go Blog</a></h1>
<h1>Type Construction and Cycle Detection</h1>
<p class="author">
Mark Freeman<br>
24 March 2026
</p>
<div class='markdown'>
<p>Go’s static typing is an important part of why Go is a good fit for production
systems that have to be robust and reliable. When a Go package is compiled, it
is first parsed—meaning that the Go source code within that package is converted
into an abstract syntax tree (or AST). This AST is then passed to the Go
<em>type checker</em>.</p>
<p>In this blog post, we’ll dive into a part of the type checker we significantly
improved in Go 1.26. How does this change things from a Go user’s perspective?
Unless one is fond of arcane type definitions, there’s no observable change
here. This refinement was intended to reduce corner cases, setting us up for
future improvements to Go. Also, it’s a fun look at something that seems quite
ordinary to Go programmers, but has some real subtleties hiding within.</p>
<p>But first, what exactly is <em>type checking</em>? It’s a step in the Go compiler that
eliminates whole classes of errors at compile time. Specifically, the Go type
checker verifies that:</p>
<ol>
<li>Types appearing in the AST are valid (for example, a map’s key type must be
<code>comparable</code>).</li>
<li>Operations involving those types (or their values) are valid (for example,
one can’t add an <code>int</code> and a <code>string</code>).</li>
</ol>
<p>To accomplish this, the type checker constructs an internal representation for
each type it encounters while traversing the AST—a process informally called
<em>type construction</em>.</p>
<p>As we’ll soon see, even though Go is known for its simple type system, type
construction can be deceptively complex in certain corners of the language.</p>
<h2 id="type-construction">Type construction</h2>
<p>Let’s start by considering a simple pair of type declarations:</p>
<pre><code class="language-go">type T []U
type U *int
</code></pre>
<p>When the type checker is invoked, it first encounters the type declaration for
<code>T</code>. Here, the AST records a type definition of a type name <code>T</code> and a
<em>type expression</em> <code>[]U</code>. <code>T</code> is a <a href="/ref/spec#Types">defined type</a>; to represent
the actual data structure that the type checker uses when constructing defined
types, we’ll use a <code>Defined</code> struct.</p>
<p>The <code>Defined</code> struct contains a pointer to the type for the type expression to
the right of the type name. This <code>underlying</code> field is relevant for sourcing the
type’s <a href="/ref/spec#Underlying_types">underlying type</a>. To help illustrate the
type checker’s state, let’s see how walking the AST fills in the data
structures, starting with:</p>
<img style="background-color:#f3f3f3" src="type-construction-and-cycle-detection/01.svg" />
<p>At this point, <code>T</code> is <em>under construction</em>, indicated by the color yellow. Since
we haven’t evaluated the type expression <code>[]U</code> yet—it’s still black—<code>underlying</code>
points to <code>nil</code>, indicated by an open arrow.</p>
<p>When we evaluate <code>[]U</code>, the type checker constructs a <code>Slice</code> struct, the
internal data structure used to represent slice types. Similarly to <code>Defined</code>,
it contains a pointer to the element type for the slice. We don’t yet know what
the name <code>U</code> refers to, though we expect it to refer to a type. So, again, this
pointer is <code>nil</code>. We are left with:</p>
<img style="background-color:#f3f3f3" src="type-construction-and-cycle-detection/02.svg" />
<p>By now you might be getting the gist, so we’ll pick up the pace a bit.</p>
<p>To convert the type name <code>U</code> to a type, we first locate its declaration. Upon
seeing that it represents another defined type, we construct a separate
<code>Defined</code> for <code>U</code> accordingly. Inspecting the right side of <code>U</code>, we see the type
expression <code>*int</code>, which evaluates to a <code>Pointer</code> struct, with the base type of
the pointer being the type expression <code>int</code>.</p>
<p>When we evaluate <code>int</code>, something special happens: we get back a predeclared
type. Predeclared types are constructed <em>before</em> the type checker even begins
walking the AST. Since the type for <code>int</code> is already constructed, there’s
nothing for us to do but point to that type.</p>
<p>We now have:</p>
<img style="background-color:#f3f3f3" src="type-construction-and-cycle-detection/03.svg" />
<p>Note that the <code>Pointer</code> type is <em>complete</em> at this point, indicated by the color
green. Completeness means that the type’s internal data structure has all of its
fields populated and any types pointed to by those fields are complete.
Completeness is an important property of a type because it ensures that
accessing the internals, or <em>deconstruction</em>, of that type is sound: we have all
the information describing the type.</p>
<p>In the image above, the <code>Pointer</code> struct only contains a <code>base</code> field, which
points to <code>int</code>. Since <code>int</code> has no fields to populate, it’s “vacuously”
complete, making the type for <code>*int</code> complete.</p>
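<p>To make this bookkeeping concrete, here is a drastically simplified model of
these structures in ordinary Go. The <code>Type</code> interface and struct names mirror
the diagrams above; the real representations inside the type checker are much
richer.</p>
<pre><code class="language-go">package main

import "fmt"

// A drastically simplified model of the structures described above;
// the real representations live inside the go/types type checker.
type Type interface{ isType() }

type Basic struct{ name string } // a predeclared type such as int

type Defined struct {
	name       string
	underlying Type // nil while the type is under construction
}

type Slice struct{ elem Type }

type Pointer struct{ base Type }

func (*Basic) isType()   {}
func (*Defined) isType() {}
func (*Slice) isType()   {}
func (*Pointer) isType() {}

// complete reports whether t's fields are populated and the types they
// point to are complete. Note that this naive recursion would loop
// forever on the recursive types discussed later; the real checker
// tracks construction state instead.
func complete(t Type) bool {
	switch t := t.(type) {
	case *Basic:
		return true // vacuously complete
	case *Defined:
		return t.underlying != nil && complete(t.underlying)
	case *Slice:
		return t.elem != nil && complete(t.elem)
	case *Pointer:
		return t.base != nil && complete(t.base)
	}
	return false
}

func main() {
	// type U *int
	U := &Defined{name: "U", underlying: &Pointer{base: &Basic{name: "int"}}}
	// type T []U
	T := &Defined{name: "T", underlying: &Slice{elem: U}}
	fmt.Println(complete(T)) // true
}
</code></pre>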
<p>From here, the type checker begins unwinding the stack. Since the type for
<code>*int</code> is complete, we can complete the type for <code>U</code>, meaning we can complete
the type for <code>[]U</code>, and so on for <code>T</code>. When this process ends, we are left with
only complete types, as shown below:</p>
<img style="background-color:#f3f3f3" src="type-construction-and-cycle-detection/04.svg" />
<p>The numbering above shows the order in which the types were completed (after the
<code>Pointer</code>). Note that the type on the bottom completed first. Type construction
is naturally a depth-first process, since completing a type requires its
dependencies to be completed first.</p>
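<p>Although the checker’s internal structs aren’t exported, the outcome of type
construction is visible through the exported <code>go/types</code> API. Here is a sketch
that type-checks the declarations above and prints the completed underlying
types:</p>
<pre><code class="language-go">package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"go/types"
)

func main() {
	src := `package p
type T []U
type U *int`
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "p.go", src, 0)
	if err != nil {
		panic(err)
	}
	conf := types.Config{}
	pkg, err := conf.Check("p", fset, []*ast.File{f}, nil)
	if err != nil {
		panic(err)
	}
	T := pkg.Scope().Lookup("T").Type()
	U := pkg.Scope().Lookup("U").Type()
	// Underlying reports the completed underlying type of each.
	fmt.Println(T.Underlying(), U.Underlying()) // []p.U *int
}
</code></pre>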
<h2 id="recursive-types">Recursive types</h2>
<p>With this simple example out of the way, let’s add a bit more nuance. Go’s type
system also allows us to express recursive types. A typical example is something
like:</p>
<pre><code class="language-go">type Node struct {
next *Node
}
</code></pre>
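<p>Recursive types like this are everyday Go. As a quick sketch (adding a <code>val</code>
field to the <code>Node</code> above for illustration), we can build and traverse a small
linked list:</p>
<pre><code class="language-go">package main

import "fmt"

type Node struct {
	val  int
	next *Node // the type refers to itself
}

func main() {
	// Build the list 1 -> 2 -> 3 by hand.
	list := &Node{1, &Node{2, &Node{3, nil}}}
	sum := 0
	for n := list; n != nil; n = n.next {
		sum += n.val
	}
	fmt.Println(sum) // 6
}
</code></pre>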
<p>If we reconsider our example from above, we can add a bit of recursion by
swapping <code>*int</code> for <code>*T</code> like so:</p>
<pre><code class="language-go">type T []U
type U *T
</code></pre>
<p>Now for a trace: let’s start once more with <code>T</code>, but skip ahead to illustrate
the effects of this change. As one might suspect from our previous example, the
type checker will approach the evaluation of <code>*T</code> with the below state:</p>
<img style="background-color:#f3f3f3" src="type-construction-and-cycle-detection/05.svg" />
<p>The question is what to do with the base type for <code>*T</code>. We have an idea of what
<code>T</code> is (a <code>Defined</code>), but it’s currently being constructed (its <code>underlying</code> is
still <code>nil</code>).</p>
<p>We simply point the base type for <code>*T</code> to <code>T</code>, even though <code>T</code> is incomplete:</p>
<img id="example" style="background-color:#f3f3f3" src="type-construction-and-cycle-detection/06.svg" />
<p>We do this assuming that <code>T</code> will complete when it finishes construction
<em>in the future</em> (by pointing to a complete type). When that happens, <code>base</code> will
point to a complete type, thus making <code>*T</code> complete.</p>
<p>In the meantime, we’ll begin heading back up the stack:</p>
<img style="background-color:#f3f3f3" src="type-construction-and-cycle-detection/07.svg" />
<p>When we get back to the top and finish constructing <code>T</code>, the “loop” of types
will close, completing each type in the loop simultaneously:</p>
<img style="background-color:#f3f3f3" src="type-construction-and-cycle-detection/08.svg" />
<p>Before we considered recursive types, evaluating a type expression always
returned a complete type. That was a convenient property because it meant the
type checker could always deconstruct (look inside) a type returned from
evaluation.</p>
<p>But in the <a href="#example">example above</a>, evaluation of <code>T</code> returned an <em>incomplete</em>
type, meaning deconstructing <code>T</code> is unsound until it completes. Generally
speaking, recursive types mean that the type checker can no longer assume that
types returned from evaluation will be complete.</p>
<p>Yet, type checking involves many checks which require deconstructing a type. A
classic example is confirming that a map key is <code>comparable</code>, which requires
inspecting the <code>underlying</code> field. How do we safely interact with incomplete
types like <code>T</code>?</p>
<p>Recall that type completeness is a prerequisite for deconstructing a type. In
this case, type construction never deconstructs a type; it merely refers to
types. In other words, type completeness <em>does not</em> block type construction
here.</p>
<p>Because type construction isn’t blocked, the type checker can simply delay such
checks until the end of type checking, when all types are complete (note that
the checks themselves also do not block type construction). If a type were to
reveal a type error, it makes no difference when that error is reported during
type checking—only that it is reported eventually.</p>
<p>With this knowledge in mind, let’s examine a more complex example involving
values of incomplete types.</p>
<h2 id="recursive-types-and-values">Recursive types and values</h2>
<p>Let’s take a brief detour and have a look at Go’s
<a href="/ref/spec#Array_types">array types</a>. Importantly, array types have a size,
which is a <a href="/ref/spec#Array_types">constant</a> that is part of the type. Some
operations, like the functions <code>unsafe.Sizeof</code> and <code>len</code>, can return
constants when applied to <a href="/ref/spec#Package_unsafe">certain values</a> or
<a href="/ref/spec#Length_and_capacity">expressions</a>, meaning they can appear as array
sizes. Crucially, the values passed to those functions can be of any type,
even an incomplete type. We call values of incomplete types <em>incomplete values</em>.</p>
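<p>A minimal sketch of such constant sizes in action; the constancy of <code>len</code> and
<code>unsafe.Sizeof</code> here follows from the spec rules linked above:</p>
<pre><code class="language-go">package main

import (
	"fmt"
	"unsafe"
)

// len applied to an array is a constant, so it can itself appear
// as an array size.
var a [3]int
var b [len(a)]string // an array of 3 strings

// unsafe.Sizeof is likewise a compile-time constant.
var c [unsafe.Sizeof(uint64(0))]byte // an array of 8 bytes

func main() {
	fmt.Println(len(b), len(c)) // 3 8
}
</code></pre>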
<p>Let’s consider this example:</p>
<pre><code class="language-go">type T [unsafe.Sizeof(T{})]int
</code></pre>
<p>In the same way as before, we’ll reach a state like the one below:</p>
<img style="background-color:#f3f3f3" src="type-construction-and-cycle-detection/09.svg" />
<p>To construct the <code>Array</code>, we must calculate its size. From the value expression
<code>unsafe.Sizeof(T{})</code>, that’s the size of <code>T</code>. For array types (such as <code>T</code>),
calculating their size requires deconstruction: we need to look inside the type
to determine the length of the array and size of each element.</p>
<p>In other words, type construction for the <code>Array</code> <em>does</em> deconstruct <code>T</code>,
meaning the <code>Array</code> cannot finish construction (let alone complete) before <code>T</code>
completes. The “loop” trick that we used earlier—where a loop of types
simultaneously completes as the type starting the loop finishes
construction—doesn’t work here.</p>
<p>This leaves us in a bind:</p>
<ul>
<li><code>T</code> cannot be completed until the <code>Array</code> completes.</li>
<li>The <code>Array</code> cannot be completed until <code>T</code> completes.</li>
<li>They <em>cannot</em> be completed simultaneously (unlike before).</li>
</ul>
<p>Clearly, this is impossible to satisfy. What is the type checker to do?</p>
<h3 id="cycle-detection">Cycle detection</h3>
<p>Fundamentally, code such as this is invalid because the size of <code>T</code> cannot be
determined without knowing the size of <code>T</code>, regardless of how the type checker
operates. This particular instance—cyclic size definition—is part of a class of
errors called <em>cycle errors</em>, which generally involve cyclic definition of Go
constructs. As another example, consider <code>type T T</code>, which is also in this
class, but for different reasons. The process of finding and reporting cycle
errors in the course of type checking is called <em>cycle detection</em>.</p>
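<p>We can watch cycle detection fire on <code>type T T</code> from the outside, using the
exported <code>go/types</code> API rather than the checker’s internals:</p>
<pre><code class="language-go">package main

import (
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"go/types"
)

func main() {
	src := `package p
type T T`
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "p.go", src, 0)
	if err != nil {
		panic(err)
	}
	conf := types.Config{}
	_, err = conf.Check("p", fset, []*ast.File{f}, nil)
	// The checker reports a cycle error (an invalid recursive type).
	fmt.Println(err != nil) // true
}
</code></pre>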
<p>Now, how does cycle detection work for <code>type T [unsafe.Sizeof(T{})]int</code>? To
answer this, let’s look at the inner <code>T{}</code>. Because <code>T{}</code> is a composite literal
expression, the type checker knows that its resulting value is of type <code>T</code>.
Because <code>T</code> is incomplete, we call the value <code>T{}</code> an <em>incomplete value</em>.</p>
<p>We must be cautious—operating on an incomplete value is only sound if it doesn’t
deconstruct the value’s type. For example, <code>type T [unsafe.Sizeof(new(T))]int</code>
<em>is</em> sound, since the value <code>new(T)</code> (of type <code>*T</code>) is never deconstructed—all
pointers have the same size. To reiterate, it is sound to size an incomplete
value of type <code>*T</code>, but not one of type <code>T</code>.</p>
<p>This is because the “pointerness” of <code>*T</code> provides enough type information for
<code>unsafe.Sizeof</code>, whereas just <code>T</code> does not. In fact, it’s never sound to operate
on an incomplete value <em>whose type is a defined type</em>, because a mere type name
conveys no (underlying) type information at all.</p>
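<p>A sketch contrasting the two cases, assuming a toolchain with the Go 1.26
behavior described here:</p>
<pre><code class="language-go">package main

import (
	"fmt"
	"unsafe"
)

// Valid: sizing a value of type *T needs no information about T
// itself, since all pointers have the same size.
type T [unsafe.Sizeof(new(T))]int

// Invalid (a cycle error, if uncommented): sizing a value of type T
// requires T's own size.
// type U [unsafe.Sizeof(U{})]int

func main() {
	var t T
	// The array length equals the platform's pointer size.
	fmt.Println(len(t) == int(unsafe.Sizeof(uintptr(0)))) // true
}
</code></pre>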
<h4 id="where-to-do-it">Where to do it</h4>
<p>Up to now we’ve focused on <code>unsafe.Sizeof</code> directly operating on potentially
incomplete values. In <code>type T [unsafe.Sizeof(T{})]int</code>, the call to <code>unsafe.Sizeof</code>
is just the “root” of the array length expression. We can readily imagine the
incomplete value <code>T{}</code> as an operand in some other value expression.</p>
<p>For example, it could be passed to a function (e.g.
<code>type T [unsafe.Sizeof(f(T{}))]int</code>), sliced (e.g.
<code>type T [unsafe.Sizeof(T{}[:])]int</code>), indexed (e.g.
<code>type T [unsafe.Sizeof(T{}[0])]int</code>), and so on. All of these are invalid because they
require deconstructing <code>T</code>. For instance, indexing <code>T</code>
<a href="/ref/spec#Index_expressions">requires checking</a> the underlying type of <code>T</code>.
Because these expressions “consume” potentially incomplete values, let’s call
them <em>downstreams</em>. There are many more examples of downstream operators, some
of which are not syntactically obvious.</p>
<p>Similarly, <code>T{}</code> is just one example of an expression that “produces” a
potentially incomplete value—let’s call these kinds of expressions <em>upstreams</em>:</p>
<img style="background-color:#f3f3f3" src="type-construction-and-cycle-detection/10.svg" />
<p>By comparison, the value expressions that might result in incomplete values
are fewer in number and more syntactically obvious. It’s also rather simple to
enumerate these cases by inspecting Go’s syntax definition. For these reasons,
it’s simpler to implement our cycle detection logic via the upstreams, where
potentially incomplete values originate. Below are some examples of them:</p>
<pre><code class="language-go">type T [unsafe.Sizeof(T(42))]int // conversion
func f() T
type T [unsafe.Sizeof(f())]int // function call
var i interface{}
type T [unsafe.Sizeof(i.(T))]int // assertion
type T [unsafe.Sizeof(<-(make(<-chan T)))]int // channel receive
type T [unsafe.Sizeof(make(map[int]T)[42])]int // map access
type T [unsafe.Sizeof(*new(T))]int // dereference
// ... and a handful more
</code></pre>
<p>For each of these cases, the type checker has extra logic where that particular
kind of value expression is evaluated. As soon as we know the type of the
resulting value, we insert a simple test that checks that the type is complete.</p>
<p>For instance, in the conversion example <code>type T [unsafe.Sizeof(T(42))]int</code>,
there is a snippet in the type checker that resembles:</p>
<pre><code class="language-go">func callExpr(call *syntax.CallExpr) operand {
x := typeOrValue(call.Fun)
switch x.mode() {
// ... other cases
case typeExpr:
// T(), meaning it's a conversion
T := x.typ()
// ... handle the conversion, T *is not* safe to deconstruct
}
}
</code></pre>
<p>As soon as we observe that the <code>CallExpr</code> is a conversion to <code>T</code>, we know that
the resulting type will be <code>T</code> (assuming no preceding errors). Before we pass
back a value (here, an <code>operand</code>) of type <code>T</code> to the rest of the type checker,
we need to check for completeness of <code>T</code>:</p>
<pre><code class="language-go">func callExpr(call *syntax.CallExpr) operand {
x := typeOrValue(call.Fun)
switch x.mode() {
// ... other cases
case typeExpr:
// T(), meaning it's a conversion
T := x.typ()
+ if !isComplete(T) {
+ reportCycleErr(T)
+ return invalid
+ }
// ... handle the conversion, T *is* safe to deconstruct
}
}
</code></pre>
<p>Instead of returning an incomplete value, we return a special <code>invalid</code> operand,
which signals that the call expression could not be evaluated. The rest of the
type checker has special handling for invalid operands. By adding this, we
prevented incomplete values from “escaping” downstream—both into the rest of the
type conversion logic and to downstream operators—and instead reported a cycle
error describing the problem with <code>T</code>.</p>
<p>A similar code pattern is used in all other cases, implementing cycle detection
for incomplete values.</p>
<h2 id="conclusion">Conclusion</h2>
<p>Systematic cycle detection involving incomplete values is a new addition to the
type checker. Before Go 1.26, we used a more complex type construction algorithm,
which involved more bespoke cycle detection that didn’t always work. Our new,
simpler approach addressed a number of (admittedly esoteric) compiler panics
(issues <a href="/issue/75918">#75918</a>, <a href="/issue/76383">#76383</a>,
<a href="/issue/76384">#76384</a>, <a href="/issue/76478">#76478</a>, and more), resulting in a more
stable compiler.</p>
<p>As programmers, we’ve become so accustomed to features like recursive type
definitions and sized array types that we might overlook their underlying
complexity. While this post does skip over some finer details, hopefully we’ve
conveyed a deeper understanding of (and perhaps appreciation for) the problems
surrounding type checking in Go.</p>
</div>
</div>
<div class="Article prevnext">
<p>
<b>Previous article: </b><a href="/blog/inliner">//go:fix inline and the source-level inliner</a><br>
<b><a href="/blog/all">Blog Index</a></b>
</p>
</div>
</div>
</div>
<script src="/js/play.js"></script>
<div id="blog"><div id="content">
<div class="Article" data-slug="/blog/inliner">
<h1 class="small"><a href="/blog/">The Go Blog</a></h1>
<h1>//go:fix inline and the source-level inliner</h1>
<p class="author">
Alan Donovan<br>
10 March 2026
</p>
<div class='markdown'>
<style>
.beforeafter {
justify-content: center;
display: grid;
gap: 1em;
margin: 1em;
grid-template-columns: minmax(min-content, 1fr) auto minmax(min-content, 1fr);
font-size: 180%;
@media screen and (max-width: 57.7rem) {
grid-template-columns: 1fr;
}
}
#content .beforeafter pre {
margin: 0em; /* Handled by grid gap */
}
.beforeafter-context {
grid-column: 1 / -1;
}
#content .beforeafter > pre:nth-of-type(1) { background: var(--color-diff-old); }
#content .beforeafter > pre:nth-of-type(2) { background: var(--color-diff-new); }
.beforeafter-arrow {
place-self: center;
/* Undo unnecessary grid gap. */
margin: -0.5em;
}
.beforeafter-arrow::before {
content: "⟶";
@media screen and (max-width: 57.7rem) {
content: "⇓";
}
}
</style>
<p>Go 1.26 contains an all-new implementation of the <code>go fix</code> subcommand,
designed to help you keep your Go code up-to-date and modern. For an
introduction, start by reading our <a href="gofix">recent post</a> on the topic.
In this post, we’ll look at one particular feature, the source-level
inliner.</p>
<p>While <code>go fix</code> has several bespoke modernizers for specific new
language and library features,
the source-level inliner is the first fruit of our efforts to provide
“<a href="gofix#self-service">self-service</a>” modernizers and analyzers.
It enables any package author to express simple API migrations and
updates in a straightforward and safe way.
We’ll first explain what the source-level inliner is and how you can use it,
then we’ll dive into some aspects of the problem and the technology behind it.</p>
<h2 id="source-level-inlining">Source-level inlining</h2>
<p>In 2023, we built an <a href="https://pkg.go.dev/golang.org/x/tools/internal/refactor/inline" rel="noreferrer" target="_blank">algorithm</a> for source-level inlining of function calls in Go. To “inline” a call means to replace the call by a copy of the body of the called function, substituting arguments for parameters. We call it “source-level” inlining because it durably modifies the source code. By contrast, the inlining algorithm found in a typical compiler, including Go’s, applies a similar transformation, but to the compiler’s ephemeral <a href="https://en.wikipedia.org/wiki/Intermediate_representation" rel="noreferrer" target="_blank">intermediate representation</a>, to generate more efficient code.</p>
<p>If you’ve ever invoked <a href="/gopls/">gopls</a>’ “<a href="/gopls/features/transformation#refactorinlinecall-inline-call-to-function">Inline call</a>” interactive refactoring, you’ve used the source-level inliner. (In VS Code, this code action can be found on the “Source Action…” menu.) The before-and-after screenshots below show the effect of inlining the call to <code>sum</code> from the function named <code>six</code>.</p>
<center>
<img src="/gopls/assets/inline-before.png"/>
<img src="/gopls/assets/inline-after.png"/>
</center>
<p>The inliner is a crucial building block for a number of source transformation tools. For example, gopls uses it for the “Change signature” and “Remove unused parameter” refactorings because, as we’ll see below, it takes care of many subtle correctness issues that arise when refactoring function calls.</p>
<p>This same inliner is also one of the analyzers in the all-new <code>go fix</code> command.
In <code>go fix</code>, it enables self-service API migration and upgrades using a new <code>//go:fix inline</code> directive comment.
Let’s take a look at a few examples of how this works and what it can be used for.</p>
<h3 id="example-renaming-ioutilreadfile">Example: renaming <code>ioutil.ReadFile</code></h3>
<p>In Go 1.16, the <code>ioutil.ReadFile</code> function, which reads the content of a file, was deprecated in favor of the new <code>os.ReadFile</code> function. In effect, the function was renamed, though of course Go’s <a href="/doc/go1compat">compatibility promise</a> prevents us from ever removing the old name.</p>
<pre><code class="language-go">package ioutil
import "os"
// ReadFile reads the file named by filename…
// Deprecated: As of Go 1.16, this function simply calls [os.ReadFile].
func ReadFile(filename string) ([]byte, error) {
return os.ReadFile(filename)
}
</code></pre>
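<p>The wrapper above is behaviorally identical to <code>os.ReadFile</code>, which is what
makes inlining the call safe. A quick sketch confirming that the two names
agree, using a temporary file created just for the demonstration:</p>
<pre><code class="language-go">package main

import (
	"fmt"
	"io/ioutil"
	"os"
)

func main() {
	// Write a scratch file, then read it back both ways.
	tmp, err := os.CreateTemp("", "hello")
	if err != nil {
		panic(err)
	}
	defer os.Remove(tmp.Name())
	tmp.WriteString("hello")
	tmp.Close()

	a, _ := ioutil.ReadFile(tmp.Name()) // the deprecated name
	b, _ := os.ReadFile(tmp.Name())     // its replacement
	fmt.Println(string(a), string(b))   // hello hello
}
</code></pre>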
<p>Ideally, we would like to change every Go program in the world to stop using <code>ioutil.ReadFile</code> and to call <code>os.ReadFile</code> instead. The inliner can help us do that. First we annotate the old function with <code>//go:fix inline</code>. This comment tells the tool that any time it sees a call to this function, it should inline the call.</p>
<pre><code class="language-go">package ioutil
import "os"
// ReadFile reads the file named by filename…
// Deprecated: As of Go 1.16, this function simply calls [os.ReadFile].
//go:fix inline
func ReadFile(filename string) ([]byte, error) {
return os.ReadFile(filename)
}
</code></pre>
<p>When we run <code>go fix</code> on a file containing a call to <code>ioutil.ReadFile</code>, it applies the replacement:</p>
<pre><code>$ go fix -diff ./...
-import "io/ioutil"
+import "os"
- data, err := ioutil.ReadFile("hello.txt")
+ data, err := os.ReadFile("hello.txt")
</code></pre>
<p>The call has been inlined, in effect replacing a call to one function by a call to another.</p>
<p>Because the inliner replaces a function call by a copy of the body of
the called function, not by some arbitrary expression, in principle
the transformation should not change the program’s behavior
(barring code that inspects the call stack, of course).
This differs from other tools that allow for arbitrary rewrites,
such as <code>gofmt -r</code>, which are very powerful but need to be watched closely.</p>
<p>For many years now, our Google colleagues on the teams supporting
Java, Kotlin, and C++ have been using source-level inliner tools like this.
To date, these tools have eliminated millions of calls to deprecated
functions in Google’s code base.
Users simply add the directives, and wait.
During the night, robots quietly prepare, test, and submit batches of
code changes across a monorepo of billions of lines of code.
If all goes well, by the morning the old code is no longer in use and can be
safely deleted.
Go’s inliner is a relative newcomer, but it has already been used to
prepare more than 18,000 changelists to Google’s monorepo.</p>
<h3 id="example-fixing-api-design-flaws">Example: fixing API design flaws</h3>
<p>With a little creativity, a variety of migrations can be expressed as inlinings.
Consider this hypothetical <code>oldmath</code> package:</p>
<pre><code class="language-go">// Package oldmath is the bad old math package.
package oldmath
// Sub returns x - y.
func Sub(y, x int) int
// Inf returns positive infinity.
func Inf() float64
// Neg returns -x.
func Neg(x int) int
</code></pre>
<p>It has several design flaws: the <code>Sub</code> function declares its parameters in the wrong order; the <code>Inf</code> function implicitly prefers one of the two infinities; and the <code>Neg</code> function is redundant with <code>Sub</code>. Fortunately we have a <code>newmath</code> package that avoids these mistakes, and we’d like to get users to switch to it. The first step is to implement the old API in terms of the new package and to deprecate the old functions. Then we add inliner directives:</p>
<pre><code>// Package oldmath is the bad old math package.
package oldmath
import "newmath"
// Sub returns x - y.
// Deprecated: the parameter order is confusing.
//go:fix inline
func Sub(y, x int) int {
return newmath.Sub(x, y)
}
// Inf returns positive infinity.
// Deprecated: there are two infinite values; be explicit.
//go:fix inline
func Inf() float64 {
return newmath.Inf(+1)
}
// Neg returns -x.
// Deprecated: this function is unnecessary.
//go:fix inline
func Neg(x int) int {
return newmath.Sub(0, x)
}
</code></pre>
<p>Now, when users of <code>oldmath</code> run the <code>go fix</code> command on their code, it will replace all calls to the old functions by their new counterparts. By the way, gopls has included <code>inline</code> in its analyzer suite for some time, so if your editor uses gopls, the moment you add the <code>//go:fix inline</code> directives you should start seeing a diagnostic at each call site, such as “call of <code>oldmath.Sub</code> should be inlined”, along with a suggested fix that inlines that particular call.</p>
<p>For example, this old code:</p>
<pre><code>import "oldmath"
var nine = oldmath.Sub(1, 10) // diagnostic: "call to oldmath.Sub should be inlined"
</code></pre>
<p>will be transformed to:</p>
<pre><code>import "newmath"
var nine = newmath.Sub(10, 1)
</code></pre>
<p>Observe that after the fix, the arguments to <code>Sub</code> are in the logical order. This is progress! If you’re in luck, the inliner will succeed at removing every call to the functions in <code>oldmath</code>, perhaps allowing you to delete it as a dependency.</p>
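<p>The arithmetic behind that fix can be sketched in one file, with local
stand-ins for the hypothetical <code>oldmath</code> and <code>newmath</code> packages:</p>
<pre><code class="language-go">package main

import "fmt"

// Stand-in for newmath.Sub: parameters in the logical order.
func newSub(x, y int) int { return x - y }

// Stand-in for oldmath.Sub: the confusing parameter order,
// implemented in terms of the new API as in the post.
func oldSub(y, x int) int { return newSub(x, y) }

func main() {
	// Before the fix: oldmath.Sub(1, 10).
	before := oldSub(1, 10)
	// After go fix inlines the call: newmath.Sub(10, 1).
	after := newSub(10, 1)
	fmt.Println(before, after, before == after) // 9 9 true
}
</code></pre>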
<p>The <code>inline</code> analyzer works on types and constants too. If our <code>oldmath</code> package had originally declared a data type for rational numbers and a constant for π, we could use the following forwarding declarations to migrate them to the <code>newmath</code> package while preserving the behavior of existing code:</p>
<pre><code>package oldmath
//go:fix inline
type Rational = newmath.Rational
//go:fix inline
const Pi = newmath.Pi
</code></pre>
<p>Each time the <code>inline</code> analyzer encounters a reference to <code>oldmath.Rational</code> or <code>oldmath.Pi</code>, it will update them to refer instead to <code>newmath</code>.</p>
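<p>Such forwarding declarations work because an alias declares the <em>identical</em>
type, not a new one. A sketch with the <code>newmath</code> declarations collapsed into
the same file (the names here are illustrative stand-ins):</p>
<pre><code class="language-go">package main

import "fmt"

// Stand-ins for newmath.Rational and newmath.Pi.
type Rational struct{ Num, Den int }

const Pi = 3.14159

// Forwarding declarations in the style of the oldmath example.
// An alias denotes the same type, so values flow freely between
// the old and new names with no conversion.
type OldRational = Rational

const OldPi = Pi

func main() {
	var r OldRational = Rational{22, 7} // identical types
	fmt.Println(r.Num, OldPi == Pi)     // 22 true
}
</code></pre>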
<h2 id="under-the-hood-of-the-inliner">Under the hood of the inliner</h2>
<p>At a glance, source inlining seems straightforward: just replace the
call with the body of the callee function, introduce variables for the
function parameters, and bind the call arguments to those variables.
But handling all of the complexities and corner cases correctly
while producing acceptable results is no small technical challenge:
the inliner is about 7,000 lines of dense, compiler-like logic.
Let’s look at six aspects of the problem that make it so tricky.</p>
<h3 id="1-parameter-elimination">1. Parameter elimination</h3>
<p>One of the inliner’s most important tasks is to attempt to replace each occurrence of a parameter in the callee by its corresponding argument from the call. In the simplest case, the argument is a trivial literal such as <code>0</code> or <code>""</code>, so the replacement is straightforward and the parameter can be eliminated.</p>
<div class="beforeafter">
<div class="beforeafter-context"><pre>
//go:fix inline
func show(prefix, item string) {
fmt.Println(prefix, item)
}
</pre></div>
<pre>
show("", "hello")
</pre>
<div class="beforeafter-arrow"></div>
<pre>
fmt.Println("", "hello")
</pre>
</div>
<p>For less trivial literals such as <code>404</code> or <code>"go.dev"</code>, the replacement is equally straightforward, so long as the parameter appears in the callee at most once. But if it appears multiple times, it would be bad style to sprinkle copies of these magic values throughout the code as it would obscure the relationship between them; a later change to only one of them might create an inconsistency.</p>
<p>In such cases the inliner must tread carefully and emit a more conservative result. Whenever one or more parameters cannot be completely substituted for any reason, the inliner inserts an explicit “parameter binding” declaration:</p>
<div class="beforeafter">
<div class="beforeafter-context"><pre>
//go:fix inline
func printPair(before, x, y, after string) {
fmt.Println(before, x, after)
fmt.Println(before, y, after)
}
</pre></div>
<pre>
printPair("[", "one", "two", "]")
</pre>
<div class="beforeafter-arrow"></div>
<pre>
// a “parameter binding” declaration
var before, after = "[", "]"
fmt.Println(before, "one", after)
fmt.Println(before, "two", after)
</pre>
</div>
<h3 id="2-side-effects">2. Side effects</h3>
<p>In Go, as in all imperative programming languages, calling a function may have the side effect of updating variables, which in turn may affect the behavior of other functions. Consider the call to <code>add</code> below:</p>
<pre><code class="language-go">func add(x, y int) int { return y + x }
z = add(f(), g())
</code></pre>
<p>A trivial inlining of the call would replace <code>x</code> with <code>f()</code> and <code>y</code> with <code>g()</code>, with this result:</p>
<pre><code>z = g() + f()
</code></pre>
<p>But this result is incorrect because evaluation of <code>g()</code> now occurs before <code>f()</code>; if the two functions have side effects, those effects will now be observed in a different order and may affect the result of the expression. Of course, it is bad form to write code that relies on effect ordering among call arguments, but that doesn’t mean people don’t do it, and our tools have to get it right.</p>
<p>So, the inliner must attempt to prove that <code>f()</code> and <code>g()</code> do not have side effects on each other. On success, it can safely proceed with the result above. Otherwise, it must fall back to an explicit parameter binding:</p>
<pre><code>var x = f()
z = g() + x
</code></pre>
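<p>The hazard is easy to demonstrate. In this small, self-contained sketch (the <code>f</code>, <code>g</code>, and <code>order</code> names are ours, not the inliner’s), Go evaluates call arguments left to right, so naively substituting the arguments into <code>y + x</code> reverses the observed side effects:</p>

```go
package main

import "fmt"

func add(x, y int) int { return y + x }

// order records which of f and g ran first.
func order(naive bool) []string {
	var log []string
	f := func() int { log = append(log, "f"); return 1 }
	g := func() int { log = append(log, "g"); return 2 }
	if naive {
		// Naive inlining of add(f(), g()): g's effect now happens first.
		_ = g() + f()
	} else {
		// The original call: arguments evaluate left to right, f then g.
		_ = add(f(), g())
	}
	return log
}

func main() {
	fmt.Println(order(false)) // [f g]
	fmt.Println(order(true))  // [g f]
}
```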
<p>When considering side effects, it’s not only the argument expressions that matter. Also significant is the order in which parameters are evaluated relative to other code in the callee. Consider this call to <code>add2</code>:</p>
<pre><code class="language-go">//go:fix inline
func add2(x, y int) int {
return x + other() + y
}
add2(f(), g())
</code></pre>
<p>This time, parameters <code>x</code> and <code>y</code> are used in the same order they are declared, so the substitution <code>f() + other() + g()</code> won’t change the order of effects of <code>f()</code> and <code>g()</code>—but it will change the order of any effects of <code>other()</code> and <code>g()</code>. Furthermore, if the function body uses a parameter within a loop, substitution might change the cardinality of effects.</p>
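<p>The cardinality problem can also be shown concretely. In this illustrative sketch (the names are ours), the parameter <code>x</code> is used inside a loop, so naively substituting the call <code>f()</code> for <code>x</code> multiplies its side effects:</p>

```go
package main

import "fmt"

// sumN uses its parameter x inside a loop.
func sumN(n, x int) int {
	total := 0
	for i := 0; i < n; i++ {
		total += x
	}
	return total
}

// callCounts returns how many times f's side effect occurs in the
// original call versus after naively substituting f() for x.
func callCounts() (original, naive int) {
	calls := 0
	f := func() int { calls++; return 10 }

	_ = sumN(3, f()) // f() is evaluated once, before the call
	original = calls

	calls = 0
	total := 0
	for i := 0; i < 3; i++ {
		total += f() // substitution moves f() into the loop
	}
	_ = total
	naive = calls
	return
}

func main() {
	o, n := callCounts()
	fmt.Println(o, n) // 1 3
}
```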
<p>The inliner uses a novel <a href="https://cs.opensource.google/go/x/tools/+/refs/tags/v0.42.0:internal/refactor/inline/inline.go;l=1978;drc=e3a69ffcdbb984f50100e76ebca6ff53cf88de9c" rel="noreferrer" target="_blank">hazard analysis</a> to model the order of effects in each callee function. Nonetheless, its ability to construct the necessary safety proofs is quite limited. For example, if the calls <code>f()</code> and <code>g()</code> are simple accessors, it would be perfectly safe to call them in either order. Indeed, an optimizing compiler might use its knowledge of the internals of <code>f</code> and <code>g</code> to safely reorder the two calls. But unlike a compiler, which generates object code that reflects the source at a specific moment, the purpose of the inliner is to make permanent changes to the source, so it can’t take advantage of ephemeral details. As an extreme example, consider this <code>start</code> function:</p>
<pre><code>func start() { /* TODO: implement */ }
</code></pre>
<p>An optimizing compiler is free to delete each call to <code>start()</code> because it has no effects today, but the inliner is not, because it may become important tomorrow.</p>
<!-- There's a bit of a contradiction here since the hazard analysis uses implementation details du jour. -->
<p>In short, the inliner may produce results that—to the informed eye of a project maintainer—are clearly too conservative. In such cases, the fixed code would benefit stylistically from a little manual cleanup.</p>
<h3 id="3-fallible-constant-expressions">3. “Fallible” constant expressions</h3>
<p>You might imagine (as I once did) that it would always be safe to replace a parameter variable by a constant argument of the same type. Surprisingly, this turns out not to be the case, because some checks previously done at run time would now happen—and fail—at compile time. Consider this call to the <code>index</code> function:</p>
<pre><code>//go:fix inline
func index(s string, i int) byte {
return s[i]
}
index("", 0)
</code></pre>
<p>A naive inliner might replace <code>s</code> with <code>""</code> and <code>i</code> with <code>0</code>, resulting in <code>""[0]</code>, but this is not actually a legal Go expression because this particular index is out of bounds for this particular string. Because the expression <code>""[0]</code> is composed of constants, it is evaluated at compile time, and a program that contains it will not even build. By contrast, the original program would fail only if execution reaches this call to <code>index</code>, which presumably in a working program it does not.</p>
<p>Consequently, the inliner must keep track of all expressions and their operands that might become constant during parameter substitution, triggering additional compile-time checks. It builds a <a href="https://cs.opensource.google/go/x/tools/+/master:internal/refactor/inline/falcon.go;l=43;drc=1aca71e85510ecc45dddbc335b30b64298c2a31e" rel="noreferrer" target="_blank">constraint system</a> and attempts to solve it. Each unsatisfied constraint is resolved by adding an explicit binding for the constrained parameters.</p>
<!--
The fundamental reason for falcon is that we can’t type-check the result
since in a “separate analysis” system we don’t have type information
for all dependencies. See hidden comment within section
[gofix#synergistic-fixes](gofix#synergistic-fixes).
-->
<h3 id="4-shadowing">4. Shadowing</h3>
<p>Typical argument expressions contain one or more identifiers that refer to symbols (variables, functions, and so on) in the caller’s file. The inliner must make sure that each name in the argument expression would refer to the same symbol after parameter substitution; in other words, none of the caller’s names is <em>shadowed</em> in the callee. If this fails, the inliner must again insert parameter bindings, as in this example:</p>
<div class="beforeafter">
<div class="beforeafter-context"><pre>
//go:fix inline
func f(val string) {
x := 123
fmt.Println(val, x)
}
</pre></div>
<pre>
x := "hello"
f(x)
</pre>
<div class="beforeafter-arrow"></div>
<pre>
x := "hello"
{
// another “parameter binding” declaration
// to read the caller's x before shadowing it
var val string = x
x := 123
fmt.Println(val, x)
}
</pre>
</div>
<p>Conversely, the inliner must also check that each name in the <em>callee</em> function body would refer to the same thing when it is spliced into the call site. In other words, none of the callee’s names is shadowed or missing in the caller. For missing names, the inliner may need to insert additional imports.</p>
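<p>As a concrete illustration of the callee-side check (our own example, not from the inliner’s code), consider a caller that shadows the <code>fmt</code> package with a local variable. Calling the function is fine, but splicing its body into that scope would change what <code>fmt</code> refers to:</p>

```go
package main

import "fmt"

// callee's body refers to the package fmt.
func callee() string { return fmt.Sprint("hi") }

func caller() string {
	fmt := "a local string that shadows the fmt package"
	_ = fmt
	// Calling callee is fine: inside callee, fmt still names the package.
	// But naively splicing callee's body here, as
	//     return fmt.Sprint("hi")
	// would not compile, because fmt now names the local string.
	return callee()
}

func main() {
	fmt.Println(caller())
}
```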
<h3 id="5-unused-variables">5. Unused variables</h3>
<p>When an argument expression has no effects and its corresponding parameter is never used, the expression may be eliminated. However, if the expression contains the last reference to a local variable at the caller, this may cause a compile error because the variable is now unused.</p>
<div class="beforeafter">
<div class="beforeafter-context"><pre>
//go:fix inline
func f(_ int) { print("hello") }
</pre></div>
<pre>
x := 42
f(x)
</pre>
<div class="beforeafter-arrow"></div>
<pre>
x := 42 // error: unused variable: x
print("hello")
</pre>
</div>
<p>So the inliner must account for references to local variables and avoid removing the last one. (Of course it is still possible that two different inliner fixes each remove the <em>second</em>-to-last reference to a variable, so the two fixes are valid in isolation but not together; see the discussion of <a href="gofix#merging-fixes-and-conflicts">semantic conflicts</a> in the previous post. Unfortunately manual cleanup is inevitably required in this case.)</p>
<h3 id="6-defer">6. Defer</h3>
<p>In some cases, it is simply impossible to inline away the call.
Consider a call to a function that uses a <code>defer</code> statement:
if we were to eliminate the call, the deferred function would execute
when the <em>caller</em> function returns, which is too late.
All we can safely do when the callee uses <code>defer</code> is to
put the body of the callee in a function literal and immediately call it.
This function literal, <code>func() { … }()</code>, delimits the lifetime of the
<code>defer</code> statement, as in this example:</p>
<div class="beforeafter">
<div class="beforeafter-context"><pre>
//go:fix inline
func callee() {
defer f()
…
}
</pre></div>
<pre>
callee()
</pre>
<div class="beforeafter-arrow"></div>
<pre>
func() {
defer f()
…
}()
</pre>
</div>
<p>If you invoke the inliner in gopls, you’ll see that it makes the change shown above and introduces the function literal. This result may be appropriate in an interactive setting, since you are likely to immediately tweak the code (or undo the fix) as you prefer, but it is rarely desirable in a batch tool, so as a matter of policy the analyzer in <code>go fix</code> refuses to inline such “literalized” calls.</p>
<h3 id="an-optimizing-compiler-for-tidiness">An optimizing compiler for “tidiness”</h3>
<p>We’ve now seen half a dozen examples of how the inliner handles tricky semantic edge cases correctly.
(Many thanks to Rob Findley, Jonathan Amsterdam, Olena Synenka, and Lasse Folger for insights, discussions, reviews, features, and fixes.)
By putting all of the smarts into the inliner, users can simply apply an “Inline call” refactoring in their IDE or add a <code>//go:fix inline</code> directive to their own functions and be confident that the resulting code transformations can be applied with only the most cursory review.</p>
<p>Although we have made good progress toward that goal, we have not yet fully attained it, and it is likely that we never will. Consider a compiler. A sound compiler produces correct output for any input and never miscompiles your code; this is the fundamental expectation that every user should have of their compiler. An <em>optimizing</em> compiler produces code carefully chosen for speed without compromising on safety. Similarly, an inliner is a bit like an optimizing compiler whose goal is not speed but <em>tidiness</em>: inlining a call must never change the behavior of your program, and ideally it produces code that is maximally neat and tidy. Unfortunately, an optimizing compiler is <a href="https://en.wikipedia.org/wiki/Rice%27s_theorem" rel="noreferrer" target="_blank">provably</a> never done: showing that two different programs are equivalent is an undecidable problem, and there will always be improvements that an expert knows are safe but the compiler cannot prove. So too with the inliner: there will always be cases where the inliner’s output is too fussy or otherwise stylistically inferior to that of a human expert, and there will always be more “tidiness optimizations” to add.</p>
<h2 id="try-it-out">Try it out!</h2>
<p>We hope this tour of the inliner gives you a sense of some of the challenges involved, and of our priorities and directions in providing sound, self-service code transformation tools. Please try out the inliner, either interactively in your IDE, or through <code>//go:fix inline</code> directives and the <code>go fix</code> command, and share with us your experiences and any ideas you have for further improvements or new tools.</p>
</div>
</div>
<div class="Article prevnext">
<p>
<b>Next article: </b><a href="/blog/type-construction-and-cycle-detection">Type Construction and Cycle Detection</a><br>
<b>Previous article: </b><a href="/blog/allocation-optimizations">Allocating on the Stack</a><br>
<b><a href="/blog/all">Blog Index</a></b>
</div>
</div>
</div>
<script src="/js/play.js"></script>
<div id="blog"><div id="content">
<div id="content">
<div class="Article" data-slug="/blog/allocation-optimizations">
<h1 class="small"><a href="/blog/">The Go Blog</a></h1>
<h1>Allocating on the Stack</h1>
<p class="author">
Keith Randall<br>
27 February 2026
</p>
<div class='markdown'>
<p>We’re always looking for ways to make Go programs faster. In the last
two releases, we have concentrated on mitigating a particular source of
slowness: heap allocations. Each time a Go program allocates memory
from the heap, a fairly large chunk of code must run
to satisfy that allocation. In addition, heap allocations place
extra load on the garbage collector. Even with recent
enhancements like <a href="/blog/greenteagc">Green Tea</a>, the garbage collector
still incurs substantial overhead.</p>
<p>So we’ve been working on ways to do more allocations on the stack
instead of the heap. Stack allocations are considerably cheaper to
perform (sometimes completely free). Moreover, they place no load
on the garbage collector, as stack allocations are reclaimed
automatically together with the stack frame itself. Stack allocations
also enable prompt reuse of memory, which is very cache friendly.</p>
<h2 id="stack-allocation-of-constant-sized-slices">Stack allocation of constant-sized slices</h2>
<p>Consider the task of building a slice of tasks to process:</p>
<pre><code>func process(c chan task) {
var tasks []task
for t := range c {
tasks = append(tasks, t)
}
processAll(tasks)
}
</code></pre>
<p>Let’s walk through what happens at runtime when pulling tasks from the
channel <code>c</code> and adding them to the slice <code>tasks</code>.</p>
<p>On the first loop iteration, there is no backing store for <code>tasks</code>, so
<code>append</code> has to allocate one. Because it doesn’t know how big the
slice will eventually be, it can’t be too aggressive. Currently, it
allocates a backing store of size 1.</p>
<p>On the second loop iteration, the backing store now exists, but it is
full. <code>append</code> again has to allocate a new backing store, this time of
size 2. The old backing store of size 1 is now garbage.</p>
<p>On the third loop iteration, the backing store of size 2 is
full. <code>append</code> <em>again</em> has to allocate a new backing store, this time
of size 4. The old backing store of size 2 is now garbage.</p>
<p>On the fourth loop iteration, the backing store of size 4 has only 3
items in it. <code>append</code> can just place the item in the existing backing
store and bump up the slice length. Yay! No call to the allocator for
this iteration.</p>
<p>On the fifth loop iteration, the backing store of size 4 is full, and
<code>append</code> again has to allocate a new backing store, this time of size
8.</p>
<p>And so on. We generally double the size of the allocation each time it
fills up, so we can eventually append most new tasks to the slice
without allocation. But there is a fair amount of overhead in the
“startup” phase when the slice is small. During this startup phase we
spend a lot of time in the allocator, and produce a bunch of garbage,
which seems pretty wasteful. And it may be that in your program, the
slice never really gets large. This startup phase may be all you ever
encounter.</p>
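<p>You can watch this startup phase directly by recording each distinct capacity that a growing slice passes through. This is an illustrative sketch (the <code>capGrowth</code> helper is ours); the exact sizes depend on the Go version and the element type:</p>

```go
package main

import "fmt"

// capGrowth appends n ints one at a time to a nil slice and records
// each distinct capacity that the backing store passes through.
func capGrowth(n int) []int {
	var s []int
	var caps []int
	for i := 0; i < n; i++ {
		s = append(s, i)
		if len(caps) == 0 || cap(s) != caps[len(caps)-1] {
			caps = append(caps, cap(s))
		}
	}
	return caps
}

func main() {
	// Typically prints a doubling sequence such as [1 2 4 8 16],
	// though the exact values are version-dependent.
	fmt.Println(capGrowth(9))
}
```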
<p>If this code were a really hot part of your program, you might be
tempted to start the slice out at a larger size, to avoid all of these
allocations.</p>
<pre><code>func process2(c chan task) {
tasks := make([]task, 0, 10) // probably at most 10 tasks
for t := range c {
tasks = append(tasks, t)
}
processAll(tasks)
}
</code></pre>
<p>This is a reasonable optimization to do. It is never incorrect; your
program still runs correctly. If the guess is too small, you get
allocations from <code>append</code> as before. If the guess is too large, you
waste some memory.</p>
<p>If your guess for the number of tasks was a good one, then there’s
only one allocation site in this program. The <code>make</code> call allocates a
slice backing store of the correct size, and <code>append</code> never has to do
any reallocation.</p>
<p>The surprising thing is that if you benchmark this code with 10
elements in the channel, you’ll see that you didn’t reduce the number
of allocations to 1; you reduced it to 0!</p>
<p>The reason is that the compiler decided to allocate the backing store
on the stack. Because it knows what size it needs to be (10 times the
size of a task) it can allocate storage for it in the stack frame of
<code>process2</code> instead of on the heap<a href="#footnotes"><sup>1</sup></a>. Note
that this depends on the fact that the backing store does not <a href="/doc/gc-guide#Escape_analysis">escape
to the heap</a> inside of <code>processAll</code>.</p>
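<p>You can check this claim yourself with <code>testing.AllocsPerRun</code>. The sketch below (our own measurement harness, not from the post) fills a constant-capacity, non-escaping slice and counts heap allocations:</p>

```go
package main

import (
	"fmt"
	"testing"
)

// consume reads the slice without letting it escape to the heap.
func consume(s []int) int {
	total := 0
	for _, v := range s {
		total += v
	}
	return total
}

// allocsConstantCap reports the average number of heap allocations
// needed to build and consume a slice made with a constant capacity.
func allocsConstantCap() float64 {
	return testing.AllocsPerRun(100, func() {
		tasks := make([]int, 0, 10) // constant size: stack-allocatable
		for i := 0; i < 10; i++ {
			tasks = append(tasks, i)
		}
		consume(tasks)
	})
}

func main() {
	fmt.Println(allocsConstantCap()) // 0
}
```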
<h2 id="stack-allocation-of-variable-sized-slices">Stack allocation of variable-sized slices</h2>
<p>But of course, hard coding a size guess is a bit rigid.
Maybe we can pass in an estimated length?</p>
<pre><code>func process3(c chan task, lengthGuess int) {
tasks := make([]task, 0, lengthGuess)
for t := range c {
tasks = append(tasks, t)
}
processAll(tasks)
}
</code></pre>
<p>This lets the caller pick a good size for the <code>tasks</code> slice, which may
vary depending on where this code is being called from.</p>
<p>Unfortunately, in Go 1.24 the non-constant size of the backing store
means the compiler can no longer allocate the backing store on the
stack. It will end up on the heap, converting our 0-allocation code
to 1-allocation code. Still better than having <code>append</code> do all the
intermediate allocations, but unfortunate.</p>
<p>But never fear, Go 1.25 is here!</p>
<p>Imagine you decide to do the following, to get the stack allocation
only in cases where the guess is small:</p>
<pre><code>func process4(c chan task, lengthGuess int) {
var tasks []task
if lengthGuess <= 10 {
tasks = make([]task, 0, 10)
} else {
tasks = make([]task, 0, lengthGuess)
}
for t := range c {
tasks = append(tasks, t)
}
processAll(tasks)
}
</code></pre>
<p>Kind of ugly, but it would work. When the guess is small, you use a
constant size <code>make</code> and thus a stack-allocated backing store, and
when the guess is larger you use a variable size <code>make</code> and allocate
the backing store from the heap.</p>
<p>But in Go 1.25, you don’t need to head down this ugly road. The Go
1.25 compiler does this transformation for you! For certain slice
allocation locations, the compiler automatically allocates a small
(currently 32-byte) slice backing store, and uses that backing store
for the result of the <code>make</code> if the size requested is small
enough. Otherwise, it uses a heap allocation as normal.</p>
<p>In Go 1.25, <code>process3</code> performs zero heap allocations if
<code>lengthGuess</code> is small enough that a slice of that length fits into 32
bytes (and provided, of course, that <code>lengthGuess</code> is a correct guess for how
many items are in <code>c</code>).</p>
<p>We’re always improving the performance of Go, so upgrade to the latest
Go release and <a href="https://youtu.be/FUm0pfgWehI?si=QRTt_JYwr-cRHDNJ&t=960" rel="noreferrer" target="_blank">be
surprised</a> by
how much faster and more memory efficient your program becomes!</p>
<h2 id="stack-allocation-of-append-allocated-slices">Stack allocation of append-allocated slices</h2>
<p>Ok, but you still don’t want to have to change your API to add this
weird length guess. Anything else you could do?</p>
<p>Upgrade to Go 1.26!</p>
<pre><code>func process(c chan task) {
var tasks []task
for t := range c {
tasks = append(tasks, t)
}
processAll(tasks)
}
</code></pre>
<p>In Go 1.26, we allocate the same kind of small, speculative backing
store on the stack, but now we can use it directly at the <code>append</code>
site.</p>
<p>On the first loop iteration, there is no backing store for <code>tasks</code>, so
<code>append</code> uses a small, stack-allocated backing store as the first
allocation. If, for instance, we can fit 4 <code>task</code>s in that backing store,
the first <code>append</code> allocates a backing store of length 4 from the stack.</p>
<p>The next 3 loop iterations append directly to the stack backing store,
requiring no allocation.</p>
<p>On the 5th iteration, the stack backing store is finally full and we
have to go to the heap for more backing store. But we have avoided
almost all of the startup overhead described earlier in this article:
no heap allocations of size 1, 2, and 4, and none of the garbage that
they eventually become. If your slices are small, maybe you will never
have a heap allocation.</p>
<h2 id="stack-allocation-of-append-allocated-escaping-slices">Stack allocation of append-allocated escaping slices</h2>
<p>Ok, this is all good when the <code>tasks</code> slice doesn’t escape. But what if
I’m returning the slice? Then it can’t be allocated on the stack, right?</p>
<p>Right! The backing store for the slice returned by <code>extract</code> below
can’t be allocated on the stack, because the stack frame for <code>extract</code>
disappears when <code>extract</code> returns.</p>
<pre><code>func extract(c chan task) []task {
var tasks []task
for t := range c {
tasks = append(tasks, t)
}
return tasks
}
</code></pre>
<p>But you might think: only the <em>returned</em> slice can’t be allocated
on the stack. What about all those intermediate backing stores that just
become garbage? Maybe we can allocate those on the stack?</p>
<pre><code>func extract2(c chan task) []task {
var tasks []task
for t := range c {
tasks = append(tasks, t)
}
tasks2 := make([]task, len(tasks))
copy(tasks2, tasks)
return tasks2
}
</code></pre>
<p>Then the <code>tasks</code> slice never escapes <code>extract2</code>. It can benefit from
all of the optimizations described above. Then at the very end of
<code>extract2</code>, when we know the final size of the slice, we do one heap
allocation of the required size, copy our <code>task</code>s into it, and return
the copy.</p>
<p>But do you really want to write all that additional code? It seems
error prone. Maybe the compiler can do this transformation for us?</p>
<p>In Go 1.26, it can!</p>
<p>For escaping slices, the compiler will transform the original <code>extract</code>
code to something like this:</p>
<pre><code>func extract3(c chan task) []task {
var tasks []task
for t := range c {
tasks = append(tasks, t)
}
tasks = runtime.move2heap(tasks)
return tasks
}
</code></pre>
<p><code>runtime.move2heap</code> is a special compiler+runtime function that is the
identity function for slices that are already allocated in the heap.
For slices that are on the stack, it allocates a new slice on the
heap, copies the stack-allocated slice to the heap copy, and returns
the heap copy.</p>
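<p>In ordinary Go terms, <code>runtime.move2heap</code> behaves roughly like the sketch below. (This is only an illustration: the real function is internal to the compiler and runtime, and it detects stack-backed slices from the pointer itself, whereas here that fact is passed in explicitly.)</p>

```go
package main

import "fmt"

// moveToHeap is an illustrative stand-in for runtime.move2heap. For a
// heap-backed slice it is the identity; for a stack-backed slice it
// makes one heap allocation of exactly the right size and copies.
func moveToHeap[T any](s []T, onStack bool) []T {
	if !onStack {
		return s // already on the heap: no work to do
	}
	heap := make([]T, len(s))
	copy(heap, s)
	return heap
}

func main() {
	s := []int{1, 2, 3}
	fmt.Println(moveToHeap(s, false)) // shares s's backing store
	fmt.Println(moveToHeap(s, true))  // a fresh heap copy
}
```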
<p>This ensures that for our original <code>extract</code> code, if the number of
items fits in our small stack-allocated buffer, we perform exactly 1
allocation of exactly the right size. If the number of items exceeds
the capacity of our small stack-allocated buffer, we do our normal
doubling-allocation once the stack-allocated buffer overflows.</p>
<p>The optimization that Go 1.26 does is actually better than the
hand-optimized code, because it does not require the extra
allocation+copy that the hand-optimized code always does at the end.
It requires the allocation+copy only in the case that we’ve exclusively
operated on a stack-backed slice up to the return point.</p>
<p>We do pay the cost of a copy, but that cost is almost completely
offset by the copies in the startup phase that we no longer have to
do. (In fact, the new scheme at worst copies one more element
than the old scheme.)</p>
<h2 id="wrapping-up">Wrapping up</h2>
<p>Hand optimization can still be beneficial, especially if you have a
good estimate of the slice size ahead of time. But hopefully the
compiler will now catch a lot of the simple cases for you and allow
you to focus on the remaining ones that really matter.</p>
<p>There are a lot of details that the compiler needs to ensure to get
all these optimizations right. If you think that one of these
optimizations is causing correctness or (negative) performance issues
for you, you can turn them off with
<code>-gcflags=all=-d=variablemakehash=n</code>. If turning these optimizations
off helps, please <a href="/issue/new">file an issue</a> so we can investigate.</p>
<h2 id="footnotes">Footnotes</h2>
<p><sup>1</sup> Go stacks do not have any <code>alloca</code>-style mechanism for
dynamically-sized stack frames. All Go stack frames are constant
sized.</p>
</div>
</div>
<div class="Article prevnext">
<p>
<b>Next article: </b><a href="/blog/inliner">//go:fix inline and the source-level inliner</a><br>
<b>Previous article: </b><a href="/blog/gofix">Using go fix to modernize Go code</a><br>
<b><a href="/blog/all">Blog Index</a></b>
</div>
</div>
</div>
<script src="/js/play.js"></script>
<div id="blog"><div id="content">
<div id="content">
<div class="Article" data-slug="/blog/gofix">
<h1 class="small"><a href="/blog/">The Go Blog</a></h1>
<h1>Using go fix to modernize Go code</h1>
<p class="author">
Alan Donovan<br>
17 February 2026
</p>
<div class='markdown'>
<style>
.beforeafter {
display: grid;
font-size: 180%;
grid-template-columns: 1fr 2em 1fr;
@media screen and (max-width: 57.7rem) {
grid-template-columns: 1fr;
}
}
.beforeafter-arrow {
place-self: center;
}
.beforeafter-arrow::before {
content: "⟶";
@media screen and (max-width: 57.7rem) {
content: "⇓";
}
}
</style>
<p>The 1.26 release of Go this month includes a completely rewritten <code>go fix</code> subcommand. The <code>go fix</code> command uses a suite of algorithms to identify opportunities to improve your code, often by taking advantage of more modern features of the language and library. In this post, we’ll first show you how to use <code>go fix</code> to modernize your Go codebase. Then in the <a href="#go/analysis">second section</a> we’ll dive into the infrastructure behind it and how it is evolving. Finally, we’ll present the theme of <a href="#self-service">“self-service”</a> analysis tools to help module maintainers and organizations encode their own guidelines and best practices.</p>
<!-- see https://go.dev/blog/survey2025#challenges -->
<h2 id="running-go-fix">Running go fix</h2>
<p>The <code>go fix</code> command, like <code>go build</code> and <code>go vet</code>, accepts a set of patterns that denote packages. This command fixes all packages beneath the current directory:</p>
<pre><code>$ go fix ./...
</code></pre>
<p>On success, it silently updates your source files. It discards any fix that touches <a href="https://pkg.go.dev/cmd/go#hdr-Generate_Go_files_by_processing_source" rel="noreferrer" target="_blank">generated files</a> since the appropriate fix in that case is to the logic of the generator itself. We recommend running <code>go fix</code> over your project each time you update your build to a newer Go toolchain release. Since the command may fix hundreds of files, start from a clean git state so that the change consists only of edits from go fix; your code reviewers will thank you.</p>
<p>To preview the changes the above command would have made, use the <code>-diff</code> flag:</p>
<pre><code>$ go fix -diff ./...
--- dir/file.go (old)
+++ dir/file.go (new)
- eq := strings.IndexByte(pair, '=')
- result[pair[:eq]] = pair[1+eq:]
+ before, after, _ := strings.Cut(pair, "=")
+ result[before] = after
…
</code></pre>
<p>You can list the available fixers by running this command:</p>
<pre><code>$ go tool fix help
…
Registered analyzers:
any replace interface{} with any
buildtag check //go:build and // +build directives
fmtappendf replace []byte(fmt.Sprintf) with fmt.Appendf
forvar remove redundant re-declaration of loop variables
hostport check format of addresses passed to net.Dial
inline apply fixes based on 'go:fix inline' comment directives
mapsloop replace explicit loops over maps with calls to maps package
minmax replace if/else statements with calls to min or max
…
</code></pre>
<p>Adding the name of a particular analyzer shows its complete documentation:</p>
<pre><code>$ go tool fix help forvar
forvar: remove redundant re-declaration of loop variables
The forvar analyzer removes unnecessary shadowing of loop variables.
Before Go 1.22, it was common to write `for _, x := range s { x := x ... }`
to create a fresh variable for each iteration. Go 1.22 changed the semantics
of `for` loops, making this pattern redundant. This analyzer removes the
unnecessary `x := x` statement.
This fix only applies to `range` loops.
</code></pre>
<p>By default, the <code>go fix</code> command runs all analyzers. When fixing a large project it may reduce the burden of code review if you apply fixes from the most prolific analyzers as separate code changes. To enable only specific analyzers, use the flags matching their names. For example, to run just the <code>any</code> fixer, specify the <code>-any</code> flag. Conversely, to run all the analyzers <em>except</em> selected ones, negate the flags, for instance <code>-any=false</code>.</p>
<p>As with <code>go build</code> and <code>go vet</code>, each run of the <code>go fix</code> command analyzes only a specific build configuration. If your project makes heavy use of files tagged for different CPUs or platforms, you may wish to run the command more than once with different values of <code>GOARCH</code> and <code>GOOS</code> for better coverage:</p>
<pre><code>$ GOOS=linux GOARCH=amd64 go fix ./...
$ GOOS=darwin GOARCH=arm64 go fix ./...
$ GOOS=windows GOARCH=amd64 go fix ./...
</code></pre>
<p>Running the command more than once also provides opportunities for synergistic fixes, as we’ll see below.</p>
<h3 id="modernizers">Modernizers</h3>
<p>The introduction of <a href="intro-generics">generics</a> in Go 1.18 marked the end of an era of very few changes to the language spec and the start of a period of more rapid—though still careful—change, especially in the libraries. Many of the trivial loops that Go programmers routinely write, such as to gather the keys of a map into a slice, can now be conveniently expressed as a call to a generic function such as <a href="https://pkg.go.dev/maps#Keys" rel="noreferrer" target="_blank"><code>maps.Keys</code></a>. Consequently these new features create many opportunities to simplify existing code.</p>
<p>In December 2024, during the frenzied adoption of LLM coding assistants, we became aware that such tools tended—unsurprisingly—to produce Go code in a style similar to the mass of Go code used during training, even when there were newer, better ways to express the same idea. Less obviously, the same tools often refused to use the newer ways even when directed to do so in general terms such as “always use the latest idioms of Go 1.25.” In some cases, even when explicitly told to use a feature, the model would deny that it existed. (See my 2025 GopherCon <a href="https://www.youtube.com/watch?v=_VePjjjV9JU&t=3m50s" rel="noreferrer" target="_blank">talk</a> for more exasperating details.) To ensure that future models are trained on the latest idioms, we need to ensure that these idioms are reflected in the training data, which is to say the global corpus of open-source Go code.</p>
<p>Over the past year, we have built <a href="https://pkg.go.dev/golang.org/x/tools/go/analysis/passes/modernize" rel="noreferrer" target="_blank">dozens of analyzers</a> to identify opportunities for modernization. Here are three examples of the fixes they suggest:</p>
<p><strong>minmax</strong> replaces an <code>if</code> statement by a use of Go 1.21’s <code>min</code> or <code>max</code> functions:</p>
<div class="beforeafter">
<pre>
x := f()
if x < 0 {
x = 0
}
if x > 100 {
x = 100
}
</pre>
<div class="beforeafter-arrow"></div>
<pre>
x := min(max(f(), 0), 100)
</pre>
</div>
<p><strong>rangeint</strong> replaces a 3-clause <code>for</code> loop by a Go 1.22 <code>range</code>-over-int loop:</p>
<div class="beforeafter">
<pre>
for i := 0; i < n; i++ {
f()
}
</pre>
<div class="beforeafter-arrow"></div>
<pre>
for range n {
f()
}
</pre>
</div>
<p><strong>stringscut</strong> (whose <code>-diff</code> output we saw earlier) replaces uses of <code>strings.Index</code> and slicing by Go 1.18’s <code>strings.Cut</code>:</p>
<div class="beforeafter">
<pre>
i := strings.Index(s, ":")
if i >= 0 {
return s[:i]
}
</pre>
<div class="beforeafter-arrow"></div>
<pre>
before, _, ok := strings.Cut(s, ":")
if ok {
	return before
}
</pre>
</div>
<p>These modernizers are included in <a href="/gopls">gopls</a>, to provide instant feedback as you type, and in <code>go fix</code>, so that you can modernize several entire packages at once in a single command. In addition to making code clearer, modernizers may help Go programmers learn about newer features. As part of the process of approving each new change to the language and standard library, the <a href="https://go.googlesource.com/proposal/+/master/README.md" rel="noreferrer" target="_blank">proposal</a> review group now considers whether it should be accompanied by a modernizer. We expect to add more modernizers with each release.</p>
<h2 id="example-a-modernizer-for-go-126s-newexpr">Example: a modernizer for Go 1.26’s new(expr)</h2>
<p>Go 1.26 includes a small but widely useful change to the language specification. The built-in <code>new</code> function creates a new variable and returns its address. Historically, its sole argument was required to be a type, such as <code>new(string)</code>, and the new variable was initialized to its “zero” value, such as <code>""</code>. In Go 1.26, the <code>new</code> function may be called with any value, causing it to create a variable initialized to that value, avoiding the need for an additional statement. For example:</p>
<div class="beforeafter">
<pre>
ptr := new(string)
*ptr = "go1.25"
</pre>
<div class="beforeafter-arrow"></div>
<pre>
ptr := new("go1.26")
</pre>
</div>
<p>This feature filled a gap that had been discussed for over a decade and resolved one of the most popular <a href="/issue/45624">proposals</a> for a change to the language. It is especially convenient in code that uses a pointer type <code>*T</code> to indicate an optional value of type <code>T</code>, as is common when working with serialization packages such as <a href="https://pkg.go.dev/encoding/json#Marshal" rel="noreferrer" target="_blank">json.Marshal</a> or <a href="https://protobuf.dev/getting-started/gotutorial/" rel="noreferrer" target="_blank">protocol buffers</a>. This is such a common pattern that people often capture it in a helper, such as the <code>newInt</code> function below, saving the caller from the need to break out of an expression context to introduce additional statements:</p>
<pre><code>type RequestJSON struct {
	URL      string
	Attempts *int // (optional)
}

data, err := json.Marshal(&RequestJSON{
	URL:      url,
	Attempts: newInt(10),
})

func newInt(x int) *int { return &x }
</code></pre>
<p>Helpers such as <code>newInt</code> are so frequently needed with protocol buffers that the <code>proto</code> API itself provides them as <a href="https://pkg.go.dev/google.golang.org/protobuf/proto#Int64" rel="noreferrer" target="_blank"><code>proto.Int64</code></a>, <a href="https://pkg.go.dev/google.golang.org/protobuf/proto#String" rel="noreferrer" target="_blank"><code>proto.String</code></a>, and so on. But Go 1.26 makes all these helpers unnecessary:</p>
<pre><code>data, err := json.Marshal(&RequestJSON{
	URL:      url,
	Attempts: new(10),
})
</code></pre>
<p>To help you take advantage of this feature, the <code>go fix</code> command now includes a fixer, <a href="https://tip.golang.org/src/cmd/vendor/golang.org/x/tools/go/analysis/passes/modernize/newexpr.go" rel="noreferrer" target="_blank">newexpr</a>, that recognizes “new-like” functions such as <code>newInt</code> and suggests fixes to replace the function body with <code>return new(x)</code> and to replace every call, whether in the same package or an importing package, with a direct use of <code>new(expr)</code>.</p>
<p>To avoid introducing premature uses of new features, modernizers offer fixes only in files that require at least the minimum appropriate version of Go (1.26 in this instance), either through a <a href="/ref/mod#versions"><code>go 1.26</code> directive</a> in the enclosing go.mod file or a <code>//go:build go1.26</code> <a href="https://pkg.go.dev/cmd/go#hdr-Build_constraints" rel="noreferrer" target="_blank">build constraint</a> in the file itself.</p>
<p>Run this command to update all calls of this form in your source tree:</p>
<pre><code>$ go fix -newexpr ./...
</code></pre>
<p>At this point, with luck, all of your <code>newInt</code>-like helper functions will have become unused and may be safely deleted (assuming they aren’t part of a stable published API). A few calls may remain where it would be unsafe to suggest a fix, such as when the name <code>new</code> is locally shadowed by another declaration. You can also use the <a href="deadcode">deadcode</a> command to help identify unused functions.</p>
<h2 id="synergistic-fixes">Synergistic fixes</h2>
<p>Applying one modernization may create opportunities to apply another. For example, this snippet of code, which clamps <code>x</code> to the range 0–100, causes the minmax modernizer to suggest a fix to use <code>max</code>. Once that fix is applied it suggests a second fix, this time to use <code>min</code>.</p>
<div class="beforeafter">
<pre>
x := f()
if x < 0 {
	x = 0
}
if x > 100 {
	x = 100
}
</pre>
<div class="beforeafter-arrow"></div>
<pre>
x := min(max(f(), 0), 100)
</pre>
</div>
<p>Synergies may also occur between different analyzers. For example, a common mistake is to repeatedly concatenate strings within a loop, resulting in quadratic time complexity—a bug and a potential vector for a denial-of-service attack. The <code>stringsbuilder</code> modernizer recognizes the problem and suggests using Go 1.10’s <code>strings.Builder</code>:</p>
<div class="beforeafter">
<pre>
s := ""
for _, b := range bytes {
	s += fmt.Sprintf("%02x", b)
}
use(s)
</pre>
<div class="beforeafter-arrow"></div>
<pre>
var s strings.Builder
for _, b := range bytes {
	s.WriteString(fmt.Sprintf("%02x", b))
}
use(s.String())
</pre>
</div>
<p>Once this fix is applied, a second analyzer may recognize that the <code>WriteString</code> and <code>Sprintf</code> operations can be combined as <code>fmt.Fprintf(&s, "%02x", b)</code>, which is both cleaner and more efficient, and offer a second fix. (This second analyzer is <a href="https://staticcheck.dev/docs/checks#QF1012" rel="noreferrer" target="_blank">QF1012</a> from Dominik Honnef’s <a href="https://staticcheck.dev/" rel="noreferrer" target="_blank">staticcheck</a>, which is already enabled in gopls but not yet in <code>go fix</code>, though we <a href="/issue/76918">plan</a> to add staticcheck analyzers to the go command starting in Go 1.27.)</p>
<p>Consequently, it may be worth running <code>go fix</code> more than once until it reaches a fixed point; twice is usually enough.</p>
<!-- Aside: The reason the tool does not apply the fixed point iteration itself is that (a) despite our efforts there is a non-zero chance that the transformation breaks the build, preventing most analyzers (those not marked RunDespiteErrors) from running on the second pass, and (b) the transformations in the first round of fixes may add imports for packages whose type information is not available, requiring the “build” to be restarted, which is impossible in many drivers such as Blaze, nogo, Tricorder, etc. Fundamentally this is a consequence of the analysis framework being designed like a distributed build (batch, coarse-grained, distributed pure function) not like an IDE (interactive fine-grained local mutations). -->
<h3 id="merging-fixes-and-conflicts">Merging fixes and conflicts</h3>
<p>A single run of <code>go fix</code> may apply dozens of fixes within the same source file. All fixes are conceptually independent, analogous to a set of git commits with the same parent. The <code>go fix</code> command uses a simple three-way merge algorithm to reconcile the fixes in sequence, analogous to the task of merging a set of git commits that edit the same file. If a fix conflicts with the list of edits accumulated so far, it is discarded, and the tool issues a warning that some fixes were skipped and that the tool should be run again.</p>
<p>This reliably detects <em>syntactic</em> conflicts arising from overlapping edits, but another class of conflict is possible: a <em>semantic</em> conflict occurs when two changes are textually independent but their meanings are incompatible. As an example consider two fixes that each remove the second-to-last use of a local variable: each fix is fine by itself, but when both are applied together the local variable becomes unused, and in Go that’s a compilation error. Neither fix is responsible for removing the variable declaration, but someone has to do it, and that someone is the user of <code>go fix</code>.</p>
<p>A similar semantic conflict arises when a set of fixes causes an import to become unused. Because this case is so common, the <code>go fix</code> command applies a final pass to detect unused imports and remove them automatically.</p>
<p>Semantic conflicts are relatively rare. Fortunately they usually reveal themselves as compilation errors, making them impossible to overlook. Unfortunately, when they happen, they do demand some manual work after running <code>go fix</code>.</p>
<p>Let’s now delve into the infrastructure beneath these tools.</p>
<p><a name='go/analysis'></a></p>
<h2 id="the-go-analysis-framework">The Go analysis framework</h2>
<p>Since the earliest days of Go, the <code>go</code> command has had two subcommands for static analysis, <code>go vet</code> and <code>go fix</code>, each with its own suite of algorithms: “checkers” and “fixers”. A checker reports likely mistakes in your code, such as passing a string instead of an integer as the operand of a <code>fmt.Printf("%d")</code> conversion. A fixer safely edits your code to fix a bug or to express the same thing in a better way, perhaps more clearly, concisely, or efficiently. Sometimes the same algorithm appears in both suites when it can both report a mistake and safely fix it.</p>
<p>In 2017 we redesigned the then-monolithic <code>go vet</code> program to separate the checker algorithms (now called “analyzers”) from the “driver”, the program that runs them; the result was the <a href="https://pkg.go.dev/golang.org/x/tools/go/analysis" rel="noreferrer" target="_blank">Go analysis framework</a>. This separation enables an analyzer to be written once then run in a diverse range of drivers for different environments, such as:</p>
<ul>
<li><a href="https://pkg.go.dev/golang.org/x/tools/go/analysis/unitchecker" rel="noreferrer" target="_blank">unitchecker</a>, which turns a suite of analyzers into a subcommand that can be run by the go command’s scalable incremental build system, analogous to a compiler in go build. This is the basis of <code>go fix</code> and <code>go vet</code>.</li>
<li><a href="https://github.com/bazel-contrib/rules_go/blob/master/go/nogo.rst" rel="noreferrer" target="_blank">nogo</a>, the analogous driver for alternative build systems such as Bazel and Blaze.</li>
<li><a href="https://pkg.go.dev/golang.org/x/tools/go/analysis/singlechecker" rel="noreferrer" target="_blank">singlechecker</a>, which turns an analyzer into a standalone command that loads, parses, and type-checks a set of packages (perhaps a whole program) and then analyzes them. We often use it for ad hoc experiments and measurements over the module mirror (<a href="https://proxy.golang.org/" rel="noreferrer" target="_blank">proxy.golang.org</a>) corpus.</li>
<li><a href="https://pkg.go.dev/golang.org/x/tools/go/analysis/multichecker" rel="noreferrer" target="_blank">multichecker</a>, which does the same thing for a suite of analyzers with a ‘swiss-army knife’ CLI.</li>
<li><a href="/gopls">gopls</a>, the <a href="https://microsoft.github.io/language-server-protocol/" rel="noreferrer" target="_blank">language server</a> behind VS Code and other editors, which provides real-time diagnostics from analyzers after each editor keystroke.</li>
<li>the highly configurable driver used by the <a href="https://staticcheck.dev/" rel="noreferrer" target="_blank">staticcheck</a> tool. (Staticcheck also provides a large suite of analyzers that can be run in other drivers.)</li>
<li><a href="https://research.google/pubs/tricorder-building-a-program-analysis-ecosystem/" rel="noreferrer" target="_blank">Tricorder</a>, the batch static analysis pipeline used by Google’s monorepo and integrated with its code review system.</li>
<li>gopls’ <a href="/gopls/features/mcp">MCP server</a>, which makes diagnostics available to LLM-based coding agents, providing more robust “guardrails”.</li>
<li><a href="https://pkg.go.dev/golang.org/x/tools/go/analysis/analysistest" rel="noreferrer" target="_blank">analysistest</a>, the analysis framework’s test harness.</li>
</ul>
<p>One benefit of the framework is its ability to express helper analyzers that don’t report diagnostics or suggest fixes of their own but instead compute some intermediate data structure that may be useful to many other analyzers, amortizing the costs of its construction. Examples include <a href="https://pkg.go.dev/golang.org/x/tools/go/analysis/passes/ctrlflow" rel="noreferrer" target="_blank">control-flow graphs</a>, the <a href="https://pkg.go.dev/golang.org/x/tools/go/analysis/passes/buildssa" rel="noreferrer" target="_blank">SSA representation</a> of function bodies, and data structures for <a href="https://pkg.go.dev/golang.org/x/tools/go/analysis/passes/inspect" rel="noreferrer" target="_blank">optimized AST navigation</a>.</p>
<p>Another benefit of the framework is its support for making deductions across packages. An analyzer can attach a “<a href="https://pkg.go.dev/golang.org/x/tools/go/analysis#hdr-Modular_analysis_with_Facts" rel="noreferrer" target="_blank">fact</a>” to a function or other symbol so that information learned while analyzing the function’s body can be used when later analyzing a call to the function, even if the call appears in another package or the later analysis occurs in a different process. This makes it easy to define scalable interprocedural analyses. For example, the printf checker can tell when a function such as <code>log.Printf</code> is really just a wrapper around <code>fmt.Printf</code>, so it knows that calls to <code>log.Printf</code> should be checked in a similar manner. This process works by induction, so the tool will also check calls to further wrappers around <code>log.Printf</code>, and so on. An example of an analyzer that makes heavy use of facts is <a href="https://github.com/uber-go/nilaway" rel="noreferrer" target="_blank">Uber’s nilaway</a>, which reports potential mistakes resulting in nil pointer dereferences.</p>
<img src="gofix-analysis-facts.svg">
<p>The process of “separate analysis” in <code>go fix</code> is analogous to the process of separate compilation in <code>go build</code>. Just as the compiler builds packages starting from the bottom of the dependency graph and passing type information up to importing packages, the analysis framework works from the bottom of the dependency graph up, passing facts (and types) up to importing packages.</p>
<p>In 2019, as we started developing <a href="/gopls">gopls</a>, the language server for Go, we added the ability for an analyzer to suggest a <a href="https://pkg.go.dev/golang.org/x/tools/go/analysis#SuggestedFix" rel="noreferrer" target="_blank">fix</a> when reporting a diagnostic. The printf analyzer, for example, offers to replace <code>fmt.Printf(msg)</code> with <code>fmt.Printf("%s", msg)</code> to avoid misformatting should the dynamic <code>msg</code> value contain a <code>%</code> symbol. This mechanism has become the basis for many of the quick fixes and refactoring features of gopls.</p>
<p>While all these developments were happening to <code>go vet</code>, <code>go fix</code> remained stuck as it was back before the <a href="/doc/go1compat">Go compatibility promise</a>, when early adopters of Go used it to maintain their code during the rapid and sometimes incompatible evolution of the language and libraries.</p>
<p>The Go 1.26 release brings the Go analysis framework to <code>go fix</code>. The <code>go vet</code> and <code>go fix</code> commands have converged and are now almost identical in implementation. The only differences between them are the criteria for the suites of algorithms they use, and what they do with computed diagnostics. Go <a href="https://cs.opensource.google/go/go/+/refs/tags/go1.26rc1:src/cmd/vet/main.go;l=62" rel="noreferrer" target="_blank">vet analyzers</a> must detect likely mistakes with low false positives; their diagnostics are reported to the user. Go <a href="https://cs.opensource.google/go/go/+/refs/tags/go1.26rc1:src/cmd/fix/main.go;l=46" rel="noreferrer" target="_blank">fix analyzers</a> must generate fixes that are safe to apply without regression in correctness, performance, or style; their diagnostics may not be reported, but the fixes are directly applied. Aside from this difference of emphasis, the task of developing a fixer is no different from that of developing a checker.</p>
<h3 id="improving-analysis-infrastructure">Improving analysis infrastructure</h3>
<p>As the number of analyzers in <code>go vet</code> and <code>go fix</code> continues to grow, we have been investing in infrastructure both to improve the performance of each analyzer and to make it easier to write each new analyzer.</p>
<p>For example, most analyzers start by traversing the syntax trees of each file in the package looking for a particular kind of node such as a range statement or function literal. The existing <a href="https://pkg.go.dev/golang.org/x/tools/go/ast/inspector" rel="noreferrer" target="_blank">inspector</a> package makes this scan efficient by pre-computing a compact index of a complete traversal so that later traversals can quickly skip subtrees that don’t contain any nodes of interest. Recently we extended it with the <a href="https://pkg.go.dev/golang.org/x/tools/go/ast/inspector#Cursor" rel="noreferrer" target="_blank">Cursor</a> datatype to allow flexible and efficient navigation between nodes in all four cardinal directions—up, down, left, and right, similar to navigating the elements of an HTML DOM—making it easy and efficient to express a query such as “find each go statement that is the first statement of a loop body”:</p>
<pre><code>var curFile inspector.Cursor = ...

// Find each go statement that is the first statement of a loop body.
for curGo := range curFile.Preorder((*ast.GoStmt)(nil)) {
	kind, index := curGo.ParentEdge()
	if kind == edge.BlockStmt_List && index == 0 {
		switch curGo.Parent().ParentEdgeKind() {
		case edge.ForStmt_Body, edge.RangeStmt_Body:
			...
		}
	}
}
</code></pre>
<p>Many analyzers start by searching for calls to a specific function, such as <code>fmt.Printf</code>. Function calls are among the most numerous expressions in Go code, so rather than search every call expression and test whether it is a call to <code>fmt.Printf</code>, it is much more efficient to pre-compute an index of symbol references, which is done by <a href="https://pkg.go.dev/golang.org/x/tools/internal/typesinternal/typeindex" rel="noreferrer" target="_blank">typeindex</a> and its <a href="https://pkg.go.dev/golang.org/x/[email protected]/internal/analysis/typeindex" rel="noreferrer" target="_blank">helper</a> analyzer. Then the calls to <code>fmt.Printf</code> can be enumerated directly, making the cost proportional to the number of calls instead of to the size of the package. For an analyzer such as <a href="https://pkg.go.dev/golang.org/x/tools/go/analysis/passes/hostport" rel="noreferrer" target="_blank">hostport</a> that seeks an infrequently used symbol (<code>net.Dial</code>), this can easily make it <a href="/cl/657958">1,000× faster</a>.</p>
<p>Some other infrastructural improvements over the past year include:</p>
<ul>
<li>a <strong>dependency graph of the standard library</strong> that analyzers can consult to avoid introducing import cycles. For example, we can’t introduce a call to <code>strings.Cut</code> in a package that is itself imported by <code>strings</code>.</li>
<li>support for <strong>querying the effective Go version</strong> of a file as determined by the enclosing go.mod file and build tags, so that analyzers don’t insert uses of features that are “too new”.</li>
<li>a richer <strong>library of refactoring primitives</strong> (e.g. “delete this statement”) that correctly handle adjacent comments and other tricky edge cases.</li>
</ul>
<p>We have come a long way, but there remains much to do. Fixer logic can be tricky to get right. Since we expect users to apply hundreds of suggested fixes with only cursory review, it’s critical that fixers are correct even in obscure edge cases. As just one example (see my GopherCon <a href="https://www.youtube.com/watch?v=_VePjjjV9JU&t=13m17s" rel="noreferrer" target="_blank">talk</a> for several more), we built a modernizer that replaces calls such as <code>append([]string{}, slice...)</code> by the clearer <code>slices.Clone(slice)</code> only to discover that, when <code>slice</code> is empty, the result of Clone is nil, a subtle behavior change that in rare cases can cause bugs; so we had to exclude <a href="https://pkg.go.dev/golang.org/x/tools/go/analysis/passes/modernize#hdr-Analyzer_appendclipped" rel="noreferrer" target="_blank">that modernizer</a> from the <code>go fix</code> suite.</p>
<p>Some of these difficulties for authors of analyzers can be ameliorated with better documentation (both for humans and LLMs), particularly checklists of surprising edge cases to consider and test. A pattern-matching engine for syntax trees, similar to those in <a href="https://pkg.go.dev/honnef.co/go/tools/pattern" rel="noreferrer" target="_blank">staticcheck</a> and <a href="https://tree-sitter.github.io/tree-sitter/using-parsers/queries/index.html" rel="noreferrer" target="_blank">Tree Sitter</a>, could simplify the fiddly task of efficiently identifying the locations that need fixing. A richer library of operators for computing accurate fixes would help avoid common mistakes. A better test harness would let us check that fixes don’t break the build, and preserve dynamic properties of the target code. These are all on our roadmap.</p>
<p><a name='self-service'></a></p>
<h2 id="the-self-service-paradigm">The “self-service” paradigm</h2>
<p>More fundamentally, we are turning our attention in 2026 to a “self-service” paradigm.</p>
<p>The <code>newexpr</code> analyzer we saw earlier is a typical modernizer: a bespoke algorithm tailored to a particular feature. The bespoke model works well for features of the language and standard library, but it doesn’t really help update uses of third-party packages. Although there’s nothing to stop you from writing a modernizer for your own public APIs and running it on your own project, there’s no automatic way to get users of your API to run it too. Your modernizer probably wouldn’t belong in gopls or the <code>go vet</code> suite unless your API is particularly widely used across the Go ecosystem. Even in that case you would have to obtain code reviews and approvals and then wait for the next release.</p>
<p>Under the self-service paradigm, Go programmers would be able to define modernizations for their own APIs that their users can apply without all the bottlenecks of the current centralized paradigm. This is especially important as the Go community and global Go corpus are growing much faster than the ability of our team to review analyzer contributions.</p>
<p>The <code>go fix</code> command in Go 1.26 includes a preview of the first fruits of this new paradigm: the <strong>annotation-driven source-level inliner</strong>, which is described in <a href="inliner">a follow-up post</a>. In the coming year, we plan to investigate two more approaches within this paradigm.</p>
<!-- TODO(adonovan): update the reference above when this post is ready: [//go:fix inline and source-level inliner](https://docs.google.com/document/d/16n29TcxMnZoEZtIo8BZcz6PSnh2dakWLSaa6UkROIEQ/edit?resourcekey=0-8QYiy7RDd2QbVAgKDOycoQ) -->
<p>First, we will be exploring the possibility of <a href="/issue/59869">dynamically loading</a> modernizers from the source tree and securely executing them, either in gopls or <code>go fix</code>. In this approach a package that provides an API for, say, a SQL database could additionally provide a checker for misuses of the API, such as SQL injection vulnerabilities or failure to handle critical errors. The same mechanism could be used by project maintainers to encode internal housekeeping rules, such as avoiding calls to certain problematic functions or enforcing stronger coding disciplines in critical parts of the code.</p>
<p>Second, many existing checkers can be informally described as “don’t forget to X after you Y!”, such as “close the file after you open it”, “cancel the context after you create it”, “unlock the mutex after you lock it”, “break out of the iterator loop after yield returns false”, and so on. What such checkers have in common is that they enforce certain invariants on all execution paths. We plan to explore generalizations and unifications of these control-flow checkers so that Go programmers can easily apply them to new domains, without complex analytical logic, simply by annotating their own code.</p>
<p>We hope that these new tools will save you effort during maintenance of your Go projects and help you learn about and benefit from newer features sooner. Please try out <code>go fix</code> on your projects and <a href="/issue/new">report</a> any problems you find, and do share any ideas you have for new modernizers, fixers, checkers, or self-service approaches to static analysis.</p>
</div>
</div>
<div class="Article prevnext">
<p>
<b>Next article: </b><a href="/blog/allocation-optimizations">Allocating on the Stack</a><br>
<b>Previous article: </b><a href="/blog/go1.26">Go 1.26 is released</a><br>
<b><a href="/blog/all">Blog Index</a></b>
</div>
</div>
</div>
<script src="/js/play.js"></script>
<p><em>Go 1.26 is released (Carlos Amedee, on behalf of the Go team; 10 February 2026): Go 1.26 adds a new garbage collector, cgo overhead reduction, experimental simd/archsimd package, experimental runtime/secret package, and more.</em></p>
<div id="blog"><div id="content">
<div id="content">
<div class="Article" data-slug="/blog/go1.26">
<h1 class="small"><a href="/blog/">The Go Blog</a></h1>
<h1>Go 1.26 is released</h1>
<p class="author">
Carlos Amedee, on behalf of the Go team<br>
10 February 2026
</p>
<div class='markdown'>
<p>Today the Go team is pleased to release Go 1.26.
You can find its binary archives and installers on the <a href="/dl/">download page</a>.</p>
<h2 id="language-changes">Language changes</h2>
<p>Go 1.26 introduces two significant refinements to the language
<a href="/doc/go1.26#language">syntax and type system</a>.</p>
<p>First, the built-in <code>new</code> function, which creates a new variable, now allows its operand to be an
expression, specifying the initial value of the variable.</p>
<p>As a simple example of this change, code such as this:</p>
<pre><code class="language-go">x := int64(300)
ptr := &x
</code></pre>
<p>can be simplified to:</p>
<pre><code class="language-go">ptr := new(int64(300))
</code></pre>
<p>Second, generic types may now refer to themselves in their own type parameter list. This change
simplifies the implementation of complex data structures and interfaces.</p>
<h2 id="performance-improvements">Performance improvements</h2>
<p>The previously experimental <a href="/doc/go1.26#new-garbage-collector">Green Tea garbage collector</a>
is now enabled by default.</p>
<p>The baseline <a href="/doc/go1.26#faster-cgo-calls">cgo overhead has been reduced</a>
by approximately 30%.</p>
<p>The compiler can now <a href="/doc/go1.26#compiler">allocate the backing store</a> for
slices on the stack in more situations, which improves performance.</p>
<h2 id="tool-improvements">Tool improvements</h2>
<p>The <code>go fix</code> command has been completely rewritten to use the
<a href="/pkg/golang.org/x/tools/go/analysis">Go analysis framework</a>, and now includes a
couple dozen “<a href="/pkg/golang.org/x/tools/go/analysis/passes/modernize">modernizers</a>”, analyzers
that suggest safe fixes to help your code take advantage of newer features of the language
and standard library. It also includes the
<a href="/pkg/golang.org/x/tools/go/analysis/passes/inline#hdr-Analyzer_inline"><code>inline</code> analyzer</a>, which
attempts to inline all calls to each function annotated with a <code>//go:fix inline</code> directive.
Two upcoming blog posts will address these features in more detail.</p>
<h2 id="more-improvements-and-changes">More improvements and changes</h2>
<p>Go 1.26 introduces many improvements over Go 1.25 across
its <a href="/doc/go1.26#tools">tools</a>, the <a href="/doc/go1.26#runtime">runtime</a>,
<a href="/doc/go1.26#compiler">compiler</a>, <a href="/doc/go1.26#linker">linker</a>,
and the <a href="/doc/go1.26#library">standard library</a>.
This includes the addition of three new packages: <a href="/doc/go1.26#new-cryptohpke-package"><code>crypto/hpke</code></a>,
<a href="/doc/go1.26#cryptomlkempkgcryptomlkem"><code>crypto/mlkem/mlkemtest</code></a>, and
<a href="/doc/go1.26#testingcryptotestpkgtestingcryptotest"><code>testing/cryptotest</code></a>.
There are <a href="/doc/go1.26#ports">port-specific</a> changes
and <a href="/doc/godebug#go-126"><code>GODEBUG</code> settings</a> updates.</p>
<p>Some of the additions in Go 1.26 are in an experimental stage
and become exposed only when you explicitly opt in. Notably:</p>
<ul>
<li>
<p>An <a href="/doc/go1.26#simd">experimental <code>simd/archsimd</code> package</a> provides access to “single instruction,
multiple data” (SIMD) operations.</p>
</li>
<li>
<p>An <a href="/doc/go1.26#new-experimental-runtimesecret-package">experimental <code>runtime/secret</code> package</a> provides
a facility for securely erasing temporaries used in code that manipulates secret
information, typically cryptographic in nature.</p>
</li>
<li>
<p>An <a href="/doc/go1.26#goroutineleak-profiles">experimental <code>goroutineleak</code> profile</a>
in the <code>runtime/pprof</code> package that reports leaked goroutines.</p>
</li>
</ul>
<p>These experiments are all expected to be generally available in a
future version of Go. We encourage you to try them out ahead of time.
We really value your feedback!</p>
<p>Please refer to the <a href="/doc/go1.26">Go 1.26 Release Notes</a> for the complete list
of additions, changes, and improvements in Go 1.26.</p>
<p>Over the next few weeks, follow-up blog posts will cover some of the topics
relevant to Go 1.26 in more detail. Check back later to read those posts.</p>
<p>Thanks to everyone who contributed to this release by writing code, filing bugs,
trying out experimental additions, sharing feedback, and testing the release candidates.
Your efforts helped make Go 1.26 as stable as possible.
As always, if you notice any problems, please <a href="/issue/new">file an issue</a>.</p>
<p>We hope you enjoy using the new release!</p>
</div>
</div>
<div class="Article prevnext">
<p>
<b>Next article: </b><a href="/blog/gofix">Using go fix to modernize Go code</a><br>
<b>Previous article: </b><a href="/blog/survey2025">Results from the 2025 Go Developer Survey</a><br>
<b><a href="/blog/all">Blog Index</a></b>
</div>
</div>
</div>
<script src="/js/play.js"></script>
<p><em>Results from the 2025 Go Developer Survey (Todd Kulesza, on behalf of the Go team; 21 January 2026): The 2025 Go Developer Survey results, focused on developer sentiment towards Go, use cases, challenges, and developer environments.</em></p>
<div id="blog"><div id="content">
<div id="content">
<div class="Article" data-slug="/blog/survey2025">
<h1 class="small"><a href="/blog/">The Go Blog</a></h1>
<h1>Results from the 2025 Go Developer Survey</h1>
<p class="author">
Todd Kulesza, on behalf of the Go team<br>
21 January 2026
</p>
<div class='markdown'>
<style type="text/css" scoped>
blockquote p {
color: var(--color-text-subtle) !important;
}
.chart {
margin-left: 1.5rem;
margin-right: 1.5rem;
width: 800px;
}
.quote_source {
font-style: italic;
}
@media (prefers-color-scheme: dark) {
.chart {
border-radius: 8px;
}
}
</style>
<p>Hello! In this article we’ll discuss the results of the 2025 Go Developer
Survey, conducted during September 2025.</p>
<p>Thank you to the 5,379 Go developers who responded to our survey invitation
this year. Your feedback helps both the Go team at Google and the wider Go
community understand the current state of the Go ecosystem and prioritize
projects for the year ahead.</p>
<p>Our three biggest findings are:</p>
<ul>
<li>Broadly speaking, Go developers asked for help with identifying and applying
best practices, making the most of the standard library, and expanding the
language and built-in tooling with more modern capabilities.</li>
<li>Most Go developers are now using AI-powered development tools when seeking
information (e.g., learning how to use a module) or toiling (e.g., writing
repetitive blocks of similar code), but their satisfaction with these tools
is middling due, in part, to quality concerns.</li>
<li>A surprisingly high proportion of respondents said they frequently need to
review documentation for core <code>go</code> subcommands, including <code>go build</code>, <code>go run</code>, and <code>go mod</code>, suggesting meaningful room for improvement with the <code>go</code>
command’s help system.</li>
</ul>
<p>Read on for the details about these findings, and much more.</p>
<h2 id="sections">Sections</h2>
<ul>
<li><a href="#demographics">Who did we hear from?</a></li>
<li><a href="#sentiment">How do people feel about Go?</a></li>
<li><a href="#uses">What are people building with Go?</a></li>
<li><a href="#challenges">What are the biggest challenges facing Go
developers?</a></li>
<li><a href="#devenv">What do their development environments look like?</a></li>
<li><a href="#methodology">Survey methodology</a></li>
</ul>
<h2 id="demographics">Who did we hear from?</h2>
<p>Most survey respondents self-identified as professional developers (87%) who
use Go for their primary job (82%). A large majority also uses Go for personal
or open-source projects (72%). Most respondents were between 25 – 45
years old (68%) with at least six years of professional development experience
(75%). Going deeper, 81% of respondents told us they had more professional
development experience than Go-specific experience, strong evidence that Go is
usually not the first language developers work with. In fact, one of the
themes that repeatedly surfaced during this year’s survey analysis seems to
stem from this fact: when the way to do a task in Go is substantially
different from a more familiar language, it creates friction for developers to
first learn the new (to them) idiomatic Go pattern, and then to consistently
recall these differences as they continue to work with multiple languages.
We’ll return to this theme later.</p>
<p>The single most common industry respondents work in was “Technology” (46%),
but a majority of respondents work outside of the tech industry (54%). We saw
representation of all sizes of organizations, with a bare majority working
somewhere with 2 – 500 employees (51%), 9% working alone, and 30%
working at enterprises of over 1,000 employees. As in prior years, a majority
of responses come from North America and Europe.</p>
<p>This year we observed a decrease in the proportion of respondents who said
they were fairly new to Go, having worked with it for less than one year
(13%, vs. 21% in 2024). We suspect this is related to <a
href="https://digitaleconomy.stanford.edu/wp-content/uploads/2025/08/Canaries_BrynjolfssonChandarChen.pdf">industry-wide
declines in entry-level software engineering roles</a>; we commonly hear from
people that they learned Go for a specific job, so a downturn in hiring would
be expected to reduce the number of developers learning Go in that year. This
hypothesis is further supported by our finding that over 80% of respondents
learned Go <em>after</em> beginning their professional career.</p>
<p>Other than the above, we found no significant changes in other demographics
since our 2024 survey.</p>
<p><img src="survey2025/where.svg" alt="During the past year, in which types of
situations have you used Go?" class="chart" /> <img
src="survey2025/go_exp.svg" alt="How long have you been using Go?"
class="chart" /> <img src="survey2025/role.svg" alt="Which of the following
best describe how or why you work with Go?" class="chart" /> <img
src="survey2025/age.svg" alt="How old are you?" class="chart" /> <img
src="survey2025/pro_dev_exp.svg" alt="How many years of professional coding
experience do you have?" class="chart" /> <img src="survey2025/org_size.svg"
alt="How many people work at your organization?" class="chart" /> <img
src="survey2025/industry.svg" alt="Which of the following best describes the
industry in which your organization operates?" class="chart" /> <img
src="survey2025/location.svg" alt="Where do you live?" class="chart" /></p>
<h2 id="sentiment">How do people feel about Go?</h2>
<p>The vast majority of respondents (91%) said they felt satisfied while working
with Go. Almost ⅔ were “very satisfied”, the highest rating. Both of these
metrics are incredibly positive, and have been stable since we began asking
this question in 2019. The stability over time is really what we monitor from
this metric — we view it as a lagging indicator, meaning by the time
this satisfaction metric shows a meaningful change, we would expect to already
have seen earlier signals from issue reports, mailing lists, or other
community feedback.</p>
<p><img src="survey2025/csat.svg" alt="Overall, how satisfied or dissatisfied
have you been using Go during the past year?" class="chart" /></p>
<p>Why were respondents so positive about Go? Looking at open-text responses to
several different survey questions suggests that it’s the gestalt, rather than
any one thing. These folks are telling us that they find tremendous value in
Go as a holistic platform. That doesn’t mean it supports all programming
domains equally well (it surely does not), but that developers value the
domains it <em>does</em> support well via the standard library and built-in tooling.</p>
<p>Below are some representative quotations from respondents. To provide context
for each quote, we also identify the satisfaction level, years of experience
with Go, and industry of the respondent.</p>
<blockquote>
<p>“Go is by far my favorite language; other languages feel far too complex and
unhelpful. The fact that Go is comparatively small, simple, with fewer bells
and whistles plays a massive role in making it such a good long-lasting
foundation for building programs with it. I love that it scales well to
being used by a single programmer and in large teams.” <span
class="quote_source">— Very satisfied / 10+ years / Technology
company</span></p>
</blockquote>
<blockquote>
<p>“The entire reason I use Go is the great tooling and standard library. I’m
very thankful to the team for focusing on great HTTP, crypto, math, sync,
and other tools that make developing service-oriented applications easy and
reliable.” <span class="quote_source">— Very satisfied / 10+ years /
Energy company</span></p>
</blockquote>
<blockquote>
<p>“[The] Go ecosystem is the reason why I really like the programming
language. There are a lot of npm issues lately but not with Go.” <span
class="quote_source">— Very satisfied / 3 – 10 years / Financial
services</span></p>
</blockquote>
<p>This year we also asked about the other languages that people use. Survey
respondents said that besides Go, they enjoy working with Python, Rust, and
TypeScript, among a long tail of other languages. Some shared characteristics
of these languages align with common points of friction reported by Go
developers, including areas like error handling, enums, and object-oriented
design patterns. For example, when we sum the proportion of respondents who
said their next-favorite language included one of the following factors, we
found that majorities of respondents enjoy using languages with inheritance,
type-safe enums, and exceptions, with only a bare majority of these languages
including a static type system by default.</p>
<table>
<thead>
<tr>
<th>Concept or feature</th>
<th>Respondents whose next-favorite language includes it</th>
</tr>
</thead>
<tbody>
<tr>
<td>Inheritance</td>
<td>71%</td>
</tr>
<tr>
<td>Type-safe enums</td>
<td>65%</td>
</tr>
<tr>
<td>Exceptions</td>
<td>60%</td>
</tr>
<tr>
<td>Static typing</td>
<td>51%</td>
</tr>
</tbody>
</table>
<p>We think this is important because it reveals the larger environment in which
developers operate — it suggests that people need to use different
design patterns for fairly mundane tasks, depending on the language of the
codebase they’re currently working on. This leads to additional cognitive load
and confusion, not only among developers new to Go (who must learn idiomatic
Go design patterns), but also among the many developers who work in multiple
codebases or projects. One way to alleviate this additional load is
context-specific guidance, such as a tutorial on “Error handling in Go for
Java developers”. There may even be opportunities to build some of this
guidance into code analyzers, making it easier to surface directly in an IDE.</p>
<p><img src="survey2025/fav_lang.svg" alt="Not including Go, what's your favorite
programming language?" class="chart" /></p>
<p>This year we asked the Go community to share their sentiment towards the Go
project itself. These results were quite different from the 91% satisfaction
rate we discussed above, and point to areas the Go Team plans to invest our
energy during 2026. In particular, we want to encourage more contributors to
get involved, and ensure the Go Team accurately understands the challenges Go
developers currently face. We hope this focus, in turn, will help to increase
developer trust in both the Go project and the Go Team leadership. As one
respondent explained the problem:</p>
<blockquote>
<p>“Now that the founding first generation of Go Team members [are] not
involved much anymore in the decision making, I am a bit worried about the
future of Go in terms of quality of maintenance, and its balanced decisions
so far wrt to changes in the language and std lib. More presence in form of
talks [by] the new core team members about the current state and future
plans might be helpful to strengthen trust.” <span
class="quote_source">— Very satisfied / 10+ years / Technology
company</span></p>
</blockquote>
<p><img src="survey2025/trust.svg" alt="To what extent do you agree or disagree
with the following statements?" class="chart" /></p>
<h2 id="uses">What are people building with Go?</h2>
<p>We revised the 2024 list of “what types of things do you build with Go?”
with the intent of more usefully teasing apart what people are building with
Go and avoiding confusion around evolving terms like “agents”. Respondents’ top
use cases remain CLIs and API services, with no meaningful change in either
since 2024. In fact, a majority of respondents (55%) said they build <em>both</em>
CLIs and API services with Go. Over ⅓ of respondents specifically build cloud
infrastructure tooling (a new category), and 11% work with ML models, tools,
or agents (an expanded category). Unfortunately, embedded use cases were left
off the revised list, but we’ll fix this for next year’s survey.</p>
<p><img src="survey2025/build.svg" alt="What types of things do you build with
Go?" class="chart" /></p>
<p>Most respondents said they are not currently building AI-powered features into
the Go software they work on (78%), with ⅔ reporting that their software does
not use AI functionality at all (66%). This appears to be a decrease in
production-related AI usage year-over-year; in 2024, 59% of respondents were
not involved in AI feature work, while 39% indicated some level of
involvement. That marks a shift of 14 points away from building AI-powered
systems among survey respondents, and may reflect some natural pullback from
the early hype around AI-powered applications: it’s plausible that lots of
folks tried to see what they could do with this technology during its initial
rollout, with some proportion deciding against further exploration (at least
at this time).</p>
<p><img src="survey2025/genai.svg" alt="Think about the Go software that you've
worked on the most during the past month. Does it use AI for any of its
functionality?" class="chart" /></p>
<p>Among respondents who are building AI- or LLM-powered functionality, the most
common use case was to create summaries of existing content (45%). Overall,
however, there was little difference between most uses, with between 28%
– 33% of respondents adding AI functionality to support classification,
generation, solution identification, chatbots, and software development.</p>
<p><img src="survey2025/genai_use.svg" alt="The Go software that I build uses AI
or LLMs to:" class="chart" /></p>
<h2 id="challenges">What are the biggest challenges facing Go developers?</h2>
<p>One of the most helpful types of feedback we receive from developers is
detail about the challenges people run into while working with Go. The Go
Team considers this information holistically and over long time horizons,
because there is often tension between improving Go’s rougher edges and
keeping the language and tooling consistent for developers. Beyond technical
factors, every change also incurs some cost in terms of developer attention
and cognitive disruption. Minimizing disruption may sound a bit dull or
boring, but we view this as an important strength of Go. As Russ Cox wrote in
2023, <a href="/blog/compat">“Boring is good… Boring means being able to focus on your work, not
on what’s different about Go.”</a>.</p>
<p>In that spirit, this year’s top challenges are not radically different from
last year’s. The top three frustrations respondents reported were “Ensuring
our Go code follows best practices / Go idioms” (33% of respondents), “A
feature I value from another language isn’t part of Go” (28%), and “Finding
trustworthy Go modules and packages” (26%). We examined open-text responses to
better understand what people meant. Let’s take a minute to dig into each.</p>
<p>Respondents who were most frustrated by writing idiomatic Go were often
looking for more official guidance, as well as tooling support to help enforce
this guidance in their codebase. As in prior surveys, questions about how to
structure Go projects were also a common theme. For example:</p>
<blockquote>
<p>“The simplicity of go helps to read and understand code from other
developers, but there are still some aspects that can differ quite a lot
between programmers. Especially if developers come from other languages,
e.g. Java.” <span class="quote_source">— Very satisfied / 3 – 10
years / Healthcare and life sciences</span></p>
</blockquote>
<blockquote>
<p>“More opinionated way to write go code. Like how to structure a Go project
for services/cli tool.” <span class="quote_source">— Very satisfied /
< 3 years / Technology</span></p>
</blockquote>
<blockquote>
<p>“It’s hard to figure out what are good idioms. Especially since the core
team doesn’t keep Effective Go up-to-date.” <span
class="quote_source">— Very satisfied / 3 – 10 years /
Technology</span></p>
</blockquote>
<p>The second major category of frustrations was language features that
developers enjoyed working with in other ecosystems. These open-text comments
largely focused on error handling and reporting patterns, enums and sum types,
nil pointer safety, and general expressivity / verbosity:</p>
<blockquote>
<p>“Still not sure what is the best way to do error handling.” <span
class="quote_source">— Very satisfied / 3 – 10 years / Retail
and consumer goods</span></p>
</blockquote>
<blockquote>
<p>“Rust’s enums are great, and lead to writing great type safe code.” <span
class="quote_source">— Somewhat satisfied / 3 – 10 years /
Healthcare and life sciences</span></p>
</blockquote>
<blockquote>
<p>“There is nothing (in the compiler) that stops me from using a maybe nil
pointer, or using a value without checking the err first. That should be
[baked into] the type system.” <span class="quote_source">— Somewhat
satisfied / < 3 years / Technology</span></p>
</blockquote>
<blockquote>
<p>“I like [Go] but I didn’t expect it to have nil pointer exceptions :)” <span
class="quote_source">— Somewhat satisfied / 3 – 10 years /
Financial services</span></p>
</blockquote>
<blockquote>
<p>“I often find it hard to build abstractions and to provide clear intention
to the future readers of my code.” <span class="quote_source">—
Somewhat dissatisfied / < 3 years / Technology</span></p>
</blockquote>
<p>The third major frustration was finding trustworthy Go modules. Respondents
often described two aspects to this problem. One is that they considered many
3rd-party modules to be of marginal quality, making it hard for really good
modules to stand out. The second is identifying which modules are commonly
used and under which types of conditions (including recent trends over time).
These are both problems that could be addressed by showing what we’ll vaguely
call “quality signals” on pkg.go.dev. Respondents provided helpful
explanations of the signals they use to identify trustworthy modules,
including project activity, code quality, recent adoption trends, or the
specific organizations that support or rely upon the module.</p>
<blockquote>
<p>“Being able to filter by criteria like stable version, number of users and
last update age at pkg.go.dev could make things a bit easier.” <span
class="quote_source">— Very satisfied / < 3 years / Technology</span></p>
</blockquote>
<blockquote>
<p>“Many pacakges are just clones/forks or one-off pojects with no
history/maintenance. [sic]” <span class="quote_source">— Very
satisfied / 10+ years / Financial services</span></p>
</blockquote>
<blockquote>
<p>“Maybe flagging trustworthy packages based on experience, maturity and
community feedback?” <span class="quote_source">— Very satisfied / 3
– 10 years / Healthcare and life sciences</span></p>
</blockquote>
<p>We agree that these are all areas where the developer experience with Go could
be improved. The challenge, as discussed earlier, is doing so in a way
that doesn’t lead to breaking changes or increased confusion among Go
developers, and doesn’t otherwise get in the way of people trying to get their
work done with Go. Feedback from this survey is a major source of information we
use when discussing proposals, but if you’d like to get involved more directly
or follow along with other contributors, visit the <a href="https://github.com/golang/go/issues?q=state%3Aopen%20label%3AProposal" rel="noreferrer" target="_blank">Go proposals on
GitHub</a>;
please be sure to <a href="https://github.com/golang/proposal" rel="noreferrer" target="_blank">follow this process</a> if
you’d like to add a new proposal.</p>
<p><img src="survey2025/frustrations.svg" alt="What are your three most
frustrating things about working with Go?" class="chart" /></p>
<p>In addition to these (potentially) ecosystem-wide challenges, this year we
also asked specifically about working with the <code>go</code> command. We’ve informally
heard from developers that this tool’s help system can be confusing to
navigate, but we haven’t had a great sense of how frequently people find
themselves reviewing this documentation.</p>
<p>Respondents told us that, except for <code>go test</code>, between 15% and 25% of them
felt they “often needed to review documentation” when working with these
tools. This was surprising, especially for commonly used subcommands like
<code>build</code> and <code>run</code>. Common reasons included remembering specific flags,
understanding what different options do, and navigating the help system
itself. Participants also confirmed that infrequent use was one reason for
frustration, but navigating and parsing command help appears to be the
underlying cause. In other words, we all expect to need to review
documentation sometimes, but we don’t expect to need help navigating the
documentation system itself. As one respondent described their journey:</p>
<blockquote>
<p>“Accessing the help is painful. go test --help # didn’t work, but tell[s] me
to type <code>go help test</code> instead… go help test # oh, actually, the info I’m
looking for is in <code>testflag</code> go help testflag # visually parsing through
text that looks all the same without much formatting… I just lack time to
dig into this rabbit hole.” <span class="quote_source">— Very
satisfied / 10+ years / Technology</span></p>
</blockquote>
<p><img src="survey2025/go_help.svg" alt="Do you find yourself frequently
reviewing documentation for any of the following Go subcommands?"
class="chart" /></p>
<h2 id="devenv">What do their development environments look like?</h2>
<h3 id="operating-systems-and-architectures">Operating systems and architectures</h3>
<p>Generally, respondents told us their development platforms are UNIX-like. Most
respondents develop on macOS (60%) or Linux (58%) and deploy to Linux-based
systems, including containers (96%). The largest year-over-year change was
among “embedded devices / IoT” deployments, which increased from 2% to 8% of
respondents; this was the only meaningful change in deployment platforms since
2024.</p>
<p>The vast majority of respondents develop on x86-64 or ARM64 architectures,
with a sizable group (25%) still potentially working on 32-bit x86 systems.
However, we believe the wording of this question was confusing to respondents;
next year we’ll clarify the 32-bit vs. 64-bit distinction for each
architecture.</p>
<p><img src="survey2025/os_dev.svg" alt="Which platforms do you use when writing
Go code?" class="chart" /> <img src="survey2025/deploy.svg" alt="Which systems
do you deploy your Go software to?" class="chart" /> <img
src="survey2025/arch.svg" alt="Which architectures do you deploy your Go
software to?" class="chart" /></p>
<h3 id="code-editors">Code editors</h3>
<p>Several new code editors have become available in the past two years, and we
expanded our survey question to include the most popular ones. While we saw
some evidence of early adoption, most respondents continued to favor <a href="https://code.visualstudio.com/" rel="noreferrer" target="_blank">VS
Code</a> (37%) or
<a href="https://www.jetbrains.com/go/" rel="noreferrer" target="_blank">GoLand</a> (28%). Of the newer editors, Zed and
Cursor were the highest ranked, each becoming the preferred editor of 4% of
respondents. To put those numbers in context, we looked back at when VS Code
and GoLand were first introduced. VS Code (released in 2015) was favored by
16% of respondents one year after its release. IntelliJ has had a
community-led Go plugin longer than we’ve been surveying Go developers (💙),
but if we look at when JetBrains began officially supporting Go in IntelliJ
(2016), within one year IntelliJ was preferred by 20% of respondents.</p>
<p>Note: This analysis of code editors does not include respondents who were
referred to the survey directly from VS Code or GoLand.</p>
<p><img src="survey2025/editor.svg" alt="What is your favorite code editor for
Go?" class="chart" /></p>
<h3 id="cloud-environments">Cloud environments</h3>
<p>The most common deployment environments for Go continue to be Amazon Web
Services (AWS) at 46% of respondents, company-owned servers (44%), and Google
Cloud Platform (GCP) at 26%. These numbers show minor shifts since 2024, but
nothing statistically significant. We found that the “Other” category
increased to 11% this year, and this was primarily driven by Hetzner (20% of
Other responses); we plan to include Hetzner as a response choice in next
year’s survey.</p>
<p>We also asked respondents about their experience of developing with
different cloud providers. The most common responses, however, showed that
respondents weren’t really sure (46%) or don’t directly interact with public
cloud providers (21%). The biggest driver behind these responses was a theme
we’ve heard often before: with containers, it’s possible to abstract many
details of the cloud environment away from the developer, so that they don’t
meaningfully interact with most provider-specific technologies. This result
suggests that even developers whose work is <em>deployed</em> to clouds may have
limited experience with the larger suite of tools and technology associated
with each cloud provider. For example:</p>
<blockquote>
<p>“Kinda abstract to the platform, Go is very easy to put in a container and
so pretty easy to deploy anywhere: one of its big strength[s].” <span
class="quote_source">— [no satisfaction response] / 3 – 10 years
/ Technology</span></p>
</blockquote>
<blockquote>
<p>“The cloud provider really doesn’t make much difference to me. I write code
and deploy it to containers, so whether that’s AWS or GCP I don’t really
care.” <span class="quote_source">— Somewhat satisfied / 3 – 10
years / Financial services</span></p>
</blockquote>
<p>We suspect this level of abstraction is dependent on the use case and
requirements of the service that’s being deployed — it may not always
make sense or be possible to keep it highly abstracted. In the future, we plan
to further investigate how Go developers tend to interact with the platforms
where their software is ultimately deployed.</p>
<p><img src="survey2025/work_deploy.svg" alt="My team at work deploys Go programs
to:" class="chart" /></p>
<p><img src="survey2025/favorite_go_cloud.svg" alt="In your experience, which
public cloud provider offers the best experience for Go developers?"
class="chart" /></p>
<h3 id="developing-with-ai">Developing with AI</h3>
<p>Finally, we can’t discuss development environments in 2025 without also
mentioning AI-powered software development tools. Our survey suggests
bifurcated adoption — while a majority of respondents (53%) said they
use such tools daily, there is also a large group (29%) who did not use them
at all, or used them only a few times, during the past month. We expected this
to negatively correlate with age or development experience, but were unable to
find strong evidence supporting this theory except for <em>very</em> new developers:
respondents with less than one year of professional development experience
(not specific to Go) did report more AI use than every other cohort, but this
group only represented 2% of survey respondents.</p>
<p><img src="survey2025/ai_freq.svg" alt="During the past month, how often did
you use AI-powered tools when writing Go?" class="chart" /></p>
<p>At this time, agentic use of AI-powered tools appears nascent among Go
developers, with only 17% of respondents saying this is their primary way of
using such tools, though a larger group (40%) are occasionally trying agentic
modes of operation.</p>
<p><img src="survey2025/ai_agent.svg" alt="When working with AI-powered
development tools, do you tend to use them as unsupervised agents?"
class="chart" /></p>
<p>The most commonly used AI assistants remain ChatGPT, GitHub Copilot, and
Claude. Most of these assistants show lower usage numbers <a href="/blog/survey2024-h2-results#ai-assistance">compared with our 2024
survey</a> (Claude and Cursor are
notable exceptions), but due to a methodology change, this is not an
apples-to-apples comparison. It is, however, plausible that developers are
“shopping around” less than they were when these tools were first released,
resulting in more people using a single assistant for most of their work.</p>
<p><img src="survey2025/ai_asst.svg" alt="When writing Go code, which AI
assistants or agents have you used in the last month?" class="chart" /></p>
<p>We also asked about overall satisfaction with AI-powered development tools. A
majority (55%) reported being satisfied, but this was heavily weighted towards
the “Somewhat satisfied” category (42%) vs. the “Very satisfied” group (13%).
Recall that Go itself consistently shows a 90%+ satisfaction rate each year;
this year, 62% of respondents said they are “Very satisfied” with Go. We add
this context to show that while AI-powered tooling is starting to see adoption
and finding some successful use cases, developer sentiment towards them
remains much softer than towards more established tooling (among Go
developers, at least).</p>
<p>What is driving this lower rate of satisfaction? In a word: quality. We asked
respondents to tell us something good they’ve accomplished with these tools,
as well as something that didn’t work out well. A majority said that creating
non-functional code was their primary problem with AI developer tools (53%),
with 30% lamenting that even working code was of poor quality. The most
frequently cited benefits, conversely, were generating unit tests, writing
boilerplate code, enhanced autocompletion, refactoring, and documentation
generation. These appear to be cases where code quality is perceived as less
critical, tipping the balance in favor of letting AI take the first pass at a
task. That said, respondents also told us the AI-generated code in these
successful cases still required careful review (and often, corrections), as it
can be buggy, insecure, or lack context.</p>
<blockquote>
<p>“I’m never satisfied with code quality or consistency, it never follows the
practices I want to.” <span class="quote_source">— [no satisfaction
response] / 3 – 10 years / Financial services</span></p>
</blockquote>
<blockquote>
<p>“All AI tools tend to hallucinate quickly when working with medium-to-large
codebases (10k+ lines of code). They can explain code effectively but
struggle to generate new, complex features” <span
class="quote_source">— Somewhat satisfied / 3 – 10 years /
Retail and consumer goods</span></p>
</blockquote>
<blockquote>
<p>“Despite numerous efforts to make it write code in an established codebase,
it would take too much effort to steer it to follow the practices in the
project, and it would add subtle behaviour paths - i.e. if it would miss
some method it would try to find its way around it or rely on some side
effect. Sometimes those things are hard to recognize during code review. I
also found it mentally taxing to review ai generated code and that overhead
kills the productivity potential in writing code.” <span
class="quote_source">— Very satisfied / 10+ years / Technology</span></p>
</blockquote>
<p><img src="survey2025/ai_csat.svg" alt="Overall, how satisfied or dissatisfied
have you felt while working with your AI-powered development tools during the
past month?" class="chart"></p>
<p>When we asked developers what they used these tools for, a pattern emerged
that is consistent with these quality concerns. The tasks with the most adoption
(green in the chart below) and least resistance (red) deal with bridging
knowledge gaps, improving local code, and avoiding toil. The frustrations that
developers talk about with code-generating tools were much less evident when
they’re seeking information, like how to use a specific API or configure test
coverage, and perhaps as a result, we see higher usage of AI in these areas.
Another spot that stood out was <em>local</em> code review and related suggestions
— people were less interested in using AI to review other people’s code
than in reviewing their own. Surprisingly, “testing code” showed lower AI
adoption than other toilsome tasks, though we don’t yet have a strong
understanding of why.</p>
<p>Of all the tasks we asked about, “Writing code” was the most bifurcated: 66%
of respondents already use, or hope to soon use, AI for this, while 25% of
respondents didn’t want AI involved at all. Open-ended responses suggest
developers primarily use this for toilsome, repetitive code, and continue to
have concerns about the quality of AI-generated code.</p>
<p><img src="survey2025/ai_tools_what.svg" alt="How are you using AI-powered
tools with Go today?" class="chart" /></p>
<h2 id="closing">Closing</h2>
<p>Once again, a tremendous thank-you to everyone who responded to this year’s Go
Developer Survey!</p>
<p>We plan to share the raw survey dataset in Q1 2026, so the larger community
can also explore the data underlying these findings. This will only include
responses from people who opted in to share this data (82% of all
respondents), so there may be some differences from the numbers we reference
in this post.</p>
<h2 id="methodology">Survey methodology</h2>
<p>This survey was conducted from September 9 to September 30, 2025. Participants
were publicly invited to respond via the Go Blog and social media
channels (including Bluesky, Mastodon, Reddit, and X), as well as via randomized
in-product invitations to people using VS Code and GoLand to write Go
software. We received a total of 7,070 responses. After data cleaning to
remove bots and other very low quality responses, 5,379 were used for the
remainder of our analysis. The median survey response time was between 12
and 13 minutes.</p>
<p>Throughout this report we use charts of survey responses to provide supporting
evidence for our findings. All of these charts use a similar format. The title
is the exact question that survey respondents saw. Unless otherwise noted,
questions were multiple choice and participants could only select a single
response choice; each chart’s subtitle will tell the reader if the question
allowed multiple response choices or was an open-ended text box instead of a
multiple choice question. For charts of open-ended text responses, a Go team
member read and manually categorized all of the responses. Many open-ended
questions elicited a wide variety of responses; to keep the chart sizes
reasonable, we condensed them to a maximum of the top 10-12 themes, with
additional themes all grouped under “Other”. The percentage labels shown in
charts are rounded to the nearest integer (e.g., 1.4% and 0.8% will both be
displayed as 1%), but the length of each bar and row ordering are based on the
unrounded values.</p>
<p>To help readers understand the weight of evidence underlying each finding, we
included error bars showing the 95% <a href="https://en.wikipedia.org/wiki/Confidence_interval" rel="noreferrer" target="_blank">confidence
interval</a> for responses;
narrower bars indicate increased confidence. Sometimes two or more responses
have overlapping error bars, which means the relative order of those responses
is not statistically meaningful (i.e., the responses are effectively tied).
The lower right of each chart shows the number of people whose responses are
included in the chart, in the form “n = [number of respondents]”.</p>
</div>
</div>
<div class="Article prevnext">
<p>
<b>Next article: </b><a href="/blog/go1.26">Go 1.26 is released</a><br>
<b>Previous article: </b><a href="/blog/16years">Go’s Sweet 16</a><br>
<b><a href="/blog/all">Blog Index</a></b>
</p>
</div>
</div>
</div>
<script src="/js/play.js"></script>
<div id="blog"><div id="content">
<div id="content">
<div class="Article" data-slug="/blog/16years">
<h1 class="small"><a href="/blog/">The Go Blog</a></h1>
<h1>Go’s Sweet 16</h1>
<p class="author">
Austin Clements, for the Go team<br>
14 November 2025
</p>
<div class='markdown'>
<p>This past Monday, November 10th, we celebrated the 16th anniversary of Go’s
<a href="https://opensource.googleblog.com/2009/11/hey-ho-lets-go.html" rel="noreferrer" target="_blank">open source
release</a>!</p>
<p>We released <a href="/blog/go1.24">Go 1.24 in February</a> and <a href="/blog/go1.25">Go 1.25 in
August</a>, following our now well-established and dependable release
cadence. Continuing our mission to build the most productive language platform
for building production systems, these releases included new APIs for building
robust and reliable software, significant advances in Go’s track record for
building secure software, and some serious under-the-hood improvements.
Meanwhile, no one can ignore the seismic shifts in our industry brought by
generative AI. The Go team is applying its thoughtful and uncompromising mindset
to the problems and opportunities of this dynamic space, working to bring Go’s
production-ready approach to building robust AI integrations, products, agents,
and infrastructure.</p>
<h1 id="core-language-and-library-improvements">Core language and library improvements</h1>
<p>First released in Go 1.24 as an experiment and then graduated in Go 1.25, the
new <a href="https://pkg.go.dev/testing/synctest" rel="noreferrer" target="_blank"><code>testing/synctest</code></a> package
significantly simplifies writing tests for <a href="/blog/testing-time">concurrent, asynchronous
code</a>. Such code is particularly common in network services,
and is traditionally very hard to test well. The <code>synctest</code> package works by
virtualizing time itself. It takes tests that used to be slow, flaky, or both,
and makes them easy to rewrite into reliable and nearly instantaneous tests,
often with just a couple extra lines of code. It’s also a great example of Go’s
integrated approach to software development: behind an almost trivial API, the
<code>synctest</code> package hides a deep integration with the Go runtime and other parts
of the standard library.</p>
<p>This isn’t the only boost the <code>testing</code> package got over the past year. The new
<a href="https://pkg.go.dev/testing#B.Loop" rel="noreferrer" target="_blank"><code>testing.B.Loop</code></a> API is both easier to use
than the original <code>testing.B.N</code> API and addresses many of the traditional—and
often invisible!—<a href="/blog/testing-b-loop">pitfalls</a> of writing Go benchmarks. The
<code>testing</code> package also has new APIs that <a href="https://pkg.go.dev/testing#T.Context" rel="noreferrer" target="_blank">make it easy to
clean up</a> in tests that use
<a href="https://pkg.go.dev/context#Context" rel="noreferrer" target="_blank"><code>Context</code></a>, and that <a href="https://pkg.go.dev/testing#T.Output" rel="noreferrer" target="_blank">make it
easy</a> to write to the test’s log.</p>
<p>Go and containerization grew up together and work great with each other. Go 1.25
launched <a href="/blog/container-aware-gomaxprocs">container-aware scheduling</a>, making
this pairing even stronger. Without developers having to lift a finger, this
transparently adjusts the parallelism of Go workloads running in containers,
preventing CPU throttling that can impact tail latency and improving Go’s
out-of-the-box production-readiness.</p>
<p>Go 1.25’s new <a href="/blog/flight-recorder">flight recorder</a> builds on our already
powerful execution tracer, enabling deep insights into the dynamic behavior of
production systems. While the execution tracer generally collected <em>too much</em>
information to be practical in long-running production services, the flight
recorder is like a little time machine, allowing a service to snapshot recent
events in great detail <em>after</em> something has gone wrong.</p>
<h2 id="secure-software-development">Secure software development</h2>
<p>Go continues to strengthen its commitment to secure software development, making
significant strides in its native cryptography packages and evolving its
standard library for enhanced safety.</p>
<p>Go ships with a full suite of native cryptography packages in the standard
library, which reached two major milestones over the past year. A security
audit conducted by independent security firm <a href="https://www.trailofbits.com/" rel="noreferrer" target="_blank">Trail of
Bits</a> yielded <a href="/blog/tob-crypto-audit">excellent
results</a>, with only a single low-severity finding.
Furthermore, through a collaborative effort between the Go Security Team and
<a href="https://geomys.org/" rel="noreferrer" target="_blank">Geomys</a>, these packages achieved CAVP certification,
paving the way for <a href="/blog/fips140">full FIPS 140-3 certification</a>. This is a
vital development for Go users in certain regulated environments. FIPS 140
compliance, previously a source of friction due to the need for unsupported
solutions, will now be seamlessly integrated, addressing concerns related to
safety, developer experience, functionality, release velocity, and compliance.</p>
<p>The Go standard library has continued to evolve to be <em>safe by default</em> and
<em>safe by design</em>. For example, the <a href="https://pkg.go.dev/os#Root" rel="noreferrer" target="_blank"><code>os.Root</code></a>
API—added in Go 1.24—enables <a href="/blog/osroot">traversal-resistant file system
access</a>, effectively combating a class of vulnerabilities where an
attacker could manipulate programs into accessing files intended to be
inaccessible. Such vulnerabilities are notoriously challenging to address
without underlying platform and operating system support, and the new
<a href="https://pkg.go.dev/os#Root" rel="noreferrer" target="_blank"><code>os.Root</code></a> API offers a straightforward,
consistent, and portable solution.</p>
<h2 id="under-the-hood-improvements">Under-the-hood improvements</h2>
<p>In addition to user-visible changes, Go has made significant improvements under
the hood over the past year.</p>
<p>For Go 1.24, we completely <a href="/blog/swisstable">redesigned the <code>map</code>
implementation</a>, building on the latest and greatest ideas in
hash table design. This change is completely transparent, and brings significant
improvements to <code>map</code> performance, lower tail latency of <code>map</code> operations, and
in some cases even significant memory wins.</p>
<p>Go 1.25 includes an experimental and significant advancement in Go’s garbage
collector called <a href="/blog/greenteagc">Green Tea</a>. Green Tea reduces garbage
collection overhead in many applications by at least 10% and sometimes as much
as 40%. It uses a novel algorithm designed for the capabilities and constraints
of today’s hardware and opens up a new design space that we’re eagerly
exploring. For example, in the forthcoming Go 1.26 release, Green Tea will
achieve an additional 10% reduction in garbage collector overhead on hardware
that supports AVX-512 vector instructions—something that would have been nigh
impossible to take advantage of in the old algorithm. Green Tea will be enabled
by default in Go 1.26; users need only upgrade their Go version to benefit.</p>
<h1 id="furthering-the-software-development-stack">Furthering the software development stack</h1>
<p>Go is about far more than the language and standard library. It’s a software
development platform, and over the past year, we’ve also made four regular
releases of the <a href="/gopls">gopls language server</a>, and have formed partnerships to
support emerging new frameworks for agentic applications.</p>
<p>Gopls provides Go support to VS Code and other LSP-powered editors and IDEs.
Every release sees a litany of features and improvements to the experience of
reading and writing Go code (see the <a href="/gopls/release/v0.17.0">v0.17.0</a>,
<a href="/gopls/release/v0.18.0">v0.18.0</a>, <a href="/gopls/release/v0.19.0">v0.19.0</a>, and
<a href="/gopls/release/v0.20.0">v0.20.0</a> release notes for full details, or our new
<a href="/gopls/features">gopls feature documentation</a>!). Some highlights include many
new and enhanced analyzers to help developers write more idiomatic and robust Go
code; refactoring support for variable extraction, variable inlining, and JSON
struct tags; and an <a href="/gopls/features/mcp">experimental built-in server</a> for the
Model Context Protocol (MCP) that exposes a subset of gopls’ functionality to AI
assistants in the form of MCP tools.</p>
<p>With gopls v0.18.0, we began exploring <em>automatic code modernizers</em>. As Go
evolves, every release brings new capabilities and new idioms; new and better
ways to do things that Go programmers have been finding other ways to do. Go
stands by its <a href="/doc/go1compat">compatibility promise</a>—the old way will continue
to work in perpetuity—but nevertheless this creates a bifurcation between old
idioms and new idioms. Modernizers are static analysis tools that recognize old
idioms and suggest faster, more readable, more secure, more <em>modern</em>
replacements, and do so with push-button reliability. What <code>gofmt</code> did for
<a href="/blog/gofmt">stylistic consistency</a>, we hope modernizers can do for idiomatic
consistency. We’ve integrated modernizers as IDE suggestions, where they can
help developers not only maintain more consistent coding standards, but where we
believe they will help developers discover new features and keep up with the
state of the art. We believe modernizers can also help AI coding assistants keep
up with the state of the art and combat their proclivity to reinforce outdated
knowledge of the Go language, APIs, and idioms. The upcoming Go 1.26 release
will include a total overhaul of the long-dormant <code>go fix</code> command to make it
apply the full suite of modernizers in bulk, a return to its <a href="/blog/introducing-gofix">pre-Go 1.0
roots</a>.</p>
<p>At the end of September, in collaboration with
<a href="https://www.anthropic.com/" rel="noreferrer" target="_blank">Anthropic</a> and the Go community, we released
<a href="https://github.com/modelcontextprotocol/go-sdk/releases/tag/v1.0.0" rel="noreferrer" target="_blank">v1.0.0</a> of
the <a href="https://github.com/modelcontextprotocol/go-sdk" rel="noreferrer" target="_blank">official Go SDK</a> for the
<a href="https://modelcontextprotocol.io/" rel="noreferrer" target="_blank">Model Context Protocol (MCP)</a>. This SDK
supports both MCP clients and MCP servers, and underpins the new MCP
functionality in gopls. Contributing this work in open source helps empower
other areas of the growing open source agentic ecosystem built around Go, such
as the recently released <a href="https://github.com/google/adk-go" rel="noreferrer" target="_blank">Agent Development Kit (ADK) for
Go</a> from <a href="https://www.google.com/" rel="noreferrer" target="_blank">Google</a>.
ADK Go builds on the Go MCP SDK to provide an idiomatic framework for building
modular multi-agent applications and systems. The Go MCP SDK and ADK Go
demonstrate how Go’s unique strengths in concurrency, performance, and
reliability differentiate Go for production AI development and we are expecting
more AI workloads to be written in Go in the coming years.</p>
<h1 id="looking-ahead">Looking ahead</h1>
<p>Go has an exciting year ahead of it.</p>
<p>We’re working on advancing developer productivity through the brand new <code>go fix</code>
command, deeper support for AI coding assistants, and ongoing improvements to
gopls and VS Code Go. General availability of the Green Tea garbage collector,
native support for Single Instruction Multiple Data (SIMD) hardware features,
and runtime and standard library support for writing code that scales even
better to massive multicore hardware will continue to align Go with modern
hardware and improve production efficiency. We’re focusing on Go’s “production
stack” libraries and diagnostics, including a massive (and long in the making)
<a href="/issue/71497">upgrade to <code>encoding/json</code></a>, driven by Joe Tsai and people across
the Go community; <a href="/design/74609-goroutine-leak-detection-gc">leaked goroutine
profiling</a>, contributed by
<a href="https://www.uber.com/us/en/about/" rel="noreferrer" target="_blank">Uber’s</a> Programming Systems team; and many
other improvements to <code>net/http</code>, <code>unicode</code>, and other foundational packages.
We’re working to provide well-lit paths for building with Go and AI, evolving
the language platform with care for the evolving needs of today’s developers,
and building tools and capabilities that help human developers and AI
assistants and systems alike.</p>
<p>On this 16th anniversary of Go’s open source release, we’re also looking to the
future of the Go open source project itself. From its <a href="https://www.youtube.com/watch?v=wwoWei-GAPo" rel="noreferrer" target="_blank">humble
beginnings</a>, Go has formed a
thriving contributor community. To continue to best meet the needs of our
ever-expanding user base, especially in a time of upheaval in the software
industry, we’re working on ways to better scale Go’s development
processes—without losing sight of Go’s fundamental principles—and more deeply
involve our wonderful contributor community.</p>
<p>Go would not be where it is today without our incredible user and contributor
communities. We wish you all the best in the coming year!</p>
</div>
</div>
<div class="Article prevnext">
<p>
<b>Next article: </b><a href="/blog/survey2025">Results from the 2025 Go Developer Survey</a><br>
<b>Previous article: </b><a href="/blog/greenteagc">The Green Tea Garbage Collector</a><br>
<b><a href="/blog/all">Blog Index</a></b>
</p>
</div>
</div>
</div>
<script src="/js/play.js"></script>
<div id="blog"><div id="content">
<div id="content">
<div class="Article" data-slug="/blog/greenteagc">
<h1 class="small"><a href="/blog/">The Go Blog</a></h1>
<h1>The Green Tea Garbage Collector</h1>
<p class="author">
Michael Knyszek and Austin Clements<br>
29 October 2025
</p>
<div class='markdown'>
<style type="text/css" scoped>
.centered {
position: relative;
display: flex;
flex-direction: column;
align-items: center;
}
div.carousel {
display: flex;
width: 100%;
height: auto;
overflow-x: auto;
scroll-snap-type: x mandatory;
padding-bottom: 1.1em;
}
.hide-overflow {
overflow-x: hidden !important;
}
button.scroll-button-left {
left: 0;
bottom: 0;
}
button.scroll-button-right {
right: 0;
bottom: 0;
}
button.scroll-button {
position: absolute;
font-size: 1em;
font-family: inherit;
font-style: oblique;
}
figure.carouselitem {
display: flex;
flex-direction: column;
align-items: center;
margin: 0;
padding: 0;
width: 100%;
flex-shrink: 0;
scroll-snap-align: start;
}
figure.carouselitem figcaption {
display: table-caption;
caption-side: top;
text-align: left;
width: 80%;
height: auto;
padding: 8px;
}
figure.captioned {
display: flex;
flex-direction: column;
align-items: center;
margin: 0 auto;
padding: 0;
width: 95%;
}
figure.captioned figcaption {
display: table-caption;
caption-side: top;
text-align: center;
font-style: oblique;
height: auto;
padding: 8px;
}
div.row {
display: flex;
flex-direction: row;
justify-content: center;
align-items: center;
width: 100%;
}
</style>
<noscript>
<center>
<i>For the best experience, view <a href="/blog/greenteagc">this blog post</a>
in a browser with JavaScript enabled.</i>
</center>
</noscript>
<p>Go 1.25 includes a new experimental garbage collector called Green Tea,
available by setting <code>GOEXPERIMENT=greenteagc</code> at build time.
Many workloads spend around 10% less time in the garbage collector, but some
workloads see a reduction of up to 40%!</p>
<p>It’s production-ready and already in use at Google, so we encourage you to
try it out.
We know some workloads don’t benefit as much, or even at all, so your feedback
is crucial to helping us move forward.
Based on the data we have now, we plan to make it the default in Go 1.26.</p>
<p>To report back with any problems, <a href="/issue/new">file a new issue</a>.</p>
<p>To report back with any successes, reply to <a href="/issue/73581">the existing Green Tea issue</a>.</p>
<p>What follows is a blog post based on Michael Knyszek’s GopherCon 2025 talk.</p>
<div class="iframe" style="aspect-ratio: 560 / 315">
<iframe src="https://www.youtube.com/embed/gPJkM95KpKo" width="100%" height="100%" frameborder="0" allowfullscreen mozallowfullscreen webkitallowfullscreen></iframe>
</div>
<h2 id="tracing-garbage-collection">Tracing garbage collection</h2>
<p>Before we discuss Green Tea, let’s all get on the same page about garbage
collection.</p>
<h3 id="objects-and-pointers">Objects and pointers</h3>
<p>The purpose of garbage collection is to automatically reclaim and reuse memory
no longer used by the program.</p>
<p>To this end, the Go garbage collector concerns itself with <em>objects</em> and
<em>pointers</em>.</p>
<p>In the context of the Go runtime, <em>objects</em> are Go values whose underlying
memory is allocated from the heap.
Heap objects are created when the Go compiler can’t figure out how else to allocate
memory for a value.
For example, the following code snippet allocates a single heap object: the backing
store for a slice of pointers.</p>
<pre><code>var x = make([]*int, 10) // global
</code></pre>
<p>The Go compiler can’t allocate the slice backing store anywhere except the heap,
since it’s very hard, and maybe even impossible, for it to know how long <code>x</code> will
refer to the object.</p>
<p><em>Pointers</em> are just numbers that indicate the location of a Go value in memory,
and they’re how a Go program references objects.
For example, to get the pointer to the beginning of the object allocated in the
last code snippet, we can write:</p>
<pre><code>&x[0] // 0xc000104000
</code></pre>
<h3 id="the-mark-sweep-algorithm">The mark-sweep algorithm</h3>
<p>Go’s garbage collector follows a strategy broadly referred to as <em>tracing garbage
collection</em>, which just means that the garbage collector follows, or traces, the
pointers in the program to identify which objects the program is still using.</p>
<p>More specifically, the Go garbage collector implements the mark-sweep algorithm.
This is much simpler than it sounds.
Imagine objects and pointers as a sort of graph, in the computer science sense.
Objects are nodes, pointers are edges.</p>
<p>The mark-sweep algorithm operates on this graph, and as the name might suggest,
proceeds in two phases.</p>
<p>In the first phase, the mark phase, it walks the object graph from well-defined
source edges called <em>roots</em>.
Think global and local variables.
Then, it <em>marks</em> everything it finds along the way as <em>visited</em>, to avoid going in
circles.
This is analogous to your typical graph flood algorithm, like a depth-first or
breadth-first search.</p>
<p>Next is the sweep phase.
Whatever objects were not visited in our graph walk are unused, or <em>unreachable</em>,
by the program.
We call this state unreachable because, through the semantics of the language,
it is impossible for normal safe Go code to access that memory anymore.
To complete the sweep phase, the algorithm simply iterates through all the
unvisited nodes and marks their memory as free, so the memory allocator can reuse
it.</p>
<h3 id="thats-it">That’s it?</h3>
<p>You may think I’m oversimplifying a bit here.
Garbage collectors are frequently referred to as <em>magic</em> and <em>black boxes</em>.
And you’d be partially right: there are more complexities.</p>
<p>For example, this algorithm is, in practice, executed concurrently with your
regular Go code.
Walking a graph that’s mutating underneath you brings challenges.
We also parallelize this algorithm, which is a detail that’ll come up again
later.</p>
<p>But trust me when I tell you that these details are mostly separate from the
core algorithm.
It really is just a simple graph flood at the center.</p>
<h3 id="graph-flood-example">Graph flood example</h3>
<p>Let’s walk through an example.
Navigate through the slideshow below to follow along.</p>
<noscript>
<i>Scroll horizontally through the slideshow!</i>
<br />
<br />
Consider viewing with JavaScript enabled, which will add "Previous" and "Next"
buttons.
This will let you click through the slideshow without the scrolling motion,
which will better highlight differences between the diagrams.
<br />
<br />
</noscript>
<div class="centered">
<button type="button" id="marksweep-prev" class="scroll-button scroll-button-left" hidden disabled>← Prev</button>
<button type="button" id="marksweep-next" class="scroll-button scroll-button-right" hidden>Next →</button>
<div id="marksweep" class="carousel">
<figure class="carouselitem">
<img src="greenteagc/marksweep-007.png" />
<figcaption>
Here we have a diagram of some global variables and the Go heap.
Let's break it down, piece by piece.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-008.png" />
<figcaption>
On the left here we have our roots.
These are global variables x and y.
They will be the starting point of our graph walk.
Since they're marked blue, according to our handy legend in the bottom left, they're currently on our work list.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-009.png" />
<figcaption>
On the right side, we have our heap.
Currently, everything in our heap is grayed out because we haven't visited any of it yet.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-010.png" />
<figcaption>
Each one of these rectangles represents an object.
Each object is labeled with its type.
This object in particular is an object of type T, whose type definition is on the top left.
It's got a pointer to an array of children, and some value.
We can surmise that this is some kind of recursive tree data structure.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-011.png" />
<figcaption>
In addition to the objects of type T, you'll also notice that we have array objects containing *Ts.
These are pointed to by the "children" field of objects of type T.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-012.png" />
<figcaption>
Each square inside of the rectangle represents 8 bytes of memory.
A square with a dot is a pointer.
If it has an arrow, it is a non-nil pointer pointing to some other object.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-013.png" />
<figcaption>
And if it doesn't have a corresponding arrow, then it's a nil pointer.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-014.png" />
<figcaption>
Next, these dotted rectangles represent free space, what I'll call a free "slot." We could put an object there, but there currently isn't one.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-015.png" />
<figcaption>
You'll also notice that objects are grouped together by these labeled, dotted rounded rectangles.
Each of these represents a <i>page</i>, which is a contiguous
block of fixed-size, aligned memory.
In Go, pages are 8 KiB (regardless of the hardware virtual
memory page size).
These pages are labeled A, B, C, and D, and I'll refer to them that way.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-015.png" />
<figcaption>
In this diagram, each object is allocated as part of some page.
Like in the real implementation, each page here only contains objects of a certain size.
This is just how the Go heap is organized.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-016.png" />
<figcaption>
Pages are also how we organize per-object metadata.
Here you can see seven boxes, each corresponding to one of the seven object slots in page A.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-016.png" />
<figcaption>
Each box represents one bit of information: whether or not we have seen the object before.
This is actually how the real runtime manages whether an object has been visited, and it'll be an important detail later.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-017.png" />
<figcaption>
That was a lot of detail, so thanks for reading along.
This will all come into play later.
For now, let's just see how our graph flood applies to this picture.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-018.png" />
<figcaption>
We start by taking a root off of the work list.
We mark it red to indicate that it's now active.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-019.png" />
<figcaption>
Following that root's pointer, we find an object of type T, which we add to our work list.
Following our legend, we draw the object in blue to indicate that it's on our work list.
Note also that we set the seen bit corresponding to this object in our metadata.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-020.png" />
<figcaption>
Same goes for the next root.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-021.png" />
<figcaption>
Now that we've taken care of all the roots, we're left with two objects on our work list.
Let's take an object off the work list.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-022.png" />
<figcaption>
What we're going to do now is walk the pointers of the object to find more objects.
By the way, we call walking the pointers of an object "scanning" the object.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-023.png" />
<figcaption>
We find this valid array object…
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-024.png" />
<figcaption>
… and add it to our work list.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-025.png" />
<figcaption>
From here, we proceed recursively.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-026.png" />
<figcaption>
We walk the array's pointers.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-027.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-028.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-029.png" />
<figcaption>
Find some more objects…
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-030.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-031.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-032.png" />
<figcaption>
Then we walk the objects that the array object referred to!
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-033.png" />
<figcaption>
And note that we still have to walk over all pointers, even if they're nil.
We don't know ahead of time if they will be.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-034.png" />
<figcaption>
One more object down this branch…
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-035.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-036.png" />
<figcaption>
And now we've reached the other branch, starting from that object in page A we found much earlier from one of the roots.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-036.png" />
<figcaption>
You may be noticing a last-in-first-out discipline for our work list here, indicating that our work list is a stack, and hence our graph flood is approximately depth-first.
This is intentional, and reflects the actual graph flood algorithm in the Go runtime.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-037.png" />
<figcaption>
Let's keep going…
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-038.png" />
<figcaption>
Next we find another array object…
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-039.png" />
<figcaption>
And walk it…
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-040.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-041.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-042.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-043.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-044.png" />
<figcaption>
Just one more object left on our work list…
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-045.png" />
<figcaption>
Let's scan it…
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-046.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-047.png" />
<figcaption>
And we're done with the mark phase! There's nothing we're actively working on and there's nothing left on our work list.
Every object drawn in black is reachable, and every object drawn in gray is unreachable.
Let's sweep the unreachable objects, all in one go.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/marksweep-048.png" />
<figcaption>
We've converted those objects into free slots, ready to hold new objects.
</figcaption>
</figure>
</div>
</div>
<h2 id="the-problem">The problem</h2>
<p>After all that, I think we have a handle on what the Go garbage collector is actually doing.
This process seems to work well enough today, so what’s the problem?</p>
<p>Well, it turns out we can spend <em>a lot</em> of time executing this particular algorithm in some
programs, and it adds substantial overhead to nearly every Go program.
It’s not that uncommon to see Go programs spending 20% or more of their CPU time in the
garbage collector.</p>
<p>Let’s break down where that time is being spent.</p>
<h3 id="garbage-collection-costs">Garbage collection costs</h3>
<p>At a high level, there are two parts to the cost of the garbage collector.
The first is how often it runs, and the second is how much work it does each time it runs.
Multiply those two together, and you get the total cost of the garbage collector.</p>
<figure class="captioned">
<figcaption>
Total GC cost = Number of GC cycles × Average cost per GC cycle
</figcaption>
</figure>
<p>Over the years we’ve tackled both terms in this equation, and for more on <em>how often</em> the garbage
collector runs, see <a href="https://www.youtube.com/watch?v=07wduWyWx8M" rel="noreferrer" target="_blank">Michael’s GopherCon EU talk from 2022</a>
about memory limits.
<a href="/doc/gc-guide">The guide to the Go garbage collector</a> also has a lot to say about this topic,
and is worth a look if you want to dive deeper.</p>
<p>But for now let’s focus only on the second part, the cost per cycle.</p>
<p>From years of poring over CPU profiles to try to improve performance, we know two big things
about Go’s garbage collector.</p>
<p>The first is that about 90% of the cost of the garbage collector is spent marking,
and only about 10% is sweeping.
Sweeping turns out to be much easier to optimize than marking,
and Go has had a very efficient sweeper for many years.</p>
<p>The second is that, of that time spent marking, a substantial portion, usually at least 35%, is
simply spent <em>stalled</em> on accessing heap memory.
This is bad enough on its own, but it also completely gums up the works of what makes modern CPUs
actually fast.</p>
<h3 id="a-microarchitectural-disaster">“A microarchitectural disaster”</h3>
<p>What does “gum up the works” mean in this context?
The specifics of modern CPUs can get pretty complicated, so let’s use an analogy.</p>
<p>Imagine the CPU driving down a road, where that road is your program.
The CPU wants to ramp up to a high speed, and to do that it needs to be able to see far ahead of it,
and the way needs to be clear.
But the graph flood algorithm is like driving through city streets for the CPU.
The CPU can’t see around corners and it can’t predict what’s going to happen next.
To make progress, it constantly has to slow down to make turns, stop at traffic lights, and avoid
pedestrians.
It hardly matters how fast your engine is because you never get a chance to get going.</p>
<p>Let’s make that more concrete by looking at our example again.
I’ve overlaid the heap here with the path that we took.
Each left-to-right arrow represents a piece of scanning work that we did
and the dashed arrows show how we jumped around between bits of scanning work.</p>
<figure class="captioned">
<img src="greenteagc/graphflood-path.png" />
<figcaption>
The path through the heap the garbage collector took in our graph flood example.
</figcaption>
</figure>
<p>Notice that we were jumping all over memory doing tiny bits of work in each place.
In particular, we’re frequently jumping between pages, and between different parts of pages.</p>
<p>Modern CPUs do a lot of caching.
Going to main memory can be up to 100x slower than accessing memory that’s in our cache.
CPU caches are populated with memory that’s been recently accessed, and memory that’s nearby to
recently accessed memory.
But there’s no guarantee that any two objects that point to each other will <em>also</em> be close to each
other in memory.
The graph flood doesn’t take this into account.</p>
<p>Quick side note: if we were just stalling fetches to main memory, it might not be so bad.
CPUs issue memory requests asynchronously, so even slow ones could overlap if the CPU could see
far enough ahead.
But in the graph flood, every bit of work is small, unpredictable, and highly dependent on the
last, so the CPU is forced to wait on nearly every individual memory fetch.</p>
<p>And unfortunately for us, this problem is only getting worse.
There’s an adage in the industry of “wait two years and your code will get faster.”</p>
<p>But Go, as a garbage collected language that relies on the mark-sweep algorithm, risks the opposite.
“Wait two years and your code will get slower.”
The trends in modern CPU hardware are creating new challenges for garbage collector performance:</p>
<p><strong>Non-uniform memory access.</strong>
For one, memory now tends to be associated with subsets of CPU cores.
Accesses by <em>other</em> CPU cores to that memory are slower than before.
In other words, the cost of a main memory access <a href="https://jprahman.substack.com/p/sapphire-rapids-core-to-core-latency" rel="noreferrer" target="_blank">depends on which CPU core is accessing
it</a>.
It’s non-uniform, so we call this non-uniform memory access, or NUMA for short.</p>
<p><strong>Reduced memory bandwidth.</strong>
Available memory bandwidth per CPU is trending downward over time.
This just means that while we have more CPU cores, each core can submit relatively fewer
requests to main memory, forcing non-cached requests to wait longer than before.</p>
<p><strong>Ever more CPU cores.</strong>
Above, we looked at a sequential marking algorithm, but the real garbage collector performs this
algorithm in parallel.
This scales well to a limited number of CPU cores, but the shared queue of objects to scan becomes
a bottleneck, even with careful design.</p>
<p><strong>Modern hardware features.</strong>
New hardware has fancy features like vector instructions, which let us operate on a lot of data at once.
While this has the potential for big speedups, it’s not immediately clear how to make that work for
marking, since marking consists of so many irregular, often small pieces of work.</p>
<h2 id="green-tea">Green Tea</h2>
<p>Finally, this brings us to Green Tea, our new approach to the mark-sweep algorithm.
The key idea behind Green Tea is astonishingly simple:</p>
<p><em>Work with pages, not objects.</em></p>
<p>Sounds trivial, right?
And yet, it took a lot of work to figure out how to order the object graph walk and what we needed to
track to make this work well in practice.</p>
<p>More concretely, this means:</p>
<ul>
<li>Instead of scanning objects we scan whole pages.</li>
<li>Instead of tracking objects on our work list, we track whole pages.</li>
<li>We still need to mark objects at the end of the day, but we’ll track marked objects locally to each
page, rather than across the whole heap.</li>
</ul>
<h3 id="green-tea-example">Green Tea example</h3>
<p>Let’s see what this means in practice by looking at our example heap again, but this time
running Green Tea instead of the straightforward graph flood.</p>
<p>As above, navigate through the annotated slideshow to follow along.</p>
<noscript>
<i>Scroll horizontally through the slideshow!</i>
<br />
<br />
Consider viewing with JavaScript enabled, which will add "Previous" and "Next"
buttons.
This will let you click through the slideshow without the scrolling motion,
which will better highlight differences between the diagrams.
<br />
<br />
</noscript>
<div class="centered">
<button type="button" id="greentea-prev" class="scroll-button scroll-button-left" hidden disabled>← Prev</button>
<button type="button" id="greentea-next" class="scroll-button scroll-button-right" hidden>Next →</button>
<div id="greentea" class="carousel">
<figure class="carouselitem">
<img src="greenteagc/greentea-060.png" />
<figcaption>
This is the same heap as before, but now with two bits of metadata per object rather than one.
Again, each bit, or box, corresponds to one of the object slots in the page.
In total, we now have fourteen bits that correspond to the seven slots in page A.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-060.png" />
<figcaption>
The top bits represent the same thing as before: whether or not we've seen a pointer to the object.
I'll call these the "seen" bits.
The bottom set of bits are new.
These "scanned" bits track whether or not we've <i>scanned</i> the object.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-060.png" />
<figcaption>
This new piece of metadata is necessary because, in Green Tea, <b>the work list tracks pages,
not objects</b>.
We still need to track objects at some level, and that's the purpose of these bits.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-062.png" />
<figcaption>
We start off the same as before, walking objects from the roots.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-063.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-064.png" />
<figcaption>
But this time, instead of putting an object on the work list,
we put a whole page–in this case page A–on the work list,
indicated by shading the whole page blue.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-066.png" />
<figcaption>
The object we found is also blue to indicate that when we do take this page off of the work list, we will need to look at that object.
Note that the object's blue hue directly reflects the metadata in page A.
Its corresponding seen bit is set, but its scanned bit is not.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-069.png" />
<figcaption>
We follow the next root, find another object, and again put the whole page–page C–on the work list and set the object's seen bit.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-071.png" />
<figcaption>
We're done following roots, so we turn to the work list and take page A off the work list.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-072.png" />
<figcaption>
Using the seen and scanned bits, we can tell there's one object to scan on page A.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-074.png" />
<figcaption>
We scan that object, following its pointers.
And as a result, we add page B to the work list, since the first object in page A points to an object in page B.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-075.png" />
<figcaption>
We're done with page A.
Next we take page C off the work list.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-076.png" />
<figcaption>
Similar to page A, there's a single object on page C to scan.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-078.png" />
<figcaption>
We found a pointer to another object in page B.
Page B is already on the work list, so we don't need to add anything to the work list.
We simply have to set the seen bit for the target object.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-079.png" />
<figcaption>
Now it's page B's turn.
We've accumulated two objects to scan on page B,
and we can process both of these objects in a row, in memory order!
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-081.png" />
<figcaption>
We walk the pointers of the first object…
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-082.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-083.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-084.png" />
<figcaption>
We find a pointer to an object in page A.
Page A was previously on the work list, but isn't at this point, so we put it back on the work list.
Unlike the original mark-sweep algorithm, where any given object is only added to the work list at
most once per whole mark phase, in Green Tea, a given page can reappear on the work list several times
during a mark phase.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-085.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-086.png" />
<figcaption>
We scan the second seen object in the page immediately after the first.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-087.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-088.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-089.png" />
<figcaption>
We find a few more objects in page A…
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-090.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-091.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-092.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-093.png" />
<figcaption>
We're done scanning page B, so we pull page A off the work list.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-094.png" />
<figcaption>
This time we only need to scan three objects, not four,
since we already scanned the first object.
We know which objects to scan by looking at the difference between the "seen" and "scanned" bits.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-095.png" />
<figcaption>
We'll scan these objects in sequence.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-096.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-097.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-098.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-099.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-100.png" />
<figcaption>
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-101.png" />
<figcaption>
We're done! There are no more pages on the work list and there's nothing we're actively looking at.
Notice that the metadata now all lines up nicely, since all reachable objects were both seen and scanned.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-101.png" />
<figcaption>
You may have also noticed during our traversal that the work list order is a little different from the graph flood.
Where the graph flood had a last-in-first-out, or stack-like, order, here we're using a first-in-first-out, or queue-like, order for the pages on our work list.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-101.png" />
<figcaption>
This is intentional.
We let seen objects accumulate on each page while the page sits on the queue, so we can process as many as we can at once.
That's how we were able to hit so many objects on page A at once.
Sometimes laziness is a virtue.
</figcaption>
</figure>
<figure class="carouselitem">
<img src="greenteagc/greentea-102.png" />
<figcaption>
And finally we can sweep away the unvisited objects, as before.
</figcaption>
</figure>
</div>
</div>
<h3 id="getting-on-the-highway">Getting on the highway</h3>
<p>Let’s come back around to our driving analogy.
Are we finally getting on the highway?</p>
<p>Let’s recall our graph flood picture before.</p>
<figure class="captioned">
<img src="greenteagc/graphflood-path2.png" />
<figcaption>
The path the original graph flood took through the heap required 7 separate scans.
</figcaption>
</figure>
<p>We jumped around a whole lot, doing little bits of work in different places.
The path taken by Green Tea looks very different.</p>
<figure class="captioned">
<img src="greenteagc/greentea-path.png" />
<figcaption>
The path taken by Green Tea requires only 4 scans.
</figcaption>
</figure>
<p>Green Tea, in contrast, makes fewer, longer left-to-right passes over pages A and B.
The longer these arrows, the better, and with bigger heaps, this effect can be much stronger.
<em>That’s</em> the magic of Green Tea.</p>
<p>It’s also our opportunity to ride the highway.</p>
<p>This all adds up to a better fit with the microarchitecture.
We can now scan objects closer together with much higher probability, so
there’s a better chance we can make use of our caches and avoid main memory.
Likewise, per-page metadata is more likely to be in cache.
Tracking pages instead of objects means work lists are smaller,
and less pressure on work lists means less contention and fewer CPU stalls.</p>
<p>And speaking of the highway, we can take our metaphorical engine into gears we’ve never been able
to use before, since now we can use vector hardware!</p>
<h3 id="vector-acceleration">Vector acceleration</h3>
<p>If you’re only vaguely familiar with vector hardware, you might be confused as to how we can use it here.
But besides the usual arithmetic and trigonometric operations,
recent vector hardware supports two things that are valuable for Green Tea:
very wide registers, and sophisticated bit-wise operations.</p>
<p>Most modern x86 CPUs support AVX-512, which has 512-bit wide vector registers.
This is wide enough to hold all of the metadata for an entire page in just two registers,
right on the CPU, enabling Green Tea to work on an entire page in just a few straight-line
instructions.
Vector hardware has long supported basic bit-wise operations on whole vector registers, but starting
with AMD Zen 4 and Intel Ice Lake, it also supports a new bit vector “Swiss army knife” instruction
that enables a key step of the Green Tea scanning process to be done in just a few CPU cycles.
Together, these allow us to turbo-charge the Green Tea scan loop.</p>
<p>This wasn’t even an option for the graph flood, where we’d be jumping between scanning objects that
are all sorts of different sizes.
Sometimes you needed two bits of metadata and sometimes you needed ten thousand.
There simply wasn’t enough predictability or regularity to use vector hardware.</p>
<p>If you want to nerd out on some of the details, read along!
Otherwise, feel free to skip ahead to the <a href="#evaluation">evaluation</a>.</p>
<h4 id="avx-512-scanning-kernel">AVX-512 scanning kernel</h4>
<p>To get a sense of what AVX-512 GC scanning looks like, take a look at the diagram below.</p>
<figure class="captioned">
<img src="greenteagc/avx512.svg" />
<figcaption>
The AVX-512 vector kernel for scanning.
</figcaption>
</figure>
<p>There’s a lot going on here and we could probably fill an entire blog post just on how this works.
For now, let’s just break it down at a high level:</p>
<ol>
<li>
<p>First we fetch the “seen” and “scanned” bits for a page.
Recall, these are one bit per object in the page, and all objects in a page have the same size.</p>
</li>
<li>
<p>Next, we compare the two bit sets.
Their union becomes the new “scanned” bits, while their difference is the “active objects” bitmap,
which tells us which objects we need to scan in this pass over the page (versus previous passes).</p>
</li>
<li>
<p>We take the difference of the bitmaps and “expand” it, so that instead of one bit per object,
we have one bit per word (8 bytes) of the page.
We call this the “active words” bitmap.
For example, if the page stores 6-word (48-byte) objects, each bit in the active objects bitmap
will be copied to 6 bits in the active words bitmap.
Like so:</p>
</li>
</ol>
<figure class="captioned">
<div class="row"><pre>0 0 1 1 ...</pre> → <pre>000000 000000 111111 111111 ...</pre></div>
</figure>
<ol start="4">
<li>
<p>Next we fetch the pointer/scalar bitmap for the page.
Here, too, each bit corresponds to a word (8 bytes) of the page, and it tells us whether that word
stores a pointer.
This data is managed by the memory allocator.</p>
</li>
<li>
<p>Now, we take the intersection of the pointer/scalar bitmap and the active words bitmap.
The result is the “active pointer bitmap”: a bitmap giving the location of every pointer in
the page that lies within a live object we haven’t scanned yet.</p>
</li>
<li>
<p>Finally, we can iterate over the memory of the page and collect all the pointers.
Logically, we iterate over each set bit in the active pointer bitmap,
load the pointer value at that word, and write it back to a buffer that
will later be used to mark objects seen and add pages to the work list.
Using vector instructions, we’re able to do this 64 bytes at a time,
in just a couple instructions.</p>
</li>
</ol>
<p>Part of what makes this fast is the <code>VGF2P8AFFINEQB</code> instruction,
part of the “Galois Field New Instructions” x86 extension,
and the bit manipulation Swiss army knife we referred to above.
It’s the real star of the show, since it lets us do step (3) in the scanning kernel very, very
efficiently.
It performs a bit-wise <a href="https://en.wikipedia.org/wiki/Affine_transformation" rel="noreferrer" target="_blank">affine
transformation</a>,
treating each byte in a vector as itself a mathematical vector of 8 bits
and multiplying it by an 8x8 bit matrix.
This is all done over the <a href="https://en.wikipedia.org/wiki/Finite_field" rel="noreferrer" target="_blank">Galois field</a> <code>GF(2)</code>,
which just means multiplication is AND and addition is XOR.
The upshot of this is that we can define a few 8x8 bit matrices for each
object size that perform exactly the 1:n bit expansion we need.</p>
<p>For the full assembly code, see <a href="https://cs.opensource.google/go/go/+/master:src/internal/runtime/gc/scan/scan_amd64.s;l=23;drc=041f564b3e6fa3f4af13a01b94db14c1ee8a42e0" rel="noreferrer" target="_blank">this
file</a>.
The “expanders” use different matrices and different permutations for each size class,
so they’re in a <a href="https://cs.opensource.google/go/go/+/master:src/internal/runtime/gc/scan/expand_amd64.s;drc=041f564b3e6fa3f4af13a01b94db14c1ee8a42e0" rel="noreferrer" target="_blank">separate file</a>
that’s written by a <a href="https://cs.opensource.google/go/go/+/master:src/internal/runtime/gc/scan/mkasm.go;drc=041f564b3e6fa3f4af13a01b94db14c1ee8a42e0" rel="noreferrer" target="_blank">code generator</a>.
Aside from the expansion functions, it’s really not a lot of code.
Most of it is dramatically simplified by the fact that we can perform most of the above
operations on data that sits purely in registers.
And, hopefully soon this assembly code <a href="/issue/73787">will be replaced with Go code</a>!</p>
<p>Credit to Austin Clements for devising this process.
It’s incredibly cool, and incredibly fast!</p>
<h3 id="evaluation">Evaluation</h3>
<p>So that’s it for how it works.
How much does it actually help?</p>
<p>It can be quite a lot.
Even without the vector enhancements, we see reductions in garbage collection CPU costs
between 10% and 40% in our benchmark suite.
For example, if an application spends 10% of its time in the garbage collector, then that
would translate to between a 1% and 4% overall CPU reduction, depending on the specifics of
the workload.
A 10% reduction in garbage collection CPU time is roughly the modal improvement.
(See the <a href="/issue/73581">GitHub issue</a> for some of these details.)</p>
<p>We’ve rolled Green Tea out inside Google, and we see similar results at scale.</p>
<p>We’re still rolling out the vector enhancements,
but benchmarks and early results suggest this will net an additional 10% GC CPU reduction.</p>
<p>While most workloads benefit to some degree, there are some that don’t.</p>
<p>Green Tea is based on the hypothesis that we can accumulate enough objects to scan on a
single page in one pass to counteract the costs of the accumulation process.
This is clearly the case if the heap has a very regular structure: objects of the same size at a
similar depth in the object graph.
But there are some workloads that often require us to scan only a single object per page at a time.
This is potentially worse than the graph flood because we might be doing more work than before while
trying to accumulate objects on pages and failing.</p>
<p>The implementation of Green Tea has a special case for pages that have only a single object to scan.
This helps reduce regressions, but doesn’t completely eliminate them.</p>
<p>However, it takes a lot less per-page accumulation to outperform the graph flood
than you might expect.
One surprising result of this work was that scanning a mere 2% of a page at a time
can still yield improvements over the graph flood.</p>
<h3 id="availability">Availability</h3>
<p>Green Tea is already available as an experiment in the recent Go 1.25 release and can be enabled
by setting the environment variable <code>GOEXPERIMENT</code> to <code>greenteagc</code> at build time.
This doesn’t include the aforementioned vector acceleration.</p>
<p>We expect to make it the default garbage collector in Go 1.26, but you’ll still be able to opt-out
with <code>GOEXPERIMENT=nogreenteagc</code> at build time.
Go 1.26 will also add vector acceleration on newer x86 hardware, and include a whole bunch of
tweaks and improvements based on feedback we’ve collected so far.</p>
<p>If you can, we encourage you to try it at Go tip-of-tree!
If you prefer to use Go 1.25, we’d still love your feedback.
See <a href="/issue/73581#issuecomment-2847696497">this GitHub
comment</a> with some details on
what diagnostics we’d be interested in seeing, if you can share, and the preferred channels for
reporting feedback.</p>
<h2 id="the-journey">The journey</h2>
<p>Before we wrap up this blog post, let’s take a moment to talk about the journey that got us here.
The human element of the technology.</p>
<p>The core of Green Tea may seem like a single, simple idea.
Like the spark of inspiration that just one single person had.</p>
<p>But that’s not true at all.
Green Tea is the result of work and ideas from many people over several years.
Several people on the Go team contributed to the ideas, including Michael Pratt, Cherry Mui, David
Chase, and Keith Randall.
Microarchitectural insights from Yves Vandriessche, who was at Intel at the time, also really helped
direct the design exploration.
There were a lot of ideas that didn’t work, and there were a lot of details that needed figuring out.
Just to make this single, simple idea viable.</p>
<figure class="captioned">
<img src="greenteagc/timeline.png" />
<figcaption>
A timeline depicting a subset of the ideas we tried in this vein before getting to
where we are today.
</figcaption>
</figure>
<p>The seeds of this idea go all the way back to 2018.
What’s funny is that everyone on the team thinks someone else thought of this initial idea.</p>
<p>Green Tea got its name in 2024 when Austin worked out a prototype of an earlier version while cafe
crawling in Japan and drinking LOTS of matcha!
This prototype showed that the core idea of Green Tea was viable.
And from there we were off to the races.</p>
<p>Throughout 2025, as Michael implemented and productionized Green Tea, the ideas evolved and changed even
further.</p>
<p>This took so much collaborative exploration because Green Tea is not just an algorithm, but an entire
design space.
One that we don’t think any of us could’ve navigated alone.
It’s not enough to just have the idea, but you need to figure out the details and prove it.
And now that we’ve done it, we can finally iterate.</p>
<p>The future of Green Tea is bright.</p>
<p>Once again, please try it out by setting <code>GOEXPERIMENT=greenteagc</code> and let us know how it goes!
We’re really excited about this work and want to hear from you!</p>
<script src="greenteagc/carousel.js"></script>
</div>
</div>
<div class="Article prevnext">
<p>
<b>Next article: </b><a href="/blog/16years">Go’s Sweet 16</a><br>
<b>Previous article: </b><a href="/blog/flight-recorder">Flight Recorder in Go 1.25</a><br>
<b><a href="/blog/all">Blog Index</a></b>
</div>
</div>
</div>
<script src="/js/play.js"></script>
<div id="blog"><div id="content">
<div id="content">
<div class="Article" data-slug="/blog/flight-recorder">
<h1 class="small"><a href="/blog/">The Go Blog</a></h1>
<h1>Flight Recorder in Go 1.25</h1>
<p class="author">
Carlos Amedee and Michael Knyszek<br>
26 September 2025
</p>
<div class='markdown'>
<p>In 2024 we introduced the world to
<a href="/blog/execution-traces-2024">more powerful Go execution traces</a>. In that blog post
we gave a sneak peek into some of the new functionality we could unlock with our new execution
tracer, including <em>flight recording</em>. We’re happy to announce that flight recording is now
available in Go 1.25, and it’s a powerful new tool in the Go diagnostics toolbox.</p>
<h2 id="execution-traces">Execution traces</h2>
<p>First, a quick recap on Go execution traces.</p>
<p>The Go runtime can be made to write a log recording many of the events that happen during
the execution of a Go application. That log is called a runtime execution trace.
Go execution traces contain a plethora of information about how goroutines interact with each
other and the underlying system. This makes them very handy for debugging latency issues, since
they tell you both when your goroutines are executing, and crucially, when they are not.</p>
<p>The <a href="/pkg/runtime/trace">runtime/trace</a> package provides an API for collecting
an execution trace over a given time window by calling <code>runtime/trace.Start</code> and <code>runtime/trace.Stop</code>.
This works well if the code you’re tracing is just a test, microbenchmark, or command line
tool. You can collect a trace of the complete end-to-end execution, or just the parts you care about.</p>
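<p>As a refresher, collecting a whole-program trace takes only a few lines. The sketch below is not from this post (the output file name is an assumption); it simply brackets the code of interest with <code>Start</code> and <code>Stop</code>:</p>
<pre><code class="language-go">package main

import (
	"log"
	"os"
	"runtime/trace"
)

func main() {
	// Write the trace to a file that go tool trace can read later.
	f, err := os.Create("trace.out")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Start streams trace events to f until Stop is called.
	if err := trace.Start(f); err != nil {
		log.Fatal(err)
	}
	defer trace.Stop()

	// ... the code you want to trace ...
}
</code></pre>
<p>Afterwards, <code>go tool trace trace.out</code> opens the result in the trace viewer.</p>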
<p>However, in long-running web services, the kinds of applications Go is known for, that’s not
good enough. Web servers might be up for days or even weeks, and collecting a trace of the
entire execution would produce far too much data to sift through. Often just one part
of the program’s execution goes wrong, like a request timing out or a failed health
check. By the time it happens it’s already too late to call <code>Start</code>!</p>
<p>One way to approach this problem is to randomly sample execution traces from across the fleet.
While this approach is powerful, and can help find issues before they become outages, it
requires a lot of infrastructure to get going. Large quantities of execution trace data
would need to be stored, triaged, and processed, much of which won’t contain anything
interesting at all. And when you’re trying to get to the bottom of a specific issue,
it’s a non-starter.</p>
<h2 id="flight-recording">Flight recording</h2>
<p>This brings us to the flight recorder.</p>
<p>A program often knows when something has gone wrong, but the root cause may have happened
long ago. The flight recorder lets you collect a trace of the last few seconds of
execution leading up to the moment a program detects there’s been a problem.</p>
<p>The flight recorder collects the execution trace as normal, but instead of writing it out to
a socket or a file, it buffers the last few seconds of the trace in memory. At any point,
the program can request the contents of the buffer and snapshot exactly the problematic
window of time. The flight recorder is like a scalpel cutting directly to the problem area.</p>
<h2 id="example">Example</h2>
<p>Let’s learn how to use the flight recorder with an example. Specifically, let’s use it to
diagnose a performance problem with an HTTP server that implements a “guess the number” game.
It exposes a <code>/guess-number</code> endpoint that accepts an integer and responds to the caller
informing them if they guessed the right number. There is also a goroutine that, once per
minute, sends a report of all the guessed numbers to another service via an HTTP request.</p>
<pre><code>// bucket is a simple mutex-protected counter.
type bucket struct {
	mu      sync.Mutex
	guesses int
}

func main() {
	// Make one bucket for each valid number a client could guess.
	// The HTTP handler will look up the guessed number in buckets by
	// using the number as an index into the slice.
	buckets := make([]bucket, 100)

	// Every minute, we send a report of how many times each number was guessed.
	go func() {
		for range time.Tick(1 * time.Minute) {
			sendReport(buckets)
		}
	}()

	// Choose the number to be guessed.
	answer := rand.Intn(len(buckets))

	http.HandleFunc("/guess-number", func(w http.ResponseWriter, r *http.Request) {
		start := time.Now()

		// Fetch the number from the URL query variable "guess" and convert it
		// to an integer. Then, validate it.
		guess, err := strconv.Atoi(r.URL.Query().Get("guess"))
		if err != nil || !(0 <= guess && guess < len(buckets)) {
			http.Error(w, "invalid 'guess' value", http.StatusBadRequest)
			return
		}

		// Select the appropriate bucket and safely increment its value.
		b := &buckets[guess]
		b.mu.Lock()
		b.guesses++
		b.mu.Unlock()

		// Respond to the client with the guess and whether it was correct.
		fmt.Fprintf(w, "guess: %d, correct: %t", guess, guess == answer)
		log.Printf("HTTP request: endpoint=/guess-number guess=%d duration=%s", guess, time.Since(start))
	})
	log.Fatal(http.ListenAndServe(":8090", nil))
}

// sendReport posts the current state of buckets to a remote service.
func sendReport(buckets []bucket) {
	counts := make([]int, len(buckets))
	for index := range buckets {
		b := &buckets[index]
		b.mu.Lock()
		defer b.mu.Unlock()
		counts[index] = b.guesses
	}

	// Marshal the report data into a JSON payload.
	b, err := json.Marshal(counts)
	if err != nil {
		log.Printf("failed to marshal report data: error=%s", err)
		return
	}
	url := "http://localhost:8091/guess-number-report"
	if _, err := http.Post(url, "application/json", bytes.NewReader(b)); err != nil {
		log.Printf("failed to send report: %s", err)
	}
}
</code></pre>
<p>Here is the full code for the server:
<a href="/play/p/rX1eyKtVglF">https://go.dev/play/p/rX1eyKtVglF</a>, and for a simple client:
<a href="/play/p/2PjQ-1ORPiw">https://go.dev/play/p/2PjQ-1ORPiw</a>. To avoid a third
process, the “client” also implements the report server, though in a real system this would
be separate.</p>
<p>Let’s suppose that after deploying the application in production, we received complaints from
users that some <code>/guess-number</code> calls were taking longer than expected. When we look at our
logs, we see that sometimes response times exceed 100 milliseconds, while the majority of calls
are on the order of microseconds.</p>
<pre><code>2025/09/19 16:52:02 HTTP request: endpoint=/guess-number guess=69 duration=625ns
2025/09/19 16:52:02 HTTP request: endpoint=/guess-number guess=62 duration=458ns
2025/09/19 16:52:02 HTTP request: endpoint=/guess-number guess=42 duration=1.417µs
2025/09/19 16:52:02 HTTP request: endpoint=/guess-number guess=86 duration=115.186167ms
2025/09/19 16:52:02 HTTP request: endpoint=/guess-number guess=0 duration=127.993375ms
</code></pre>
<p>Before we continue, take a minute and see if you can spot what’s wrong!</p>
<p>Regardless of whether you found the problem or not, let’s dive deeper and see how we can
find the problem from first principles. In particular, it would be great if we could see
what the application was doing in the time leading up to the slow response. This is exactly
what the flight recorder was built for! We’ll use it to capture an execution trace once
we see the first response exceeding 100 milliseconds.</p>
<p>First, in <code>main</code>, we’ll configure and start the flight recorder:</p>
<pre><code>// Set up the flight recorder.
fr := trace.NewFlightRecorder(trace.FlightRecorderConfig{
	MinAge:   200 * time.Millisecond,
	MaxBytes: 1 << 20, // 1 MiB
})
fr.Start()
</code></pre>
<p><code>MinAge</code> configures the duration for which trace data is reliably retained, and we
suggest setting it to around 2x the time window of the event. For example, if you
are debugging a 5-second timeout, set it to 10 seconds. <code>MaxBytes</code> configures the
size of the buffered trace so you don’t blow up your memory usage. On average,
you can expect a few MB of trace data to be produced per second of execution,
or 10 MB/s for a busy service.</p>
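<p>The two knobs interact: the buffer needs to be large enough to actually hold <code>MinAge</code> worth of trace data. A rough back-of-the-envelope check, assuming the estimated worst-case rate above (it is only an estimate, not a guarantee):</p>
<pre><code class="language-go">package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed worst-case trace production rate for a busy service.
	const bytesPerSecond = 10 << 20 // ~10 MiB/s
	minAge := 2 * time.Second

	// MaxBytes should cover at least MinAge worth of trace data.
	maxBytes := int(minAge.Seconds()) * bytesPerSecond
	fmt.Printf("MaxBytes should be at least ~%d MiB\n", maxBytes>>20)
}
</code></pre>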
<p>Next, we’ll add a helper function to capture the snapshot and write it to a file:</p>
<pre><code>var once sync.Once

// captureSnapshot captures a flight recorder snapshot.
func captureSnapshot(fr *trace.FlightRecorder) {
	// once.Do ensures that the provided function is executed only once.
	once.Do(func() {
		f, err := os.Create("snapshot.trace")
		if err != nil {
			log.Printf("opening snapshot file snapshot.trace failed: %s", err)
			return
		}
		defer f.Close() // ignore error

		// WriteTo writes the flight recorder data to the provided io.Writer.
		_, err = fr.WriteTo(f)
		if err != nil {
			log.Printf("writing snapshot to file %s failed: %s", f.Name(), err)
			return
		}

		// Stop the flight recorder after the snapshot has been taken.
		fr.Stop()
		log.Printf("captured a flight recorder snapshot to %s", f.Name())
	})
}
</code></pre>
<p>And finally, just before logging a completed request, we’ll trigger the snapshot if the request
took more than 100 milliseconds:</p>
<pre><code class="language-go">// Capture a snapshot if the response takes more than 100ms.
// Only the first call has any effect.
if fr.Enabled() && time.Since(start) > 100*time.Millisecond {
	go captureSnapshot(fr)
}
</code></pre>
<p>Here’s the full code for the server, now instrumented with the flight recorder:
<a href="/play/p/3V33gfIpmjG">https://go.dev/play/p/3V33gfIpmjG</a></p>
<p>Now, we run the server again and send requests until we get a slow request that triggers a
snapshot.</p>
<p>Once we’ve gotten a trace, we’ll need a tool that will help us examine it. The Go toolchain
provides a built-in execution trace analysis tool via the
<a href="https://pkg.go.dev/cmd/trace" rel="noreferrer" target="_blank"><code>go tool trace</code> command</a>. Run <code>go tool trace snapshot.trace</code> to
launch the tool, which starts a local web server, then open the displayed URL in your browser
(if the tool doesn’t open your browser automatically).</p>
<p>This tool gives us a few ways to look at the trace, but let’s focus on visualizing the trace
to get a sense of what’s going on. Click “View trace by proc” to do so.</p>
<p>In this view, the trace is presented as a timeline of events. At the top of the page, in
the “STATS” section, we can see a summary of the application’s state, including the
number of threads, the heap size, and the goroutine count.</p>
<p>Below that, in the “PROCS” section, we can see how the execution of goroutines is mapped
onto <code>GOMAXPROCS</code> (the number of operating system threads created by the Go application). We
can see when and how each goroutine starts, runs, and finally stops executing.</p>
<p>For now, let’s turn our attention to this massive gap in execution on the right side of the
viewer. For a period of time, around 100ms, nothing is happening!</p>
<p><a href="flight-recorder/flight_recorder_1.png"><img src="flight-recorder/flight_recorder_1.png" width=100%></a></p>
<p>By selecting the <code>zoom</code> tool (or pressing <code>3</code>), we can inspect the section of the trace right
after the gap with more detail.</p>
<p><a href="flight-recorder/flight_recorder_2.png"><img src="flight-recorder/flight_recorder_2.png" width=100%></a></p>
<p>In addition to the activity of each individual goroutine, we can see how goroutines interact
via “flow events.” An incoming flow event indicates what happened to make a goroutine start
running. An outgoing flow edge indicates what effect one goroutine had on another. Enabling the
visualization of all flow events often provides clues that hint at the source of a problem.</p>
<p><a href="flight-recorder/flight_recorder_3.png"><img src="flight-recorder/flight_recorder_3.png" width=100%></a></p>
<p>In this case, we can see that many of the goroutines have a direct connection to a single
goroutine right after the pause in activity.</p>
<p>Clicking on the single goroutine shows an event table filled with outgoing flow events, which
matches what we saw when the flow view was enabled.</p>
<p>What happened when this goroutine ran? Part of the information stored in the trace is a view
of the stack trace at different points in time. When we look at the goroutine we can see that
the start stack trace shows that it was waiting for the HTTP request to complete when the
goroutine was scheduled to run. And the end stack trace shows that the <code>sendReport</code> function
had already returned and it was waiting for the ticker for the next scheduled time to send
the report.</p>
<p><a href="flight-recorder/flight_recorder_4.png"><img src="flight-recorder/flight_recorder_4.png" style="padding: inherit;margin:auto;display: block;"></a></p>
<p>Between the start and the end of this goroutine running, we see a huge number of
“outgoing flows,” where it interacts with other goroutines. Clicking on one of the
<code>Outgoing flow</code> entries takes us to a view of the interaction.</p>
<p><a href="flight-recorder/flight_recorder_5.png"><img src="flight-recorder/flight_recorder_5.png" width=100%></a></p>
<p>This flow implicates the <code>Unlock</code> in <code>sendReport</code>:</p>
<pre><code class="language-go">for index := range buckets {
	b := &buckets[index]
	b.mu.Lock()
	defer b.mu.Unlock()
	counts[index] = b.guesses
}
</code></pre>
<p>In <code>sendReport</code>, we intended to acquire a lock on each bucket and release the lock after
copying the value.</p>
<p>But here’s the problem: we don’t actually release the lock immediately after copying the
value contained in <code>bucket.guesses</code>. Because we used a <code>defer</code> statement to release the
lock, that release doesn’t happen until the function returns. We hold the lock not just
past the end of the loop, but until after the HTTP request completes. That’s a subtle
error that may be difficult to track down in a large production system.</p>
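<p>One possible fix, sketched below (this is our illustration, not the post's final code), is to release each lock as soon as the copy is done instead of deferring the unlock to function return:</p>
<pre><code class="language-go">package main

import (
	"fmt"
	"sync"
)

type bucket struct {
	mu      sync.Mutex
	guesses int
}

// snapshotCounts copies each bucket's count, holding each lock only
// for the duration of the copy.
func snapshotCounts(buckets []bucket) []int {
	counts := make([]int, len(buckets))
	for index := range buckets {
		b := &buckets[index]
		b.mu.Lock()
		counts[index] = b.guesses
		b.mu.Unlock() // released immediately, not deferred
	}
	return counts
}

func main() {
	buckets := make([]bucket, 3)
	buckets[1].guesses = 5
	fmt.Println(snapshotCounts(buckets)) // [0 5 0]
}
</code></pre>
<p>With the unlock inside the loop body, the report goroutine never holds a bucket lock across the slow outbound HTTP request.</p>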
<p>Fortunately, execution tracing helped us pinpoint the problem. However, if we tried
to use the execution tracer in a long-running server without the new flight-recording
mode, it would likely amass a huge amount of execution trace data, which an operator
would have to store, transmit, and sift through. The flight recorder gives us the power
of hindsight. It lets us capture just what went wrong, after it’s already happened,
and quickly zero in on the cause.</p>
<p>The flight recorder is just the latest addition to the Go developer’s toolbox for
diagnosing the inner workings of running applications. We’ve steadily been improving
tracing over the past couple of releases. Go 1.21 greatly reduced the run-time overhead
of tracing. The trace format became more robust and also splittable in the Go 1.22
release, leading to features like the flight recorder. Open-source tools like
<a href="https://gotraceui.dev/" rel="noreferrer" target="_blank">gotraceui</a>, and the <a href="/issue/62627">forthcoming ability to programmatically
parse execution traces</a> are more ways to leverage the power of
execution traces. The <a href="/doc/diagnostics">Diagnostics page</a> lists many additional
tools at your disposal. We hope you make use of them as you write and refine
your Go applications.</p>
<h2 id="thanks">Thanks</h2>
<p>We’d like to take a moment to thank those community members who have been active in the
diagnostics meetings, contributed to the designs, and provided feedback over the years:
Felix Geisendörfer (<a href="https://bsky.app/profile/felixge.de" rel="noreferrer" target="_blank">@felixge.de</a>),
Nick Ripley (<a href="https://github.com/nsrip-dd" rel="noreferrer" target="_blank">@nsrip-dd</a>),
Rhys Hiltner (<a href="https://github.com/rhysh" rel="noreferrer" target="_blank">@rhysh</a>),
Dominik Honnef (<a href="https://github.com/dominikh" rel="noreferrer" target="_blank">@dominikh</a>),
Bryan Boreham (<a href="https://github.com/bboreham" rel="noreferrer" target="_blank">@bboreham</a>),
and PJ Malloy (<a href="https://github.com/thepudds" rel="noreferrer" target="_blank">@thepudds</a>).</p>
<p>The discussions, feedback, and work you’ve all put in have been instrumental in pushing
us to a better diagnostics future. Thank you!</p>
</div>
</div>
<div class="Article prevnext">
<p>
<b>Next article: </b><a href="/blog/greenteagc">The Green Tea Garbage Collector</a><br>
<b>Previous article: </b><a href="/blog/survey2025-announce">It's survey time! How has Go been working out for you?</a><br>
<b><a href="/blog/all">Blog Index</a></b>
</div>
</div>
</div>
<script src="/js/play.js"></script>
It's survey time! How has Go been working out for you?tag:blog.golang.org,2013:blog.golang.org/survey2025-announce2025-09-16T00:00:00+00:002025-09-16T00:00:00+00:00Todd Kulesza, on behalf of the Go teamHelp shape the future of Go
<div id="blog"><div id="content">
<div id="content">
<div class="Article" data-slug="/blog/survey2025-announce">
<h1 class="small"><a href="/blog/">The Go Blog</a></h1>
<h1>It's survey time! How has Go been working out for you?</h1>
<p class="author">
Todd Kulesza, on behalf of the Go team<br>
16 September 2025
</p>
<div class='markdown'>
<p>Hi Gophers! Today we’re excited to announce our <a href="https://google.qualtrics.com/jfe/form/SV_3wwSstC8vv4Ymkm?s=b" rel="noreferrer" target="_blank">2025 Go Developer Survey</a>. The Go Team uses the results of this annual survey to better understand the needs and concerns of Go developers across the world. Your feedback helps us brainstorm, plan, and prioritize work on Go.</p>
<p>You can <a href="https://google.qualtrics.com/jfe/form/SV_3wwSstC8vv4Ymkm?s=b" rel="noreferrer" target="_blank">take the survey here</a>. It will be open through <strong>September 30th</strong>. The survey should take 10 – 20 minutes to complete, and every question is optional.</p>
<p>We’ll share aggregated survey results on this blog in early November. This year we’ll also share the raw dataset of survey responses, so that the entire Go community can benefit from this knowledge and conduct your own analyses on the data. Similar to our approach with <a href="/blog/gotelemetry">Go Telemetry</a>, we’re using an opt-in model: the survey will ask for your permission to include your responses in the dataset. If you don’t give us permission, your survey responses will not be shared.</p>
<p>Please help us reach as many Go developers as possible! We love it when you share the survey with your colleagues, friends, and online communities. The more voices we hear, the better we can understand the diverse needs of Go developers everywhere.</p>
<p>We’re looking forward to hearing your feedback!</p>
</div>
</div>
<div class="Article prevnext">
<p>
<b>Next article: </b><a href="/blog/flight-recorder">Flight Recorder in Go 1.25</a><br>
<b>Previous article: </b><a href="/blog/jsonv2-exp">A new experimental Go API for JSON</a><br>
<b><a href="/blog/all">Blog Index</a></b>
</div>
</div>
</div>
<script src="/js/play.js"></script>