Nate McMaster: Thoughts on developer tools, software internals, and lessons learned from a principal software engineer.
https://natemcmaster.com · feed: https://natemcmaster.com/feed.xml

1Password is still worth it in 2026
2026-01-31 · https://natemcmaster.com/blog/2026/01/31/1password-is-still-worth-it

I let my 1Password Family subscription lapse for three months to give iCloud Passwords a real shot. Apple’s offering is free, built-in, and getting better. After three months, I went back to 1Password.

This is totally a convenience purchase. You can absolutely get by without it, and maybe as Apple keeps improving, it won’t be needed. But iCloud Passwords is death by a thousand papercuts. Each issue is small, but they add up, and after three months I was fatigued enough to miss 1Password.

Note: this is not a paid post, and nobody is sponsoring me. I like writing about tools I enjoy using.

macOS: Chrome autofill is painful

The iCloud Passwords Chrome extension has 2.3 stars on the Chrome Web Store, and I think that’s justified. I have to re-enter a 6-digit code multiple times a day just to let Chrome autofill passwords. Even when I’m already signed in, every single password fill requires re-entering my password or using Touch ID. There’s no grace period.

Passkeys, on the other hand, work great. Likewise, if you’re using Safari on macOS, autofill works fine. But don’t get me started on Safari for macOS and its other gaps….

iCloud Passwords verification code prompt for Chrome

Enable Password AutoFill prompt in Chrome

iCloud sync bugs

Sync has been mostly reliable, but I ran into a strange issue a few weeks ago. Sync between my family member’s iPad, my phone, and my laptop was broken: they all had slightly different versions of passwords. Logging out and back in didn’t fix it. Instead, I had to add dummy items to our password vaults to trigger a resync.

Credit card autofill

Apple Passwords is just for passwords. Despite the increasing availability of Apple Pay, I still have to fill in credit card numbers in many places. macOS does let you save credit cards in Wallet & Apple Pay, but getting to those details takes about 6 clicks through System Settings. And Wallet’s saved credit cards only autofill in Safari, not Chrome.

With 1Password, I can hit Cmd+Option+\ or click the browser extension to quickly search and autofill. It also recognizes credit card fields and suggests autofill automatically. It’s a much smoother experience.

Navigating to saved credit cards in macOS System Settings

Viewing saved credit card details

Manually selecting and copying credit card details

iPhone: where Apple Passwords shines

Integration is actually slightly better with Apple Passwords in most cases since it’s the native experience. 1Password has its own annoyance: sometimes the password suggestion doesn’t pop up above the keyboard, and I have to switch apps to copy it. Minor, but real.

Windows sync is buggier than macOS

There is an iCloud for Windows app that makes the Chrome extension work for passwords. I don’t use Windows, but my family members tell me the password autofill there is buggy.

Migration is the worst part

Passkeys can’t be exported or imported, so you’re locked in. Most services let you register more than one passkey, but not all of them. De-duplicating passwords after switching between managers is also tedious. Merging two vaults means sorting through duplicates, outdated entries, and slight mismatches.

To make this less painful, I wrote a script that uses the 1Password CLI to find and merge duplicate logins. It groups items by domain and username, lets you interactively pick which to keep, and archives the rest. It also handles merging TOTP secrets, notes, and extra fields from discarded items into the keeper so you don’t lose anything. If you’re migrating back to 1Password from another manager and end up with a mess of duplicates, it might save you some time.
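For the curious, the core of the de-duplication step is simple. Here is a minimal, hypothetical sketch of the grouping logic. The item shape below (dicts with "url" and "username" keys) is an assumed pre-processed form, flattened from the 1Password CLI's JSON output, not the CLI's exact schema:

```python
from collections import defaultdict
from urllib.parse import urlparse

def group_duplicates(items):
    """Group login items by (domain, lowercased username).

    `items` is assumed to be a list of dicts with "url" and "username"
    keys, e.g. flattened from `op item list` / `op item get` JSON.
    """
    groups = defaultdict(list)
    for item in items:
        # Normalize the URL down to a bare domain so that
        # "https://www.example.com" and "https://example.com/login" match.
        domain = (urlparse(item["url"]).hostname or "").removeprefix("www.")
        groups[(domain, item["username"].lower())].append(item)
    # Only groups with more than one entry are duplicates worth merging.
    return {key: group for key, group in groups.items() if len(group) > 1}

items = [
    {"url": "https://example.com/login", "username": "nate"},
    {"url": "https://www.example.com", "username": "Nate"},
    {"url": "https://other.test", "username": "nate"},
]
dupes = group_duplicates(items)
```

The interactive parts of the script (picking a keeper, copying over TOTP secrets and notes, archiving the rest) are just loops over these groups.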

Verdict

iCloud Passwords is good enough for most people, especially if you’re all-in on Safari and Apple devices. But if you use Chrome on macOS, the experience has enough friction to justify paying for 1Password. It’s a convenience tax, and I’m okay paying it.

So, you want to work for Anthropic?
2026-01-27 · https://natemcmaster.com/blog/2026/01/27/so-you-want-to-join-anthropic

I recently joined Anthropic as a Member of Technical Staff, building software to support Anthropic’s network. It sparked a flurry of people asking if they could join, too. Whether you’re considering a job change or specifically curious about Anthropic, I’ll share how I got the job, then walk through what I weighed: income, uncertainty, values, and the allure of hype.

DISCLAIMER: This is a personal website, produced on my own time and solely reflecting my personal opinions. Statements on this site do not represent the views or policies of my employer, Anthropic. I am not a recruiter for Anthropic. This was published Jan. 2026. Hiring practices may have changed by the time you read this.

Process: how did I get the job?

I applied at https://anthropic.com/jobs. I first considered Anthropic in 2024, but didn’t apply as I was committed to my goals at AWS. I completed my goals at AWS by mid 2025. A posting appeared on Anthropic’s job board in late 2025 with requirements that matched me almost perfectly. So, I created an updated resume and applied online. A recruiter got back to me 4 days later. It took about 2 months to go from application to starting the job. During this time, I also interviewed with Databricks and xAI as well as explored other roles within Amazon.

The exact questions and content of the interviews are not something I’ll share here—I don’t think that would help anyone. If you’re preparing, don’t expect to pass by memorizing “perfect” answers or using an AI assistant in the background. Interviewers want to understand how you think, not whether you can produce a polished response. Here’s Anthropic’s guidance on using AI to prepare and apply: https://www.anthropic.com/candidate-ai-guidance. I used AI to proofread my resume and work on practice questions, but during the actual interviews, it was all me.

If you’re using AI to prepare, a few tips:

  • Ask it to review your resume for clarity and gaps—and tell it to be brutally honest.
  • Practice answering questions out loud, then ask AI to critique your reasoning, not just your phrasing.
  • Don’t over-polish. The goal is to think clearly, not to sound rehearsed.

Income: less cash, but maybe more equity

Anthropic job posts include salary ranges for the role, and will mention if the role includes equity compensation. Read it carefully. And if you get the job offer, take time to understand the equity offer.

I took a pay cut for this job—at least in cash terms. Joining a startup is similar in ways to investing in a startup, and like any investment, you should understand the risk.

If you’re comparing offers, here are questions worth asking:

  • Public vs. private stock: Public company equity (like AMZN) is predictable—you can track the price and sell when it vests. Private company equity is a bet on future value. What’s your risk tolerance?
  • Cliffs and liquidity: When does equity vest? Are there buyback opportunities, or will you wait years for a liquidity event?
  • Your financial situation: Can you absorb a lower cash salary? Do you have runway if the bet doesn’t pay off?

I weighed these factors against my own situation and decided to take the risk. Your calculus may be different.

Uncertainty: into the unknown

How do you weigh an unknown opportunity against the familiar trade-offs of your current job? I had a job with coworkers I like, a good boss, and predictable pay. Why walk away?

I don’t need something new, I’m afraid of what I’m risking

— Elsa, Into the Unknown

Like many parents, I’ve seen Frozen approximately a bazillion times. So naturally, I resonated with this song—and the tongue-in-cheek humor of “When I Am Older”. Elsa resists the call into the unknown at first. I also resisted leaving Amazon, despite feeling drawn away. I came close in 2022—I had a job offer from a startup—but stayed. There’s wisdom in caution. “Better the devil you know” exists for a reason.

If you’re feeling similar resistance, it might help to ask: what specifically am I afraid of? Can I test those assumptions by talking to people, researching the company, or examining the offer more closely?

For me, what was different this time? The Anthropic role fit my interest, needs, and goals. This was enough to outweigh the uncertainty that comes from changing to a new job. I covered this in more detail in my previous post.

Personal values and company alignment

The past few years were difficult, but they transformed my worldview. Once I came to believe AI was going to substantially change the way my children will experience life, I realized I wanted to contribute to a vision of AI aligned with my humanist values. I suspect this is common at my stage of life. I’ve known others who, after experiencing loss or confronting mortality, felt a pull toward work that felt more meaningful.

During your interviews, pay attention to what employees say about why they’re there. Do their answers feel rehearsed or genuine? Is the company’s stated mission reflected in the questions they ask and the goals they describe? For me, the more people I met at Anthropic, the more I saw alignment between their mission and how people actually talked about their work.

If you’re weighing options, I’d encourage you to think beyond compensation and hype. What does the company actually do, and does it matter to you? What do employees talk about when they describe why they joined? These questions helped me find clarity.

Popularity: hype is ephemeral

Anthropic is having a moment. I asked myself, “am I joining because of the hype?” That played a role in attracting my attention, but tech companies follow predictable narrative arcs—today’s darling becomes tomorrow’s cautionary tale. I fully expect Anthropic’s time in the spotlight to fade, and I’m okay with that. (I probably don’t have to convince you of that—I joined Microsoft after an internship on Windows Phone, perhaps the most un-cool, unpopular smartphone available to college students at the time.) The positive press won’t last forever, but my reasons for being here aren’t built on it.

I should also be honest: I’ve only been here a month. My impressions are early, and I could be wrong. One risk I think about: any company surrounded by believers can drift into an echo chamber. Anthropic’s mission-driven culture is a strength, but it’s also a vulnerability if it crowds out dissent or self-criticism. I don’t know yet how well the company handles that tension. Ask me again in a year.


So, you want to work for Anthropic? I can’t give you a step-by-step process of what to say in a resume or how to respond in interviews. I also can’t tell you if it’s right for you—but I hope in sharing my experiences, you’ll be better equipped to chart your own course.

Thoughts on leaving AWS and joining Anthropic
2025-12-30 · https://natemcmaster.com/blog/2025/12/30/farewell-amazon

This month, I announced that I had resigned my position as a Principal Software Engineer at Amazon Web Services (AWS) and accepted a new role as a Member of Technical Staff at Anthropic. After announcing my transition, 34 people at AWS scheduled office hours for 1:1 conversations with me, and I got asked a lot of questions. I’m sharing some of the common questions and answers here. Whether my thoughts are actually worth sharing is a question I try not to think about too hard. (Plus, two people specifically asked if I could blog more after I leave. I’m choosing to believe they weren’t being sarcastic.)

Most asked questions

What made you want to leave?

People were implying—and a few said outright—that I had a position too good to give up. I didn’t make this decision in haste. It took months of consideration, during which time I had many chats with my wife, family, managers, mentors, and friends to collect diverse viewpoints. Those conversations helped me identify my motivations.

Ultimately, it came down to 3 factors:

  • Anthropic is on the cutting edge of AI innovation, a technology that I believe is going to transform our economy and society. I’m aligned with the company’s mission and values, and I want to contribute to something meaningful. While the company is a startup and may not end up being profitable, I think their research will outlast the corporate structure. I hope the science it produces benefits humanity.
  • At AWS, I had been on the same team for 6.5 years and was happy with the state of the projects. In 2025 specifically, I received lots of positive feedback that my focus area, a project called Barge, had delivered “best in class” tooling. I would rather leave a project in a good state than one falling apart.
  • Anthropic’s work climate is favorable. By this I mean the combination of compensation and equity, benefits, work-from-home flexibility, in-office perks, and “vibe”. The last part, “vibe”, is important. I want to be in a space where people are excited and thrilled about their work, and not just clinging to a job, hoping they don’t get laid off to boost profits or forced out to meet a quota for unregretted attrition.

What do you recommend for my career?

Many people asked me for my input. Should they stay at Amazon? Would I be able to hire them into Anthropic? How can they get promoted if they stay at Amazon?

While my feedback was adapted to each individual, some common themes emerged:

SDE II (L5) looking for promotion to senior SDE (L6)

I’ll probably need to write a separate post about this one. The promotion to L6 SDE at Amazon is challenging. Many people wait years, eager for the significant bump in pay and the sense of security that comes from the title.

Right now, Amazon is in a period of contraction. They’ve laid off thousands in 2025 alone. In the “before times” when teams were aggressively growing, promotions to L6 seemed to be easier. Because Amazon brought in so many new college hires, it was easier for anyone with experience to guide a team and fulfill the expectations of a senior engineer. Now, however, teams are shrinking, not growing, leaving less space for people to move into a team-lead type role.

My advice to people asking about this was to dig deeper into the problem space. There are countless things that need solving. Perhaps your current team is too heavy on senior engineers to demonstrate next-level work in your current scope—but there are many unsolved problems. If you can find those unsolved areas, and make it clear to your managers and other leaders why those problems need solving AND help guide a team to deliver on it, you may find that’s a path to promotion. That said, it might not work. The other, more likely path, is that a senior/principal engineer leaves a team and you have grown in skill and expertise so you can fill their shoes.

Stay at Amazon or quit?

Once it came out I was leaving, people started sharing with me they were either interviewing, planning to leave, or were strongly considering it. They wanted my input on my own decision-making process to check it against their own.

People had many reasons for considering quitting Amazon:

  • Frustration with bureaucracy and politics or other team dynamics
  • Lack of opportunities for promotion
  • Desire for more flexibility to work hybrid/remote
  • Looking to increase income by moving to companies that pay better
  • Insecurity due to Amazon’s persistent layoffs, and wanting to move first instead of waiting to be cut

As a side note, in my final exit interview, I shared some of these reasons anonymously with my director. He wasn’t surprised at all. I took that to mean these are all common issues a director deals with. New to me, not to him.

Who will replace you?

Because I had stayed in the same team and position for 6.5 years, I accumulated a lot of domain knowledge and understanding of history. But I want to be clear: I’m not special. There are many engineers who know these systems well. I was never a single point of failure—though I did have a unique perspective from a breadth of knowledge across areas and deep insights in a few specific domains.

I don’t feel any guilt about leaving. Amazon is going to be fine. It’s a huge company with enormous resources. The people around me will get to learn and grow in ways they couldn’t have with me always present to answer questions or make calls.

There’s some discomfort in realizing you’re not as essential as you thought. Better to leave before Amazon figures that out too.

I left people a “redirect” table pointing to other managers and engineers taking on leadership of various areas of work. The machine keeps running.

What’s next

Stepping out of the spotlight

Amazon has a culture of high expectations for anyone with principal-level titles. For years they’ve hosted an internal talk series called “Principals of Amazon”. They record the sessions and the content is often used as reference material. In meetings, I found that the principals could often use their influence and title to break ties or make final calls. On more than one occasion, senior managers directly asked “what is your call here, Nate?”. And in my organization, my role extended beyond software engineering into people management: hiring, mentoring, and coaching.

Lalit Maganti recently wrote a post about his experiences as Staff+ at Google on a developer tools and infra team. His post resonated with me because I, like Lalit, was not on a “product” team at AWS where we had direct external customers. That said, there was still some level of spotlight internally. I was involved in giving internal talks to showcase our internal services and tooling. I wrote newsletters and Slack announcements. At Amazon, this “internal marketing” is necessary to raise awareness. The company is so large and filled with so many bright, talented people that the pace of innovation is breakneck.

In my next role at Anthropic, there is no “principal” in my title. I’m a “Member of Technical Staff”, the same title as all of my peers. I’m eager to see how this changes conversations—it’s been a long time since I was new anywhere. And I’ve found at Amazon that my title was often entering the meeting before me, so anything I said came with an air of authority, regardless of whether my opinion had technical merit or not. Moving out of the spotlight seems refreshing—a chance to ground myself again.

Why Anthropic, why now?

I’ve been cautious about avoiding hype in making this choice. I’d been considering Anthropic, xAI, or OpenAI for over a year. I actually had the application page open in 2024 but didn’t submit—I needed time to clarify my thinking. At some point in the last year, it became clear a transition was needed. I was either going to change roles internally at AWS or leave.

I spent my last three months at Amazon exploring a new project space, but I’m more excited about what I’ll get to work on at Anthropic. I was able to have several interviews beyond the typical hiring loop to learn about what I’ll be doing there. I’m going to be intentionally vague about the specifics for now, but hopefully I’ll have a chance to blog more in the future.

I’m excited to see how AI models are actually developed. I’ve heard Dario say that a lot of the work to do science comes down to engineering. I want to be part of that.

Work flexibility

I’m also excited to have more flexibility in how I work. There’s an Anthropic office in Seattle, but as I understand it, there isn’t a “badge report” system like Amazon has been using to enforce return-to-office goals—and in some cases, force out people unwilling to comply. Everything I’ve heard indicates people go to the office because they’re genuinely excited to be there, and the atmosphere is vibrant and collaborative.

My office at Amazon had begun to feel empty and hollow. I would drive a long commute to sit in a cubicle and join Zoom calls. I had one of the better cubicles—I had a window—but I often felt isolated from the people around me.

Back to coding

Finally, I’m excited to keep coding. Coding is why I got into this profession.

I’ve enjoyed learning to use AI as a coding assistant. I’ve heard Dario say 90% of Anthropic’s code is written by AI. I was maybe getting 60-70% of that in my work at Amazon, but the spec-driven approach of Kiro didn’t feel quite right—too heavy for small changes. I’m curious to see what Anthropic has developed to make AI-assisted coding even more effective.

I believe AI will transform my profession even more than it already has, and I’m eager to contribute. There’s vast, unlocked potential that’s been too hard to achieve due to the difficult nature of coding prior to AI. I’m excited to see what we—collectively, humanity—are able to do with better technology.

How I use AI to code
2025-11-06 · https://natemcmaster.com/blog/2025/11/06/tenets-of-ai-coding

I love to code. I code a lot. And I think I’m good at it - I started coding in 2002 and have written many, many applications in the last 20+ years. AI has been changing the way I code, and I wanted to share my learnings (so far).

Over the past few years, I’ve been an eager adopter of Cline, Roo, and Amazon Q CLI. I’ve also been experimenting with Kiro, Cursor, Copilot, and Claude on the side.

From these experiences, I’ve developed a few personal guidelines for using AI coding tools effectively.

Only ask AI to do things I already know how to do

I take ownership of any code I produce. So, I don’t ask AI to help me write code unless I could have written it myself.
That said, AI is much faster at typing than I am, so if it can do something I already know how to do, I’ll absolutely let it.

This includes knowing how to research to find an answer. I use AI as a research companion to help me find resources when I need to learn something new. Then I apply that learning to guide my AI coding agents more effectively.

No “vibe” coding

AI can generate huge volumes of code in autopilot mode. I like the mental model behind Kiro’s spec-driven coding: set a goal, think about the end state, and break the work into smaller steps.
That said, I’m not a fan of Kiro’s “autopilot” mode. No matter how well I’ve written a spec, AI still needs supervision to produce results I’m happy with.

So, I don’t code in a “fire-and-forget” way. I tried this many times, but I ended up spending just as much time undoing or discarding code generated en masse by an autopilot agent.

Ask AI to do small, specific tasks

As of Nov. 2025, even the best coding models (like Sonnet 4.5) still appear to perform best on small, well-scoped tasks. I’ve found Roo’s boomerang tasks to be a great way to break larger tasks into steps AI can manage well.

AI automates but does not think

I resist the temptation to delegate “thinking” to the AI. I treat AI like an autocomplete engine on steroids. It may look like it’s thinking, but it’s not capable of true critical reasoning.
I use it to automate tasks, but I stay in charge and make sure it’s doing what I actually need.

Less code is often better
2023-06-18 · https://natemcmaster.com/blog/2023/06/18/less-code

Early in my software engineering career, a senior engineer at Microsoft told me “the best solution is one that requires no new code.” At the time, I thought this was nonsense. Is it not my role as a software engineer to write code? Why would writing less or no code be better? More code means more bug fixes, more features, more services, and more tools. So why is more not always better?

Fast forward to 2023 – now I am the most senior engineer on a team, and I give the same guidance. Prefer solutions that require less or no code.

What led to this shift in perspective? It boils down to this: writing code is a one-time cost, but maintenance is an ongoing cost. Maintenance of successful code will extend for years beyond your original projections because migrating to a new thing has its own cost. So, your code will run until the benefit of removing it outweighs the cost of refactoring or migrating.
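To make that trade-off concrete, here is a toy cost model (all numbers are hypothetical, and real maintenance costs are of course much messier):

```python
def total_cost(write_weeks, maintain_weeks_per_year, years):
    """Writing is a one-time cost; maintenance accrues for the code's lifetime."""
    return write_weeks + maintain_weeks_per_year * years

# A feature that took 2 weeks to write but needs 1 week of upkeep per year
# is dominated by maintenance cost after only two years.
lifetime_cost = total_cost(write_weeks=2, maintain_weeks_per_year=1, years=10)

def worth_removing(maintain_weeks_per_year, remaining_years, migration_weeks):
    """Removal pays off only when remaining upkeep exceeds the migration cost."""
    return maintain_weeks_per_year * remaining_years > migration_weeks
```

The second function is the point of the essay in miniature: successful code keeps running until the ongoing cost finally exceeds the one-time cost of migrating away.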

On the one hand, this can be exciting. Your legacy could be a system that lasts for years, even decades. But on the other hand, you may live to regret introducing it in the first place, especially if your code ends up having problems. And more often than not, code will have problems after it is first written. Many problems appear immediately, but others creep in slowly as the system around it changes.

Another thing shifted, too – I see now that software is a means to an end, but not the end itself. I was especially blind to this because my first job was building an open-source framework at Microsoft. The code was the product, or so I thought. Now I realize our product was an intermediate ingredient. The framework on its own produces no value. It gained value by helping other developers solve problems with less code of their own to maintain.

Hidden costs

Ongoing maintenance costs can be hard to understand when you are learning to build software. Often your initial tasks are to write something new. Also, often young developers do not stay on projects long enough to see the consequences of their work play out.

However, with time, most engineers end up maintaining code they or someone else created long ago. And when that happens, what was hidden now becomes a headache.

We have a term for this: “technical debt”. It’s sometimes used as a dirty word to malign a system or piece of code whose issues are more expensive to fix than they should be, or which is preventing you from accomplishing something.

Open-source and sticky ownership

It should come as no surprise that if you make code open-source, you are giving it away for free. GitHub has tried to set up “sponsorship” programs to fund developers. I have been lucky to have some sponsors, but despite their generosity, I have earned 10 to 15 cents per hour of effort spent.

In professional environments, maintenance is passed along as employees rotate through a project. In personal, open-source projects, there is almost never someone else working on it. So, ownership of your code will stick to you forever.

So, I add double emphasis to “write less code” if you are considering open-sourcing it. While open-source has benefits to the world of software as a whole, most of the benefits are collected by the big companies and not individuals. For an individual getting started, something other than money must motivate your open-source work if you want it to continue. Consider your motivations, and whether those will change over time.

An example

Five years ago, I uploaded a project called DotNetCorePlugins and advertised it on Twitter with a blog post, “Introducing an API for loading .dll files”. My motivations at the time were related to my work on the @dotnet project at Microsoft. I saw many developers struggling with dynamic loading. I had found one solution that was not straightforward to discover, and once I realized I could abstract away some of .NET’s complexity in a library, it seemed like a good chance to promote my findings (and my reputation) by posting online.

Since that time, however, several things occurred which I did not anticipate.

  1. There were bugs. The project has 160 issues. I made 4 releases after the initial release to address the biggest ones, but the bug reports continued to come in.
  2. .NET added a standard library feature which filled the need, but people kept using my project.
  3. I lost interest in C# and .NET. Plus, life changed. I aged, I started a family, I developed non-computer hobbies.

Now, years later, I am deprecating the project and alerting developers that it has reached the end of its life. I posted that the project was in maintenance-only mode in 2020, and no one has come along to offer help or to take ownership.

Write code responsibly

If you are about to write some code, you should also ask yourself:

  • Who is this code for?
  • Who will maintain it?
  • Is there existing code I can repurpose or use instead?

Writing code is fun, so keep doing it. Just be careful not to get carried away and neglect to consider its future.

Deep-dive into .NET Core primitives, part 3: runtimeconfig.json in depth
2019-01-09 · https://natemcmaster.com/blog/2019/01/09/netcore-primitives-3

.NET Core applications contain a file named <something>.runtimeconfig.json. This file can be used to control a variety of options. Most developers need not be concerned with it because the SDK generates the file, but I think it’s worth understanding. It can control settings which are not surfaced in Visual Studio, such as automatically running your app on higher .NET Core versions, tuning thread pools and garbage collection, and more.

This post is part of a series:

Purpose of the file

The runtimeconfig.json file is technically optional, but for practical reasons, every real-world app has it. The file can be hand-edited; unlike the .deps.json file, it is meant to be human-readable. Its purpose is to define the required shared frameworks (for framework-dependent deployments only), as well as other runtime options, as outlined below.

A simple example

A typical runtimeconfig.json file will look something like this.

{
  "runtimeOptions": {
    "tfm": "netcoreapp2.1",
    "framework": {
      "name": "Microsoft.NETCore.App",
      "version": "2.1.0"
    }
  }
}

I’ve written a complete schema for this file. See https://gist.github.com/natemcmaster/0bdee16450f8ec1823f2c11af880ceeb.

runtimeconfig.template.json

Some options cannot be set in your project file (.csproj). You have two options to work around this: hand-edit runtimeconfig.json as a post-build action, or use a file named runtimeconfig.template.json. I recommend using the template if you want settings to persist.

On build, the SDK will augment the template with additional data from your .csproj file. Follow these steps to use a template:

  1. Create a new project (dotnet new console -n MyApp)
  2. Create a file named “runtimeconfig.template.json” in the project directory (next to your .csproj file).
  3. Set the contents of the file to this:
    {
      "rollForwardOnNoCandidateFx": 2
    }
    
  4. Run dotnet build

Voila! That’s it. Look at bin/Debug/netcoreapp2.1/MyApp.runtimeconfig.json to make sure it worked.
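If the merge worked, the generated file should contain both the SDK-generated options and your template setting. The result will look roughly like this (the exact contents vary by SDK and target framework version, so treat this as a sketch):

```json
{
  "runtimeOptions": {
    "tfm": "netcoreapp2.1",
    "rollForwardOnNoCandidateFx": 2,
    "framework": {
      "name": "Microsoft.NETCore.App",
      "version": "2.1.0"
    }
  }
}
```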

Visual Studio intellisense

I’ve written a JSON schema, which you can use in your Visual Studio editor. Add this line to your runtimeconfig.template.json file.

{
  "$schema": "https://gist.githubusercontent.com/natemcmaster/0bdee16450f8ec1823f2c11af880ceeb/raw/runtimeconfig.template.schema.json"
}

Runtime options

Frameworks, versions, and roll-forward

.NET Core shared frameworks support installing side-by-side versions, and therefore, dotnet has to pick one version when starting an application. The following options are used to set which shared frameworks and which versions of those frameworks are loaded.

Note: the default settings generated by the SDK are usually sufficient, but they can be altered to work around regressions in .NET Core patches or the unfortunately common error when .NET Core fails to launch:

It was not possible to find any compatible framework version. The specified framework ‘Microsoft.NETCore.App’, version ‘X.Y.Z’ was not found.

Shared framework(s)

This specifies the shared framework(s) the application depends on by name. The version is treated as a minimum version. The only way to override the minimum (without changing the file) is to use dotnet exec --fx-version.

For .NET Core < 3.0, only one framework can be specified.

JSON:

{
  "runtimeOptions": {
    "framework": {
      "name": "Microsoft.AspNetCore.App",
      "version": "2.2.0"
    }
  }
}

.csproj:

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.App" Version="2.2.0" />
</ItemGroup>

For .NET Core >= 3.0, multiple shared frameworks can be used and are no longer referenced as packages.

Note: 3.0 is still in preview; details may change.

JSON:

{
  "runtimeOptions": {
    "frameworks": [
      {
        "name": "Microsoft.AspNetCore.App",
        "version": "3.0.0"
      },
      {
        "name": "Microsoft.WindowsDesktop.App",
        "version": "3.0.0"
      }
    ]
  }
}

.csproj:

<ItemGroup>
  <FrameworkReference Include="Microsoft.AspNetCore.App" />
  <FrameworkReference Include="Microsoft.WindowsDesktop.App" />
</ItemGroup>

Automatically run on higher versions

This option is new in .NET Core 3.0.

By default, .NET Core will try to find the highest patch version of the shared framework which has the same major and minor version as your app specifies. But if it can’t find that, it may roll-forward to newer versions. This option controls the roll-forward policy.

JSON:

{
  "runtimeOptions": {
    "rollForward": "Major"
  }
}

.csproj:

<PropertyGroup>
  <RollForward>Major</RollForward>
</PropertyGroup>

The spec for this setting can be found at https://github.com/dotnet/designs/blob/master/accepted/2019/runtime-binding.md. About this setting, it says:

RollForward can have the following values:

  • LatestPatch – Roll forward to the highest patch version. This disables minor version roll forward.
  • Minor – Roll forward to the lowest higher minor version, if requested minor version is missing. If the requested minor version is present, then the LatestPatch policy is used.
  • Major – Roll forward to lowest higher major version, and lowest minor version, if requested major version is missing. If the requested major version is present, then the Minor policy is used.
  • LatestMinor – Roll forward to highest minor version, even if requested minor version is present.
  • LatestMajor – Roll forward to highest major and highest minor version, even if requested major is present.
  • Disable – Do not roll forward. Only bind to specified version. This policy is not recommended for general use since it disables the ability to roll forward to the latest patches. It is only recommended for testing.

Minor is the default setting. See Configuration Precedence for more information.

In all cases except Disable the highest available patch version is selected.

Note: LatestMinor and LatestMajor are intended for component hosting scenarios, for both managed and native hosts (for example, managed COM components).

Automatically run on higher patch versions (before .NET Core 3.0)

This policy is being deprecated in .NET Core 3.0 in favor of the simpler “rollForward” option, as described above.

By default, .NET Core runs on the highest patch version of shared frameworks installed on the machine. This can be disabled using ‘applyPatches’.

JSON:

{
  "runtimeOptions": {
    "applyPatches": false
  }
}

.csproj: currently not available as an SDK option. See above.

Note: I couldn’t write about this without a word of caution. I would personally only use this in production when it’s 3 AM, the site is down, the phone is ringing, and the company is bleeding $$$ every minute. Otherwise, it’s better to get the latest security patches – for obvious reasons.

Automatically run on higher major or minor versions (before .NET Core 3.0)

This policy is being deprecated in .NET Core 3.0 in favor of the simpler “rollForward” option, as described above.

By default, .NET Core will try to find the highest patch version of the shared framework which has the same major and minor version as your app specifies. But if it can’t find that, it may roll-forward to newer versions. This option controls the roll-forward policy.

JSON:

{
  "runtimeOptions": {
    "rollForwardOnNoCandidateFx": 1
  }
}

.csproj: currently not available as an SDK option. See above.

This can be set to 0, 1, or 2. See the design document for more details.

For example, given framework.version == 2.1.0, this is how .NET Core uses this setting to decide what counts as a ‘compatible’ version of the framework.

rollForwardOnNoCandidateFx | Compatible framework versions
0                          | >= 2.1.0, < 2.2.0
1 (default)                | >= 2.1.0, < 3.0.0
2                          | >= 2.1.0

Target framework moniker

This one is an implementation detail of the runtime package store.

JSON:

{
  "runtimeOptions": {
    "tfm": "netcoreapp2.1"
  }
}

.csproj:

<PropertyGroup>
  <TargetFramework>netcoreapp2.1</TargetFramework>
</PropertyGroup>

Assembly probing paths

This specifies additional folders used by the host to find assemblies listed in the .deps.json file. See Part 1 of this series for details on how this works.

JSON:

{
  "runtimeOptions": {
    "additionalProbingPaths": ["C:\\Users\\nmcmaster\\.nuget\\packages\\"]
  }
}

.csproj:

<ItemGroup>
  <AdditionalProbingPath Include="$(USERPROFILE)\.nuget\packages" />
</ItemGroup>

Note: this .csproj item will only end up in the runtimeconfig.dev.json file, which is only used during development, not production. Use the template file to set values which are required to be in the regular, production version of runtimeconfig.json.
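For reference, the AdditionalProbingPath item above ends up in a runtimeconfig.dev.json file shaped roughly like this sketch (the path is illustrative for a Windows machine):

```json
{
  "runtimeOptions": {
    "additionalProbingPaths": [
      "C:\\Users\\nmcmaster\\.nuget\\packages"
    ]
  }
}
```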

Runtime settings

“configProperties” is a list of key-value pairs given to the runtime. These can be used in almost any way imaginable, but there is a short list of well-defined and commonly used settings.

JSON:

{
  "runtimeOptions": {
    "configProperties": {
      "key": "value"
    }
  }
}

Well-known runtime settings

Setting name                           | Type    | Description
System.GC.Server                       | boolean | Enable server garbage collection.
System.GC.Concurrent                   | boolean | Enable concurrent garbage collection.
System.GC.RetainVM                     | boolean | Put segments that should be deleted on a standby list for future use instead of releasing them back to the OS.
System.Runtime.TieredCompilation       | boolean | Enable tiered compilation.
System.Threading.ThreadPool.MinThreads | integer | Override MinThreads for the ThreadPool worker pool.
System.Threading.ThreadPool.MaxThreads | integer | Override MaxThreads for the ThreadPool worker pool.
System.Globalization.Invariant         | boolean | Enabling invariant mode disables globalization behavior.

These settings can be configured in your .csproj file. The best way to find more is to look at the Microsoft.NET.Sdk.targets file itself.

<PropertyGroup>
  <ConcurrentGarbageCollection>true</ConcurrentGarbageCollection>
  <ServerGarbageCollection>true</ServerGarbageCollection>
  <RetainVMGarbageCollection>true</RetainVMGarbageCollection>
  <ThreadPoolMinThreads>1</ThreadPoolMinThreads>
  <ThreadPoolMaxThreads>100</ThreadPoolMaxThreads>
  <!-- Supported as of .NET Core SDK 3.0 Preview 1 -->
  <TieredCompilation>true</TieredCompilation>
  <InvariantGlobalization>true</InvariantGlobalization>
</PropertyGroup>
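The SDK maps each of those MSBuild properties to the corresponding well-known key. With all of them set, the generated runtimeconfig.json should contain roughly this (a sketch; your SDK version may only emit the properties you explicitly set):

```json
{
  "runtimeOptions": {
    "configProperties": {
      "System.GC.Concurrent": true,
      "System.GC.Server": true,
      "System.GC.RetainVM": true,
      "System.Threading.ThreadPool.MinThreads": 1,
      "System.Threading.ThreadPool.MaxThreads": 100,
      "System.Runtime.TieredCompilation": true,
      "System.Globalization.Invariant": true
    }
  }
}
```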

Additional runtime settings

.NET Core allows you to specify your own settings. These values can be retrieved using System.AppContext.GetData.

Note: this is not a suitable alternative to configuration builders.

JSON:

{
  "runtimeOptions": {
    "configProperties": {
      "ArbitraryNumberSetting": 2,
      "ArbitraryStringSetting": "red",
      "ArbitraryBoolSetting": true
    }
  }
}

.csproj:

<ItemGroup>
  <RuntimeHostConfigurationOption Include="ArbitraryNumberSetting" Value="2" />
  <RuntimeHostConfigurationOption Include="ArbitraryStringSetting" Value="red" />
  <RuntimeHostConfigurationOption Include="ArbitraryBoolSetting" Value="true" />
</ItemGroup>

In C#,

// "red"
var color = System.AppContext.GetData("ArbitraryStringSetting") as string;

More info

See Part 1 for more details about this file and how to use it. I also recommend searching through the Markdown files in the repositories under https://github.com/dotnet for more details on how these various settings are used.

Deep-dive into .NET Core primitives, part 2: the shared framework
Published 2018-08-29 at https://natemcmaster.com/blog/2018/08/29/netcore-primitives-2

Shared frameworks have been an essential part of .NET Core since 1.0. ASP.NET Core shipped as a shared framework for the first time in 2.1. You may not have noticed if things are working smoothly, but there have been some bumps and ongoing discussion about its design. In this post, I will dive deep into the shared frameworks and talk about some common developer pitfalls.

This post is part of a series on .NET Core primitives.

The Basics

.NET Core apps run in one of two modes: framework-dependent or self-contained. On my MacBook, a minimal self-contained ASP.NET Core application is 93 MB and has 350 files. By contrast, a minimal framework-dependent app is 239 KB and has 5 files.

You can produce both kinds of apps with these command line instructions.

dotnet new web
dotnet publish --runtime osx-x64 --output bin/self_contained_app/
dotnet publish --output bin/framework_dependent_app/

Screenshot comparing file size of framework dependent and self-contained

When the app runs, it is functionally equivalent in both modes. So why are there different modes? As the docs explain well:

framework-dependent deployment relies on the presence of a shared system-wide version of .NET Core…. [A] self-contained deployment doesn’t rely on the presence of shared components on the target system. All components…are included with the application.

This document does a great job of explaining the advantages of each mode.

The shared framework

To put it simply, a .NET Core shared framework is a folder of assemblies (*.dll files) that are not in the application folder. These assemblies version and release together. This folder is one part of the “shared system-wide version of .NET Core”, and on Windows is usually found in C:\Program Files\dotnet\shared.

When you run dotnet.exe WebApp1.dll, the .NET Core host must

  1. discover the names and versions of your app dependencies
  2. find those dependencies in common locations.

These dependencies are found in a variety of locations, including, but not limited to, the shared frameworks. In a previous post, I briefly explained how the deps.json and runtimeconfig.json files configure the host’s behavior. See that post for more details.
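For a rough idea of what the host reads, here is a heavily trimmed deps.json sketch. This is illustrative only: the app name and the dependency shown are made up, and real files list every package and assembly in the graph.

```json
{
  "runtimeTarget": {
    "name": ".NETCoreApp,Version=v2.1"
  },
  "targets": {
    ".NETCoreApp,Version=v2.1": {
      "WebApp1/1.0.0": {
        "dependencies": { "Newtonsoft.Json": "11.0.2" },
        "runtime": { "WebApp1.dll": {} }
      }
    }
  },
  "libraries": {
    "WebApp1/1.0.0": { "type": "project", "serviceable": false, "sha512": "" }
  }
}
```

The host walks this graph to build the list of assembly names and versions it must locate before the app can start.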

The .NET Core host reads the *.runtimeconfig.json file to determine which shared framework(s) to load. Its contents may look like this:

{
  "runtimeOptions": {
    "framework": {
      "name": "Microsoft.AspNetCore.App",
      "version": "2.1.1"
    }
  }
}

The shared framework name is just that - a name. By convention, this name ends in “.App”, but it could be anything, like “FooBananaShark”.

The shared framework version represents the minimum version. The .NET Core host will never run on a lower version, but it may try to run on a higher one.

Which shared frameworks do I have installed?

Run dotnet --list-runtimes. It will show the names, versions, and locations of shared frameworks.

Comparing Microsoft.NETCore.App, AspNetCore.App, and AspNetCore.All

As of .NET Core 2.2, there are three shared frameworks.

Framework name           | Description
Microsoft.NETCore.App    | The base runtime. It supports things like System.Object, List<T>, string, memory management, file and network IO, threading, etc.
Microsoft.AspNetCore.App | The default web runtime. It imports Microsoft.NETCore.App, and adds API to build an HTTP server using Kestrel, Mvc, SignalR, Razor, and parts of EF Core.
Microsoft.AspNetCore.All | Integrations with third-party stuff. It imports Microsoft.AspNetCore.App. It adds support for EF Core + SQLite, extensions that use Redis, config from Azure Key Vault, and more. (Will be deprecated in 3.0.)

Relationship to the NuGet package

The .NET Core SDK generates the runtimeconfig.json file. In .NET Core 1 and 2, it uses two pieces of project configuration to determine what goes in the framework section of the file:

  1. the MicrosoftNETPlatformLibrary property. By default this is set to "Microsoft.NETCore.App" for all .NET Core projects.
  2. NuGet restore results, which must include a package by the same name.

The .NET Core SDK adds an implicit package reference to Microsoft.NETCore.App to all projects. ASP.NET Core overrides the default by setting MicrosoftNETPlatformLibrary to "Microsoft.AspNetCore.App".
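A project can make the same override itself by setting the property directly. A minimal sketch (the version is illustrative; per the two requirements above, restore must also produce a package with the matching name):

```xml
<PropertyGroup>
  <MicrosoftNETPlatformLibrary>Microsoft.AspNetCore.App</MicrosoftNETPlatformLibrary>
</PropertyGroup>
<ItemGroup>
  <!-- NuGet restore must include a package by the same name
       for the SDK to emit it into runtimeconfig.json -->
  <PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.1" />
</ItemGroup>
```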

The NuGet package, however, does not provide the shared framework. I repeat: the NuGet package does not provide the shared framework. (I’ll repeat once more below.) The NuGet package only provides a set of APIs used by the compiler and a few other SDK bits. The shared framework files come from runtime installers found on https://aka.ms/dotnet-download, or bundled in Visual Studio, Docker images, and some Azure services.

Version roll-forward

As mentioned above, runtimeconfig.json is a minimum version. The actual version used depends on a rollforward policy documented in great detail by Microsoft. The most common way this applies is:

  • If an app minimum version is 2.1.0, the highest 2.1.* version will be loaded.

I’ll go into this file in more detail in .NET Core Primitives, part 3.

Layered shared frameworks

This feature was added in .NET Core 2.1.

Shared frameworks can depend on other shared frameworks. This was introduced to support ASP.NET Core which converted from a package runtime store to a shared framework.

For example, if you look inside the $DOTNET_ROOT/shared/Microsoft.AspNetCore.All/$version/ folder, you will see a Microsoft.AspNetCore.All.runtimeconfig.json file.

$ cat /usr/local/share/dotnet/shared/Microsoft.AspNetCore.All/2.1.2/Microsoft.AspNetCore.All.runtimeconfig.json
{
  "runtimeOptions": {
    "tfm": "netcoreapp2.1",
    "framework": {
      "name": "Microsoft.AspNetCore.App",
      "version": "2.1.2"
    }
  }
}

Multi-level lookup

This feature was added in .NET Core 2.0.

The host will probe several locations to find a suitable shared framework. It starts by looking in the dotnet root, which is the directory containing the dotnet executable. This can also be overridden by setting the DOTNET_ROOT environment variable to a folder path. The first location probed is:

$DOTNET_ROOT/shared/$name/$version

If a folder is not there, it will attempt to look in pre-defined global locations using multi-level lookup. This can be turned off by setting the environment variable DOTNET_MULTILEVEL_LOOKUP=0. The default global locations are:

OS      | Location
Windows | C:\Program Files\dotnet (64-bit processes); C:\Program Files (x86)\dotnet (32-bit processes)
macOS   | /usr/local/share/dotnet
Unix    | /usr/share/dotnet

The host will probe for directories in:

$GLOBAL_DOTNET_ROOT/shared/$name/$version

ReadyToRun

The assemblies in the shared frameworks are pre-optimized with a tool called crossgen. This produces “ReadyToRun” versions of .dlls which are optimized for specific operating systems and CPU architectures. The primary performance gain is that this reduces the amount of time the JIT spends preparing code on startup.

Pitfalls

I think every .NET Core developer has fallen into one of these pitfalls at some point. I’ll attempt to explain how this happens.

HTTP Error 502.5 Process Failure

Screenshot of HTTP 502.5 error

This is by far the most common pitfall when hosting ASP.NET Core in IIS or running on Azure App Service. This typically happens after a developer upgraded a project, or when an app is deployed to a machine which hasn’t been updated recently. The real error is often that a shared framework cannot be found, and the .NET Core application cannot start without it. When dotnet fails to launch the app, IIS issues the HTTP 502.5 error, but does not surface the internal error message.

“The specified framework was not found”

It was not possible to find any compatible framework version
The specified framework 'Microsoft.AspNetCore.App', version '2.1.3' was not found.
  - Check application dependencies and target a framework version installed at:
      /usr/local/share/dotnet/
  - Installing .NET Core prerequisites might help resolve this problem:
      http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409
  - The .NET Core framework and SDK can be installed from:
      https://aka.ms/dotnet-download
  - The following versions are installed:
      2.1.1 at [/usr/local/share/dotnet/shared/Microsoft.AspNetCore.App]
      2.1.2 at [/usr/local/share/dotnet/shared/Microsoft.AspNetCore.App]

This error is often found lurking behind HTTP 502.5 errors or Visual Studio Test Explorer failures.

This happens when the runtimeconfig.json file specifies a framework name and version, and the host cannot find an appropriate version using the multi-level lookup and rollforward policies, as explained above.
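To make this concrete: on the machine from the error message above, which only has 2.1.1 and 2.1.2 installed, a runtimeconfig.json like this sketch would trigger the failure, because no installed version satisfies the 2.1.3 minimum:

```json
{
  "runtimeOptions": {
    "framework": {
      "name": "Microsoft.AspNetCore.App",
      "version": "2.1.3"
    }
  }
}
```

Installing a 2.1.x runtime at or above 2.1.3, or lowering the minimum version, resolves it.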

Updating the NuGet package for Microsoft.AspNetCore.App

The NuGet package for Microsoft.AspNetCore.App does not provide the shared framework. It only provides the APIs used by the C#/F# compiler and a few SDK bits. You must download and install the shared framework separately. See https://aka.ms/dotnet-download.

Also, because of rollforward policies, you don’t need to update the NuGet package version to get your app to run on a new shared framework version.

It was probably a design mistake on the part of the ASP.NET Core team (which I’m on) to represent the shared framework as a NuGet package in the project file. The packages which represent shared frameworks are not normal packages. Unlike most packages, they are not self-sufficient. It is reasonable to expect that when a project uses a <PackageReference>, NuGet is able to install everything needed, and frustrating that these special packages deviate from the pattern. Various proposals have been made to fix this. I’m hopeful one will land soon-ish.

New project templates and docs for ASP.NET Core 2.1 showed users that they only needed to have this line in their project.

<PackageReference Include="Microsoft.AspNetCore.App" />

All other <PackageReference>s must include a Version attribute. The version-less package reference only works if the project begins with <Project Sdk="Microsoft.NET.Sdk.Web">, and only works for the Microsoft.AspNetCore.{App, All} packages. The Web SDK will automatically pick a version of these packages based on other values in the project, like <TargetFramework> and <RuntimeIdentifier>.

This magic does not work if you specify a version on the package reference element, or if you’re not using the Web SDK. It’s hard to recommend a good solution because the best approach depends on your level of understanding and the project type.
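Putting those rules together, a minimal project where the version-less reference works might look like this sketch (targeting .NET Core 2.1):

```xml
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- No Version attribute: the Web SDK picks one based on TargetFramework -->
    <PackageReference Include="Microsoft.AspNetCore.App" />
  </ItemGroup>
</Project>
```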

Publish trimming

When you run dotnet publish to create a framework-dependent app, the SDK uses the NuGet restore result to determine which assemblies should be in the publish folder. Some will be copied from NuGet packages, and others are not because they are expected to be in the shared frameworks.

This can easily go wrong because ASP.NET Core is available as a shared framework and as NuGet packages. The trimming attempts to do some graph math to examine transitive dependencies, upgrades, etc., and pick the right files accordingly.

Take for example this project:

<PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.1" />
<PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.1.9" />

MVC is actually part of Microsoft.AspNetCore.App, but when dotnet publish runs, it sees that your project has decided to upgrade “Microsoft.AspNetCore.Mvc.dll” to a version which is higher than what Microsoft.AspNetCore.App 2.1.1 includes, so it will put Mvc.dll into your publish folder.

This is less than ideal because your application grows in size and you don’t get a ReadyToRun-optimized version of Microsoft.AspNetCore.Mvc.dll. This can happen unintentionally if you are upgraded transitively through a ProjectReference or via a third-party dependency.
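If the upgrade was unintentional, one way to avoid it is to drop the explicit Mvc package reference and let the shared framework supply the assembly. A sketch of the corrected project fragment (version illustrative):

```xml
<ItemGroup>
  <!-- Mvc comes from the Microsoft.AspNetCore.App shared framework,
       so no separate Microsoft.AspNetCore.Mvc reference is needed -->
  <PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.1" />
</ItemGroup>
```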

Confusing the target framework moniker with the shared framework

It’s easy to think that "netcoreapp2.0" == "Microsoft.NETCore.App, v2.0.0". This is not true. A target framework moniker (aka TFM) is specified in a project using the <TargetFramework> element. “netcoreapp2.0” is meant to be a human-friendly way to express which version of .NET Core you would like to use.

The pitfall of a TFM is that it is too short. It cannot express things like multiple shared frameworks, patch-specific versioning, version rollforward, output type, and self-contained vs framework-dependent deployment. The SDK will attempt to infer many of these settings from the TFM, but it cannot infer everything.

So, more accurately, "netcoreapp2.0" implies "Microsoft.NETCore.App, at least v2.0.0".

Confusing project settings

The final pitfall I will mention is about project settings. There are many, and the terminology and setting names don’t always line up. It’s a confusing set of terms, so this one isn’t your fault if you get them mixed up.

Below, I’ve listed some common project settings and what they actually mean.

<PropertyGroup>
  <TargetFramework>netcoreapp2.1</TargetFramework>
  <!--
    Actual meaning:
      * The API set version to use when resolving compilation references from NuGet packages.
  -->

  <TargetFrameworks>netcoreapp2.1;net471</TargetFrameworks>
  <!--
    Actual meaning:
      * Compile for two different API version sets. This does not represent multi-layered shared frameworks.
  -->

  <MicrosoftNETPlatformLibrary>Microsoft.AspNetCore.App</MicrosoftNETPlatformLibrary>
  <!--
    Actual meaning:
      * The name of the top-most shared framework
  -->

  <RuntimeFrameworkVersion>2.1.2</RuntimeFrameworkVersion>
  <!--
    Actual meaning:
      * version of the implicit package reference to Microsoft.NETCore.App which then becomes
        the _minimum_ shared framework version.
  -->

  <RuntimeIdentifier>win-x64</RuntimeIdentifier>
  <!--
    Actual meaning:
      * Operating system kind + CPU architecture
  -->

  <RuntimeIdentifiers>win-x64;win-x86</RuntimeIdentifiers>
  <!--
    Actual meaning:
      * A list of operating systems and CPU architectures which this project _might_ run on.
        You still have to select one by setting RuntimeIdentifier.
  -->

</PropertyGroup>

<ItemGroup>

  <PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.2" />
  <!--
    Actual meaning:
      * Use the Microsoft.AspNetCore.App shared framework.
      * Minimum version = 2.1.2
  -->

  <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.1.2" />
  <!--
    Actual meaning:
      * Use the Microsoft.AspNetCore.Mvc package.
      * Exact version = 2.1.2
  -->

  <FrameworkReference Include="Microsoft.AspNetCore.App" />
  <!--
    Actual meaning:
      * Use the Microsoft.AspNetCore.App shared framework.
    (This is new and unreleased...see https://github.com/dotnet/sdk/pull/2486)
  -->

</ItemGroup>

Closing

The shared framework is an optional feature of .NET Core, and I think it’s a reasonable default for most users despite the pitfalls. I still think it’s good for .NET Core developers to understand what goes on under the hood, and hopefully this was a good overview of the shared frameworks feature. I tried to link to official docs and guidance where possible so you can find more info. If you have more questions, leave a comment below.

More info

Deep-dive into .NET Core primitives: inside a .dll file
Published 2018-07-28 at https://natemcmaster.com/blog/2018/07/28/runtime-vs-refs

When I started working with C# and .NET, clicking the “Start” button in Visual Studio was magical, but also dissatisfying. Dissatisfying – not because I want to write code in assembly – but because I didn’t know what “Start” did. So, I started to dig. In a previous post, I showed some of the important files used in a .NET Core application. In this post, I’m going to look even closer at one particular file, the .dll. If you’re new to .NET Core and want to peek under the hood, this is a good post for you. If you’re already a .NET developer but wonder what actually happens with your *.dll files, I’ll cover that, too.

I’m going to abandon the magic of Visual Studio and stick to command-line tools. To play with this yourself, you’ll need the .NET Core 2.1 SDK. These steps were written for macOS, but they work on Linux and Windows, too, if you adjust file paths to C:\Program Files\dotnet\ and dotnet.exe. You’ll also need to use the “ildasm” command, which is available in the Developer Command Prompt for VS 2017. If you’re on macOS or Linux, dotnet-ildasm is a good-enough replacement.

See also Deep-dive into .NET Core primitives: deps.json, runtimeconfig.json, and dll’s.

ldstr "Hello World!"

C# must be compiled before it can execute. The C# compiler (csc) turns .cs files into a .dll. A .dll file is a portable executable, and it primarily contains something called Common Intermediate Language, or IL.

In C#, a simple method looks like this, and is stored in a plain text file.

static void Main(string[] args)
{
    Console.WriteLine("Hello World!");
}

The .dll contains the IL version, stored in a binary format. By calling ildasm Sample.dll on the command line, you can create a plain-text representation of that binary format. The matching IL looks like this:

.method private hidebysig static void  Main(string[] args) cil managed
{
  .entrypoint
  .maxstack  8
  IL_0000:  nop
  IL_0001:  ldstr      "Hello World!"
  IL_0006:  call       void [System.Console]System.Console::WriteLine(string)
  IL_000b:  nop
  IL_000c:  ret
}

External API

Here is the complete IL for a “Hello World” console app. It’s only 79 lines. If you skim through the IL, you may have noticed something: the IL does not contain the definition for Console.WriteLine. Instead, the IL contains this near the top:

.assembly extern System.Console
{
  .publickeytoken = (B0 3F 5F 7F 11 D5 0A 3A )
  .ver 4:1:1:0
}

This is called a reference. My assembly, Sample.dll, references another assembly named System.Console. And to be more specific, it references System.Console, version 4.1.1.0, with a strong name public key token of B03F5F7F11D50A3A.

So where can I find System.Console? Trick question, sort of.

The compilation reference to System.Console.dll

As discussed in more detail in Part 1, the C# compiler is a console command which supports a flag -reference. Visual Studio and the dotnet command line, through wizardry I won’t cover now, call the C# compiler with arguments like this:

/usr/local/share/dotnet/dotnet /usr/local/share/dotnet/sdk/2.1.301/Roslyn/bincore/csc.dll \
    -reference:/Users/nmcmaster/.nuget/packages/microsoft.netcore.app/2.1.0/ref/netcoreapp2.1/System.Console.dll \
    -reference:/Users/nmcmaster/.nuget/packages/microsoft.netcore.app/2.1.0/ref/netcoreapp2.1/System.Runtime.dll \
    -out:bin/Debug/netcoreapp2.1/Sample.dll \
    Program.cs

The System.Console.dll in my NuGet cache is the compilation reference, which defines the System.Console assembly. The C# compiler read this file, which is how it determined that:

  • the System.Console assembly is version 4.1.1.0 and has a public key token B03F5F7F11D50A3A
  • this assembly defines a type named ‘Console’ in the ‘System’ namespace
  • this type has a static method named ‘WriteLine’ which accepts one string argument

Now, if we ildasm this System.Console.dll file, we’ll see something interesting. The IL for this method looks like this:

.method public hidebysig static void  WriteLine(string 'value') cil managed
{
  // Code size       1 (0x1)
  .maxstack  8
  IL_0000:  ret
}

Let me translate this back to C#.

namespace System
{
    public class Console
    {
        public static void WriteLine(string value)
        {
            return;
        }
    }
}

…hold up…how can that possibly work?

This method is empty because the .NET Core SDK is taking advantage of an important feature of .NET: dynamic linking, also called assembly binding. .NET Core needs to run on Windows, Linux, macOS and more. Rather than produce a single System.Console.dll file which has to work on every possible operating system and CPU (some which may not even exist yet), the .NET Core team creates multiple variants of System.Console.dll. The one the compiler read is called the reference assembly, and its purpose is to provide the C# compiler with the available API, but not the implementation. Think of it like a C++ header file. This assembly has intentionally been stripped of implementation, so all methods do nothing or return null.

The runtime reference for System.Console

When you execute a .NET Core app, a different System.Console.dll file is used. You can find its location by running an app with this:

Console.WriteLine(typeof(Console).Assembly.Location);

On my computer, this file was here:

/usr/local/share/dotnet/shared/Microsoft.NETCore.App/2.1.1/System.Console.dll

This file is the runtime reference, aka the implementation assembly.

How did .NET Core find this file? It used some heuristics based on the deps.json and runtimeconfig.json files that sit next to my Sample.dll file.

Now, if we ildasm the implementation version of System.Console.dll file, we’ll see that it’s actually doing something:

.method public hidebysig static void  WriteLine(string 'value') cil managed noinlining
{
  .maxstack  8
  IL_0000:  call       class [System.Runtime.Extensions]System.IO.TextWriter System.Console::get_Out()
  IL_0005:  ldarg.0
  IL_0006:  callvirt   instance void [System.Runtime.Extensions]System.IO.TextWriter::WriteLine(string)
  IL_000b:  ret
}

Closing

Assemblies are an essential primitive to understand to know how .NET Core really works. Most developers don’t really need to know all the details of IL and .dlls, but it’s good to have a general understanding of why they exist and what they do. This is only the tip of the iceberg. There are many, many more things involved in making a .dll execute in a .NET Core app, and lots of things I would love to explain. What happens if the compilation and runtime references are different? What’s a strong name? What’s crossgen? Can I obfuscate IL? etc. But I’ll leave those for another post, maybe.

More info

.NET Core Plugins
Published 2018-07-25 at https://natemcmaster.com/blog/2018/07/25/netcore-plugins

I recently published a new package for .NET Core developers that want to implement a plugin system. Dynamic assembly loading in .NET Core is difficult to get right. The API in this package wrangles the complexity through a feature called ‘load contexts’. In this post, I’ll walk through problems that motivated the creation of this project, and explain what the API can do. My hope is that this plugin API will let you focus more on writing your app, and put an end to the inevitable mess of creating your own assembly loading code.

TL;DR?

Introducing McMaster.NETCore.Plugins

The foundation of McMaster.NETCore.Plugins is AssemblyLoadContext, or ALC (more on this below). The API in McMaster.NETCore.Plugins ties together an understanding of how ALC works with how dotnet.exe (aka corehost) reads deps.json and runtimeconfig.json files to find dependencies. In the end, you should be able to use this new API with just a little bit of code.

using McMaster.NETCore.Plugins;

PluginLoader loader = PluginLoader.CreateFromAssemblyFile("./plugins/MyPlugin1.dll",
                        sharedTypes: new[] { typeof(ILogger) });
Assembly pluginDll = loader.LoadDefaultAssembly();

Once you have an Assembly object, you can use reflection to initialize and run code from the plugin.

using System.Reflection;

// For example, you could find and invoke a static method named Start on a type named Plugin.
Type pluginType = pluginDll.GetTypes().First(t => t.Name == "Plugin");
MethodInfo startMethod = pluginType.GetMethod("Start");
startMethod.Invoke(null, new object[] { myLogger, "arg1", "arg2" });

The plugins API provides a solution for managing common problems with assembly loading code, such as

  • finding dependencies of assemblies to load
  • finding unmanaged binaries to load
  • dealing with conflicts between different dependency versions
  • type unification - establishing consistent type identity between plugin and host app
  • isolation - keeping assemblies and their dependencies separated from each other and the host app

Motivations: the trouble with Assembly.LoadFrom

If you’ve ever tried to use Assembly.LoadFrom, you may be familiar with these issues. If you’re not, let me give you a quick demo.

Let’s say you want to load a new .dll file into an app. Assembly.LoadFrom is a tempting choice because it will get you part of what you want.

var pluginDll = Assembly.LoadFrom("./plugin/MyPlugin1.dll");

For simple plugins, it works great until…

Problem 1 - locating dependency assemblies

Let’s say MyPlugin1 uses JSON.NET, but the app calling Assembly.LoadFrom does not. If you try to do anything with the Assembly object you get from LoadFrom, you’ll get

System.IO.FileNotFoundException: Could not load file or assembly ‘Newtonsoft.Json, Version=11.0.0.0, Culture=neutral, PublicKeyToken=30ad4fe6b2a6aeed’. The system cannot find the file specified.

Even if you copy Newtonsoft.Json.dll into the same folder, .NET Core will not load it. Workarounds exist for this problem, such as hooking into assembly resolving events, but these don’t resolve the next set of issues.
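As a rough sketch of that workaround (the "./plugin" directory is carried over from the example above, and the naive name-to-file probing is an assumption), you can subscribe to the default load context’s Resolving event and probe the plugin folder yourself:

```csharp
using System;
using System.IO;
using System.Reflection;
using System.Runtime.Loader;

// Hook the Resolving event on the default load context. When a dependency
// such as Newtonsoft.Json can't be found, probe the plugin's folder for a
// file matching the simple assembly name.
AssemblyLoadContext.Default.Resolving += (context, assemblyName) =>
{
    string candidate = Path.Combine("./plugin", assemblyName.Name + ".dll");

    // LoadFromAssemblyPath requires an absolute path.
    return File.Exists(candidate)
        ? context.LoadFromAssemblyPath(Path.GetFullPath(candidate))
        : null; // returning null means "not resolved here"
};
```

This gets you past the FileNotFoundException for simple cases, but as the next problems show, an event hook alone can’t fix version conflicts or give you isolation.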

Problem 2 - dependency mismatch

Let’s say MyPlugin1 uses JSON.NET 11, but the app calling LoadFrom used JSON.NET 10. You will still run into System.IO.FileNotFoundException.

You won’t always get this output when dependency versions don’t match. If the situation is reversed – MyPlugin1 depends on a lower version of something than the app has – you can get different errors if there were breaking changes. If you’re lucky, these surface as MissingMethodException or TypeLoadException. If you’re unlucky, your plugin will just function in a way you don’t expect because it’s running on a different version of its dependency.

Problem 3 - side by side and race conditions

If you resolved Problem 1 with some clever workarounds, you will still have issues when you need to load multiple plugins with mixed dependencies. The first plugin to load “wins”. So, if MyPlugin1 depends on JSON.NET 10 and MyPlugin2 depends on 11, you might get 10 or 11, but it will vary based on the order in which you called Assembly.LoadFrom.

var pluginDll1 = Assembly.LoadFrom("./plugin/MyPlugin1/MyPlugin1.dll");
var pluginDll2 = Assembly.LoadFrom("./plugin/MyPlugin2/MyPlugin2.dll");

These problems may be resolvable if you know all the plugins ahead of time and can merge them to use the same version of common dependencies. But assuming this is not possible or reasonable, you can easily get into a situation where plugins break each other. And you have to pick a version. Highest wins is not always right, and Assembly.LoadFrom doesn’t give you a way to use multiple versions of an assembly with the same name.

Problem 4 - native libraries

If you need to use [DllImport] and extern to P/Invoke unmanaged code, Assembly.LoadFrom doesn’t help: it gives you no control over how native binaries are located. In fact, I’m not really sure how you would do this without AssemblyLoadContext.

AssemblyLoadContext: the dark horse

System.Runtime.Loader.AssemblyLoadContext, aka ALC, provides an essential API for defining dynamic assembly loading behavior. This is one of my favorite little-known APIs in .NET Core. This API provides:

  • Assembly loading in partial isolation. You can create multiple load contexts. Each context can load independent versions of an assembly with the same name.
  • Bring-your-own-resolution. AssemblyLoadContext is an abstract class with some virtual base members you can override. This allows you to implement your own resolution for dependency look up.
  • AssemblyLoadContext.LoadUnmanagedDll. This is basically the only good way to load unmanaged binaries dynamically.

While AssemblyLoadContext is a great API, it’s currently lacking docs. (It’s on the TODO list.) It’s also fairly low-level, so you need a certain level of understanding to implement a load context. By default, ALC does not provide any resolution logic. You might expect there to be some sort of API in .NET Core for reading .deps.json and runtimeconfig.json files, but there isn’t. This is why I called ALC a ‘dark horse’. It’s a really good API, but few know much about it.
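To make this concrete, here is a minimal custom load context. It is a sketch, not production code: the simple name-to-file probing stands in for real .deps.json-driven resolution, and it assumes the plugin directory path is absolute.

```csharp
using System;
using System.IO;
using System.Reflection;
using System.Runtime.Loader;

// A bare-bones custom ALC. A real implementation would resolve exact
// dependency versions from .deps.json instead of probing by file name.
class PluginLoadContext : AssemblyLoadContext
{
    private readonly string _pluginDir; // assumed to be an absolute path

    public PluginLoadContext(string pluginDir) => _pluginDir = pluginDir;

    protected override Assembly Load(AssemblyName assemblyName)
    {
        string path = Path.Combine(_pluginDir, assemblyName.Name + ".dll");

        // Returning null falls back to the default load context, which is
        // how types get unified with the host app.
        return File.Exists(path) ? LoadFromAssemblyPath(path) : null;
    }

    protected override IntPtr LoadUnmanagedDll(string unmanagedDllName)
    {
        string path = Path.Combine(_pluginDir, unmanagedDllName + ".dll");
        return File.Exists(path)
            ? LoadUnmanagedDllFromPath(path)
            : IntPtr.Zero; // fall back to default native library search
    }
}
```

Each instance of this class is an isolated context: two instances can load different versions of an assembly with the same name.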

AssemblyLoadContextBuilder: build your own ALC

To make ALC easier to work with, I’ve written an API called McMaster.NETCore.Plugins.Loader.AssemblyLoadContextBuilder. This API creates a new AssemblyLoadContext with resolving behavior based on information from various sources. Some of the methods available on this builder include:

  • SetBaseDirectory - This directory is used as the starting point for loading assemblies.
  • PreferLoadContextAssembly / PreferDefaultLoadContextAssembly - specify, by assembly name, which assemblies should be resolved to a common version shared by every plugin and the app (the default load context), and which assemblies can use versions which are unique.
  • AddProbingPath - additional locations for finding dependencies
  • AddAdditionalProbingPathFromRuntimeConfig - add additional probing paths from a runtimeconfig.json file
  • AddManagedLibrary - add specific details about an assembly dependency to be loaded
  • AddNativeLibrary - add specific details about an unmanaged binary to be loaded
  • AddDependencyContext - add managed and native libraries as described in a .deps.json file
  • Finally, .Build() produces a new ALC. Multiple contexts can be created from the same builder.
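Chaining these together might look roughly like the following. The directory paths are hypothetical, and the parameter types are my assumptions from the method names above; consult the project README for the authoritative signatures.

```csharp
using System.Reflection;
using System.Runtime.Loader;
using McMaster.NETCore.Plugins.Loader;

// Sketch: build an ALC for one plugin, unifying the logging abstractions
// with the host while letting everything else resolve per-plugin.
AssemblyLoadContext context = new AssemblyLoadContextBuilder()
    .SetBaseDirectory("/apps/myapp/plugins/MyPlugin1")
    .PreferDefaultLoadContextAssembly(
        new AssemblyName("Microsoft.Extensions.Logging.Abstractions"))
    .AddProbingPath("/apps/myapp/shared-packages")
    .Build();
```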

PluginLoader: bring it all together

McMaster.NETCore.Plugins.PluginLoader simplifies assembly loading even more by hiding most of the details of AssemblyLoadContextBuilder behind a smaller API. This is the default entrypoint, which should be sufficient for many plugin scenarios. It uses the ALC builder and a set of well-known conventions to construct a rich load context.

As mentioned above, you need to first create a loader.

PluginLoader loader = PluginLoader.CreateFromAssemblyFile(
    assemblyFile: "./plugins/MyPlugin1.dll",
    sharedTypes: new[] { typeof(ILogger) });

The sharedTypes parameter is important: it defines types which must be exchanged between the plugin and the host. These types are used to ensure consistent type identity. Read more details about this here.

Once you have the loader, you can then use PluginLoader.LoadDefaultAssembly() or LoadAssembly(AssemblyName) to get System.Reflection.Assembly objects. You can get from this object to executing code using a little bit of reflection.
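For example, assuming the host and plugin both compile against a hypothetical shared IPlugin interface that was passed in sharedTypes, the cast below succeeds because both sides resolve IPlugin to the same type identity:

```csharp
using System;
using System.Linq;
using System.Reflection;

// IPlugin is a hypothetical shared contract, e.g.:
// public interface IPlugin { void Start(); }
Assembly pluginAssembly = loader.LoadDefaultAssembly();
Type pluginType = pluginAssembly.GetTypes()
    .First(t => typeof(IPlugin).IsAssignableFrom(t) && !t.IsAbstract);

// This cast only works because IPlugin was listed in sharedTypes,
// unifying its identity between the host and the plugin context.
var plugin = (IPlugin)Activator.CreateInstance(pluginType);
plugin.Start();
```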

UPDATE Aug. 28, 2019 This following paragraph appeared in the original post, but I’ve since abandoned this config file. It turns out no one really needed this and it was overly complicated.

Furthermore, I’ve begun work to define a way to express plugin behavior through config files. While this is in its early stages, the vision for config files is that you can define plugin behavior externally from the app (if you want) so you can make decisions about how plugins interact with the host app or other plugins.

PluginLoader loader = PluginLoader.CreateFromConfigFile(
    configFile: "./plugins/config.xml",
    sharedTypes: new[] { typeof(ILogger) });

Demo

A full example of the API in action can be seen here: https://github.com/natemcmaster/DotNetCorePlugins/tree/master/samples/.

This demo includes a fully-working ASP.NET Core app which has two plugins loaded side-by-side. The plugins use type unification to ensure the plugin can interact with the IServiceCollection and IApplicationBuilder of the host application.

Closing

More reading

For more information, I recommend the following articles:

Why it’s still experimental

In PluginLoader, I’ve done my best to imitate most of the behaviors of corehost; however, there are some gaps I haven’t covered.

  • Unloading - once a plugin is loaded, the files it uses are locked by the process. The only way to unload a plugin is by killing the host app. Hopefully one day, .NET Core will implement collectible ALCs, which will enable this feature. UPDATE: Aug. 28, 2019 - this was added in v0.3.0.
  • Localization and resource assemblies - if you have locale-specific resource assemblies, they’re not automagically loaded yet. UPDATE: Aug. 28, 2019 - this was fixed in v0.2.0.
  • Conflict resolution - I haven’t yet defined behavior for what to do when there are multiple sources for the same assembly. For example, what if both the shared runtime and a local copy of the same binary exist, differing only by file version? TBD.
  • Perf - I haven’t taken time to investigate performance, yet. Before I would recommend this for production, I want to take a closer look at memory impact, CPU throughput, etc.

Plus, there is more work to be done on the “plugin config file” idea, API refinements, bugs to squash, etc.

I would not recommend this yet for production-critical apps, but I hope to get it there. The project is open source, and I’m happy to take contributions. Give it a shot and let me know what you think.

]]>
Nate
Configuring ASP.NET Core, webpack, and hot module replacement (hmr) for fast TypeScript development2018-07-05T00:00:00-07:002018-07-05T00:00:00-07:00https://natemcmaster.com/blog/2018/07/05/aspnetcore-hmrRecently, I spent a weekend banging my head against the wall as I tried to figure out how to upgrade a personal project to webpack 4, TypeScript 2.9, and React (used to be AngularJS 1.6). I finally got it all working together – and even got hot module replacement (hmr) working. TL;DR? Checkout the code here: https://github.com/natemcmaster/aspnetcore-webpack-hmr-demo

The important bits:

Use the WebpackDevMiddleware

This middleware is built into ASP.NET Core 2.1, but you have to specifically add an option to configure HMR. Add this to your Startup.cs file.

// requires: using Microsoft.AspNetCore.SpaServices.Webpack;
app.UseWebpackDevMiddleware(new WebpackDevMiddlewareOptions
{
    HotModuleReplacement = true
});

See in source

Use babel-core and ES6

HMR was silently failing for a while until I discovered a few knobs in awesome-typescript-loader. After a bunch of GitHub spelunking, I discovered that I needed these magical settings in webpack.config.js.

// webpack.config.js
{
    test: /\.tsx?$/,
    include: /ClientApp/,
    loader: [
        {
            loader: 'awesome-typescript-loader',
            options: {
                useCache: true,
                useBabel: true,
                babelOptions: {
                    babelrc: false,
                    plugins: ['react-hot-loader/babel'],
                }
            }
        }
    ]
}

Also, you may need to update your tsconfig.json file to target ES6.

{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs",
    "jsx": "react"
  }
}

See in source

react-hot-loader 4

If you’ve used previous versions, consider upgrading to version 4. Its usage is super simple now. Here’s a minimal React app with hmr.

import * as React from 'react';
import * as ReactDOM from 'react-dom';
import { hot } from 'react-hot-loader';

const App: React.SFC = () => <div>Hello, hot reloading</div>;

const HotApp = hot(module)(App);

ReactDOM.render(<HotApp />, document.getElementById('root'));

A few other goodies

I prefer Yarn to npm because it is faster and deterministic, and it’s not too hard to integrate Yarn with the .NET Core command line. Here are some MSBuild targets you can add to your project to light up Yarn integration:
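(The original targets were embedded from a gist. As a hedged sketch of the idea, a minimal version might look like the following; the target name and the use of Yarn’s integrity file are my assumptions, so adjust them to fit your project.)

```xml
<!-- Hypothetical sketch: run "yarn install" before Build, and let MSBuild's
     incremental build skip it when node_modules is already up to date with
     package.json and yarn.lock. -->
<Target Name="YarnInstall"
        BeforeTargets="Build"
        Inputs="package.json;yarn.lock"
        Outputs="node_modules/.yarn-integrity">
  <Exec Command="yarn install" WorkingDirectory="$(MSBuildProjectDirectory)" />
</Target>
```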

]]>
Nate