Microsoft for Developers
https://devblogs.microsoft.com/
Get the latest information, insights, and news from Microsoft.

Awesome GitHub Copilot just got a website, and a learning hub, and plugins!
https://devblogs.microsoft.com/blog/awesome-github-copilot-just-got-a-website-and-a-learning-hub-and-plugins
Mon, 16 Mar 2026 17:00:33 +0000

The post Awesome GitHub Copilot just got a website, and a learning hub, and plugins! appeared first on Microsoft for Developers.

Back in July, we launched the Awesome GitHub Copilot Customizations repo with a simple goal: give the community a place to share custom instructions, prompts, and chat modes to customize the AI responses from GitHub Copilot. We were hoping for maybe one community contribution per week.

That… did not happen.

Instead, you all showed up. In a big way.

The repo now has 175+ agents, 208+ skills, 176+ instructions, 48+ plugins, 7 agentic workflows, and 3 hooks – all contributed by the community.

What started as a curated list has become something much bigger, and we needed to match that energy. The space has evolved so quickly that some of the customizations we originally supported aren’t even a thing any longer (looking at you, prompts and chat modes).

Today we’re announcing the Awesome GitHub Copilot website, a Learning Hub, and a plugin system to make all of Awesome GitHub Copilot easier to use.

The problem with a very long README

Let’s face it, it was a little bit difficult to find what you were looking for in the Awesome Copilot repo.

The repo worked great when it had a couple dozen resources. But with 600+ items? The README (scratch that: the multiple READMEs, one per customization type) turned into a scroll marathon. Finding what you needed meant a lot of Ctrl+F and patience. We needed a better front door.

The new website

The Awesome GitHub Copilot website is built to be easy to navigate (with a memorable URL https://awesome-copilot.github.com!) and deployed on GitHub Pages. It wraps the repo in a proper website with search, so you can find what’s in there without scrolling one of the READMEs forever.

The landing page of the new Awesome GitHub Copilot showing cards to navigate into each of the main sections

The big things:

  • Full-text search across every resource – agents, skills, instructions, hooks, workflows, and plugins. You can narrow results by category.
  • Resource pages for each category with live search, modal previews so you can see what a resource looks like before committing, and direct links back to the source. Plus one-click install into VS Code or VS Code Insiders.
  • The Learning Hub – more on that one below!

The original Awesome Copilot repo itself hasn’t gone anywhere. If you still want to browse via the native GitHub interface, be our guest.

Of course, you still contribute content through the repo. Once your PR has been merged, your new content will show up on the website.

searching for .NET agents

The Learning Hub

It may be fair to sum up developer sentiment around AI tooling right now by saying: “whoa – everything is moving so fast all the time – I cannot keep up!”

Everybody is feeling it. Some of the resources we included when we launched Awesome Copilot back in July 2025 aren’t even things any longer. Anybody remember prompts? Yeah, it’s moving fast.

This is where we hope the Learning Hub will help out. The idea of the Learning Hub is to explain the fundamental concepts behind customizing the AI responses from GitHub Copilot.

In other words – what’s a skill and why is it important? How is a plugin different from a hook?

Then since Awesome Copilot contains ready-to-use examples of all of those – how do you tailor them exactly for your needs? Or write your own from scratch?

That’s what you’ll learn from The Learning Hub.

A screenshot from The Learning Hub showing the What Are Agents, Skills, and Instructions page

 

Plugins

Plugins are how the industry is approaching distribution of the kinds of customization files Awesome Copilot contains. A plugin bundles related agents, skills, and commands into a single installable package – themed collections for specific domains like frontend development, Python, Azure cloud, or team-specific workflows.

Various IDEs or agentic runtimes like GitHub Copilot CLI support marketplaces of plugins. We’re very happy to announce that Awesome GitHub Copilot is a default plugin marketplace for both GitHub Copilot CLI and VS Code!

There are 48+ plugins in the repo today. The website has its own plugin page with search and tag filters.

And you can install any of those plugins as simply as:

 

copilot plugin install <plugin-name>@awesome-copilot

 

What else is new

A few more things worth knowing about. We mentioned above that the landscape is changing quickly, and there are a couple of other new customization types available now:

  • Agentic Workflows are natural-language GitHub Actions that run AI coding agents autonomously. There are 7 examples in the repo right now, covering things like daily issue reports, codeowner file updates, and stale repo detection.
  • Hooks let you set up event-triggered automations during Copilot coding agent sessions – useful for session logging, governance auditing, and custom post-processing.
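To make the event-triggered idea concrete, here is a tiny conceptual sketch in plain Python: a registry that fires callbacks when a named event occurs. The event name and registration API here are invented for illustration – this is not Copilot’s actual hooks format.

```python
# Conceptual sketch only: a minimal event-hook registry illustrating
# event-triggered automation. NOT the actual Copilot hooks format.

class HookRegistry:
    def __init__(self):
        self._hooks = {}  # event name -> list of callbacks

    def on(self, event):
        """Decorator that registers a callback for a named event."""
        def register(fn):
            self._hooks.setdefault(event, []).append(fn)
            return fn
        return register

    def fire(self, event, **payload):
        """Invoke every hook registered for the event, collecting results."""
        return [fn(**payload) for fn in self._hooks.get(event, [])]

hooks = HookRegistry()

@hooks.on("session_end")  # hypothetical event name
def log_session(session_id, **_):
    return f"logged session {session_id}"

results = hooks.fire("session_end", session_id="abc123")
```

The real hooks in the repo follow the same shape: an event happens during a Copilot coding agent session, and your registered automation (logging, auditing, post-processing) runs in response.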

We also did a Skills migration, consolidating the resource model from the original 8 types down to a cleaner set. Skills are now the standard unit for bundling reusable knowledge, which makes contributing (and consuming) a lot more straightforward.

Get involved

This is a community project. Everything in the repo was contributed by people who found something useful and wanted to share it.

A few ways to start:

1. Browse the website at https://awesome-copilot.github.com and find something that fits your workflow.

2. Try a plugin: install one from the plugin page and see how it changes your Copilot experience.

3. Walk through the Learning Hub at https://awesome-copilot.github.com/learning-hub if you want to understand how AI response customization works end to end.

4. Contribute: PRs are welcome! Check the contributing guide for details.

5. Star the repo at https://github.com/github/awesome-copilot to keep up with new additions.

Thank you

Seriously — thank you. We put up a repo and you filled it with 600+ resources. Every agent, skill, and instruction in there exists because somebody thought it was worth sharing. We’ll keep building on this.

Keep sending PRs. 💜

 

Build a real-world example with Microsoft Agent Framework, Microsoft Foundry, MCP and Aspire
https://devblogs.microsoft.com/blog/build-a-real-world-example-with-microsoft-agent-framework-microsoft-foundry-mcp-and-aspire
Mon, 09 Mar 2026 16:00:42 +0000

The post Build a real-world example with Microsoft Agent Framework, Microsoft Foundry, MCP and Aspire appeared first on Microsoft for Developers.

Building AI agents is getting easier. Deploying them as part of a real application, with multiple services, persistent state, and production infrastructure, is where things get complicated. Developers from the .NET community have asked for a real-world example that runs both on a local machine and in the cloud, in a cloud-native way.

We’ve heard you! We built an open-source Interview Coach sample to show how Microsoft Agent Framework, Microsoft Foundry, Model Context Protocol (MCP), and Aspire fit together in a production-style application. It’s a working interview simulator where an AI coach walks you through behavioral and technical questions, then delivers a summary of your performance.

This post covers the patterns we used and the problems they solve.

Here’s the link to visit the Interview Coach demo app.

Why Microsoft Agent Framework?

If you’ve been building AI agents with .NET, you’ve probably used Semantic Kernel, AutoGen, or both. Microsoft Agent Framework is the next step. It’s built by the same teams and combines what worked from both projects into a single framework.

It takes AutoGen’s agent abstractions and Semantic Kernel’s enterprise features (state management, type safety, middleware, telemetry) and puts them under one roof. It also adds graph-based workflows for multi-agent orchestration.

For .NET developers, this means:

  • One framework instead of two. No more choosing between Semantic Kernel and AutoGen.
  • Familiar patterns. Agents use dependency injection, IChatClient, and the same hosting model as ASP.NET apps.
  • Built for production. OpenTelemetry, middleware pipelines, and Aspire integration are included.
  • Multi-agent orchestration. Sequential workflows, concurrent execution, handoff patterns, and group chat are all supported.

The Interview Coach puts all of this into a real application, not just a Hello World.

Why Microsoft Foundry?

AI agents need more than a model. They need infrastructure. Microsoft Foundry is Azure’s platform for building and managing AI applications, and it’s the recommended backend for Microsoft Agent Framework.

Foundry gives you a single portal for:

  • Model access. A catalog of models from OpenAI, Meta, Mistral, and others, all through one endpoint.
  • Content safety. Built-in moderation and PII detection so your agents don’t go off the rails.
  • Cost-optimized routing. Requests get routed to the best model for the job automatically.
  • Evaluation and fine-tuning. Measure agent quality and improve it over time.
  • Enterprise governance. Identity, access control, and compliance through Entra ID and Microsoft Defender.

For the Interview Coach, Foundry provides the model endpoint that powers the agents. Because the agent code uses the IChatClient interface, Foundry is just a configuration choice, but it’s the one that gives you the most tooling out of the box.

What does the Interview Coach do?

The Interview Coach is a conversational AI that runs a mock job interview. You provide a resume and a job description, and the agent takes it from there:

  1. Intake. Collects your resume and the target job description.
  2. Behavioral interview. Asks STAR-method questions tailored to your experience.
  3. Technical interview. Asks role-specific technical questions.
  4. Summary. Generates a performance review with specific feedback.

You interact with it through a Blazor web UI that streams responses in real time.

Architecture at a glance

The application is split into several services, all orchestrated by Aspire:

  • LLM Provider. Microsoft Foundry (recommended) for access to different models.
  • WebUI. Blazor chat interface for the interview conversation.
  • Agent. The interview logic, built on Microsoft Agent Framework.
  • MarkItDown MCP Server. Parses resumes (PDF, DOCX) into markdown via Microsoft’s MarkItDown.
  • InterviewData MCP Server. A .NET MCP server that stores sessions in SQLite.

architecture image

Aspire handles service discovery, health checks, and telemetry. Each component runs as a separate process, and you start the whole thing with a single command.

Pattern 1: Handoff orchestration

The handoff pattern is where this sample gets interesting. Instead of one agent doing everything, the interview is split across five specialized agents:

Agent | Role | Tools
Triage | Routes messages to the right specialist | None (pure routing)
Receptionist | Creates sessions, collects resume and job description | MarkItDown + InterviewData
Behavioral Interviewer | Conducts behavioral questions using the STAR method | InterviewData
Technical Interviewer | Asks role-specific technical questions | InterviewData
Summarizer | Generates the final interview summary | InterviewData

In the handoff pattern, one agent transfers full control of the conversation to the next. The receiving agent takes over entirely. This is different from “agent-as-tools,” where a primary agent calls others as helpers but retains control.
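The distinction can be sketched in a few lines of plain Python. This is conceptual only – not the Agent Framework API; the agent names mirror the sample, but the routing logic is invented for illustration.

```python
# Conceptual sketch: handoff vs. agent-as-tools control flow.
# Not the Agent Framework API -- agents are plain functions here.

def handoff_run(agents, start, message):
    """Handoff: each agent fully takes over, then names its successor (or None)."""
    trace, current = [], start
    while current is not None:
        reply, current = agents[current](message)  # full control transfers
        trace.append(reply)
    return trace

def receptionist(msg):
    return "session created", "behavioral"   # hands off to the next specialist

def behavioral(msg):
    return "asked STAR question", None       # conversation ends here

agents = {"receptionist": receptionist, "behavioral": behavioral}
trace = handoff_run(agents, "receptionist", "hi")
# trace == ["session created", "asked STAR question"]

# Agent-as-tools, by contrast: a primary agent calls helpers but keeps control.
def primary(msg, helpers):
    notes = [h(msg)[0] for h in helpers]     # helpers return; control stays here
    return "primary summarizes: " + "; ".join(notes)
```

In the handoff version, whichever agent holds the conversation owns it entirely until it passes control on; in the agent-as-tools version, the primary agent never gives up the conversation.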

Here’s how the handoff workflow is wired up:

var workflow = AgentWorkflowBuilder
 .CreateHandoffBuilderWith(triageAgent)
 .WithHandoffs(triageAgent, [receptionistAgent, behaviouralAgent, technicalAgent, summariserAgent])
 .WithHandoffs(receptionistAgent, [behaviouralAgent, triageAgent])
 .WithHandoffs(behaviouralAgent, [technicalAgent, triageAgent])
 .WithHandoffs(technicalAgent, [summariserAgent, triageAgent])
 .WithHandoff(summariserAgent, triageAgent)
 .Build();

The happy path is sequential: Receptionist → Behavioral → Technical → Summarizer. Each specialist hands off directly to the next. If something goes off-script, agents fall back to Triage for re-routing.

The sample also includes a single-agent mode for simpler deployments, so you can compare the two approaches side by side.

Pattern 2: MCP for tool integration

Tools in this project don’t live inside the agent. They live in their own MCP (Model Context Protocol) servers. The same MarkItDown server could power a completely different agent project, and tool teams can ship independently of agent teams. MCP is also language-agnostic, which is how MarkItDown runs as a Python server while the agent is .NET.

The agent discovers tools at startup through MCP clients and passes them to the appropriate agents:

var receptionistAgent = new ChatClientAgent(
 chatClient: chatClient,
 name: "receptionist",
 instructions: "You are the Receptionist. Set up sessions and collect documents...",
 tools: [.. markitdownTools, .. interviewDataTools]);

Each agent only gets the tools it needs. Triage gets none (it just routes), interviewers get session access, and the Receptionist gets document parsing plus session access. This follows the principle of least privilege.
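A hedged sketch of that idea in plain Python (not the framework’s API – the agent and tool names just mirror the sample):

```python
# Conceptual sketch of least-privilege tool assignment. Each agent gets an
# explicit tool allowlist; calling an unassigned tool fails loudly instead
# of silently succeeding. Names mirror the sample but the API is invented.

TOOL_ALLOWLIST = {
    "triage": set(),                                   # pure routing, no tools
    "receptionist": {"markitdown", "interview_data"},  # parsing + sessions
    "behavioral": {"interview_data"},                  # sessions only
}

def call_tool(agent, tool, *args):
    if tool not in TOOL_ALLOWLIST.get(agent, set()):
        raise PermissionError(f"{agent} may not call {tool}")
    return f"{tool} invoked by {agent}"

ok = call_tool("receptionist", "markitdown", "resume.pdf")
```

The framework enforces this for you simply by which tools you pass to each agent’s constructor, but the effect is the same: an agent physically cannot call a tool it was never given.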

Pattern 3: Aspire orchestration

Aspire ties everything together. The app host defines the service topology: which services exist, how they depend on each other, and what configuration they receive. You get:

  • Service discovery. Services find each other by name, not hardcoded URLs.
  • Health checks. The Aspire dashboard shows the status of every component.
  • Distributed tracing. OpenTelemetry wired up through shared service defaults.
  • One-command startup. aspire run --file ./apphost.cs launches everything.

For deployment, azd up pushes the entire application to Azure Container Apps.

Get started

Prerequisites

Run it locally

git clone https://github.com/Azure-Samples/interview-coach-agent-framework.git
cd interview-coach-agent-framework

# Configure credentials
dotnet user-secrets --file ./apphost.cs set MicrosoftFoundry:Project:Endpoint "<your-endpoint>"
dotnet user-secrets --file ./apphost.cs set MicrosoftFoundry:Project:ApiKey "<your-key>"

# Start all services
aspire run --file ./apphost.cs

Open the Aspire Dashboard, wait for all services to show as Running, and click the WebUI endpoint to start your mock interview.

aspire dashboard image

Here’s how the handoff pattern works – visualized on DevUI.

devui image

You can use this chat UI to interact with the agent as the interview candidate.

chat ui image

Deploy to Azure

azd auth login
azd up
That’s it. Aspire and azd handle the rest. Once you complete deployment and testing, you can safely delete all the resources by running:
azd down --force --purge

What you’ll learn from this sample

After working through the Interview Coach, you’ll have seen:

  • Using Microsoft Foundry as the model backend
  • Building single-agent and multi-agent systems with Microsoft Agent Framework
  • Splitting workflows across specialized agents with handoff orchestration
  • Creating and consuming MCP tool servers independently of agent code
  • Orchestrating multi-service applications with Aspire
  • Writing prompts that produce consistent, structured behavior
  • Deploying everything with azd up

Try it out

The full source is on GitHub: Azure-Samples/interview-coach-agent-framework

If you’re new to Microsoft Agent Framework, start with the framework documentation and the Hello World sample. Then come back here to see how the pieces fit in a larger project.

If you build something with these patterns, open an issue and tell us about it.

See it live!

Watch the live stream on the .NET AI Community Standup to see Bruno Capuano and Justin Yoo demo it and answer your questions live!

What’s next?

We’re working on more integration scenarios: Microsoft Foundry Agent Service, GitHub Copilot, A2A, and more. We’ll update the sample as they ship.

Resources


Get started with GitHub Copilot CLI: A free, hands-on course
https://devblogs.microsoft.com/blog/get-started-with-github-copilot-cli-a-free-hands-on-course
Tue, 03 Mar 2026 20:40:42 +0000

Learn GitHub Copilot CLI with this free, 8-chapter hands-on course. Review code, generate tests, debug issues, and build custom agents and skills – all from your terminal. No AI experience needed. Works with GitHub Copilot Free. Clone the repo or open in Codespaces to get started.

The post Get started with GitHub Copilot CLI: A free, hands-on course appeared first on Microsoft for Developers.

copilot banner image

GitHub Copilot has grown well beyond code completions in your editor. It now lives in your terminal, too. GitHub Copilot CLI lets you review code, generate tests, debug issues, and ask questions about your projects without ever leaving the command line.

To help developers get up to speed, we put together a free, open source course: GitHub Copilot CLI for Beginners. It’s 8 chapters, hands-on from the start, and designed so you can go from installation to building real workflows in a few hours. Already have a GitHub account? GitHub Copilot CLI works with GitHub Copilot Free, which is available to all personal GitHub accounts.

In this post, I’ll walk through what the course covers and how to get started.

What GitHub Copilot CLI can do

If you haven’t tried it yet, GitHub Copilot CLI is a conversational AI assistant that runs in your terminal. You point it at files using @ references, and it reads your code and responds with analysis, suggestions, or generated code.

You can use it to:

  • Review a file and get feedback on code quality
  • Generate tests based on existing code
  • Debug issues by pointing it at a file and asking what’s wrong
  • Explain unfamiliar code or confusing logic
  • Generate commit messages, refactor functions, and more
  • Write new app features (front-end, APIs, database interactions, and more)

It remembers context within a conversation, so follow-up questions build on what came before.

What the course covers

The course is structured as 8 progressive chapters. Each one builds on the last, and you work with the same project throughout: a book collection management app. Instead of jumping between isolated snippets, you keep improving one codebase as you go.

Here’s what using GitHub Copilot CLI looks like in practice. Say you want to review a Python file for potential issues. Start up Copilot CLI and ask what you’d like done:

$ copilot
> Review @samples/book-app-project/books.py for potential improvements. Focus on error handling and code quality.

Copilot reads the file, analyzes the code, and gives you specific feedback right in your terminal.

code review demo image

Here are the chapters covered in the course:

  1. Quick Start — Installation and authentication
  2. First Steps — Learn the three interaction modes: interactive, plan, and one-shot (programmatic)
  3. Context and Conversations — Using @ references to point Copilot at files and directories, plus session management with --continue and --resume
  4. Development Workflows — Code review, refactoring, debugging, test generation, and Git integration
  5. Custom Agents — Building specialized AI assistants with .agent.md files (for example, a Python reviewer that always checks for type hints)
  6. Skills — Creating task-specific instructions that auto-trigger based on your prompt
  7. MCP Servers — Connecting Copilot to external services like GitHub repos, file systems, and documentation APIs via the Model Context Protocol
  8. Putting It All Together — Combining agents, skills, and MCP servers into complete development workflows

learning path image

Every command in the course can be copied and run directly. No AI or machine learning background is required.

Who this is for

The course is built for:

  • Developers using terminal workflows. If you’re already running builds, checking git status, and SSHing into servers from the command line, Copilot CLI fits right into that flow.
  • Teams looking to standardize AI-assisted practices. Custom agents and skills can be shared across a team through a project’s .github/agents and .github/skills directories.
  • Students and early-career developers. The course explains AI terminology as it comes up, and every chapter includes assignments with clear success criteria.

You don’t need prior experience with AI tools. If you can run commands in a terminal, you can learn and apply the concepts in this course.

How the course teaches

Each chapter follows a consistent pattern: a real-world analogy to ground the concept, then the core technical material, then hands-on exercises. For instance, the three interaction modes are compared to ordering food at a restaurant. Interactive mode is a back-and-forth conversation with a waiter. Plan mode is more like mapping your route to the restaurant before you start driving. And one-shot mode (programmatic mode) is like going through the drive-through.

ordering food analogy image

Later chapters use different comparisons: agents are like hiring specialists, skills work like attachments for a power drill, and MCP servers are compared to browser extensions. The goal is to provide you with a visual and mental model before the technical details land.

The course also focuses on a question that’s harder than it looks: when should I use which tool? Knowing the difference between reaching for an agent, a skill, or an MCP server takes practice, and the final chapter walks through that decision-making in a realistic workflow.

integration pattern image

Get started

The course is free and open source. You can clone the repo, or open it in GitHub Codespaces for a fully configured environment. Jump right in, get Copilot CLI running, and see if it fits your workflow.

GitHub Copilot CLI for Beginners

For a quick reference, see the CLI command reference.

Subscribe to GitHub Insider for more developer tips and guides.

The JavaScript AI Build-a-thon Season 2 starts today!
https://devblogs.microsoft.com/blog/the-javascript-ai-build-a-thon-season-2-starts-today
Mon, 02 Mar 2026 20:00:39 +0000

The JavaScript AI Build‑a‑thon Season 2 starts today! Join a free, four‑week, hands‑on program, from Local AI and RAG pipelines to a multi‑agent hackathon, designed specifically for JavaScript/TypeScript developers.

The post The JavaScript AI Build-a-thon Season 2 starts today! appeared first on Microsoft for Developers.

Most applications used by millions of people every single day are powered by JavaScript/TypeScript. But when it comes to AI, most learning resources and code samples assume you’re working in Python and will leave you trying to stitch scattered tutorials together to build AI into your stack.

The JavaScript AI Build-a-thon is a free, hands-on program designed to close that gap. Over the course of four weeks (March 2 – March 31, 2026), you’ll move from running AI 100% on-device (Local AI) to designing multi-service, multi-agent systems, all in JavaScript/TypeScript, using tools you are already familiar with. The series will culminate in a hackathon, where you will create, compete, and turn what you’ve learned into working projects you can point to, talk about, and extend.

Register now at aka.ms/JSAIBuildathon

How the program works!

The program is organized around two phases:

Phase I: Learn & Skill Up (Mar 2 – 13)

  • Self-paced quests that teach core AI patterns
  • Interactive, expert-led sessions on Microsoft Reactor (livestreams) and Discord (office hours & Q&A)

Program roadmap overview

Day/Time (PT) | Topic | Links to join
Mon 3/2, 8:00 AM PST | Local AI Development with Foundry Local | Rewatch Livestream
Wed 3/4, 8:00 AM PST | End-to-End Model Development on Microsoft Foundry | Livestream · Discord Office Hour
Fri 3/6, 9:00 AM PST | Advanced RAG Deep Dive + Guided Project | Livestream · Discord Office Hour
Mon 3/9, 8:00 AM PST | Design & Build an Agent E2E with Agent Builder (AITK) | Livestream · Discord Office Hour
Wed 3/11, 8:00 AM PST | Build, Scale & Govern AI Agents + Guided Project | Livestream · Discord Office Hour

The Build-a-thon prioritizes practical learning, so you’ll complete two guided projects by the end of this phase:

1. A local serverless AI chat with RAG. Concepts covered include:

  • RAG Architecture
  • RAG Ingestion pipeline
  • Query & Retrieval
  • Response Generation (LLM Chains)

Visual Studio Code with the guided project files displayed
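As a rough mental model of the retrieval step listed above, here is a self-contained sketch in plain Python. Bag-of-words vectors and cosine similarity stand in for the real embedding model and vector store a production RAG pipeline would use, and the documents and query are made up for illustration.

```python
# Conceptual RAG retrieval sketch: ingest documents as bag-of-words vectors,
# retrieve the best match for a query by cosine similarity. A real pipeline
# would use embeddings and a vector store; this only illustrates the shape.
import math
from collections import Counter

def vectorize(text):
    return Counter(text.lower().split())

def cosine(a, b):
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Ingestion: index documents (here, one vector per document).
docs = [
    "Foundry Local runs models fully on-device",
    "RAG retrieves relevant chunks before generation",
    "Aspire orchestrates multi-service applications",
]
index = [(d, vectorize(d)) for d in docs]

# Query & retrieval: rank documents by similarity, keep the best match.
query = vectorize("how does retrieval work in RAG")
best_doc = max(index, key=lambda item: cosine(query, item[1]))[0]

# Response generation would then pass best_doc to the LLM as grounding context.
```

The guided project layers real ingestion, chunking, and LLM chains on top of this same retrieve-then-generate loop.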

2. A Burger Ordering AI Agent. Concepts covered include:

  • Designing AI Agents
  • Building MCP Tools (Backend API Design)

Browser screenshots showing the completed AI Burger Ordering Agent

Phase II: Global Hack! (Mar 13 – 31)

  • Product demo series to showcase the latest product features that will accelerate your builder experience
  • A Global hackathon to apply what you learn into real, working AI solutions

This is where you’ll build something that matters, using everything learned in the quests, and beyond, to create an AI-powered project that solves a real problem, delights users, or pushes what’s possible.

The hackathon launches on March 13, 2026. Full details on registration, submission, judging criteria, award categories, prizes, and the hack phase schedule will be published when the hack goes live. Stay tuned!

But, here’s what we can tell you now:

  • 🏆 6 award categories
  • 💻 Product demo showcases throughout the hack phase to keep you building with the latest tools
  • 👥 Teams of up to 4, or solo. Your call

Start Now (Join the Community)

Join our community to connect with other participants and experts from Microsoft & GitHub to support your builder journey.

Register now at aka.ms/JSAIBuildathon

See you soon!

GitHub Copilot Dev Days: Build faster with GitHub Copilot CLI, in VS Code & Visual Studio, and beyond!
https://devblogs.microsoft.com/blog/github-copilot-dev-days
Mon, 02 Mar 2026 18:00:36 +0000

The post GitHub Copilot Dev Days: Build faster with GitHub Copilot CLI, in VS Code & Visual Studio, and beyond! appeared first on Microsoft for Developers.

GitHub Copilot Dev Days banner image

Modern software development is moving fast—and AI is now a practical part of how Microsoft developers design, build, and ship applications every day. From writing code in Visual Studio and VS Code, to building cloud-native apps on Azure, developers are looking for ways to stay productive without sacrificing quality.

That’s exactly why GitHub Copilot Dev Days exists.

GitHub Copilot Dev Days is a global series of hands-on, in-person, community-led events designed to help developers learn how AI-assisted development fits naturally into the Microsoft developer stack. You’ll see how GitHub Copilot works alongside Visual Studio, VS Code, .NET, and more to streamline real-world workflows—from your first line of code to deployment.

Who should attend GitHub Copilot Dev Days?

GitHub Copilot Dev Days is designed for Microsoft developers at every stage, including:

  • Visual Studio and VS Code developers building in .NET, Java, Python, TypeScript, and more
  • Developers working in the CLI and other IDEs wanting to access GitHub Copilot directly from where they’re building
  • Professional developers, students, and community members looking to modernize their workflows with AI

If you’re new to AI-assisted development, these events will help you get started using GitHub Copilot effectively inside the tools you already use. If you’re more experienced, you’ll learn advanced techniques to integrate Copilot into larger projects, enterprise codebases, and team workflows. Different events may cover different scenarios – we have content geared towards the GitHub Copilot CLI, GitHub Copilot in VS Code, Visual Studio, and other IDEs, the cloud agent, and more. Check the registration page of the event near you for more information.

What you’ll learn at a GitHub Copilot Dev Day

Each GitHub Copilot Dev Day is hosted by local developer communities—including Microsoft MVPs, GitHub Stars, Microsoft Student Ambassadors, GitHub Campus Ambassadors, Azure Tech Groups, and Microsoft and GitHub employees —and focuses on practical, hands-on learning you can apply immediately.

You can expect:

  • Live demos showing GitHub Copilot in action with Visual Studio and VS Code
  • Hands-on workshops using Copilot with .NET, Java, Python, JavaScript, and more
  • Real-world scenarios that reflect how Microsoft developers build apps today

Topics may include:

  • Using GitHub Copilot in Visual Studio to accelerate .NET development
  • Pairing GitHub Copilot with VS Code for cross-platform and cloud-native workflows
  • Leveraging GitHub Copilot CLI and Cloud Agent to support asynchronous development
  • Applying AI-assisted coding patterns to increase your productivity

Each event is tailored by the local organizer, so session topics and agendas may vary. Be sure to check your event’s registration page for details.

A sample event agenda

While every event is unique, a typical GitHub Copilot Dev Day includes:

  • Introductory session (30–45 minutes): How GitHub Copilot fits into the Microsoft developer toolchain
  • Community session (30–45 minutes): A local developer shares real-world experience building with Copilot
  • Hands-on workshop (60 minutes): Guided coding exercises using Copilot in Visual Studio or VS Code

Along the way, you’ll connect with other developers in your area, share ideas, and pick up practical tips you can use right away—plus enjoy some snacks and swag.

Events start soon!

GitHub Copilot Dev Days events begin March 15, 2026, with events happening in cities around the world. Spots are limited, and many events are filling up quickly.

👉 Find a GitHub Copilot Dev Days event near you and register today

Interested in hosting a GitHub Copilot Dev Day for your local user group or developer community? 👉 Apply to host an event

 

The post GitHub Copilot Dev Days: Build faster with GitHub Copilot CLI, in VS Code & Visual Studio, and beyond! appeared first on Microsoft for Developers.

]]>
https://devblogs.microsoft.com/blog/github-copilot-dev-days/feed 0
WinGet Configuration: Set up your dev machine in one command https://devblogs.microsoft.com/blog/winget-configuration-set-up-your-dev-machine-in-one-command https://devblogs.microsoft.com/blog/winget-configuration-set-up-your-dev-machine-in-one-command#comments Wed, 04 Feb 2026 18:00:13 +0000 https://devblogs.microsoft.com/?p=20862 I’ve set up a lot of dev machines in my life. Traditionally, this takes a lot of time to get everything just right, but now there’s a faster way with WinGet Configuration files. Let me show you how to go from a fresh Windows install to a fully configured dev environment with a single command […]

The post WinGet Configuration: Set up your dev machine in one command appeared first on Microsoft for Developers.

]]>
I’ve set up a lot of dev machines in my life. Traditionally, this takes a lot of time to get everything just right, but now there’s a faster way with WinGet Configuration files. Let me show you how to go from a fresh Windows install to a fully configured dev environment with a single command and how GitHub Copilot CLI can help you build these configs.

winget configuration image

What is WinGet Configuration?

WinGet Configuration lets you describe your ideal dev environment in a YAML file, then apply it with one command. Instead of running a bunch of winget install commands manually, you declare what you want and let WinGet do the rest.

Getting started

Before you can use winget configure, you’ll need to install the WinGet DSC module. Open PowerShell as admin and run:

Install-Module Microsoft.WinGet.DSC -Force

Once that’s installed, you can run configuration files with:

winget configure -f configuration.winget

WinGet reads your file, installs all your tools, configures your settings, and gets you ready to code.

The best part is that these configs are idempotent, which means you can run them multiple times and they’ll only change what needs changing. For example, if you already have VS Code installed, it skips it and moves on.

Tip:

Add --accept-configuration-agreements to skip the confirmation prompts for automated scenarios.

How is this different from WinGet import/export?

If you’ve used WinGet before, you might be thinking “wait, can’t I already do this with winget export and winget import?” It’s a great question: the two commands are designed to solve different problems.

winget import/export creates a simple JSON list of installed packages:

# Export your installed packages
winget export -o packages.json
# Import on another machine
winget import -i packages.json

This works, but it only handles package installation. It’s basically a batch install script.
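For reference, the exported packages.json is just a flat list of package IDs. An abbreviated sketch (field names follow the winget packages schema; a real export includes more metadata such as source details and versions):

```json
{
  "Sources": [
    {
      "SourceDetails": {
        "Name": "winget"
      },
      "Packages": [
        { "PackageIdentifier": "Git.Git" },
        { "PackageIdentifier": "Microsoft.VisualStudioCode" }
      ]
    }
  ]
}
```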

winget configure is much more powerful:

| Feature | import/export | configure |
| --- | --- | --- |
| Install packages | ✅ | ✅ |
| Configure Windows settings | ❌ | ✅ |
| Enable Developer Mode | ❌ | ✅ |
| Install VS workloads | ❌ | ✅ |
| Set environment variables | ❌ | ✅ |
| Define dependencies | ❌ | ✅ |
| Check OS requirements | ❌ | ✅ |
| Run PowerShell DSC resources | ❌ | ✅ |

Think of import/export as a grocery list and configure as a complete recipe. The grocery list tells you what to buy, but the recipe tells you what to buy and how to put it all together.

For simple app installation scenarios, winget import is great. But if you want a fully configured dev environment with Developer Mode enabled, VS workloads installed, and settings configured, use winget configure.

Your first configuration file

Let’s start simple. Create a file called dev-setup.winget:

# yaml-language-server: $schema=https://aka.ms/configuration-dsc-schema/0.2
properties:
  configurationVersion: 0.2.0
  resources:
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Visual Studio Code Insiders
        securityContext: elevated
      settings:
        id: Microsoft.VisualStudioCode.Insiders
        source: winget

    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Git
        securityContext: elevated
      settings:
        id: Git.Git
        source: winget

    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Node.js LTS
        securityContext: elevated
      settings:
        id: OpenJS.NodeJS.LTS
        source: winget

    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Windows Terminal Preview
      settings:
        id: Microsoft.WindowsTerminal.Preview
        source: winget

Run it with:

winget configure -f dev-setup.winget

WinGet prompts you for admin approval once, then handles all the installations. Go grab a coffee and come back to a configured machine. ☕

Adding Windows settings

You can do more than install packages: you can configure Windows itself. Here’s how to enable Developer Mode and dark mode automatically:

- resource: Microsoft.Windows.Settings/WindowsSettings
  directives:
    description: Enable Developer Mode
    allowPrerelease: true
    securityContext: elevated
  settings:
    DeveloperMode: true

- resource: Microsoft.Windows.Developer/EnableDarkMode
  directives:
    description: Enable dark mode
    allowPrerelease: true
  settings:
    Ensure: Present
    RestartExplorer: true

Using assertions for requirements

Assertions let you check system requirements before running your config. For example, you can verify the machine meets a minimum OS version:

properties:
  configurationVersion: 0.2.0
  
  assertions:
    - resource: Microsoft.Windows.Developer/OsVersion
      directives:
        description: Require Windows 11 22H2 or later
        allowPrerelease: true
      settings:
        MinVersion: '10.0.22621'
  
  resources:
    # Install your tools...

If the OS version check fails, the config stops early with a clear message instead of crashing halfway through. This is useful when your tools require specific Windows features only available in newer versions.

Dependencies between resources

Sometimes you need things installed in a specific order. Use dependsOn to chain resources:

- resource: Microsoft.WinGet.DSC/WinGetPackage
  id: vsPackage
  directives:
    description: Install Visual Studio 2026 Community
    securityContext: elevated
  settings:
    id: Microsoft.VisualStudio.Community
    source: winget

- resource: Microsoft.VisualStudio.DSC/VSComponents
  dependsOn:
    - vsPackage
  directives:
    description: Install .NET workload
    allowPrerelease: true
    securityContext: elevated
  settings:
    productId: Microsoft.VisualStudio.Product.Community
    channelId: VisualStudio.18.Release
    components:
      - Microsoft.VisualStudio.Workload.ManagedDesktop

The vsPackage installs first, then the workload gets added.

Using GitHub Copilot CLI to generate configs

Here’s where it gets fun. First, let’s make sure Copilot CLI is part of our setup! You can bootstrap it right in your configuration file:

- resource: Microsoft.WinGet.DSC/WinGetPackage
  id: copilotCli
  directives:
    description: Install GitHub Copilot CLI
  settings:
    id: GitHub.Copilot
    source: winget

Now when you run your config on a fresh machine, Copilot CLI gets installed automatically. Once it’s installed, you can use it to generate more configs.

Instead of writing YAML by hand, I ask Copilot CLI to generate configs for me:

copilot

Then I prompt:

“Create a winget configuration file for a Python data science developer. Include Python 3.12, VS Code, Git, and Anaconda.”

Copilot generates a complete config that I can tweak and save. This is so much faster than looking up package IDs manually.

Finding package IDs

Not sure what the exact package ID is? Ask Copilot:

“What’s the winget package ID for the latest Python?”

It’ll tell you Python.Python.3.12 (or whichever version is current).
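You can then drop that ID straight into a configuration file. A minimal resource entry following the same pattern as the examples above (the description text is up to you):

```yaml
- resource: Microsoft.WinGet.DSC/WinGetPackage
  directives:
    description: Install Python 3.12
  settings:
    id: Python.Python.3.12
    source: winget
```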

Converting existing scripts

Have an old PowerShell script that installs your tools? Copilot can convert it:

“Convert this script to a winget configuration file:
winget install Microsoft.VisualStudioCode
winget install Git.Git
winget install OpenJS.NodeJS.22”

It creates the proper YAML structure with descriptions and everything.
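For the three-line script above, the generated file might look something like this (a sketch; Copilot’s actual output will vary):

```yaml
# yaml-language-server: $schema=https://aka.ms/configuration-dsc-schema/0.2
properties:
  configurationVersion: 0.2.0
  resources:
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Visual Studio Code
      settings:
        id: Microsoft.VisualStudioCode
        source: winget

    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Git
      settings:
        id: Git.Git
        source: winget

    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Node.js 22
      settings:
        id: OpenJS.NodeJS.22
        source: winget
```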

Explaining configs

Found a config file in a repo and not sure what it does? Paste it and ask:

“Explain what this winget configuration does and what will be installed”

This is super helpful when you’re onboarding to a new project.

Terminal window showing WinGet Desired State Configuration view.

The export command: Reverse-engineer your setup

One of my favorite features is winget configure export. It captures your current machine state so you can recreate it later:

# Export your entire package configuration
winget configure export -o my-machine.winget --all
# Export just one package's config
winget configure export -o vscode.winget --package-id Microsoft.VisualStudioCode

This is great for:

  • Backing up your current setup before a fresh install
  • Creating a config from a machine that’s already “just right”
  • Sharing your exact environment with teammates

Store configs in your repos

For project-specific setups, store your config in the repo at .config/configuration.winget. When new contributors clone your project, they can run:

winget configure -f .config/configuration.winget

This way, they’ll have the exact same environment as everyone else.

My configuration file

Here’s what my personal dev setup config looks like:

# yaml-language-server: $schema=https://aka.ms/configuration-dsc-schema/0.2
properties:
  configurationVersion: 0.2.0
  
  resources:
    # Enable Developer Mode and dark mode
    - resource: Microsoft.Windows.Settings/WindowsSettings
      directives:
        description: Enable Developer Mode
        allowPrerelease: true
        securityContext: elevated
      settings:
        DeveloperMode: true
    
    - resource: Microsoft.Windows.Developer/EnableDarkMode
      directives:
        description: Enable dark mode
        allowPrerelease: true
      settings:
        Ensure: Present
        RestartExplorer: true
    
    # Terminal and shell
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Windows Terminal Preview
      settings:
        id: Microsoft.WindowsTerminal.Preview
        source: winget
    
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install PowerShell 7
        securityContext: elevated
      settings:
        id: Microsoft.PowerShell
        source: winget
    
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Oh My Posh
      settings:
        id: JanDeDobbeleer.OhMyPosh
        source: winget
    
    # Development tools
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Visual Studio Code Insiders
        securityContext: elevated
      settings:
        id: Microsoft.VisualStudioCode.Insiders
        source: winget

    - resource: Microsoft.WinGet.DSC/WinGetPackage
      id: vsPackage
      directives:
        description: Install Visual Studio 2026
        securityContext: elevated
      settings:
        id: Microsoft.VisualStudio.Enterprise
        source: winget

    - resource: Microsoft.VisualStudio.DSC/VSComponents
      dependsOn:
        - vsPackage
      directives:
        description: Install .NET workload
        allowPrerelease: true
        securityContext: elevated
      settings:
        productId: Microsoft.VisualStudio.Product.Enterprise
        channelId: VisualStudio.18.Release
        components:
          - Microsoft.VisualStudio.Workload.ManagedDesktop
    
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install .NET SDK 10
        securityContext: elevated
      settings:
        id: Microsoft.DotNet.SDK.10
        source: winget
    
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Azure Developer CLI
      settings:
        id: Microsoft.Azd
        source: winget
    
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Git
        securityContext: elevated
      settings:
        id: Git.Git
        source: winget
    
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Node.js
        securityContext: elevated
      settings:
        id: OpenJS.NodeJS.LTS
        source: winget

    # AI tools
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install GitHub Copilot CLI
      settings:
        id: GitHub.Copilot
        source: winget

Feel free to use this as a starting point and customize it for your needs!

Cheers!

WinGet Configuration has genuinely changed how I think about machine setup. It’s version-controlled, repeatable, and shareable. Combined with GitHub Copilot CLI for generating and understanding configs, it’s never been easier to get a new machine ready for development.

If you have questions or want to share your own configuration files, find me on Bluesky (@kaylacinnamon) or X (@cinnamon_msft)!

Got any top tips on how you handle dev machine configuration? Let us know in the comments!

The post WinGet Configuration: Set up your dev machine in one command appeared first on Microsoft for Developers.

]]>
https://devblogs.microsoft.com/blog/winget-configuration-set-up-your-dev-machine-in-one-command/feed 7
Bringing work context to your code in GitHub Copilot https://devblogs.microsoft.com/blog/bringing-work-context-to-your-code-in-github-copilot https://devblogs.microsoft.com/blog/bringing-work-context-to-your-code-in-github-copilot#comments Fri, 23 Jan 2026 15:00:30 +0000 https://devblogs.microsoft.com/?p=20758 This week we shipped the GitHub Copilot SDK which takes the agent loop from the Copilot CLI and makes it easy to embed in other applications. We’ve been using, improving, and extending Copilot CLI for the last few months and it’s sparked new ideas about what it means to have the right context right where […]

The post Bringing work context to your code in GitHub Copilot appeared first on Microsoft for Developers.

]]>
This week we shipped the GitHub Copilot SDK which takes the agent loop from the Copilot CLI and makes it easy to embed in other applications. We’ve been using, improving, and extending Copilot CLI for the last few months and it’s sparked new ideas about what it means to have the right context right where we work.

As developers, we spend most of our time in the terminal and our IDEs, and on most days, writing code isn’t the hard part. The hard part is everything around it: figuring out why something was built a certain way, tracking down a spec that defined a requirement, remembering which meeting introduced a change, or finding the right person to talk to when we have a question.

Tools like GitHub Copilot CLI already do a great job helping with code. But they don’t see the work around the work that led to the code, such as the design doc that shaped it, the meeting where a decision was made, or the person or team that owns it.

What if GitHub Copilot could have a deeper understanding of your team and work? We started building just that.

Here are some examples and experiments we’ve been playing with. They’ve helped us save time, remove toil, and even inject some fun into some of the things we do manually every day:

Finding the right owner for a piece of code

Sometimes we get thrown into codebases we aren’t familiar with, and git blame points to someone who’s not even on the project anymore. Rather than asking around, Copilot can surface ownership based on commit history, project context, and organizational knowledge including meetings, e-mails, and documents.

Creating an architecture diagram from a meeting transcript

When you’re trying to translate meeting discussions into a technical plan, capturing architecture details is slow and error‑prone. With Work IQ connected to Copilot CLI, Copilot can pull the meeting transcripts, understand components and relationships, and generate a draft architecture diagram.

Comparing an implementation to the original design spec

Instead of guessing or manually sifting through code, Copilot can look at the relevant design doc and call out where things changed, drifted, or didn’t get implemented as expected.

Bringing work context into your own apps

With the GitHub Copilot SDK (available in Technical Preview), you can also bring your work context into your own apps and projects, giving them access to an agent that understands your work, with just a few lines of code. Here’s an example using VS Code:

When you’re deep in a task, switching tools to hunt for the latest docs, meeting notes, or related files interrupts your flow. This sample shows how a lightweight VS Code extension can automatically surface the content that matters – recent meetings, design docs, and relevant files from SharePoint or OneDrive – right inside the editor.

Getting set up

These examples are enabled by connecting GitHub Copilot to Work IQ, the intelligence layer behind Microsoft 365 Copilot.

To get started, you’ll need a GitHub Copilot subscription and a Microsoft 365 subscription that includes access to Microsoft 365 Copilot. You will also need approval from your tenant admin; for more details, see the Work IQ MCP Server repo. You can get a free M365 dev tenant through your Visual Studio subscription or the Microsoft 365 Developer Program.

Make sure you’re on the latest version of GitHub Copilot CLI (it’ll tell you if there’s an update pending), and use the following commands to install the Work IQ MCP server:

/plugin marketplace add github/copilot-plugins 
/plugin install workiq@copilot-plugins 

Restart the CLI, and you’ll see the Work IQ MCP server there!

Work IQ MCP setup image

We’ve had a blast using GitHub Copilot to build things that help us be more productive. Check out Scott Hanselman’s video to see a great example of it in action. We’re excited to see what you build! Show off your experiments on social with #WorkIQBuilds and connect with us on GitHub to share your feedback.

 

 

The post Bringing work context to your code in GitHub Copilot appeared first on Microsoft for Developers.

]]>
https://devblogs.microsoft.com/blog/bringing-work-context-to-your-code-in-github-copilot/feed 3
Making Windows Terminal awesome with GitHub Copilot CLI https://devblogs.microsoft.com/blog/making-windows-terminal-awesome-with-github-copilot-cli https://devblogs.microsoft.com/blog/making-windows-terminal-awesome-with-github-copilot-cli#comments Thu, 11 Dec 2025 17:00:28 +0000 https://devblogs.microsoft.com/?p=20509 As someone who lives and breathes in the command line, I love making my terminal feel like home. Windows Terminal is full of personalization options that really allow for a custom experience. Additionally, I can stay within my terminal for my development with GitHub Copilot with the GitHub Copilot CLI. Let’s walk through how you […]

The post Making Windows Terminal awesome with GitHub Copilot CLI appeared first on Microsoft for Developers.

]]>
As someone who lives and breathes in the command line, I love making my terminal feel like home. Windows Terminal is full of personalization options that really allow for a custom experience, and with the GitHub Copilot CLI, I can stay within my terminal for my development with GitHub Copilot. Let’s walk through how you can trick out your terminal with some of my favorite customizations and optimize it for using GitHub Copilot CLI.

What is GitHub Copilot CLI?

copilot banner image

GitHub Copilot CLI gives you the power of GitHub Copilot directly in your terminal, without having to use an IDE. I use it all the time when developing, especially when I’m playing around in a new code base.

When I’m working with a language I’m unfamiliar with, I ask it to help me build the project to make sure everything is set up correctly and I don’t have any build errors. I also ask it how to do certain command line actions I don’t know or can’t remember. For example, I just used it to clear my PowerShell suggestion history because I had some typos I didn’t want it suggesting.

I’ve even used it to help me set up demos. One time I wanted to show off the rendering speed of Edit and needed a really large text file, so I told Copilot to create one filled with lorem ipsum. This saved me a ton of time and made for a great demo!

To install GitHub Copilot CLI, you can run the following npm command to install it globally:

npm install -g @github/copilot

Launching the CLI is as simple as typing copilot into your terminal.

Copilot banner always

When you first launch the GitHub Copilot CLI, you’ll see this awesome banner appear at the top:

copilot banner image

I’m a huge fan of the banner animation and I want to see it every time I launch Copilot. This isn’t the default behavior, but I have a couple of tricks to be able to see it on every launch of the Copilot CLI:

1. (What I did) Set "banner" to "always" in the GitHub Copilot CLI config.json file. On Windows, this is located at C:\Users\USERNAME\.copilot\config.json. In Windows Subsystem for Linux (WSL), you can find it at ~/.copilot/config.json.

2. Add the --banner flag when launching the CLI:

copilot --banner
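For the first option, the edit is a single key. Assuming a minimal config.json, the relevant setting looks like this (keep any other fields your file already has):

```json
{
  "banner": "always"
}
```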

Running shell commands

When you’re using the CLI, you’re giving the agent a prompt whenever you enter text. If you want to enter a shell command, you can stay within the Copilot CLI context and just prepend your command with !. This way you don’t have to close out the CLI to enter a command.

shell command 1 image
shell command 2 image

Customizing the terminal

Windows Terminal gives you the ability to customize just about everything and I’ve tricked mine out. Here are some of my favorite customizations:

Creating a GitHub Copilot CLI profile

copilot profile image

Sometimes I want to open a new tab in Terminal directly into the GitHub Copilot CLI. I’ve made this possible by creating a GitHub Copilot profile. I just duplicated my default PowerShell profile and then modified a few settings.

new terminal profile image

I changed the name to GitHub Copilot and updated the icon.

I also set my Starting directory to the folder where I keep my projects.

Then, since I’m using PowerShell, I added -c copilot to the end of the Command line setting. This tells PowerShell to run the Copilot command on launch:

"C:\Program Files\PowerShell\7\pwsh.exe" -c copilot

Opening a profile as a pane

Not necessarily a setting, but I like to use Terminal’s pane functionality if I’m switching between the Copilot CLI and another shell. You can open a new pane by holding Alt and selecting the profile from the dropdown list. If you want to close a pane, you can type Ctrl+Shift+W.

split pane image

Restoring tabs after relaunch

I often like to pick up where I’ve left off when I open up Terminal. To reopen the tabs I had open previously, I set the When Terminal starts setting on the Startup page to Restore window layout and content. This lets me quickly get back into my flow, especially if I need to restart Terminal while developing and I need those same tabs back immediately.

when terminal starts image

Custom background image

All of my profiles have a custom background. You can set a background image for each profile within your terminal, or just set one that will apply to all profiles.

If you want to apply a background image to every profile, modify the Background image path setting under Defaults -> Appearance:

background image image

Retro terminal effects

Another fun customization I use from time to time is the retro terminal effects, which adds glowing text and scan lines. To do this, enable the Retro terminal effects setting under Defaults -> Appearance. If you only want this to apply to one profile, navigate to that profile’s appearance settings and enable it there.

retro scanlines image

Customizing your prompt

One of my favorite ways to customize my terminal prompt is with Oh My Posh. Oh My Posh can help add styling and useful information to your prompt header. The thing that makes Oh My Posh great for displaying quick information is its segments functionality.

oh my posh image

Oh My Posh has a variety of segments that can provide additional context for things relating to the CLI, source control, music, and even your own health (plus more). One of my favorites is the Git segment (pictured above) which gives you source control information such as the branch you have checked out, what your diff is, and the state of your changes.

There are also segments for npm and React that display what the currently active version is for each.

When I’m developing, I’m always listening to music. Years ago, I contributed the cinnamon theme so I could have my current Spotify song displayed in my prompt:

omp cinnamon image

If you want to get started using Oh My Posh, here’s how!

Install Oh My Posh

My preferred installation method is through winget; you can run this command in your terminal to kick off the install:

winget install JanDeDobbeleer.OhMyPosh --source winget

More detailed instructions and alternative methods for how to install Oh My Posh can be found here.

Download and install a Nerd Font

In order for the glyphs to appear, you’ll need to download and install a Nerd Font, then set it as your font inside Terminal. My preferred font is Caskaydia Cove, but choose whichever speaks to you.

After downloading the font, extract the files from the zip folder, open them, and install them.

Once the font files are installed, you can set your Nerd Font as your font in Terminal:

font setting image

Applying an Oh My Posh theme

The list of available Oh My Posh themes can be found here. Once you’ve chosen a theme you like, you can follow these detailed instructions for how to enable it in your shell.

To enable your theme in PowerShell so it’s always applied on launch, add the following line to your PowerShell profile (not Terminal’s profile for PowerShell):

oh-my-posh init pwsh --config "$env:POSH_THEMES_PATH\THEMENAME.omp.json" | Invoke-Expression

To open your PowerShell profile file, you can run code $PROFILE or notepad $PROFILE from PowerShell. If this file doesn’t exist, you’ll be prompted to create it.

Adding a GitHub Copilot segment

In Oh My Posh version 28.1.0 and newer, you can add a GitHub Copilot segment to your prompt that displays usage statistics and quota information including premium interactions, inline completions, and chat usage.

omp copilot image

I added this segment to my prompt by creating a custom theme and pointing my PowerShell profile to it. I grabbed a copy of the 1_shell theme from the GitHub repository and added the following object to the right-aligned segments list:

{
    "type": "copilot",
    "template": "   {{ .Premium.Percent.Gauge }} ",
    "cache": {
        "duration": "5m",
        "strategy": "session"
    },
    "properties": {
        "http_timeout": 1000
    }
}

This code block displays the percentage of your premium quota that you’ve used. The docs site goes into more detail with how to style and add more information to this segment.

Lastly, I authenticated using the oh-my-posh auth copilot command to get everything working.

Happy command lining!

Those are just some of my favorite tips and tricks for getting the most out of Windows Terminal with GitHub Copilot CLI. If you have any questions or want to learn more, feel free to reach out on Bluesky (@kaylacinnamon) or X (@cinnamon_msft)!

▶Go deeper: Scripting the GitHub Copilot CLI

Check out this AI Dev Days session by John Lindquist for even more tips and tricks with the GitHub Copilot CLI!

The post Making Windows Terminal awesome with GitHub Copilot CLI appeared first on Microsoft for Developers.

]]>
https://devblogs.microsoft.com/blog/making-windows-terminal-awesome-with-github-copilot-cli/feed 7
Announcing the JavaScript/TypeScript Modernizer for VS Code https://devblogs.microsoft.com/blog/jsts-modernizer-preview https://devblogs.microsoft.com/blog/jsts-modernizer-preview#comments Tue, 09 Dec 2025 17:41:29 +0000 https://devblogs.microsoft.com/?p=20568 Keeping JavaScript/TypeScript projects up-to-date can be a challenge, especially when it’s time to upgrade a bunch of npm packages or adopt the latest frameworks. We’ve heard from many JS/TS developers that modernizing an older app (upgrading dependencies, fixing breaking changes, etc.) is often tedious and time-consuming. To help with this, we’re excited to introduce the […]

The post Announcing the JavaScript/TypeScript Modernizer for VS Code appeared first on Microsoft for Developers.

]]>
Keeping JavaScript/TypeScript projects up-to-date can be a challenge, especially when it’s time to upgrade a bunch of npm packages or adopt the latest frameworks. We’ve heard from many JS/TS developers that modernizing an older app (upgrading dependencies, fixing breaking changes, etc.) is often tedious and time-consuming. To help with this, we’re excited to introduce the JavaScript/TypeScript Modernizer, a new AI-assisted tool in Visual Studio Code. This Modernizer uses GitHub Copilot under the hood to upgrade your JavaScript or TypeScript apps, guiding you through code updates and package upgrades step by step. It’s like having an AI pair programmer dedicated to updating your project, helping reduce the manual effort and potential errors during upgrades.

What does the JS/TS Modernizer do? In a nutshell, it analyzes your project (looking at files like package.json), suggests an upgrade plan, and then automatically updates your npm packages to their latest versions. As it upgrades libraries, it also helps apply any necessary code changes to accommodate breaking changes or new APIs. All of this happens through an interactive Copilot Chat experience in VS Code. The tool walks you through the changes and asks questions or for confirmation when needed, guiding you through each stage of the process (from updating dependencies to fixing code). The goal is to simplify modernization by letting the AI do the heavy lifting (updating files, running installs, suggesting code fixes) while you supervise and approve the changes. This means you can upgrade to modern JavaScript/TypeScript practices and the latest packages much faster and with more confidence.

Download the extension

The JS/TS Modernizer is part of the preview version of the GitHub Copilot App Modernization extension when installed in VS Code. Install it with the link below.

As we improve the JS/TS Modernizer, we will bring it to the release version of VS Code.

Getting Set Up

Before you can use the JS/TS Modernizer, you’ll need to make sure a few things are ready:

  • Node.js and npm are installed – You probably already have Node and npm installed, but if not, install them from the Node.js website. The modernization process will invoke Node/npm commands under the hood.
  • GitHub Copilot access in VS Code – The Modernizer leverages GitHub Copilot, so you must be signed in with an account that has Copilot enabled in VS Code. If you haven’t set up Copilot in VS Code yet, follow the Set up GitHub Copilot in VS Code guide to sign in. (If you don’t have a subscription, GitHub Copilot Free may be available.)
  • Install the GitHub Copilot App Modernization (Preview) extension – The JS/TS Modernizer is part of the preview version of the GitHub Copilot app modernization extension for VS Code. You can download and install this extension from the VS Code Marketplace. Once installed, Visual Studio Code will have a new “GitHub Copilot App Modernization” view that we’ll use in the steps below.
  • Enable the experimental Modernizer setting – Because the JavaScript/TypeScript modernization feature is in preview, you need to explicitly enable it in VS Code’s settings. Open your VS Code settings (you can use File > Preferences > Settings (JSON) to edit the JSON directly), then add the following line to your settings JSON file:
{
    "appmod.experimental.task.typescript.upgrade": true
}
This flag tells the extension that you want to turn on the JavaScript/TypeScript upgrade capabilities. After saving this setting, restart VS Code to ensure the change takes effect.

Once you have Node.js, Copilot, and the extension set up (and the feature flag enabled), you’re ready to modernize your first project!

Modernizing a JS/TS App: Step-by-Step

Using the JS/TS Modernizer is straightforward. Let’s walk through the process:

  1. Open your project in VS Code. In VS Code, open the folder containing your JavaScript or TypeScript application. Make sure this folder contains a package.json file with your project’s dependencies (the tool uses this to know what to upgrade). The tool will not create any branches or commits, so you may want to create and switch to a dedicated branch at this time.
  2. Open the GitHub Copilot app modernization panel. In the VS Code Activity Bar (the sidebar on the left), click the GitHub Copilot App Modernization icon. This is the panel provided by the extension where modernization tasks are launched. (If you just installed the extension, it might already be visible; it typically looks like a Copilot or upgrade icon on the left.)
  3. Click Upgrade npm Packages. In the Copilot app modernization panel, you should see a button labeled “Upgrade npm Packages”. Click this button to start the modernization process for your JS/TS app. (If you don’t see this button, double-check that your workspace has a package.json; the tool shows this option only when a package.json is detected.)

jsts upgrade npm packages button

  4. Follow the Copilot chat prompts. Once you click the upgrade button, the extension will initiate the modernization workflow. You’ll see GitHub Copilot Chat open up (usually on the right of VS Code). The Copilot modernization agent will analyze your project and propose an upgrade plan. For example, it might identify outdated packages that need updating. It will then begin applying updates: Copilot will update your package.json with new version numbers, run npm install (or npm update under the covers), and start suggesting any code changes required. Throughout this process, messages will appear in the chat explaining what’s happening. The Copilot agent may ask you a few questions as it runs; you can respond in the chat to guide it.
  5. What’s happening behind the scenes? The Copilot modernization agent is effectively performing a series of upgrade tasks on your project, driven by AI. It checks for outdated dependencies, updates them, and then addresses resulting issues. For example, if a new version of a library has changed an API your code uses, the agent (via Copilot) can suggest code modifications to fix the usage. All of this is done in an iterative loop within the chat: analyze the project, make a change, verify it (e.g. run a build or check for errors), and repeat. The extension uses Copilot’s “agent mode” to orchestrate these steps, so you’ll see it explaining each step (“Upgrading package X from v1 to v2”, “Fixing import statements for updated API”, etc.). It’s a bit like having a smart script or an assistant walk you through a complex upgrade, but you remain in control, since you can always intervene via the chat if needed. The Modernizer will continue this process until it believes the upgrade is complete (for example, all packages updated and no obvious errors remain).
  6. Review and finalize changes. After the modernization run finishes, be sure to review the changes that were made. The tool will have updated files in your workspace (for instance, the package.json and possibly some source files). You can inspect the diff of these changes using the Source Control view in VS Code. (Nothing is auto-committed; you have the chance to accept or adjust the changes.) Run your build or tests to double-check everything is working. The Copilot chat may also summarize what was done or list any follow-up steps for you. Once you’re satisfied, you can commit the updated code. You’ve just modernized your app with substantially less effort than doing it all by hand!

Known Issues and Tips

As of this preview release, there are a couple of things to be aware of:

  • One project at a time: If you have a workspace with multiple JS/TS projects (for example, a monorepo or multiple package.json files in subfolders), the Modernizer will currently target only one of them (usually the first it detects). In the current preview, it’s best to open and upgrade one project folder at a time. If you need to modernize several projects individually, open each one separately in VS Code and run the tool for each. Support for multi-project workspaces may improve in future updates.
  • Experimental feature: The JS/TS Modernizer is still in preview, meaning our team is actively refining it. You might encounter some rough edges or cases where the tool can’t fully modernize a complex app. If the tool gets stuck or something isn’t working as expected, please let us know.

Try It Out Today

The JavaScript/TypeScript Modernizer can save you time and help eliminate the drudgery of manual upgrades. Instead of combing through release notes and fixing broken imports alone, you have an AI assistant ready to do much of that work for you. This is part of our broader effort to bring GitHub Copilot-powered modernization tools to developers. We’re eager to expand and improve these capabilities based on your input.

Ready to modernize your app? Install the GitHub Copilot App Modernization (Preview) extension in VS Code, enable the JS/TS Modernizer as described above, and give it a try on one of your projects. We hope you find it useful for keeping your apps up-to-date with the latest and greatest.

What to do if you run into an issue?

If something is not working as expected during a TypeScript upgrade, you can use the following locations to diagnose the issue.

Check the TypeScript MCP Server Logs in VS Code

You can open logs from the TypeScript Package Updater MCP server directly in VS Code.

  1. Open the Command Palette.
  2. Run: MCP: List Servers.
  3. Select TypeScript Package Updater.
  4. Choose Show Output.

This opens the MCP server logs and can reveal any startup or runtime issues.

Look for the PROGRESS.md File

After you trigger an upgrade, the tool creates a folder named .tsupgrader next to your package.json.

  • If a file named PROGRESS.md exists, it will show detailed information about the current upgrade state.
  • If the file is missing, this usually indicates an unexpected error.

Inspect the Local App Modernization Log Directory

If PROGRESS.md is missing or the results seem incorrect, you can find more detailed logs at <your user profile>/.ghcp-appmod/ts/logs/.
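If you want to jump straight to that directory, a small Node script can resolve it for you. This just mirrors the path quoted above, with os.homedir() standing in for <your user profile>:

```typescript
// Resolve the App Modernization log directory using only Node built-ins.
// os.homedir() plays the role of "<your user profile>" in the path above.
import os from "node:os";
import path from "node:path";

function appModLogDir(): string {
  return path.join(os.homedir(), ".ghcp-appmod", "ts", "logs");
}

console.log(appModLogDir());
```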

Providing Feedback

We’d love to hear your feedback and any issues you encounter. If you run into a bug or want to suggest an improvement, please let us know! You can email us at [email protected] with details about the issue. Feedback is incredibly valuable to help us improve the Modernizer’s algorithms and coverage.

When reporting an issue with the JS or TS Modernizer, please include as much context as possible. This helps us diagnose problems faster. You can include:

  • Any errors or warnings from the MCP server output.
  • The contents of the PROGRESS.md file if it exists.
  • Any relevant files under .ghcp-appmod/ts/logs.
  • A short description of what you expected to happen and what actually occurred.

If the tool appears to be stuck or producing unexpected results, the logs described above will usually contain helpful clues.

The post Announcing the JavaScript/TypeScript Modernizer for VS Code appeared first on Microsoft for Developers.

]]>
https://devblogs.microsoft.com/blog/jsts-modernizer-preview/feed 5
Host Your Node.js MCP Server on Azure Functions in 1 Simple Step https://devblogs.microsoft.com/blog/host-your-node-js-mcp-server-on-azure-functions-in-3-simple-steps https://devblogs.microsoft.com/blog/host-your-node-js-mcp-server-on-azure-functions-in-3-simple-steps#respond Mon, 08 Dec 2025 18:00:54 +0000 https://devblogs.microsoft.com/?p=20494 Building AI agents with the Model Context Protocol (MCP) is powerful, but when it comes to hosting your MCP server in production, you need a solution that’s reliable, scalable, and cost-effective. What if you could deploy your regular Node.js MCP server to a serverless platform that handles scaling automatically while you only pay for what […]

The post Host Your Node.js MCP Server on Azure Functions in 1 Simple Step appeared first on Microsoft for Developers.

]]>
Building AI agents with the Model Context Protocol (MCP) is powerful, but when it comes to hosting your MCP server in production, you need a solution that’s reliable, scalable, and cost-effective. What if you could deploy your regular Node.js MCP server to a serverless platform that handles scaling automatically while you only pay for what you use?

Let’s explore how Azure Functions now supports hosting MCP servers built with the official Anthropic MCP SDK, giving you serverless scaling with almost no changes to your code.

Grab your favorite hot beverage, and let’s dive in!

TL;DR key takeaways

  • Azure Functions now supports hosting Node.js MCP servers using the official Anthropic SDK
  • Only 1 simple configuration step needed: adding a host.json file
  • Currently supports HTTP Streaming protocol with stateless servers
  • Serverless hosting means automatic scaling and pay-per-use pricing
  • Deploy with one command using Infrastructure as Code

What will you learn here?

  • Understand how MCP servers work on Azure Functions
  • Configure a Node.js MCP server for Azure Functions hosting
  • Test your MCP server locally and with real AI agents
  • Deploy your MCP server with Infrastructure as Code and AZD

Reference links for everything we use

Requirements

What is MCP and why does it matter?

Model Context Protocol is an open standard that enables AI models to securely interact with external tools and data sources. Instead of hardcoding tool integrations, you build an MCP server that exposes capabilities (like browsing a menu, placing orders, or querying a database) as tools that any MCP-compatible AI agent can discover and use. MCP is model-agnostic, meaning it can work with any LLM that supports the protocol, including models from Anthropic, OpenAI, and others. It’s also worth noting that MCP supports more than just tool calls, though that’s its most common use case.
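To make that concrete, here is roughly what a tool invocation looks like on the wire. MCP messages are JSON-RPC 2.0, and the field names below follow the public MCP specification, but treat this as an illustrative sketch rather than a normative example; the tool name and arguments are invented.

```typescript
// Illustrative only: the shape of a JSON-RPC 2.0 "tools/call" request as
// described by the MCP specification. Tool name and arguments are invented.

interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: {
    name: string;
    arguments: Record<string, unknown>;
  };
}

function makeToolCall(id: number, name: string, args: Record<string, unknown>): ToolCallRequest {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

console.log(JSON.stringify(makeToolCall(1, "get_burger_by_id", { id: 42 }), null, 2));
```

The point is that any MCP-compatible agent can discover your tools and send requests in this shape, without knowing anything about your server's internals.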

Schema showing MCP interfacing with different tool servers

The challenge? Running MCP servers in production requires infrastructure. You need to handle scaling, monitoring, and costs. That’s where Azure Functions comes in.

🚨 Free course alert! If you’re new to MCP, check out the MCP for Beginners course to get up to speed quickly.

Why Azure Functions for MCP servers?

Azure Functions is a serverless compute platform that’s perfect for MCP servers:

  • Zero infrastructure management: No servers to maintain
  • Automatic scaling: Handles traffic spikes seamlessly
  • Cost-effective: Pay only for actual execution time (with a generous free grant)
  • Built-in monitoring: Application Insights integration out of the box
  • Global distribution: Deploy to regions worldwide

The new Azure Functions support means you can take your existing Node.js MCP server and deploy it to a production-ready serverless environment with minimal changes. This is an additional option for native Node.js MCP hosting; you can still use the Azure Functions MCP bindings that were available before.

1 simple step to enable Functions hosting

Let’s break down what you need to add to your existing Node.js MCP server to run it on Azure Functions. I’ll use a real-world example from our burger ordering system.

If you already have a working Node.js MCP server, you can just follow this step to make it compatible with Azure Functions hosting.

Step 1: Add the host.json configuration

Create a host.json file at the root of your Node.js project:

{
  "version": "2.0",
  "configurationProfile": "mcp-custom-handler",
  "customHandler": {
    "description": {
      "defaultExecutablePath": "node",
      "arguments": ["lib/server.js"]
    },
    "http": {
      "DefaultAuthorizationLevel": "anonymous"
    },
    "port": "3000"
  }
}

Note: Adjust the arguments array to point to your compiled server file (e.g., lib/server.js or dist/server.js), depending on your build setup. You can also change the port if needed to match your server configuration. The host.json file holds metadata configuration for the Functions runtime. The most important part here is the customHandler section: it configures the Azure Functions runtime to run your Node.js MCP server as a custom handler, which allows you to use any HTTP server framework (like Express, Fastify, etc.) without modification (tip: it can do more than MCP servers! 😉).

There’s no step 2 or 3. That’s it! 😎

Note: We’re not covering the authentication and authorization aspects of Azure Functions here, but you can easily add those later if needed.

Real-world example: Burger MCP Server

Let’s look at how this works in practice with a burger ordering MCP server. This server exposes 9 tools for AI agents to interact with a burger API:

  • get_burgers – Browse the menu
  • get_burger_by_id – Get burger details
  • place_order – Place an order
  • get_orders – View order history
  • And more…

Here’s the complete server implementation using Express and the MCP SDK:

import express, { Request, Response } from 'express';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';
import { getMcpServer } from './mcp.js';

const app = express();
app.use(express.json());

// Handle all MCP Streamable HTTP requests
app.all('/mcp', async (request: Request, response: Response) => {
  const transport = new StreamableHTTPServerTransport({
    sessionIdGenerator: undefined,
  });

  // Connect the transport to the MCP server
  const server = getMcpServer();
  await server.connect(transport);

  // Handle the request with the transport
  await transport.handleRequest(request, response, request.body);

  // Clean up when the response is closed
  response.on('close', async () => {
    await transport.close();
    await server.close();
  });

  // Note: error handling not shown for brevity
});

// The port configuration
const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`Burger MCP server listening on port ${PORT}`);
});

The MCP tools are defined using the official SDK:

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { z } from 'zod';

export function getMcpServer() {
  const server = new McpServer({
    name: 'burger-mcp',
    version: '1.0.0',
  });

  server.registerTool(
    'get_burgers',
    { description: 'Get a list of all burgers in the menu' },
    async () => {
      const response = await fetch(`${burgerApiUrl}/burgers`);
      const burgers = await response.json();
      return {
        content: [{
          type: 'text',
          text: JSON.stringify(burgers, null, 2)
        }]
      };
    }
  );

  // ... more tools
  return server;
}

As you can see, the actual implementation of the tool forwards an HTTP request to the burger API and returns the result in the MCP response format. This is a common pattern for MCP tools in enterprise contexts: they act as wrappers around one or more existing APIs.
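That wrapper pattern can be distilled into a few lines. The sketch below is self-contained, with inline data standing in for the burger API call; the helper name is ours for illustration, not part of the MCP SDK:

```typescript
// The wrapper pattern distilled: take any JSON-serializable API result and
// wrap it in the text-content shape MCP tools return. Inline data stands in
// for the real burger API; toMcpResult is our own illustrative helper.

interface McpTextResult {
  content: { type: "text"; text: string }[];
}

function toMcpResult(apiResponse: unknown): McpTextResult {
  return {
    content: [{ type: "text", text: JSON.stringify(apiResponse, null, 2) }],
  };
}

// Stand-in for a burger API response.
const burgers = [{ id: 1, name: "Classic" }, { id: 2, name: "Spicy BBQ" }];
console.log(toMcpResult(burgers).content[0].text);
```

Keeping this conversion in one place makes it trivial to wrap each new API endpoint as another MCP tool.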

Current limitations

Note that this Azure Functions MCP hosting currently has some limitations: it only supports stateless servers using the HTTP Streaming protocol. The legacy SSE protocol is not supported because it requires stateful connections, so you’ll either have to migrate your client to HTTP Streaming or choose another hosting option, such as containers.

For most use cases, HTTP Streaming is the recommended approach anyway, as it’s more scalable and doesn’t require persistent connections. Stateful MCP servers come with additional complexity and have limited scalability if you need to handle many concurrent connections.

Testing the MCP server locally

First let’s run the MCP server locally and play a bit with it.

If you don’t want to bother with setting up a local environment, you can use the following link or open it in a new tab to launch a GitHub Codespace:

This will open a VS Code environment in your browser with the repo already cloned and all the tools installed and ready to go. Otherwise you can just clone the repo.

Once you have the code ready, open a terminal and run:

# Install dependencies
npm install

# Start the burger MCP server and API
npm start

This will start multiple services locally, including the Burger API and the MCP server, which will be available at http://localhost:3000/mcp. This may take a few seconds; wait until you see this message in the terminal:

🚀 All services ready 🚀

We’re only interested in the MCP server for now, so let’s focus on that.

Using MCP Inspector

The easiest way to test the MCP server is with the MCP Inspector tool:

npx -y @modelcontextprotocol/inspector

Open the URL shown in the console in your browser, then:

  1. Set transport type to Streamable HTTP
  2. Enter your local server URL: http://localhost:3000/mcp
  3. Click Connect

After you’re connected, go to the Tools tab to list available tools. You can then try the get_burgers tool to see the burger menu.

MCP Inspector Screenshot

Using GitHub Copilot (with remote MCP)

Configure GitHub Copilot to use your MCP server by adding this to your project’s .vscode/mcp.json:

{
  "servers": {
    "burger-mcp": {
      "type": "http",
      "url": "http://localhost:3000/mcp"
    }
  }
}

Click the “Start” button that appears in the JSON file to activate the MCP server connection.

Now you can use Copilot in agent mode and ask things like:

  • “What spicy burgers do you have?”
  • “Place an order for two cheeseburgers”
  • “Show my recent orders”

Copilot will automatically discover and use the MCP tools! 🎉

Tip: If Copilot doesn’t call the burger MCP tools, try checking if it’s enabled by clicking on the tool icon in the chat input box and ensuring that “burger-mcp” is selected. You can also force tool usage by adding #burger-mcp in your prompt.

(Bonus) Deploying to Azure with Infrastructure as Code

Deploying an application to Azure is usually not the fun part, especially when it involves multiple resources and configurations. With the Azure Developer CLI (AZD), you can define your entire application infrastructure and deployment process as code, and deploy everything with a single command.

If you’ve used the automated setup with GitHub Copilot, you should already have the necessary files. Our burger example also comes with these files pre-configured. The MCP server is defined as a service in azure.yaml, and the files under the infra folder define the Azure Functions app and related resources.

Here’s the relevant part of azure.yaml that defines the burger MCP service:

name: mcp-agent-langchainjs

services:
  burger-mcp:
    project: ./packages/burger-mcp
    language: ts
    host: function

While the infrastructure files can look intimidating at first, you don’t need to understand all the details to get started. There are tons of templates and examples available to help you get going quickly. The important part is that everything is defined as code, so you can version control it and reuse it.

Now let’s deploy:

# Login to Azure
azd auth login

# Provision resources and deploy
azd up

Pick your preferred Azure region when prompted (if you’re not sure, choose East US 2), and voilà! In a few minutes, you’ll have a fully deployed MCP server running on Azure Functions.

Once the deployment is finished, the CLI will show you the URL of the deployed resources, including the MCP server endpoint.

AZD deployment output for the burger MCP example app

Example projects

The burger MCP server is actually part of a larger example project that demonstrates building an AI agent with LangChain.js that uses the burger MCP server to place orders. If you’re interested in the next steps of building an AI agent on top of MCP, this is a great resource, as it includes:

  • AI agent web API using LangChain.js
  • Web app interface built with Lit web components
  • MCP server on Functions (the one we just saw)
  • Burger ordering API (used by the MCP server)
  • Live order visualization
  • Complete Infrastructure as Code, to deploy everything with one command

But if you’re only interested in the MCP server part, you might want to look at this simpler example that you can use as a starting point for your own MCP servers: mcp-sdk-functions-hosting-node is a server template for a Node.js MCP server using TypeScript and the MCP SDK.

What about the cost?

Azure Functions Flex Consumption pricing is attractive for MCP servers:

  • Free grant: 1 million requests and 400,000 GB-s execution time per month
  • After free grant: Pay only for actual execution time
  • Automatic scaling: From zero to hundreds of instances

The free grant is generous enough to allow running a typical MCP server with moderate usage, and all the experimentation you might need. It’s easy to configure the scaling limits to control costs as needed, with an option to scale down to zero when idle. This flexibility is why Functions is my personal go-to choice for TypeScript projects on Azure.
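If you want a rough feel for how far the free grant stretches, the arithmetic is simple: multiply requests by average duration and instance memory to get GB-s. The sketch below uses the grant figures quoted above with made-up workload numbers; real Flex Consumption billing has more dimensions (such as always-ready instances), so treat it as a back-of-the-envelope check only.

```typescript
// Back-of-the-envelope check against the free grant quoted above
// (1 million requests, 400,000 GB-s per month). Workload numbers are made
// up; real Flex Consumption billing has more dimensions than this.

const FREE_REQUESTS = 1_000_000;
const FREE_GB_SECONDS = 400_000;

function withinFreeGrant(requestsPerMonth: number, avgDurationSeconds: number, memoryGb: number): boolean {
  const gbSeconds = requestsPerMonth * avgDurationSeconds * memoryGb;
  return requestsPerMonth <= FREE_REQUESTS && gbSeconds <= FREE_GB_SECONDS;
}

// Example: 200k MCP tool calls/month at 250 ms average on a 0.5 GB instance
// consume 25,000 GB-s, comfortably inside the grant.
console.log(withinFreeGrant(200_000, 0.25, 0.5)); // prints true
```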

Wrap up

Hosting MCP servers on Azure Functions gives you the best of both worlds: the simplicity of serverless infrastructure and the power of the official Anthropic SDK. With just one simple configuration step, you can take your existing Node.js MCP server and deploy it to a production-ready, auto-scaling platform.

The combination of MCP’s standardized protocol and Azure’s serverless platform means you can focus on building amazing AI experiences instead of managing infrastructure. Boom. 😎

Star the repos ⭐ if you found this helpful! Try deploying your own MCP server and share your experience in the comments. If you run into any issues or have questions, you can reach out for help on the Azure AI community on Discord.

The post Host Your Node.js MCP Server on Azure Functions in 1 Simple Step appeared first on Microsoft for Developers.

]]>
https://devblogs.microsoft.com/blog/host-your-node-js-mcp-server-on-azure-functions-in-3-simple-steps/feed 0