DEV Community: Microsoft Azure The latest articles on DEV Community by Microsoft Azure (@azure). https://dev.to/azure https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F512%2F64ce0b82-730d-4ca0-8359-2c21513a0063.jpg DEV Community: Microsoft Azure https://dev.to/azure en Getting started with GitHub Copilot part 2, streamable responses Chris Noring Mon, 16 Mar 2026 20:14:28 +0000 https://dev.to/azure/getting-started-with-github-copilot-part-2-streamable-responses-49a8 https://dev.to/azure/getting-started-with-github-copilot-part-2-streamable-responses-49a8 <p>I'm sure you've seen many AI apps where you sit tight for 30 seconds, wondering if things are stuck. Not a great experience, right? You deserve better, so how do we fix it?<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight plaintext"><code>Type your prompt&gt; Tell me a joke . . . . . . . . Why could I never find the atoms, cause they split.. </code></pre> </div> <p> <iframe src="https://www.youtube.com/embed/T4p-C2v_0wU"> </iframe> </p> <h2> Series on Copilot SDK </h2> <p>This series is about the Copilot SDK and how you can leverage your existing GitHub Copilot license to integrate AI into your apps.</p> <ul> <li><a href="https://dev.to/azure/get-started-with-github-copilot-sdk-1ijm/">Part 1 - install and your first app</a></li> <li>Part 2 - streamable responses, <strong>you're here</strong> </li> </ul> <h2> Addressing the problem </h2> <p>By streaming the response, it arrives in chunks, pieces that you can show as soon as they arrive. How can we do that, though, and how can the GitHub Copilot SDK help out?</p> <p>Well, there are two things you need to do:</p> <ul> <li>Enable streaming.
You need to set <code>streaming</code> to <code>True</code> when you call <code>create_session</code>.</li> <li>Listen for events that contain a chunk. Specifically, you need to listen to <code>ASSISTANT_MESSAGE_DELTA</code> and print out the chunk. </li> </ul> <div class="highlight js-code-highlight"> <pre class="highlight python"><code><span class="c1"># 1. Enable streaming </span><span class="n">session</span> <span class="o">=</span> <span class="k">await</span> <span class="n">client</span><span class="p">.</span><span class="nf">create_session</span><span class="p">({</span> <span class="sh">"</span><span class="s">model</span><span class="sh">"</span><span class="p">:</span> <span class="sh">"</span><span class="s">gpt-4.1</span><span class="sh">"</span><span class="p">,</span> <span class="sh">"</span><span class="s">on_permission_request</span><span class="sh">"</span><span class="p">:</span> <span class="n">PermissionHandler</span><span class="p">.</span><span class="n">approve_all</span><span class="p">,</span> <span class="sh">"</span><span class="s">streaming</span><span class="sh">"</span><span class="p">:</span> <span class="bp">True</span><span class="p">,</span> <span class="p">})</span> <span class="nf">print</span><span class="p">(</span><span class="sh">"</span><span class="s">Starting streamed response:</span><span class="sh">"</span><span class="p">)</span> <span class="c1"># Listen for response chunks </span><span class="k">def</span> <span class="nf">handle_event</span><span class="p">(</span><span class="n">event</span><span class="p">):</span> <span class="k">if</span> <span class="n">event</span><span class="p">.</span><span class="nb">type</span> <span class="o">==</span> <span class="n">SessionEventType</span><span class="p">.</span><span class="n">ASSISTANT_MESSAGE_DELTA</span><span class="p">:</span> <span class="c1"># 2. Chunk arrived, print it </span><span class="n">sys</span><span class="p">.</span><span class="n">stdout</span><span class="p">.</span><span class="nf">write</span><span class="p">(</span><span class="n">event</span><span class="p">.</span><span class="n">data</span><span class="p">.</span><span class="n">delta_content</span><span class="p">)</span> <span class="n">sys</span><span class="p">.</span><span class="n">stdout</span><span class="p">.</span><span class="nf">flush</span><span class="p">()</span> <span class="k">if</span> <span class="n">event</span><span class="p">.</span><span class="nb">type</span> <span class="o">==</span> <span class="n">SessionEventType</span><span class="p">.</span><span class="n">SESSION_IDLE</span><span class="p">:</span> <span class="nf">print</span><span class="p">()</span> <span class="c1"># New line when done </span></code></pre> </div> <p>Here's what the full application looks like:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight python"><code><span class="kn">import</span> <span class="n">asyncio</span> <span class="kn">import</span> <span class="n">sys</span> <span class="kn">from</span> <span class="n">copilot</span> <span class="kn">import</span> <span class="n">CopilotClient</span><span class="p">,</span> <span class="n">PermissionHandler</span> <span class="kn">from</span> <span class="n">copilot.generated.session_events</span> <span class="kn">import</span> <span class="n">SessionEventType</span> <span class="k">async</span> <span class="k">def</span> <span class="nf">main</span><span class="p">():</span> <span class="n">client</span> <span class="o">=</span> <span class="nc">CopilotClient</span><span class="p">()</span> <span class="k">await</span> <span class="n">client</span><span class="p">.</span><span class="nf">start</span><span class="p">()</span> <span class="n">session</span> <span class="o">=</span> <span class="k">await</span> <span
class="n">client</span><span class="p">.</span><span class="nf">create_session</span><span class="p">({</span> <span class="sh">"</span><span class="s">model</span><span class="sh">"</span><span class="p">:</span> <span class="sh">"</span><span class="s">gpt-4.1</span><span class="sh">"</span><span class="p">,</span> <span class="sh">"</span><span class="s">on_permission_request</span><span class="sh">"</span><span class="p">:</span> <span class="n">PermissionHandler</span><span class="p">.</span><span class="n">approve_all</span><span class="p">,</span> <span class="sh">"</span><span class="s">streaming</span><span class="sh">"</span><span class="p">:</span> <span class="bp">True</span><span class="p">,</span> <span class="p">})</span> <span class="nf">print</span><span class="p">(</span><span class="sh">"</span><span class="s">Starting streamed response:</span><span class="sh">"</span><span class="p">)</span> <span class="c1"># Listen for response chunks </span> <span class="k">def</span> <span class="nf">handle_event</span><span class="p">(</span><span class="n">event</span><span class="p">):</span> <span class="k">if</span> <span class="n">event</span><span class="p">.</span><span class="nb">type</span> <span class="o">==</span> <span class="n">SessionEventType</span><span class="p">.</span><span class="n">ASSISTANT_MESSAGE_DELTA</span><span class="p">:</span> <span class="n">sys</span><span class="p">.</span><span class="n">stdout</span><span class="p">.</span><span class="nf">write</span><span class="p">(</span><span class="n">event</span><span class="p">.</span><span class="n">data</span><span class="p">.</span><span class="n">delta_content</span><span class="p">)</span> <span class="n">sys</span><span class="p">.</span><span class="n">stdout</span><span class="p">.</span><span class="nf">flush</span><span class="p">()</span> <span class="k">if</span> <span class="n">event</span><span class="p">.</span><span class="nb">type</span> <span class="o">==</span> 
<span class="n">SessionEventType</span><span class="p">.</span><span class="n">SESSION_IDLE</span><span class="p">:</span> <span class="nf">print</span><span class="p">()</span> <span class="c1"># New line when done </span> <span class="n">session</span><span class="p">.</span><span class="nf">on</span><span class="p">(</span><span class="n">handle_event</span><span class="p">)</span> <span class="nf">print</span><span class="p">(</span><span class="sh">"</span><span class="s">Sending prompt...</span><span class="sh">"</span><span class="p">)</span> <span class="k">await</span> <span class="n">session</span><span class="p">.</span><span class="nf">send_and_wait</span><span class="p">({</span><span class="sh">"</span><span class="s">prompt</span><span class="sh">"</span><span class="p">:</span> <span class="sh">"</span><span class="s">Tell me a short joke</span><span class="sh">"</span><span class="p">})</span> <span class="k">await</span> <span class="n">client</span><span class="p">.</span><span class="nf">stop</span><span class="p">()</span> <span class="k">if</span> <span class="n">__name__</span> <span class="o">==</span> <span class="sh">"</span><span class="s">__main__</span><span class="sh">"</span><span class="p">:</span> <span class="n">asyncio</span><span class="p">.</span><span class="nf">run</span><span class="p">(</span><span class="nf">main</span><span class="p">())</span> </code></pre> </div> <p>That's it, folks. Now go out and build better experiences for your users.</p> <h2> Links </h2> <ul> <li><a href="https://github.com/github/copilot-sdk/blob/main/docs/getting-started.md#step-3-add-streaming-responses" rel="noopener noreferrer">Streamed response</a></li> </ul> ai githubcopilot copilotsdk python Get started with GitHub Copilot CLI: A free, hands-on course Renee Noble Thu, 05 Mar 2026 04:44:08 +0000 https://dev.to/azure/get-started-with-github-copilot-cli-a-free-hands-on-course-3beg
https://dev.to/azure/get-started-with-github-copilot-cli-a-free-hands-on-course-3beg <p>GitHub Copilot has grown well beyond code completions in your editor. It now lives in your terminal, too. <a href="https://docs.github.com/copilot/how-tos/copilot-cli" rel="noopener noreferrer">GitHub Copilot CLI</a> lets you review code, generate tests, debug issues, and ask questions about your projects without ever leaving the command line.</p> <p>To help developers get up to speed, we put together a free, open source course: <a href="https://github.com/github/copilot-cli-for-beginners" rel="noopener noreferrer">GitHub Copilot CLI for Beginners</a>. It’s 8 chapters, hands-on from the start, and designed so you can go from installation to building real workflows in a few hours. <strong>Already have a GitHub account</strong>? GitHub Copilot CLI works with <a href="https://github.com/features/copilot/plans" rel="noopener noreferrer">GitHub Copilot Free</a>, which is available to all personal GitHub accounts.</p> <p>In this post, I’ll walk through what the course covers and how to get started.</p> <h2> What GitHub Copilot CLI can do </h2> <p>If you haven’t tried it yet, GitHub Copilot CLI is a conversational AI assistant that runs in your terminal.
You point it at files using @ references, and it reads your code and responds with analysis, suggestions, or generated code.</p> <p>You can use it to:</p> <ul> <li>Review a file and get feedback on code quality</li> <li>Generate tests based on existing code</li> <li>Debug issues by pointing it at a file and asking what’s wrong</li> <li>Explain unfamiliar code or confusing logic</li> <li>Generate commit messages, refactor functions, and more</li> <li>Write new app features (front-end, APIs, database interactions, and more)</li> </ul> <p>It remembers context within a conversation, so follow-up questions build on what came before.</p> <h2> What the course covers </h2> <p>The course is structured as 8 progressive chapters. Each one builds on the last, and you work with the same project throughout: a book collection management app. Instead of jumping between isolated snippets, you keep improving one codebase as you go.</p> <p>Here’s what using GitHub Copilot CLI looks like in practice. Say you want to review a Python file for potential issues. Start up Copilot CLI and ask what you’d like done:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight plaintext"><code>$ copilot &gt; Review @samples/book-app-project/books.py for potential improvements. Focus on error handling and code quality. 
</code></pre> </div> <p>Copilot reads the file, analyzes the code, and gives you specific feedback right in your terminal.</p> <p><a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68z8p0e37osyv69md339.gif" class="article-body-image-wrapper"><img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F68z8p0e37osyv69md339.gif" alt="code review demo gif" width="760" height="456"></a></p> <p>Here are the chapters covered in the course:</p> <p><strong>Quick Start</strong> — Installation and authentication<br> <strong>First Steps</strong> — Learn the three interaction modes: interactive, plan, and one-shot (programmatic)<br> <strong>Context and Conversations</strong> — Using @ references to point Copilot at files and directories, plus session management with --continue and --resume<br> <strong>Development Workflows</strong> — Code review, refactoring, debugging, test generation, and Git integration<br> <strong>Custom Agents</strong> — Building specialized AI assistants with .agent.md files (for example, a Python reviewer that always checks for type hints)<br> <strong>Skills</strong> — Creating task-specific instructions that auto-trigger based on your prompt<br> <strong>MCP Servers</strong> — Connecting Copilot to external services like GitHub repos, file systems, and documentation APIs via the Model Context Protocol<br> <strong>Putting It All Together</strong> — Combining agents, skills, and MCP servers into complete development workflows</p> <p><a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F381r15kd6bohcuu878jj.png"
class="article-body-image-wrapper"><img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F381r15kd6bohcuu878jj.png" alt="learning path image" width="800" height="116"></a></p> <p>Every command in the course can be copied and run directly. No AI or machine learning background is required.</p> <h2> Who this is for </h2> <p>The course is built for:</p> <ul> <li> <strong>Developers using terminal workflows:</strong> If you’re already running builds, checking git status, and SSHing into servers from the command line, Copilot CLI fits right into that flow.</li> <li> <strong>Teams looking to standardise AI-assisted practices:</strong> Custom agents and skills can be shared across a team through a project’s <code>.github/agents</code> and <code>.github/skills</code> directories.</li> <li> <strong>Students and early-career developers:</strong> The course explains AI terminology as it comes up, and every chapter includes assignments with clear success criteria.</li> </ul> <p>You don’t need prior experience with AI tools. If you can run commands in a terminal, you can learn and apply the concepts in this course.</p> <h2> How the course teaches </h2> <p>Each chapter follows a consistent pattern: a real-world analogy to ground the concept, then the core technical material, then hands-on exercises. For instance, the three interaction modes are compared to ordering food at a restaurant. Plan mode is like mapping your route to the restaurant before you start driving. Interactive mode is a back-and-forth conversation with a waiter.
And one-shot mode (programmatic mode) is like going through the drive-through.</p> <p><a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsorjdhrhn758c638mps.png" class="article-body-image-wrapper"><img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffsorjdhrhn758c638mps.png" alt="ordering food analogy image" width="800" height="410"></a></p> <p>Later chapters use different comparisons: agents are like hiring specialists, skills work like attachments for a power drill, and MCP servers are compared to browser extensions. The goal is to provide you with a visual and mental model before the technical details land.</p> <p>The course also focuses on a question that’s harder than it looks: when should I use which tool? Knowing the difference between reaching for an agent, a skill, or an MCP server takes practice, and the final chapter walks through that decision-making in a realistic workflow.</p> <p><a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cfcllljwbgovlf9qfc6.png" class="article-body-image-wrapper"><img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1cfcllljwbgovlf9qfc6.png" alt="integration pattern image" width="749" height="881"></a></p> <h2> Get started </h2> <p>The course is free and open source.
You can clone the repo, or <a href="https://codespaces.new/github/copilot-cli-for-beginners?hide_repo_select=true&amp;ref=main&amp;quickstart=true" rel="noopener noreferrer">open it in GitHub Codespaces</a> for a fully configured environment. Jump right in, get Copilot CLI running, and see if it fits your workflow.</p> <p><a href="https://github.com/github/copilot-cli-for-beginners" rel="noopener noreferrer">GitHub Copilot CLI for Beginners</a></p> <p>For a quick reference, see the <a href="https://docs.github.com/copilot/reference/cli-command-reference" rel="noopener noreferrer">CLI command reference</a>.</p> <p>Subscribe to <a href="https://resources.github.com/newsletter/" rel="noopener noreferrer">GitHub Insider</a> for more developer tips and guides.</p> githubcopilot cli vscode terminal Get started with GitHub Copilot SDK, part 1 Chris Noring Wed, 04 Mar 2026 20:48:00 +0000 https://dev.to/azure/get-started-with-github-copilot-sdk-1ijm https://dev.to/azure/get-started-with-github-copilot-sdk-1ijm <p>Did you know GitHub Copilot now has an SDK and that you can leverage your existing license to build AI integrations into your app?
No? Well, I hope I have your attention now.</p> <p> <iframe src="https://www.youtube.com/embed/hvwGZqS4qF0"> </iframe> </p> <h2> Series on Copilot SDK </h2> <p>This series is about the Copilot SDK and how you can leverage your existing GitHub Copilot license to integrate AI into your apps.</p> <ul> <li>Part 1 - install and your first app, <strong>you're here</strong> </li> <li><a href="https://dev.to/azure/getting-started-with-github-copilot-part-2-streamable-responses-49a8">Part 2 - streamable responses</a></li> </ul> <h2> Install </h2> <p>You need two pieces here to get started:</p> <ul> <li>GitHub Copilot CLI</li> <li>A supported runtime, which at present means Node.js, .NET, Python or Go</li> </ul> <p>Then you need to install the SDK for your chosen runtime, like so:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight shell"><code>pip <span class="nb">install </span>github-copilot-sdk </code></pre> </div> <h2> The parts </h2> <p>So what do you need to know to get started? There are three concepts:</p> <ul> <li> <strong>Client</strong>. You need to create an instance of it. Additionally, you need to start it, and stop it when you're done with it.</li> <li> <strong>Session</strong>. The session takes an object where you can set things like the model, system prompt and more. Also, the session is what you talk to when you want to carry out a request. </li> <li> <strong>Response</strong>. The response contains your LLM response.</li> </ul> <p>Below is an example program using these three concepts. As you can see, we choose "gpt-4.1" as the model, but this can be changed.
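Changing models only means changing the config dict you hand to <code>create_session</code>. Here is a minimal sketch of that idea; the helper below is illustrative and not part of the SDK, and any model ID other than "gpt-4.1" is an assumption you should verify against what your Copilot plan exposes.

```python
# Illustrative helper, not part of the Copilot SDK: build the config dict
# that create_session expects for a given model. Model IDs other than
# "gpt-4.1" are assumptions; check which models your Copilot plan offers.
def session_config(model: str = "gpt-4.1") -> dict:
    return {"model": model}

print(session_config())  # {'model': 'gpt-4.1'}
```

You would then pass the result straight to <code>client.create_session(...)</code> as in the example program.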
See also how we pass the prompt to the function <code>send_and_wait</code>.<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight python"><code><span class="kn">import</span> <span class="n">asyncio</span> <span class="kn">from</span> <span class="n">copilot</span> <span class="kn">import</span> <span class="n">CopilotClient</span> <span class="k">async</span> <span class="k">def</span> <span class="nf">main</span><span class="p">():</span> <span class="n">client</span> <span class="o">=</span> <span class="nc">CopilotClient</span><span class="p">()</span> <span class="k">await</span> <span class="n">client</span><span class="p">.</span><span class="nf">start</span><span class="p">()</span> <span class="n">session</span> <span class="o">=</span> <span class="k">await</span> <span class="n">client</span><span class="p">.</span><span class="nf">create_session</span><span class="p">({</span><span class="sh">"</span><span class="s">model</span><span class="sh">"</span><span class="p">:</span> <span class="sh">"</span><span class="s">gpt-4.1</span><span class="sh">"</span><span class="p">})</span> <span class="n">response</span> <span class="o">=</span> <span class="k">await</span> <span class="n">session</span><span class="p">.</span><span class="nf">send_and_wait</span><span class="p">({</span><span class="sh">"</span><span class="s">prompt</span><span class="sh">"</span><span class="p">:</span> <span class="sh">"</span><span class="s">What is 2 + 2?</span><span class="sh">"</span><span class="p">})</span> <span class="nf">print</span><span class="p">(</span><span class="n">response</span><span class="p">.</span><span class="n">data</span><span class="p">.</span><span class="n">content</span><span class="p">)</span> <span class="k">await</span> <span class="n">client</span><span class="p">.</span><span class="nf">stop</span><span class="p">()</span> <span class="n">asyncio</span><span class="p">.</span><span class="nf">run</span><span 
class="p">(</span><span class="nf">main</span><span class="p">())</span> </code></pre> </div> <p>Ok, now that we know what a simple program looks like, let's make something more interesting: an FAQ responder.</p> <h2> Your first app </h2> <p>An FAQ on a web page is often a pretty boring read. A way to make it more interesting for the end user is to let them chat with the FAQ instead. Let's make that happen. </p> <p>Here's the plan:</p> <ul> <li>Define a static FAQ.</li> <li>Add the FAQ as part of the prompt.</li> <li>Make a request to the LLM and print out the response.</li> </ul> <p>Let's build out the code little by little. First, let's define the FAQ information.</p> <p><strong>-1- FAQ information</strong><br> </p> <div class="highlight js-code-highlight"> <pre class="highlight python"><code><span class="c1"># faq.py </span> <span class="n">faq</span> <span class="o">=</span> <span class="p">{</span> <span class="sh">"</span><span class="s">warranty</span><span class="sh">"</span><span class="p">:</span> <span class="sh">"</span><span class="s">Our products come with a 1-year warranty covering manufacturing defects. Please contact our support team for assistance.</span><span class="sh">"</span><span class="p">,</span> <span class="sh">"</span><span class="s">return_policy</span><span class="sh">"</span><span class="p">:</span> <span class="sh">"</span><span class="s">We offer a 30-day return policy for unused products in their original packaging. To initiate a return, please visit our returns page and follow the instructions.</span><span class="sh">"</span><span class="p">,</span> <span class="sh">"</span><span class="s">shipping</span><span class="sh">"</span><span class="p">:</span> <span class="sh">"</span><span class="s">We offer free standard shipping on all orders over $50.
Expedited shipping options are available at checkout for an additional fee.</span><span class="sh">"</span><span class="p">,</span> <span class="p">}</span> </code></pre> </div> <p>Next, let's add the call to the Copilot SDK.</p> <p><strong>-2- Adding the LLM call</strong><br> </p> <div class="highlight js-code-highlight"> <pre class="highlight python"><code> <span class="kn">import</span> <span class="n">asyncio</span> <span class="kn">from</span> <span class="n">copilot</span> <span class="kn">import</span> <span class="n">CopilotClient</span> <span class="k">def</span> <span class="nf">faq_to_string</span><span class="p">(</span><span class="n">faq</span><span class="p">:</span> <span class="nb">dict</span><span class="p">)</span> <span class="o">-&gt;</span> <span class="nb">str</span><span class="p">:</span> <span class="k">return</span> <span class="sh">"</span><span class="se">\n</span><span class="sh">"</span><span class="p">.</span><span class="nf">join</span><span class="p">([</span><span class="sa">f</span><span class="sh">"</span><span class="si">{</span><span class="n">key</span><span class="si">}</span><span class="s">: </span><span class="si">{</span><span class="n">value</span><span class="si">}</span><span class="sh">"</span> <span class="k">for</span> <span class="n">key</span><span class="p">,</span> <span class="n">value</span> <span class="ow">in</span> <span class="n">faq</span><span class="p">.</span><span class="nf">items</span><span class="p">()])</span> <span class="k">async</span> <span class="k">def</span> <span class="nf">main</span><span class="p">(</span><span class="n">user_prompt</span><span class="p">:</span> <span class="nb">str</span> <span class="o">=</span> <span class="sh">"</span><span class="s">Tell me about shipping</span><span class="sh">"</span><span class="p">):</span> <span class="n">client</span> <span class="o">=</span> <span class="nc">CopilotClient</span><span class="p">()</span> <span class="k">await</span> <span
class="n">client</span><span class="p">.</span><span class="nf">start</span><span class="p">()</span> <span class="n">prompt</span> <span class="o">=</span> <span class="sa">f</span><span class="sh">"</span><span class="s">Here</span><span class="sh">'</span><span class="s">s the FAQ, </span><span class="si">{</span><span class="nf">faq_to_string</span><span class="p">(</span><span class="n">faq</span><span class="p">)</span><span class="si">}</span><span class="se">\n\n</span><span class="s">User question: </span><span class="si">{</span><span class="n">user_prompt</span><span class="si">}</span><span class="se">\n</span><span class="s">Answer:</span><span class="sh">"</span> <span class="n">session</span> <span class="o">=</span> <span class="k">await</span> <span class="n">client</span><span class="p">.</span><span class="nf">create_session</span><span class="p">({</span><span class="sh">"</span><span class="s">model</span><span class="sh">"</span><span class="p">:</span> <span class="sh">"</span><span class="s">gpt-4.1</span><span class="sh">"</span><span class="p">})</span> <span class="n">response</span> <span class="o">=</span> <span class="k">await</span> <span class="n">session</span><span class="p">.</span><span class="nf">send_and_wait</span><span class="p">({</span><span class="sh">"</span><span class="s">prompt</span><span class="sh">"</span><span class="p">:</span> <span class="n">prompt</span><span class="p">})</span> <span class="nf">print</span><span class="p">(</span><span class="n">response</span><span class="p">.</span><span class="n">data</span><span class="p">.</span><span class="n">content</span><span class="p">)</span> <span class="k">await</span> <span class="n">client</span><span class="p">.</span><span class="nf">stop</span><span class="p">()</span> <span class="k">if</span> <span class="n">__name__</span> <span class="o">==</span> <span class="sh">"</span><span class="s">__main__</span><span class="sh">"</span><span class="p">:</span> 
<span class="nf">print</span><span class="p">(</span><span class="sh">"</span><span class="s">My first app using the GitHub Copilot SDK!</span><span class="sh">"</span><span class="p">)</span> <span class="nf">print</span><span class="p">(</span><span class="sa">f</span><span class="sh">"</span><span class="s">[LOG] Asking the model about shipping information...</span><span class="sh">"</span><span class="p">)</span> <span class="n">asyncio</span><span class="p">.</span><span class="nf">run</span><span class="p">(</span><span class="nf">main</span><span class="p">(</span><span class="sh">"</span><span class="s">Tell me about shipping</span><span class="sh">"</span><span class="p">))</span> </code></pre> </div> <p>Note how we concatenate the FAQ data with the user's prompt:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight python"><code> <span class="n">prompt</span> <span class="o">=</span> <span class="sa">f</span><span class="sh">"</span><span class="s">Here</span><span class="sh">'</span><span class="s">s the FAQ, </span><span class="si">{</span><span class="nf">faq_to_string</span><span class="p">(</span><span class="n">faq</span><span class="p">)</span><span class="si">}</span><span class="se">\n\n</span><span class="s">User question: </span><span class="si">{</span><span class="n">user_prompt</span><span class="si">}</span><span class="se">\n</span><span class="s">Answer:</span><span class="sh">"</span> </code></pre> </div> <p><strong>-3- Let's run it</strong></p> <p>Now run it:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight shell"><code>uv run faq.py </code></pre> </div> <p>You should see output like so:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight plaintext"><code>My first app using the GitHub Copilot SDK! [LOG] Asking the model about shipping information... We offer free standard shipping on all orders over $50. 
Expedited shipping options are available at checkout for an additional fee. </code></pre> </div> <h2> What's next </h2> <p>Check out the <a href="https://github.com/github/copilot-sdk/blob/main/docs/getting-started.md" rel="noopener noreferrer">official docs</a>.</p> githubcopilot python ai programming The JavaScript AI Build-a-thon Season 2 starts March 2! Julia Muiruri Wed, 25 Feb 2026 04:09:06 +0000 https://dev.to/azure/the-javascript-ai-build-a-thon-season-2-starts-march-2-1e92 https://dev.to/azure/the-javascript-ai-build-a-thon-season-2-starts-march-2-1e92 <p>Most applications used by millions of people every single day are powered by JavaScript/TypeScript. But when it comes to AI, most learning resources and code samples assume you're working in Python, leaving you to stitch scattered tutorials together to build AI into your stack.</p> <p>The <strong>JavaScript AI Build-a-thon</strong> is a free, hands-on program designed to close that gap. Over the course of four weeks <strong>(March 2 - March 31, 2026)</strong>, you'll move from running AI 100% on-device (Local AI), to designing multi-service, multi-agentic systems, all in JavaScript/TypeScript and using tools you are already familiar with.<br> The series will culminate in a <strong>hackathon</strong>, where you will create, compete, and turn what you've learnt into working projects you can point to, talk about and extend.</p> <p>Register now at <a href="https://aka.ms/JSAIBuildathon" rel="noopener noreferrer">aka.ms/JSAIBuildathon</a>.</p> <h2> How the program works!
</h2> <p>The program is organized around 2 phases: -</p> <h3> Phase I: Learn &amp; Skill Up Schedule (Mar 2 - 13) </h3> <ul> <li>Self-paced quests that teach core AI patterns,</li> <li>Interactive Expert-led sessions on Microsoft Reactor (Livestreams) and Discord (Office hours &amp; QnA)</li> </ul> <p><a href="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1779ztlhwqxtoxdxq6d3.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1779ztlhwqxtoxdxq6d3.png" alt="JavaScript AI Build-a-thon Roadmap" width="800" height="450"></a></p> <div class="table-wrapper-paragraph"><table> <thead> <tr> <th>Day/Time (PT)</th> <th>Topic</th> <th>Links to join</th> </tr> </thead> <tbody> <tr> <td>Mon 3/2, 8:00 AM PST</td> <td>Local AI Development with Foundry Local</td> <td> <a href="proxy.php?url=https://developer.microsoft.com/en-us/reactor/events/26772/" rel="noopener noreferrer">Livestream</a> <br> <a href="proxy.php?url=https://discord.gg/microsoftfoundry?event=1465380906842853666" rel="noopener noreferrer">Discord Office Hour</a> </td> </tr> <tr> <td>Wed 3/4, 8:00 AM PST</td> <td>End-to-End Model Development on Microsoft Foundry</td> <td> <a href="proxy.php?url=https://developer.microsoft.com/en-us/reactor/events/26773/" rel="noopener noreferrer">Livestream</a> <br> <a href="proxy.php?url=https://discord.gg/microsoftfoundry?event=1470927803888173109" rel="noopener noreferrer">Discord Office Hour</a> </td> </tr> <tr> <td>Fri 3/6, 9:00 AM PST</td> <td>Advanced RAG Deep Dive + Guided Project</td> <td> <a href="proxy.php?url=https://developer.microsoft.com/en-us/reactor/events/26775" rel="noopener noreferrer">Livestream</a> <br> <a 
href="proxy.php?url=https://discord.gg/microsoftfoundry?event=1465381686362509323" rel="noopener noreferrer">Discord Office Hour</a> </td> </tr> <tr> <td>Mon 3/9, 8:00 AM PST</td> <td>Design &amp; Build an Agent E2E with Agent Builder (AITK)</td> <td> <a href="proxy.php?url=https://developer.microsoft.com/en-us/reactor/events/26776/" rel="noopener noreferrer">Livestream</a> <br> <a href="proxy.php?url=https://discord.gg/microsoftfoundry?event=1465382167894036481" rel="noopener noreferrer">Discord Office Hour</a> </td> </tr> <tr> <td>Wed 3/11, 8:00 AM PST</td> <td>Build, Scale &amp; Govern AI Agents + Guided project</td> <td> <a href="proxy.php?url=https://developer.microsoft.com/en-us/reactor/events/26786/" rel="noopener noreferrer">Livestream</a> <br> <a href="proxy.php?url=https://discord.gg/microsoftfoundry?event=1465382908687814840" rel="noopener noreferrer">Discord Office Hour</a> </td> </tr> </tbody> </table></div> <p>The Build-a-thon prioritizes practical learning, so you'll complete <strong>2 guided projects</strong> by the end of this phase:-</p> <p><strong>1. A Local Serverless AI chat with RAG</strong><br> Concepts covered include: -</p> <ul> <li>RAG Architecture</li> <li>RAG Ingestion pipeline</li> <li>Query &amp; Retrieval</li> <li>Response Generation (LLM Chains)</li> </ul> <p><a href="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc733enuiuhxw2cx3jde7.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc733enuiuhxw2cx3jde7.png" alt="Serverless Chat LangChain.js CodeTour" width="800" height="611"></a></p> <p><strong>2. 
A Burger Ordering AI Agent</strong><br> Concepts covered include:</p> <ul> <li>Designing AI Agents</li> <li>Building MCP Tools (Backend API Design)</li> </ul> <p><a href="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl74vmkvwp1c3m1k6g08m.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl74vmkvwp1c3m1k6g08m.png" alt="Contoso Burger Ordering Agent" width="800" height="456"></a></p> <h3> Phase II: Global Hack! (Mar 13 - 31) </h3> <ul> <li>Product demo series to showcase the latest product features that will accelerate your builder experience</li> <li>A global hackathon to apply what you learn in real, working AI solutions</li> </ul> <p>This is where you'll build something that matters: using everything you've learned in the quests, and beyond, to create an AI-powered project that solves a real problem, delights users, or pushes what's possible.</p> <blockquote> <p><strong>The hackathon launches on March 13, 2026.</strong> Full details on registration, submission, judging criteria, award categories, prizes, and the hack phase schedule will be published when the hack goes live. Stay tuned!</p> </blockquote> <p>But here's what we can tell you now:</p> <ul> <li>🏆 <strong>6 award categories</strong> </li> <li>💻 <strong>Product demo showcases</strong> throughout the hack phase to keep you building with the latest tools</li> <li>👥 Teams of up to 4, or solo. Your call.</li> </ul> <h2> Start Now (Join the Community) </h2> <p>Join our community to connect with other participants and experts from Microsoft &amp;
GitHub to support your builder journey.</p> <ul> <li> <strong>Foundry Discord (#js-ai-build-a-thon channel):</strong> <a href="proxy.php?url=https://aka.ms/JSAIonDiscord" rel="noopener noreferrer">Our platform for office hours, live QnA, quick questions, community &amp; expert support</a> </li> <li> <strong>GitHub Discussions:</strong> <a href="proxy.php?url=https://aka.ms/JSAI_Discussions" rel="noopener noreferrer">This is where you'll share ideas, ask questions, find teammates</a> </li> <li> <strong>Social:</strong> Share your progress online using <strong>#JSAIBuildathon</strong> </li> </ul> <p>Register now at <a href="proxy.php?url=https://aka.ms/JSAIBuildathon" rel="noopener noreferrer">aka.ms/JSAIBuildathon</a></p> <p>See you soon!</p> javascript typescript langchain ai Build a Responsive UI through Prompt Driven Development Cynthia Zanoni Tue, 27 Jan 2026 07:00:00 +0000 https://dev.to/azure/build-a-responsive-ui-through-prompt-driven-development-2imc https://dev.to/azure/build-a-responsive-ui-through-prompt-driven-development-2imc <p>This article is part of the <strong><a href="proxy.php?url=https://www.youtube.com/playlist?list=PLj6YeMhvp2S6SxK3u_W5oN5neaZUpYK3O" rel="noopener noreferrer">Prompt Driven Development series from the VS Code YouTube channel</a></strong>. 
It is based on the video <em>Build a Responsive UI through Prompt Driven Development</em> and explains the concepts, tools, and workflows demonstrated when using <em>GitHub Copilot</em> to improve usability, responsiveness, and functionality in a web application.</p> <h2> 🎥 Video Reference </h2> <p><a href="proxy.php?url=http://www.youtube.com/watch?v=jVbBXjOsKzw" rel="noopener noreferrer"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxdpskr382rsfjhbyaaiw.jpg" alt="Build a Responsive UI through Prompt Driven Development" width="800" height="450"></a></p> <p>Building a responsive user interface involves more than adjusting CSS breakpoints. It requires thinking about <strong>layout, navigation, accessibility, and interaction patterns</strong> across different screen sizes.</p> <h2> 🏗️ Understanding the Application and Its Limitations </h2> <p>The application used in the video is a <strong>notes board web app</strong> that allows users to:</p> <ul> <li>Create notes with checklists</li> <li>Organize notes by categories</li> <li>Filter notes using a sidebar</li> </ul> <p>While functional on desktop, the app presents several issues:</p> <ul> <li>The layout does not adapt well to smaller screens</li> <li>Navigation becomes difficult on mobile</li> <li>Search functionality is missing</li> <li>Developer tools are hard to use due to layout constraints</li> </ul> <p>These limitations define the scope of improvements to be addressed using Prompt Driven Development.</p> <h2> 📌 Five Key Learnings from the Video </h2> <h3> 1️⃣ Prompts Should Be Specific and Context Aware </h3> <p>One of the strongest lessons in this video is that <strong>effective prompts are short, specific, and contextual</strong>.</p> <p>Rather than describing every possible change, the prompt focuses on:</p> <ul> <li>Improving mobile usability</li> <li>Adjusting 
layout for small screens</li> <li>Making buttons, forms, and text easier to interact with on touch devices</li> </ul> <p>This clarity allows Copilot to generate relevant changes without unnecessary modifications.</p> <h3> 2️⃣ Custom Chat Modes Enable Persistent Problem Solving </h3> <p>The workflow uses a custom chat mode called <strong>Beast Mode</strong>, designed to keep working until all tasks are completed.</p> <p>In this mode, Copilot:</p> <ul> <li>Analyzes the application</li> <li>Creates a task list</li> <li>Executes changes incrementally</li> <li>Verifies results by running the local server</li> </ul> <p>This demonstrates how custom chat modes can encode <em>behavior</em>, not just instructions, making complex UI changes more manageable.</p> <h3> 3️⃣ Responsive Design Requires Coordinated Changes </h3> <p>Making the UI responsive is not limited to a single layer.</p> <p>The video shows Copilot updating:</p> <ul> <li>HTML templates by adding or validating the viewport meta tag</li> <li>CSS with media queries and grid adjustments</li> <li>JavaScript to support layout and interaction changes</li> </ul> <p>Prompt Driven Development helps coordinate these changes across files while preserving consistency.</p> <h3> 4️⃣ Progressive Enhancement Improves Usability Across Devices </h3> <p>After improving the base responsive layout, the workflow introduces <strong>device specific navigation</strong>.</p> <p>On mobile:</p> <ul> <li>The sidebar is replaced with a category dropdown</li> </ul> <p>On desktop:</p> <ul> <li>The original sidebar remains available</li> </ul> <p>Copilot applies conditional logic and styling so the interface adapts naturally based on screen size, improving usability without duplicating functionality.</p> <h3> 5️⃣ Prompt Driven Development Goes Beyond UI Changes </h3> <p>The video demonstrates that Prompt Driven Development is not limited to layout improvements.</p> <p>Additional enhancements include:</p> <ul> <li>Implementing real time search 
for filtering notes by title or content</li> <li>Displaying user friendly messages when no results match</li> <li>Generating a README file using a reusable custom prompt</li> </ul> <p>This shows how prompts can drive not only UI changes, but also <strong>features, documentation, and developer experience</strong>.</p> <h2> 🛠️ Tools and Features Demonstrated </h2> <p>The workflow combines multiple GitHub Copilot and VS Code capabilities:</p> <ul> <li> <strong>Custom Chat Modes</strong> for persistent execution</li> <li> <strong>Custom Instructions</strong> with accessibility guidance</li> <li> <strong>Agent Mode</strong> for multi step changes</li> <li> <strong>Auto approve commands</strong> to run the local server</li> <li> <strong>Custom Prompts</strong> to generate documentation</li> <li> <strong>Source Control integration</strong> for commits and pushes</li> </ul> <p>Together, these tools enable a cohesive and efficient development loop.</p> <h2> Conclusion </h2> <p>This example demonstrates how <em>Prompt Driven Development</em> can be applied to <strong>user interface design and frontend evolution</strong>, not just backend logic or refactoring.</p> <p>By combining specific prompts, reusable instructions, and custom chat modes, developers can iteratively improve responsiveness, usability, and functionality while maintaining control over changes.</p> <p>Prompt Driven Development turns UI improvements into a structured, repeatable process, helping teams deliver better experiences across devices with less friction.</p> vscode githubcopilot 🐛 Fix a Chat App with Copilot Chat using Prompt Driven Development Cynthia Zanoni Tue, 20 Jan 2026 07:00:00 +0000 https://dev.to/azure/fix-a-chat-app-with-copilot-chat-using-prompt-driven-development-5a6 https://dev.to/azure/fix-a-chat-app-with-copilot-chat-using-prompt-driven-development-5a6 <p>This article is part of the <strong><a href="proxy.php?url=https://www.youtube.com/playlist?list=PLj6YeMhvp2S6SxK3u_W5oN5neaZUpYK3O" 
rel="noopener noreferrer">Prompt Driven Development series from the VS Code YouTube channel</a></strong>. It is based on the video <em>Fix a Chat App with Copilot Chat</em> and explains the concepts, tools, and workflows demonstrated when using <em>GitHub Copilot</em> to diagnose and fix issues in an existing application.</p> <h2> 🎥 Video Reference </h2> <p><a href="proxy.php?url=http://www.youtube.com/watch?v=QLAIww-Vhqo" rel="noopener noreferrer"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzjzcetm4zc2kgua9ds1n.jpg" alt="Fix a Chat App with Copilot Chat using Prompt Driven Development" width="800" height="450"></a></p> <p>Fixing bugs in an existing application often requires context switching between documentation, code, logs, tests, and tooling. As applications grow more complex, even small issues can require navigating a large surface area of code.</p> <h2> 🏗️ Understanding the Application Context </h2> <p>The application demonstrated in the video is a <strong>chat application built using Retrieval Augmented Generation</strong>.</p> <p>At a high level, the app:</p> <ul> <li>Answers questions based on a knowledge base</li> <li>Includes citations in responses</li> <li>Acts as an internal HR style chatbot for a fictional company</li> </ul> <p>Both the frontend and backend are running locally and the project contains multiple directories and integrations. 
Rather than manually inspecting every folder, the workflow begins by <strong>asking Copilot to explain the architecture</strong>.</p> <p>Using <em>Ask mode</em>, Copilot provides a concise overview of:</p> <ul> <li>The RAG architecture</li> <li>Frontend technologies such as React and TypeScript</li> <li>Backend services written in Python</li> <li>Search, ingestion, deployment, and testing components</li> </ul> <p>This step demonstrates how prompts can quickly establish <strong>situational awareness</strong> before any changes are made.</p> <h2> 🎯 Identifying and Fixing an Accessibility Issue </h2> <p>The first issue addressed in the video is an <strong>accessibility problem</strong> reported by users.</p> <p>An accessibility scanning tool is used to analyze the UI, immediately identifying multiple issues. A screenshot of the results is taken and passed directly to <em>GitHub Copilot Chat</em> running in <strong>agent mode</strong>.</p> <p>Key aspects of this workflow include:</p> <ul> <li>Uploading visual context through a screenshot</li> <li>Allowing the agent to read and reason about the image</li> <li>Letting the agent inspect relevant files before making changes</li> </ul> <p>Copilot proposes fixes and presents them as diffs, allowing the developer to review changes before accepting them. Once applied, the application is rebuilt and rescanned, resulting in <strong>zero remaining accessibility issues</strong>.</p> <p>This demonstrates how <em>multimodal prompting</em> can significantly reduce time spent diagnosing UI related problems.</p> <h2> 📌 Five Key Learnings from the Video </h2> <h3> 1️⃣ Start by Asking Questions, Not Making Changes </h3> <p>The workflow begins with <em>Ask mode</em>, not agent mode.</p> <p>Before editing anything, Copilot is used to explain the architecture at a high level. 
This reinforces an important principle: <em>Prompt Driven Development values understanding before execution</em>.</p> <h3> 2️⃣ Visual Context Improves Debugging Accuracy </h3> <p>By uploading a screenshot of accessibility issues, the agent can directly reference what the user sees.</p> <p>This approach reduces ambiguity and enables Copilot to propose targeted fixes rather than generic suggestions.</p> <h3> 3️⃣ Custom Chat Modes Enable Specialized Workflows </h3> <p>The video introduces a custom chat mode called <strong>Fixer</strong>.</p> <p>This mode defines:</p> <ul> <li>A preferred model</li> <li>Allowed tools, including MCP servers</li> <li>Instructions focused on minimal fixes and verification</li> </ul> <p>By encoding these expectations once, future bug fixes become more consistent and require less repeated instruction.</p> <h3> 4️⃣ External Context Can Be Integrated Seamlessly </h3> <p>For more complex issues, the workflow pulls data from <strong>GitHub issues</strong> using an MCP server.</p> <p>Copilot:</p> <ul> <li>Fetches the issue description</li> <li>Locates the relevant code</li> <li>Updates both implementation and tests</li> <li>Verifies changes by running the test suite</li> </ul> <p>This shows how Prompt Driven Development can span <strong>code, issues, and tests</strong> in a single flow.</p> <h3> 5️⃣ Validation Is a First Class Step </h3> <p>Fixes are not considered complete until:</p> <ul> <li>Tests are executed</li> <li>Results are reviewed</li> <li>Behavior is verified</li> </ul> <p>Only after successful validation are changes committed, a branch created, and a pull request opened. 
The workflow mirrors standard engineering practices, with AI accelerating execution rather than bypassing safeguards.</p> <h2> 🛠️ Tools and Features Demonstrated </h2> <p>The video showcases how several GitHub Copilot and VS Code features work together:</p> <ul> <li> <strong>Ask Mode</strong> for exploration and understanding</li> <li> <strong>Agent Mode</strong> for editing, testing, and iteration</li> <li> <strong>Custom Chat Modes</strong> for repeatable fixing strategies</li> <li> <strong>MCP Servers</strong> for GitHub and tool integration</li> <li> <strong>Test Runner Integration</strong> for automated verification</li> <li> <strong>GitHub CLI</strong> for branch creation and pull requests</li> </ul> <p>Together, these features support a cohesive and end to end debugging workflow.</p> <h2> Conclusion </h2> <p>This example demonstrates how <em>Prompt Driven Development</em> can be applied effectively to <strong>debugging and maintenance tasks</strong>, not just feature development.</p> <p>By combining structured prompts, rich context, and continuous validation, developers can fix issues faster while maintaining confidence in the results.</p> <p>Rather than replacing developer judgment, GitHub Copilot acts as a powerful collaborator, helping navigate complexity and reduce repetitive work while keeping engineering standards intact.</p> vscode githubcopilot Refactor an Existing Codebase using Prompt Driven Development Cynthia Zanoni Tue, 13 Jan 2026 07:00:00 +0000 https://dev.to/azure/refactor-an-existing-codebase-using-prompt-driven-development-1gem https://dev.to/azure/refactor-an-existing-codebase-using-prompt-driven-development-1gem <p>This article is part of the <strong>Prompt Driven Development series from the VS Code YouTube channel</strong>. 
It is based on the video <em>Refactor an Existing Codebase using Prompt Driven Development</em> and explains the concepts, decisions, and workflows demonstrated during a real refactoring scenario using <em>Visual Studio Code</em> and <em>GitHub Copilot</em>.</p> <h2> 🎥 Video Reference </h2> <p><a href="proxy.php?url=http://www.youtube.com/watch?v=1EBXoFZO6Kk" rel="noopener noreferrer"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flnnmr5cz41mzcgyn60rx.jpg" alt="Refactor an Existing Codebase using Prompt Driven Development" width="800" height="450"></a></p> <p>Refactoring an existing codebase is often more challenging than building something new. It requires understanding current behavior, preserving functionality, and improving structure without introducing regressions.</p> <h2> 🏗️ Understanding the Refactoring Context </h2> <p>Before any prompting begins, the video emphasizes the importance of <strong>understanding the system as it exists today</strong>.</p> <p>The demonstrated project is an inventory API built with:</p> <ul> <li>Azure Functions using a flexible consumption plan</li> <li>FastAPI with Python</li> <li>Cosmos DB using the NoSQL API</li> </ul> <p>The API exposes endpoints for managing products and categories, including batch operations. Before refactoring, the API is executed locally and exercised through its endpoints to confirm current behavior and establish a baseline.</p> <p>This step ensures that refactoring goals are grounded in a clear understanding of functionality rather than assumptions.</p> <h2> 🎯 Defining the Refactoring Goal </h2> <p>The refactoring goal is explicit and narrowly scoped.</p> <p>The existing codebase contains a CRUD layer responsible for database interaction. 
However, this layer also includes:</p> <ul> <li>Business logic</li> <li>Data normalization</li> <li>Validation</li> <li>Exception handling</li> </ul> <p>The objective is to <strong>separate responsibilities</strong> by introducing a service layer. After refactoring:</p> <ul> <li>The CRUD layer should only handle direct database access</li> <li>Business logic and helper functionality should live in dedicated service files</li> </ul> <p>This clear separation of concerns becomes the foundation for all subsequent prompts.</p> <h2> 📌 Five Key Learnings from the Video </h2> <h3> 1️⃣ Clear Intent Improves Prompt Quality </h3> <p>One of the strongest takeaways is that effective prompts start with <strong>clear intent</strong>.</p> <p>Rather than issuing a vague instruction, the prompt file explicitly defines:</p> <ul> <li>Which directories must be analyzed</li> <li>Which files contain mixed responsibilities</li> <li>What logic should remain in the CRUD layer</li> <li>What logic should move to the service layer</li> </ul> <p>This level of specificity enables the AI to reason about architectural changes rather than making superficial edits.</p> <h3> 2️⃣ Prompt Files Enable Structured Refactoring </h3> <p>The video demonstrates the use of a <strong>prompt file</strong> to manage the refactoring task.</p> <p>The prompt file:</p> <ul> <li>Aggregates all relevant context</li> <li>Defines goals and constraints</li> <li>Acts as a reusable refactoring recipe</li> </ul> <p>By running this prompt in agent mode, the AI can perform multi step operations such as reading files, creating new directories, moving logic, and updating references across the codebase.</p> <h3> 3️⃣ Observability Is Essential During AI Assisted Refactoring </h3> <p>Refactoring is not treated as a passive operation.</p> <p>Throughout the process, the developer actively monitors:</p> <ul> <li>Task lists generated by the agent</li> <li>Checkpoints indicating progress</li> <li>Modified files and newly created 
directories</li> <li>The Problems panel for errors and warnings</li> </ul> <p>This reinforces an important principle: <em>Prompt Driven Development assumes human oversight</em>. AI accelerates the work, but the developer remains responsible for correctness.</p> <h3> 4️⃣ Problems and Errors Are Part of the Workflow </h3> <p>As the refactor progresses, issues such as formatting errors, unused imports, and type warnings appear in the Problems panel.</p> <p>Rather than fixing these manually, the AI is prompted to resolve them, using the Problems context as additional input. This iterative loop continues until blocking issues are resolved and the codebase reaches a stable state.</p> <p>This demonstrates how <strong>tooling feedback can be incorporated directly into prompts</strong>.</p> <h3> 5️⃣ Validation Completes the Refactoring Loop </h3> <p>Once structural changes are complete, validation begins.</p> <p>The video shows:</p> <ul> <li>Reviewing updated routes to confirm they now depend on services instead of CRUD logic</li> <li>Running the API locally</li> <li>Executing create, update, and delete operations</li> <li>Verifying expected responses</li> </ul> <p>This reinforces that AI assisted refactoring must meet the same quality bar as any other change. 
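</p> <p>To make the separation concrete, here is a minimal sketch of the target shape, with hypothetical names and an in-memory dict standing in for Cosmos DB: the CRUD functions only read and write, while the service function owns normalization and validation.</p>

```python
from typing import Dict, Optional

# CRUD layer: direct data access only, no business rules
_db: Dict[str, dict] = {}  # stand-in for the Cosmos DB container

def crud_upsert_product(product: dict) -> None:
    _db[product["id"]] = product

def crud_get_product(product_id: str) -> Optional[dict]:
    return _db.get(product_id)

# Service layer: normalization, validation, and error handling
def service_create_product(raw: dict) -> dict:
    name = raw.get("name", "").strip()
    if not name:
        raise ValueError("product name is required")  # validation lives here
    product = {
        "id": raw["id"],
        "name": name.title(),                  # normalization lives here too
        "price": round(float(raw["price"]), 2),
    }
    crud_upsert_product(product)  # only the CRUD layer touches storage
    return product

created = service_create_product({"id": "p1", "name": "  widget ", "price": "9.999"})
print(created["name"], created["price"])  # -> Widget 10.0
```

<p>After a refactor of this shape, a route handler calls the service function and never touches storage directly, which is exactly the dependency direction the review step confirms.</p> <p>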
Testing and verification remain mandatory.</p> <h2> 🛠️ Supporting Techniques Demonstrated </h2> <p>Several VS Code and Copilot features support this workflow:</p> <ul> <li> <strong>Copilot Instructions</strong> to provide consistent context across prompts</li> <li> <strong>Prompt Files</strong> to manage complex refactoring tasks</li> <li> <strong>Agent Mode</strong> for multi step execution</li> <li> <strong>Problems Context</strong> to guide iterative fixes</li> <li> <strong>Model selection</strong> to evaluate different LLM behaviors</li> </ul> <p>Together, these techniques enable a refactor that is structured, transparent, and repeatable.</p> <h2> Conclusion </h2> <p>This refactoring example shows that <em>Prompt Driven Development</em> can be effectively applied to existing codebases, not just greenfield projects.</p> <p>By treating prompts as structured artifacts and maintaining continuous validation, developers can use AI to accelerate refactoring while preserving control, clarity, and code quality.</p> <p>Prompt Driven Development does not replace engineering discipline. It reinforces it.</p> vscode githubcopilot Introduction to Prompt-Driven Development Cynthia Zanoni Mon, 05 Jan 2026 07:00:00 +0000 https://dev.to/azure/introduction-to-prompt-driven-development-36b0 https://dev.to/azure/introduction-to-prompt-driven-development-36b0 <p>This article is part of the <strong><a href="proxy.php?url=https://www.youtube.com/playlist?list=PLj6YeMhvp2S6SxK3u_W5oN5neaZUpYK3O" rel="noopener noreferrer">Prompt Driven Development series from the VS Code YouTube channel</a></strong>. 
It is based on the video <em>Introduction to Prompt Driven Development</em> and explains the concepts and workflows demonstrated, with a focus on how prompt driven development can be applied in practice using <em>Visual Studio Code and GitHub Copilot.</em></p> <h2> 🎥 Video Reference </h2> <p><a href="proxy.php?url=http://www.youtube.com/watch?v=fzYN_kgl-OM" rel="noopener noreferrer"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnzddk0o4e98orwvtujel.jpg" alt="Introduction to Prompt-Driven Development" width="800" height="450"></a></p> <p>As AI tools become increasingly capable, developers are not only writing code differently but also <strong>rethinking how software is designed and built</strong>.</p> <p>Terms such as <em>vibe coding</em>, <em>prompt engineering</em>, and <em>prompt driven development</em> are often used interchangeably. However, they represent <strong>distinct levels of structure, intent, and repeatability</strong>. Understanding these differences is essential to choosing the right approach for exploration, learning, or production ready systems.</p> <p>This article explains these approaches and then highlights the key technical learnings demonstrated in the video.</p> <h2> 🧠 Coding Styles in the Age of AI </h2> <h3> ✨ Vibe Coding </h3> <p><em>Vibe coding</em> refers to an intuitive and exploratory way of working with AI. Developers rely on natural language prompts and personal intuition, allowing the AI to generate code based on intent rather than explicit structure.</p> <p>This style is <strong>fast, creative, and useful for experimentation</strong>, but it often produces results that are hard to reproduce or maintain. 
Because prompts are informal and rarely documented, consistency becomes a challenge.</p> <h3> 🎯 Prompt Engineering </h3> <p><em>Prompt engineering</em> introduces intention and structure. Developers design prompts carefully to guide the AI toward <strong>specific and predictable outcomes</strong>.</p> <p>This approach improves reliability and reduces randomness. However, prompts are still often treated as <strong>one off inputs</strong>, used once and discarded, rather than as reusable artifacts.</p> <h3> Prompt Driven Development </h3> <p><em>Prompt driven development</em> treats prompts as <strong>first class technical artifacts</strong>.</p> <p>In this approach, prompts are:</p> <ul> <li><strong>Written intentionally</strong></li> <li><strong>Refined and iterated over time</strong></li> <li><strong>Documented and versioned</strong></li> <li><strong>Used to guide workflows, not just outputs</strong></li> </ul> <p>Rather than interacting with AI in isolated moments, developers design processes where prompts support planning, implementation, and iteration in a repeatable way.</p> <h2> 📌 Five Key Learnings from the Video </h2> <h3> 1️⃣ Prompts Are Part of the Codebase </h3> <p>One of the main ideas demonstrated is that prompts should not be ephemeral. When prompts are <strong>saved, documented, and versioned</strong>, they capture architectural intent and decision making, just like code or design documents.</p> <p>Using Markdown files inside the repository makes this process lightweight and easy to share.</p> <h3> 2️⃣ Re Prompting Is an Expected and Valuable Practice </h3> <p>The video clearly shows that the <strong>first prompt is rarely sufficient</strong>.</p> <p>Prompt driven development assumes iteration. Prompts improve through feedback, refinement, and clarification. This process is similar to refactoring code and should be treated with the same discipline.</p> <p><em>Re prompting is not a failure. 
It is part of the workflow.</em></p> <h3> 3️⃣ Documentation Can Drive Implementation </h3> <p>Instead of treating documentation as an afterthought, the workflow demonstrated uses documentation as the <strong>starting point for implementation</strong>.</p> <p>By referencing a structured Markdown document, the AI can scaffold an application more consistently, reducing improvisation and making the process easier to reproduce.</p> <h3> 4️⃣ AI Assisted Development Still Requires Validation </h3> <p>Even with AI generating code, <strong>traditional engineering practices remain essential</strong>.</p> <p>The video reinforces the habit of:</p> <ul> <li>Installing dependencies locally</li> <li>Running and testing the application</li> <li>Reviewing behavior before committing changes</li> </ul> <p>Prompt driven development enhances productivity, but it does not replace testing or validation.</p> <h3> 5️⃣ Context Improves AI Output </h3> <p>Providing richer context leads to better results. The video demonstrates this through:</p> <ul> <li> <strong>Custom Chat Modes</strong>, which define purpose and output expectations</li> <li> <strong>Copilot Vision</strong>, which uses screenshots to reason about UI changes</li> </ul> <p>By narrowing context and constraints, developers can guide AI systems more effectively and reduce ambiguity.</p> <h2> 🛠️ Applying These Ideas in Practice </h2> <p>The workflow shown in the video combines several VS Code features to support prompt driven development:</p> <ul> <li>GitHub Copilot for ideation and implementation</li> <li>Markdown files for prompt documentation</li> <li>Custom Chat Modes for repeatable workflows</li> <li>Copilot Vision for context aware UI changes</li> </ul> <p>Together, these tools help transform AI interactions into a <strong>structured and intentional development process</strong>.</p> <h2> Conclusion </h2> <p><em>Prompt driven development</em> offers a practical way to integrate AI into software engineering without 
sacrificing rigor.</p> <p>By treating prompts as structured, reusable artifacts, developers gain:</p> <ul> <li><strong>Clearer workflows</strong></li> <li><strong>Better documentation</strong></li> <li><strong>More consistent outcomes</strong></li> <li><strong>Easier collaboration</strong></li> </ul> <p>For teams and individuals looking to move beyond ad hoc AI usage, prompt driven development provides a scalable and repeatable model grounded in existing engineering best practices.</p> vscode githubcopilot Host Your Node.js MCP Server on Azure Functions in 1 Simple Step Yohan Lasorsa Tue, 09 Dec 2025 15:44:53 +0000 https://dev.to/azure/host-your-nodejs-mcp-server-on-azure-functions-in-3-simple-steps-3ao8 https://dev.to/azure/host-your-nodejs-mcp-server-on-azure-functions-in-3-simple-steps-3ao8 <p>Building AI agents with the Model Context Protocol (MCP) is powerful, but when it comes to hosting your MCP server in production, you need a solution that's reliable, scalable, and cost-effective. What if you could deploy your regular Node.js MCP server to a serverless platform that handles scaling automatically while you only pay for what you use?</p> <p>Let's explore how Azure Functions now supports hosting MCP servers built with the official Anthropic MCP SDK, giving you serverless scaling with almost no changes in your code.</p> <p>Grab your favorite hot beverage, and let's dive in!</p> <h2> TL;DR key takeaways </h2> <ul> <li>Azure Functions now supports hosting Node.js MCP servers using the official Anthropic SDK</li> <li>Only 1 simple configuration needed: adding <code>host.json</code> file</li> <li>Currently supports HTTP Streaming protocol with stateless servers</li> <li>Serverless hosting means automatic scaling and pay-per-use pricing</li> <li>Deploy with one command using Infrastructure as Code</li> </ul> <h2> What will you learn here? 
</h2> <ul> <li>Understand how MCP servers work on Azure Functions</li> <li>Configure a Node.js MCP server for Azure Functions hosting</li> <li>Test your MCP server locally and with real AI agents</li> <li>Deploy your MCP server with Infrastructure as Code and AZD</li> </ul> <h2> Reference links for everything we use </h2> <ul> <li> <a href="proxy.php?url=https://modelcontextprotocol.io/" rel="noopener noreferrer">Model Context Protocol</a> - Official MCP documentation</li> <li> <a href="proxy.php?url=https://learn.microsoft.com/azure/azure-functions/functions-overview" rel="noopener noreferrer">Azure Functions</a> - Serverless compute platform</li> <li> <a href="proxy.php?url=https://github.com/modelcontextprotocol/typescript-sdk" rel="noopener noreferrer">Anthropic MCP SDK</a> - Official TypeScript SDK</li> <li> <a href="proxy.php?url=https://learn.microsoft.com/azure/developer/azure-developer-cli/overview" rel="noopener noreferrer">Azure Developer CLI</a> - One-command deployment tool</li> <li> <a href="proxy.php?url=https://github.com/Azure-Samples/mcp-agent-langchainjs" rel="noopener noreferrer">Full sample project</a> - Complete burger ordering system with MCP</li> <li> <a href="proxy.php?url=https://github.com/Azure-Samples/mcp-sdk-functions-hosting-node" rel="noopener noreferrer">Simple example</a> - Minimal MCP server starter</li> <li> <a href="proxy.php?url=https://github.com/anthonychu/create-functions-mcp-server" rel="noopener noreferrer">GitHub Copilot prompt helper</a> - Automated setup by Anthony Chu</li> </ul> <h2> Requirements </h2> <ul> <li>Node.js 22 or higher</li> <li> <a href="proxy.php?url=https://azure.microsoft.com/free" rel="noopener noreferrer">Azure account</a> (free signup, or if you're a student, <a href="proxy.php?url=https://azure.microsoft.com/free/students" rel="noopener noreferrer">get free credits here</a>)</li> <li> <a href="proxy.php?url=https://aka.ms/azure-dev/install" rel="noopener noreferrer">Azure Developer CLI</a> (for 
deployment)</li> <li> <a href="proxy.php?url=https://github.com/signup" rel="noopener noreferrer">GitHub account</a> (optional, for using Codespaces)</li> </ul> <h2> What is MCP and why does it matter? </h2> <p>Model Context Protocol is an open standard that enables AI models to securely interact with external tools and data sources. Instead of hardcoding tool integrations, you build an MCP server that exposes capabilities (like browsing a menu, placing orders, or querying a database) as tools that any MCP-compatible AI agent can discover and use. MCP is model-agnostic, meaning it can work with any LLM that supports the protocol, including models from Anthropic, OpenAI, and others. It's also worth noting that MCP supports more than just tool calls, though that's its most common use case.</p> <p><a href="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkujo718hq9m24ouev5zl.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkujo718hq9m24ouev5zl.png" alt="Schema showing MCP interfacing with different tool servers" width="722" height="422"></a></p> <p>The challenge? <strong>Running MCP servers in production requires infrastructure</strong>. You need to handle scaling, monitoring, and costs. That's where Azure Functions comes in.</p> <blockquote> <p><strong>🚨 Free course alert!</strong> If you're new to MCP, check out the <a href="proxy.php?url=https://github.com/microsoft/mcp-for-beginners" rel="noopener noreferrer">MCP for Beginners</a> course to get up to speed quickly.</p> </blockquote> <h2> Why Azure Functions for MCP servers? 
</h2> <p>Azure Functions is a serverless compute platform that's perfect for MCP servers:</p> <ul> <li> <strong>Zero infrastructure management</strong>: No servers to maintain</li> <li> <strong>Automatic scaling</strong>: Handles traffic spikes seamlessly</li> <li> <strong>Cost-effective</strong>: Pay only for actual execution time (with a generous free grant)</li> <li> <strong>Built-in monitoring</strong>: Application Insights integration out of the box</li> <li> <strong>Global distribution</strong>: Deploy to regions worldwide</li> </ul> <p>The new Azure Functions support means you can take your existing Node.js MCP server and deploy it to a production-ready serverless environment with minimal changes. This is an additional option for native Node.js MCP hosting; you can still use the <a href="proxy.php?url=https://learn.microsoft.com/azure/azure-functions/functions-bindings-mcp?pivots=programming-language-typescript" rel="noopener noreferrer">Azure Functions MCP bindings</a> that were available before.</p> <h2> 1 simple step to enable Functions hosting </h2> <p>Let's break down what you need to add to your existing Node.js MCP server to run it on Azure Functions.
I'll use a <a href="proxy.php?url=https://github.com/Azure-Samples/mcp-agent-langchainjs/tree/main/packages/burger-mcp" rel="noopener noreferrer">real-world example</a> from our burger ordering system.</p> <p>If you already have a working Node.js MCP server, you can just follow this to make it compatible with Azure Functions hosting.</p> <h3> Step 1: Add the <code>host.json</code> configuration </h3> <p>Create a <code>host.json</code> file at the root of your Node.js project:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight json"><code><span class="p">{</span><span class="w"> </span><span class="nl">"version"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2.0"</span><span class="p">,</span><span class="w"> </span><span class="nl">"configurationProfile"</span><span class="p">:</span><span class="w"> </span><span class="s2">"mcp-custom-handler"</span><span class="p">,</span><span class="w"> </span><span class="nl">"customHandler"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"description"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"defaultExecutablePath"</span><span class="p">:</span><span class="w"> </span><span class="s2">"node"</span><span class="p">,</span><span class="w"> </span><span class="nl">"arguments"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"lib/server.js"</span><span class="p">]</span><span class="w"> </span><span class="p">},</span><span class="w"> </span><span class="nl">"http"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"DefaultAuthorizationLevel"</span><span class="p">:</span><span class="w"> </span><span class="s2">"anonymous"</span><span class="w"> </span><span class="p">},</span><span class="w"> </span><span 
class="nl">"port"</span><span class="p">:</span><span class="w"> </span><span class="s2">"3000"</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="p">}</span><span class="w"> </span></code></pre> </div> <blockquote> <p><strong>Note:</strong> Adjust the <code>arguments</code> array to point to your compiled server file (e.g., <code>lib/server.js</code> or <code>dist/server.js</code>), depending on your build setup. You can also change the port if needed to match your server configuration.</p> </blockquote> <p>The <code>host.json</code> file holds <a href="proxy.php?url=https://learn.microsoft.com/azure/azure-functions/functions-host-json" rel="noopener noreferrer">metadata configuration</a> for the Functions runtime. The most important part here is the <code>customHandler</code> section. It configures the Azure Functions runtime to run your Node.js MCP server as a <em>custom handler</em>, which allows you to use any HTTP server framework (like Express, Fastify, etc.) without modification (<strong>tip: it can do more than MCP servers!</strong> 😉).</p> <p>There's no step 2 or 3. That's it! 😎</p> <blockquote> <p><strong>Note:</strong> We're not covering the authentication and authorization aspects of Azure Functions here, but you can easily <a href="proxy.php?url=https://learn.microsoft.com/azure/azure-functions/functions-mcp-tutorial?tabs=self-hosted&amp;pivots=programming-language-typescript#enable-built-in-server-authorization-and-authentication" rel="noopener noreferrer">add those later if needed</a>.</p> </blockquote> <h2> Real-world example: Burger MCP Server </h2> <p>Let's look at how this works in practice with a <a href="proxy.php?url=https://github.com/Azure-Samples/mcp-agent-langchainjs/tree/main/packages/burger-mcp" rel="noopener noreferrer">burger ordering MCP server</a>.
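</p> <p>Before walking through it, here is a minimal sketch of what the entry point referenced by <code>host.json</code> (e.g., <code>lib/server.js</code>) boils down to: a plain HTTP server listening on the configured port. This is a hypothetical illustration, not the sample's actual code; the port fallback mirrors the <code>process.env.PORT || 3000</code> pattern used by the burger server shown later.</p>

```typescript
// Hypothetical minimal custom-handler entry point (for illustration only).
// The Functions runtime starts this process and forwards HTTP requests to it,
// so the port must match the "port" value declared in host.json.
import { createServer } from 'node:http';

// Mirror the `process.env.PORT || 3000` fallback used by the burger server
export function resolvePort(env: Record<string, string | undefined>): number {
  return Number(env.PORT ?? 3000);
}

export const server = createServer((request, response) => {
  // Any HTTP framework works here (Express, Fastify, raw http, ...)
  response.writeHead(200, { 'Content-Type': 'application/json' });
  response.end(JSON.stringify({ ok: true, path: request.url }));
});

// To actually serve requests (e.g., when launched by the Functions runtime):
// server.listen(resolvePort(process.env));
```

<p>With that shape in mind, let's get back to the burger example.</p> <p>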
This server exposes 9 tools for AI agents to interact with a burger API:</p> <ul> <li> <code>get_burgers</code> - Browse the menu</li> <li> <code>get_burger_by_id</code> - Get burger details</li> <li> <code>place_order</code> - Place an order</li> <li> <code>get_orders</code> - View order history</li> <li>And more...</li> </ul> <p>Here's the complete server implementation using Express and the MCP SDK:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight typescript"><code><span class="k">import</span> <span class="nx">express</span><span class="p">,</span> <span class="p">{</span> <span class="nx">Request</span><span class="p">,</span> <span class="nx">Response</span> <span class="p">}</span> <span class="k">from</span> <span class="dl">'</span><span class="s1">express</span><span class="dl">'</span><span class="p">;</span> <span class="k">import</span> <span class="p">{</span> <span class="nx">StreamableHTTPServerTransport</span> <span class="p">}</span> <span class="k">from</span> <span class="dl">'</span><span class="s1">@modelcontextprotocol/sdk/server/streamableHttp.js</span><span class="dl">'</span><span class="p">;</span> <span class="k">import</span> <span class="p">{</span> <span class="nx">getMcpServer</span> <span class="p">}</span> <span class="k">from</span> <span class="dl">'</span><span class="s1">./mcp.js</span><span class="dl">'</span><span class="p">;</span> <span class="kd">const</span> <span class="nx">app</span> <span class="o">=</span> <span class="nf">express</span><span class="p">();</span> <span class="nx">app</span><span class="p">.</span><span class="nf">use</span><span class="p">(</span><span class="nx">express</span><span class="p">.</span><span class="nf">json</span><span class="p">());</span> <span class="c1">// Handle all MCP Streamable HTTP requests</span> <span class="nx">app</span><span class="p">.</span><span class="nf">all</span><span class="p">(</span><span class="dl">'</span><span 
class="s1">/mcp</span><span class="dl">'</span><span class="p">,</span> <span class="k">async </span><span class="p">(</span><span class="nx">request</span><span class="p">:</span> <span class="nx">Request</span><span class="p">,</span> <span class="nx">response</span><span class="p">:</span> <span class="nx">Response</span><span class="p">)</span> <span class="o">=&gt;</span> <span class="p">{</span> <span class="kd">const</span> <span class="nx">transport</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">StreamableHTTPServerTransport</span><span class="p">({</span> <span class="na">sessionIdGenerator</span><span class="p">:</span> <span class="kc">undefined</span><span class="p">,</span> <span class="p">});</span> <span class="c1">// Connect the transport to the MCP server</span> <span class="kd">const</span> <span class="nx">server</span> <span class="o">=</span> <span class="nf">getMcpServer</span><span class="p">();</span> <span class="k">await</span> <span class="nx">server</span><span class="p">.</span><span class="nf">connect</span><span class="p">(</span><span class="nx">transport</span><span class="p">);</span> <span class="c1">// Handle the request with the transport</span> <span class="k">await</span> <span class="nx">transport</span><span class="p">.</span><span class="nf">handleRequest</span><span class="p">(</span><span class="nx">request</span><span class="p">,</span> <span class="nx">response</span><span class="p">,</span> <span class="nx">request</span><span class="p">.</span><span class="nx">body</span><span class="p">);</span> <span class="c1">// Clean up when the response is closed</span> <span class="nx">response</span><span class="p">.</span><span class="nf">on</span><span class="p">(</span><span class="dl">'</span><span class="s1">close</span><span class="dl">'</span><span class="p">,</span> <span class="k">async </span><span class="p">()</span> <span class="o">=&gt;</span> <span class="p">{</span> <span 
class="k">await</span> <span class="nx">transport</span><span class="p">.</span><span class="nf">close</span><span class="p">();</span> <span class="k">await</span> <span class="nx">server</span><span class="p">.</span><span class="nf">close</span><span class="p">();</span> <span class="p">});</span> <span class="c1">// Note: error handling not shown for brevity</span> <span class="p">});</span> <span class="c1">// The port configuration</span> <span class="kd">const</span> <span class="nx">PORT</span> <span class="o">=</span> <span class="nx">process</span><span class="p">.</span><span class="nx">env</span><span class="p">.</span><span class="nx">PORT</span> <span class="o">||</span> <span class="mi">3000</span><span class="p">;</span> <span class="nx">app</span><span class="p">.</span><span class="nf">listen</span><span class="p">(</span><span class="nx">PORT</span><span class="p">,</span> <span class="p">()</span> <span class="o">=&gt;</span> <span class="p">{</span> <span class="nx">console</span><span class="p">.</span><span class="nf">log</span><span class="p">(</span><span class="s2">`Burger MCP server listening on port </span><span class="p">${</span><span class="nx">PORT</span><span class="p">}</span><span class="s2">`</span><span class="p">);</span> <span class="p">});</span> </code></pre> </div> <p>The MCP tools are defined using the official SDK:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight typescript"><code><span class="k">import</span> <span class="p">{</span> <span class="nx">McpServer</span> <span class="p">}</span> <span class="k">from</span> <span class="dl">'</span><span class="s1">@modelcontextprotocol/sdk/server/mcp.js</span><span class="dl">'</span><span class="p">;</span> <span class="k">import</span> <span class="p">{</span> <span class="nx">z</span> <span class="p">}</span> <span class="k">from</span> <span class="dl">'</span><span class="s1">zod</span><span class="dl">'</span><span class="p">;</span> <span 
class="k">export</span> <span class="kd">function</span> <span class="nf">getMcpServer</span><span class="p">()</span> <span class="p">{</span> <span class="kd">const</span> <span class="nx">server</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">McpServer</span><span class="p">({</span> <span class="na">name</span><span class="p">:</span> <span class="dl">'</span><span class="s1">burger-mcp</span><span class="dl">'</span><span class="p">,</span> <span class="na">version</span><span class="p">:</span> <span class="dl">'</span><span class="s1">1.0.0</span><span class="dl">'</span><span class="p">,</span> <span class="p">});</span> <span class="nx">server</span><span class="p">.</span><span class="nf">registerTool</span><span class="p">(</span> <span class="dl">'</span><span class="s1">get_burgers</span><span class="dl">'</span><span class="p">,</span> <span class="p">{</span> <span class="na">description</span><span class="p">:</span> <span class="dl">'</span><span class="s1">Get a list of all burgers in the menu</span><span class="dl">'</span> <span class="p">},</span> <span class="k">async </span><span class="p">()</span> <span class="o">=&gt;</span> <span class="p">{</span> <span class="kd">const</span> <span class="nx">response</span> <span class="o">=</span> <span class="k">await</span> <span class="nf">fetch</span><span class="p">(</span><span class="s2">`</span><span class="p">${</span><span class="nx">burgerApiUrl</span><span class="p">}</span><span class="s2">/burgers`</span><span class="p">);</span> <span class="kd">const</span> <span class="nx">burgers</span> <span class="o">=</span> <span class="k">await</span> <span class="nx">response</span><span class="p">.</span><span class="nf">json</span><span class="p">();</span> <span class="k">return</span> <span class="p">{</span> <span class="na">content</span><span class="p">:</span> <span class="p">[{</span> <span class="na">type</span><span class="p">:</span> <span 
class="dl">'</span><span class="s1">text</span><span class="dl">'</span><span class="p">,</span> <span class="na">text</span><span class="p">:</span> <span class="nx">JSON</span><span class="p">.</span><span class="nf">stringify</span><span class="p">(</span><span class="nx">burgers</span><span class="p">,</span> <span class="kc">null</span><span class="p">,</span> <span class="mi">2</span><span class="p">)</span> <span class="p">}]</span> <span class="p">};</span> <span class="p">}</span> <span class="p">);</span> <span class="c1">// ... more tools</span> <span class="k">return</span> <span class="nx">server</span><span class="p">;</span> <span class="p">}</span> </code></pre> </div> <p>As you can see, the tool implementation simply forwards an HTTP request to the burger API and returns the result in the MCP response format. This is a common pattern for MCP tools in enterprise contexts, where they act as wrappers around one or more existing APIs.</p> <h3> Current limitations </h3> <p>Note that this Azure Functions MCP hosting currently has some limitations: <strong>it only supports stateless servers using the HTTP Streaming protocol</strong>. The legacy SSE protocol is not supported as it requires stateful connections, so you'll either have to migrate your client to use HTTP Streaming or use another hosting option, such as containers.</p> <p>For most use cases, HTTP Streaming is the recommended approach anyway as it's more scalable and doesn't require persistent connections.
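</p> <p>The toggle between the two modes is the <code>sessionIdGenerator</code> option shown in the server code above: leaving it <code>undefined</code> makes the transport stateless, while supplying a generator enables session tracking. This is my reading of the MCP SDK's transport options, so double-check against the SDK docs for your version:</p>

```typescript
import { randomUUID } from 'node:crypto';

// Stateless mode (what Azure Functions hosting supports): no session IDs,
// a fresh transport is created for every incoming request.
export const statelessOptions = { sessionIdGenerator: undefined };

// Stateful mode (NOT supported by this hosting model): the transport issues a
// session ID and expects follow-up requests to reach the same instance.
export const statefulOptions = { sessionIdGenerator: () => randomUUID() };
```

<p>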
Stateful MCP servers come with additional complexity and have limited scalability if you need to handle many concurrent connections.</p> <h2> Testing the MCP server locally </h2> <p>First, let's run the MCP server locally and play with it a bit.</p> <p>If you don't want to bother with setting up a local environment, you can use the following link or open it in a new tab to launch a GitHub Codespace:</p> <ul> <li><a href="proxy.php?url=https://codespaces.new/Azure-Samples/mcp-agent-langchainjs?hide_repo_select=true&amp;ref=main&amp;quickstart=true" rel="noopener noreferrer">Create Codespace</a></li> </ul> <p>This will open a VS Code environment in your browser with the repo already cloned and all the tools installed and ready to go. Otherwise you can just <a href="proxy.php?url=https://github.com/Azure-Samples/mcp-agent-langchainjs" rel="noopener noreferrer">clone the repo</a>.</p> <p>Once you have the code ready, open a terminal and run:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight shell"><code><span class="c"># Install dependencies</span> npm <span class="nb">install</span> <span class="c"># Start the burger MCP server and API</span> npm start </code></pre> </div> <p>This will start multiple services locally, including the Burger API and the MCP server, which will be available at <code>http://localhost:3000/mcp</code>.
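</p> <p>Once the services are up, you can also poke at the endpoint directly. The sketch below builds a raw JSON-RPC <code>tools/list</code> request the way Streamable HTTP clients do; the exact headers (and the fact that a real server may require an <code>initialize</code> handshake first) are protocol details I'm assuming from the MCP specification, so prefer the MCP Inspector below for real testing:</p>

```typescript
// Hedged sketch: craft an MCP JSON-RPC request by hand to smoke-test the server.
interface JsonRpcRequest {
  jsonrpc: '2.0';
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

export function buildToolsListRequest(id = 1): JsonRpcRequest {
  return { jsonrpc: '2.0', id, method: 'tools/list' };
}

export async function listTools(url: string): Promise<Response> {
  return fetch(url, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // Streamable HTTP servers may answer with plain JSON or an SSE stream
      Accept: 'application/json, text/event-stream',
    },
    body: JSON.stringify(buildToolsListRequest()),
  });
}

// Example (needs the server from `npm start` running):
// const response = await listTools('http://localhost:3000/mcp');
```

<p>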
This may take a few seconds, wait until you see this message in the terminal:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight plaintext"><code>🚀 All services ready 🚀 </code></pre> </div> <p>We're only interested in the MCP server for now, so let's focus on that.</p> <h3> Using MCP Inspector </h3> <p>The easiest way to test the MCP server is with the MCP Inspector tool:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight shell"><code><span class="nv">$ </span>npx <span class="nt">-y</span> @modelcontextprotocol/inspector </code></pre> </div> <p>Open the URL shown in the console in your browser, then:</p> <ol> <li>Set transport type to <strong>Streamable HTTP</strong> </li> <li>Enter your local server URL: <code>http://localhost:3000/mcp</code> </li> <li>Click <strong>Connect</strong> </li> </ol> <p>After you're connected, go to the <strong>Tools</strong> tab to list available tools. You can then try the <code>get_burgers</code> tool to see the burger menu.</p> <p><a href="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuo3lgw3v80we4v31p57u.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuo3lgw3v80we4v31p57u.png" alt="MCP Inspector Screenshot" width="800" height="299"></a></p> <h3> Using GitHub Copilot (with remote MCP) </h3> <p>Configure GitHub Copilot to use your deployed MCP server by adding this to your project's <code>.vscode/mcp.json</code>:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight json"><code><span class="p">{</span><span class="w"> </span><span class="nl">"servers"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> 
</span><span class="nl">"burger-mcp"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w"> </span><span class="nl">"type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"http"</span><span class="p">,</span><span class="w"> </span><span class="nl">"url"</span><span class="p">:</span><span class="w"> </span><span class="s2">"http://localhost:3000/mcp"</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="p">}</span><span class="w"> </span><span class="p">}</span><span class="w"> </span></code></pre> </div> <p>Click the "Start" button that appears in the JSON file to activate the MCP server connection.</p> <p>Now you can use Copilot in agent mode and ask things like:</p> <ul> <li>"What spicy burgers do you have?"</li> <li>"Place an order for two cheeseburgers"</li> <li>"Show my recent orders"</li> </ul> <p>Copilot will automatically discover and use the MCP tools! 🎉</p> <blockquote> <p><strong>Tip:</strong> If Copilot doesn't call the burger MCP tools, try checking if it's enabled by clicking on the tool icon in the chat input box and ensuring that "burger-mcp" is selected. You can also force tool usage by adding <code>#burger-mcp</code> in your prompt.</p> </blockquote> <h2> (Bonus) Deploying to Azure with Infrastructure as Code </h2> <p>Deploying an application to Azure is usually not the fun part, especially when it involves multiple resources and configurations.<br> With the <a href="proxy.php?url=https://learn.microsoft.com/azure/developer/azure-developer-cli/overview" rel="noopener noreferrer">Azure Developer CLI (AZD)</a>, you can define your entire application infrastructure and deployment process as code, and deploy everything with a single command.</p> <p>If you've used the automated setup with GitHub Copilot, you should already have the necessary files. Our burger example also comes with these files pre-configured.
The MCP server is defined as a service in <code>azure.yaml</code>, and the files under the <code>infra</code> folder define the Azure Functions app and related resources.</p> <p>Here's the relevant part of <code>azure.yaml</code> that defines the burger MCP service:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight yaml"><code><span class="na">name</span><span class="pi">:</span> <span class="s">mcp-agent-langchainjs</span> <span class="na">services</span><span class="pi">:</span> <span class="na">burger-mcp</span><span class="pi">:</span> <span class="na">project</span><span class="pi">:</span> <span class="s">./packages/burger-mcp</span> <span class="na">language</span><span class="pi">:</span> <span class="s">ts</span> <span class="na">host</span><span class="pi">:</span> <span class="s">function</span> </code></pre> </div> <p>While the infrastructure files can look intimidating at first, you don't need to understand all the details to get started. There are tons of templates and examples available to help you get going quickly; the important part is that everything is defined as code, so you can version control it and reuse it.</p> <p>Now let's deploy:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight shell"><code><span class="c"># Login to Azure</span> azd auth login <span class="c"># Provision resources and deploy</span> azd up </code></pre> </div> <p>Pick your preferred Azure region when prompted (if you're not sure, choose <strong>East US2</strong>), and voilà!
In a few minutes, you'll have a fully deployed MCP server running on Azure Functions.</p> <p>Once the deployment is finished, the CLI will show you the URL of the deployed resources, including the MCP server endpoint.</p> <p><a href="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95d7ypphekrsa5v8ecn3.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F95d7ypphekrsa5v8ecn3.png" alt="AZD deployment output for the burger MCP example app" width="800" height="271"></a></p> <h2> Example projects </h2> <p>The burger MCP server is actually part of a larger example project that demonstrates building an AI agent with LangChain.js, which uses the burger MCP server to place orders. If you're interested in the next steps of building an AI agent on top of MCP, this is a great resource as it includes:</p> <ul> <li>AI agent web API using LangChain.js</li> <li>Web app interface built with Lit web components</li> <li>MCP server on Functions (the one we just saw)</li> <li>Burger ordering API (used by the MCP server)</li> <li>Live order visualization</li> <li>Complete Infrastructure as Code, to deploy everything with one command</li> </ul> <p>But if you're only interested in the MCP server part, then you might want to look at this simpler example that you can use as a starting point for your own MCP servers: <a href="proxy.php?url=https://github.com/Azure-Samples/mcp-sdk-functions-hosting-node" rel="noopener noreferrer">mcp-sdk-functions-hosting-node</a> is a server template for a Node.js MCP server using TypeScript and the MCP SDK.</p> <h2> What about the cost?
</h2> <p>Azure Functions Flex Consumption pricing is attractive for MCP servers:</p> <ul> <li> <strong>Free grant</strong>: 1 million requests and 400,000 GB-s execution time per month</li> <li> <strong>After free grant</strong>: Pay only for actual execution time</li> <li> <strong>Automatic scaling</strong>: From zero to hundreds of instances</li> </ul> <p>The free grant is generous enough to allow running a typical MCP server with moderate usage, and all the experimentation you might need. It's easy to configure the scaling limits to control costs as needed, with an option to scale down to zero when idle. This flexibility is why Functions is my personal go-to choice for TypeScript projects on Azure.</p> <h2> Wrap up </h2> <p>Hosting MCP servers on Azure Functions gives you the best of both worlds: the simplicity of serverless infrastructure and the power of the official Anthropic SDK. With just <strong>one simple configuration step</strong>, you can take your existing Node.js MCP server and deploy it to a production-ready, auto-scaling platform.</p> <p>The combination of MCP's standardized protocol and Azure's serverless platform means you can focus on building amazing AI experiences instead of managing infrastructure. Boom. 😎</p> <p>Star the repos ⭐️ if you found this helpful! Try deploying your own MCP server and share your experience in the comments. 
If you run into any issues or have questions, you can reach out for help on the <a href="proxy.php?url=https://aka.ms/foundry/discord" rel="noopener noreferrer">Azure AI community on Discord</a>.</p> webdev javascript ai azure Serverless MCP Agent with LangChain.js v1 — Burgers, Tools, and Traces 🍔 Yohan Lasorsa Tue, 21 Oct 2025 16:09:46 +0000 https://dev.to/azure/serverless-mcp-agent-with-langchainjs-v1-burgers-tools-and-traces-25oo https://dev.to/azure/serverless-mcp-agent-with-langchainjs-v1-burgers-tools-and-traces-25oo <p>AI agents that can actually do stuff (not just chat) are the fun part nowadays, but wiring them cleanly into real APIs, keeping things observable, and shipping them to the cloud can get... messy. So we built a fresh end‑to‑end sample to show how to do it right with the brand new <strong>LangChain.js v1</strong> and <strong>Model Context Protocol (MCP)</strong>. In case you missed it, MCP is a recent open standard that makes it easy for LLM agents to consume tools and APIs, and LangChain.js, a great framework for building GenAI apps and agents, has first-class support for it.</p> <p>This new sample gives you:</p> <ul> <li>A LangChain.js v1 agent that streams its result, along with reasoning + tool steps</li> <li>An MCP server exposing real tools (burger menu + ordering) from a business API</li> <li>A web interface with authentication, session history, and a debug panel (for developers)</li> <li>A production-ready multi-service architecture</li> <li>Serverless deployment on Azure in one command (<code>azd up</code>)</li> </ul> <p><a href="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyd3ybr83tyyogakr7ou.gif" class="article-body-image-wrapper"><img
src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flyd3ybr83tyyogakr7ou.gif" alt="GIF animation of the agent in action" width="760" height="427"></a></p> <p>Yes, it’s a burger ordering system. Who doesn't like burgers? Grab your favorite beverage ☕, and let’s dive in for a quick tour!</p> <h2> TL;DR key takeaways </h2> <ul> <li>New sample: full-stack Node.js AI agent using LangChain.js v1 + MCP tools</li> <li>Architecture: web app → agent API → MCP server → burger API</li> <li>Runs locally with a single <code>npm start</code>, deploys with <code>azd up</code> </li> <li>Uses streaming (NDJSON) with intermediate tool + LLM steps surfaced to the UI</li> <li>Ready to fork, extend, and plug into your own domain / tools</li> </ul> <h2> What will you learn here? </h2> <ul> <li>What this sample is about and its high-level architecture</li> <li>What LangChain.js v1 brings to the table for agents</li> <li>How to deploy and run the sample</li> <li>How MCP tools can expose real-world APIs</li> </ul> <h2> Reference links for everything we use </h2> <ul> <li><a href="proxy.php?url=https://github.com/Azure-Samples/mcp-agent-langchainjs" rel="noopener noreferrer">GitHub repo</a></li> <li><a href="proxy.php?url=https://docs.langchain.com/oss/javascript/langchain/overview" rel="noopener noreferrer">LangChain.js docs</a></li> <li><a href="proxy.php?url=https://modelcontextprotocol.io" rel="noopener noreferrer">Model Context Protocol</a></li> <li><a href="proxy.php?url=https://learn.microsoft.com/azure/developer/azure-developer-cli/" rel="noopener noreferrer">Azure Developer CLI</a></li> <li><a href="proxy.php?url=https://www.npmjs.com/package/@modelcontextprotocol/inspector" rel="noopener noreferrer">MCP Inspector</a></li> </ul> <h2> Use case </h2> <p>You want an AI assistant that can take a natural language request like “Order two spicy burgers 
and show me my pending orders” and:</p> <ul> <li>Understand intent (query menu, then place order)</li> <li>Call the right MCP tools in sequence, calling in turn the necessary APIs</li> <li>Stream progress (LLM tokens + tool steps)</li> <li>Return a clean final answer</li> </ul> <p>Swap “burgers” for “inventory”, “bookings”, “support tickets”, or “IoT devices” and you’ve got a reusable pattern!</p> <h2> Sample overview </h2> <p>Before we play a bit with the sample, let's have a look at the main services implemented here:</p> <div class="table-wrapper-paragraph"><table> <thead> <tr> <th>Service</th> <th>Role</th> <th>Tech</th> </tr> </thead> <tbody> <tr> <td>Agent Web App (<code>agent-webapp</code>)</td> <td>Chat UI + streaming + session history</td> <td>Azure Static Web Apps, Lit web components</td> </tr> <tr> <td>Agent API (<code>agent-api</code>)</td> <td>LangChain.js v1 agent orchestration + auth + history</td> <td>Azure Functions, Node.js</td> </tr> <tr> <td>Burger MCP Server (<code>burger-mcp</code>)</td> <td>Exposes burger API as tools over MCP (Streamable HTTP + SSE)</td> <td>Azure Functions, Express, MCP SDK</td> </tr> <tr> <td>Burger API (<code>burger-api</code>)</td> <td>Business logic: burgers, toppings, orders lifecycle</td> <td>Azure Functions, Cosmos DB</td> </tr> </tbody> </table></div> <p>Here's a simplified view of how they interact:</p> <p><a href="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzs3kcdu4b537q2yx3hl.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fuzs3kcdu4b537q2yx3hl.png" alt="Architecture Diagram" width="800" height="276"></a></p> <p>There are also other supporting components like databases and storage not shown 
here for clarity.</p> <p>For this quickstart we'll only interact with the <strong>Agent Web App</strong> and the <strong>Burger MCP Server</strong>, as they are the main stars of the show here.</p> <h3> LangChain.js v1 agent features </h3> <p>The recent release of LangChain.js v1 is a huge milestone for the JavaScript AI community! It marks a significant shift from experimental tools to a production-ready framework. The new version doubles down on what’s needed to build robust AI applications, with a strong focus on <strong>agents</strong>. This includes first-class support for streaming not just the final output, but also intermediate steps like tool calls and agent reasoning. This makes building transparent and interactive agent experiences (like the one in this sample) much more straightforward.</p> <h2> Quickstart </h2> <h3> Requirements </h3> <ul> <li><a href="proxy.php?url=https://github.com/signup" rel="noopener noreferrer">GitHub account</a></li> <li> <a href="proxy.php?url=https://azure.microsoft.com/free" rel="noopener noreferrer">Azure account</a> (free signup, or if you're a student, <a href="proxy.php?url=https://azure.microsoft.com/free/students" rel="noopener noreferrer">get free credits here</a>)</li> <li><a href="proxy.php?url=https://learn.microsoft.com/azure/developer/azure-developer-cli/install-azd?tabs=winget-windows%2Cbrew-mac%2Cscript-linux&amp;pivots=os-windows" rel="noopener noreferrer">Azure Developer CLI</a></li> </ul> <h3> Deploy and run the sample </h3> <p>We'll use GitHub Codespaces for a quick zero-install setup here, but if you prefer to run it locally, check the <a href="proxy.php?url=https://github.com/Azure-Samples/mcp-agent-langchainjs?tab=readme-ov-file#getting-started" rel="noopener noreferrer">README</a>.</p> <p>Click on the following link or open it in a new tab to launch a Codespace:</p> <ul> <li><a 
href="proxy.php?url=https://codespaces.new/Azure-Samples/mcp-agent-langchainjs?hide_repo_select=true&amp;ref=main&amp;quickstart=true" rel="noopener noreferrer">Create Codespace</a></li> </ul> <p>This will open a VS Code environment in your browser with the repo already cloned and all the tools installed and ready to go.</p> <h4> Provision and deploy to Azure </h4> <p>Open a terminal and run these commands:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight shell"><code><span class="c"># Install dependencies</span> npm <span class="nb">install</span> <span class="c"># Login to Azure</span> azd auth login <span class="c"># Provision and deploy all resources</span> azd up </code></pre> </div> <p>Follow the prompts to select your Azure subscription and region. If you're unsure which one to pick, choose <code>East US 2</code>.<br> The deployment will take about 15 minutes the first time, as it creates all the necessary resources (Functions, Static Web Apps, Cosmos DB, AI Models).</p> <p>If you're curious about what happens under the hood, you can take a look at the <code>main.bicep</code> file in the <code>infra</code> folder, which defines the infrastructure as code for this sample.</p> <h3> Test the MCP server </h3> <p>While the deployment is running, you can run the MCP server and API locally (even in Codespaces) to see how it works. Open another terminal and run:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight shell"><code>npm start </code></pre> </div> <p>This will start all services locally, including the Burger API and the MCP server, which will be available at <code>http://localhost:3000/mcp</code>.
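Under the hood, every call to this MCP endpoint is a JSON-RPC 2.0 message. As a rough sketch of the wire format (the initialization handshake and response parsing are omitted for brevity, and the tool name below is one of the tools listed in the next section):

```python
import json

def make_tools_call(tool_name: str, arguments: dict, request_id: int = 1) -> dict:
    """Build a JSON-RPC 2.0 request for the MCP "tools/call" method."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

payload = make_tools_call("get_burgers", {})
print(json.dumps(payload, indent=2))

# To actually send it, POST this JSON body to http://localhost:3000/mcp with
# Content-Type: application/json and Accept: application/json, text/event-stream;
# the Streamable HTTP transport may answer with plain JSON or an SSE stream.
```

In practice you'd use an MCP client library (as the MCP Inspector does) rather than crafting raw requests, but knowing the wire format helps a lot when debugging.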
This may take a few seconds; wait until you see this message in the terminal:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight plaintext"><code>🚀 All services ready 🚀 </code></pre> </div> <p>When these services are running without Azure resources provisioned, they will use in-memory data instead of Cosmos DB so you can experiment freely with the API and MCP server, though the agent won't be functional as it requires an LLM resource.</p> <h4> MCP tools </h4> <p>The MCP server exposes the following tools, which the agent can use to interact with the burger ordering system:</p> <div class="table-wrapper-paragraph"><table> <thead> <tr> <th>Tool Name</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td><code>get_burgers</code></td> <td>Get a list of all burgers in the menu</td> </tr> <tr> <td><code>get_burger_by_id</code></td> <td>Get a specific burger by its ID</td> </tr> <tr> <td><code>get_toppings</code></td> <td>Get a list of all toppings in the menu</td> </tr> <tr> <td><code>get_topping_by_id</code></td> <td>Get a specific topping by its ID</td> </tr> <tr> <td><code>get_topping_categories</code></td> <td>Get a list of all topping categories</td> </tr> <tr> <td><code>get_orders</code></td> <td>Get a list of all orders in the system</td> </tr> <tr> <td><code>get_order_by_id</code></td> <td>Get a specific order by its ID</td> </tr> <tr> <td><code>place_order</code></td> <td>Place a new order with burgers (requires <code>userId</code>, optional <code>nickname</code>)</td> </tr> <tr> <td><code>delete_order_by_id</code></td> <td>Cancel an order if it has not yet been started (status must be <code>pending</code>, requires <code>userId</code>)</td> </tr> </tbody> </table></div> <p>You can test these tools using the MCP Inspector.
Open another terminal and run:<br> </p> <div class="highlight js-code-highlight"> <pre class="highlight shell"><code>npx <span class="nt">-y</span> @modelcontextprotocol/inspector </code></pre> </div> <p>Then open the URL printed in the terminal in your browser and connect using these settings:</p> <ul> <li> <strong>Transport</strong>: Streamable HTTP</li> <li> <strong>URL</strong>: <a href="proxy.php?url=http://localhost:3000/mcp" rel="noopener noreferrer">http://localhost:3000/mcp</a> </li> <li> <strong>Connection Type</strong>: Via Proxy (should be default)</li> </ul> <p>Click on <strong>Connect</strong>, then try listing the tools first, and run the <code>get_burgers</code> tool to get the menu info.</p> <p><a href="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1x7jghzwa5rvdi32lfa.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo1x7jghzwa5rvdi32lfa.png" alt="MCP Inspector Screenshot" width="800" height="299"></a></p> <h3> Test the Agent Web App </h3> <p>After the deployment is completed, you can run the command <code>npm run env</code> to print the URLs of the deployed services.
Open the Agent Web App URL in your browser (it should look like <code>https://&lt;your-web-app&gt;.azurestaticapps.net</code>).</p> <p>You'll first be greeted by an authentication page; sign in with either your GitHub or Microsoft account, and you should then be able to access the chat interface.</p> <p><a href="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3hhz6xv0g1djbendzrd.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv3hhz6xv0g1djbendzrd.png" alt="Agent chat interface screenshot" width="800" height="429"></a></p> <p>From there, you can start asking any question or use one of the suggested prompts; for example, try asking: <code>Recommend me an extra spicy burger</code>.</p> <p>As the agent processes your request, you'll see the response streaming in real-time, along with the intermediate steps and tool calls. Once the response is complete, you can also unfold the debug panel to see the full reasoning chain and the tools that were invoked:</p> <p><a href="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrk3nr22ypq4jm0qfaum.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrk3nr22ypq4jm0qfaum.png" alt="Intermediate steps debug panel screenshot" width="800" height="556"></a></p> <blockquote> <p><strong>Tip:</strong> Our agent service also sends detailed tracing data using OpenTelemetry.
You can explore these either in Azure Monitor for the deployed service, or locally using an OpenTelemetry collector. We'll cover this in more detail in a future post.</p> </blockquote> <h2> Wrap it up </h2> <p>Congratulations, you just finished spinning up a full-stack serverless AI agent using LangChain.js v1, MCP tools, and Azure’s serverless platform. Now it's your turn to dive into the code and extend it for your use cases! 😎 And don't forget to run <code>azd down</code> once you're done to avoid any unwanted costs.</p> <h2> Going further </h2> <p>This was just a quick introduction to this sample, and you can expect more in-depth posts and tutorials soon.</p> <p>Since we're in the era of AI agents, we've also made sure that this sample can be explored and extended easily with code agents like GitHub Copilot.<br> We even built a custom chat mode to help you discover and understand the codebase faster! Check out the <a href="proxy.php?url=https://github.com/Azure-Samples/mcp-agent-langchainjs/blob/main/docs/copilot.md">Copilot setup guide</a> in the repo to get started.</p> <p>If you like this sample, don't forget to star the repo ⭐️! You can also join us in the <a href="proxy.php?url=https://aka.ms/foundry/discord" rel="noopener noreferrer">Azure AI community Discord</a> to chat and ask any questions.</p> <p>Happy coding and burger ordering! 🍔</p> ai javascript langchain azure What’s in a Name? 
Fuzzy Matching for Real-World Data Renee Noble Fri, 17 Oct 2025 00:47:36 +0000 https://dev.to/azure/whats-in-a-name-fuzzy-matching-for-real-world-data-5b5o https://dev.to/azure/whats-in-a-name-fuzzy-matching-for-real-world-data-5b5o <p><strong><a href="proxy.php?url=https://www.youtube.com/watch?v=-AQBJTt1qR4" rel="noopener noreferrer">🎥 Watch the full PyCon AU 2025 talk here</a></strong></p> <p><iframe width="710" height="399" src="proxy.php?url=https://www.youtube.com/embed/-AQBJTt1qR4"> </iframe> </p> <p>When you work with human-entered data (registrations, surveys, customer forms, you name it!) you soon discover that <strong>people are very creative typists</strong>. Names, schools, companies, and addresses come in with abbreviations, nicknames, missing words, and typos galore.</p> <p>That mess makes it hard to answer even simple questions like: <em>“Do these two records refer to the same person?”</em> or <em>“How many participants came from this organisation?”</em></p> <p>At PyCon AU 2025, I explored how different fuzzy matching techniques, from traditional algorithms to generative AI, can help make sense of that chaos.</p> <h2> The Fuzzy Matching Challenge </h2> <p>String comparison looks straightforward until you meet real-world data. “PLC Sydney” might really be “Presbyterian Ladies’ College Sydney.” “Certain Collage” is obviously a typo for “Certain College” (hopefully). And nicknames like Liz, Lizzy, and Elizabeth might all belong to the same person.</p> <p>That’s where <strong>fuzzy matching</strong> comes in: using a variety of techniques, we can rank how similar non-identical words are and find the most likely match. But the question is, what fuzzy matching algorithms are best suited for matching what types of data? 
And can generative AI play a part in this matching game?</p> <h3> Comparing Algorithmic Approaches </h3> <p>I put six Python libraries to the test:</p> <ul> <li> <strong>TextDistance</strong> and <strong>Python-Levenshtein</strong> – classic edit-distance approaches.</li> <li> <strong>FuzzyWuzzy</strong> and <strong>RapidFuzz</strong> – hybrids that combine multiple distance metrics.</li> <li> <strong>Nicknames</strong> and <strong>PyNameMatcher</strong> – specialised tools for given-name variations.</li> </ul> <p>To test them, I generated around 100 fake student names with nicknames, misspellings, and swapped orderings. Then I measured how accurately each library matched them to their correct counterparts.</p> <p><strong>RapidFuzz</strong> came out ahead, matching almost every record correctly, and doing it fast! The edit-distance methods struggled most with multicultural names where order or character sets varied, and the nickname libraries were strong but less consistent overall.</p> <h3> When Generative AI Shines </h3> <p>Algorithmic fuzzy matching is fast and accurate, but it only looks at characters, not meaning. That’s where I turned to <strong>Azure OpenAI Service</strong> for a different kind of help.</p> <p>By feeding in real school-name data, I found that straight out of the box <strong>GPT-5 was exceptionally good at recognising and correcting school names</strong>, especially when they were abbreviated, misspelled, or included local school nicknames.</p> <p>For example, it could confidently map:</p> <ul> <li>“PLC Syd” → “Presbyterian Ladies’ College Sydney”</li> <li>“Cerdon Collage” → “Cerdon College”</li> <li>“St Cats” → “St Catherine’s School, Waverley”</li> </ul> <p>That level of contextual correction is almost impossible to achieve with pure algorithmic matching unless you maintain a custom dictionary of every possible variation. And who has time for that!</p> <p>The trade-off, of course, is performance. 
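To make the edit-distance idea above concrete, here is a dependency-free sketch using Python's standard-library <code>difflib</code> (the libraries tested in the talk, like RapidFuzz, are much faster and more robust; the names and the 0.6 threshold are made up for illustration). Anything scoring below the threshold is exactly the kind of ambiguous case worth escalating:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Case-insensitive similarity ratio between 0 and 1."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def best_match(query: str, candidates: list[str], threshold: float = 0.6):
    """Return the closest candidate, or None if nothing clears the
    threshold -- those are the cases worth sending to an LLM instead."""
    best = max(candidates, key=lambda c: similarity(query, c))
    return best if similarity(query, best) >= threshold else None

students = ["Elizabeth Wong", "John Smith", "Sally-Anne Wu"]
print(best_match("Lizzy Wong", students))  # Elizabeth Wong
print(best_match("Zzyzx", students))       # None -> escalate or flag for review
```

The threshold is the knob that makes the hybrid approach work: high-scoring matches are accepted cheaply, and only the leftovers need a slower, costlier generative pass.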
Generative models are slower and costlier to run at scale. But when used selectively, just for ambiguous or hard-to-match cases, they can dramatically improve accuracy. And of course, this approach works especially well for names, like schools, that are well documented on the internet – something that doesn’t apply to the names of individual school students.</p> <h3> Can we have the best of both worlds? </h3> <p><strong>Hybrid AI + algorithmic matching!</strong></p> <p>In practice, the best results came from a <strong>hybrid approach</strong>, using traditional fuzzy-matching algorithms for most cases, and bringing in <strong>Azure OpenAI</strong> only when the names got tricky. For example, RapidFuzz could quickly match “Lizzy Wong” to “Elizabeth Wong,” while the generative model was better at reasoning through ambiguous inputs like “Sally-Anne W.” or reversed multicultural name orders. By combining both, I could match almost every student record accurately, keeping the speed of algorithmic methods while adding the contextual understanding of generative AI.</p> <h2> Try It Yourself </h2> <p>🎥 You can watch my full PyCon AU 2025 talk here:<br> <strong><a href="proxy.php?url=https://www.youtube.com/watch?v=-AQBJTt1qR4" rel="noopener noreferrer">What’s in a Name: Fuzzy Matching Techniques for Proper Nouns</a></strong></p> <p>📁 If you’d like to explore this further, you can check out my <strong><a href="proxy.php?url=https://aka.ms/rn-whats-in-a-name" rel="noopener noreferrer">fuzzy matching repo</a>.</strong></p> <p>Take a look at the libraries and tools I mentioned above; they’re easy to install and experiment with in Python. If you’re already using Azure OpenAI, it’s worth testing how a small retrieval-augmented setup might complement your existing matching logic.</p> <h2> Chat to us! </h2> <p>💬 To chat more about AI solutions, you can join the AI Foundry Discord, where advocates like me are chatting about the latest tools all the time. 
</p> <p><a href="proxy.php?url=https://aka.ms/AI-Discord-rn-FM-blog" class="ltag_cta ltag_cta--branded" rel="noopener noreferrer">Join the Azure AI Foundry Discord here</a> </p> <p><strong><em>Good luck on your fuzzy matching adventures!</em></strong></p> python azure ai datascience Ellas en Tecnología mentoring sessions with Microsoft Reactor Cynthia Zanoni Tue, 16 Sep 2025 19:22:41 +0000 https://dev.to/azure/sesiones-de-mentoria-ellas-en-tecnologia-con-microsoft-reactor-39p1 https://dev.to/azure/sesiones-de-mentoria-ellas-en-tecnologia-con-microsoft-reactor-39p1 <p>Ellas en Tecnología is an initiative from <strong>WoMakersCode in collaboration with Microsoft Reactor</strong>, created to promote the <strong>employability, visibility, and personal growth</strong> of women in the technology sector across Latin America.</p> <p>Over <strong>four free virtual sessions</strong>, we will explore practical tools and strategies to strengthen your career, from building CVs with artificial intelligence to developing professional confidence.</p> <p><strong>👉 All events will be streamed live in Spanish. 
<a href="proxy.php?url=https://aka.ms/EllasEnLaTecnologia" rel="noopener noreferrer">Register here</a></strong></p> <h2> 👩‍💻 Learn from the experts </h2> <p>In each session you'll have the opportunity to learn directly from <strong>technology and career experts from Microsoft and NTT Data</strong>, who will share their experiences, practical tips, and strategies for advancing in the job market and in your professional development.</p> <p><a href="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsnyshzdgcd8no5hv4ijk.png" class="article-body-image-wrapper"><img src="proxy.php?url=https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsnyshzdgcd8no5hv4ijk.png" alt="technology and career experts from Microsoft and NTT Data" width="800" height="814"></a></p> <p><strong>A series of live streams to boost your personal and professional growth. <a href="proxy.php?url=https://aka.ms/EllasEnLaTecnologia" rel="noopener noreferrer">Register here</a></strong></p> espanol