Microsoft Agent Framework
https://devblogs.microsoft.com/agent-framework/
The latest news from the Microsoft Agent Framework team for developers

What’s New in Agent Skills: Code Skills, Script Execution, and Approval for Python
Fri, 13 Mar 2026
https://devblogs.microsoft.com/agent-framework/whats-new-in-agent-skills-code-skills-script-execution-and-approval-for-python/

Code-Defined Skills, Script Execution, and Approval for Agent Skills in Python

When we introduced Agent Skills for Microsoft Agent Framework, you could package domain expertise as file-based skill directories and have agents discover and load them on demand. Now, the Python SDK takes skills further — you can define skills entirely in code, let agents execute scripts bundled with skills, and gate script execution behind human approval. These additions give you more flexibility in how you author skills, more power in what agents can do with them, and more control over when agents are allowed to act.

Code-Defined Skills

Until now, every skill started as a directory on the filesystem with a SKILL.md file. That works well for static, shareable knowledge packages, but not every skill fits that mold. Sometimes skill content comes from a database. Sometimes you want skill definitions to live alongside the application code that uses them. And sometimes a resource needs to execute logic at read time rather than serve static text.

Code-defined skills address these scenarios. You create a Skill instance in Python with a name, description, and instruction content — no files required:

from textwrap import dedent
from agent_framework import Skill, SkillResource, SkillsProvider

code_style_skill = Skill(
    name="code-style",
    description="Coding style guidelines and conventions for the team",
    content=dedent("""\
        Use this skill when answering questions about coding style,
        conventions, or best practices for the team.
    """),
    resources=[
        SkillResource(
            name="style-guide",
            content=dedent("""\
                # Team Coding Style Guide
                - Use 4-space indentation (no tabs)
                - Maximum line length: 120 characters
                - Use type annotations on all public functions
            """),
        ),
    ],
)

skills_provider = SkillsProvider(skills=[code_style_skill])

The agent uses code-defined skills exactly like file-based ones — calling load_skill to retrieve instructions and read_skill_resource to fetch resources. From the agent’s perspective, there’s no difference.

Dynamic Resources

Static content is useful, but sometimes you need resources that return fresh data each time they’re read. The @skill.resource decorator registers a function as a resource. Both sync and async functions are supported:

import os
from typing import Any

from agent_framework import Skill

project_info_skill = Skill(
    name="project-info",
    description="Project status and configuration information",
    content="Use this skill for questions about the current project.",
)

@project_info_skill.resource
def environment() -> Any:
    """Get current environment configuration."""
    env = os.environ.get("APP_ENV", "development")
    region = os.environ.get("APP_REGION", "us-east-1")
    return f"Environment: {env}, Region: {region}"

@project_info_skill.resource(name="team-roster", description="Current team members")
def get_team_roster() -> Any:
    """Return the team roster."""
    return "Alice Chen (Tech Lead), Bob Smith (Backend Engineer)"

When the decorator is used without arguments (@skill.resource), the function name becomes the resource name and the docstring becomes the description. Use @skill.resource(name="...", description="...") to set them explicitly. The function is called each time the agent reads the resource, so it can pull up-to-date data from databases, APIs, or environment variables.
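The call-on-every-read semantics can be illustrated outside the framework. The sketch below is not the provider's internals — `DynamicResource` is a hypothetical stand-in showing why a function-backed resource always yields a fresh value:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class DynamicResource:
    """Illustrative stand-in for a resource backed by a function."""
    name: str
    fn: Callable[[], Any]
    reads: int = 0

    def read(self) -> Any:
        # The function runs on every read, so the value is always fresh.
        self.reads += 1
        return self.fn()

ticks = iter(range(10))
resource = DynamicResource(name="tick", fn=lambda: next(ticks))
first, second = resource.read(), resource.read()
```

Each `read` re-invokes the function, which is why the framework's dynamic resources can surface live data instead of a cached snapshot.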

Code-Defined Scripts

Use the @skill.script decorator to register a function as an executable script on a skill. Code-defined scripts run in-process as direct function calls:

from agent_framework import Skill

unit_converter_skill = Skill(
    name="unit-converter",
    description="Convert between common units using a conversion factor",
    content="Use the convert script to perform unit conversions.",
)

@unit_converter_skill.script(name="convert", description="Convert a value: result = value × factor")
def convert_units(value: float, factor: float) -> str:
    """Convert a value using a multiplication factor."""
    import json
    result = round(value * factor, 4)
    return json.dumps({"value": value, "factor": factor, "result": result})

A JSON Schema is automatically created from the function’s typed parameters and presented to the agent, so it knows what arguments the script expects and provides them accordingly when calling run_skill_script.
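Conceptually, that schema generation maps each typed parameter to a JSON type. The sketch below illustrates the idea with the standard library — `schema_from_signature` and the type table are assumptions for illustration, not the framework's actual generator:

```python
import inspect

# Minimal Python-to-JSON-Schema type mapping for the sketch
PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def schema_from_signature(fn) -> dict:
    """Derive a JSON Schema object from a function's typed parameters."""
    props = {}
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {"type": PY_TO_JSON.get(param.annotation, "string")}
    return {"type": "object", "properties": props, "required": list(props)}

def convert_units(value: float, factor: float) -> str: ...

schema = schema_from_signature(convert_units)
```

For `convert_units`, this yields `number` types for both `value` and `factor`, which is enough for the agent to supply well-typed arguments.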

Combining File-Based and Code-Defined Skills

You can mix both approaches in a single SkillsProvider. Pass skill_paths for file-based skills and skills for code-defined ones. If a code-defined skill shares a name with a file-based skill, the file-based version takes precedence:

from pathlib import Path
from agent_framework import Skill, SkillsProvider

my_skill = Skill(
    name="my-code-skill",
    description="A code-defined skill",
    content="Instructions for the skill.",
)

skills_provider = SkillsProvider(
    skill_paths=Path(__file__).parent / "skills",
    skills=[my_skill],
)
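The precedence rule can be pictured as a name-keyed merge in which file-based entries win on clashes. This is a plain-Python illustration of the behavior, not the provider's actual merge code:

```python
def merge_skills(file_based: dict, code_defined: dict) -> dict:
    """Combine skill registries; file-based skills shadow same-named code-defined ones."""
    merged = dict(code_defined)
    merged.update(file_based)  # file-based entries overwrite on name clashes
    return merged

file_skills = {"expense-report": "file-based version"}
code_skills = {"expense-report": "code-defined version", "my-code-skill": "code-defined version"}
merged = merge_skills(file_skills, code_skills)
```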

Script Execution

Skills can include executable scripts that the agent runs via the run_skill_script tool. How a script runs depends on how it was defined:

  • Code-defined scripts (registered via @skill.script) run in-process as direct function calls. No runner is needed.
  • File-based scripts (.py files discovered in skill directories) require a SkillScriptRunner — any callable matching (skill, script, args) -> Any — that you provide to control how the script is executed.

To enable execution of file-based scripts, pass a script_runner to SkillsProvider:

from pathlib import Path
from agent_framework import Skill, SkillScript, SkillsProvider

def my_runner(skill: Skill, script: SkillScript, args: dict | None = None) -> str:
    """Run a file-based script as a subprocess."""
    import subprocess
    import sys

    cmd = [sys.executable, str(Path(skill.path) / script.path)]
    if args:
        for key, value in args.items():
            if value is not None:
                cmd.extend([f"--{key}", str(value)])
    # Execute the command; production code should run this in a sandboxed environment
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
    return result.stdout.strip()

skills_provider = SkillsProvider(
    skill_paths=Path(__file__).parent / "skills",
    script_runner=my_runner,
)

This runner is provided for demonstration purposes only. For production use, implement proper sandboxing, resource limits, input validation, and structured logging.

The runner receives the resolved Skill, SkillScript, and an optional args dictionary. You control the execution environment — how scripts are launched, what permissions they have, and how their output is captured.

Script Approval

When agents can execute scripts, you need a way to keep a human in the loop for sensitive operations. Setting require_script_approval=True on SkillsProvider gates all script execution behind human approval. Instead of executing immediately, the agent pauses and returns approval requests that your application handles:

from agent_framework import Agent, Skill, SkillsProvider

# Create provider with approval enabled
skills_provider = SkillsProvider(
    skills=[my_skill],
    require_script_approval=True,
)

# ... Create an agent with skills_provider as a context provider and start a session
result = await agent.run("Deploy version 2.5.0 to production", session=session)

# Handle approval requests
while result.user_input_requests:
    for request in result.user_input_requests:
        print(f"Script: {request.function_call.name}")
        print(f"Args: {request.function_call.arguments}")

        approval = request.to_function_approval_response(approved=True)
        result = await agent.run(approval, session=session)

When a script is rejected (approved=False), the agent is informed that the user declined and can respond accordingly — explaining the limitation or suggesting an alternative approach.

This pattern gives you the benefits of agent-driven script execution while maintaining the oversight that enterprise environments require.

Use Cases

Data Validation Pipelines

Package your organization’s data quality rules as a skill with validation scripts. When an analyst asks the agent to check a dataset, it loads the skill, runs the validation script against the data, and reports results — all following the same rules every time. With approval enabled, a data steward can review each validation before it executes.

DevOps Runbooks

Turn your team’s operational procedures into skills with executable scripts for common tasks like log retrieval, health checks, or configuration changes. The agent loads the right runbook based on the issue, and the approval mechanism ensures that no deployment or infrastructure change happens without human sign-off.

Dynamic Knowledge from Internal Systems

Use code-defined skills with dynamic resources to surface live information from internal APIs, databases, or configuration systems. An HR agent can pull current policy details from a CMS at read time rather than relying on a static copy that might be stale.

Security Considerations

Script execution introduces additional responsibility. Agent Skills should be treated like any third-party code you bring into your project:

  • Review before use — Read all skill content and scripts before deploying. Verify that a script’s actual behavior matches its stated intent.
  • Sandbox execution — Run file-based scripts in isolated environments. Limit filesystem, network, and system-level access to only what the skill requires.
  • Use approval for sensitive operations — Enable require_script_approval=True for any skill that can produce side effects in production systems.
  • Audit and log — Record which skills are loaded, which scripts are executed, and what arguments are passed to maintain an audit trail.
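The audit-and-log recommendation can be met by wrapping whatever runner you pass to the provider. The sketch below is illustrative — `audited` and the in-memory `audit_trail` are hypothetical names, and a production version would emit structured logs instead of appending to a list:

```python
from types import SimpleNamespace

audit_trail: list[dict] = []  # stand-in for a structured logging sink

def audited(inner_runner):
    """Wrap a script runner so every execution is recorded before it happens."""
    def runner(skill, script, args=None):
        # Record which skill and script ran, and with what arguments.
        audit_trail.append({"skill": skill.name, "script": script.name, "args": args})
        return inner_runner(skill, script, args)
    return runner

# Demonstrate with a fake inner runner and lightweight skill/script objects
fake_runner = audited(lambda skill, script, args=None: "ok")
result = fake_runner(
    SimpleNamespace(name="deploy"),
    SimpleNamespace(name="rollout"),
    {"version": "2.5.0"},
)
```

Because the wrapper has the same `(skill, script, args)` shape as a script runner, it can be passed to `SkillsProvider` in place of the unwrapped callable.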

Get Started

Code-defined skills, script execution, and script approval are available now in the Python agent-framework package. These features give you more ways to author skills, more capability within skills, and the safety controls needed for production deployments.

To learn more and try it out, check out the documentation and examples on GitHub.

We’re always interested in hearing from you. If you have feedback or questions, reach out to us on the GitHub discussion boards. And if you’ve been enjoying Agent Framework, give us a ⭐ on GitHub.

Agent Harness in Agent Framework
Thu, 12 Mar 2026
https://devblogs.microsoft.com/agent-framework/agent-harness-in-agent-framework/

Agent harness is the layer where model reasoning connects to real execution: shell and filesystem access, approval flows, and context management across long-running sessions. With Agent Framework, these patterns can now be built consistently in both Python and .NET.

In this post, we’ll look at three practical building blocks for production agents:

  • Local shell harness for controlled host-side execution
  • Hosted shell harness for managed execution environments
  • Context compaction for keeping long conversations efficient and reliable

Shell and Filesystem Harness

Many agent experiences need to do more than generate text. They need to inspect files, run commands, and work with the surrounding environment in a controlled way. Agent Framework makes it possible to model those capabilities explicitly, with approval patterns where needed.

The following examples show compact harness patterns in both Python and .NET.

Python: Local shell with approvals

import asyncio
import subprocess
from typing import Any

from agent_framework import Agent, Message, tool
from agent_framework.openai import OpenAIResponsesClient


@tool(approval_mode="always_require")
def run_bash(command: str) -> str:
    """Execute a shell command locally and return stdout, stderr, and exit code."""
    result = subprocess.run(
        command,
        shell=True,
        capture_output=True,
        text=True,
        timeout=30,
    )
    parts: list[str] = []
    if result.stdout:
        parts.append(result.stdout)
    if result.stderr:
        parts.append(f"stderr: {result.stderr}")
    parts.append(f"exit_code: {result.returncode}")
    return "\n".join(parts)


async def run_with_approvals(query: str, agent: Agent) -> Any:
    current_input: str | list[Any] = query

    while True:
        result = await agent.run(current_input)
        if not result.user_input_requests:
            return result

        next_input: list[Any] = [query]
        for request in result.user_input_requests:
            print(f"Shell request: {request.function_call.name}")
            print(f"Arguments: {request.function_call.arguments}")
            approved = (await asyncio.to_thread(input, "Approve command? (y/n): ")).strip().lower() == "y"
            next_input.append(Message("assistant", [request]))
            next_input.append(Message("user", [request.to_function_approval_response(approved)]))
            if not approved:
                return "Shell command execution was rejected by user."

        current_input = next_input


async def main() -> None:
    client = OpenAIResponsesClient(
        model_id="<responses-model-id>",
        api_key="<your-openai-api-key>",
    )
    local_shell_tool = client.get_shell_tool(func=run_bash)

    agent = Agent(
        client=client,
        instructions="You are a helpful assistant that can run shell commands.",
        tools=[local_shell_tool],
    )

    result = await run_with_approvals(
        "Use run_bash to execute `python --version` and show only stdout.",
        agent,
    )
    print(result)


if __name__ == "__main__":
    asyncio.run(main())

This pattern keeps execution on the host machine while giving the application a clear approval checkpoint before the command runs.

Security note: For local shell execution, we recommend running this logic in an isolated environment and keeping explicit approval in place before commands are allowed to run.
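One cheap defense-in-depth layer is rejecting commands before they ever reach the approval prompt. The allowlist below is purely illustrative — prefix checks are easy to bypass, so treat this as a complement to sandboxing and approval, not a substitute:

```python
import shlex

# Illustrative allowlist; real policy needs much stricter parsing and sandboxing
ALLOWED_COMMANDS = {"python", "ls", "cat", "git"}

def is_allowed(command: str) -> bool:
    """Coarse pre-filter: allow only known binaries and reject chained commands."""
    try:
        tokens = shlex.split(command)
    except ValueError:
        return False  # unbalanced quotes etc.
    if not tokens or tokens[0] not in ALLOWED_COMMANDS:
        return False
    # Reject shell chaining so a benign prefix can't smuggle a second command
    return ";" not in command and "&&" not in command and "|" not in command
```

A harness would call `is_allowed` inside the tool function and return a refusal message for anything that fails the check.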

Python: Hosted shell in a managed environment

import asyncio

from agent_framework import Agent
from agent_framework.openai import OpenAIResponsesClient


async def main() -> None:
    client = OpenAIResponsesClient(
        model_id="<responses-model-id>",
        api_key="<your-openai-api-key>",
    )
    shell_tool = client.get_shell_tool()

    agent = Agent(
        client=client,
        instructions="You are a helpful assistant that can execute shell commands.",
        tools=shell_tool,
    )

    result = await agent.run("Use a shell command to show the current date and time")
    print(result)

    for message in result.messages:
        shell_calls = [c for c in message.contents if c.type == "shell_tool_call"]
        shell_results = [c for c in message.contents if c.type == "shell_tool_result"]

        if shell_calls:
            print(f"Shell commands: {shell_calls[0].commands}")
        if shell_results and shell_results[0].outputs:
            for output in shell_results[0].outputs:
                if output.stdout:
                    print(f"Stdout: {output.stdout}")
                if output.stderr:
                    print(f"Stderr: {output.stderr}")
                if output.exit_code is not None:
                    print(f"Exit code: {output.exit_code}")


if __name__ == "__main__":
    asyncio.run(main())

Hosted shell is useful when you want the agent to execute commands in a provider-managed environment rather than directly on the local machine.

.NET: Local shell with approvals

using System.ComponentModel;
using System.Diagnostics;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;
using OpenAI;

var apiKey = "<your-openai-api-key>";
var model = "<responses-model-id>";

[Description("Execute a shell command locally and return stdout, stderr and exit code.")]
static string RunBash([Description("Bash command to execute.")] string command)
{
    using Process process = new()
    {
        StartInfo = new ProcessStartInfo
        {
            FileName = "/bin/bash",
            ArgumentList = { "-lc", command },
            RedirectStandardOutput = true,
            RedirectStandardError = true,
            UseShellExecute = false,
        }
    };

    process.Start();

    // Read redirected output before waiting to avoid deadlocking on full pipe buffers
    string stdout = process.StandardOutput.ReadToEnd();
    string stderr = process.StandardError.ReadToEnd();
    process.WaitForExit(30_000);

    return $"stdout:\n{stdout}\nstderr:\n{stderr}\nexit_code:{process.ExitCode}";
}

IChatClient chatClient = new OpenAIClient(apiKey)
    .GetResponsesClient(model)
    .AsIChatClient();

AIAgent agent = chatClient.AsAIAgent(
    name: "LocalShellAgent",
    instructions: "Use tools when needed. Avoid destructive commands.",
    tools: [new ApprovalRequiredAIFunction(AIFunctionFactory.Create(RunBash, name: "run_bash"))]);

AgentSession session = await agent.CreateSessionAsync();
AgentResponse response = await agent.RunAsync("Use run_bash to execute `dotnet --version` and return only stdout.", session);

List<FunctionApprovalRequestContent> approvalRequests = response.Messages
    .SelectMany(m => m.Contents)
    .OfType<FunctionApprovalRequestContent>()
    .ToList();

while (approvalRequests.Count > 0)
{
    List<ChatMessage> approvals = approvalRequests
        .Select(request => new ChatMessage(ChatRole.User, [request.CreateResponse(approved: true)]))
        .ToList();

    response = await agent.RunAsync(approvals, session);
    approvalRequests = response.Messages
        .SelectMany(m => m.Contents)
        .OfType<FunctionApprovalRequestContent>()
        .ToList();
}

Console.WriteLine(response);

Like the Python version, this approach combines local execution with an explicit approval flow so the application stays in control of what actually runs.

Security note: For local shell execution, we recommend running this logic in an isolated environment and keeping explicit approval in place before commands are allowed to run.

.NET: Hosted shell with protocol-level configuration

using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;
using OpenAI;
using OpenAI.Responses;

var apiKey = "<your-openai-api-key>";
var model = "<responses-model-id>";

IChatClient chatClient = new OpenAIClient(apiKey)
    .GetResponsesClient(model)
    .AsIChatClient();

CreateResponseOptions hostedShellOptions = new();
hostedShellOptions.Patch.Set(
    "$.tools"u8,
    BinaryData.FromObjectAsJson(new object[]
    {
        new
        {
            type = "shell",
            environment = new
            {
                type = "container_auto"
            }
        }
    }));

AIAgent agent = chatClient
    .AsBuilder()
    .BuildAIAgent(new ChatClientAgentOptions
    {
        Name = "HostedShellAgent",
        UseProvidedChatClientAsIs = true,
        ChatOptions = new ChatOptions
        {
            Instructions = "Use shell commands to answer precisely.",
            RawRepresentationFactory = _ => hostedShellOptions
        }
    });

AgentResponse response = await agent.RunAsync("Use a shell command to print UTC date/time. Return only command output.");
Console.WriteLine(response);

This makes it possible to target a managed shell environment from .NET today while keeping the rest of the agent flow in the standard Agent Framework programming model.

Context Compaction

Long-running agent sessions accumulate chat history that can exceed a model’s context window. The Agent Framework includes a built-in compaction system that automatically manages conversation history before each model call — keeping agents within their token budget without losing important context (Docs).

Python: In-run compaction on the agent

import asyncio

from agent_framework import Agent, InMemoryHistoryProvider, SlidingWindowStrategy, tool
from agent_framework.openai import OpenAIChatClient


@tool(approval_mode="never_require")
def get_weather(city: str) -> str:
    weather_data = {
        "London": "cloudy, 12°C",
        "Paris": "sunny, 18°C",
        "Tokyo": "rainy, 22°C",
    }
    return weather_data.get(city, f"No data for {city}")


async def main() -> None:
    client = OpenAIChatClient(
        model_id="<chat-model-id>",
        api_key="<your-openai-api-key>",
    )

    agent = Agent(
        client=client,
        instructions="You are a helpful weather assistant.",
        tools=[get_weather],
        context_providers=[InMemoryHistoryProvider()],
        compaction_strategy=SlidingWindowStrategy(keep_last_groups=3),
    )

    session = agent.create_session()
    for query in [
        "What is the weather in London?",
        "How about Paris?",
        "And Tokyo?",
        "Which city is the warmest?",
    ]:
        result = await agent.run(query, session=session)
        print(result.text)


if __name__ == "__main__":
    asyncio.run(main())

This example keeps the most recent conversational context intact while trimming older tool-heavy exchanges that no longer need to be replayed in full.
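The sliding-window idea can be sketched independently of the framework: group the history by user turns and keep only the last N groups. This is an illustration of the strategy, not `SlidingWindowStrategy` itself:

```python
def sliding_window(messages: list[dict], keep_last_groups: int) -> list[dict]:
    """Keep only the last N turn groups; a group starts at each user message."""
    starts = [i for i, m in enumerate(messages) if m["role"] == "user"]
    if len(starts) <= keep_last_groups:
        return messages
    return messages[starts[-keep_last_groups]:]

history = [
    {"role": "user", "content": "What is the weather in London?"},
    {"role": "assistant", "content": "Cloudy, 12°C"},
    {"role": "user", "content": "How about Paris?"},
    {"role": "assistant", "content": "Sunny, 18°C"},
    {"role": "user", "content": "And Tokyo?"},
    {"role": "assistant", "content": "Rainy, 22°C"},
]
trimmed = sliding_window(history, keep_last_groups=2)
```

Keeping whole groups rather than raw message counts matters: truncating mid-turn could strand a tool call without its result.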

.NET: Compaction pipeline with multiple strategies

using System.ComponentModel;
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Compaction;
using Microsoft.Extensions.AI;
using OpenAI;

var apiKey = "<your-openai-api-key>";
var model = "<chat-model-id>";

[Description("Look up the current price of a product by name.")]
static string LookupPrice([Description("The product to look up.")] string productName) =>
    productName.ToUpperInvariant() switch
    {
        "LAPTOP" => "The laptop costs $999.99.",
        "KEYBOARD" => "The keyboard costs $79.99.",
        "MOUSE" => "The mouse costs $29.99.",
        _ => $"No data for {productName}."
    };

IChatClient chatClient = new OpenAIClient(apiKey)
    .GetChatClient(model)
    .AsIChatClient();

PipelineCompactionStrategy compactionPipeline = new(
    new ToolResultCompactionStrategy(CompactionTriggers.MessagesExceed(7)),
    new SlidingWindowCompactionStrategy(CompactionTriggers.TurnsExceed(4)),
    new TruncationCompactionStrategy(CompactionTriggers.GroupsExceed(12)));

AIAgent agent = chatClient
    .AsBuilder()
    .UseAIContextProviders(new CompactionProvider(compactionPipeline))
    .BuildAIAgent(new ChatClientAgentOptions
    {
        Name = "ShoppingAssistant",
        ChatOptions = new ChatOptions
        {
            Instructions = "You are a concise shopping assistant.",
            Tools = [AIFunctionFactory.Create(LookupPrice)]
        },
        ChatHistoryProvider = new InMemoryChatHistoryProvider()
    });

AgentSession session = await agent.CreateSessionAsync();

string[] prompts =
[
    "What's the price of a laptop?",
    "How about a keyboard?",
    "And a mouse?",
    "Which is cheapest?",
    "What was the first product I asked about?"
];

foreach (string prompt in prompts)
{
    Console.WriteLine($"User: {prompt}");
    AgentResponse response = await agent.RunAsync(prompt, session);
    Console.WriteLine($"Agent: {response}\n");

    if (session.TryGetInMemoryChatHistory(out var history))
    {
        Console.WriteLine($"[Stored message count: {history.Count}]\n");
    }
}

By combining multiple compaction strategies, you can keep sessions responsive and cost-aware without giving up continuity.
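Conceptually, a compaction pipeline is just a left-to-right fold over strategies, each taking and returning a message list. The Python sketch below mirrors the idea behind `PipelineCompactionStrategy`; the strategy functions are simplified stand-ins, not the framework's implementations:

```python
from typing import Callable

Strategy = Callable[[list], list]

def pipeline(*strategies: Strategy) -> Strategy:
    """Compose strategies so each runs on the previous one's output."""
    def run(messages: list) -> list:
        for strategy in strategies:
            messages = strategy(messages)
        return messages
    return run

# Simplified stand-ins for tool-result compaction and a sliding window
drop_tool_results: Strategy = lambda msgs: [m for m in msgs if m.get("type") != "tool_result"]
keep_last_4: Strategy = lambda msgs: msgs[-4:]

compact = pipeline(drop_tool_results, keep_last_4)
msgs = [{"type": "text", "i": i} for i in range(5)] + [{"type": "tool_result"}]
out = compact(msgs)
```

Ordering matters: dropping bulky tool results first means the window is spent on messages that still carry conversational signal.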

What’s Next?

These patterns make Agent Framework a stronger foundation for real-world agent systems:

  • Local shell with approvals enables controlled execution on the host.
  • Hosted shell supports execution in managed environments.
  • Compaction strategies help long-running sessions stay within limits while preserving useful context.

Whether you are building an assistant that can inspect a project workspace or a multi-step workflow that needs durable context over time, these capabilities help close the gap between model reasoning and practical execution.

For more information, check out our documentation and examples on GitHub, and install the latest packages from NuGet (.NET) or PyPI (Python).

Give Your Agents Domain Expertise with Agent Skills in Microsoft Agent Framework
Mon, 02 Mar 2026
https://devblogs.microsoft.com/agent-framework/give-your-agents-domain-expertise-with-agent-skills-in-microsoft-agent-framework/

You can now equip your Microsoft Agent Framework agents with portable, reusable skill packages that provide domain expertise on demand — without changing a single line of your agent’s core instructions. With built-in skills providers for both .NET and Python, your agents can discover and load Agent Skills at runtime, pulling in only the context they need, when they need it.

What Are Agent Skills?

Agent Skills is a simple, open format for giving agents new capabilities and expertise. At the core of every skill is a SKILL.md file — a markdown document that describes what the skill does and provides step-by-step instructions for how to do it. Skills can also include optional scripts, reference documents, and other resources the agent can fetch on demand.

A skill directory looks like this:

expense-report/
├── SKILL.md                          # Required — frontmatter + instructions
├── scripts/
│   └── validate.py                   # Executable code agents can run
├── references/
│   └── POLICY_FAQ.md                 # Reference documents loaded on demand
└── assets/
    └── expense-report-template.md    # Templates and static resources

The SKILL.md file contains YAML frontmatter with metadata followed by the skill’s instructions in markdown. Only name and description are required; fields like license, compatibility, and metadata are optional:

---
name: expense-report
description: >-
  File and validate employee expense reports according to company policy.
  Use when asked about expense submissions, reimbursement rules, or spending limits.
license: Apache-2.0                   # Optional
compatibility: Requires python3       # Optional
metadata:                             # Optional
  author: contoso-finance
  version: "2.1"
---

## Instructions

1. Ask the employee for their receipt and expense details...
2. Validate against the policy in references/POLICY_FAQ.md...

Skills are useful when you want to:

  • Package domain expertise — Capture specialized knowledge (expense policies, legal workflows, data analysis pipelines) as reusable packages.
  • Extend agent capabilities — Give agents new abilities without modifying their core instructions.
  • Ensure consistency — Turn multi-step tasks into repeatable, auditable workflows.
  • Enable interoperability — Reuse the same skill across different Agent Skills-compatible products.

Progressive Disclosure: Context-Efficient by Design

One of the key design principles behind Agent Skills is progressive disclosure. Rather than loading everything into the agent’s context upfront, skills are disclosed in three stages:

  1. Advertise (~100 tokens per skill) — Skill names and descriptions are injected into the system prompt so the agent knows what’s available.
  2. Load (< 5,000 tokens recommended) — When a task matches a skill, the agent calls load_skill to retrieve the full SKILL.md instructions.
  3. Read resources (as needed) — The agent calls read_skill_resource to fetch supplementary files (references, templates, assets) only when required.

This pattern keeps the agent’s context window lean while still giving it access to deep domain knowledge on demand — important when you’re working with agents that handle many different domains or when you want to keep token usage under control.
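A back-of-envelope estimate makes the savings concrete. Using the common ~4-characters-per-token rule of thumb (an approximation, not a real tokenizer), a skill advertisement is tiny compared to its full instructions:

```python
def rough_tokens(text: str) -> int:
    """Crude token estimate: roughly 4 characters per token."""
    return max(1, len(text) // 4)

# Stage 1: only the name + description is in context
advert = "expense-report: File and validate employee expense reports."

# Stage 2: the full SKILL.md body, loaded only when a task matches
full_instructions = "## Instructions\n" + "Detailed step with policy references...\n" * 200

advert_cost = rough_tokens(advert)
loaded_cost = rough_tokens(full_instructions)
```

With many skills registered, paying the advertisement cost for all of them while loading full instructions for only the matching one is what keeps the context window lean.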

Creating a Skill

The simplest skill is just a folder with a SKILL.md file. Create a skills directory and add a skill folder inside it:

skills/
└── meeting-notes/
    └── SKILL.md

The SKILL.md file starts with YAML frontmatter (name and description are required) followed by instructions in markdown:

---
name: meeting-notes
description: >-
  Summarize meeting transcripts into structured notes with action items.
  Use when asked to process or summarize meeting recordings or transcripts.
---

## Instructions

1. Extract key discussion points from the transcript.
2. List any decisions that were made.
3. Create a list of action items with owners and due dates.
4. Keep the summary concise — aim for one page or less.

The description field is especially important — the agent uses it to decide when to load the skill, so include both what the skill does and when it should be used.

That’s it. No scripts, no extra files — just a folder and a SKILL.md. You can always add references/, scripts/, and assets/ directories later as your skill grows. You can also use the skill-creator skill to help you generate new skills interactively.
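A loader only needs the `name` and `description` fields to advertise a skill. The stdlib-only sketch below extracts them from frontmatter like the example above — a real implementation would use a proper YAML parser, and `parse_frontmatter` is a hypothetical helper:

```python
def parse_frontmatter(text: str) -> dict[str, str]:
    """Extract top-level frontmatter fields from a SKILL.md string (simplified)."""
    assert text.startswith("---"), "SKILL.md must open with YAML frontmatter"
    _, frontmatter, _body = text.split("---", 2)
    fields: dict[str, str] = {}
    key = None
    for line in frontmatter.splitlines():
        if not line.strip():
            continue
        if not line.startswith(" ") and ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip().lstrip(">-").strip()
        elif key:
            # Continuation line of a folded (>-) block scalar
            fields[key] = (fields[key] + " " + line.strip()).strip()
    return fields

skill_md = """---
name: meeting-notes
description: >-
  Summarize meeting transcripts into structured notes with action items.
---

## Instructions
"""
meta = parse_frontmatter(skill_md)
```

This naive parser ignores nested keys like the optional `metadata` block, but it shows how little machinery the required fields demand.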

Connecting Skills to an Agent

The Agent Framework includes a skills provider that discovers skills from filesystem directories and makes them available to your agent as a context provider. It searches configured paths recursively (up to two levels deep) for SKILL.md files, validates their format and resources, and injects skill names and descriptions into the system prompt so the agent knows what’s available. It also exposes two tools to the agent:

  • load_skill — Retrieves the full SKILL.md instructions when the agent determines a user’s request matches a skill’s domain, giving it detailed step-by-step guidance to address the task.
  • read_skill_resource — Fetches supplementary files (references, templates, assets) bundled with a skill, allowing the agent to pull in additional context only when needed.

Using Skills in .NET

Install the package:

dotnet add package Microsoft.Agents.AI --prerelease
dotnet add package Microsoft.Agents.AI.OpenAI --prerelease
dotnet add package Azure.AI.OpenAI --prerelease
dotnet add package Azure.Identity

Set up the provider and create an agent:

using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Agents.AI;
using OpenAI.Responses;

// Discover skills from the 'skills' directory
var skillsProvider = new FileAgentSkillsProvider(
    skillPath: Path.Combine(AppContext.BaseDirectory, "skills"));

// Create an agent with the skills provider
AIAgent agent = new AzureOpenAIClient(
    new Uri(endpoint), new DefaultAzureCredential())
    .GetResponsesClient(deploymentName)
    .AsAIAgent(new ChatClientAgentOptions
    {
        Name = "SkillsAgent",
        ChatOptions = new()
        {
            Instructions = "You are a helpful assistant.",
        },
        AIContextProviders = [skillsProvider],
    });

// The agent discovers and loads matching skills automatically
AgentResponse response = await agent.RunAsync(
    "Summarize the key points and action items from today's standup meeting.");
Console.WriteLine(response.Text);

Using Skills in Python

Install the package:

pip install agent-framework --pre

Set up the provider and create an agent:

from pathlib import Path
from agent_framework import SkillsProvider
from agent_framework.azure import AzureOpenAIChatClient
from azure.identity.aio import AzureCliCredential

# Discover skills from the 'skills' directory
skills_provider = SkillsProvider(
    skill_paths=Path(__file__).parent / "skills"
)

# Create an agent with the skills provider
agent = AzureOpenAIChatClient(credential=AzureCliCredential()).as_agent(
    name="SkillsAgent",
    instructions="You are a helpful assistant.",
    context_providers=[skills_provider],
)

# The agent discovers and loads matching skills automatically
response = await agent.run(
    "Summarize the key points and action items from today's standup meeting."
)
print(response.text)

Once configured, the agent automatically discovers available skills and uses them when a user’s task matches a skill’s domain. You don’t need to write any routing logic — the agent reads the skill descriptions from the system prompt and decides when to load one.

Use Cases

Here are a few scenarios where Agent Skills can help:

Enterprise Policy Compliance

Package your company’s HR policies, expense rules, or IT security guidelines as skills. An employee-facing agent can load the relevant policy skill when someone asks “Can I expense a co-working space?” and give an accurate, policy-grounded answer — without needing all policies in context at all times.

Customer Support Playbooks

Turn your support team’s troubleshooting guides into skills. When a customer reports an issue, the agent loads the matching playbook and follows the documented steps, ensuring consistent resolution regardless of which agent instance handles the request.

Multi-Team Skill Libraries

Different teams can author and maintain their own skills independently. Point the skills provider at multiple directories to combine them:

.NET

var skillsProvider = new FileAgentSkillsProvider(
    skillPaths: [
        Path.Combine(AppContext.BaseDirectory, "company-skills"),
        Path.Combine(AppContext.BaseDirectory, "team-skills"),
    ]);

Python

skills_provider = SkillsProvider(
    skill_paths=[
        Path(__file__).parent / "company-skills",
        Path(__file__).parent / "team-skills",
    ]
)

Each path can point to an individual skill folder or a parent folder containing multiple skill subdirectories.
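To build intuition for what the provider does with those paths, here is a sketch of multi-path discovery with a two-level depth limit, mirroring the "up to two levels deep" behavior described earlier. This is illustrative only — the real provider also validates skill format and resources.

```python
# A sketch of multi-path skill discovery (not the provider's implementation):
# scan each configured path up to two directory levels deep for SKILL.md.
import tempfile
from pathlib import Path

def discover_skills(paths: list[Path], max_depth: int = 2) -> list[Path]:
    """Return every SKILL.md found at most `max_depth` levels below each path."""
    found = []
    for root in paths:
        for depth in range(max_depth + 1):
            # '*/' * depth expands to the glob for that directory level.
            found.extend(root.glob("*/" * depth + "SKILL.md"))
    return sorted(found)

# Demo with a throwaway layout: two team folders, one skill each.
with tempfile.TemporaryDirectory() as tmp:
    for team in ("company-skills/meeting-notes", "team-skills/expense-policy"):
        skill_dir = Path(tmp) / team
        skill_dir.mkdir(parents=True)
        (skill_dir / "SKILL.md").write_text("---\nname: demo\ndescription: demo\n---\n")
    skills = discover_skills([Path(tmp) / "company-skills", Path(tmp) / "team-skills"])
    print([p.parent.name for p in skills])  # ['meeting-notes', 'expense-policy']
```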

Security

Treat skills like open-source dependencies — only use ones from sources you trust, and review them before adding them to your agent. Skill instructions are injected into the agent’s context and can influence its behavior, so the same diligence you’d apply to a new package applies here.

What’s Next

We’re continuing to build out Agent Skills support in the framework. Here’s what’s coming:

  • Programmatic skills — Create and register agent skills dynamically via API, enabling scenarios where skills are generated or modified at runtime rather than authored as static files.
  • Agent skill execution — Support for agents to execute scripts bundled within skills, extending skills beyond instructions and reference material into active code execution.

Learn More

To learn more and try it out yourself, check out the documentation and working samples:

The post Give Your Agents Domain Expertise with Agent Skills in Microsoft Agent Framework appeared first on Microsoft Agent Framework.

Migrate your Semantic Kernel and AutoGen projects to Microsoft Agent Framework Release Candidate https://devblogs.microsoft.com/agent-framework/migrate-your-semantic-kernel-and-autogen-projects-to-microsoft-agent-framework-release-candidate/ https://devblogs.microsoft.com/agent-framework/migrate-your-semantic-kernel-and-autogen-projects-to-microsoft-agent-framework-release-candidate/#comments Fri, 20 Feb 2026 05:52:57 +0000 https://devblogs.microsoft.com/semantic-kernel/?p=5140 We’re thrilled to announce that Microsoft Agent Framework has reached Release Candidate status for both .NET and Python. Release Candidate is an important milestone on the road to General Availability — it means the API surface is stable, and all features that we intend to release with version 1.0 are complete. Now is the time […]

The post Migrate your Semantic Kernel and AutoGen projects to Microsoft Agent Framework Release Candidate appeared first on Microsoft Agent Framework.

We’re thrilled to announce that Microsoft Agent Framework has reached Release Candidate status for both .NET and Python. Release Candidate is an important milestone on the road to General Availability — it means the API surface is stable, and all features that we intend to release with version 1.0 are complete. Now is the time to move your Semantic Kernel project to Microsoft Agent Framework and give us your feedback before final release. Whether you’re building a single helpful assistant or orchestrating a team of specialized agents, Agent Framework gives you a consistent, multi-language foundation to do it.

What is Microsoft Agent Framework?

Microsoft Agent Framework is a comprehensive, open-source framework for building, orchestrating, and deploying AI agents. It’s the successor to Semantic Kernel and AutoGen, and it provides a unified programming model across .NET and Python with:

  • Simple agent creation — go from zero to a working agent in just a few lines of code
  • Function tools — give agents the ability to call your code with type-safe tool definitions
  • Graph-based workflows — compose agents and functions into sequential, concurrent, handoff, and group chat patterns with streaming, checkpointing, and human-in-the-loop support
  • Multi-provider support — works with Microsoft Foundry, Azure OpenAI, OpenAI, GitHub Copilot, Anthropic Claude, AWS Bedrock, Ollama, and more
  • Interoperability — supports A2A (Agent-to-Agent), AG-UI, and MCP (Model Context Protocol) standards

Migration from Semantic Kernel and AutoGen

If you’ve been building agents with Semantic Kernel or AutoGen, Agent Framework is the natural next step. We’ve published detailed migration guides to help you transition:

Create Your First Agent

Getting started takes just a few lines of code. Here’s how to create a simple agent in both languages.

Python

pip install agent-framework --pre

import asyncio
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential


async def main():
    agent = AzureOpenAIResponsesClient(
        credential=AzureCliCredential(),
    ).as_agent(
        name="HaikuBot",
        instructions="You are an upbeat assistant that writes beautifully.",
    )

    print(await agent.run("Write a haiku about Microsoft Agent Framework."))

if __name__ == "__main__":
    asyncio.run(main())

.NET

dotnet add package Microsoft.Agents.AI.OpenAI --prerelease
dotnet add package Azure.Identity

using System.ClientModel.Primitives;
using Azure.Identity;
using Microsoft.Agents.AI;
using OpenAI;
using OpenAI.Responses;

// Replace <resource> and gpt-4.1 with your Azure OpenAI resource name and deployment name.
var agent = new OpenAIClient(
    new BearerTokenPolicy(new AzureCliCredential(), "https://ai.azure.com/.default"),
    new OpenAIClientOptions() { Endpoint = new Uri("https://<resource>.openai.azure.com/openai/v1") })
    .GetResponsesClient("gpt-4.1")
    .AsAIAgent(name: "HaikuBot", instructions: "You are an upbeat assistant that writes beautifully.");

Console.WriteLine(await agent.RunAsync("Write a haiku about Microsoft Agent Framework."));

That’s it — a working AI agent in a handful of lines. From here you can add function tools, sessions for multi-turn conversations, streaming responses, and more.
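As a next step, a function tool is just a plain Python function passed to the agent. The sketch below uses a hypothetical `get_weather` stub (`WeatherBot` and the canned return value are illustrative); the framework infers the tool schema from the signature and docstring.

```python
# A minimal function tool (a sketch; WeatherBot and the weather text are
# illustrative stand-ins, not real services).

def get_weather(location: str) -> str:
    """Get the current weather for a given location."""
    # A real tool would call a weather service; this stub returns canned text.
    return f"The weather in {location} is sunny with a high of 25C."

# Attaching it when creating the agent (client setup as in the snippet above):
# agent = AzureOpenAIResponsesClient(credential=AzureCliCredential()).as_agent(
#     name="WeatherBot",
#     instructions="Answer weather questions using the provided tool.",
#     tools=[get_weather],
# )

print(get_weather("Seattle"))
```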

Multi-Agent Workflows

Single agents are powerful, but real-world applications often need multiple agents working together. Agent Framework ships with a workflow engine that lets you compose agents into orchestration patterns — sequential, concurrent, handoff, and group chat — all with streaming support built in.

Here’s a sequential workflow where a copywriter agent drafts a tagline and a reviewer agent provides feedback:

Python

pip install agent-framework-orchestrations --pre

import asyncio
from typing import cast

from agent_framework import Message
from agent_framework.azure import AzureOpenAIChatClient
from agent_framework.orchestrations import SequentialBuilder
from azure.identity import AzureCliCredential


async def main() -> None:
    client = AzureOpenAIChatClient(credential=AzureCliCredential())

    writer = client.as_agent(
        instructions="You are a concise copywriter. Provide a single, punchy marketing sentence based on the prompt.",
        name="writer",
    )

    reviewer = client.as_agent(
        instructions="You are a thoughtful reviewer. Give brief feedback on the previous assistant message.",
        name="reviewer",
    )

    # Build sequential workflow: writer -> reviewer
    workflow = SequentialBuilder(participants=[writer, reviewer]).build()

    # Run and collect outputs
    outputs: list[list[Message]] = []
    async for event in workflow.run("Write a tagline for a budget-friendly eBike.", stream=True):
        if event.type == "output":
            outputs.append(cast(list[Message], event.data))

    if outputs:
        for msg in outputs[-1]:
            name = msg.author_name or "user"
            print(f"[{name}]: {msg.text}")


if __name__ == "__main__":
    asyncio.run(main())

.NET

dotnet add package Microsoft.Agents.AI.Workflows --prerelease

using System.ClientModel.Primitives;
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Workflows;
using Microsoft.Extensions.AI;
using OpenAI;

// Replace <resource> and gpt-4.1 with your Azure OpenAI resource name and deployment name.
var chatClient = new OpenAIClient(
    new BearerTokenPolicy(new AzureCliCredential(), "https://ai.azure.com/.default"),
    new OpenAIClientOptions() { Endpoint = new Uri("https://<resource>.openai.azure.com/openai/v1") })
    .GetChatClient("gpt-4.1")
    .AsIChatClient();

ChatClientAgent writer = new(chatClient,
    "You are a concise copywriter. Provide a single, punchy marketing sentence based on the prompt.",
    "writer");

ChatClientAgent reviewer = new(chatClient,
    "You are a thoughtful reviewer. Give brief feedback on the previous assistant message.",
    "reviewer");

// Build sequential workflow: writer -> reviewer
Workflow workflow = AgentWorkflowBuilder.BuildSequential(writer, reviewer);

List<ChatMessage> messages = [new(ChatRole.User, "Write a tagline for a budget-friendly eBike.")];

await using StreamingRun run = await InProcessExecution.RunStreamingAsync(workflow, messages);

await run.TrySendMessageAsync(new TurnToken(emitEvents: true));
await foreach (WorkflowEvent evt in run.WatchStreamAsync())
{
    if (evt is AgentResponseUpdateEvent e)
    {
        Console.Write(e.Update.Text);
    }
}

What’s Next?

This Release Candidate represents an important step toward General Availability. We encourage you to try the framework and share your feedback — your input is invaluable as we finalize the release in the coming weeks.

For more information, check out our documentation and examples on GitHub, and install the latest packages from NuGet (.NET) or PyPI (Python).

From Local Models to Agent Workflows: Building a Deep Research Solution with Microsoft Agent Framework on Microsoft Foundry Local https://devblogs.microsoft.com/agent-framework/from-local-models-to-agent-workflows-building-a-deep-research-solution-with-microsoft-agent-framework-on-microsoft-foundry-local/ https://devblogs.microsoft.com/agent-framework/from-local-models-to-agent-workflows-building-a-deep-research-solution-with-microsoft-agent-framework-on-microsoft-foundry-local/#respond Tue, 10 Feb 2026 11:12:33 +0000 https://devblogs.microsoft.com/semantic-kernel/?p=5119 Introduction: A New Paradigm for AI Application Development In enterprise AI application development, we often face this dilemma: while cloud-based large language models are powerful, issues such as data privacy, network latency, and cost control make many scenarios difficult to implement. Traditional local small models, although lightweight, lack complete development, evaluation, and orchestration frameworks. The […]

The post From Local Models to Agent Workflows: Building a Deep Research Solution with Microsoft Agent Framework on Microsoft Foundry Local appeared first on Microsoft Agent Framework.

Introduction: A New Paradigm for AI Application Development

In enterprise AI application development, we often face this dilemma: while cloud-based large language models are powerful, issues such as data privacy, network latency, and cost control make many scenarios difficult to implement. Traditional local small models, although lightweight, lack complete development, evaluation, and orchestration frameworks.

The combination of Microsoft Foundry Local and Agent Framework (MAF) provides an elegant solution to this dilemma. This article walks you through building a complete Deep Research agent workflow from scratch, covering the entire pipeline: model safety evaluation, workflow orchestration, interactive debugging, and performance optimization.


Why Choose Foundry Local?

Foundry Local is not just a local model runtime, but an extension of Microsoft’s AI ecosystem to the edge:

  • Privacy First: All data and inference processes are completed locally, meeting strict compliance requirements
  • Zero Latency: No network round trips required, suitable for real-time interactive scenarios
  • Cost Control: Avoid cloud API call fees, suitable for high-frequency calling scenarios
  • Rapid Iteration: Local development and debugging, shortening feedback cycles

Combined with the Microsoft Agent Framework, you can build complex agent applications just like using Azure OpenAI.

Example Code:

agent = FoundryLocalClient(model_id="qwen2.5-1.5b-instruct-generic-cpu:4").as_agent(
    name="LocalAgent",
    instructions="""You are an assistant.

Your responsibilities:
- Answering questions and providing professional advice
- Helping users understand concepts
- Offering users different suggestions
""",
)

How to Evaluate an Agent?

Based on the Agent Framework evaluation samples, here are three complementary evaluation methods, with corresponding implementations and configurations in this repository:

  1. Red Teaming (Security and Robustness)

    • Purpose: Use systematic adversarial prompts to cover high-risk content and test the agent’s security boundaries.
    • Method: Execute multiple attack strategies against the target agent, covering risk categories such as violence, hate/unfairness, sexual content, and self-harm.
  2. Self-Reflection (Quality Verification)

    • Purpose: Let the agent perform secondary review of its own output, checking factual consistency, coverage, citation completeness, and answer structure.
    • Method: Add a “reflection round” after task output, where the agent provides self-assessment and improvement suggestions based on fixed dimensions, producing a revised version.
    • Note: this reflection round is omitted in the current example.
  3. Observability (Performance Metrics)

    • Purpose: Measure end-to-end latency, stage-wise time consumption, and tool invocation overhead using metrics and distributed tracing.
    • Method: Enable OpenTelemetry to report workflow execution processes and tool invocations.
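The self-reflection method (item 2 above) can be sketched in a few lines. Here `ask` stands in for any agent call and `REFLECTION_PROMPT` is a hypothetical prompt of my own, not part of MAF; in practice `ask` would wrap `agent.run(...)`.

```python
# A sketch of a reflection round: after the task answer, the same agent is
# asked to critique and revise its output. `ask` stands in for any agent call.

REFLECTION_PROMPT = (
    "Review the draft answer below for factual consistency, coverage, "
    "citation completeness, and structure. Then produce a revised version.\n\n"
    "Question: {question}\n\nDraft answer:\n{draft}"
)

def answer_with_reflection(ask, question: str) -> str:
    draft = ask(question)
    revised = ask(REFLECTION_PROMPT.format(question=question, draft=draft))
    return revised

# Demo with a stub agent; a real `ask` would call the local model.
def stub_agent(prompt: str) -> str:
    return "[revised] final answer" if "Draft answer" in prompt else "first draft"

print(answer_with_reflection(stub_agent, "What is Foundry Local?"))
```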

Complete Development Process: From Security to Production

Step 1: Red Team Evaluation – Securing the Safety Baseline

Before putting any model into production, security evaluation is an essential step. MAF provides out-of-the-box Red Teaming capabilities, combined with Microsoft Foundry to complete Red Team evaluation:

# 01.foundrylocal_maf_evaluation.py
import asyncio
import os

from azure.ai.evaluation.red_team import AttackStrategy, RedTeam, RiskCategory
from azure.identity import AzureCliCredential
from agent_framework_foundry_local import FoundryLocalClient

credential = AzureCliCredential()
agent = FoundryLocalClient(model_id="qwen2.5-1.5b-instruct-generic-cpu:4").as_agent(
    name="LocalAgent",
    instructions="""You are an assistant.

Your responsibilities:
- Answering questions and providing professional advice
- Helping users understand concepts
- Offering users different suggestions
""",
)

def agent_callback(query: str) -> str:
    async def _run():
        return await agent.run(query)
    response = asyncio.get_event_loop().run_until_complete(_run())
    return response.text

red_team = RedTeam(
    azure_ai_project=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=credential,
    risk_categories=[
        RiskCategory.Violence,
        RiskCategory.HateUnfairness,
        RiskCategory.Sexual,
        RiskCategory.SelfHarm,
    ],
    num_objectives=2,
)

results = await red_team.scan(
    target=agent_callback,
    scan_name="Qwen2.5-1.5B-Agent",
    attack_strategies=[
        AttackStrategy.EASY,
        AttackStrategy.MODERATE,
        AttackStrategy.CharacterSpace,
        AttackStrategy.ROT13,
        AttackStrategy.UnicodeConfusable,
        AttackStrategy.CharSwap,
        AttackStrategy.Morse,
        AttackStrategy.Leetspeak,
        AttackStrategy.Url,
        AttackStrategy.Binary,
        AttackStrategy.Compose([AttackStrategy.Base64, AttackStrategy.ROT13]),
    ],
    output_path="Qwen2.5-1.5B-Redteam-Results.json",
)

Evaluation Dimensions:

  • Risk Categories: Violence, hate/unfairness, sexual content, self-harm
  • Attack Strategies: Encoding obfuscation, character substitution, prompt injection, etc.
  • Output Analysis: Generate detailed risk scorecards and response samples

Evaluation results are saved as JSON for traceability and continuous monitoring. This step ensures the model’s robustness when facing malicious inputs.

This is a screenshot after running 01.foundrylocal_maf_evaluation.py. You can improve results by adjusting the prompt.

(Screenshot: red team evaluation results)

Step 2: Deep Research Workflow Design – Multi-Round Iterative Intelligence

The core of Deep Research is the “research-judge-research again” iterative loop. MAF Workflows makes this complex logic clear and maintainable:

(Figure: Deep Research workflow diagram)

Key Components:

  1. Research Agent

    • Equipped with search_web tool for real-time external information retrieval
    • Generates summaries and identifies knowledge gaps in each round
    • Accumulates context to avoid redundant searches
  2. Iteration Controller

    • Evaluates current information completeness
    • Decision-making: Continue deeper vs Generate report
    • Prevents infinite loops (sets maximum rounds)
  3. Final Reporter

    • Integrates findings from all iterations
    • Generates structured reports with citations

Code Implementation (simplified):

from agent_framework import WorkflowBuilder
from agent_framework_foundry_local import FoundryLocalClient

workflow_builder = WorkflowBuilder(
    name="Deep Research Workflow",
    description="Multi-agent deep research workflow with iterative web search"
)

workflow_builder.register_executor(lambda: StartExecutor(state=state), name="start_executor")
workflow_builder.register_executor(lambda: ResearchAgentExecutor(), name="research_executor")
workflow_builder.register_executor(lambda: iteration_control, name="iteration_control")
workflow_builder.register_executor(lambda: FinalReportExecutor(), name="final_report")
workflow_builder.register_executor(lambda: OutputExecutor(), name="output_executor")

workflow_builder.register_agent(
    lambda: FoundryLocalClient(model_id="qwen2.5-1.5b-instruct-generic-cpu:4").as_agent(
        name="research_agent",
        instructions="...",
        tools=search_web,
        default_options={"temperature": 0.7, "max_tokens": 4096},
    ),
    name="research_agent",
)

workflow_builder.add_edge("start_executor", "research_executor")
workflow_builder.add_edge("research_executor", "research_agent")
workflow_builder.add_edge("research_agent", "iteration_control")
workflow_builder.add_edge(
    "iteration_control",
    "research_executor",
    condition=lambda decision: decision.signal == ResearchSignal.CONTINUE,
)
workflow_builder.add_edge(
    "iteration_control",
    "final_report",
    condition=lambda decision: decision.signal == ResearchSignal.COMPLETE,
)
workflow_builder.add_edge("final_report", "final_reporter_agent")
workflow_builder.add_edge("final_reporter_agent", "output_executor")

The beauty of this design lies in:

  • Modularity: Each executor has a single responsibility, easy to test and replace
  • Observability: Inputs and outputs of each node can be tracked
  • Extensibility: Easy to add new tools or decision logic
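The conditional edges shown above route on the iteration controller's decision. A sketch of that decision type (hypothetical — the sample's actual `ResearchSignal` lives in the repo) shows how the loop guard against infinite iteration fits in:

```python
# Sketch of the iteration controller's decision type (hypothetical names):
# an enum plus a round budget, matching the conditional edges above.
from dataclasses import dataclass
from enum import Enum

class ResearchSignal(Enum):
    CONTINUE = "continue"
    COMPLETE = "complete"

@dataclass
class Decision:
    signal: ResearchSignal
    reason: str

def decide(iteration: int, gaps_remaining: int, max_iterations: int = 3) -> Decision:
    """Continue researching while gaps remain and the round budget allows."""
    if gaps_remaining > 0 and iteration < max_iterations:
        return Decision(ResearchSignal.CONTINUE, f"{gaps_remaining} gaps left")
    return Decision(ResearchSignal.COMPLETE, "budget exhausted or no gaps")

# The workflow edge condition then routes on the signal, e.g.:
# condition=lambda decision: decision.signal == ResearchSignal.CONTINUE
print(decide(iteration=1, gaps_remaining=2).signal)  # ResearchSignal.CONTINUE
```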

Step 3: DevUI Interactive Debugging – Making Workflows Visible

Traditional agent debugging is often a “black box” experience. MAF DevUI visualizes the entire execution process:

python 02.foundrylocal_maf_workflow_deep_research_devui.py
# DevUI starts at http://localhost:8093

DevUI Provides:

  • Workflow Topology Diagram: Intuitively see node and edge relationships
  • Step-by-Step Execution: View input, output, and status of each node
  • Real-time Injection: Dynamically modify input parameters to test different scenarios
  • Log Aggregation: Unified view of all agent logs and tool invocations

Debugging Scenario Example:

  • Input: “GPT-5.3-Codex vs Anthropic Claude 4.6”
  • Observe: Evolution of search keywords across 3 rounds by the research agent
  • Verify: Whether the iteration controller’s decision basis is reasonable
  • Check: Whether the final report covers all sub-topics

This interactive experience significantly shortens the time from discovering problems to solving them.

(Screenshot: DevUI workflow view)

Step 4: Performance Evaluation and Optimization – .NET Aspire Integration

In production environments, performance is a dimension that cannot be ignored. MAF’s integration with .NET Aspire provides enterprise-grade observability:

Enable Telemetry:

# Configure OpenTelemetry
export OTLP_ENDPOINT="http://localhost:4317"

With telemetry enabled, the workflow automatically reports:

  • Latency: time consumption of each executor
  • Throughput: concurrent request processing capacity
  • Tool Usage: search_web call frequency and time consumption

Key Metrics:

  • End-to-End Latency: Time from user input to final report
  • Model Inference Time: Response speed of local model
  • Tool Invocation Overhead: Impact of external APIs (such as search)
  • Memory Usage: Context accumulation across multiple iterations
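Before reaching for full distributed tracing, you can capture the first three metrics with a simple timer around each stage. This is a stand-in sketch — OpenTelemetry spans would replace it in production, and the `sleep` calls stand in for the model and tool calls:

```python
# Capturing stage latencies without full tracing: a perf_counter-based timer.
# In production, OpenTelemetry spans replace this; the sleeps are stand-ins.
import time
from contextlib import contextmanager

@contextmanager
def timed(label: str, sink: dict):
    start = time.perf_counter()
    try:
        yield
    finally:
        sink[label] = time.perf_counter() - start

metrics: dict[str, float] = {}
with timed("end_to_end", metrics):
    with timed("model_inference", metrics):
        time.sleep(0.01)   # stands in for the local model call
    with timed("tool_invocation", metrics):
        time.sleep(0.005)  # stands in for search_web

print({k: round(v, 3) for k, v in metrics.items()})
```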

Optimization Strategies:

  • Use smaller models (such as Qwen2.5-1.5B) to balance speed and quality
  • Cache common search results to reduce API calls
  • Limit iteration depth to avoid excessive research
  • Streaming output to improve user experience
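The caching strategy above can be as simple as memoizing the search tool. The sketch below uses `functools.lru_cache` with a stub `search_web` (the real tool would call SerpAPI); note that `lru_cache` requires hashable arguments and holds results for the process lifetime, so a TTL cache may suit long-running services better.

```python
# Caching common search results (one of the strategies above), sketched with
# functools.lru_cache; `search_web` here is a stub for the real tool.
from functools import lru_cache

CALLS = {"count": 0}

@lru_cache(maxsize=256)
def search_web(query: str) -> str:
    """Stub search tool; real calls would hit SerpAPI and cost latency/money."""
    CALLS["count"] += 1
    return f"results for: {query}"

search_web("foundry local benchmarks")
search_web("foundry local benchmarks")  # served from cache, no second call
print(CALLS["count"])  # 1
```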

Through distributed tracing, you can precisely locate bottlenecks and make data-driven optimization decisions.

(Screenshot: distributed tracing view)

Practical Guide: Quick Start

GitHub Repo : https://github.com/microsoft/Agent-Framework-Samples/blob/main/09.Cases/FoundryLocalPipeline/

Environment Setup

# 1. Set environment variables
export FOUNDRYLOCAL_ENDPOINT="http://localhost:8000"
export FOUNDRYLOCAL_MODEL_DEPLOYMENT_NAME="qwen2.5-1.5b-instruct-generic-cpu:4"
export SERPAPI_API_KEY="your_serpapi_key"
export AZURE_AI_PROJECT_ENDPOINT="your_azure_endpoint"
export OTLP_ENDPOINT="http://localhost:4317"

# 2. Azure authentication (for evaluation)
az login

# 3. Install dependencies (example)
pip install "azure-ai-evaluation[redteam]" agent-framework agent-framework-foundry-local

Three-Step Launch

Step 1: Security Evaluation

python 01.foundrylocal_maf_evaluation.py
# View results: Qwen2.5-1.5B-Redteam-Results.json

Step 2: DevUI Mode (Recommended)

python 02.foundrylocal_maf_workflow_deep_research_devui.py
# Open in browser: http://localhost:8093
# Enter research topic, observe iteration process

Step 3: CLI Mode (Production)

python 02.foundrylocal_maf_workflow_deep_research_devui.py --cli
# Directly output final report

Architectural Insights: Evolution from Model to Agent

This case demonstrates three levels of modern AI application development:

  1. Model Layer (Foundation): Foundry Local provides reliable inference capabilities
  2. Agent Layer: ChatAgent + Tools encapsulate business logic
  3. Orchestration Layer: MAF Workflows handle complex processes

Traditional development often stops at model invocation, while MAF allows you to stand at a higher level of abstraction:

  • No more manual loops and state management
  • Automatic handling of tool invocations and result parsing
  • Built-in observability and error handling

This “framework-first” approach is key to moving enterprise AI from POC to production.

Use Cases and Extension Directions

Current Solution Suitable For:

  • Research tasks requiring multi-round information synthesis
  • Enterprise scenarios sensitive to data privacy
  • Cost optimization needs for high-frequency calls
  • Offline or weak network environments

Extension Directions:

  • Multi-Agent Collaboration: Add expert agents (such as data analysts, code generators)
  • Knowledge Base Integration: Combine with vector databases to retrieve private documents
  • Human-in-the-Loop: Add manual review at critical decision points
  • Multimodal Support: Process rich media inputs such as images, PDFs

Conclusion: The Infinite Possibilities of Localized AI

The combination of Microsoft Foundry Local + Agent Framework proves that local small models can also build production-grade intelligent applications. Through this Deep Research case, we see:

  • Security and Control: Red Team evaluation ensures model behavior meets expectations
  • Efficient Orchestration: Workflows make complex logic clear and maintainable
  • Rapid Iteration: DevUI provides instant feedback, shortening development cycles
  • Performance Transparency: Aspire integration makes optimization evidence-based

More importantly, this solution is open and composable. You can:

  • Integrate custom tools (database queries, internal APIs)
  • Deploy to edge devices or private clouds

The future of AI applications lies not only in the cloud, but in the flexible architecture of cloud-edge collaboration. Foundry Local provides enterprises with a practical path, enabling every developer to build agent systems that are both powerful and controllable.


Related Resources:

Build AI Agents with Claude Agent SDK and Microsoft Agent Framework https://devblogs.microsoft.com/agent-framework/build-ai-agents-with-claude-agent-sdk-and-microsoft-agent-framework/ https://devblogs.microsoft.com/agent-framework/build-ai-agents-with-claude-agent-sdk-and-microsoft-agent-framework/#respond Fri, 30 Jan 2026 19:21:49 +0000 https://devblogs.microsoft.com/semantic-kernel/?p=5116 Microsoft Agent Framework now integrates with the Claude Agent SDK, enabling you to build AI agents powered by Claude’s full agentic capabilities. This integration brings together the Agent Framework’s consistent agent abstraction with Claude’s powerful features, including file editing, code execution, function calling, streaming responses, multi-turn conversations, and Model Context Protocol (MCP) server integration — […]

The post Build AI Agents with Claude Agent SDK and Microsoft Agent Framework appeared first on Microsoft Agent Framework.

Microsoft Agent Framework now integrates with the Claude Agent SDK, enabling you to build AI agents powered by Claude’s full agentic capabilities. This integration brings together the Agent Framework’s consistent agent abstraction with Claude’s powerful features, including file editing, code execution, function calling, streaming responses, multi-turn conversations, and Model Context Protocol (MCP) server integration — available in Python.

Why Use Agent Framework with Claude Agent SDK?

You can use the Claude Agent SDK on its own to build agents. So why use it through Agent Framework? Here are the key reasons:

  • Consistent agent abstraction — Claude agents implement the same BaseAgent interface as every other agent type in the framework. You can swap providers or combine them without restructuring your code.
  • Multi-agent workflows — Compose Claude agents with other agents (Azure OpenAI, OpenAI, GitHub Copilot, and more) in sequential, concurrent, handoff, and group chat workflows using built-in orchestrators.
  • Ecosystem integration — Access the full Agent Framework ecosystem: declarative agent definitions, A2A protocol support, and consistent patterns for function tools, sessions, and streaming across all providers.

In short, Agent Framework lets you treat Claude as one building block in a larger agentic system rather than a standalone tool.

Install the Claude Agent SDK Integration

Python

pip install agent-framework-claude --pre

Create a Claude Agent

Getting started is straightforward. Create a ClaudeAgent and start interacting with it using the async context manager pattern.

Python

from agent_framework_claude import ClaudeAgent

async def main():
    async with ClaudeAgent(
        instructions="You are a helpful assistant.",
    ) as agent:
        response = await agent.run("What is Microsoft Agent Framework?")
        print(response.text)

Use Built-in Tools

Claude Agent SDK provides access to powerful built-in tools for file operations, shell commands, and more. Simply pass tool names as strings to enable them.

Python

from agent_framework_claude import ClaudeAgent

async def main():
    async with ClaudeAgent(
        instructions="You are a helpful coding assistant.",
        tools=["Read", "Write", "Bash", "Glob"],
    ) as agent:
        response = await agent.run("List all Python files in the current directory")
        print(response.text)

Add Function Tools

Extend your agent with custom function tools to give it domain-specific capabilities.

Python

from typing import Annotated
from pydantic import Field
from agent_framework_claude import ClaudeAgent

def get_weather(
    location: Annotated[str, Field(description="The location to get the weather for.")],
) -> str:
    """Get the weather for a given location."""
    return f"The weather in {location} is sunny with a high of 25C."

async def main():
    async with ClaudeAgent(
        instructions="You are a helpful weather agent.",
        tools=[get_weather],
    ) as agent:
        response = await agent.run("What's the weather like in Seattle?")
        print(response.text)

Stream Responses

For a better user experience, you can stream responses as they are generated instead of waiting for the complete result.

Python

from agent_framework_claude import ClaudeAgent

async def main():
    async with ClaudeAgent(
        instructions="You are a helpful assistant.",
    ) as agent:
        print("Agent: ", end="", flush=True)
        async for chunk in agent.run_stream("Tell me a short story."):
            if chunk.text:
                print(chunk.text, end="", flush=True)
        print()

Multi-Turn Conversations

Maintain conversation context across multiple interactions using threads. The Claude Agent SDK automatically manages session resumption to preserve context.

Python

from agent_framework_claude import ClaudeAgent

async def main():
    async with ClaudeAgent(
        instructions="You are a helpful assistant. Keep your answers short.",
    ) as agent:
        thread = agent.get_new_thread()

        # First turn
        await agent.run("My name is Alice.", thread=thread)

        # Second turn - agent remembers the context
        response = await agent.run("What is my name?", thread=thread)
        print(response.text)  # Should mention "Alice"
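Conceptually, a thread is an accumulating message list that is replayed on every turn. Here is a plain-Python sketch of that idea (`ToyThread`, `toy_model`, and `run` are hypothetical stand-ins, not the SDK's implementation):

```python
class ToyThread:
    """Accumulates the conversation so each turn sees prior context."""
    def __init__(self):
        self.messages: list[dict[str, str]] = []

def toy_model(messages):
    # Stand-in "model": answers the name question by scanning history.
    for m in messages:
        if "My name is" in m["content"]:
            name = m["content"].split("My name is")[-1].strip(" .")
            if messages[-1]["content"].startswith("What is my name"):
                return f"Your name is {name}."
    return "OK."

def run(prompt: str, thread: ToyThread) -> str:
    thread.messages.append({"role": "user", "content": prompt})
    reply = toy_model(thread.messages)
    thread.messages.append({"role": "assistant", "content": reply})
    return reply
```

The second turn "remembers" Alice only because the whole message list is passed back in; the real SDK automates exactly this bookkeeping via session resumption.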

Configure Permission Modes

Control how the agent handles permission requests for file operations and command execution using permission modes.

Python

from agent_framework_claude import ClaudeAgent

async def main():
    async with ClaudeAgent(
        instructions="You are a coding assistant that can edit files.",
        tools=["Read", "Write", "Bash"],
        default_options={
            "permission_mode": "acceptEdits",  # Auto-accept file edits
        },
    ) as agent:
        response = await agent.run("Create a hello.py file that prints 'Hello, World!'")
        print(response.text)

Connect MCP Servers

Claude agents support connecting to external MCP servers, giving the agent access to additional tools and data sources.

Python

from agent_framework_claude import ClaudeAgent

async def main():
    async with ClaudeAgent(
        instructions="You are a helpful assistant with access to the filesystem.",
        default_options={
            "mcp_servers": {
                "filesystem": {
                    "command": "npx",
                    "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
                },
            },
        },
    ) as agent:
        response = await agent.run("List all files in the current directory using MCP")
        print(response.text)

Use Claude in a Multi-Agent Workflow

One of the key benefits of using Agent Framework is the ability to combine Claude with other agents in a multi-agent workflow. In this example, an Azure OpenAI agent drafts a marketing tagline and a Claude agent reviews it — all orchestrated as a sequential pipeline.

Python

import asyncio
from typing import cast

from agent_framework import ChatMessage, Role, SequentialBuilder, WorkflowOutputEvent
from agent_framework.azure import AzureOpenAIChatClient
from agent_framework_claude import ClaudeAgent
from azure.identity import AzureCliCredential

async def main():
    # Create an Azure OpenAI agent as a copywriter
    chat_client = AzureOpenAIChatClient(credential=AzureCliCredential())

    writer = chat_client.as_agent(
        instructions="You are a concise copywriter. Provide a single, punchy marketing sentence based on the prompt.",
        name="writer",
    )

    # Create a Claude agent as a reviewer
    reviewer = ClaudeAgent(
        instructions="You are a thoughtful reviewer. Give brief feedback on the previous assistant message.",
        name="reviewer",
    )

    # Build a sequential workflow: writer -> reviewer
    workflow = SequentialBuilder().participants([writer, reviewer]).build()

    # Run the workflow
    async for event in workflow.run_stream("Write a tagline for a budget-friendly electric bike."):
        if isinstance(event, WorkflowOutputEvent):
            messages = cast(list[ChatMessage], event.data)
            for msg in messages:
                name = msg.author_name or ("assistant" if msg.role == Role.ASSISTANT else "user")
                print(f"[{name}]: {msg.text}\n")

asyncio.run(main())

This example shows how a single workflow can combine agents from different providers. You can extend this pattern to concurrent, handoff, and group chat workflows as well.
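The sequential pattern itself boils down to threading one shared conversation through each participant in order. A plain-Python sketch (illustrative only, not SequentialBuilder's internals):

```python
def run_sequential(participants, prompt):
    """Each participant is a (name, fn) pair; fn maps the conversation so far to a reply."""
    conversation = [("user", prompt)]
    for name, fn in participants:
        reply = fn(conversation)
        conversation.append((name, reply))
    return conversation

# Toy stand-ins for the writer and reviewer agents.
writer = ("writer", lambda conv: f"Tagline for: {conv[0][1]}")
reviewer = ("reviewer", lambda conv: f"Feedback on: {conv[-1][1]}")
```

Because each participant only sees the growing conversation, the reviewer naturally critiques whatever the writer produced — the same contract the real workflow enforces.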

More Information

Summary

The Claude Agent SDK integration for Microsoft Agent Framework makes it easy to build AI agents that leverage Claude’s full agentic capabilities. With support for built-in tools, function tools, streaming, multi-turn conversations, permission modes, and MCP servers in Python, you can build powerful agentic applications that interact with code, files, shell commands, and external services.

We’re always interested in hearing from you. If you have feedback, questions or want to discuss further, feel free to reach out to us and the community on the discussion boards on GitHub! We would also love your support, if you’ve enjoyed using Agent Framework, give us a star on GitHub.

The post Build AI Agents with Claude Agent SDK and Microsoft Agent Framework appeared first on Microsoft Agent Framework.

Build AI Agents with GitHub Copilot SDK and Microsoft Agent Framework https://devblogs.microsoft.com/agent-framework/build-ai-agents-with-github-copilot-sdk-and-microsoft-agent-framework/ https://devblogs.microsoft.com/agent-framework/build-ai-agents-with-github-copilot-sdk-and-microsoft-agent-framework/#respond Tue, 27 Jan 2026 21:37:26 +0000 https://devblogs.microsoft.com/semantic-kernel/?p=5106 Microsoft Agent Framework now integrates with the GitHub Copilot SDK, enabling you to build AI agents powered by GitHub Copilot. This integration brings together the Agent Framework’s consistent agent abstraction with GitHub Copilot’s capabilities, including function calling, streaming responses, multi-turn conversations, shell command execution, file operations, URL fetching, and Model Context Protocol (MCP) server integration […]

The post Build AI Agents with GitHub Copilot SDK and Microsoft Agent Framework appeared first on Microsoft Agent Framework.

Microsoft Agent Framework now integrates with the GitHub Copilot SDK, enabling you to build AI agents powered by GitHub Copilot. This integration brings together the Agent Framework’s consistent agent abstraction with GitHub Copilot’s capabilities, including function calling, streaming responses, multi-turn conversations, shell command execution, file operations, URL fetching, and Model Context Protocol (MCP) server integration — all available in both .NET and Python.

Why Use Agent Framework with GitHub Copilot SDK?

You can use the GitHub Copilot SDK on its own to build agents. So why use it through Agent Framework? Here are the key reasons:

  • Consistent agent abstraction — GitHub Copilot agents implement the same AIAgent (.NET) / BaseAgent (Python) interface as every other agent type in the framework. You can swap providers or combine them without restructuring your code.
  • Multi-agent workflows — Compose GitHub Copilot agents with other agents (Azure OpenAI, OpenAI, Anthropic, and more) in sequential, concurrent, handoff, and group chat workflows using built-in orchestrators.
  • Ecosystem integration — Access the full Agent Framework ecosystem: declarative agent definitions, A2A protocol support, and consistent patterns for function tools, sessions, and streaming across all providers.

In short, Agent Framework lets you treat GitHub Copilot as one building block in a larger agentic system rather than a standalone tool.

Install the GitHub Copilot Integration

.NET

dotnet add package Microsoft.Agents.AI.GitHub.Copilot --prerelease

Python

pip install agent-framework-github-copilot --pre

Create a GitHub Copilot Agent

Getting started is straightforward. Create a CopilotClient (in .NET) or a GitHubCopilotAgent (in Python) and start interacting with the agent.

.NET

using GitHub.Copilot.SDK;
using Microsoft.Agents.AI;

await using CopilotClient copilotClient = new();
await copilotClient.StartAsync();

AIAgent agent = copilotClient.AsAIAgent();

Console.WriteLine(await agent.RunAsync("What is Microsoft Agent Framework?"));

Python

from agent_framework.github import GitHubCopilotAgent

async def main():
    agent = GitHubCopilotAgent(
        default_options={"instructions": "You are a helpful assistant."},
    )

    async with agent:
        result = await agent.run("What is Microsoft Agent Framework?")
        print(result)

Add Function Tools

Extend your agent with custom function tools to give it domain-specific capabilities.

.NET

using GitHub.Copilot.SDK;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

AIFunction weatherTool = AIFunctionFactory.Create((string location) =>
{
    return $"The weather in {location} is sunny with a high of 25C.";
}, "GetWeather", "Get the weather for a given location.");

await using CopilotClient copilotClient = new();
await copilotClient.StartAsync();

AIAgent agent = copilotClient.AsAIAgent(
    tools: [weatherTool],
    instructions: "You are a helpful weather agent.");

Console.WriteLine(await agent.RunAsync("What's the weather like in Seattle?"));

Python

from typing import Annotated
from pydantic import Field
from agent_framework.github import GitHubCopilotAgent

def get_weather(
    location: Annotated[str, Field(description="The location to get the weather for.")],
) -> str:
    """Get the weather for a given location."""
    return f"The weather in {location} is sunny with a high of 25C."

async def main():
    agent = GitHubCopilotAgent(
        default_options={"instructions": "You are a helpful weather agent."},
        tools=[get_weather],
    )

    async with agent:
        result = await agent.run("What's the weather like in Seattle?")
        print(result)

Stream Responses

For a better user experience, you can stream responses as they are generated instead of waiting for the complete result.

.NET

await using CopilotClient copilotClient = new();
await copilotClient.StartAsync();

AIAgent agent = copilotClient.AsAIAgent();

await foreach (AgentResponseUpdate update in agent.RunStreamingAsync("Tell me a short story."))
{
    Console.Write(update);
}

Console.WriteLine();

Python

from agent_framework.github import GitHubCopilotAgent

async def main():
    agent = GitHubCopilotAgent(
        default_options={"instructions": "You are a helpful assistant."},
    )

    async with agent:
        print("Agent: ", end="", flush=True)
        async for chunk in agent.run_stream("Tell me a short story."):
            if chunk.text:
                print(chunk.text, end="", flush=True)
        print()

Multi-Turn Conversations

Maintain conversation context across multiple interactions using sessions (.NET) or threads (Python).

.NET

await using CopilotClient copilotClient = new();
await copilotClient.StartAsync();

await using GitHubCopilotAgent agent = new(
    copilotClient,
    instructions: "You are a helpful assistant. Keep your answers short.");

AgentSession session = await agent.GetNewSessionAsync();

// First turn
await agent.RunAsync("My name is Alice.", session);

// Second turn - agent remembers the context
AgentResponse response = await agent.RunAsync("What is my name?", session);
Console.WriteLine(response); // Should mention "Alice"

Python

from agent_framework.github import GitHubCopilotAgent

async def main():
    agent = GitHubCopilotAgent(
        default_options={"instructions": "You are a helpful assistant."},
    )

    async with agent:
        thread = agent.get_new_thread()

        # First interaction
        result1 = await agent.run("My name is Alice.", thread=thread)
        print(f"Agent: {result1}")

        # Second interaction - agent remembers the context
        result2 = await agent.run("What's my name?", thread=thread)
        print(f"Agent: {result2}")  # Should remember "Alice"

Enable Permissions

By default, the agent cannot execute shell commands, read/write files, or fetch URLs. To enable these capabilities, provide a permission handler that approves or denies requests.

.NET

static Task<PermissionRequestResult> PromptPermission(
    PermissionRequest request, PermissionInvocation invocation)
{
    Console.WriteLine($"\n[Permission Request: {request.Kind}]");
    Console.Write("Approve? (y/n): ");

    string? input = Console.ReadLine()?.Trim().ToUpperInvariant();
    string kind = input is "Y" or "YES" ? "approved" : "denied-interactively-by-user";

    return Task.FromResult(new PermissionRequestResult { Kind = kind });
}

await using CopilotClient copilotClient = new();
await copilotClient.StartAsync();

SessionConfig sessionConfig = new()
{
    OnPermissionRequest = PromptPermission,
};

AIAgent agent = copilotClient.AsAIAgent(sessionConfig);

Console.WriteLine(await agent.RunAsync("List all files in the current directory"));

Python

from agent_framework.github import GitHubCopilotAgent
from copilot.types import PermissionRequest, PermissionRequestResult

def prompt_permission(
    request: PermissionRequest, context: dict[str, str]
) -> PermissionRequestResult:
    kind = request.get("kind", "unknown")
    print(f"\n[Permission Request: {kind}]")

    response = input("Approve? (y/n): ").strip().lower()
    if response in ("y", "yes"):
        return PermissionRequestResult(kind="approved")
    return PermissionRequestResult(kind="denied-interactively-by-user")

async def main():
    agent = GitHubCopilotAgent(
        default_options={
            "instructions": "You are a helpful assistant that can execute shell commands.",
            "on_permission_request": prompt_permission,
        },
    )

    async with agent:
        result = await agent.run("List the Python files in the current directory")
        print(result)

Connect MCP Servers

GitHub Copilot agents support connecting to local (stdio) and remote (HTTP) MCP servers, giving the agent access to external tools and data sources.

.NET

await using CopilotClient copilotClient = new();
await copilotClient.StartAsync();

SessionConfig sessionConfig = new()
{
    OnPermissionRequest = PromptPermission,
    McpServers = new Dictionary<string, object>
    {
        // Local stdio server
        ["filesystem"] = new McpLocalServerConfig
        {
            Type = "stdio",
            Command = "npx",
            Args = ["-y", "@modelcontextprotocol/server-filesystem", "."],
            Tools = ["*"],
        },
        // Remote HTTP server
        ["microsoft-learn"] = new McpRemoteServerConfig
        {
            Type = "http",
            Url = "https://learn.microsoft.com/api/mcp",
            Tools = ["*"],
        },
    },
};

AIAgent agent = copilotClient.AsAIAgent(sessionConfig);

Console.WriteLine(await agent.RunAsync("Search Microsoft Learn for 'Azure Functions' and summarize the top result"));

Python

from agent_framework.github import GitHubCopilotAgent
from copilot.types import MCPServerConfig

async def main():
    mcp_servers: dict[str, MCPServerConfig] = {
        # Local stdio server
        "filesystem": {
            "type": "stdio",
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "."],
            "tools": ["*"],
        },
        # Remote HTTP server
        "microsoft-learn": {
            "type": "http",
            "url": "https://learn.microsoft.com/api/mcp",
            "tools": ["*"],
        },
    }

    agent = GitHubCopilotAgent(
        default_options={
            "instructions": "You are a helpful assistant with access to the filesystem and Microsoft Learn.",
            "on_permission_request": prompt_permission,
            "mcp_servers": mcp_servers,
        },
    )

    async with agent:
        result = await agent.run("Search Microsoft Learn for 'Azure Functions' and summarize the top result")
        print(result)

Use GitHub Copilot in a Multi-Agent Workflow

One of the key benefits of using Agent Framework is the ability to combine GitHub Copilot with other agents in a multi-agent workflow. In this example, an Azure OpenAI agent drafts a marketing tagline and a GitHub Copilot agent reviews it — all orchestrated as a sequential pipeline.

.NET

using Azure.AI.OpenAI;
using Azure.Identity;
using GitHub.Copilot.SDK;
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.GitHub.Copilot;
using Microsoft.Agents.AI.Workflows;
using Microsoft.Extensions.AI;

// Create an Azure OpenAI agent as a copywriter
var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!;
var deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT_NAME") ?? "gpt-4o-mini";
var chatClient = new AzureOpenAIClient(new Uri(endpoint), new AzureCliCredential())
    .GetChatClient(deploymentName)
    .AsIChatClient();

ChatClientAgent writer = new(chatClient,
    "You are a concise copywriter. Provide a single, punchy marketing sentence based on the prompt.",
    "writer");

// Create a GitHub Copilot agent as a reviewer
await using CopilotClient copilotClient = new();
await copilotClient.StartAsync();

GitHubCopilotAgent reviewer = new(copilotClient,
    instructions: "You are a thoughtful reviewer. Give brief feedback on the previous assistant message.");

// Build a sequential workflow: writer -> reviewer
Workflow workflow = AgentWorkflowBuilder.BuildSequential([writer, reviewer]);

// Run the workflow
string prompt = "Write a tagline for a budget-friendly electric bike.";
await using StreamingRun run = await InProcessExecution.StreamAsync(workflow, input: prompt);
await run.TrySendMessageAsync(new TurnToken(emitEvents: true));

await foreach (WorkflowEvent evt in run.WatchStreamAsync())
{
    if (evt is AgentResponseUpdateEvent e)
    {
        Console.Write(e.Update.Text);
    }
}

Python

import asyncio
from typing import cast

from agent_framework import ChatMessage, Role, SequentialBuilder, WorkflowOutputEvent
from agent_framework.azure import AzureOpenAIChatClient
from agent_framework.github import GitHubCopilotAgent
from azure.identity import AzureCliCredential

async def main():
    # Create an Azure OpenAI agent as a copywriter
    chat_client = AzureOpenAIChatClient(credential=AzureCliCredential())

    writer = chat_client.as_agent(
        instructions="You are a concise copywriter. Provide a single, punchy marketing sentence based on the prompt.",
        name="writer",
    )

    # Create a GitHub Copilot agent as a reviewer
    reviewer = GitHubCopilotAgent(
        default_options={"instructions": "You are a thoughtful reviewer. Give brief feedback on the previous assistant message."},
        name="reviewer",
    )

    # Build a sequential workflow: writer -> reviewer
    workflow = SequentialBuilder().participants([writer, reviewer]).build()

    # Run the workflow
    async for event in workflow.run_stream("Write a tagline for a budget-friendly electric bike."):
        if isinstance(event, WorkflowOutputEvent):
            messages = cast(list[ChatMessage], event.data)
            for msg in messages:
                name = msg.author_name or ("assistant" if msg.role == Role.ASSISTANT else "user")
                print(f"[{name}]: {msg.text}\n")

asyncio.run(main())

This example shows how a single workflow can combine agents from different providers. You can extend this pattern to concurrent, handoff, and group chat workflows as well.

More Information

Summary

The GitHub Copilot SDK integration for Microsoft Agent Framework makes it easy to build AI agents that leverage GitHub Copilot’s capabilities. With support for function tools, streaming, multi-turn conversations, permissions, and MCP servers in both .NET and Python, you can build powerful agentic applications that interact with code, files, shell commands, and external services.

We’re always interested in hearing from you. If you have feedback, questions or want to discuss further, feel free to reach out to us and the community on the discussion boards on GitHub! We would also love your support, if you’ve enjoyed using Agent Framework, give us a star on GitHub.

The “Golden Triangle” of Agentic Development with Microsoft Agent Framework: AG-UI, DevUI & OpenTelemetry Deep Dive https://devblogs.microsoft.com/agent-framework/the-golden-triangle-of-agentic-development-with-microsoft-agent-framework-ag-ui-devui-opentelemetry-deep-dive/ https://devblogs.microsoft.com/agent-framework/the-golden-triangle-of-agentic-development-with-microsoft-agent-framework-ag-ui-devui-opentelemetry-deep-dive/#comments Mon, 01 Dec 2025 17:08:48 +0000 https://devblogs.microsoft.com/semantic-kernel/?p=5069 In the explosive era of Agentic AI, we’re not just seeking more powerful models—we’re searching for a development experience that lets developers actually get some sleep. When building Agents locally, we’ve traditionally faced three major challenges: Black-Box Execution: What is my Agent thinking? Why is it stuck? (Debugging is hard) Interaction Silos: I’ve built my Agent—how do […]

The post The “Golden Triangle” of Agentic Development with Microsoft Agent Framework: AG-UI, DevUI & OpenTelemetry Deep Dive appeared first on Microsoft Agent Framework.

In the explosive era of Agentic AI, we’re not just seeking more powerful models—we’re searching for a development experience that lets developers actually get some sleep. When building Agents locally, we’ve traditionally faced three major challenges:

  1. Black-Box Execution: What is my Agent thinking? Why is it stuck? (Debugging is hard)
  2. Interaction Silos: I’ve built my Agent—how do I quickly demo a beautiful UI to stakeholders? (Productization is hard)
  3. Performance Blind Spots: How many tokens are being consumed? Where’s the latency? (Optimization is hard)

Today, I’ll walk you through a classic case from Microsoft Agent Framework Samples—GHModel.AI—to reveal the “Golden Triangle” development stack that solves these pain points: DevUI, AG-UI, and OpenTelemetry.

Let’s explore how this powerful combination empowers the entire local development lifecycle.

Phase 1: Creation — Standing on the Shoulders of GitHub Models

In the GHModel.AI case, we first address the “brain” problem.

Traditional local development is often constrained by computing resources or expensive API keys. This case cleverly leverages GitHub Models. As an evangelist, I must strongly recommend this combination:

  • Zero-Barrier Access: Call GPT-4o, Llama 3, and other cutting-edge models directly with your GitHub account—no complex Azure configuration or credit card binding required.
  • Standardized SDK: Through Agent Framework’s abstraction layer, we can switch model backends with just a few lines of code.

In this case’s code structure, you’ll find Agent definitions become exceptionally clear. No more spaghetti-style Python/C# scripts—just structured “declarations.”

Quick Start Code

Python:

# Python - Create agents with GitHub Models

import os

from agent_framework import AgentExecutor, WorkflowBuilder
from agent_framework.openai import OpenAIChatClient

chat_client = OpenAIChatClient(
    base_url=os.environ.get("GITHUB_ENDPOINT"),    # 🌐 GitHub Models API endpoint
    api_key=os.environ.get("GITHUB_TOKEN"),        # 🔑 Authentication token
    model_id=os.environ.get("GITHUB_MODEL_ID"),    # 🎯 Selected AI model
)


# Create Concierge Agent

CONCIERGE_AGENT_NAMES = "Concierge"
CONCIERGE_AGENT_INSTRUCTIONS = """
            You are a hotel concierge who has opinions about providing the most local and authentic experiences for travelers.
            The goal is to determine if the front desk travel agent has recommended the best non-touristy experience for a traveler.
            If so, state that it is approved.
            If not, provide insight on how to refine the recommendation without using a specific example. """


concierge_agent = chat_client.create_agent(
    instructions=CONCIERGE_AGENT_INSTRUCTIONS,
    name=CONCIERGE_AGENT_NAMES,
)

# Create FrontDesk Agent

FRONTEND_AGENT_NAMES = "FrontDesk"
FRONTEND_AGENT_INSTRUCTIONS = """
            You are a Front Desk Travel Agent with ten years of experience and are known for brevity as you deal with many customers.
            The goal is to provide the best activities and locations for a traveler to visit.
            Only provide a single recommendation per response.
            You're laser focused on the goal at hand.
            Don't waste time with chit chat.
            Consider suggestions when refining an idea.
            """


frontend_agent = chat_client.create_agent(
    instructions=FRONTEND_AGENT_INSTRUCTIONS,
    name=FRONTEND_AGENT_NAMES,
)

# Create Workflow

frontend_executor = AgentExecutor(frontend_agent, id="frontend_agent")
concierge_executor = AgentExecutor(concierge_agent, id="concierge_agent")

workflow = (
    WorkflowBuilder()
    .set_start_executor(frontend_executor)
    .add_edge(frontend_executor, concierge_executor)
    .build()
)

.NET:

// .NET - Create agents with GitHub Models

var openAIOptions = new OpenAIClientOptions()
{
    Endpoint = new Uri(github_endpoint)
};
        
var openAIClient = new OpenAIClient(new ApiKeyCredential(github_token), openAIOptions);

var chatClient = openAIClient.GetChatClient(github_model_id).AsIChatClient();

const string ReviewerAgentName = "Concierge";
const string ReviewerAgentInstructions = @"
    You are a hotel concierge who has opinions about providing the most local and authentic experiences for travelers.
    The goal is to determine if the front desk travel agent has recommended the best non-touristy experience for a traveler.
    If so, state that it is approved.
    If not, provide insight on how to refine the recommendation without using a specific example. ";

const string FrontDeskAgentName = "FrontDesk";
const string FrontDeskAgentInstructions = @"
    You are a Front Desk Travel Agent with ten years of experience and are known for brevity as you deal with many customers.
    The goal is to provide the best activities and locations for a traveler to visit.
    Only provide a single recommendation per response.
    You're laser focused on the goal at hand.
    Don't waste time with chit chat.
    Consider suggestions when refining an idea.
    ";

var reviewerAgentBuilder = new AIAgentBuilder(chatClient.CreateAIAgent(
    name: ReviewerAgentName,
    instructions: ReviewerAgentInstructions));

var frontDeskAgentBuilder = new AIAgentBuilder(chatClient.CreateAIAgent(
    name: FrontDeskAgentName,
    instructions: FrontDeskAgentInstructions));

AIAgent reviewerAgent = reviewerAgentBuilder.Build(serviceProvider);
AIAgent frontDeskAgent = frontDeskAgentBuilder.Build(serviceProvider);

// Create Workflow
var workflow = new WorkflowBuilder(frontDeskAgent)
    .AddEdge(frontDeskAgent, reviewerAgent)
    .Build();

Phase 2: Testing & Debugging — DevUI

This is the highlight of this article. Previously, we debugged Agents using the print() method and endless console logs. Now, we have DevUI.

What is DevUI? It’s an “inner-loop” tool designed specifically for developers within Agent Framework. When GHModel.AI runs, DevUI provides a visual console:

  1. Chain of Thought Visualization: You no longer need to guess why the Agent chose Tool A over Tool B. In DevUI, you can see each Reasoning, Action, and Observation step like a flowchart. This isn’t just debugging—it’s an “X-ray” of Agent behavior.

  2. Real-Time State Monitoring: What’s stored in the Agent’s Memory? Is the context overflowing? DevUI lets you view Conversation State in real-time, quickly pinpointing the root cause of “hallucinations.”
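What DevUI visualizes is essentially a structured trace of those Reasoning, Action, and Observation steps. A toy recorder (hypothetical names, not DevUI's actual data model) makes the shape of that data concrete:

```python
from dataclasses import dataclass, field

@dataclass
class TraceStep:
    kind: str    # "reasoning" | "action" | "observation"
    detail: str

@dataclass
class AgentTrace:
    steps: list[TraceStep] = field(default_factory=list)

    def record(self, kind: str, detail: str) -> None:
        self.steps.append(TraceStep(kind, detail))

    def summary(self) -> str:
        # Compact view of the step sequence, like DevUI's flowchart spine.
        return " -> ".join(s.kind for s in self.steps)

trace = AgentTrace()
trace.record("reasoning", "User asked for weather; the weather tool fits.")
trace.record("action", "GetWeather(location='Seattle')")
trace.record("observation", "Sunny, high of 25C")
```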

Python:

cd GHModel.Python.AI/GHModel.Python.AI.Workflow.DevUI
pip install agent-framework agent-framework-devui python-dotenv
python main.py
# Browser opens automatically at http://localhost:8090

.NET:

cd GHModel.dotNET.AI/GHModel.dotNET.AI.Workflow.DevUI
dotnet run
# DevUI: https://localhost:50516/devui
# OpenAI API: https://localhost:50516/v1/responses

DevUI dramatically shortens the "write-run-fix" feedback loop. For complex multi-agent collaboration scenarios, it's your command center.

[Screenshot: the DevUI console visualizing a workflow run]

Phase 3: Delivery & Interaction — AG-UI

Debugging is done, and your boss says: “Can you send me a link so I can try it too?” At this moment, don’t hand-write a React frontend! What you need is AG-UI.

What does AG-UI solve? It’s a standardized Agent-User interaction protocol. In the GHModel.AI case, by integrating AG-UI:

  • Out-of-the-Box Frontend: Agent Framework can directly expose interfaces compliant with the AG-UI protocol. Any frontend supporting AG-UI (like components provided by CopilotKit) can connect directly to your local Agent.
  • Streaming Responses & Generative UI: It supports not only text streaming but also server-side UI component pushing. This means your Agent can dynamically render charts, tables, or cards on the user interface based on content—no frontend hardcoding required.

AG-UI Supported Features

  • ✅ Streaming responses (SSE)
  • ✅ Backend tool rendering
  • ✅ Human-in-the-Loop approvals
  • ✅ Shared state synchronization
  • ✅ Seamless CopilotKit integration
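Under the hood, SSE streaming is just newline-delimited `data:` frames carrying JSON payloads. A minimal sketch of the wire format (the real AG-UI event schema is richer; `sse_frame` and the event names here are illustrative):

```python
import json

def sse_frame(event_type: str, payload: dict) -> str:
    """Encode one server-sent event carrying a JSON payload."""
    body = json.dumps({"type": event_type, **payload})
    return f"data: {body}\n\n"

def stream_text(chunks):
    """Yield SSE frames for each text delta, then a done marker."""
    for chunk in chunks:
        yield sse_frame("text-delta", {"delta": chunk})
    yield sse_frame("done", {})
```

Any SSE-capable client can consume such a stream, which is why an AG-UI frontend can attach to your local agent without custom plumbing.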

Implementation Examples

Python Server:

# Server — Register AG-UI endpoint
from fastapi import FastAPI

from agent_framework_ag_ui import add_agent_framework_fastapi_endpoint
from workflow import workflow

app = FastAPI()
agent = workflow.as_agent(name="Travel Agent")
add_agent_framework_fastapi_endpoint(app, agent, "/")

.NET Server:

// Program.cs — ASP.NET Core AG-UI endpoint registration
using Microsoft.Agents.AI.Hosting.AGUI.AspNetCore;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddAGUI();

var app = builder.Build();
AIAgent workflowAgent = ChatClientAgentFactory.CreateTravelAgenticChat();
app.MapAGUI("/", workflowAgent);
await app.RunAsync();

The transition from DevUI to AG-UI is a seamless switch from “developer perspective” to “user perspective.” We can use CopilotKit to build the UI.

[Screenshot: the AG-UI chat interface rendered with CopilotKit]

Phase 4: Performance Tracking — OpenTelemetry

Before the Agent goes live, besides functioning correctly, we must answer: “Is it fast? Is it expensive?”

This is where OpenTelemetry (OTel) comes in. In Agent Framework, OpenTelemetry support is baked in. In the GHModel.AI code, enabling it typically takes a single line of configuration (such as AddOpenTelemetry or setup_observability):

  1. Distributed Tracing: When a request comes in, passes through routing and guardrails, calls GitHub Models, and returns results, OTel generates a complete Flame Graph. You can precisely see:

    • How long does network I/O take?
    • How long does LLM Token generation take?
    • How long does local logic processing take?
  2. Cost Transparency: Combined with OTel Metrics, we can monitor Token consumption rates. This is crucial for cost estimation when migrating from GitHub Models (free/prototype stage) to Azure OpenAI (paid/production stage).
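The cost estimate itself is simple arithmetic once token counters are exported as metrics. A sketch (the per-1K-token prices below are placeholder values, not real pricing):

```python
def estimate_cost_usd(prompt_tokens: int, completion_tokens: int,
                      prompt_price_per_1k: float, completion_price_per_1k: float) -> float:
    """Estimate spend from token counters, e.g. counters exported as OTel metrics."""
    return (prompt_tokens / 1000) * prompt_price_per_1k \
         + (completion_tokens / 1000) * completion_price_per_1k

# Hypothetical prices, for illustration only: 12K prompt + 3K completion tokens.
cost = estimate_cost_usd(12_000, 3_000,
                         prompt_price_per_1k=0.005,
                         completion_price_per_1k=0.015)
```

Feeding real token metrics into a formula like this is what makes the GitHub Models → Azure OpenAI migration cost predictable before you switch.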

🔧 Quick Setup

Python:

# Enable telemetry and logging
from agent_framework.observability import setup_observability
from agent_framework import setup_logging

setup_observability()
setup_logging()

.NET:

// OpenTelemetry configuration
var tracerProvider = Sdk.CreateTracerProviderBuilder()
    .AddSource("*Microsoft.Agents.AI")
    .AddOtlpExporter(options => options.Endpoint = new Uri("http://localhost:4317"))
    .Build();

Environment Variables:

ENABLE_OTEL=true
ENABLE_SENSITIVE_DATA=true               # Enable sensitive data logging in dev
OTLP_ENDPOINT=http://localhost:4317       # Aspire Dashboard / OTLP Collector
APPLICATIONINSIGHTS_CONNECTION_STRING=... # Azure Application Insights (optional)
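A minimal sketch of how an application might read these variables at startup (plain Python; the defaults mirror the values shown above, and the function name is illustrative, not a framework API):

```python
import os

def otel_config(env=os.environ) -> dict:
    """Parse the observability-related environment variables into a config dict."""
    return {
        "enabled": env.get("ENABLE_OTEL", "false").lower() == "true",
        "log_sensitive": env.get("ENABLE_SENSITIVE_DATA", "false").lower() == "true",
        "otlp_endpoint": env.get("OTLP_ENDPOINT", "http://localhost:4317"),
        "app_insights": env.get("APPLICATIONINSIGHTS_CONNECTION_STRING"),
    }
```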

📈 Visualization Options

  • Aspire Dashboard (local development). Quick start: docker run --rm -d -p 18888:18888 -p 4317:18889 mcr.microsoft.com/dotnet/aspire-dashboard:latest
  • Application Insights (production monitoring). Quick start: set APPLICATIONINSIGHTS_CONNECTION_STRING
  • Grafana dashboards (advanced visualization). Quick start: use the Agent Overview and Workflow Overview dashboards


Architecture Overview


Summary: Build Your “Efficiency Closed Loop”

Returning to the GHModel.AI case, it’s not just a code sample—it demonstrates best practice architecture for modern Agent development:

  • Model Layer: GitHub Models, to rapidly validate ideas with free, cutting-edge models
  • Debug Layer: DevUI, to gain a “God Mode View” and iterate logic quickly
  • Presentation Layer: AG-UI, to standardize output and generate user interfaces in seconds
  • Observability Layer: OpenTelemetry, for data-driven optimization with no more guesswork

Final Thoughts

I encourage every Agent developer to dive deep into the code in Agent-Framework-Samples. Stop debugging AI with Notepad—arm yourself with these modern weapons and go build next-generation intelligent applications!

The combination of GitHub Models for rapid prototyping, DevUI for visual debugging, AG-UI for seamless user interaction, and OpenTelemetry for production-grade observability represents a paradigm shift in how we build agentic applications.

Your Agent development journey starts here. The future is agentic. Let’s build it together!

Resources

  1. Microsoft Agent Framework GitHub repo
  2. Microsoft Agent Framework Samples
  3. Microsoft Agent Framework DevUI samples (DevUI Getting Started)
  4. Microsoft Agent Framework observability guide (Observability Samples)

The post The “Golden Triangle” of Agentic Development with Microsoft Agent Framework: AG-UI, DevUI & OpenTelemetry Deep Dive appeared first on Microsoft Agent Framework.

]]>
https://devblogs.microsoft.com/agent-framework/the-golden-triangle-of-agentic-development-with-microsoft-agent-framework-ag-ui-devui-opentelemetry-deep-dive/feed/ 2
Unlocking Enterprise AI Complexity: Multi-Agent Orchestration with the Microsoft Agent Framework https://devblogs.microsoft.com/agent-framework/unlocking-enterprise-ai-complexity-multi-agent-orchestration-with-the-microsoft-agent-framework/ https://devblogs.microsoft.com/agent-framework/unlocking-enterprise-ai-complexity-multi-agent-orchestration-with-the-microsoft-agent-framework/#comments Thu, 23 Oct 2025 08:09:07 +0000 https://devblogs.microsoft.com/semantic-kernel/?p=5019 The Architectural Imperative: Why Multi-Agent Orchestration is Essential In modern enterprise AI systems, the scope and complexity of real-world business challenges quickly exceed the capabilities of a single, monolithic AI Agent. Facing tasks like end-to-end customer journey management, multi-source data governance, or deep human-in-the-loop review processes, the fundamental architectural challenge shifts: How do we effectively coordinate […]

The post Unlocking Enterprise AI Complexity: Multi-Agent Orchestration with the Microsoft Agent Framework appeared first on Microsoft Agent Framework.

]]>
The Architectural Imperative: Why Multi-Agent Orchestration is Essential

In modern enterprise AI systems, the scope and complexity of real-world business challenges quickly exceed the capabilities of a single, monolithic AI Agent. Facing tasks like end-to-end customer journey management, multi-source data governance, or deep human-in-the-loop review processes, the fundamental architectural challenge shifts: How do we effectively coordinate and manage a network of specialized, atomic AI capabilities?

Much like a high-performing corporation relies on specialized departments, we must transition from a single-executor model to a Collaborative Multi-Agent Network.

The Microsoft Agent Framework is designed to address this paradigm shift, offering a unified, observable platform that empowers developers to achieve two core value propositions:

Scenario 1: Architecting Professionalized AI Agent Units

Each Agent serves as a specialized, pluggable, and independently operating execution unit, underpinned by three critical pillars of intelligence:

  1. LLM-Powered Intent Resolution: Leveraging the power of Large Language Models (LLMs) to accurately interpret and map complex user input requests.
  2. Action & Tooling Execution: Performing actual business logic and operations by invoking external APIs, tools, or internal services (like MCP servers).
  3. Contextual Response Generation: Returning precise, valuable, and contextually aware smart responses to the user based on the execution outcome and current state.

Developers retain the flexibility to use leading model providers, including Azure OpenAI, OpenAI, Azure AI Foundry, or local models, to customize and build these high-performance Agent primitives.
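Conceptually, those three pillars form a resolve → act → respond loop. Here is a framework-free sketch (every function name below is illustrative, not an Agent Framework API):

```python
def run_agent(user_input, resolve_intent, tools, respond):
    intent = resolve_intent(user_input)               # 1. LLM-powered intent resolution
    result = tools[intent["tool"]](**intent["args"])  # 2. action & tooling execution
    return respond(user_input, result)                # 3. contextual response generation

# Stub wiring for illustration (a real agent would call an LLM here):
tools = {"add": lambda a, b: a + b}
reply = run_agent("what is 2 + 3?",
                  lambda q: {"tool": "add", "args": {"a": 2, "b": 3}},
                  tools,
                  lambda q, r: f"The answer is {r}.")
```

The framework's value is in providing production-grade implementations of each stage, but keeping this loop in mind clarifies what each Agent primitive is responsible for.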

Scenario 2: Dynamic Coordination via Workflow Orchestration

The Workflow feature is the flagship capability of the Microsoft Agent Framework, elevating orchestration from simple linear flow to a dynamic collaboration graph. It grants the system advanced architectural abilities:

  • 🔗 Architecting the Collaboration Graph: Connecting specialized Agents and functional modules into a highly cohesive, loosely coupled network.
  • 🎯 Decomposing Complex Tasks: Automatically breaking down macro-tasks into manageable, traceable sub-task steps for precise execution.
  • 🧭 Context-Based Dynamic Routing: Utilizing intermediate data types and business rules to automatically select the optimal processing path or Agent (Routing).
  • 🔄 Supporting Deep Nesting: Embedding sub-workflows within a primary workflow to achieve layered logical abstraction and maximize reusability.
  • 💾 Defining Checkpoints: Persisting state at critical execution nodes to ensure high process traceability, data validation, and fault tolerance.
  • 🤝 Human-in-the-Loop Integration: Defining clear request/response contracts to introduce human experts into the decision cycle when necessary.

Crucially, Workflow definitions are not limited to Agent connections; they can integrate seamlessly with existing business logic and method executors, providing maximum flexibility for complex process integration.

Deeper Dive: Workflow Patterns

Drawing on the GitHub Models examples, we demonstrate how to leverage the Workflow component to enforce structure, parallelism, and dynamic decision-making in enterprise applications.

1. Sequential: Enforcing Structured Data Flow


  • Definition: Executors are run in a predefined order, where the output of each step is validated, serialized, and passed as the normalized input for the next executor in the chain.
  • Architectural Implication: This pattern is essential for pipelines requiring strict idempotency and state management between phases. You should strategically use Transformer Executors (like to_reviewer_result) at intermediate nodes for data formatting, validation, or status logging, thereby establishing critical checkpoints.
# Linear flow: Agent1 -> Agent2 -> Agent3
from agent_framework import WorkflowBuilder

workflow = (
	WorkflowBuilder()
	.set_start_executor(agent1)
	.add_edge(agent1, agent2)
	.add_edge(agent2, agent3)
	.build()
)

2. Concurrent: Achieving High-Throughput Fan-out/Fan-in


  • Definition: Multiple Agents (or multiple instances of the same Agent) are initiated concurrently within the same workflow to minimize overall latency, with results merged at a designated Join Point.
  • Architectural Implication: This is the core implementation of the Fan-out/Fan-in pattern. The critical component is the Aggregation Function (aggregate_results_function), where custom logic must be implemented to reconcile multi-branch returns, often via voting mechanisms, weighted consolidation, or priority-based selection.
from agent_framework import ConcurrentBuilder

workflow = (
	ConcurrentBuilder()
	.participants([agentA, agentB, agentC])
	.build()
)
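The aggregation function itself is ordinary application logic. As one example of the voting strategy mentioned above, here is a minimal majority-vote reconciler over branch results (plain Python, independent of the framework):

```python
from collections import Counter

def aggregate_by_vote(branch_results: list[str]) -> str:
    """Pick the most common answer across concurrent branches.

    Ties are broken in favor of the branch that answered first, because
    Counter.most_common preserves insertion order for equal counts.
    """
    return Counter(branch_results).most_common(1)[0][0]
```

Weighted consolidation or priority-based selection would follow the same shape: a pure function from the list of branch outputs to a single reconciled result.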

3. Conditional: State-Based Dynamic Decisioning


  • Definition: The workflow incorporates a decision-making executor that dynamically routes the process to different branches (e.g., Save Draft, Rework, Human Review) based on the intermediate results or predefined business rules.
  • Architectural Implication: The power of this pattern lies in the selection function (selection_func). It receives the parsed intermediate data (e.g., ReviewResult) and returns a list of target executor IDs, enabling not just single-path routing but also complex logic where a single data item can branch into multiple parallel paths.
def select_targets(review, targets):
	handle_id, save_id = targets
	return [save_id] if review.review_result == "Yes" else [handle_id]

workflow = (
	WorkflowBuilder()
	.set_start_executor(evangelist_executor)
	.add_edge(evangelist_executor, reviewer_executor)
	.add_edge(reviewer_executor, to_reviewer_result)
	.add_multi_selection_edge_group(to_reviewer_result, [handle_review, save_draft], selection_func=select_targets)
	.build()
)
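To make the multi-path point concrete, here is a standalone variant of the selection function (plain Python; the audit branch is a hypothetical third executor) in which every result also fans out to a parallel audit path:

```python
def select_targets_with_audit(review_result: str, target_ids: list[str]) -> list[str]:
    """Route approved drafts to save and rejected ones to rework,
    while always fanning out a copy to a parallel audit executor."""
    handle_id, save_id, audit_id = target_ids
    primary = save_id if review_result == "Yes" else handle_id
    return [primary, audit_id]
```

Because the selection function returns a list of executor IDs, returning more than one ID is all it takes to turn single-path routing into parallel fan-out.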

In sophisticated production scenarios, these patterns are frequently layered: for instance, a Concurrent search and summarization phase followed by a Conditional branch that routes the result to either automatic publishing or a Sequential Human-in-the-Loop review process.

Production-Grade Observability: Harnessing DevUI and Tracing

For complex multi-agent systems, Observability is non-negotiable. The Microsoft Agent Framework offers an exceptional developer experience through the built-in DevUI, providing real-time visualization, interaction tracking, and performance monitoring for your orchestration layer.

The following simplified code demonstrates the key steps to enable this capability in your project (see project main.py):

  1. Core Workflow Construction (code unchanged)
# Transform and selection function example
@executor(id="to_reviewer_result")
async def to_reviewer_result(response, ctx):
	parsed = ReviewAgent.model_validate_json(response.agent_run_response.text)
	await ctx.send_message(ReviewResult(parsed.review_result, parsed.reason, parsed.draft_content))

def select_targets(review: ReviewResult, target_ids: list[str]) -> list[str]:
	handle_id, save_id = target_ids
	return [save_id] if review.review_result == "Yes" else [handle_id]

# Build executors and connect them
evangelist_executor = AgentExecutor(evangelist_agent, id="evangelist_agent")
reviewer_executor = AgentExecutor(reviewer_agent, id="reviewer_agent")
publisher_executor = AgentExecutor(publisher_agent, id="publisher_agent")

workflow = (
	WorkflowBuilder()
	.set_start_executor(evangelist_executor)
	.add_edge(evangelist_executor, to_evangelist_content_result)
	.add_edge(to_evangelist_content_result, reviewer_executor)
	.add_edge(reviewer_executor, to_reviewer_result)
	.add_multi_selection_edge_group(to_reviewer_result, [handle_review, save_draft], selection_func=select_targets)
	.add_edge(save_draft, publisher_executor)
	.build()
)
  2. Launching with DevUI for Visualization (project main.py)
from agent_framework.devui import serve

def main():
	serve(entities=[workflow], port=8090, auto_open=True, tracing_enabled=True)

if __name__ == "__main__":
	main()

Implementing End-to-End Tracing

When deploying multi-agent workflows to production or CI environments, robust tracing and monitoring are essential. To ensure high observability, you must confirm the following:

  • Environment Configuration: Ensure all necessary connection strings and credentials for Agents and tools are loaded via .env prior to startup.
  • Event Logging: Within Agent Executors and Transformers, utilize the framework’s context mechanism to explicitly log critical events (e.g., Agent responses, branch selection outcomes) for easy retrieval by DevUI or your log aggregation platform.
  • OTLP Integration: Set tracing_enabled to True and configure an OpenTelemetry Protocol (OTLP) exporter. This enables the complete execution call chain (Trace) to be exported to an APM/Trace platform (e.g., Azure Monitor, Jaeger).
  • Sample code: https://github.com/microsoft/Agent-Framework-Samples/tree/main/08.EvaluationAndTracing/python/multi_workflow_aifoundry_devui

By pairing the DevUI’s visual execution path with APM trace data, you gain the ability to rapidly diagnose latency bottlenecks, pinpoint failures, and ensure full control over your complex AI system.
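The bottleneck diagnosis described above ultimately reduces to comparing span durations along a trace. A toy illustration with hypothetical span timings (pure Python, no OTel dependency):

```python
def slowest_span(span_durations_ms: dict[str, float]) -> tuple[str, float]:
    """Return the span contributing the most latency in a trace."""
    name = max(span_durations_ms, key=span_durations_ms.get)
    return name, span_durations_ms[name]

# Hypothetical durations for one request through the workflow:
trace_spans = {"network_io": 120.0, "llm_generation": 2300.0, "local_logic": 45.0}
```

In practice the APM platform does this analysis for you, but the underlying question is always the same: which span dominates the critical path?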

Next Steps: Resources for the Agent Architect

Multi-Agent Orchestration represents the future of complex AI architecture. We encourage you to delve deeper into the Microsoft Agent Framework to master these powerful capabilities.

Here is a curated list of resources to accelerate your journey to becoming an Agent Architect:

  1. Microsoft Agent Framework GitHub Repo: https://github.com/microsoft/agent-framework

  2. Microsoft Agent Framework Workflow official sample: https://github.com/microsoft/agent-framework/tree/main/python/samples/getting_started/workflows

  3. Community and Collaboration: https://discord.com/invite/azureaifoundry

The post Unlocking Enterprise AI Complexity: Multi-Agent Orchestration with the Microsoft Agent Framework appeared first on Microsoft Agent Framework.

]]>
https://devblogs.microsoft.com/agent-framework/unlocking-enterprise-ai-complexity-multi-agent-orchestration-with-the-microsoft-agent-framework/feed/ 2
Semantic Kernel and Microsoft Agent Framework https://devblogs.microsoft.com/agent-framework/semantic-kernel-and-microsoft-agent-framework/ Wed, 08 Oct 2025 06:51:06 +0000 https://devblogs.microsoft.com/semantic-kernel/?p=5041 Last week we announced Microsoft Agent Framework, you can find all the details: In the blog post here: Introducing Microsoft Agent Framework: The Open-Source Engine for Agentic AI Apps | Azure AI Foundry Blog Explore documentation for more details: https://aka.ms/AgentFramework/Docs See it in action: Watch demos on AI Show and Open at Microsoft Learn step by step: Microsoft Learn modules for […]

The post Semantic Kernel and Microsoft Agent Framework appeared first on Microsoft Agent Framework.

]]>
Last week we announced Microsoft Agent Framework. You can find all the details in the announcement blog post, Introducing Microsoft Agent Framework: The Open-Source Engine for Agentic AI Apps, in the documentation at https://aka.ms/AgentFramework/Docs, in the demos on AI Show and Open at Microsoft, and in the step-by-step Microsoft Learn modules.
I’m immensely proud of the work the team that brought you AutoGen and Semantic Kernel has done to create Microsoft Agent Framework. We really think it’s a great step forward in building AI agents and applications, building on all the learnings we’ve had from creating AutoGen and Semantic Kernel. Please give it a try and give us your feedback; we think you’ll like it!

If you’ve been building and shipping on Semantic Kernel, I’m sure you have questions. I’ve answered the most common ones here but, as always, you can reach out to us on Semantic Kernel GitHub Discussions or Discord.

What is your position on Semantic Kernel and Microsoft Agent Framework?

Microsoft Agent Framework is the successor to Semantic Kernel for building AI agents. The goal of Microsoft Agent Framework is to provide a unified, enterprise-grade platform for developing, deploying, and managing AI agents. It builds upon the foundations laid by Semantic Kernel and AutoGen, incorporating lessons learned and feedback from the community to deliver a more robust and scalable solution. Microsoft Agent Framework is our single call-to-action for developers looking to create AI agents, with deep integration into the Microsoft and Azure ecosystems, as well as support for a wide range of models and tools from across the broader AI ecosystem. These are the same goals we had with Semantic Kernel and AutoGen.

How long do we expect Semantic Kernel will be supported for?

Think of Microsoft Agent Framework as Semantic Kernel v2.0 (it’s built by the same team!). Just like any library that has a v1.x and v2.x available, we will continue to support Semantic Kernel v1.x for the foreseeable future. We will continue to address critical bugs and security issues, and we’ll take some existing Semantic Kernel features to GA, but the majority of new features will be built for Microsoft Agent Framework. Ultimately, we will continue to support Semantic Kernel while there are still a substantial number of developers using it, and for at least one year after Microsoft Agent Framework leaves Preview and becomes Generally Available.

Will the support be different between platforms (Python/C#)?

We intend to support Python and C#/.NET at parity for features that are marked General Availability. During preview, there may be some features that are only available in one language or the other at first, depending on which developers are taking lead on a particular feature.

Should I stop using Semantic Kernel for new projects?

Microsoft Agent Framework is still in Preview and we expect it to be in Preview for several months. If you have an existing project using Semantic Kernel, or if you need to ship something quickly, it is perfectly fine to use Semantic Kernel. If you are starting a new project and can wait until Microsoft Agent Framework reaches General Availability before shipping, we recommend starting with Microsoft Agent Framework. If you are starting a new project and need features that are only available in Microsoft Agent Framework today, it is also fine to start with Microsoft Agent Framework. When you decide to make the journey from Semantic Kernel to Microsoft Agent Framework, we have some great migration documentation here: the Semantic Kernel .NET migration guide and the Semantic Kernel Python migration guide.

Semantic Kernel and AutoGen had such cool names, why Microsoft Agent Framework?

We tried Semantogen, but the branding team was having none of it 🙂

Happy coding and I hope to see all of you from the Semantic Kernel community on this exciting journey!

 

Shawn Henry

Product Lead – Semantic Kernel, AutoGen and Microsoft Agent Framework

The post Semantic Kernel and Microsoft Agent Framework appeared first on Microsoft Agent Framework.

]]>