Uno Platform https://platform.uno/ Multi-Platform Applications with C# and WinUI Fri, 13 Mar 2026 17:52:18 +0000 en-US

Cross-Platform Layout Bugs: Runtime Inspection Beats Guessing https://platform.uno/blog/cross-platform-layout-bugs-runtime-inspection-beats-guessing/ Fri, 13 Mar 2026 17:47:15 +0000 https://platform.uno/?p=56127 🕓 3 MIN Your layout fix takes 10 seconds, finding the problem takes 30 minutes

The post Cross-Platform Layout Bugs: Runtime Inspection Beats Guessing appeared first on Uno Platform.

AI Tooling Cross-Platform
The Problem

Your page looks great on desktop. You switch to the Android emulator and the header is jammed under the status bar. The fix is almost always one line. But you just spent 25 minutes figuring out which element needs it, on which target, because you were working from impressions instead of evidence.

You know what happens next. You eyeball it. You guess it's a padding issue on the header. You hardcode a margin, rebuild, wait for the emulator, and check. Wrong element. Try again. Rebuild. Wait. Check. Closer, but now there's extra space on desktop.

There's a version of this workflow where you skip the guessing entirely.

The Fix Is Easy. Finding the Problem Is the Problem.

Uno Platform's App MCP connects your AI assistant to the running app. Not to the source code, to the actual rendered UI. It reads the live visual tree: margins, padding, bounds, position. Real values from real elements on the real target.

That changes the debugging question from "what do I think is wrong?" to "what do the numbers say?"

The Walkthrough

A ProfilePage with a header area. Looks correct on desktop. On Android, the header overlaps the status bar.

1 Capture the Baseline on Desktop

Run the app on desktop. Ask the assistant to inspect the ProfilePage header: margins, padding, rendered bounds of the top-level Grid and its first children.

The result: header Grid starts at Y=32. Correct inset below the title bar.

That's your baseline. Save it.

2 Inspect on Android

Same app, same page, Android target. Same inspection.

Header Grid starts at Y=0.

That's it. That's the diagnosis. The header doesn't account for the status bar. No guessing. No "I think it might be the padding." Y=32 on desktop, Y=0 on Android.

3 Get the Right Fix

Now you bring in the Uno MCP (the documentation server) with the evidence:

"The header Grid starts at Y=0 on Android. On desktop it's Y=32. What's the recommended way to handle safe area insets?"

The answer:

XAML
<Grid utu:SafeArea.Insets="Top">
    <!-- header content -->
</Grid>

One attribute. On Android, it adds the status bar offset. On desktop, it adds nothing. Platform-aware behavior, no conditional logic, no hardcoded values.
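For the utu:SafeArea attribute to resolve, the utu prefix has to map to the Uno Toolkit XAML namespace on the page. A minimal sketch, assuming the Uno Toolkit package is referenced (the page class name and surrounding layout are illustrative, not from the walkthrough):

```xml
<Page
    x:Class="MyApp.ProfilePage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:utu="using:Uno.Toolkit.UI">

    <Grid>
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto" />
            <RowDefinition Height="*" />
        </Grid.RowDefinitions>

        <!-- Header: SafeArea adds the status-bar inset on mobile, nothing on desktop -->
        <Grid utu:SafeArea.Insets="Top">
            <!-- header content -->
        </Grid>

        <!-- rest of the page goes in row 1 -->
    </Grid>
</Page>
```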

4 Verify on Android

Apply. Rebuild. Run on Android.

Inspect the header again. It now starts at Y=48 (Android's status bar height). No overlap.

5 Cross-Check Desktop. Don't Skip This.

This is the step everyone skips, and it's the one that matters most.

Switch back to desktop. Run. Inspect the header.

Y=32. Identical to the baseline.

The SafeArea attribute correctly does nothing where there's no inset. Your fix didn't regress the target that was already working.

If you'd hardcoded Margin="0,48,0,0" instead, this is where you'd discover it pushed the desktop header down by 16px. Your eyes might call that "close enough." Y=32 vs Y=48 would not.


Why This Matters More Than the Fix

You probably could have guessed SafeArea.Insets="Top". That's not the point.

The point is that without runtime inspection, you might have hardcoded a margin that fixes Android and breaks desktop. Or applied SafeArea to the page instead of the header, which shifts child layout in ways you won't notice until three sprints later. Or you might have gotten it right on the first guess and then had no way to prove it didn't introduce a subtle regression on your other targets.

The visual tree gives you numbers. Numbers don't lie and they don't look "close enough."

Common Cross-Platform Layout Pitfalls

| Symptom | Likely Cause | Diagnostic |
| --- | --- | --- |
| Content under status bar on Android | Missing SafeArea | Check header Y-position in visual tree |
| Extra space at top on iOS | SafeArea applied to wrong element | Compare bounds of header vs page |
| Different spacing on desktop vs mobile | Hardcoded margins instead of SafeArea | Compare Margin/Padding values across targets |
| Content clipped at bottom on mobile | Not accounting for navigation bar | Check bottom bounds in visual tree |
| Touch targets wrong despite correct layout | Element bounds smaller than visual | Inspect ActualWidth/ActualHeight vs visible area |

The Pattern

For any layout change targeting a platform-specific issue:

  1. Capture baseline on your primary target. Actual values, not a screenshot you eyeball.
  2. Inspect the broken target. Get exact numbers for the discrepancy.
  3. Propose a platform-aware fix using Uno Platform docs, not a hardcoded workaround.
  4. Verify the fix on the affected target.
  5. Re-verify the baseline on every other target.

Step 5 is the whole point. A fix that works on one target and silently regresses another isn't a fix. It's a time bomb with a longer fuse.

Configuring Claude Code for Real .NET Projects https://platform.uno/blog/configuring-claude-code-for-real-net-projects/ Thu, 12 Mar 2026 17:19:16 +0000 https://platform.uno/?p=56223 🕓 4 MIN Six files that make Claude Code stop guessing and start knowing your stack

The post Configuring Claude Code for Real .NET Projects appeared first on Uno Platform.

AI Tooling Claude Code
The Point

Claude Code works out of the box. It also asks permission for every dotnet build, has no idea your project uses MVUX, and will happily scaffold a project targeting .NET 8 when you wanted the latest. The fix isn't a better prompt. It's configuration.

Claude Code's setup lives across a handful of files. Once you understand what each one does, the tool starts working the way you think. What follows is how I've set things up for Uno Platform projects, though the patterns apply to any .NET codebase.

| Level | File | Purpose |
| --- | --- | --- |
| User | settings.json | Baseline permissions, global deny rules |
| User | CLAUDE.md | Global instructions, stack identity, session workflow |
| Project | .mcp.json | MCP server registry |
| Project | CLAUDE.md | Project instructions, architecture, doc references |
| Project | settings.json | Project permissions, hooks |
| Project | settings.local.json | Personal overrides (never committed) |

.claude/

Not a config file. Just the folder that holds settings.json and settings.local.json. Think .vscode/ or .vs/. It exists at two levels: ~/.claude/ for user-scoped, your-repo/.claude/ for project-scoped.

settings.json

The rules engine. It controls what Claude Code is allowed to do: which files it can read or write, which bash commands it can run, what gets denied.

For an Uno Platform project, this means pre-approving the dotnet CLI and git, while blocking access to .env files, signing keys, and destructive commands. You can also wire up hooks: automated commands that fire after specific actions. I have a PostToolUse hook that runs dotnet format every time Claude writes a .cs file. It keeps output consistent with my .editorconfig without me having to ask.
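As a sketch, a project-level .claude/settings.json along those lines might look like this (the exact permission patterns and hook fields are illustrative; check the Claude Code settings reference for the current schema):

```json
{
  "permissions": {
    "allow": [
      "Bash(dotnet build:*)",
      "Bash(dotnet test:*)",
      "Bash(git add:*)",
      "Bash(git commit:*)"
    ],
    "deny": [
      "Read(.env)",
      "Read(**/*.pfx)",
      "Bash(rm -rf:*)"
    ]
  },
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          {
            "type": "command",
            "command": "dotnet format"
          }
        ]
      }
    ]
  }
}
```

The matcher fires after file writes and edits, so generated .cs files stay aligned with .editorconfig without a manual prompt.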

It lives at .claude/settings.json in your project directory for project-scoped rules, or ~/.claude/settings.json for global defaults. Commit the project one. The team shares it.

settings.local.json

Same format, same capabilities, never committed. Claude Code auto-gitignores it. This is where machine-specific environment variables, API keys, and experimental permissions go. Things you don't want to inflict on the rest of the team.

It wins in the precedence chain. If the shared config denies something and your local config allows it, local takes priority.

Managed enterprise policies still override everything.

CLAUDE.md

This is the one that changes everything.

CLAUDE.md is a free-form markdown file that Claude Code reads at the start of every session. No schema, no enforcement mechanism. Just guidance, written in plain language, that shapes how the agent thinks.

But it needs to stay short. Claude Code wraps your CLAUDE.md in a system reminder that tells the model to ignore instructions that aren't relevant to the current task. As instruction count grows, Claude doesn't just ignore the new ones; it starts ignoring all of them uniformly. General consensus is under 300 lines. Shorter is better.

Don't put code style rules in here. That's what .editorconfig and dotnet format are for. Claude is an in-context learner. If your codebase follows a pattern, it picks it up from reading your files. Focus instead on what Claude can't infer: your stack identity, scaffolding rules, framework-level decisions like x:Bind over {Binding}, workflow expectations.

A few patterns worth including:

  • Commit after each meaningful change with descriptive messages. Makes reverting easy.
  • For complex features, write a spec to a markdown file first. Start a fresh session to implement from the spec. Clean context.
  • At the end of a session, have Claude summarize what was done and suggest improvements to the project docs. Continuous improvement loop.

For anything that isn't universally applicable (database schemas, component patterns, design tokens), don't inline it. Point to it. A Key References section that says "read docs/ARCHITECTURE.md before starting" gives Claude the map without bloating the instruction file.

Progressive disclosure: tell Claude how to find information, not all the information itself.

Placement matters. ~/.claude/CLAUDE.md applies globally. Scaffolding rules go here, because when you create a new project, the project directory doesn't have config files yet. CLAUDE.md at the repo root applies to that specific project. Project-level overrides user-level when they conflict.
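Put together, a lean project-level CLAUDE.md following these guidelines might read as follows (the docs/ file names match the project structure used in this post; everything else is an illustrative sketch):

```markdown
# MyApp

Cross-platform Uno Platform app (desktop, Android, iOS, WebAssembly) from a single codebase.

## Stack
- Latest .NET and Uno Platform; MVUX for presentation
- Prefer x:Bind over {Binding} in XAML
- Code style is enforced by .editorconfig and a dotnet format hook; do not restate style rules here

## Workflow
- Commit after each meaningful change with a descriptive message
- For complex features, write a spec to a markdown file first, then implement from it in a fresh session
- At session end, summarize what was done and suggest improvements to the project docs

## Key References
- Read docs/ARCHITECTURE.md before making structural changes
- Read docs/DESIGN-BRIEF.md before UI work
```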

.mcp.json

This is where you wire up external capabilities through the Model Context Protocol.

For Uno Platform, two servers matter. The remote server at mcp.platform.uno/v1 gives Claude Code access to up-to-date documentation: docs search, docs fetch, agent rules, usage best practices. The local App MCP connects to your running application and gives the agent runtime visibility: screenshots, visual tree snapshots, pointer clicks, keyboard input, element data context inspection.

The remote MCP helps you write code that follows conventions. The App MCP confirms the code actually works at runtime. Design-time knowledge vs. runtime truth.

.mcp.json lives at the project root, not inside .claude/. Commit it. When teammates clone the repo and open Claude Code, they get prompted to approve the servers once. Then everything just works.
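A sketch of that .mcp.json with both servers registered (the server names are arbitrary; the App MCP command mirrors the uno.devserver invocation used elsewhere in this series, so verify both entries against the current Uno Platform docs):

```json
{
  "mcpServers": {
    "uno-docs": {
      "type": "http",
      "url": "https://mcp.platform.uno/v1"
    },
    "uno-app": {
      "type": "stdio",
      "command": "dotnet",
      "args": ["dnx", "-y", "uno.devserver", "--mcp-app"]
    }
  }
}
```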

The Quick Reference

Every file, where it goes, whether to commit it.

User Level: ~/.claude/

| File | Path | Commit? | Purpose |
| --- | --- | --- | --- |
| settings.json | ~/.claude/settings.json | N/A | Baseline permissions, global deny rules |
| CLAUDE.md | ~/.claude/CLAUDE.md | N/A | Global instructions, stack identity, session workflow |

Project Level: Your Repo Root

| File | Path | Commit? | Purpose |
| --- | --- | --- | --- |
| .mcp.json | my-app/.mcp.json | Yes | MCP server registry |
| CLAUDE.md | my-app/CLAUDE.md | Yes | Project instructions, architecture, doc references |
| settings.json | my-app/.claude/settings.json | Yes | Project permissions, hooks |
| settings.local.json | my-app/.claude/settings.local.json | No | Personal overrides |

The Full Structure

Structure
~/
└── .claude/
    ├── settings.json
    └── CLAUDE.md

my-app/
├── .mcp.json
├── CLAUDE.md
├── .claude/
│   ├── settings.json
│   └── settings.local.json
├── docs/
│   ├── ARCHITECTURE.md
│   └── DESIGN-BRIEF.md
├── src/
└── MyApp.sln
Summary

Six files. Two user-level, four project-level. The only one you don't commit is settings.local.json.

settings.json controls permissions. CLAUDE.md controls behavior. .mcp.json controls integrations. The separation is clean. Keep CLAUDE.md lean. Let hooks and linters handle formatting. Point to detailed docs instead of inlining them. Let the setup improve itself over time.

Set this up once and Claude Code stops being a generic assistant that needs hand-holding every session. It becomes a tool that knows your stack.

That's the whole point.

Announcing Pragmatic AI in .NET Show https://platform.uno/blog/announcing-pragmatic-ai-in-net-show/ Wed, 11 Mar 2026 16:23:53 +0000 https://platform.uno/?p=56163 🕓 4 MIN Announcement AI in .NET New Show A biweekly livestream about what it's actually like to build software with AI. Real developer stories, honest tool assessments, agentic workflow deep dives. Every other Thursday at 11 AM ET — live on YouTube, X, and LinkedIn. If you've spent any time building software in the last couple of […]

The post Announcing Pragmatic AI in .NET Show appeared first on Uno Platform.

Announcement AI in .NET
New Show

A biweekly livestream about what it's actually like to build software with AI. Real developer stories, honest tool assessments, agentic workflow deep dives. Every other Thursday at 11 AM ET — live on YouTube, X, and LinkedIn.

If you've spent any time building software in the last couple of years, you've felt the shift. AI coding assistants, agentic workflows, LLM-powered UI — it's not a distant future anymore. It's your pull request queue, your design review, your sprint planning. AI is now part of everyday software development.

But most of what we read about AI in software development is either breathless hype or dismissive cynicism. What's harder to find are honest accounts from developers actually in the trenches — shipping real products, hitting real walls, figuring it out as they go.

That's exactly why we're launching the Pragmatic AI in .NET Show.

Introducing the Pragmatic AI in .NET Show

Every Other Thursday ยท 11 AM ET

Each episode will feature developers sharing what and how they're building with AI — the wins, the surprises, and the moments where AI didn't quite do what they expected. We'll dig into the latest developer AI tools, explore agentic workflows, and have frank conversations about where this technology helps and where it still has a way to go.

No hype. No demos that only work in ideal conditions. Just developers talking honestly about what it's like to build software today.

The Developer Landscape Has Genuinely Changed

Let's be honest about where we are. A few years ago, AI in a developer's workflow mostly meant autocomplete that sometimes got lucky. Today, it's something qualitatively different.

AI can now scaffold entire app features, generate test suites, catch bugs during code review, and help developers think through architecture decisions — all before lunch. Tools like GitHub Copilot, Claude Code, Cursor, Codex, and newer agentic frameworks are becoming a real part of how many teams ship software.

But "the landscape has changed" doesn't tell the whole story. The more interesting question is: changed in what ways, for whom, and at what cost?

Here's what we're actually seeing in the .NET community:

  • AI tools genuinely accelerate certain kinds of work — boilerplate, CRUD operations, test generation, documentation.
  • They also introduce new categories of problems: hallucinated APIs, subtle logic errors that pass code review, and over-reliance on generated code that developers don't fully understand.
  • The developers getting the most value aren't treating AI as a replacement for judgment — they're treating it as a highly capable but fallible collaborator.
  • And the craft of prompting, reviewing AI output, and integrating it into a real codebase is itself a skill that takes time to develop.

None of this is a reason to stay on the sidelines. But it is a reason to be thoughtful.

The Realities of Building Software with AI

There's a version of the AI narrative: describe what you want, AI builds it, and you ship. If you've tried this on anything beyond a toy project, you know it's more complicated than that.

The reality is messier and more interesting. AI can dramatically speed up parts of your workflow while introducing friction in others. It works best when developers have clarity on what is being built — AI can amplify intent, for better or worse. A vague prompt yields vague code.

"AI is a force multiplier. But it multiplies whatever developers bring to the table — clear thinking, good architecture, solid testing habits. The fundamentals still matter."

Agentic workflows — where AI doesn't just respond to individual prompts but takes sequences of actions toward a goal — are genuinely exciting and genuinely tricky. Getting an agent to reliably navigate a real codebase, understand conventions, and make changes that don't break things downstream is an active area of work, not a solved problem.

We want to build a space where developers can talk honestly about all of this. Where's the leverage? Where are the landmines? What does it actually look like to integrate AI into a professional .NET development workflow?

What We'll Cover

Each episode of the Pragmatic AI in .NET Show will dig into:

  • Real developer stories — folks building actual products, not demo apps.
  • The latest in developer AI tools — what's new, what's worth attention, and honest assessments of what's still rough around the edges.
  • Agentic workflows — practical exploration of autonomous AI patterns and where they fit in a .NET context.
  • The meta-skills — prompting, reviewing, integrating, and knowing when not to use AI.

We're intentionally keeping the format conversational. This isn't a polished tutorial series. It's more like pulling up a chair with developers who are figuring this out alongside you.

Why This Matters for the .NET Community

The .NET ecosystem is in an interesting moment. C#/.NET and the broader Microsoft stack have always attracted developers who care about building things that work — reliably, at scale, over time. That ethos doesn't go out the window just because AI is in the picture.

If anything, it makes the conversation more important. How do you maintain code quality when a significant chunk of your codebase is AI-generated? How do you onboard new developers when your workflows have changed? How do you make good architectural decisions when AI can scaffold almost anything?

These are the conversations we want to have. And we think the .NET community — pragmatic by nature — is exactly the right place to have them.

At Uno Platform, we spend a lot of time thinking about how to make cross-platform .NET development faster and more accessible. AI tools are a big part of that picture โ€” MCP tools that give AI "eyes and hands" for app interactivity, smarter design-to-code workflows, and AI-assisted debugging. Good tooling and good judgment work together.

Join Us On The Show

The Pragmatic AI in .NET Show kicks off this Thursday at 11 AM ET. We'd love to have you there.

Whether you're already deep in AI-powered workflows or just starting to explore what's possible, there's something in this for you. Come for the developer stories. Stay for the honest conversation about what building software actually looks like right now.

First Three Guests

Kevin Griffin
.NET Foundation President & Consultant
Jonathan "J" Tower
Founder at TrailHead Technology Partners
Eric D. Boyd
Founder/CEO of ResponsiveX
Be a Guest

If you have a story to share — something you've built, a workflow that surprised you, a tool that changed how you work — we want to hear from you. Reach out at [email protected].

See you on the show.

Debugging Silent XAML Binding Failures in 2 Minutes https://platform.uno/blog/debugging-silent-xaml-binding-failures-in-2-minutes/ Thu, 05 Mar 2026 21:21:34 +0000 https://platform.uno/?p=55867 🕓 3 MIN Stop guessing: debug silent XAML binding failures with runtime inspection via App MCP

The post Debugging Silent XAML Binding Failures in 2 Minutes appeared first on Uno Platform.

AI Tooling Debugging
The Problem

A button does nothing when you tap it. No crash. No exception. The markup compiles, the page renders, the button appears enabled, it just sits there.

Silent binding failures are the most time-consuming class of bug in XAML development because they give you nothing to work with.

If you've built anything with XAML, you've been here. No error. No stack trace. Just a control that quietly ignores your intent.

The traditional debugging path looks like this: read the XAML, scan for typos, check the code-behind, add some debug output, rebuild, run, check output, repeat. Depending on the complexity of the page, that's 15–60 minutes of your life.

There's a faster way.

Runtime Inspection with Uno Platform App MCP

Uno Platform provides a local MCP server (App MCP) that connects your AI agent to your running application. Instead of guessing what's happening at runtime, the agent can inspect the live visual tree, check DataContext values, and read binding states directly.

Here's how that changes the debugging workflow.

A Real Example: The Missing Command

Setup

An EditProfilePage has a Save button. The page uses MVUX with a partial record model. The button is bound to a command:

XAML
<Button Content="Save" Command="{Binding SaveProfile}" />

The model has a method that should generate the command:

C#
public partial record EditProfileModel
{
    public async ValueTask SaveProfle(CancellationToken ct)
    {
        // save logic
    }
}

The app builds. The page renders. The button appears. Nothing happens when you tap it.

1 Inspect the Running App

With the app running, ask your AI agent:

"Using App MCP, inspect the Save button on EditProfilePage. Get a visual tree snapshot focused on the Button element. Call uno_app_get_element_datacontext on the Save button. Report: Command binding value, DataContext type and instance, IsEnabled state."

2 Read the Evidence

The agent reports back:

  • ✓ The Button exists in the visual tree
  • ✓ IsEnabled is true
  • ✓ The DataContext is a valid EditProfileModel instance
  • ✗ The Command property is null

That's the diagnosis. The DataContext is fine, the model is injected correctly. But the Command binding resolved to nothing.

3 Cross-Reference the Binding Path

Now you know exactly where to look. The XAML binds to SaveProfile. MVUX generates commands from public methods on the model, using the method name as the command name.

Look at the method name again: SaveProfle. Missing an "i."

MVUX generated a command called SaveProfle (matching the method). The XAML binds to SaveProfile. No match. No error. Just null.

4 Fix and Verify

Rename the method to SaveProfile. Rebuild. Run. Then verify:

"Using App MCP, inspect the Save button again. Confirm the Command property is now non-null. Use uno_app_pointer_click to tap the button and take a screenshot showing the result."

The agent confirms the Command resolves to a non-null IAsyncCommand. The button works.
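For completeness, the corrected model from the setup above, identical except for the one-character rename:

```csharp
public partial record EditProfileModel
{
    // "SaveProfile" now matches the XAML binding path, so the
    // MVUX-generated command resolves instead of returning null.
    public async ValueTask SaveProfile(CancellationToken ct)
    {
        // save logic
    }
}
```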

Total time: ~2 minutes
Why It Works

The key insight is that uno_app_get_element_datacontext tells you the truth about runtime state. When the DataContext is valid but a specific property (like Command) is null, the problem is in the binding path: the name doesn't match what's available on the DataContext. Every time.

Without runtime inspection, you're working backwards from symptoms. With it, you're working forward from evidence.

The Pattern for Any Silent Binding Failure

This isn't specific to commands. The same approach works for any binding that silently fails:

  1. Inspect the element with App MCP's visual tree snapshot
  2. Check the DataContext with uno_app_get_element_datacontext
  3. Compare what the binding expects vs. what the DataContext actually provides

| DataContext | Bound Property | Diagnosis |
| --- | --- | --- |
| Null | Any | DataContext not set — check page registration, DI, or navigation |
| Valid, wrong type | Any | Wrong model injected — check route registration |
| Valid, correct type | Null | Binding path doesn't match a property on the model |
| Valid, correct type | Has value | Binding works — problem is elsewhere (logic, not binding) |

That table covers about 90% of silent binding failures.

Binding Verification Checklist

Use this after any change that touches bindings:

  • DataContext is set and non-null
  • Binding path matches the MVUX-generated property name exactly (case-sensitive)
  • Property type matches expected type (bool for ToggleSwitch.IsOn, string for TextBlock.Text)
  • For commands: IAsyncCommand instance is non-null
  • For two-way bindings: Mode=TwoWay is set in XAML
  • State updates reflect in visual tree after interaction

Getting Started

If you want to try this workflow, you need two things:

  1. Uno Platform App MCP running locally, connected to your running app
  2. An MCP-capable AI agent (Claude Code, Codex CLI, VS Code with Copilot, etc.)

Setup takes about 5 minutes:

Terminal
claude mcp add --scope user --transport stdio "uno-app" -- dotnet dnx -y uno.devserver --mcp-app

Codex 5.3 vs 5.2 for .NET Developers https://platform.uno/blog/codex-comparison-for-net-developers/ Wed, 25 Feb 2026 17:54:19 +0000 https://platform.uno/?p=55496 🕓 4 MIN When OpenAI unveiled GPT-5.3-Codex in early February 2026, it marked a significant evolution in the Codex product line — not just as a coding assistant, but as an autonomous agentic work partner. Where earlier generations of Codex focused on generating high-quality code snippets and assisting developers with long-running tasks, version 5.3 is positioned as a […]

The post Codex 5.3 vs 5.2 for .NET Developers appeared first on Uno Platform.


When OpenAI unveiled GPT-5.3-Codex in early February 2026, it marked a significant evolution in the Codex product line — not just as a coding assistant, but as an autonomous agentic work partner. Where earlier generations of Codex focused on generating high-quality code snippets and assisting developers with long-running tasks, version 5.3 is positioned as a more capable collaborator that blends coding prowess with professional reasoning and interactive execution. OpenAI is pushing Codex toward acting more like a fellow developer — one that can take direction mid-task, maintain context over extended workflows, and tackle complex real-world development jobs end-to-end.

TL;DR: I built the same Uno Platform app with Codex 5.3 and 5.2. Follow along the journey.

Why do .NET developers care? The developer chops Codex 5.3 brings to the table apply directly to the realities of enterprise app development. Add the nuances of running apps across platforms from a single shared codebase, and the benefits of a mature agentic partner that can reason and churn out solid code become obvious.

Uno Platform is the most flexible open-source platform for modern cross-platform .NET development, complete with enterprise-grade AI and visual design tools. Paired with Uno Platform Studio, .NET developers can elevate productivity with runtime visual designers and dependable AI Agents/MCP tools for contextual AI intelligence – all towards building apps from any OS/IDE and any AI Agent, to run on mobile, web, desktop or embedded devices.

For .NET developers building cross-platform apps, AI tooling in Uno Platform works with any agentic workflow, including GPT-Codex. Let's take a closer look at how Codex 5.3 fares against 5.2 for Uno Platform .NET apps – we're in hands-off mode, pitching AI models against each other.

Head-to-Head Key Numbers: Codex 5.3 vs Codex 5.2

Here are some of the benchmark figures and performance deltas published in third-party coverage and reliable sources:

| Metric/Benchmark | GPT-5.2-Codex | GPT-5.3-Codex | Difference |
| --- | --- | --- | --- |
| SWE-Bench Pro Coding | ~56.4% | ~56.8% | +0.4 pts |
| Terminal-Bench Skills | ~64.0% | ~77.3% | +13.3 pts |
| OSWorld-Verified Tasks | ~38.2% | ~64.7% | +26.5 pts |
| Cybersecurity Challenges | ~67.4% | ~77.6% | +10.2 pts |
| GDPval Pro Tasks | ~70.9% | ~70.9% | ~Match |
| Inference speed | Baseline | ~25% faster | ~25% faster |

So yeah, Codex 5.3 is better and faster than Codex 5.2 in almost every way. But does 5.2 still have some deeper reasoning tricks up its sleeve? Let's check out the developer experience.

Uno Platform Experience with MCPs

Uno Platform MCP Servers/Tools work really well with Codex – AI Agents can not only generate code, but also use the tools to verify app functionality. To put the two versions of Codex to a fair test, how about we ask them to build the same app with the same exact prompts/tools?

Overall Prompt:

Build me a car dashboard UI

  • Show a map with route overlay
  • Show navigational overlay on the map
  • Show dynamic lane visualizing our car
  • Show live traffic going by/overtaking in lanes
  • Show overlay to change Seat settings
  • Show overlay to control A/C with airflow
  • Show Media info that is playing now
  • Show overlay to control what Media is playing
  • Changeable UI data bound to overlay info


Additional things handed to Agents:

  • A couple of car dashboard screenshots for inspiration – the likes of Tesla, BMW, and Kia.
  • Images for the top of the car for lanes and the seat graphic
  • Uno Platform Docs MCP for best practices
  • Uno Platform App MCP with app interactivity tools for verification


The goal was to see what each Codex could do on its own – fire off prompts in the CLI and go hands-off.

Here's what GPT-5.2-Codex built:

Here's what GPT-5.3-Codex built:

Let's look at the comparison – this is mostly anecdotal, from a developer's perspective:

| Criteria | Codex 5.2 | Codex 5.3 |
| --- | --- | --- |
| Overall outcome | Performed well on the overall task | Performed quicker on the overall task |
| Final result | Fully functional Uno Platform cross-platform app | Fully functional Uno Platform cross-platform app |
| Time to complete | ~35 minutes | ~20 minutes |
| Mapping approach | Used a real map | Did not use a real map; rather plotted names on canvas |
| Map awareness | Did not initially understand which part of the map represented what | Not applicable |
| Route overlay accuracy | Initially drew route over large bodies of water | Did not have this issue |
| Fixes required for route | Needed extra prompts to correct route overlay over water | Not applicable |
| Traffic labeling | Initially labeled traffic incorrectly | Canvas overlay did not need accuracy |
| Moving vehicles in center lane | Required extra prompt to prevent vehicles overlapping our car | Required extra prompt to prevent vehicles overlapping our car |
| Mapping overlay data binding | Needed fixes to properly bind overlay data changes to UI | Quicker and more accurate in binding overlay data changes to UI |
| Climate control airflow visualization | Took a literal and bold approach to represent airflow around dashboard | Took a safer approach using moving dots with variable velocity to represent airflow |

Overall, both Codex 5.2 and Codex 5.3 performed admirably – an entire functional UI built with AI and tested with Uno Platform MCP tools. While Codex 5.3 was substantially quicker, the deeper reasoning and braver, more realistic approach taken by Codex 5.2 is to be appreciated.

Comparing Codex Agents

In many ways, developers today are spoiled for choice. Both Codex 5.2 and Codex 5.3 represent the cutting edge of AI-assisted software development, and itโ€™s hard to go wrong with either. Each model brings a distinct personality to the table, and understanding those nuances helps teams pick the right tool for the moment rather than declaring a single universal winner.

Codex 5.2 often felt bold and exploratory: a model willing to take calculated risks, reason deeply through ambiguous problems, and push toward creative or non-obvious solutions. For developers tackling complex architecture, experimental ideas, or problems that benefit from heavier reasoning, 5.2 proved to be a remarkably capable partner.

Codex 5.3, by contrast, refines the experience. It is faster, more consistent, and noticeably more robust as an end-to-end coding agent. The gains in execution-style benchmarks, workflow fluency, and responsiveness make it particularly well suited for real-world development loops: writing code, iterating, fixing, and finishing tasks with fewer stalls and less friction.

Ultimately, this isn't a story of replacement but of progression. Codex 5.2 showcased strong reasoning and risk-tolerant problem solving, while Codex 5.3 builds on that foundation with speed, reliability, and agentic strength. Developers truly have an embarrassment of riches, and whichever path they choose, they're backed by some of the most capable coding models ever built.

Ready to Build with AI-Assisted Workflows?

Sign up to Uno Platform Studio for free and set up Uno MCP and App MCP in minutes. Start building cross-platform apps with AI agents that actually understand your code.

The post Codex 5.3 vs 5.2 for .NET Developers appeared first on Uno Platform.

Guardrails, Hard Cases, and Daily Habits for MCP Adoption https://platform.uno/blog/guardrails-hard-cases-and-daily-habits-for-mcp-adoption/ Tue, 24 Feb 2026 20:01:48 +0000 https://platform.uno/?p=55401 ๐Ÿ•“ 6 MINThe On-Ramp to AI-Assisted Development with Uno Platform

MCP Series Part 3 of 3
The Final Piece

This part covers the scenarios where MCP earns its keep: the bugs that would otherwise take you an hour of staring at XAML, and the ones where it's wasting your time.

Knowing the difference is what separates productivity from ritual.

Parts 1 and 2 gave you the setup and the loop. You can scaffold features, verify them at runtime, and iterate with evidence. That's the foundation.

Scenario C

Debug a Silent Binding Failure with App MCP Evidence

Context
A Save button on EditProfilePage does nothing when tapped. No crash. No exception in the output window. The button just sits there.

This is the most frustrating class of bug in XAML development: silent binding failures. The markup compiles, the page renders, the button appears enabled, but the Command resolves to null at runtime because of a typo or a missing DataContext.

Goal
Find and fix the broken command binding without manually stepping through the visual tree in a debugger.
Step 1: Define
  • Save button on EditProfilePage executes SaveProfile() when tapped
  • Button should disable while save is in progress (MVUX auto-disables)
Step 2: Inspect with App MCP first App MCP

Prompt:

"Using App MCP, inspect the Save button on EditProfilePage:

  1. Get a visual tree snapshot focused on the Button element
  2. Call uno_app_get_element_datacontext on the Save button
  3. Report: Command binding value, DataContext type and instance, IsEnabled state"
What comes back
The agent reports that the Button exists, IsEnabled is true, the DataContext is a valid EditProfileViewModel instance, but the Command property is null.

That's the clue. The DataContext is fine. The model is injected. But the generated command isn't binding.

Step 3: Diagnose Uno MCP

Prompt:

"The Command property on the Save button is null, but the DataContext is a valid EditProfileViewModel. Using Uno MCP, search for how MVUX generates commands from model methods. Then check: does my model have a public method that would generate a command matching the binding path in the XAML?"

The agent finds the problem
C#
// Your model
public partial record EditProfileModel
{
    public async ValueTask SaveProfle(CancellationToken ct)  // 👈 Typo: "Profle"
    {
        // save logic
    }
}
XAML
<!-- Your XAML -->
<Button Content="Save" Command="{Binding SaveProfile}" />  <!-- 👈 Expects "SaveProfile" -->

MVUX generated a command called SaveProfle (matching the method name). The XAML binds to SaveProfile. No match. No error. Just null.
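The failure mode is easy to reproduce outside of XAML. Here's a minimal sketch in plain Python (not MVUX internals; the names are illustrative) of name-based command resolution: a lookup keyed by the generated command name, where a mismatched binding path yields null instead of an error:

```python
# Hypothetical sketch of name-based binding resolution: the generated
# commands are keyed by the model's method name, and a binding path
# that doesn't match silently resolves to None -- no exception raised.
generated_commands = {"SaveProfle": "<ICommand instance>"}  # typo came from the model

def resolve_binding(path: str):
    # Binding engines fall back to null rather than raising on a miss.
    return generated_commands.get(path)

print(resolve_binding("SaveProfile"))  # the XAML's path: no match -> None
print(resolve_binding("SaveProfle"))   # the generated (misspelled) name resolves fine
```

The fix works from either direction: rename the method, or change the binding path. Renaming the method is better because it also fixes the typo at the source.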

Steps 4 & 5: Fix, Build, Verify App MCP

Rename the method to SaveProfile. Rebuild.

"Using App MCP, inspect the Save button again. Confirm the Command property is now non-null and the button executes when clicked. Use uno_app_pointer_click to tap the button and take a screenshot showing the result."

The Lesson

This bug took 2 minutes with App MCP. Without it, you'd be reading XAML, scanning for typos, maybe adding debug output, rebuilding, and repeating. The visual tree doesn't lie. When uno_app_get_element_datacontext shows the DataContext is valid but the Command is null, the problem is in the binding path, every time.

Scenario D

Cross-Platform Parity Verification

Context
A feature works on Windows. You need to confirm it renders and behaves correctly on Android and WebAssembly before merging.
Goal
Structured comparison of UI state across three targets, not just "it looks okay."
Step 1: Define your baseline App MCP

Run on Windows. Capture evidence:

"Using App MCP, get a screenshot and visual tree snapshot of [PAGE] on Windows. This is my baseline."

Save the output.

Step 2: Run on Android. Compare. App MCP

Prompt:

"Using App MCP, get a screenshot and visual tree snapshot of [PAGE] on Android. Compare against this Windows baseline:

[PASTE WINDOWS SNAPSHOT]

List differences in: element count, layout bounds, visibility states, binding values, and any missing elements."

Step 3: Run on WebAssembly. Compare.

Same prompt, different target.

What you're looking for

Check | What it catches
Element count differs | A control doesn't render on one platform
Layout bounds differ significantly | Spacing/margin issues, SafeArea gaps
Visibility states differ | Platform-conditional logic not working
Binding values differ | Platform-specific service returning different data
Missing elements | Conditional XAML or platform-specific controls
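The comparison itself is mechanical, which is exactly why it delegates well. A rough sketch of the diff logic in plain Python, over two invented flattened snapshots (not the actual App MCP output format):

```python
# Compare two flattened visual-tree snapshots (element name -> properties).
# The snapshot shapes here are illustrative, not the real App MCP schema.
windows = {
    "HeaderGrid": {"bounds": (0, 0, 800, 64), "visible": True},
    "BodyPanel":  {"bounds": (0, 64, 800, 536), "visible": True},
}
android = {
    "HeaderGrid": {"bounds": (0, 0, 400, 64), "visible": True},
    # BodyPanel absent: the control didn't render on this target
}

def diff_snapshots(baseline, target):
    issues = []
    for name, props in baseline.items():
        if name not in target:
            issues.append(f"{name}: missing on target")
        elif props["visible"] != target[name]["visible"]:
            issues.append(f"{name}: visibility differs")
    return issues

print(diff_snapshots(windows, android))  # ['BodyPanel: missing on target']
```

The agent does the same kind of walk over much larger trees, which is why pasting the baseline snapshot into the prompt matters: without it, there is nothing to diff against.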
Steps 4 & 5: Fix discrepancies, re-verify

For each difference, bring in Uno MCP to propose a platform-aware fix. Feed the App MCP comparison back as context:

"App MCP shows this StackPanel has Padding=24 on Windows but Padding=0 on Android. Using Uno MCP, search for how to apply consistent padding that respects platform conventions. Propose a fix."

Important: This is the step people skip. Don't. A fix for Android can break Windows. Always close the loop.

Scenario E

Add a Screen and Wire Navigation with Parameters

Context
You have a list of items on a MainPage. You need a DetailPage that shows the selected item's data.
Goal
Full navigation wiring with parameter passing, verified end-to-end.

The Loop (Condensed)

Define
"Tapping an item in the list navigates to DetailPage. DetailPage receives the item ID and displays the item data."
Plan Uno MCP

"Using Uno MCP, search for the recommended navigation pattern for passing parameters between pages with Uno Navigation extensions. I need to navigate from a list selection to a detail page, passing an item ID. Propose the route registration, the navigation request, and the DetailModel that receives the parameter."

Apply, Build, Run
Verify App MCP

"Using App MCP:

  1. Get a visual tree snapshot of MainPage - confirm the list renders with items
  2. Click the first item using uno_app_pointer_click
  3. Take a screenshot - confirm navigation occurred
  4. Get a visual tree snapshot of DetailPage - confirm the item data is displayed
  5. Call uno_app_get_element_datacontext on the detail content area - confirm the model received the item ID"

Common Pitfalls

  • ✗ Navigation route registered but parameter type doesn't match
  • ✗ DetailModel constructor expects the parameter but DI doesn't provide it
  • ✗ Navigation occurs but DataContext is empty (parameter lost in transit)
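The first pitfall (route registered but parameter type doesn't match) behaves just like the silent binding failure in Scenario C: nothing throws, the data simply never arrives. A toy sketch in Python, not the Uno Navigation API, of why a type mismatch ends with an empty DataContext:

```python
# Toy route registry: each route declares the parameter type it accepts.
# Names and shapes are illustrative, not Uno Navigation extensions.
routes = {"Detail": {"param_type": int}}

def navigate(route: str, param):
    expected = routes[route]["param_type"]
    if not isinstance(param, expected):
        # Parameter lost in transit: the page opens with an empty DataContext.
        return {"page": route, "datacontext": None}
    return {"page": route, "datacontext": {"item_id": param}}

print(navigate("Detail", 42))    # parameter arrives in the model
print(navigate("Detail", "42"))  # wrong type: DataContext is empty, no error
```

This is why step 5 of the verification prompt above calls uno_app_get_element_datacontext on the detail content area: it catches the "navigation happened but the parameter vanished" case that a screenshot alone would miss.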
Best Practices

Guardrails and Best Practices

Keep diffs small, commit often

One feature per dev loop cycle. If the agent proposes eight file changes, ask yourself: can this be two separate loops? Smaller diffs are easier to verify and easier to revert.

Never trust generated UI without App MCP verification

This is the single most important habit. Code that compiles is not code that works. XAML that renders on Windows might not render on Android. A binding that resolves on one page might be null on another because of a DataContext scope issue. uno_app_visualtree_snapshot is your source of truth.

Always align with existing repo patterns

If the agent proposes a new pattern that doesn't match your codebase, push back. "My project uses MVUX partial records; don't generate a ViewModel with INotifyPropertyChanged." Uno MCP should be reading your conventions from the docs, but your project might have additional conventions the remote MCP doesn't know about. You're the authority on your own codebase.

Stop conditions: when to stop using MCP

  • The agent is proposing the same fix repeatedly and it's not working. This means the problem is outside the agent's context. Debug manually.
  • You're spending more time writing prompts than writing code. If you already know the fix, just fix it. MCP is a productivity tool, not a ceremony.
  • The issue requires deep platform-specific knowledge you already have. If you know the Android lifecycle issue by heart, don't ask the agent to search for it.
  • You're debugging a third-party library, not your own code. App MCP can see your visual tree, but it can't see into native platform internals or third-party library state.

When MCP is valuable vs. unnecessary

✓ VALUABLE

  • Scaffolding new pages/models/routes
  • Cross-platform verification
  • Binding debugging
  • Unfamiliar Uno Platform APIs

✗ UNNECESSARY

  • Simple typo fixes
  • Style tweaks you can eyeball
  • Code you've written dozens of times
  • Pure logic bugs with no UI component
Habits

Daily Habits for Adoption

Before starting a feature
Run one uno_app_visualtree_snapshot on your current working page. Get a baseline. Know what "before" looks like.
During development
Use the six-step loop for any change that touches UI, navigation, or bindings. Skip it for pure backend/logic changes.
Before a PR
Run App MCP verification on each target you support. Capture the screenshot output and include it in your PR description as evidence. "Verified on Windows, Android, WASM; screenshots attached."
Weekly
Review which prompts you used most. Refine your prompt kit. Delete prompts that never helped. Add new ones for patterns you repeat.
Wrap Up

The goal was never to make AI write your app. It's to make verification automatic and scaffolding consistent.

Uno MCP gives the agent knowledge of Uno Platform conventions and current docs. App MCP gives it knowledge of your running application's actual state. Together, they turn "generate and hope" into "generate and verify."

Three things to do right now
  1. Connect both MCPs if you haven't (Part 1)
  2. Run the full dev loop on one real feature; the Settings page scenario from Part 2 is a good first candidate
  3. Set a stop condition before you start: decide in advance when you'll stop prompting and just write the code

That's it. No more "AI-generated XAML that compiles but doesn't render." You have the tools to check.

Series Complete

Ready to Build Smarter?

You have the setup, the workflow, and the guardrails. Start with the Settings page scenario and build from there.

An easy agentic Workflow for Developing with Uno Platform MCPs https://platform.uno/blog/an-easy-agentic-workflow-for-developing-with-uno-platform-mcps/ Thu, 19 Feb 2026 20:44:04 +0000 https://platform.uno/?p=55104 ๐Ÿ•“ 5 MINThe On-Ramp to AI-Assisted Development with Uno Platform

MCP Series Part 2 of 3
The Workflow

Define → Plan → Apply → Build → Verify → Iterate.

Six steps. Verification built into every cycle, not as an afterthought, but as a gate.

Part 1 got you connected. Both MCPs work. You've seen them in action.

Now you need a process, not just "ask the AI and hope." The difference between developers who get value from AI assistants and those who don't is almost always workflow, not capability. The tools are the same. The loop is what matters.

The Loop

The Uno Platform AI Dev Loop

1 DEFINE State the outcome + write clear acceptance criteria
2 PLAN Uno MCP: propose changes aligned with repo patterns
3 APPLY Apply the smallest safe patch
4 BUILD Compile + run the target
5 VERIFY App MCP: inspect runtime UI state vs. acceptance criteria
6 ITERATE If mismatches, return to Plan with App MCP evidence

Every step involves a specific action. Steps 2 and 5 are where the MCPs earn their keep.

Step by Step

Step 1 - Define: Write down what you're changing and how you'll know it worked. Be specific. "Add a settings page" is not acceptance criteria. "A SettingsPage with three ToggleSwitches, each bound to an IState, page reachable from MainPage navigation" is.

Step 2 - Plan: Ask the agent (via Uno MCP) to propose the changes. It searches the docs, reads your project conventions, and proposes file diffs that match your existing patterns. You review before applying.

Step 3 - Apply: Take the smallest safe patch. One feature per cycle. If the agent proposes eight file changes, question whether it can be split.

Step 4 - Build: dotnet build, run on your target. Nothing fancy here.

Step 5 - Verify: With the app running, use App MCP to inspect the visual tree (uno_app_visualtree_snapshot), take screenshots (uno_app_get_screenshot), and check DataContext state (uno_app_get_element_datacontext). Compare what's actually rendered against your acceptance criteria from Step 1.

Step 6 - Iterate: If verification reveals mismatches, feed the App MCP output back to the agent. "The visual tree shows the ToggleSwitch exists but its DataContext is null. Here's the snapshot. Fix the binding." This is where the loop gets powerful: you're giving the agent evidence, not just a vague "it doesn't work."

Key principle: Never skip Step 5. Code that compiles is not code that works. This is especially true for cross-platform UI, where something can render perfectly on Windows and break on Android.
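The loop above fits in a dozen lines of pseudocode. A schematic sketch in Python, where every callable is a stand-in for a prompt or an MCP tool call, not a real API:

```python
# Schematic of the dev loop; all callables here are placeholders for
# agent prompts and MCP tool calls, not real APIs.
def dev_loop(criteria, plan, apply, build, verify, max_iterations=3):
    evidence = None
    for _ in range(max_iterations):
        patch = plan(criteria, evidence)     # Uno MCP: propose changes
        apply(patch)                         # smallest safe diff
        build()                              # compile + run the target
        passed, evidence = verify(criteria)  # App MCP: runtime inspection
        if passed:
            return True                      # verification is the gate
    return False                             # stop condition: debug manually

# Minimal stub run: verification fails once, then passes after the
# App MCP evidence is fed back into the next plan.
state = {"fixed": False}
result = dev_loop(
    criteria=["toggle bound"],
    plan=lambda c, ev: "fix binding" if ev else "initial patch",
    apply=lambda p: state.update(fixed=(p == "fix binding")),
    build=lambda: None,
    verify=lambda c: (state["fixed"], "snapshot: DataContext null"),
)
print(result)  # True after one iteration of feedback
```

Note the shape: verify gates the exit, and its evidence is an input to the next plan. That feedback edge is the whole difference between this loop and "generate and hope."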

Scenario A

Scaffold a Settings Page with Toggles and Persistence

Context
An existing Uno Platform app using MVUX, Navigation extensions, and Material theme. No settings page exists yet.
Goal
Add a SettingsPage with three toggle switches (Dark Mode, Notifications, Analytics) that persist state across app restarts. Page accessible from main navigation.
Step 1: Define Acceptance Criteria

Write acceptance criteria:

  • SettingsPage exists with three ToggleSwitch controls
  • Each toggle is bound to an IState<bool> in a SettingsModel (MVUX)
  • Toggle state persists via IWritableOptions or local storage
  • SettingsPage is reachable from main navigation
  • All toggles default to false on first launch
Step 2: Plan Uno MCP

Prompt for the agent:

"I want to add a SettingsPage to my Uno Platform app. Review my project structure. I need:

  • A SettingsModel (MVUX partial record) with three IState<bool> properties
  • A SettingsPage.xaml with three ToggleSwitches bound to those states
  • Navigation route registered so SettingsPage is reachable from MainPage
  • State persistence using the project's existing storage pattern

Use Uno MCP docs to verify the correct MVUX state pattern and navigation registration. Propose the minimal set of file changes. Match my existing naming, namespace, and folder conventions."

What the agent should do
Call uno_platform_docs_search for MVUX states, navigation routes, and IWriteableOptions. Propose a SettingsModel using partial record with IState<bool>, the XAML page, and the navigation route registration.
What the proposed changes might look like
Claude Response
I've reviewed everything. Here's my proposed plan - 4 file changes total:

1. New file: Presentation/SettingsModel.cs
   MVUX partial record with 3 IState<bool> properties, injecting 
   IWritableOptions<SettingsConfig> to persist toggles.

2. New file: Presentation/SettingsPage.xaml + code-behind
   Three ToggleSwitch controls bound to the states, with NavigationBar,
   matching existing page conventions.

3. New file: Models/SettingsConfig.cs
   Immutable record with three bool? properties for persistence via 
   IWritableOptions.

4. Modified: App.xaml.cs
   - Register SettingsConfig as a configuration section
   - Add ViewMap<SettingsPage, SettingsModel>() in RegisterRoutes
   - Add new("Settings", ...) nested route

What this uses:
- MVUX IState<bool> with State.Value() pattern (matching MainModel)
- IWritableOptions<SettingsConfig> for persistence
- Region-based navigation route registration (matching SecondPage pattern)
- No new NuGet packages needed

Unresolved questions:
- What should the three boolean settings represent? I'll use generic 
  labels: Notifications, Dark Mode, Analytics - want me to change these?
- Should SettingsPage be reachable from MainPage via a button, or just 
  registered as a route?
Steps 3 & 4: Apply and Build

Review the proposed files. Check that namespaces, folder locations, and patterns match your project. Apply the changes.

Run on your target.

Step 5: Verify App MCP

Prompt for the agent:

"Use uno_app_start to verify runtime behavior:

  • Navigate to SettingsPage (or confirm it's reachable)
  • Take a screenshot of the SettingsPage
  • Get a visual tree snapshot of the SettingsPage
  • Verify: Three ToggleSwitches exist, each has a binding, page title is visible

Report pass/fail for each criterion with evidence from the visual tree."

What the agent should do
Call uno_app_get_screenshot and uno_app_visualtree_snapshot. Parse the tree for ToggleSwitch elements. Report findings against each criterion.
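"Parse the tree" amounts to a recursive walk. A hedged sketch of that check in Python, over an invented nested snapshot (the real uno_app_visualtree_snapshot output will differ in shape):

```python
# Walk an (invented) nested snapshot and verify two acceptance criteria:
# three ToggleSwitches exist, and each has a non-null DataContext.
snapshot = {
    "type": "SettingsPage", "datacontext": "SettingsModel",
    "children": [
        {"type": "ToggleSwitch", "datacontext": "SettingsModel", "children": []},
        {"type": "ToggleSwitch", "datacontext": "SettingsModel", "children": []},
        {"type": "ToggleSwitch", "datacontext": None, "children": []},
    ],
}

def find(node, type_name):
    # Depth-first search for all elements of the given type.
    hits = [node] if node["type"] == type_name else []
    for child in node["children"]:
        hits += find(child, type_name)
    return hits

toggles = find(snapshot, "ToggleSwitch")
print(len(toggles))                                     # 3: count criterion passes
print([t["datacontext"] is not None for t in toggles])  # third binding is broken
```

Two criteria, two different verdicts from the same snapshot: the element count passes while one binding fails. That per-criterion granularity is what you're asking the agent to report.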
Visual tree snapshot showing ToggleSwitches with binding states
Step 6: Iterate If needed

If the visual tree shows ToggleSwitches but with null DataContext, feed the snapshot back:

"The visual tree snapshot shows the ToggleSwitches exist, but here's the output: [paste snapshot]. The DataContext appears to be null. Check the navigation route registration and verify the SettingsModel is being injected correctly."

The agent now has evidence to work with, not just your description of the problem.

Copy-Paste Templates

Prompt Kit

These are reusable across any scenario. Customize the [BRACKETED] parts.

1 Plan the change

"Review my project structure via Uno MCP. I want to [GOAL]. Propose the minimal set of file changes. For each file, state: what changes, what's new, and why. Match my existing naming, namespace, and MVUX patterns."

2 Implement smallest patch

"Based on the plan above, generate the code changes. Keep the diff as small as possible. Do not refactor unrelated code. Use MVUX patterns (partial record, IState, IFeed) as appropriate."

3 Inspect runtime UI

"Using App MCP, take a screenshot and get a visual tree snapshot of [PAGE]. Report: element hierarchy, bound property values, actual layout values (margins, padding, bounds), and any elements with null DataContext."

4 Verify acceptance criteria

"Using App MCP, verify these criteria against the running app: [LIST CRITERIA] Report pass/fail for each with evidence from the visual tree or screenshot."

5 Diagnose binding issues

"Using App MCP, inspect [CONTROL] on [PAGE]. Call uno_app_get_element_datacontext on it. Report: DataContext type, bound property values, and whether any bindings resolve to null or fallback values."

6 Generate regression checklist

"Based on the changes made to [FILES], generate a checklist of things to verify. Include cross-platform considerations for Windows, Android, and WebAssembly."

Up Next

What's Next

You have the loop and two real scenarios. Part 3 covers the harder cases:

  • Debugging silent binding failures with App MCP evidence
  • Verifying cross-platform parity across multiple targets
  • Wiring up navigation with parameter passing
  • Knowing when to stop using MCP and just write the code yourself
Get Started Today

Ready to Build with AI-Assisted Workflows?

Set up Uno MCP and App MCP in minutes. Start building cross-platform apps with AI agents that actually understand your code.

Uno MCP vs App MCP: When to Use Each https://platform.uno/blog/uno-mcp-vs-app-mcp/ Tue, 17 Feb 2026 21:04:34 +0000 https://platform.uno/?p=54919 ๐Ÿ•“ 5 MINThe On-Ramp to AI-Assisted Development with Uno Platform

MCP Series Part 1 of 3
The Mental Model

Uno MCP gives your AI agent access to up-to-date Uno Platform documentation and conventions. Use it when planning features and scaffolding code.

App MCP connects to your running application. Use it to verify that generated code actually works at runtime.

Together, they close the gap between "AI suggested this" and "this is verified."

Is this you?

You already know how to build cross-platform apps with Uno Platform or other cross-platform .NET technologies, past and present. Now you're experimenting with AI, whether it's agents, skills, MCPs, or whatever form it shows up in this week, and you're trying to turn it into a workflow you can actually trust.

That's where Uno Platform MCPs come in. They're part of Uno Platform Studio, and they give your AI agents two things they usually lack:

  • Current Uno knowledge (docs, conventions, best practices)
  • Runtime truth (what your app actually rendered and did)

Knowing when to use each MCP is the difference between "AI suggested this" and "this is verified." And once you feel that loop click, it brings back that early-days spark: shipping faster because you've got feedback you can rely on.

The Two MCPs

Two MCPs, Two Roles

Uno Platform provides two MCP servers. They do different jobs. Understanding which one to reach for, and when, is the whole mental model.

Uno MCP (Remote)

What it is
A remotely hosted MCP server at https://mcp.platform.uno/v1.
What it does
Gives your AI agent access to up-to-date Uno Platform documentation, best practices, and development conventions. It provides:
  • uno_platform_docs_search: searches Uno Platform docs
  • uno_platform_docs_fetch: retrieves full documentation pages
  • uno_platform_agent_rules_init: primes the session with development rules
  • uno_platform_usage_rules_init: primes the session with API best practices

Built-in prompts: /new (scaffold new app), /init (prime existing session)

When you use it
Planning a feature. Scaffolding new pages. Asking "what's the right way to do X in Uno Platform?" Getting the agent to propose file changes that match your project's patterns.
Think of it as
Uno Platform's knowledge base, piped directly into your agent. Instead of guessing based on stale training data, it searches current docs and loads up-to-date development rules.

App MCP (Local)

What it is
A locally running MCP server started via dotnet dnx -y uno.devserver --mcp-app. It connects to your running application. If you're using VS Code or Visual Studio with the Uno extension, it's automatically registered.
What it does
Gives your agent eyes and hands on your actual running app. Community license tools:
  • uno_app_get_runtime_info: app PID, OS, platform details
  • uno_app_get_screenshot: captures what the app looks like right now
  • uno_app_visualtree_snapshot: live visual tree dump
  • uno_app_pointer_click: clicks at X,Y coordinates
  • uno_app_key_press / uno_app_type_text: keyboard input
  • uno_app_element_peer_default_action: triggers automation peer actions

Pro license adds: uno_app_get_element_datacontext, uno_app_element_peer_action

When you use it
Something looks wrong at runtime. Verifying generated UI rendered correctly. Checking binding values. Confirming layout on a specific target. Debugging why a button does nothing when tapped.
Think of it as
Runtime truth. It doesn't guess what the UI looks like; it reads what's actually there.

When to Use Which

"I want to..." MCP to use
Plan a new feature | Uno MCP
Scaffold a page, model, and navigation route | Uno MCP
Check the correct MVUX pattern for my scenario | Uno MCP
Verify my page rendered correctly after a change | App MCP
Debug why a binding resolves to null | App MCP
Confirm layout matches across platforms | App MCP
Get a screenshot of the current app state | App MCP
Build a feature end-to-end with verification | Both, in sequence

The key insight: Uno MCP helps you write code that follows conventions. App MCP confirms the code actually works at runtime.

Getting Started

Your First 30 Minutes

Don't try to build a feature yet. Start with three small tasks that confirm everything works. Full setup instructions live in the Uno Platform docs.

Task 1: Confirm Uno MCP is connected 2 minutes

Ask your agent:

"Search the Uno Platform docs for the recommended MVUX pattern for persisted settings. Summarize what you find."

What you're checking
The agent calls uno_platform_docs_search, gets results, and summarizes them coherently. If it can't find the tools or the calls fail, your Uno MCP connection isn't working.
What this is NOT checking
Uno MCP doesn't read your local project files. It provides Uno Platform documentation and development conventions. Your local project context comes from the agent reading your files directly.
Uno MCP docs search result
Task 2: Confirm App MCP sees your running app 5 minutes

Build and run your app on any target. Then ask:

"Use App MCP to take a screenshot of the running app and get a visual tree snapshot of the current page."

What you're checking
The agent calls uno_app_get_screenshot and uno_app_visualtree_snapshot and returns real data from your running app.
Task 3: Close the loop: propose, build, and verify 15 minutes

This one exercises both MCPs in sequence with a real (small) feature: a button that opens a flyout.

Ask the agent:

"Using Uno MCP, look up the correct pattern for attaching a Flyout to a Button in Uno Platform. Then propose how to add a Button labeled 'About' to [MainPage] with a Flyout that displays the text 'Built with Uno Platform and MCP.' Show me the XAML change."

Review the proposed change. The agent should produce XAML like:

<Button Content="About">
    <Button.Flyout>
        <Flyout>
            <TextBlock Text="Built with Uno Platform and MCP." />
        </Flyout>
    </Button.Flyout>
</Button>

No code-behind, no model changes - just XAML. That's the right call for a simple flyout. Apply the change. Rebuild and run. Then:

"Using App MCP, take a screenshot of the page. Confirm the About button exists in the visual tree. Then click the button and take another screenshot to confirm the Flyout appeared with the expected text."

What you're checking
The full loop: Uno MCP informed the proposal (the correct Flyout attachment pattern), you applied and built it, and the agent performed verification using App MCP, checking that the button exists in the UI tree and confirming the runtime behavior (the Flyout opens on click). That's not just "does it compile"; it's "does it work."
Summary

What You Just Did

You used both MCPs in sequence:

  1. Uno MCP gave the agent accurate Uno Platform knowledge (the correct Flyout attachment pattern) so it proposed correct code instead of guessing
  2. You reviewed, applied, and built the change
  3. App MCP verified both the static result (button exists in the visual tree) and the runtime behavior (Flyout opens on click)

That third step is the important one. The agent didn't just generate code and walk away; it confirmed the code works by interacting with the running app. It clicked the button. It saw the Flyout.

The agent itself reads your local project files; that's just how AI coding agents work. What the MCPs add is Uno Platform expertise (Uno MCP) and runtime verification (App MCP). Without them, the agent can read your code but doesn't know Uno Platform conventions and can't confirm what actually happens when a user taps something.

That's the entire mental model. Everything in Parts 2 and 3 builds on this loop.

Get Started Today

Ready to Build with AI-Assisted Workflows?

Sign up to Uno Platform Studio for free and set up Uno MCP and App MCP in minutes. Start building cross-platform apps with AI agents that actually understand your code.

What's Next

Part 2 introduces the full Uno AI Dev Loop, a six-step repeatable workflow, and walks through two real scenarios: scaffolding a Settings page with MVUX and fixing a cross-platform layout bug. You'll get copy-paste prompt templates for each step.

Part 3 covers the hard cases (binding debugging, cross-platform parity verification), guardrails for when MCP is wasting your time, and daily habits to make this stick.


Uno Platform 6.5: Community Standup https://platform.uno/blog/6-5-release-community-standup/ Tue, 17 Feb 2026 17:04:36 +0000 https://platform.uno/?p=54836 ๐Ÿ•“ 2 MINUno Platform is the most flexible open-source platform for modern cross-platform .NET development, complete with enterprise-grade AI and visual design tools. Paired with Uno Platform Studio, .NET developers can get a serious boost in productivity with Hot Design visual designer with Hot Reload for fastest C#/XAML dev loop, dependable AI Agents/MCP tools for contextual AI […]


Uno Platform is the most flexible open-source platform for modern cross-platform .NET development, complete with enterprise-grade AI and visual design tools. Paired with Uno Platform Studio, .NET developers get a serious productivity boost: the Hot Design visual designer with Hot Reload for the fastest C#/XAML dev loop, dependable AI agents and MCP tools for contextual AI intelligence, and flexibility in the technology stack, all toward building apps from any OS or IDE to run on mobile, web, desktop, or embedded devices.

Uno Platform recently pushed out the 6.5 release, packed with goodies for .NET developer productivity. Let's celebrate the release and unpack it all.

The Chefs

It's restaurant week and the kitchen doors are wide open.

We're inviting a lineup of master chefs to step up to the counter and showcase the signature recipes they've been perfecting behind the scenes. Each dish represents something thoughtfully crafted: bold ideas, refined techniques, and flavors shaped by real-world experience.

In this special community standup, our chefs aren't just plating up theory; they're walking you through the recipes they've built, the ingredients they chose, and the techniques that make everything come together. Pull up a seat, bring your appetite, and get ready for a tasting menu inspired by our latest release. Here are the chefs joining us from around the world and the specialized recipes they'll showcase:

  • Martin Zikmund | Czech Republic | Uno Platform Core updates
  • Steve Bilogan | Canada | Uno Platform Studio & Productivity
  • Sam Basu | United States | Host & Demo monkey

The Recipes

With the 6.5 release, Uno Platform is staying at the forefront of AI innovation while strengthening the core cross-platform .NET open-source ecosystem; there's something for everyone. While Uno Platform Core developers benefit from new functionality and a plethora of performance and stability fixes, Uno Platform Studio delights with better UX for developer workflows and tighter AI integrations. Here are the highlights of some of the recipes:


Uno Platform v6.5:

  • Unicode TextBox support for robust multilingual text input
  • WebView2 improvements across the board
  • Expanded drag-and-drop capabilities for Wasm (Skia)
  • Better error diagnostics, plus major performance and stability gains


Uno Platform Studio:

  • Antigravity MCP support unlocks agent-first AI workflows
  • Hot Design now launches by default, with a redesigned toolbar
  • New scope selector makes navigating complex UI trees effortless

Wrap up

While the chefs will be proud to showcase the recipes they’ve cooked up, it’s also an opportunity to ask difficult questions or join in some casual kitchen banter. Come hang out with the chefs and leave inspired for your next creation.

We can’t wait to see what you cook up next. Cheers, developers!

Uno Platform 6.5 Community Standup

If you want a quick, no-fluff walkthrough of what’s new and how it helps you build modern cross-platform .NET apps faster with less friction, come join us for a community standup to celebrate Uno Platform 6.5.

February 19, 2026 · 11:00 AM ET

Add to Calendar

The post Uno Platform 6.5: Community Standup appeared first on Uno Platform.

Uno Platform February Release: Studio Productivity and Platform Improvements https://platform.uno/blog/uno-platform-6-5/ Wed, 11 Feb 2026 16:53:51 +0000 https://platform.uno/?p=53354 🕓 7 MIN Hot Design got smoother, TextBox now supports Unicode, Antigravity is here, WebView2 on WASM is more reliable, and there’s plenty more in the release.

v6.5 Release
TL;DR: What’s new in Uno.SDK 6.5
Uno Platform Studio
  • Antigravity MCP support unlocks agent-first AI workflows
  • Hot Design now launches by default, with a redesigned toolbar
  • New scope selector makes navigating complex UI trees effortless
Uno Platform (v6.5)
  • Unicode TextBox support for robust multilingual text input
  • WebView2 improvements across the board
  • Expanded drag-and-drop capabilities for WASM (Skia)
  • Better error diagnostics, plus major performance and stability gains

What a release! We are staying on the forefront of AI innovation while strengthening the core cross-platform .NET open-source ecosystem. The February release delivers something for everyone.

Studio users get tighter AI integration and Hot Design UX and workflow enhancements.

All users benefit from long-requested Unicode text support, further WebView2 improvements, drag-and-drop for browser targets, and a plethora of performance and stability fixes.

The numbers speak for themselves: 450+ community and customer issues resolved, delivering a faster, more stable Uno Platform experience end-to-end.

Uno Platform Studio

Antigravity support for agent-first .NET workflows

Antigravity is Google’s agent-first IDE built on top of VS Code. It is built for teams that are moving beyond “AI helps me write code” and toward agent-driven development, where multiple agents can plan, implement, and verify work across an entire feature.

Most AI coding assistants still stop at “it compiles.” Starting today, you can pair Antigravity with the Uno Platform App MCP so agents can verify behavior at runtime, exactly when they need to. An agent can launch your app, inspect the visual tree, capture screenshots, simulate clicks and inputs, and validate real UI behavior instead of guessing from static code.

Because Antigravity runs through mission control and its artifacts model, those checks are reviewable. Agents can leave behind concrete proof, like screenshots and step-by-step results, instead of a “trust me” summary.

To learn more, start with the Getting Started with Uno Platform and Antigravity Tech Bite, or dive straight into the docs.

Hot Design UX Enhancements

Launch on first run

You asked for faster onboarding into the visual design loop, and we listened. Creating a new app is now a straight shot: create the project, hit Run, and you’re dropped directly into Hot Design. No extra steps, no detours. From there, a simple toggle lets you move seamlessly between the visual design surface, Agent mode, or straight into code-behind.

A new introductory experience walks you through the three Hot Design modes (Agent, Design, and Interactive) so you immediately understand what’s possible and when to use each. Hot Design opens in Agent mode by default, where you can scaffold entire pages or refine existing ones, starting from scratch or building on your existing view model data.

When you want more hands-on control, switch to Design mode to select, tweak, or add UI elements visually. Prefer to explore the app as a user would? Interactive mode lets you navigate through the running application without leaving the design experience.

If you’re not ready to use Hot Design, you can exit at any time and return to the running app. And Studio stays out of your way when you want it to: it remembers your last state, reopening Hot Design automatically if that’s where you were working, or staying closed if you’re focused on debugging. This behavior applies consistently across desktop, emulator, and simulator targets, including external windows.

The result: faster discovery, less friction, and a design loop that starts exactly when you do.

Scope Selector for UserControls and Templates

If you’ve worked with deeply nested UI (pages containing UserControls containing ControlTemplates containing DataTemplates), you know the problem: you want to edit a specific control buried three levels deep, and you’re clicking around the visual tree trying to find the right element to select.

The new Hot Design scope selector solves this by letting you jump directly to any UserControl or template visible on the current screen. Pages are now treated as UserControls for navigation purposes, so you can select into them the same way. ControlTemplates and DataTemplates are also directly navigable.
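
To make the nesting concrete, here is a minimal sketch of the kind of tree the scope selector navigates. The names (MyApp, OrderListControl, Orders, Title) are hypothetical, not code from the release:

XAML
<!-- ProfilePage.xaml: the page hosts a UserControl -->
<Page x:Class="MyApp.ProfilePage">
    <local:OrderListControl />
</Page>

<!-- OrderListControl.xaml: the UserControl hosts a ListView
     whose rows come from a DataTemplate -->
<UserControl x:Class="MyApp.OrderListControl">
    <ListView ItemsSource="{Binding Orders}">
        <ListView.ItemTemplate>
            <DataTemplate>
                <!-- This TextBlock sits three scopes deep:
                     Page > UserControl > DataTemplate.
                     The scope selector jumps straight to any of these. -->
                <TextBlock Text="{Binding Title}" />
            </DataTemplate>
        </ListView.ItemTemplate>
    </ListView>
</UserControl>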

To learn more, check out the Scope Selector documentation.

Redesigned Toolbar

This change came straight from community feedback. The floating toolbar in Hot Design is now a fixed toolbar anchored to the top of the window, so the controls are always in the same place, and the UI stays out of your way while you’re coding. It’s available both in-app and in the external window.

When you’re working through the external window, a smaller floating toolbar still appears where it makes sense. Use it to toggle Hot Design from the emulator and to surface Hot Reload status. The primary controls stay anchored in one predictable place, while the rest stays out of your way.

To learn more, check out the Toolbar documentation.

Try Hot Design and Hot Design Agent

While you're exploring Hot Design, try the Hot Design Agent. Toggle Agent mode in the toolbar and prompt it to suggest UI updates, align layouts, apply styles, or reorganize components without writing XAML. All actions are transparent and reversible. Hot Design Agent is available to Uno Platform Studio subscribers and trialists.

Start your Free Trial

Uno Platform 6.5

TextBox Unicode Support for Localization

If you’ve only ever built apps for English-speaking markets, you’ve been working with a safety net you didn’t know existed. Latin scripts are forgiving. Codepoints map cleanly to glyphs, selection works how you’d expect, and caret positioning doesn’t require special handling.

Then you try to render Arabic. Or Mandarin. Or Hindi. And you discover that text rendering isn’t just “draw characters in order.” Scripts flow right-to-left. Characters combine into ligatures. A single grapheme cluster can span multiple Unicode code points. Your caret lands in the middle of a character instead of between characters.

TextBox now renders non-Latin scripts (Mandarin, Arabic, Hindi, and anything else Unicode throws at it) with proper caret positioning, text selection, and keyboard navigation. Mouse selection across multi-byte characters works. Arrow keys move between grapheme clusters instead of codepoints. Text editing behaves like users expect, regardless of script complexity.

XAML
<TextBox Text="مرحبا بالعالم" />  <!-- Arabic, right-to-left -->
<TextBox Text="你好世界" />        <!-- Mandarin, multi-byte -->
<TextBox Text="नमस्ते दुनिया" />   <!-- Hindi, complex ligatures -->

This includes visual output plus mouse and keyboard interaction (moving between characters, selection). Input Method Editors (IME) for composition-based languages like Chinese, Japanese, and Korean are not yet supported. If your keyboard outputs characters directly, it works. If you need an IME to compose characters, that’s pending.

WebView2 Enhancements

WebView2 on WebAssembly is now more reliable when loading local, app-bundled web assets (HTML/CSS/JS/images) through local content mapping. In practice, that means fewer “it works on desktop but not on WASM” moments when you’re hosting hybrid UI, embedded docs, or offline pages inside a WebView2.

Uno Platform continues to ship WebView2 as part of the OSS stack at no additional cost.

Drag and Drop Support for Wasm with Skia Renderer

Drag-and-drop on WASM (Skia) now handles more real-world scenarios, including file drops from external applications and the operating system. The result is a more native file upload experience in Uno Platform apps, fewer custom JavaScript workarounds, and feature parity with the native WASM target.
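
Wiring up a drop target uses the standard WinUI drag-and-drop members on UIElement; here is a minimal sketch, with the handler names as placeholders:

XAML
<!-- A drop zone that accepts files dragged in from the OS.
     OnDragOver and OnFilesDropped are placeholder handler names. -->
<Grid AllowDrop="True"
      DragOver="OnDragOver"
      Drop="OnFilesDropped">
    <TextBlock Text="Drop files here" />
</Grid>

In the Drop handler you would typically check e.DataView.Contains(StandardDataFormats.StorageItems) and read the files via GetStorageItemsAsync(), the same pattern used on native targets.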

Stability Across Every Target

Feature releases get the attention, but the work that keeps production apps running happens in the fixes. Uno Platform 6.5 includes a broad set of stability improvements spanning every supported target: WebAssembly, iOS, Android, macOS, Windows and Linux.

App startup and navigation are more reliable. We made improvements to many of the core controls, including TextBox, ProgressRing, ListView, PasswordBox, MenuFlyout, and more. WebView is more robust across platforms. The Skia rendering backend also handles more scenarios consistently, particularly around text and layout, where minor inconsistencies previously required workarounds.

None of these are the kind of changes that make a headline. They’re the kind that stop you from filing an issue.

Uno Platform 6.5 Community Standup

If you want a quick, no-fluff walkthrough of what’s new and how it helps you build modern cross-platform .NET apps faster with less friction, join us for our first community standup.

February 19, 2026 · 11:00 AM ET

Add to Calendar

Try Uno Platform & Uno Platform Studio

With Uno Platform 6.5, there’s never been a better time to ship cross-platform .NET apps: stronger WebAssembly + Skia parity, smoother Hot Design workflows, better localization/Unicode support, and runtime verification via Antigravity App MCP so your AI agent can actually see what it’s building.

  • Upgrade to 6.5: Update your IDE extensions, move to the latest stable Uno.Sdk and follow the migration guide to cover any breaking changes.
  • Try Hot Design: get a proper visual-designer workflow, including the latest Hot Design and Hot Design Agent updates.
  • Build with AI (for real): our newest agent-first Uno Tech Bites get you up and running quickly with the patterns and tooling that actually hold up in real projects.
  • Join the community: Share what you’re building, ask questions, and swap notes with the team and other developers on GitHub and Discord.

The post Uno Platform February Release: Studio Productivity and Platform Improvements appeared first on Uno Platform.
