Stringfest Analytics — Analytics & AI for Modern Excel

How to get better results from Excel AI assistants
https://stringfestanalytics.com/how-to-get-better-results-from-excel-ai-assistants/
Fri, 06 Mar 2026

The post How to get better results from Excel AI assistants first appeared on Stringfest Analytics.

One thing Excel AI assistants like Copilot and Claude really need is a way to load system-level preferences and modeling instructions.

Right now, when you ask these tools to build a workbook, they often fall back on very generic Excel patterns. That usually means things like hardcoded numbers in formulas, calculations scattered across sheets, inconsistent formatting, or outputs that break the moment the data grows.

My guess is that this happens because there’s simply more training data reflecting older Excel habits. Modern practices like structured tables, dynamic arrays, and consistent modeling standards have been adopted much more slowly across organizations. As a result, unless you guide the model, it often defaults to those older patterns.

The good news is that the quality of AI-generated workbooks improves dramatically once you start giving the assistant a few guardrails. A helpful way to think about it is this: treat the AI the same way you would treat a new analyst joining your team. If you want consistent models, you need to explain the standards you expect.

Improving AI generated workbooks

Start with how you want models structured

Most analysts follow certain structural habits when building workbooks, even if those habits are rarely written down anywhere. Over time, you develop a mental model for how a workbook should be organized so that it stays understandable and maintainable.

Typically, inputs live in one place where they can be easily edited. Calculations happen somewhere else, where formulas can operate on those inputs without clutter. Outputs are separated again so the final results can be presented clearly to whoever is consuming the analysis.

When you make those expectations explicit in your instructions to an AI assistant, the quality of the workbook it generates improves dramatically. Instead of scattering formulas and values throughout the file, the assistant has a clear blueprint to follow.

A few simple structural instructions can go a long way. For example, you might include guidance like:

  • “Organize the workbook into Inputs, Calculations, and Outputs sections or sheets.”
  • “Store input datasets in Excel tables, not loose cell ranges.”
  • “Name tables using a tbl_ prefix (for example tbl_sales or tbl_expenses).”
  • “Avoid hardcoding numbers inside formulas. Reference input cells or parameters instead.”
  • “Use structured references to table columns rather than fixed ranges like A2:A100.”

Using Excel tables is especially helpful here. Tables allow formulas to reference columns by name instead of pointing to specific cell ranges, which makes the logic much easier to understand. They also expand automatically as new rows are added, so formulas and analyses don’t silently break when the dataset grows.
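As a quick sketch of why this matters (the table and column names here, such as tbl_sales, Region, and Revenue, are hypothetical), compare a fixed-range formula with its structured-reference equivalent:

```
Fixed range: silently misses any rows added below row 100
=SUMIF(A2:A100, "East", C2:C100)

Structured reference: expands automatically as the table grows
=SUMIF(tbl_sales[Region], "East", tbl_sales[Revenue])
```

The second version also documents itself: a reader can tell at a glance which columns the formula depends on.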

These kinds of rules help the assistant build something closer to what an experienced analyst would produce. Instead of a spreadsheet that mixes inputs, formulas, and results in unpredictable ways, you get a model with a clear structure that someone else can actually understand.

In practice, that small structural decision can prevent many of the subtle errors that creep into poorly organized spreadsheets.

Encourage modern Excel features

Another simple instruction that can make a big difference is telling the assistant which Excel tools to use. It can also help to specify which generation of Excel functions you want the assistant to favor.

Left on its own, an AI model will often default to older patterns like VLOOKUP, nested IF statements, or fixed ranges such as A2:A100. That’s largely because there is far more training data reflecting those older habits. Newer capabilities, especially dynamic array functions and tools like LET() and LAMBDA(), have spread more slowly across organizations, so the model sees fewer examples of them.

Modern Excel tools are usually much better suited for building resilient models. Functions like XLOOKUP(), FILTER(), UNIQUE(), SORT(), and XMATCH() work naturally with dynamic datasets, while dynamic arrays allow a single formula to spill results automatically as the data grows. LET() can also make complex formulas easier to read by naming intermediate calculations.
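To make the contrast concrete, here is a hedged sketch of the legacy and modern patterns side by side (the table name tbl_sales and its columns are hypothetical placeholders):

```
Legacy lookup, positional and brittle:
=VLOOKUP(G2, A2:D100, 4, FALSE)

Modern lookup with an explicit not-found fallback:
=XLOOKUP(G2, tbl_sales[Order_ID], tbl_sales[Revenue], "Not found")

A single spilling formula, with LET() naming the intermediate value:
=LET(region, "East",
     FILTER(tbl_sales[[Product]:[Revenue]], tbl_sales[Region] = region, "No rows"))
```

The FILTER() version returns every matching row from one formula and re-spills automatically as the table changes, with no copying down required.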

Because of this, it often helps to include explicit preferences such as:

  • “Prefer XLOOKUP() instead of VLOOKUP() or HLOOKUP() when performing lookup operations.”
  • “Use modern dynamic array functions such as FILTER(), UNIQUE(), and SORT() when generating lists or subsets of data.”
  • “Write formulas that spill automatically rather than copying formulas down rows or columns.”
  • “Use LET() to define intermediate variables and simplify complex formulas.”
  • “Reference Excel table columns using structured references instead of fixed ranges such as A2:A100.”

These small instructions help ensure the model uses modern Excel patterns, which tend to produce workbooks that adapt much more gracefully as the data changes.

Set some naming conventions

Another helpful preference to define is how things should be named.

Consistent naming makes a workbook much easier to understand once formulas start referencing multiple tables, parameters, and calculations. Without it, even a well-built model can become hard to follow.

You can guide the assistant with simple conventions such as:

  • “Use snake_case for all named ranges and variables.”
  • “Prefix Excel tables with tbl_ (for example tbl_sales, tbl_expenses).”
  • “Prefix input parameters with p_.”
  • “Prefix calculated metrics or measures with m_.”

These may seem like small details, but they make formulas far easier to read and navigate. The same principle applies in programming and analytics: clear, consistent names reduce confusion and make systems much easier to maintain and extend.
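For example, with these conventions in place, formulas become almost self-describing (the specific names below, p_growth_rate and m_overhead_ratio, are hypothetical illustrations of the prefixes):

```
Projected units for this row, driven by a named input parameter:
=[@Units_Sold] * (1 + p_growth_rate)

A calculated metric built from a table and a named measure:
=SUM(tbl_expenses[Amount]) * m_overhead_ratio
```

A reader can tell immediately that p_growth_rate is an editable input and m_overhead_ratio is a derived value, without hunting through the workbook.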

Encourage documentation inside the workbook

Another useful preference is asking the assistant to include basic documentation inside the workbook.

A good analyst rarely hands over a model without explaining how it works, and the same expectation can improve AI-generated files. You might ask the assistant to add brief comments explaining complex formulas, include short descriptions at the top of worksheets, or create a simple Assumptions sheet.

An assumptions sheet is especially helpful because it gives readers a clear place to see the key inputs driving the model: things like growth rates, scenario parameters, or cost estimates.

You can also include instructions like:

  • “Add comments to explain any complex or non-obvious formulas.”
  • “Include a short description at the top of each worksheet explaining its purpose.”
  • “Create an Assumptions sheet listing key inputs, parameters, and model drivers.”

This begins to overlap with the idea of test-driven instructions, where models include validation checks and error flags. That’s a deeper topic for another post. For now, the goal is simply encouraging the assistant to make its logic visible.

A model that documents itself is much easier for someone else to understand and trust.

Set visual expectations

Excel workbooks are not just computational models. They’re also documents people need to read and interpret, which means visual consistency matters.

You can help the assistant by specifying simple formatting preferences, such as using one color for inputs, another for outputs, applying consistent number formats, or building charts with a specific color palette.

For example, you might include instructions like:

  • “Use a consistent color to highlight editable input cells.”
  • “Apply clear and appropriate number formats to outputs (such as currency, percentages, or thousands separators).”
  • “Use consistent table styles and chart formatting throughout the workbook.”
  • “Apply the company color palette when creating charts or dashboards.”

Some teams even upload their brand colors or example dashboards so new workbooks align with existing reporting standards.

These small details may seem cosmetic, but they go a long way toward making AI-generated workbooks feel less like rough drafts and more like finished deliverables.

Don’t forget the “last mile” work

Another category of instructions analysts often forget to include is the last mile formatting work.

This is the cleanup that happens right before a file gets sent to a manager or included in a presentation: freezing panes so headers stay visible, autofitting column widths, setting print areas, aligning charts, and making sure number formats are consistent.

None of this work is particularly difficult, but it can quietly consume a surprising amount of time.

The good news is that AI assistants can usually handle these tasks just fine, as long as you tell them to. You might include instructions like:

  • “Freeze panes so table headers remain visible when scrolling.”
  • “Autofit column widths to improve readability.”
  • “Apply consistent number formats across output tables and reports.”
  • “Set appropriate print areas and page layout for printable sheets.”
  • “Align charts and tables neatly on output or dashboard sheets.”

If there are small formatting tweaks your boss regularly asks you to make before sharing a workbook, those are exactly the kinds of expectations worth building into your instructions.

Give the assistant examples of good work

Another powerful way to improve results is simply showing the assistant examples of good workbooks.

A well-structured template, a modeling standards document, or a sample dashboard can give the AI a clear signal about what “good Excel” looks like in your environment. Instead of generating models completely from scratch each time, the assistant can start to mirror the layouts, conventions, and patterns used in those examples.

You can reinforce this by including instructions like:

  • “Follow the structure and formatting used in the provided workbook template.”
  • “Use the uploaded dashboard as a reference for layout and chart styling.”
  • “Follow the conventions outlined in the modeling standards document.”
  • “Match the color palette and formatting used in the example reports.”

Even a single well-built workbook can act as a powerful reference point. When the assistant has a concrete example to follow, it becomes much easier for it to produce models that align with the way your team already works.

Conclusion

Excel AI assistants are already surprisingly capable. They can generate formulas, structure workbooks, and even assemble fairly sophisticated models with very little prompting.

What they still lack, however, is something most experienced analysts rely on every day: a modeling standard. In most organizations, analysts develop informal conventions for how workbooks should be structured, how formulas should be written, and how results should be presented. Those habits make models easier to maintain, review, and extend. But AI assistants don’t automatically know those expectations unless we tell them.

As these tools evolve, it’s likely we’ll see better ways to load persistent style guides, templates, and organizational preferences directly into the assistant. When that happens, the quality and consistency of AI-generated Excel models will improve dramatically.

Until then, the best approach is fairly simple. Don’t just tell the AI what model you want to build… tell it how you want your workbooks to be built. A short set of preferences around structure, formulas, naming conventions, and formatting can go a long way toward producing models that behave more like the work of an experienced analyst.

If you’re experimenting with these tools, try writing a small Excel style guide for your prompts and see how much the results improve.

And if you’re interested in more practical examples of modern Excel workflows, automation techniques, and AI-assisted analytics, I share a growing collection of guides, templates, and resources inside my Modern Excel + AI membership:

That’s where I’m collecting many of the patterns, prompts, and tools I’m experimenting with as Excel and AI continue to evolve.

How to write better instructions for Excel agents
https://stringfestanalytics.com/how-to-write-better-instructions-for-excel-agents/
Mon, 09 Feb 2026

The post How to write better instructions for Excel agents first appeared on Stringfest Analytics.

Earlier on this blog, I’ve written quite a bit about prompt patterns for Copilot in Excel and why they matter.

That work came out of a simple observation: some Copilot prompts work well, while others miss the point, even when they seem reasonable. Over time, a few patterns start to show up. Examples help. Step-by-step framing helps. Clear boundaries help.

None of this is magic, and most of the time you don’t think in terms of “patterns” while you’re working. You’re just giving Copilot enough structure to be useful. Once you start noticing that, prompting feels less like a special skill and more like everyday Excel judgment.

This post continues that line of thinking, but with a slightly different focus. To demonstrate, I’ll use a simple sales dataset. You can follow along by downloading the exercise file below:

 

Start with a normal Copilot prompt

Let’s ground this in the same way most people first experience Copilot in Excel: a single question against a simple table.

Here’s a prompt I’d actually use against the sales dataset in the demo workbook. It’s basically a “reasoning + boundary” pattern: ask for the logic, then compress the output, and keep it neutral.

“Using the sales table in this workbook, explain why revenue increased in April. First, briefly show how you’re separating price vs volume vs mix effects (no more than 5 bullets). Then give me a short, neutral summary I could paste into an email. Avoid implying causality or intent.”

That prompt will usually do something decent, because it tells Copilot what “good” looks like in that moment: decompose drivers, be cautious, and compress the result.

Copilot example prompt

The trap: turning a good prompt into weak agent instructions

The most common failure mode in Agent Mode is also the most natural one. You take a prompt that worked well in Copilot, loosen it up a bit, and paste it into the agent instructions box.

It might look something like this:

“This agent helps analyze the sales table in this workbook. Explain changes in revenue over time and summarize key trends. Be clear and concise. Avoid implying causality.”

On the surface, that doesn’t look terrible. But it’s missing something essential.

That prompt works in Copilot because you are standing there steering it. Each time you ask a question, you implicitly provide structure. If the answer is vague, you reframe. If it overreaches, you narrow the scope. The discipline lives in the interaction.

Bad agent instructions output 1

Agent Mode doesn’t work that way. The instructions have to stand on their own across lots of questions, lots of users, and very different levels of care in how those questions are asked. When they don’t, you end up with results that sound thoughtful but don’t really show judgment. That’s exactly what’s happening here.

When we follow up with a question like “Why did revenue increase in April?”, the agent does what the instructions allow. It summarizes. It stays cautious. It avoids causal language. But it doesn’t know how to reason, what comparisons matter, or which dimensions are in scope. So it defaults to surface-level explanation.

Bad agent April increase

This isn’t an Agent Mode problem. It’s an instruction problem. You never told the agent how to break a change apart, how to reconcile totals, how to sanity-check the math, or how to decide whether the data can actually support the question being asked. You gave it an objective, but not a way to get there.

With prompts, a human quietly fills in that gap. With agents, no one does. And that gap between something that sounds reasonable and something that behaves reliably is where most Agent Mode setups quietly start to wobble.

Why you should not turn a Copilot prompt into agent instructions

The most common mistake in Agent Mode is taking a prompt that worked well in Copilot and pasting a longer version of it into the agent instructions box. It looks reasonable, but what you’ve written still isn’t really instructions. It’s a prompt without a user.

Copilot prompts work because Copilot is reactive. Each interaction stands on its own, and you’re there to correct course when the output is off. The structure lives in the prompt.

Agent Mode shifts that structure into the background. The text you write no longer shapes a single response. It defines how the agent should behave across many interactions, users, and questions.

When you stretch a prompt into instructions, the agent behaves just like Copilot did before: sometimes helpful, sometimes vague, sometimes confidently fuzzy. That’s not an Agent Mode failure. It’s a mental model problem.

Prompts and instructions are related ideas, but they are not interchangeable, as summarized in the following table:

| Dimension | Copilot prompt | Agent instructions |
| --- | --- | --- |
| Purpose | Get a good answer now | Define behavior over time |
| Scope | One interaction | Many interactions |
| Structure lives in | The prompt | Persistent background instructions |
| User involvement | High: you steer each question | Low: users ask loosely |
| Failure mode | Answer is off, you re-prompt | Agent sounds plausible but unanchored |
| What you’re writing | A request | A role definition |
| Good mental model | “How should this answer be framed?” | “What is this agent responsible for?” |

Rebuilding the instructions using Microsoft’s components

So what do we do instead?

Rather than improvising our own mental model, we can lean on the guidance Microsoft has already laid out for declarative agents. Their documentation makes a quiet but important distinction: agent instructions aren’t prompts, they’re a specification.

The following image from that guide captures that structure clearly:

Elements of agent instructions

Seen through that lens, it becomes much easier to stop thinking in terms of a longer “prompt” and start writing an instruction set that actually defines a role.

Microsoft breaks agent instructions into a few core pieces: Purpose, Guidelines, Skills, and, when useful, step-by-step workflows, error handling, and examples. These aren’t extra documentation. They’re how you prevent the agent from guessing.

Purpose: what is this agent actually for?

“Analyze sales data” isn’t a purpose. It’s a topic.

With this workbook, what you usually want is narrower and more disciplined: explain revenue movement in a way that’s defensible and doesn’t drift into storytelling. A purpose that reflects that reality might be:

Help users explain month-over-month revenue changes in this workbook by separating price, volume, and mix effects, prioritizing transparency over certainty.

That single sentence already constrains behavior. It implies what counts as a good answer and, just as importantly, what doesn’t.

Guidelines: the rules you normally carry in your head

This is the section most people skip because it feels obvious. But it’s also the section that makes Agent Mode behave differently from Copilot.

For this dataset, the guidelines are really just analytical hygiene:

  • Don’t infer intent or causality from observed changes
  • Don’t collapse multiple drivers into one confident sentence
  • Surface assumptions, especially with short time windows
  • Tie claims back to specific columns and periods
  • Ask a clarifying question when the request is underspecified

This isn’t about tone. It’s about not misleading someone.

Skills: what should it actually do in Excel?

Agents tend to misbehave when they have broad capability but no tool discipline.

With this file, the skills can be simple and restrained:

  • Use PivotTables to validate aggregates before explaining drivers
  • Recompute revenue (Units_Sold × Unit_Price) when validation is needed
  • Use charts only when they clarify a comparison, not as decoration
  • Use Python in Excel only if decomposition becomes cumbersome

This aligns directly with Microsoft’s guidance to explicitly reference tools and actions, rather than assuming the agent will “figure it out.”
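As a sketch of what that validation skill looks like in practice, assuming the demo workbook’s column names (Units_Sold, Unit_Price, Revenue) in a table called tbl_sales:

```
Reconcile reported revenue against units x price (should return 0):
=SUM(tbl_sales[Revenue]) - SUMPRODUCT(tbl_sales[Units_Sold], tbl_sales[Unit_Price])

Flag individual rows that fail the check, tolerating rounding:
=IF(ABS([@Revenue] - [@Units_Sold] * [@Unit_Price]) > 0.005, "CHECK", "OK")
```

Giving the agent a named check like this is far more reliable than hoping it decides to verify the totals on its own.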

Step-by-step workflow: prevent jumping to the answer

This is where Microsoft’s goal–action–transition guidance actually earns its keep. You don’t need a huge process, just a predictable path. For this workbook, one workflow does most of the work:

  1. Identify the period-over-period revenue change
  2. Decompose it into price, volume, and mix effects
  3. Validate totals against reported revenue
  4. Summarize dominant drivers and note limitations
  5. If assumptions materially affect the conclusion, ask before proceeding

That sequence alone eliminates most confident-but-sloppy answers.
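For a single product, the two-period decomposition that Step 2 relies on can be sketched as follows, where P0 and Q0 are the prior period’s unit price and units, and P1 and Q1 the current period’s (conventions vary; some analysts fold the interaction term into the price or volume effect):

```
volume effect = P0 * (Q1 - Q0)
price effect  = Q0 * (P1 - P0)
interaction   = (P1 - P0) * (Q1 - Q0)

ΔRevenue = P1*Q1 - P0*Q0
         = volume effect + price effect + interaction
```

The mix effect appears when these per-product bridges are aggregated: even with every price and quantity unchanged per product, total revenue can shift if the composition of products or regions changes.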

Error handling and limitations: tell it when to stop

This is the “permission to be careful” section.

  • If required inputs are missing (which month, which product, which region), ask
  • If the user asks for a strong causal claim, state the limitation
  • If the history is too short for trend or forecasting claims, say so

This is how you reduce confident nonsense without turning the agent into a scold.

Examples: show what “good” looks like

You don’t need many examples, but one or two help anchor behavior and avoid repetitive phrasing.

A simple pair is usually enough:

  • A clean “Why did revenue change in April?” request with a measured decomposition
  • An edge case where the user asks for intent or causality and the agent pushes back

At that point, you’re no longer hoping the agent picks up your analytical judgment by osmosis. You’ve encoded it.

And that’s the real shift Agent Mode requires.

The improved Agent Mode instruction set

Here’s a cleaned-up instruction block that follows Microsoft’s declarative agent guidance and stays anchored to the demo workbook. Notice that I’m using Markdown to encode structure and ordering. This is something Microsoft explicitly recommends: Markdown helps clarify intent and sequence, especially when instructions need to hold up across many interactions.

# PURPOSE
Help users explain month-over-month revenue changes in this workbook’s sales table by separating price, volume, and mix effects. Prioritize transparency and defensibility over certainty.

# GENERAL GUIDELINES
- Use clear, neutral business language suitable for stakeholder updates.
- Avoid single-cause explanations when multiple drivers are present.
- Do not infer intent or causality from observed changes (e.g., “they raised prices to…”).
- Surface assumptions explicitly, especially when the time window is short.
- Tie explanations back to specific columns and periods (Date, Product, Region, Units_Sold, Unit_Price, Revenue).
- Ask one clarifying question when needed; do not proceed with missing inputs.

# SKILLS
- Use PivotTables to validate aggregates and isolate drivers.
- Validate Revenue when needed by recomputing Units_Sold * Unit_Price.
- Use charts only when they clarify a comparison (not decorative).
- Use Python in Excel only if decomposition becomes cumbersome.

# WORKFLOW: REVENUE CHANGE EXPLANATION
## Step 1: Confirm the question
- **Goal:** Ensure the requested comparison is clear.
- **Action:** If the month(s), product(s), or region(s) are unclear, ask a single clarifying question.
- **Transition:** Once clear, proceed to Step 2.

## Step 2: Validate totals
- **Goal:** Confirm the numbers before explaining them.
- **Action:** Use PivotTables to summarize Revenue by Date (and Product/Region as needed). If results look inconsistent, validate Revenue = Units_Sold * Unit_Price.
- **Transition:** If totals check out, proceed to Step 3. If not, flag the discrepancy and ask how to proceed.

## Step 3: Decompose drivers
- **Goal:** Explain changes using price, volume, and mix.
- **Action:** Separate the change into:
  - volume effect (units)
  - price effect (unit price)
  - mix effects (product/region composition)
- **Transition:** Proceed to Step 4.

## Step 4: Communicate the result
- **Goal:** Provide a stakeholder-ready explanation.
- **Action:** Provide:
  1) a short “how I reasoned” section (max 5 bullets)
  2) a neutral summary suitable for an email (short paragraph or 3 bullets)
- **Transition:** End by asking if the user wants a deeper breakdown by product or region.

# LIMITATIONS
- If the user asks for causal explanations or intent, explain that the data supports description, not causality.
- If asked to forecast with limited history, state the limitation and ask whether to proceed with a simple scenario-based approach.

# EXAMPLES
**User:** “Why did revenue increase in April?”
**Assistant:** (Decomposes into price/volume/mix, then gives an email-ready summary.)

**User:** “Did we raise prices to offset lower demand?”
**Assistant:** “This table shows price and volume changes, but it doesn’t contain intent or decision context. I can describe what changed and estimate contributions, but I can’t confirm why without additional context.”

What’s different once the instructions are structured this way is not just the final answer. It’s the shape of the interaction.

In the first image, notice what the agent does before it does any analysis at all. Instead of guessing which comparison matters, it asks a clarifying question about the period to focus on. That’s the agent following the instructions you gave it to avoid making assumptions when a request is underspecified. This is the exact opposite of the “confident but fuzzy” behavior you get from a stretched Copilot prompt.

Better agent mode instructions

Once the period is clarified, the agent doesn’t just summarize the change. It decomposes it. You can see it explicitly separating price, volume, and mix, reconciling the bridge back to the reported totals, and calling out where effects are zero. This is where the earlier instruction work really pays off. The agent isn’t improvising an explanation. It’s executing a defined analytical workflow.

What’s important here is that the extra detail isn’t gratuitous. The structure keeps it controlled. Each section answers a specific question: what changed, how much it contributed, and how it ties back to the table. There’s no narrative drift and no implied causality, because the guidelines explicitly rule that out.

Next, when you push a bit further, for example by asking why revenue increased in April, the difference becomes clearer. The agent doesn’t jump to a new story or repeat itself loosely. It builds on what’s already been established. It restates the headline result, walks back through the variance components in an orderly way, and then produces a stakeholder-ready summary that’s explicitly descriptive rather than causal. It also makes its assumptions visible and suggests a clear next step if you want to break the mix effect down further.

Conclusion: making agents make sense

Microsoft has used the phrase “vibe working” to describe the use of Agent Mode in Excel, but when you look at what’s required to get consistent results from an agent, the day-to-day reality is more structured than that label suggests.

The instruction set that produced the behavior above is fairly specific. It spells out what the agent is responsible for, how it should reason through a change, and when it should pause rather than push forward. That level of detail isn’t accidental. It’s what allows the agent to behave sensibly when you’re no longer there to steer each question.

There’s still plenty of room for loose, exploratory prompting. In fact, that’s often the most natural way to work when you’re thinking through a problem for the first time. Asking a half-formed question and seeing what comes back is a perfectly reasonable way to use Copilot.

Agents start to make sense in a different situation. They’re useful when you want the same kind of reasoning applied repeatedly, across variations of a question, without having to restate your expectations each time. In those cases, the upfront effort of writing clearer instructions pays off by reducing guesswork later.

Agent Mode works best when you capture the parts of your thinking that shouldn’t change from question to question. Once those are set, the agent behaves more consistently, which makes the results easier to rely on.

How to stay safe using Copilot and AI tools in Excel
https://stringfestanalytics.com/how-to-stay-safe-using-copilot-and-ai-tools-in-excel/
Mon, 08 Dec 2025

The post How to stay safe using Copilot and AI tools in Excel first appeared on Stringfest Analytics.

In my AI for Excel and Copilot trainings, one of the most common questions I get is some variation of:

“Is Copilot safe? Should I be worried about data privacy?”

It comes up so often I figured… alright, time to write a post about it. And just to be clear: I am not a cybersecurity expert. I am not an IT expert. I am an Excel person. I know the tools, I know how they work, and I know enough about data products to say: use the paid Copilot tools, not the free ones.

I’ve also been around technology long enough to see patterns that people miss when they ask this question. So let’s start there.

Is this really a new concern, or did we just start noticing?

Every time someone asks whether Copilot is “safe,” there’s this underlying assumption that AI is somehow the first time we’ve ever put sensitive data into the cloud. And it just… isn’t.

We’ve been trusting the cloud with the most intimate parts of our work and personal lives for a very long time: email, Teams, payroll systems, banking apps, personal photos, HR files, entire corporate file repositories. If you send an email, look at your pay stub online, or store documents in OneDrive or SharePoint, you are already way deep into cloud territory.

So if Copilot is the thing that suddenly makes you stop and go, “Wait, is this secure?” … that isn’t because AI invented a brand-new category of danger. It’s because we’ve forgotten just how much of our digital life was already happening in the cloud long before AI showed up at the office.

And here’s the important part: when you use Microsoft 365 Copilot (the enterprise paid version, that is) you’re not sending files to some mysterious AI in the sky. You’re staying inside the same security framework your organization already trusts for email, files, chat, HR documents, and everything else. Copilot respects your permissions. It doesn’t show you anything you can’t already access. It doesn’t train the model with your data. It runs inside the same Microsoft boundary your IT team signed off on years ago.

If your company is comfortable storing payroll spreadsheets and legal contracts in the cloud, then Copilot looking at an Excel workbook is not the dramatic leap people think it is.

Where does the real risk actually come from?

When you look at the stories about “AI leaks,” the common theme isn’t that the AI itself malfunctioned. It’s that someone pasted sensitive data into a free website or uploaded confidential files to a random tool “just to try it,” or used a personal AI account for company work.

That’s not an AI flaw. That’s a lack of discipline and guardrails.

If a company has no real governance, no clear permissions, no sensitivity labels, and no education on what should or shouldn’t be shared, then honestly anything is risky. Excel itself has caused more data breaches than most AI tools ever will. Email has definitely done more damage than Copilot.

People sometimes want AI to magically protect them from their own bad habits. That’s not how this works. If something is sensitive, you treat it as sensitive, no matter what tool you’re using.

What should regular Excel users actually do?

Most people reading this aren’t security officers or IT directors. You’re analysts, managers, trainers, end-users. You’re just trying to do your job without getting yelled at by security or by Copilot.

At that level, the guidance is pretty simple:

Use the tools your company gives you. If they’ve purchased Copilot for Microsoft 365 or GitHub Copilot for Business, use those. Don’t run to a free chatbot because it’s “faster” or “easier.”

Treat AI tools with the same common sense you use with email. If you wouldn’t email something to a random address, don’t paste it into a random AI tool either.

Respect permissions. Copilot can’t override them, but you can definitely override yourself by giving it something you shouldn’t be handling in the first place.

And if you’re in a role where people look to you for guidance — a trainer, a team lead, the “Excel person” everyone goes to — then it’s worth pushing for actual education and clear governance around AI usage. People aren’t doing risky things because they’re malicious; they’re doing it because no one ever explained the boundaries.

AI is not a shortcut around process. It’s just another tool in the toolbox, and it works best when the underlying data environment is clean and governed.

The big picture (and the part no one wants to say out loud)

AI isn’t the scary part. The cloud isn’t suddenly new. None of this is as dramatic as it sounds.

What’s really happening is that people are finally noticing something they’ve been doing for years — trusting cloud services with their work — now that AI is involved. But if your organization already stores HR files, payroll data, customer contracts, financial history, and sensitive documents in Microsoft 365… then Copilot is not introducing a brand-new risk category. It’s operating inside the same house as everything else.

So instead of treating AI like the unknown boogeyman, treat it like what it actually is: another cloud-based helper living in the same environment you’ve already been using.

Use the enterprise version. Follow the rules your organization already put in place. Don’t paste sensitive data into public tools. Treat AI with the same respect you should already be giving email, Teams, SharePoint, and everything else that quietly holds your entire working life.

If you do that, Copilot becomes not a threat, but a legitimately helpful assistant, one that can help you work smarter in Excel without creating a new security headache for your team.

If you’d like something you can hand to your team or stick on the virtual bulletin board, I put together the following Copilot safety guide to download. It hits the practical stuff: what’s safe, what’s not, and the habits that actually matter.

If you want to go deeper into this, not just the tech, but the skills and habits teams actually need to use Copilot responsibly and effectively… this is exactly the kind of work I help organizations with.

Happy to help you and your team build these competencies the right way.

The post How to stay safe using Copilot and AI tools in Excel first appeared on Stringfest Analytics.

How to get AI-ready even if you don’t have paid Copilot https://stringfestanalytics.com/how-to-get-ai-ready-even-if-you-dont-have-paid-copilot/ Thu, 20 Nov 2025 23:10:31 +0000 https://stringfestanalytics.com/?p=16302 People ask me all the time what to do if they don’t have paid Copilot or Power Automate or any of the other “new wave” Microsoft tools. Usually it comes from two groups: analysts who genuinely want to learn this stuff, and managers who are getting asked about it and don’t want to make a […]

The post How to get AI-ready even if you don’t have paid Copilot first appeared on Stringfest Analytics.

People ask me all the time what to do if they don’t have paid Copilot or Power Automate or any of the other “new wave” Microsoft tools. Usually it comes from two groups: analysts who genuinely want to learn this stuff, and managers who are getting asked about it and don’t want to make a blind commitment.

The funny thing is: not having Copilot isn’t really the barrier people think it is. Most teams have bigger, older problems that no AI tool is going to magically solve. And honestly, getting those things sorted out now will make life a lot easier once you do turn these tools on.

Start by improving the data you already have

As an Excel trainer and MVP, I see the same patterns across scores of organizations: the data people rely on every day is held together by luck and muscle memory. Columns shift around, naming is all over the place, refreshes break, and everyone has a slightly different version of the same file.

People want Copilot to fix that. It won’t. It can’t. But you can fix many of these broken workflows right now, no Copilot required:

  • Turn your ranges into proper Excel Tables.
  • Move the weekly cleanup steps into Power Query.
  • Stop hard-coding your data sources (pasting CSVs on top of last week’s data, pointing to someone’s Downloads folder, etc.).
  • Keep your raw data intact instead of overwriting it every cycle.
  • If the data comes from an external system, pull it the same way every time. Don’t manually export one week and copy/paste the next.

These small, boring steps are what make a dataset reliable enough for anything downstream: formulas, PivotTables, automation, or Copilot.
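If you ever prototype that same discipline in code (say, in Python in Excel), the steps above amount to a single repeatable cleanup function instead of manual edits. This is only a sketch; the column names and cleanup rules are hypothetical stand-ins for whatever your weekly extract actually contains:

```python
import pandas as pd

def clean_weekly_extract(raw: pd.DataFrame) -> pd.DataFrame:
    """Run the same cleanup steps every week, without touching the raw data."""
    df = raw.copy()                                       # keep the raw extract intact
    df.columns = [c.strip().lower() for c in df.columns]  # consistent naming
    df = df.drop_duplicates()                             # no repeated rows
    df["order_date"] = pd.to_datetime(df["order_date"])   # explicit data types
    return df

# Hypothetical "this week's export" with a stray space and a duplicate row
raw = pd.DataFrame({
    "Order_Date ": ["2025-01-06", "2025-01-06", "2025-01-07"],
    "Amount": [120, 120, 95],
})
clean = clean_weekly_extract(raw)
```

The point is the same as with Power Query: the raw input is never overwritten, and the transformation is something you rerun, not redo.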

If you need a place to start with this, begin with my book Modern Data Analytics in Excel:

It walks through everything from tables to Power Query to building data models the right way. Once the foundations are in place, the rest of the “AI stuff” starts behaving a lot more predictably.

And even before Copilot arrives, remember that if you’re on Microsoft 365, you already have the Analyze Data button in Excel. It’s free, built-in, and a great way to practice asking AI questions about your dataset, without exposing anything sensitive to an external model:

While you’re waiting, build the skills Copilot works best alongside

Additionally, if you have access to Python in Excel, this is a great time to start getting comfortable with it. I don’t mean jumping into machine learning or trying to become a data scientist overnight. I just mean learning the basics: generating clean sample data, reshaping messy tables, doing simple transformations, or sanity-checking calculations.

You don’t need Copilot for any of that. And once Copilot is turned on, having even a tiny amount of Python literacy goes a long way. You understand more of what it’s suggesting. You can verify its logic. You can use Python for the heavy lifting and Copilot for the explanation layer. The two complement each other really well.
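To give a sense of scale, the “basics” described above can be as small as this sketch: reshaping a hypothetical wide table into a tidy, analysis-ready one with pandas (the data and column names are made up for illustration):

```python
import pandas as pd

# Hypothetical "messy" export: one column per month, one row per region
wide = pd.DataFrame({
    "region": ["North", "South"],
    "Jan": [100, 80],
    "Feb": [120, 95],
})

# Reshape to one row per region-month: the layout formulas and PivotTables prefer
tidy = wide.melt(id_vars="region", var_name="month", value_name="sales")
print(tidy)
```

If you can read and tweak something like this, you can already sanity-check most of what Copilot generates for you.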

And there’s a larger reason Python matters so much here: Python is, in large part, the language of modern AI. Almost every major AI model you’ve heard of was trained with Python. The entire machine learning ecosystem—TensorFlow, PyTorch, scikit-learn—lives in Python. Copilot’s Advanced Analysis feature uses Python behind the scenes:

That means when Copilot generates Python for you, you’re speaking the same language the model understands natively. A little Python knowledge lets you sanity-check the code, extend it, and know when something looks off. It’s one of the highest-ROI skills you can build while waiting for Copilot to arrive.

The same goes for Office Scripts and Power Automate if your organization already has them. I’m not suggesting you run out and try to replace Copilot with these tools (you won’t, because again, it’s not meant to replace them). But knowing the basics now means you’ll eventually have a much cleaner handoff between what you do manually, what Copilot helps you with, and what you automate later. Even something as simple as learning how to record an Office Script and look under the hood will make Copilot’s script-generation features feel far less mysterious when they land.

I explain this a lot in my courses: Copilot isn’t a standalone solution. It’s part of a larger ecosystem. A little familiarity with Python, Power Query, and Office Scripts makes your prompts clearer and your results better.

And a quick note on using free AI tools responsibly

If you’re leaning on the free versions of Copilot or ChatGPT while you wait for the paid version at work, that’s completely fine and not a bad idea. But a quick reminder I tell all my corporate clients: don’t paste anything sensitive into them.

Keep it to synthetic data, scrubbed examples, structure and logic, formulas, and generic versions of your workflow. Save the real business data for the paid, enterprise-secured tools when they arrive.

Conclusion

Ultimately, getting “AI-ready” takes far more than purchasing a Copilot license. It requires getting your data into shape, building a few adjacent skills, and creating an environment where Copilot can actually help you once it arrives. Most of the heavy lifting happens long before the AI shows up. Teams that take the time to clean up their inputs now are the ones that see the fastest payoff later.

If you want help getting your team ready for all of this—data foundations, Python, Copilot, Power Query, Office Scripts, or anything in between—I teach this every week for organizations of all sizes. Reach out if you want to talk through what a practical, non-disruptive path to AI-powered Excel looks like for your team:

The post How to get AI-ready even if you don’t have paid Copilot first appeared on Stringfest Analytics.

Python in Excel: How to conduct linear regression with Copilot https://stringfestanalytics.com/python-in-excel-how-to-conduct-linear-regression-with-copilot/ Wed, 12 Nov 2025 16:56:15 +0000 https://stringfestanalytics.com/?p=14989 Data and tech trends come and go, but linear regression has remained one of the most reliable tools in a data analyst’s toolbox. It helps you identify relationships, test hypotheses, and make predictions with a clear view of how different factors influence outcomes. Whether you’re working in finance, marketing, manufacturing, or real estate, few methods […]

The post Python in Excel: How to conduct linear regression with Copilot first appeared on Stringfest Analytics.

Data and tech trends come and go, but linear regression has remained one of the most reliable tools in a data analyst’s toolbox. It helps you identify relationships, test hypotheses, and make predictions with a clear view of how different factors influence outcomes. Whether you’re working in finance, marketing, manufacturing, or real estate, few methods match linear regression for both clarity and ease of use.

For all the benefits, however, Excel’s built-in regression tools were never very user friendly. You had to load the Analysis ToolPak (if you could find it), step through a dated wizard, and then make sense of an output sheet that offered little guidance. Changing your model or presenting the results to others was awkward.

With Copilot, things are much smoother. You can build more advanced models with Python in Excel, understand how they work, and interpret the results directly within your workbook. It’s easier to see what your data is telling you and focus on meaningful conclusions rather than the mechanics.

We’ll explore this using a fuel economy dataset. Download the exercise file below to follow along.

 

If you haven’t used the Advanced Analysis with Python feature yet, take a look at this post:

To get started, we’ll run a very simple linear regression: just one dependent and one independent variable. It’s a good habit to make the scope of your model explicit, even when you’re testing something small. In this case, it makes sense to treat mpg as the variable we’re trying to explain and weight as the factor we think influences it.

Here’s the Copilot prompt I used:

“Run a linear regression in Python where mpg is the dependent variable and weight is the independent variable. Include a summary of the results.”

Copilot automatically fitted the model using the dataset and produced the following regression summary:

Linear regression output
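Behind a prompt like that, Copilot typically generates something along these lines. This is a hedged sketch using statsmodels, with made-up stand-in values since the exercise file itself isn’t reproduced here:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical stand-in for the fuel economy dataset (columns assumed: mpg, weight)
df = pd.DataFrame({
    "mpg":    [18.0, 15.0, 36.0, 28.0, 24.0, 33.0],
    "weight": [3500, 4000, 2000, 2500, 2800, 2100],
})

# mpg is the dependent variable, weight the independent variable
model = smf.ols("mpg ~ weight", data=df).fit()
print(model.summary())

# Heavier cars get fewer miles per gallon, so the slope on weight is negative
print(model.params["weight"])
```

Knowing roughly what this code looks like makes the regression summary far less mysterious when Copilot hands it back to you.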

The interpretation is straightforward: as a car’s weight increases, its fuel efficiency tends to decline. The negative coefficient for weight means heavier vehicles use more fuel. The very small p-value confirms the relationship is statistically significant.

This is the classic starting point for regression analysis: one variable at a time, clear direction, and easily interpretable results. From here, we can begin layering in more predictors to see how horsepower, displacement, or cylinder count refine the story.

Adding more predictors and checking model diagnostics

Now that we’ve built our first model, it’s natural to wonder what other factors might influence fuel economy. Weight appears significant, but horsepower and acceleration could also play a part.

As we start refining our models, we need a way to tell if each new version is actually improving. Two standard metrics help with this: R-squared, which shows how much of the variation in mpg is explained by the predictors, and RMSE, which measures the average prediction error in miles per gallon.

Here’s the Copilot prompt:

“Fit a multiple linear regression model in Python predicting mpg using weight, horsepower, and acceleration. Calculate and return the model’s R-squared and RMSE as a small summary table.”

Multiple regression model outputs

The R-squared value of about 0.71 means roughly 71 percent of the variation in fuel efficiency is explained by these three variables. The RMSE of 4.22 means the model’s predictions are off by about four miles per gallon on average. It’s a noticeable improvement over our single-variable model and a good sign that we’re moving in the right direction.
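Both metrics are simple to verify by hand if you ever want to check Copilot’s work. This sketch fits the same three-predictor model with NumPy’s least squares on hypothetical stand-in data and derives R-squared and RMSE directly from their definitions:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in data; the article's actual columns are mpg, weight,
# horsepower, and acceleration
df = pd.DataFrame({
    "mpg":          [18, 15, 36, 28, 24, 33, 20, 31],
    "weight":       [3500, 4000, 2000, 2500, 2800, 2100, 3300, 2200],
    "horsepower":   [150, 180, 70, 95, 110, 80, 140, 85],
    "acceleration": [12, 11, 18, 15, 14, 17, 13, 16],
})

# Design matrix with an intercept column, fit by ordinary least squares
X = np.column_stack([np.ones(len(df)), df[["weight", "horsepower", "acceleration"]]])
y = df["mpg"].to_numpy(dtype=float)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ beta

# R-squared: share of the variation in mpg explained by the predictors
ss_res = ((y - pred) ** 2).sum()
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot

# RMSE: typical prediction error, in miles per gallon
rmse = np.sqrt(((y - pred) ** 2).mean())
print(pd.DataFrame({"metric": ["R-squared", "RMSE"], "value": [r2, rmse]}))
```

The exact numbers will differ from the article’s (the data here is invented), but the calculation is the same one Copilot runs for you.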

Visualizing predicted versus actual values

Once you’ve built a model and reviewed the metrics, it’s important to see how well the predictions line up with reality. A quick visual check often reveals patterns or problems that numbers alone can miss.

“Plot the predicted vs actual mpg values from the model to check how well the regression fits. Include a line showing perfect predictions for reference.”

Predicted versus actual scatterplot

Copilot produced a scatter plot comparing the model’s predicted mpg values with the actual ones. Each point represents a car in the dataset. The red dashed line shows what perfect predictions would look like, where predicted and actual values are exactly equal.

This visualization gives a quick gut check on model performance. The tighter the points hug that line, the stronger the predictive power. And while the model isn’t perfect, it’s doing a solid job of explaining how weight, horsepower, and acceleration interact to influence fuel efficiency.
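Here’s a sketch of that predicted-versus-actual chart, with hypothetical values standing in for the model’s real output:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; Python in Excel handles display itself
import matplotlib.pyplot as plt

# Hypothetical actual and predicted mpg values from a fitted model
actual    = np.array([18, 15, 36, 28, 24, 33], dtype=float)
predicted = np.array([20, 16, 33, 27, 25, 30], dtype=float)

fig, ax = plt.subplots()
ax.scatter(actual, predicted)

# Red dashed reference line: a point on it is predicted exactly right
lims = [min(actual.min(), predicted.min()), max(actual.max(), predicted.max())]
ax.plot(lims, lims, "r--", label="perfect predictions")
ax.set_xlabel("Actual mpg")
ax.set_ylabel("Predicted mpg")
ax.legend()
```

The tighter the scatter hugs the dashed line, the better the fit, which is exactly the gut check described above.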

Interpreting model coefficients

You might be wondering how each variable contributes. That’s where interpretation comes in, and Copilot can help you reason through it, not just calculate.

Here’s the prompt:

“Interpret the coefficients of the model using statsmodels. Which features have the biggest impact on mpg and in what direction? Explain in plain language.”

Show regression coefficients

Copilot returned a summary showing that both weight and horsepower have negative coefficients. This means that as either of these increases, fuel efficiency tends to decrease. Weight has the strongest influence. Each additional unit of weight leads to the largest drop in miles per gallon. Horsepower also lowers mpg, though not quite as sharply.

Acceleration, on the other hand, shows a very small and statistically insignificant coefficient, suggesting it doesn’t meaningfully affect fuel economy in this dataset. In other words, how quickly a car accelerates doesn’t matter much for mpg once weight and horsepower are already accounted for.

Together, these results tell a clear story: heavier and more powerful cars use more fuel, while quick acceleration on its own doesn’t add much explanatory value.

Checking model assumptions

Once you’ve built and interpreted your model, it’s a good idea to run a few quick diagnostics to make sure the basic assumptions of linear regression hold. One of the most important checks is to look at the residuals, or the differences between the predicted and actual values.

Here’s the Copilot prompt:

“Plot the residuals of the model. Are they randomly distributed? Is there evidence of non-linearity or heteroskedasticity?”

Residuals plot Copilot regression

Copilot produced a residuals vs. predicted values plot. Ideally, the points should be scattered randomly around the zero line. That pattern suggests the model is capturing the data well and that errors are evenly spread across all prediction levels.

In this case, the residuals look mostly random, but there’s a slight funnel shape as mpg increases. That widening spread hints that the model may fit smaller cars a bit more consistently than larger ones, a mild sign of heteroskedasticity. It’s not severe, but it’s worth noting.
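That funnel-shape judgment can also be backed with a quick number. This sketch (using hypothetical residuals) correlates the size of each error with the prediction; a strong positive correlation is the numeric signature of the widening spread described above:

```python
import numpy as np

# Hypothetical predicted values and actuals from the fitted model
predicted = np.array([16, 20, 24, 28, 32, 36], dtype=float)
actual    = np.array([15.5, 21, 23, 30, 29.5, 39], dtype=float)

residuals = actual - predicted

# Crude heteroskedasticity check: do the errors grow as predictions grow?
# A strong positive correlation between |residual| and the prediction
# suggests the funnel shape you'd see in the residual plot.
corr = np.corrcoef(predicted, np.abs(residuals))[0, 1]
print(residuals, corr)
```

It’s not a formal test (statsmodels offers those too), but it turns “the plot looks funnel-ish” into something you can track across model versions.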

Residual plots are one of several ways to check whether your model is behaving properly. You can also look at whether the relationships between predictors and mpg appear roughly linear, whether residuals seem normally distributed, or whether there’s evidence that one error predicts the next. These checks help confirm that the model’s estimates are trustworthy.

Copilot can guide you through these steps, not just by generating plots or statistics, but by explaining what they mean and why they matter. In that sense, it acts less like a calculator and more like a coach, helping you understand the reasoning behind good modeling practice.

Making predictions

Finally, let’s put the model to work in a real-world example. In business settings, the real value of regression often isn’t just understanding relationships. It’s using those relationships to make predictions. Decision-makers care less about the exact slope of a line and more about what it means for future outcomes: how a change in product weight, horsepower, or price might affect performance or profit. A well-built model lets you turn analysis into foresight.

Here’s the Copilot prompt:

“Given a car with 3000 lbs weight, 130 horsepower, and 15 seconds of acceleration, use the regression model to predict mpg.”

MPG prediction Copilot

Copilot returned a predicted fuel efficiency of about 21.6 miles per gallon.

That means for a car with those specifications, the model expects it to travel roughly 21 and a half miles on a gallon of fuel. This is where regression analysis becomes more than just theory. You can use it to estimate outcomes for new observations, guide design tradeoffs, or compare how different features affect performance.
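Mechanically, the prediction is just the fitted equation evaluated at the new inputs. This sketch uses made-up coefficients purely for illustration, so it won’t reproduce the 21.6 figure exactly:

```python
# Hypothetical coefficients from a fitted mpg model:
# mpg ≈ intercept + b_weight*weight + b_hp*horsepower + b_accel*acceleration
intercept, b_weight, b_hp, b_accel = 45.0, -0.006, -0.04, 0.1

new_car = {"weight": 3000, "horsepower": 130, "acceleration": 15}
predicted_mpg = (intercept
                 + b_weight * new_car["weight"]
                 + b_hp * new_car["horsepower"]
                 + b_accel * new_car["acceleration"])
print(round(predicted_mpg, 1))
```

Once you see the prediction as a plug-in calculation, it’s easy to build what-if scenarios: change the weight, rerun the line, and compare outcomes.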

Conclusion

Linear regression remains one of the most practical and interpretable tools in data analysis, and Copilot makes it easier than ever to use inside Excel. Even a simple model can uncover useful insights when built thoughtfully and checked carefully. Metrics like R-squared and RMSE help quantify performance, but visuals and diagnostics often reveal the places where your model fits well and where it struggles.

And in the business world, the real power of regression lies in prediction. The ability to estimate how changes in one factor might influence another turns analysis into something decision-ready.

That said, linear regression isn’t magic. It assumes straight-line relationships and evenly distributed errors, which don’t always hold up with messy real-world data. Outliers, overlapping variables, or curved relationships can throw things off, and that’s where judgment comes in. Copilot can automate the steps, but it still takes a human eye to decide what makes sense.

From here, you might explore adding interaction terms, adjusting variables to handle nonlinearity, or comparing results to more flexible models like decision trees or random forests. You could even use Copilot to test cross-validation or experiment with feature selection to see how stable your model really is.

The post Python in Excel: How to conduct linear regression with Copilot first appeared on Stringfest Analytics.

How to understand Microsoft Fabric as an Excel user https://stringfestanalytics.com/how-to-understand-microsoft-fabric-as-an-excel-user/ Wed, 05 Nov 2025 15:12:58 +0000 https://stringfestanalytics.com/?p=16208 As Microsoft’s data ecosystem continues to evolve, Excel users are hearing more about Fabric, Power BI, and Dataverse. Many are wondering how all these elements fit together. Excel has long been a cornerstone of data analysis and reporting, but as organizations move toward cloud-first, AI-driven architectures, understanding this broader ecosystem is essential. This post explains […]

The post How to understand Microsoft Fabric as an Excel user first appeared on Stringfest Analytics.

As Microsoft’s data ecosystem continues to evolve, Excel users are hearing more about Fabric, Power BI, and Dataverse. Many are wondering how all these elements fit together. Excel has long been a cornerstone of data analysis and reporting, but as organizations move toward cloud-first, AI-driven architectures, understanding this broader ecosystem is essential.

This post explains how Fabric, Power BI, and Dataverse relate to one another, what roles they play in Microsoft’s data architecture and why this matters for Excel users.

Fabric, OneLake and Microsoft’s data architecture

Microsoft Fabric is a unified data platform that brings together storage, analytics, governance, and AI under one umbrella. You can think of it as the foundation upon which modern Microsoft data tools are built.

At its core lies OneLake, a single, organization-wide data lake that serves as the “OneDrive for data.” It’s designed to eliminate data silos and ensure that every analytics tool, from Excel to Power BI to SQL, works with the same, trusted datasets.

Fabric unifies capabilities from technologies like Azure Data Factory, Synapse, and Power BI into one environment. For Excel users, this means that the workbooks you create, the data models you connect to, and the reports you share can all be part of a broader, governed ecosystem rather than isolated files. In other words, you can spend less time managing copies of data and more time analyzing it.

Power BI as a layer of Fabric

Power BI is no longer just a visualization tool. It’s an essential part of Fabric. The relationship between Power BI and Fabric is best described as semantic + platform:

  • Fabric provides the infrastructure: storage (OneLake), compute, and governance.
  • Power BI provides the semantic model: how data is organized, related, and presented.

In practice, Power BI runs on Fabric. When you create a Power BI dataset, it’s stored in OneLake. When you build a report, it can connect to the same Fabric-based data model that other tools (including Excel) use.

Example

A sales team might store raw transaction data in Fabric’s OneLake. Power BI builds a semantic model on top of that data, defining measures such as revenue, profit margin, and year-over-year growth. Excel users can then connect directly to that semantic model, creating PivotTables or custom reports without duplicating data or logic.

For Excel users, this means instead of relying on manually updated spreadsheets or one-off exports, you can work directly with governed, version-controlled data that’s consistent across the organization.

Dataverse and the Power Platform: The operational counterpart

If Fabric is the analytical backbone, Dataverse is the operational brain of the Power Platform.

Microsoft Dataverse stores structured, relational business data used by Power Apps, Power Automate, Power Pages, and Copilot Studio. Unlike Fabric, which is optimized for analytics and large-scale storage, Dataverse is optimized for transactional operations and business workflows.

While Fabric and Dataverse serve different purposes, Microsoft is steadily connecting them. For example, Dataverse data can be shared into Fabric via OneLake shortcuts, making it available for deeper analysis in Power BI or Excel.

Example

A company’s HR team might use a Power App built on Dataverse to track employee training. That same data can be shared to Fabric, where analysts use Excel or Power BI to measure completion rates, visualize trends, and correlate training with performance metrics.

For Excel users, this means that the data you analyze is directly tied to the systems running the business. No more CSV exports or outdated files. Your reports can be live reflections of real operational data.

How Excel fits into this landscape

Excel sits comfortably across both worlds:

  • Excel can connect to Fabric datasets or Power BI semantic models for governed reporting.
  • Excel can update or reference Dataverse data through Power Automate or the Dataverse connector.
  • Tools like Power Query, Python, and Copilot in Excel can leverage both Fabric and Power Platform data sources to summarize, generate, or explain insights, all within the familiar Excel interface.

Example:

An analyst could open Excel, connect to a Fabric dataset of company financials, and use Copilot to summarize quarterly trends and identify outliers. Behind the scenes, that analysis might draw on data stored in OneLake, modeled in Power BI, and enriched through a Power Automate flow from Dataverse.

Comparing the core components

To put all of this into perspective, it helps to compare the key layers of the Microsoft data ecosystem and how Excel interacts with each. Understanding these roles clarifies where Excel fits and why it matters.

  • Fabric: unified analytics platform built on OneLake storage. Optimized for analytical workloads, AI, and reporting. Excel’s role: connect to shared datasets and create governed reports.
  • Power BI: visualization and semantic modeling layer. Optimized for business intelligence and dashboards. Excel’s role: analyze and visualize data models from Fabric.
  • Dataverse: operational data platform. Optimized for apps, workflows, and transactional data. Excel’s role: serve as a source or target for automated workflows.
  • Power Platform: integration and automation layer. Optimized for connecting systems and data. Excel’s role: trigger or respond to actions using Excel data.

When you understand this stack, you can start building workflows that make Excel a strategic player in your data operations rather than just a spreadsheet tool.

Common workflows for Excel users

Understanding these systems conceptually is one thing; seeing them in action is another. The following examples show how Excel can act as a bridge between Fabric, Power BI, and Dataverse in real business workflows.

  • Building a shared dataset: data is loaded to Fabric and modeled in Power BI; Excel connects directly for analysis. (Fabric, Power BI, Excel)
  • Automating data refresh: a Power Automate flow triggers a Fabric dataset refresh when Excel data updates. (Power Automate, Fabric, Excel)
  • Integrating operational data: Dataverse stores CRM records that sync into Fabric for analysis. (Dataverse, Fabric, Power BI)
  • Creating an AI-assisted report: Excel Copilot analyzes a Fabric dataset and generates narrative insights. (Fabric, Copilot for Excel)

These use cases show how Excel users can extend their reach into automation, AI, and advanced analytics, without leaving Excel itself.

Why this matters

Many Copilot and AI-driven capabilities across Fabric and the Power Platform rely on access to data in Fabric or Dataverse. Understanding how these systems interact allows Excel users to:

  • Communicate effectively with IT and data teams about data sources and permissions.
  • Design smarter workflows that avoid redundant data silos.
  • Unlock Copilot capabilities that depend on connected, governed data.

By understanding how data moves through Fabric and the Power Platform, you’ll be well positioned to future-proof your Excel skills and boost your value as an analyst. Even if you don’t yet have the licenses or IT permissions to use every new workflow these tools enable, you’ll still stay aligned with modern trends in data architecture and AI-driven analytics.

Conclusion

Excel remains a critical front door to Microsoft’s data strategy. Its role is evolving from a standalone spreadsheet tool to a gateway into a connected data ecosystem powered by Fabric, Power BI, and Dataverse.

By understanding these relationships, Excel users can modernize their analysis, automate their reporting, and collaborate with IT and data teams on equal footing. In short: you don’t need to stop being an Excel expert. You just need to expand your world.

For more details, explore Microsoft’s documentation for Fabric, Power BI, and Power Platform.

If you’d like some help thinking through how all these pieces fit together and how to future-proof your data strategy, workflows, and talent, you can book a free discovery call below:

The post How to understand Microsoft Fabric as an Excel user first appeared on Stringfest Analytics.

Why you should NOT use Copilot in Excel for reporting (and what to use instead) https://stringfestanalytics.com/why-you-should-not-use-copilot-in-excel-for-reporting-and-what-to-use-instead/ Mon, 27 Oct 2025 20:57:45 +0000 https://stringfestanalytics.com/?p=16146 Every time I run a Copilot for Excel workshop, there’s one question that always comes up: “How can I make Copilot remember my prompts so it can do my weekly reports automatically?” Short answer? You don’t. Long answer? That’s not what Copilot is built for. And you’ll save yourself a lot of headaches if you […]

The post Why you should NOT use Copilot in Excel for reporting (and what to use instead) first appeared on Stringfest Analytics.

Every time I run a Copilot for Excel workshop, there’s one question that always comes up:

“How can I make Copilot remember my prompts so it can do my weekly reports automatically?”

Short answer?

You don’t.

Long answer?

That’s not what Copilot is built for. And you’ll save yourself a lot of headaches if you stop trying to make it something it’s not.

Let’s talk about why Copilot is the wrong tool for routine reporting, what kind of work it is meant for, and which tools in the Modern Excel stack you should actually be using for repeatable reporting workflows.

Copilot is built for creativity, not consistency

Generative AI tools like Copilot excel at (pun intended) creative, open-ended work: exploring, brainstorming, and discovering new insights. Think of it as a brainstorming partner, not a factory worker.

Let’s say you have a dataset of quarterly sales. You might ask Copilot:

  • “Summarize which regions had the biggest change this quarter.”
  • “Find anything unusual in the top-selling products.”
  • “Can you visualize sales trends over time?”

This is where Copilot shines. It helps you see your data from new angles. It’s designed for ad hoc analysis, when you’re exploring, questioning, and trying to understand.

But when you ask Copilot to repeat a series of steps exactly the same way every week, filtering rows, adding calculated columns, or reshaping data, it’s like asking an artist to work an assembly line. That’s not what it’s meant to do.

Copilot is creative by design, not deterministic by nature. And when it comes to reporting, you don’t want creativity. You want consistency.

Power Query should be the backbone of your Excel reporting

If you want Excel to follow the same steps every single time, there’s a built-in tool made exactly for that: Power Query.

Power Query lets you:

  • Import data from multiple sources (Excel files, databases, websites, etc.)
  • Clean and transform it (remove duplicates, change data types, merge tables)
  • Refresh it with a single click

And it remembers every step. You can trace, audit, and repeat your data cleaning process without lifting a finger.

In short:
Copilot = “Help me think.”
Power Query = “Help me do.”

The beauty of Power Query is that it’s deterministic. You know exactly what will happen when you click “Refresh.” It doesn’t reinterpret your instructions; it just executes them faithfully, every time.

This makes it perfect for routine reports, ETL (extract-transform-load) pipelines, and data prep workflows that need to be bulletproof.
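To make that determinism concrete, here is a minimal plain-Python sketch of the "recorded steps" idea (this is illustrative Python, not Power Query M, and the field names and sample rows are invented): every refresh replays the same transformations in the same order, so identical input always produces identical output.

```python
# Sketch of a Power Query-style refresh: a fixed sequence of recorded
# steps replayed identically every time. (Hypothetical field names.)

def refresh(rows):
    """Replay the same cleaning steps on every refresh."""
    # Step 1: remove exact duplicates (keep the first occurrence)
    seen, deduped = set(), []
    for row in rows:
        key = (row["region"], row["month"], row["sales"])
        if key not in seen:
            seen.add(key)
            deduped.append(row)
    # Step 2: change data types (sales arrives as text)
    typed = [{**r, "sales": float(r["sales"])} for r in deduped]
    # Step 3: filter out rows with no sales
    return [r for r in typed if r["sales"] > 0]

raw = [
    {"region": "East", "month": "Jan", "sales": "100"},
    {"region": "East", "month": "Jan", "sales": "100"},  # duplicate
    {"region": "West", "month": "Jan", "sales": "0"},
    {"region": "West", "month": "Feb", "sales": "250"},
]

clean = refresh(raw)
# Refreshing again always yields the identical result
assert clean == refresh(raw)
```

In Power Query itself, each of these steps would appear as a named applied step that you can inspect, reorder, and audit.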

So, if your team is eager to “get into AI,” start here. Master Power Query first. Then you can safely bring Copilot into the mix for analysis, summaries, and creative exploration.

If you’d like a great introduction to Power Query for Excel, check out my book Modern Data Analytics in Excel:

Use Python in Excel for smarter, reproducible analysis

Once your data is structured, the next tool in your Modern Excel stack is Python in Excel.

Python brings repeatability and precision to the kinds of analytics that Copilot can’t reliably reproduce. It’s made for tasks where math, modeling, and consistency matter.

With Python in Excel, you can:

  • Create forecasts and time-series models
  • Automate calculations across large datasets
  • Build custom charts and visualizations
  • Run simulations and advanced statistics

And unlike Copilot, Python code doesn’t “interpret” your instructions differently each time. Once you write a Python cell, it does the same thing every refresh. That makes it ideal for analytical repeatability… the key to trustworthy reports.

For example, you might build a small Python script that calculates month-over-month growth or projects next quarter’s sales. You can test it, verify it, and know it will run identically every time the workbook updates.
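As a sketch of that idea, here is the month-over-month growth calculation in plain Python. The sales figures are made up; inside Excel you would more typically do this with pandas against data prepared by Power Query.

```python
# Month-over-month growth: deterministic logic that lives in a script,
# not in a prompt history. (Figures are invented for illustration.)

monthly_sales = {"Jan": 100_000, "Feb": 110_000, "Mar": 99_000}

def mom_growth(sales):
    months = list(sales)  # dicts preserve insertion order
    return {
        curr: (sales[curr] - sales[prev]) / sales[prev]
        for prev, curr in zip(months, months[1:])
    }

growth = mom_growth(monthly_sales)
print(growth)  # Feb: +10%, Mar: -10%
```

Because the logic is explicit, you can test it once and trust it to produce the same answer on every workbook refresh.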

This doesn’t mean Copilot is useless here. Copilot can help write your Python code, explain it, or spot mistakes. But the logic itself should live in your script, not your prompt history. That’s how you scale reliability.

If your team’s reporting needs go beyond simple aggregations—if you’re forecasting, modeling, or visualizing trends—Python in Excel is your next stop after Power Query.

For some quick wins with Python in Excel, check out my short course on Gumroad:

Automate and deliver with Power Automate

Once Power Query gives you clean data and Python delivers accurate analysis, it’s time to close the loop: Power Automate.

Power Automate is where reporting becomes a system. It connects Excel to the rest of your workflow, making sure your insights actually reach people on time, every time.

You can set up flows that:

  • Move data between systems
  • Email reports and KPIs to your team, or post them to Teams or SharePoint
  • Log completion or archive results for auditing

The result? Reports that run themselves. No copy-pasting, no missed steps, no human errors.

And because every flow is visual and trackable, you can always see what happened, when, and why. That’s the kind of accountability Copilot can’t provide.

Power Automate turns your Excel reports into living workflows. Once your structure is built and tested, you can schedule, distribute, and monitor it, all without manual intervention.

Looking to get started with Power Automate? Check out my LinkedIn Learning course:

Copilot comes after the foundations

Once you’ve built your Modern Excel foundation with Power Query handling your data prep, Python managing your analysis, and Power Automate keeping everything on schedule… then it’s time to bring Copilot back in.

At this stage, Copilot becomes a creative assistant again. You can use it to:

  • Draft insights from your refreshed data
  • Suggest new KPIs or comparisons
  • Generate visuals and summarize patterns
  • Turn results into executive summaries or reports

But here’s the truth too many people miss: AI won’t fix sloppy spreadsheets or bad habits.

If your workbooks are full of merged cells, inconsistent column headers, or manual copy-paste steps, no AI model can save you from chaos. It’ll just automate your mistakes faster.

That’s why building strong Excel foundations isn’t optional. You can’t skip the groundwork and expect Copilot to fill in the gaps. The best AI tools amplify what’s already there; if your Excel fundamentals are weak, AI will only magnify the mess.

Before your team learns to prompt Copilot, make sure they can build clean tables, write structured formulas, and manage data with Power Query. Once the basics are solid, Copilot becomes a genuine multiplier, not a magic wand.

Build your Modern Excel learning plan

If you want your organization using AI in Excel the right way, not just what sounds trendy, let’s talk.

I’ll help your team get a tailored learning plan that fits your actual workflows and goals. Together, we’ll build your foundation with Power Query, expand your analytical range with Python in Excel, and streamline everything with Power Automate.

Then, once the structure is in place, we’ll layer in Copilot where it adds the most value: helping your people think faster, explore smarter, and analyze creatively.

📩 Get in touch below to build a training plan that sets your team up for success with the entire Modern Excel stack.

The post Why you should NOT use Copilot in Excel for reporting (and what to use instead) first appeared on Stringfest Analytics.

]]>
How to understand the modern Excel AI stack https://stringfestanalytics.com/how-to-understand-the-modern-excel-ai-stack/ Wed, 22 Oct 2025 23:08:12 +0000 https://stringfestanalytics.com/?p=16136 A client recently asked me to give a kind of “state of the union” talk on Excel and its growing AI stack. And honestly, it’s not wrong to call it messy. There are so many new pieces floating around. Even Microsoft admits that Copilots and Agents are not the same thing, yet you use Copilot […]

The post How to understand the modern Excel AI stack first appeared on Stringfest Analytics.

]]>
A client recently asked me to give a kind of “state of the union” talk on Excel and its growing AI stack. And honestly, it’s not wrong to call it messy. There are so many new pieces floating around. Even Microsoft admits that Copilots and Agents are not the same thing, yet you use Copilot Studio to build Agents! 🤔

Still, there’s a structure forming. Over the past year, the Excel ecosystem has been reshaped around a much broader vision of AI and automation. Excel is no longer a canned set of gridlines… it’s a gateway to an entire AI-powered data platform. If you can start seeing how these parts connect, you’ll be way ahead of most analysts and organizations.

Let’s walk through what I see as the emerging stack for AI and Excel, as well as some of my resources to get you started.

Power BI and Dataverse

At the foundation of the stack sits Power BI and Dataverse. Think of this as the data governance and storage layer, the place where your organization’s data actually lives and is managed. Power BI remains the visualization front end, but Dataverse is the real star for those moving beyond spreadsheets. It provides structured, secure tables of data that can be shared across Excel, Power Apps, and Power Automate without the chaos of file versions and email attachments.

In practical terms, Dataverse acts as your organization’s “truth layer.” When you connect Excel to Dataverse, you’re no longer pulling from CSVs or manually refreshing reports. Instead, you’re working directly with live, centralized data. This means your Copilot queries and Python models in Excel are referencing the same trusted data that your BI dashboards and apps do.

For a deeper dive on this, I wrote about how Excel fits into the Power Platform and Dataverse ecosystem here:

You can also check out my LinkedIn Learning course on the basics of Power BI for Excel users who are new to the Power Platform. It’s a great starting point, and the skills you’ll learn include many of the other tools mentioned in this post, like Power Automate and Copilot Studio. Definitely a keeper!

Power Query/Dataflows

Next comes Power Query and Dataflows. Power Query has long been Excel’s built-in ETL (extract, transform, load) tool, allowing analysts to clean, reshape, and combine data before analysis. But with Dataflows, this logic can be pushed into the cloud. Instead of each workbook running its own refresh process, you can define transformations once and share them across the organization.

If Dataverse is your source of truth, Power Query and Dataflows are how you make that truth usable. They standardize messy spreadsheets, merge data from multiple systems, and prepare clean tables for analysis. And since Dataflows can feed directly into both Power BI and Excel, your analysts can stay in Excel while working with enterprise-grade pipelines.

I covered how to connect Excel to Dataverse via Dataflows in detail here:

For a deeper dive into the fundamentals of Power Query in Excel, take a look at my book Modern Data Analytics in Excel:

Excel

Now we arrive at familiar territory: Excel itself. Except this isn’t the Excel of even five years ago. Today’s Excel contains multiple AI-powered features that together form the intelligence layer of the stack: Copilot, Python, and now Agent Mode.

Copilot

This is the most visible AI layer, allowing users to generate formulas, create charts, and summarize data through natural language. It’s the first step toward conversational analytics inside the spreadsheet. You can ask it to “summarize sales by region” or “highlight outliers in this column,” and it will produce working Excel formulas or visualizations for you.

But Copilot doesn’t replace your analytical thinking. It depends on your ability to ask the right questions and recognize when its answers don’t quite make sense.

To get started with Copilot in Excel, check out my course on LinkedIn Learning:

Python in Excel

Next comes Python in Excel, which bridges the gap between Excel users and the data science ecosystem. Python unlocks advanced analytics, machine learning, and visualization capabilities directly in the workbook. You can import packages like pandas, numpy, or matplotlib and perform operations that were once out of reach for Excel alone. This means you can run predictive models, clean data programmatically, or create custom visuals, all while maintaining Excel’s familiar interface.

For a quick, practical overview of 15 ready-to-use Python in Excel examples, check out my short course on Gumroad.

What’s especially exciting is that Copilot and Python now work together through the Advanced Analysis experience. Instead of writing Python code manually, you can ask Copilot to generate it for you. For instance, you might type “show me a histogram of revenue distribution by region” or “forecast next quarter’s sales with a linear model,” and Copilot will return executable Python code that runs right inside your workbook.
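For illustration, here is roughly what the forecasting prompt boils down to: a least-squares line fitted over quarter numbers. This is a hand-written plain-Python sketch with invented figures, not actual Copilot output (which would more likely use pandas or scikit-learn).

```python
# A simple linear model for "forecast next quarter's sales":
# ordinary least squares over quarter numbers. (Figures invented.)

quarters = [1, 2, 3, 4]
sales = [200.0, 220.0, 235.0, 260.0]

n = len(quarters)
mean_x = sum(quarters) / n
mean_y = sum(sales) / n

# Slope = covariance / variance of the quarter index
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(quarters, sales)) \
        / sum((x - mean_x) ** 2 for x in quarters)
intercept = mean_y - slope * mean_x

# Project quarter 5
next_quarter = slope * 5 + intercept
print(round(next_quarter, 1))  # → 277.5
```

The value of having Copilot return code like this, rather than just an answer, is that you can read, test, and rerun the model yourself.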

To see Advanced Analysis in action, check out this post:

It’s a major leap toward making Excel a full analytics development environment: one where formula-based logic, natural language prompts, and code-based analysis coexist seamlessly.

Agent Mode

Agent Mode represents a major shift from single prompts to full reasoning workflows. Copilot is built around a one-shot model: you ask a question, it answers. Agent Mode, by contrast, uses an iterative reasoning loop that plans, executes, validates, and retries until the output meets the user’s intent. Rather than just speeding up a task, Agent Mode can manage an entire workflow under your supervision, much like delegating to a junior analyst. This means a tool that doesn’t just write formulas for you: it can build reports, validate totals, format outputs, and so much more.

Learn more in my guide on getting started with Agent Mode here:

Office Scripts and Power Automate

Once you’ve built intelligence into your Excel processes, you’ll want to execute them reliably. That’s where Office Scripts and Power Automate come in. Office Scripts lets you record and reuse repeatable actions in Excel on the web: cleaning data, formatting tables, or updating charts. When paired with Power Automate, those scripts become part of larger workflows that run automatically, even when you’re not in Excel.

This combination is how Excel begins to extend its reach across the wider Microsoft 365 ecosystem. A workbook can now refresh data, apply formatting, run calculations, and send updates entirely on its own. A Power Automate flow might open an Excel file stored in OneDrive, trigger a script to recalculate KPIs, and post the results as a formatted summary in Teams. Another flow might collect survey responses from Microsoft Forms, append them to a central Excel table, and update a dashboard every morning. The line between spreadsheets, communication tools, and business systems becomes almost invisible once these pieces are connected.

Power Automate with Office Scripts essentially turns Excel from a static reporting tool into an active participant in your organization’s workflows. It’s where business logic meets execution.

To learn more about these two tools, check out my LinkedIn Learning courses covering each:

Copilot Studio

At the top of the stack sits Copilot Studio, the tool that connects everything else. Copilot Studio lets you build and manage custom copilots and agents that can interact with Excel, Power Automate, and external systems through connectors and APIs. If Copilot is your assistant and Agent Mode is your analyst, Copilot Studio is your command center.

With Copilot Studio, you can design domain-specific copilots that draw from your organization’s own data sources and workflows. A finance Copilot can answer questions about budget performance by querying live Excel data from Dataverse. A project management Copilot can notify stakeholders when milestones are delayed by triggering a Power Automate flow. An HR Copilot might summarize headcount changes from an Excel table or pull analytics from Power BI. In each case, the Copilot is not a static chatbot: it’s an orchestrator that understands context, retrieves information, and can take action.

The real potential of Copilot Studio lies in this orchestration. You’re doing more than just monitoring your data. You’re building systems that can reason across multiple layers of the Microsoft stack and perform tasks end to end.

For an example of how this works, see my tutorial:

What’s fascinating is that Copilot Studio uses many of the same components we’ve already discussed. Your Excel files can act as data sources, your Office Scripts can become agent actions, and your Power Automate flows can serve as orchestration layers. Excel remains the front door, but now the system behind it can reason, decide, and act.

Where it’s all heading

Right now, it’s fair to call this ecosystem messy. The boundaries between products aren’t fully clear, features are evolving fast, and documentation can lag behind the technology. But when you zoom out, the direction is unmistakable.

Excel is becoming the user interface to a much larger AI and automation ecosystem. Analysts will soon spend more time designing workflows, defining logic, and validating insights than manually crunching numbers. The winners will be those who can think across tools—connecting Power Query to Python, linking Office Scripts to Power Automate, and embedding their logic into custom Copilot experiences.

The tools are powerful, but the key is systems thinking. Your team needs analysts who understand how data flows from one layer to another, how automation can scale their work, and how to evaluate AI outputs critically. Without that mindset, you risk building disconnected tools that never deliver true value.

To see how this future might play out when it comes to Excel-based training and skills development, check out this post:

Conclusion: build your strategy now

The best thing you can do right now is get your analysts’ skills ducks in a row. Learn how these tools relate to one another. Start experimenting with small automations. Map out your data pipelines and workflows before the technology overwhelms you.

If your organization is trying to make sense of how to connect Excel, Power BI, and the Power Platform into a cohesive AI strategy, I’d love to help. You can book time with me here to talk through where you are, what you want to achieve, and how to structure a roadmap that turns this messy new world into a clear competitive advantage:

The post How to understand the modern Excel AI stack first appeared on Stringfest Analytics.

]]>
How to choose between Copilot, Agent Mode, and Copilot Studio in Excel https://stringfestanalytics.com/how-to-choose-between-copilot-agent-mode-and-copilot-studio-in-excel/ Fri, 10 Oct 2025 18:20:12 +0000 https://stringfestanalytics.com/?p=16015 Microsoft keeps expanding what Excel can do with AI. First came Copilot, then Agent Mode. At the same time, Copilot Studio and Agent Flows are entering the picture. The result is powerful but also confusing. Many people are trying to figure out what each tool is for and when to use it. This post explains […]

The post How to choose between Copilot, Agent Mode, and Copilot Studio in Excel first appeared on Stringfest Analytics.

]]>
Microsoft keeps expanding what Excel can do with AI. First came Copilot, then Agent Mode. At the same time, Copilot Studio and Agent Flows are entering the picture. The result is powerful but also confusing. Many people are trying to figure out what each tool is for and when to use it.

This post explains how to think about Agent Mode, how it compares with Copilot, and why both still matter. It also looks at where tools like Copilot Studio and Agent Flows fit into the broader Microsoft ecosystem for Excel users.

Copilot vs. Agent Mode

At first glance, Copilot and Agent Mode sound like two versions of the same thing. Both involve AI that interacts directly with Excel. In reality, they have very different design goals.

Copilot is a helper. It is designed for the small, piecemeal tasks that analysts perform every day. You might ask Copilot to clean up a dataset, write a complex formula, summarize a range, or create a quick chart. It provides targeted help within the context of the workbook you already have open.

Agent Mode is a builder. Instead of working cell by cell, it can take a broad instruction and generate a complete workbook. You might tell it to build a quarterly sales dashboard or create a forecasting model for next year. It can create sheets, link formulas, and even write explanations. It is far more autonomous and structured around end-to-end creation.

A simple comparison helps clarify the difference:

Feature       | Copilot                    | Agent Mode
------------- | -------------------------- | --------------------------------------
Purpose       | Task assistance            | Full workbook creation
Scope         | One request at a time      | Multi-step process
Strength      | Works with existing files  | Builds from scratch
User Role     | Active collaborator        | High-level supervisor
Best Used For | Quick help and debugging   | Prototyping new reports or dashboards

This distinction matters because it changes how you work with Excel. Copilot sits beside you while you work. Agent Mode takes your prompt, runs with it, and delivers a finished product.

Excel Copilot: When piecemeal still wins

It might sound like Agent Mode is the clear winner. After all, if it can build an entire model for you, why not use it all the time?

The reason comes down to how analysts actually work. Most of us are not starting from a blank sheet. We are maintaining workbooks that already exist. They might be forecasting models, KPI dashboards, or monthly reports that have evolved over years. They are usually mission-critical, connected to multiple data sources, and fragile in places.

In that context, incremental help is often safer and more realistic than full automation. You want a tool that can step in, understand what is there, and fix small issues without breaking the logic. Copilot handles this better right now. It can explain formulas, generate snippets, or reformat data without taking over the file.

Agent Mode, on the other hand, behaves like a blank-slate designer. It is better at starting fresh than at understanding what is already built. From what I have seen so far, it struggles when the goal is to repair or optimize an existing model “in flight.” It tries to interpret your workbook, but the context often gets lost.

Analysts know this feeling well. Sometimes it is easier to start over than to fix what is broken. That is exactly how Agent Mode currently operates. It builds something new rather than carefully weaving into the logic you already have.

The bigger picture: Copilot Studio and Agent Flows

There is also a broader shift happening in how analysts work. Excel is no longer the single destination for analysis. It is part of a much larger ecosystem.

Data now flows through Power BI, SharePoint, OneDrive, Dataverse, and external sources like SQL or Azure. Analysts collaborate in Teams or push reports through Power Automate. In that world, Agent Mode’s single-application focus stands out. It can do amazing things inside Excel but does not yet extend far beyond it.

That is why Copilot Studio and Agent Flows are such important developments. They bring the same agentic logic to the entire Microsoft 365 environment. Copilot Studio allows you to design and deploy your own custom agents that can move between apps. You can connect Excel to Outlook, Teams, or Power BI without writing a line of code.

Agent Flows take that one step further. They combine the logic of Power Automate with the intelligence of AI. Instead of following rigid “if this, then that” rules, an agent can interpret the situation and decide what to do next. It is automation that learns context rather than just repeating instructions.

Seeing the Layers

A helpful way to visualize this evolution is to think in terms of layers:

Each layer builds on the previous one. Copilot helps you perform tasks within Excel. Agent Mode automates the creation of a full workbook. Copilot Studio and Agent Flows orchestrate those agents across the broader Microsoft stack.

Practical takeaways

If you are curious about where to begin, start with Copilot in Excel. It remains the foundation for understanding how AI works inside your spreadsheets. You can take my LinkedIn Learning course on Copilot in Excel for free. The course focuses on practical, real-world examples that help you build confidence before exploring the more advanced agentic tools.

Take the course here →

Once you are comfortable with Copilot, try Agent Mode. Test how it builds reports from scratch and see where it fits into your process. Use it not as a replacement but as a design partner that can show what is possible.

If your team is trying to make sense of all this, whether it’s how to integrate these tools into your existing workflow or how to train analysts for the next generation of AI-powered Excel, I am building training sessions and advisory resources around exactly that.

You can get in touch here to discuss what your organization is exploring, or connect with me on LinkedIn for new articles, sessions, and hands-on tutorials. I am still learning myself where the biggest opportunities for organizations lie, and your feedback helps shape that journey.


The post How to choose between Copilot, Agent Mode, and Copilot Studio in Excel first appeared on Stringfest Analytics.

]]>
How to get started with Agent Mode in Excel https://stringfestanalytics.com/how-to-get-started-with-agent-mode-in-excel/ Tue, 30 Sep 2025 01:42:55 +0000 https://stringfestanalytics.com/?p=15997 Copilot has been around for a couple of years now, and we’ve seen the flood of “Copilot” tools across Microsoft. Many of them have been underwhelming. Even Copilot in Excel, for all its promise, felt like a letdown to a lot of people. The reason, I think, is that many were expecting something more powerful. […]

The post How to get started with Agent Mode in Excel first appeared on Stringfest Analytics.

]]>
Copilot has been around for a couple of years now, and we’ve seen the flood of “Copilot” tools across Microsoft. Many of them have been underwhelming. Even Copilot in Excel, for all its promise, felt like a letdown to a lot of people.

The reason, I think, is that many were expecting something more powerful. Something that could not just answer a one-off prompt, but actually take on multi-step work the way an analyst would. That is exactly what Agent Mode, just released, starts to deliver.

In this post we will take a look at how Copilot works as a generative AI tool, how Agent Mode is different, and why this shift matters for finance and accounting use cases.

Excel Copilot vs Agent Mode

Copilot in Excel is built around the idea of generative AI: you give it a prompt, it produces an answer. That might be a formula, a quick summary, or even a chart. Copilot also tries to go further by automating workflows directly in the grid. It attempts to run multi-step actions, not just describe them. The challenge is that this is very difficult to do in a one-shot way. The architecture of Excel, an old program by software standards, often means the results are inconsistent, incomplete, or require heavy editing.

Agent Mode approaches the problem differently. Running natively on the web, it uses a reasoning loop: plan, execute, validate, retry. Instead of firing off a single attempt, it reflects on the results, corrects issues, and keeps going until the outcome aligns with the user’s intent. The effect is closer to delegating work to a junior analyst who can manage a process end-to-end under your guidance, rather than just a helper speeding up single tasks.
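The loop itself is easy to picture. Here is a toy plain-Python sketch of plan-execute-validate-retry; the task, checker, and numbers are all hypothetical, and the real implementation is internal to Copilot.

```python
# Toy sketch of an agentic reasoning loop: keep attempting until the
# result validates, feeding failure details back into the next attempt.

def run_agent(task, execute, validate, max_attempts=3):
    feedback = None
    for attempt in range(1, max_attempts + 1):
        result = execute(task, feedback)  # plan + execute one attempt
        ok, feedback = validate(result)   # check against user intent
        if ok:
            return result, attempt
    raise RuntimeError("could not satisfy the request; escalate to the user")

# Hypothetical task: totals must reconcile. The first attempt "forgets"
# a row; the validator's feedback triggers a corrected retry.
rows = [120, 80, 50]

def execute(task, feedback):
    return sum(rows[:-1]) if feedback is None else sum(rows)

def validate(total):
    expected = sum(rows)
    return (total == expected, f"total {total} != expected {expected}")

result, attempts = run_agent("reconcile totals", execute, validate)
# result == 250 after 2 attempts
```

A one-shot tool would have returned the first (wrong) total; the loop is what lets the agent catch and correct its own mistake before you ever see the output.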

This distinction is especially important for finance and accounting. Copilot might be fine for generating a formula to calculate year-over-year growth, but Agent Mode can take on full workflows like building a close report, running variance analysis, and producing a management summary with KPIs. It is the difference between a smart assistant and a system you can actually delegate work to.

Here’s a simple comparison:

Getting started with Agent Mode

Agent Mode is currently available to Frontier Program users. Check out this post for more notes on availability and rollouts.

Start by thinking in terms of workflows, not single formulas. Copilot was good at writing a function or formatting a chart. Agent Mode shines when you can describe an entire process: build a report, validate totals, highlight anomalies, or create a dashboard.

A good rule of thumb is to look for repetitive, rules-based tasks where you currently spend time each month: closing the books, reconciling accounts, preparing KPI packs, or running forecasts. These are strong candidates because Agent Mode can plan, execute, and verify each step.

Another best practice is to stay in the loop. Treat the agent like a junior analyst: let it do the work, then review, adjust, and steer. The strength of Agent Mode is not replacing your judgment, but giving you a structured way to delegate and then refine.

Think less about what formula can this write for me and more about what outcome do I want, and how would I guide someone through it. That shift in mindset is where Agent Mode delivers the most value.

Some use cases of Agent Mode

For this series of use cases, I’m going to focus on accounting and finance examples. That’s partly because this is a big part of my audience, and partly because this kind of work poses particular challenges for AI: financial models call for a lot of judgment and nuance, with relationships built and presented in Excel in a way that’s easy to use and hard to break.

The examples I share here might seem a bit simple, but the point isn’t to solve every problem at once. Rather, I want to get you thinking about how this tool could fit into your own work. If you’re working through more complex situations and aren’t sure how to approach them, feel free to get in touch and we can set something up.

Loan calculator

Let’s get to work in Agent Mode with a first example. A classic finance task is building a loan calculator. Instead of writing formulas and formatting tables by hand, we can hand off the request to Agent Mode with a single prompt:

“Build a loan calculator that lets me input loan amount, annual interest rate, and term in years. Show the monthly payment, create an amortization table with month, payment, principal, interest, and ending balance. Format it cleanly.”

[Image: Loan calculator built in Excel]

Within seconds, Agent Mode generates a working template. The inputs remain adjustable, so users can change the loan amount, interest rate, or term and immediately see updated results. What we end up with is a simple financial model that is interactive and ready to use.

For years, financial modeling has been one of the areas where Excel experts felt their domain knowledge was safe from automation. Building an amortization schedule or running loan scenarios seemed too detailed and too context-specific for AI. Agent Mode shows how quickly that assumption is changing.

The key point is that Excel professionals still bring the judgment that matters. Deciding which metrics are important, setting realistic ranges for inputs, and interpreting the results requires experience. But AI is beginning to build more of that intuition too, as these tools grow more sophisticated. What used to take a careful build can now be created in moments, leaving analysts to focus on review and decision-making rather than setup.
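For reference, the math behind that template is the standard annuity formula, the same calculation as Excel’s PMT function (ignoring Excel’s sign convention). Here is a plain-Python sketch with made-up inputs; Agent Mode produces this as live Excel formulas instead.

```python
# Loan calculator math: annuity payment plus an amortization schedule.
# (Inputs are invented; in the workbook these would be input cells.)

principal = 250_000   # loan amount
annual_rate = 0.06    # annual interest rate
years = 30            # term

r = annual_rate / 12  # monthly rate
n = years * 12        # number of payments

# Standard annuity payment, equivalent to Excel's PMT(r, n, -principal)
payment = principal * r / (1 - (1 + r) ** -n)

balance = principal
schedule = []  # (month, payment, principal paid, interest, ending balance)
for month in range(1, n + 1):
    interest = balance * r
    principal_paid = payment - interest
    balance -= principal_paid
    schedule.append((month, payment, principal_paid, interest, balance))

print(f"Monthly payment: {payment:,.2f}")  # ~1,498.88
```

Because the inputs drive everything downstream, changing the rate or term recalculates the whole schedule, which is exactly the interactivity the generated workbook preserves.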

Budgets vs actuals

In this next use case we are asking Agent Mode to build a report from a table of budgeted and actual expenses by department. The prompt is:

“Take this table of budgeted and actual expenses by department. Build a report showing variances in dollars and percentages, highlight any variances greater than 10 percent, and create a summary dashboard with charts for executives.”

[Image: Agent Mode budget vs. actual report]

This makes a strong use case because it reflects a real-world workflow that nearly every finance or operations team knows well. Month after month, analysts compare planned versus actual spend, flag significant deviations, and package the results into an executive summary. It is routine but time-consuming work, and it needs to be accurate.

Agent Mode handles this by working through multiple steps:

  • Reading the table structure (departments, months, budget, actual).
  • Calculating variances in both dollars and percentages.
  • Applying rules to highlight outliers.
  • Building summary tables or PivotTables.
  • Creating charts for visualization.
  • Checking that totals and summaries reconcile back to the source data.

Because this is not a single calculation but a chain of dependent tasks, it can take several minutes for Agent Mode to work through the entire process. That is normal. It is reasoning, executing, and validating at each step instead of rushing to a quick but incomplete answer.
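The core variance logic in that chain is simple enough to sketch in plain Python. The department names and figures below are invented; Agent Mode expresses the same rules as Excel formulas and conditional formatting.

```python
# Budget vs. actuals: dollar and percent variances, with a flag for
# anything beyond the 10 percent threshold. (Sample data is made up.)

budget_vs_actual = [
    {"dept": "Sales",      "budget": 50_000, "actual": 56_500},
    {"dept": "Marketing",  "budget": 30_000, "actual": 31_200},
    {"dept": "Operations", "budget": 80_000, "actual": 70_000},
]

THRESHOLD = 0.10  # highlight variances greater than 10 percent

for row in budget_vs_actual:
    row["var_dollars"] = row["actual"] - row["budget"]
    row["var_pct"] = row["var_dollars"] / row["budget"]
    row["flag"] = abs(row["var_pct"]) > THRESHOLD

flagged = [r["dept"] for r in budget_vs_actual if r["flag"]]
print(flagged)  # → ['Sales', 'Operations']
```

The deterministic part (the arithmetic above) is trivial; what Agent Mode adds is stringing it together with table detection, formatting, charting, and reconciliation checks.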

If you want to understand a little more about what is happening technically under the hood, Microsoft has published a detailed walkthrough here.

As requested, Agent Mode gives us a formatted summary table and an executive-style dashboard. The outputs are reproducible, with formulas that lean on modern Excel features like dynamic arrays, which makes the model both flexible and powerful. That said, not everything is perfect. Some of the variance figures feel double-counted, and for some reason the header label in the raw data table ended up in a hard-to-read color. At this point we could either make the adjustments ourselves, or iterate further with Agent Mode, much like we already do when refining results with Copilot.

For example, I asked Agent Mode to narrow down some of the dashboard results, and when it did, the output just happened to line up perfectly with the width of the summary label at the top:

Revised budget vs actuals dashboard from Agent Mode

Financial close report

We have seen how Agent Mode can handle simple models like a loan calculator, and more involved reporting like budget versus actuals. Both of those are valuable, but the real test comes with the kinds of high-stakes processes that finance and accounting teams run every month.

One of the biggest examples is the monthly close. Pulling data from a trial balance, preparing P&L and balance sheet views, running variance analysis, and summarizing KPIs is critical work that normally takes hours. It has to be accurate, consistent, and audit-friendly.

This is where Agent Mode really shows its potential. Instead of piecing together each report manually, you can ask it to build the close pack end-to-end:

“Using this trial balance data for September and August, prepare a financial close pack. Create P&L and Balance Sheet views with standard formatting, highlight account variances greater than 5 percent month over month, and produce a summary tab with KPIs and trend charts.”
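The 5 percent month-over-month check in that prompt can be sketched as follows. The account names, balances, and structure of the trial balance here are invented for illustration; Agent Mode implements the equivalent logic as formulas and conditional formatting in the workbook.

```python
# Illustrative sketch of the month-over-month variance check:
# compare September to August per account and flag moves over 5%.
# All accounts and balances are invented for the example.

trial_balance = {
    # account: (august, september)
    "Revenue":        (120000, 131000),
    "Cost of Sales":  (48000,  49500),
    "Office Expense": (9000,   9100),
}

flagged = {}
for account, (aug, sep) in trial_balance.items():
    pct_change = (sep - aug) / aug      # assumes prior-month balance is nonzero
    if abs(pct_change) > 0.05:          # the 5 percent threshold
        flagged[account] = round(pct_change * 100, 1)

print(flagged)
```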

Financial close agent start

After 254 seconds (just over four minutes) Agent Mode produced a multi-worksheet reporting kit. It leaned on structured table references, dynamic array functions, and other modern Excel features to keep the outputs both readable and dynamic.

Agent Mode P&L

Want to check out these results and try them for yourself? Download the demo file below:

 

Conclusion

Agent Mode is, in all likelihood, a preview of where Excel and Office are heading. The era of standalone apps is giving way to a web-first, AI-driven builder where models and workflows live directly in the cloud.

“Learning Excel” is starting to mean less memorizing functions and more guiding an intelligent system. The skills that matter are shifting toward describing the outcome you want, checking the results, and refining them until they meet your standards.

For finance and accounting teams, this creates an opening to shift routine work like variance reports and reconciliations onto Agent Mode while focusing more attention on interpretation, judgment, and communication. Analysts can spend less time constructing every detail and more time shaping insights that drive decisions.

Agent Mode is still early and imperfect, but the direction is clear. The sooner you begin experimenting with it, the faster you will build the intuition to guide these tools and be ready for when this becomes the standard way of working in Excel.

The post How to get started with Agent Mode in Excel first appeared on Stringfest Analytics.
