How to validate AI-generated Excel reports with Office Scripts
Stringfest Analytics | March 2, 2026

In my last few posts, I’ve been exploring how agents and AI-assisted workflows can support everyday tasks for Excel users.

In this one, I want to focus on what is arguably the biggest risk of using generative AI for data analysis and reporting: getting the numbers wrong.

One of the oldest and most reliable principles in Excel still applies here. If something matters, you cross-check it using a different calculation method. So we are going to compare the total sales returned from our prompt with the total calculated independently using an Office Script. If those two values match, we will treat that as a validation checkpoint and allow the report to move forward.

The dataset is intentionally simple so you can follow along. Upload it to your OneDrive and keep the workbook open, since we’ll need to create an Office Script for it. After that, head over to make.powerautomate.com and we’ll build the flow together from there.

 

If you’re just getting started with these tools for Excel automation, I’d recommend beginning with my LinkedIn Learning courses to get a solid foundation before diving into this workflow.

If you’d like a more step-by-step walkthrough of how this flow works and why we’re structuring it this way, take a look at this post:

Creating the Office Script

In the workbook, I created a simple Summary sheet where the Sales column is totaled and that result is stored in a named cell called TotalSales.

function main(workbook: ExcelScript.Workbook): number {
    // Read the precomputed total from the TotalSales named range
    const totalSales = workbook
        .getNamedItem("TotalSales")
        .getRange()
        .getValue() as number;

    // Log for debugging in the Office Scripts editor, then return the
    // value to Power Automate
    console.log("TotalSales:", totalSales);

    return totalSales;
}
[Screenshot: Sum sales Office Script]

The Office Script retrieves the value from that named range, logs it for debugging, and returns it as a number to Power Automate so we can compare it against the AI-generated total as a validation step.

OK, with the workbook ready, let’s move over to Power Automate. I’m going to create a manually triggered flow for now, though you could easily switch it to run on a schedule or another trigger later. Since our data lives in Excel, there are a few extra setup steps involved before everything is wired together.

Get the workbook content

First, search for the action called “Get file content using path.” For this parameter, you simply point it to your workbook. And remember, you can rename each step so the flow is easier to read later.

[Screenshot: Get path]

Run the custom prompt

Next, choose “Run a prompt” from AI Builder.

Open the Excel workbook [Excel workbook] 
Locate the table containing sales data.



The dataset contains transactional sales data with fields such as:
Date, OrderID, Salesperson, Region, Customer, Product, Category, Units, UnitPrice, and Sales.

Primary KPI:
- TotalSales = sum of the Sales column.

Contextual calculations (for internal use in commentary only):
- TransactionCount = number of rows contributing to TotalSales
- AverageSale = TotalSales / TransactionCount
- TopCategory = category contributing the highest share of TotalSales
- TopSalesperson = salesperson contributing the highest share of TotalSales

Flash Report Objective:

Return:
- TotalSales (numeric only, no formatting)
- Commentary (1–2 concise executive-style sentences)

Commentary Guidance:
- TotalSales is the anchor KPI.
- Use contextual metrics to highlight meaningful performance dynamics such as:
  • Category mix (Hardware vs Electronics vs Service)
  • Revenue concentration by salesperson or customer
  • Volume vs pricing effects (unit-heavy vs premium items)
  • Product mix influence on revenue
- Focus on business interpretation, not mechanics.
- Do NOT restate every metric mechanically.
- Do NOT mention column names, sheets, or calculation steps.
- Do NOT invent trends, time periods, comparisons, or benchmarks not present in the data.
- Neutral, confident executive tone.

Rules:
- Do not filter or modify the workbook.
- Do not fabricate data.
- Return valid JSON only.
- No text outside the JSON object.

Examples of acceptable commentary style:

- "Revenue reflects a balanced mix across Hardware and Electronics, with premium Service transactions meaningfully lifting overall performance."
- "Sales are moderately concentrated among top contributors, though revenue remains diversified across product categories."
- "Performance is supported by steady transaction volume, with higher-priced offerings contributing disproportionate revenue impact."
- "Revenue mix suggests balanced regional contribution without material dependence on a single product line."


Output format:

{
  "TotalSales": number,
  "Commentary": string
}

You can rename this prompt to something clearer, like ‘Sales Flash Report Generator.’ Before saving it, we should test it, and to do that we need to upload a sample workbook. Go to the three dots, open Settings, and turn on the code interpreter.

Next, upload a sample workbook. This file is only for this test run. In the prompt, locate the placeholder [Excel workbook], delete it, type ‘/’, and choose Image or document. That dynamic field is where you will upload the workbook for testing. Later, when we build the flow, we will pass this in as a variable instead of uploading it manually.

[Screenshot: Sales report prompt generator]

Also, under the output panel on the right where it says “Model Response,” we want the result returned as JSON. There are a few setup steps to get there, but once everything is configured you should see a test result come back. Each run may vary slightly, but ideally the sales total is correct. We will validate that by comparing it to the value returned from the Office Script, and the commentary field will just provide some additional narrative context.

We are structuring the output as JSON because it makes it much easier to extract specific pieces for a downstream report. It also gives us a clean way to verify that the TotalSales figure is accurate before pushing anything forward.

Pass the file contents to the prompt

Go ahead and save the prompt. You will notice there is one more parameter we need to configure. For the purposes of the flow, we have to specify where the workbook will come from. The file we uploaded earlier was just a one-time sample for testing.

In the actual flow, the workbook should be passed in dynamically based on its location, which we can retrieve from the previous step. That said, this action expects the file content as raw base64. We can generate that using the base64() function and pass it directly into this step.

base64(body('Get_file_content_using_path'))
[Screenshot: Convert to base64]

Run the Office Script

The prompt has now been run against our workbook, but before we report anything out, we need to confirm the numbers are correct. There are a few ways to validate this, and if you are even slightly risk averse, you should absolutely be checking these totals yourself before sharing them. Even so, this automated cross-check is a smart safeguard.

[Screenshot: Run script in Power Automate]

Our goal is simple: verify that the Total Sales calculated by the generative AI matches the Total Sales calculated by the Office Script. What happens next in the flow will depend on whether those two values agree. Add the step called “Run script” to your flow, then specify which Office Script to execute and which workbook it should run against.

Set up the condition

Now we want to cross-check that the Total Sales calculated by the prompt’s Code Interpreter matches the Total Sales calculated by the Office Script. If the numbers tie out, we will send the flash report. If they do not, the report will not run and instead I will send myself an email to investigate.

To set this up, add a Condition step to the flow and configure it to compare the TotalSales value from the Office Script with the TotalSales value returned by the prompt. The expressions you will need for this comparison are shown below.

outputs('Run_script')?['body/result']
outputs('Run_a_prompt')?['body/responsev2/predictionOutput/structuredOutput/TotalSales']
[Screenshot: Check total sales equality]

If false, send an email

There are plenty of ways to handle this, but I am going to keep it simple and just use the “Send an email” action. You will of course want to use your own email address here, not mine. The idea is simply to send yourself a quick, hard-coded message letting you know the numbers did not match so you can take a closer look.

If you wanted to get fancy, you could include the TotalSales value from the Office Script and the value from the prompt in the email body so you can immediately see the discrepancy. That could be a helpful starting point. Personally, I assume I will need to do some old-fashioned analyst digging anyway, so I am keeping it simple for now.

[Screenshot: Send email if something goes wrong]

If true, post to Teams

Now let’s look at the better outcome. If the two values do match, we will go ahead and post the Sales Flash to Teams.

We will post it as an Adaptive Card. An Adaptive Card is simply a structured JSON layout that Teams knows how to render as a clean, formatted message instead of plain text. Because the prompt's output is already JSON, staying in JSON is the path of least resistance: everything remains structured and consistently formatted from calculation all the way through presentation.

[Screenshot: Post card to Teams]

Of course, that means we need to define a bit more JSON to design the card itself. We will use the Adaptive Card schema framework to do this. Think of it as a template that tells Teams how to display titles, labels, and values. The full card definition is shown below.

You will also notice that I apply a small amount of formatting to the TotalSales variable so it renders as currency, for example $3,760.00 instead of 3760. That final polish makes the flash report feel much more like a proper financial summary.

{
  "type": "AdaptiveCard",
  "version": "1.5",
  "body": [
    {
      "type": "TextBlock",
      "text": "Sales Flash Report",
      "weight": "Bolder",
      "size": "Large"
    },
    {
      "type": "FactSet",
      "facts": [
        {
          "title": "Total Sales:",
          "value": "@{formatNumber(float(coalesce(outputs('Run_a_prompt')?['body/responsev2/predictionOutput/structuredOutput/TotalSales'], 0)), 'C2', 'en-US')}"
        }
      ]
    },
    {
      "type": "TextBlock",
      "text": "@{outputs('Run_a_prompt')?['body/responsev2/predictionOutput/structuredOutput/Commentary']}",
      "wrap": true,
      "spacing": "Medium"
    }
  ]
}

Go ahead and run the flow. Ideally, you should see something like this. If not, you will receive an email alerting you that there was a variance.

[Screenshot: Sales flash report]

Conclusion

There is a lot more we could layer on top of this. Just because the total ties out does not mean the explanation is solid. The math could be right while the commentary is completely off. You might decide to add additional summary calculations to triangulate the results, or tighten the prompt so it is more deterministic and less free to hallucinate details that are not actually in the data.

You could also introduce a formal approval step so the report routes for signoff or rejection before it ever posts to Teams. That would make the entire process more controlled and auditable, especially in a finance setting.

I would love a clean way to directly guide or override the prompt’s commentary within the flow itself, but at least right now I am not seeing a simple, built-in way to do that.

Even so, this demonstrates that with a bit of structure and creativity, you can absolutely cross-check AI-generated reports. There is real opportunity here, and I will keep experimenting and sharing what I discover.

How to upload Excel workbooks to Copilot Studio
Stringfest Analytics | February 24, 2026

Like a lot of people, I’ve been digging deeper into AI agents lately. The difference is that I haven’t spent much time with tools like the Claudverse or OpenClaw. Most of my clients are deeply invested in Microsoft, and getting approval for anything outside that ecosystem can be a real uphill battle.

Microsoft is making it easier to build lightweight Copilots directly inside Microsoft 365, but those tools are limited. They’re essentially chatbots with custom instructions and knowledge sources. They don’t let a user upload a new workbook and actually perform actions on it [EDIT: not entirely true, working on some posts about how to do simple versions of this with 365 agents]. If you want that level of capability, you need Copilot Studio.

I’m still early in my exploration of the more advanced features, and if you want serious depth, Matthew Devaney’s work is excellent. But part of writing these posts is helping me sharpen my own understanding and adapt these ideas for practical Excel uses.

In this walkthrough, we’ll build a simple agent that allows users to upload a workbook and generates a basic data dictionary overview.

Yes, Copilot inside Excel could theoretically handle something like this. What I like about the Copilot Studio approach is that it’s structured. The instructions are predefined. The workflow is engineered once and reused. The user doesn’t have to reinvent the prompt every time.

As I get more comfortable with this platform, I’ll share more advanced examples. For now, let’s start simple. Head to copilotstudio.microsoft.com and follow along if you’d like.

Download the exercise files here:

Setting up the agent

To begin, create a new agent and give it a clear, descriptive name like Workbook Table Data Profiler. If you’d like, assign a custom icon as well to make it easier to spot later. I’ll use the Excel logo so it’s instantly recognizable.

You will also need to complete the Description and Instructions fields. The Description explains what the agent is designed to do and appears in the overview so you and anyone else working with it can quickly understand its purpose. The Instructions define how the agent should behave and what steps it should follow when responding. This is where you shape its logic and expectations.

I will provide the exact text for both sections below so you can copy and paste it directly into Copilot Studio.

[Screenshot: Workbook profiler config]

Be sure to save your work early and often. It is very easy to click away from a page in Copilot Studio and lose unsaved changes, so make it a habit to save after completing each section.

Here is the agent description:

Analyzes an uploaded Excel workbook and generates a structured profile of all structured Excel tables it contains. The agent identifies each table, counts rows and columns, categorizes fields by data type, and provides concise structural observations based strictly on visible column names and data types.

Now the instructions:

You are a data profiler assistant for Excel workbooks.

Your job is to:

1. Open the uploaded Excel workbook.
2. Identify all structured Excel tables in every worksheet.
- Ignore raw ranges that are not defined as structured tables.
3. Count the total number of structured Excel tables.
4. For each structured table:
- Extract the table name.
- Extract the worksheet name.
- Extract the column headers in order.
- Generate a short 1–2 sentence plain-English description of what the table appears to track based only on the column names.

Output rules:

- Only include structured Excel tables.
- Do not modify the workbook.
- Do not describe your reasoning.
- Do not output JSON.
- Do not use Markdown tables.
- Keep descriptions short and clear.
- Follow the exact output format specified in the runtime prompt.

If no structured Excel tables are found, output exactly:

No structured Excel tables were found in this workbook.

Once your overview matches the example shown above, we will move on to configuring the Topics.

Setting up the topic

Next, go to Topics in the left-hand menu and select Add topic, then choose From blank. This is where we define the interaction flow that actually performs the profiling. I will name this topic “Upload workbook and profile data”:

[Screenshot: Upload workbook and profile data]

A topic is essentially a structured conversation path. It controls when the agent kicks in, what it asks the user for, what logic runs behind the scenes, and what ultimately gets returned. In this case, we are building a very focused workflow: upload a workbook, analyze it, and return a structured table profile.

The topic will run roughly like this:

  • Trigger the topic when the user indicates they want to analyze or upload a workbook.
  • Prompt the user to upload an Excel file.
  • Store the uploaded file in a variable.
  • Run a prompt that inspects the workbook and extracts structured information about its tables.
  • Capture the output of that prompt.
  • Return the structured profile back to the user as the final response.

By defining this as a topic, you create a repeatable, reliable workflow instead of relying on one-off prompts every time someone uploads a workbook.

To get started, go ahead and fill in the topic description, as reproduced below. This is what helps the agent determine when to run the topic. That’s what the “trigger” and “the agent chooses” settings are about. When the agent interprets the user’s message as intent to use this workflow, it will automatically kick off the topic.

Triggers when a user wants to analyze or profile an Excel workbook. Prompts the user to upload a file, scans all worksheets for structured Excel tables, and returns a structured summary listing each table’s name, worksheet, columns, and a short plain-English description of what the table appears to track. Only structured tables are included, and the workbook is not modified.
[Screenshot: Topic description]

Prompt the user to upload the file

Now that the user has expressed their intent and we know they want to profile a workbook, the next step is to ask them to upload the file.

[Screenshot: Upload workbook to the chat window in Copilot Studio]

Add an “Ask a question” node and configure it to request a file upload. You can customize the prompt so it clearly tells the user to upload an Excel workbook for analysis. Make sure the input type is set to File so Copilot Studio expects a document rather than text.

[Screenshot: Upload a file entity in Copilot Studio]

When you do this, Copilot Studio automatically creates a variable to store the uploaded file. By default, it will be named something generic like Var1, which is not very helpful. Rename it to something more descriptive, such as varExcelFile. Clear variable names make the flow easier to understand and maintain, especially as your topic grows more complex.

[Screenshot: Write varExcelFile in Copilot Studio]

The variable now contains the uploaded workbook and can be used in subsequent steps. We’ll pass this file into a prompt for analysis. Add a new node, select “Add a tool,” and choose “New prompt” for the next step.

[Screenshot: New prompt]

Setting up the prompt

When you select this option, a prompt editor will open. We’re going to use generative AI to run this prompt against whatever workbook the user uploads. Paste the prompt below into the editor as a starting point.

Goal:
Generate a structured profiling summary for all structured Excel tables in this workbook.

Inputs
Excel file: [Upload Excel workbook]  


Task:

1. Review the entire workbook.
2. Identify only structured Excel tables (ignore normal cell ranges).
3. Count the total number of structured tables.
4. For each structured table, produce one section using the exact format below.
5. Base all observations strictly on visible column names and data types.

Output Format (follow exactly):

Total Structured Tables: <number>

----- TABLE 1 -----
Table Name: <TableName>
Worksheet: <SheetName>

Row Count: <number>
Column Count: <number>

Numeric Columns:
- <ColumnName>
- <ColumnName>

Date Columns:
- <ColumnName>

Text Columns:
- <ColumnName>

Other Columns (if any):
- <ColumnName>

Observations:
<Write 1–3 concise sentences describing structural characteristics of the table. 
Mention patterns such as calculated-looking fields, presence of time-based columns, mix of dimensions and measures, or suitability for aggregation. 
Base statements only on column names and data types.>

Repeat the exact structure for TABLE 2, TABLE 3, etc.

If no structured Excel tables are found, output exactly:

Total Structured Tables: 0
No structured Excel tables were found.

Rules:

- Include only structured Excel tables.
- Do not modify the workbook.
- Do not output JSON.
- Do not use Markdown tables.
- Do not add commentary before or after the required structure.
- If a category has no columns, write “None”.

We need a few more things before the prompt can run against the uploaded workbook. First, click the three dots next to Instructions and go to Settings.

Make sure you enable the code interpreter. This is essential because it allows the prompt to actually read and process Excel files. Without it, the model cannot properly inspect the workbook. You will also see options like temperature and retrieval. For structured profiling, a lower temperature is usually better so the output stays consistent.

[Screenshot: Enable code interpreter]

Exit the Instructions view to return to the main prompt editor. Now we need to pass the uploaded workbook into the prompt dynamically.

Find the placeholder text [Upload Excel workbook] near the top of the prompt. Delete that text and type a forward slash (/). This will open the menu for inserting dynamic entities. Select Image or document.

Give the entity a clear name, such as “Upload Excel workbook.” Then upload a placeholder test workbook. This file is only used so the prompt has something to run against during setup. Upload one of the sample workbooks provided earlier in this post.

[Screenshot: Upload Excel workbook]

Now you’re ready to test the prompt. Run it and give it a moment to complete. It may take a couple of attempts the first time. If you see an error, try running it again. Occasionally it takes a run or two for everything to settle in.

But eventually you should have some output!

[Screenshot: Excel workbook table profiler]

Save your prompt, but we're not done yet.

Right now the Prompt node doesn’t automatically know which file to use. If you leave it like this, you’d have to manually upload a workbook inside the prompt builder every time. Instead, we want it to use the file the user uploads in chat.

So under the Prompt node’s input, bind the file parameter to varExcelFile. That variable holds the uploaded workbook from the conversation. Once it’s mapped, the prompt will automatically use whatever file the user provides.

Those red warnings about missing input bindings look dramatic, but they’re usually just stale references. Copilot Studio may have renamed or regenerated internal IDs when you edited the upload step. The Prompt node is just pointing to an old name that no longer exists. Rebinding to varExcelFile fixes it. After saving or refreshing, the warnings typically go away.

Similarly, give the output of this Prompt node a clear, memorable name. For example, set the prompt output to varPromptOutput so you can easily reference it later in the topic:

[Screenshot: varPromptOutput in Copilot Studio]

Send the result to the user

Now we’re finally ready to return the result to the user.

Add a new node, “Send a message.” Start the message with something simple, like:
“Here’s a summary of your workbook’s tables.”

[Screenshot: var Output Text]

Then insert part of the prompt’s output. Click the small {x} icon in the editor to open the variable picker. From there, select varPromptOutput and choose its text property. That’s the actual generated content you want to display.

Amazing! Let’s go ahead and test this thing.

Testing the agent

Over on the right side of the screen, you’ll see the test panel. Start the agent with something like, “profile my workbook.” That should trigger your upload-and-profile topic, and Copilot Studio will prompt you to upload a file.

Go ahead and upload a different workbook than the placeholder you originally used. Hit send, and after a moment, the agent should return the overview in chat:

[Screenshot: Copilot Studio test run]

If you try this with the second demo workbook and then open that file to double-check the output, you’ll see something interesting: Copilot Studio is actually pretty smart here. It only profiles structured Excel tables. Stray ranges, helper cells, and random data sitting on the sheet are ignored. It focuses strictly on defined tables.

On the other hand, you might notice the formatting of the results is a little odd, or that some of the table descriptions feel thin or generic. That’s normal. This is a solid foundation, but there’s definitely room to refine the prompt and improve the quality of the output.

Conclusion

I hope this walkthrough sparked a few practical ideas you can try in your own projects. One of the real advantages of Copilot Studio is that you are not limited to a one-off chat interaction. You can design something structured and repeatable, especially when you allow users to upload their own files and trigger a defined workflow behind the scenes.

Once you have that foundation working, there are several ways to make the experience more robust. For example, you could enforce a defined JSON schema so the output follows the same structure every time. That makes downstream steps more reliable and easier to maintain. Instead of returning plain text, you could take it a step further and write the results directly into a newly generated Excel workbook or text file, giving the user something tangible and ready to use. You could also connect a knowledge source to enrich the uploaded file with reference data, validation rules, or business logic before returning it.

All of that requires a bit more care in how you design and test your Copilot Studio flow, but it is achievable. And once you see it working end to end, it starts to feel less like a chatbot and more like a purpose-built analytics assistant. More on that soon.

How to connect your Excel workbook to the Dataverse via Dataflows
Stringfest Analytics | October 20, 2025

If you want to make your Excel data more useful across the Power Platform, connecting it to Dataflows within Power Apps is the right starting point. By moving data from your workbook into a Dataflow, you make it accessible to Dataverse and ready for automation through tools like Power Automate and Copilot Studio.

Once this connection is in place, you can start running prompts or reports on a schedule; for example, automatically generating a weekly variance report and posting it to Teams or emailing it. You could also build guardrails into your flows, such as approval steps or editable review stages, so your team can validate AI-generated insights before they’re published.

In this post we’ll connect a sales data table so that the data flows into Dataverse through Dataflows. I know that in the real world your sales data probably doesn’t live in Excel, or at least I hope it doesn’t. Excel often isn’t the best place for a single source of truth. But I also live in the real world, and I know that Excel has its advantages for powering those sources of truth. So here’s how to bridge that gap into a more developer and enterprise friendly way of doing things.

 

Save this workbook somewhere easy to find on your OneDrive or SharePoint. You’ll need it there for the next steps.

What is a Dataflow?

A Dataflow is a cloud-based way to extract, clean, and load data into the Power Platform. Power Apps will be our way in to set up these Dataflows, since that is where you can create and manage them to feed data into Dataverse. Dataflows use the same Power Query technology that Excel users already know, but instead of saving transformations inside a single workbook, they are stored in the cloud so multiple apps and services can reuse the same prepared dataset.

Once a Dataflow loads data into Dataverse, it becomes accessible across the Power Platform, including Power Apps for building interactive applications, Power Automate for scheduling and running flows, and Copilot Studio for creating generative AI experiences that draw on your Dataverse data.

A comparison table of using Power Query in Excel versus Dataflows in Power Apps follows.

Feature       | Power Query in Excel                 | Dataflows in Power Apps
Where it runs | On your computer                     | In the Power Platform cloud
Purpose       | Transform data for a single workbook | Prepare and share data for multiple apps
Storage       | Inside the Excel file                | In Dataverse or Azure Data Lake
Refresh       | Manual or workbook-triggered         | Scheduled automatic refresh
Collaboration | Local to one user                    | Centralized and reusable across apps
Integration   | Excel only                           | Connects to Power Apps, Power Automate, and Copilot Studio

Syncing the Excel data to Dataflows

To get started, go to make.powerapps.com and head to Tables. These data sources are actually Dataverse tables. You'll notice that many of them are already there by default; you didn't create them, and you can safely ignore them for now. Dataverse includes a set of standard tables out of the box, like Account, Contact, and Appointment, because they support core business functions used by many Power Apps solutions. For this walkthrough, we'll focus on adding our own table.

At the top, select Import, then choose Import data with Dataflows. This is where we’ll set up our live connection between Excel and Dataverse. While there are several simple ways to import static data from Excel (for example, by uploading a CSV), Dataflows are what make the connection dynamic.

Import data from Dataflows

Great work! Now select Choose Data Source, then pick “Excel workbook.”

At this point, your workbook needs to be stored on OneDrive or SharePoint. Go ahead and sign in with your account, then browse to and select the workbook you want to connect.

Browse OneDrive

Now things should start to look more familiar if you’ve used Power Query before. You’ll first choose which parts of the workbook to connect to. In this case, I’m connecting only to the table I need, currently named “Table1” (I really should have named it better!). Fortunately, we’ll get a chance to give it a clearer name once it’s loaded into Dataverse.

Get data Dataflows

Now we are fully in Power Query, and you can see the label in the upper left corner confirming it. If you wanted to build data transformations, this is where you could do it. For now, I will skip that step and select Next to finalize the Dataflow.

Power Query dataflows

Now we have a few options for how to finalize this connection into Dataverse. I am going to have it create a completely new table.

Here we can name the table, set data types, define a primary key, rename columns, and make other adjustments under the Column mapping dropdown. Each column is assumed to have a single type, just like in Power Query.

For now, I will assume Dataflows and Dataverse handle this setup correctly. If something needs to be changed later, that is fine. This process is reproducible, so we can always go back and update it.

Load to Dataflows

Finally, we’ll be asked how we want the Dataflow to refresh. We can set it to run on a schedule or trigger it manually. I am going to set it to refresh manually for now. Since there is a way to trigger a refresh directly within Power Automate, I’ll handle automation later as part of a broader Power Automate flow.

Refresh settings Dataflows

Great work. Now let’s test it with a manual refresh. Go back to your Excel workbook and add a test row. Then return to Power Apps and open Dataflows. You should see your active Dataflow listed, and it may take a moment to finish publishing. From there, select the three dots next to your flow. You can give it a name and choose Refresh to trigger the update.

Edit and rename dataflow

Amazing. The new record is there. We’ve successfully set up a Dataflow that brings data from an Excel workbook into Dataverse.

New record dataflow

Conclusion

In this post, we connected an Excel workbook to a Dataflow in Power Apps and saw how this simple step brings your spreadsheet data into the Power Platform ecosystem. Once the data is stored in Dataverse, it’s no longer limited to one workbook. It can power automations, apps, and even AI-driven summaries across your organization.

In a future post, we’ll build on this foundation by creating a Power Automate flow that pulls your Dataverse data and uses a prompt to generate a dynamic weekly summary. We’ll even have it post directly to Teams for an automated, AI-powered status update, set up an approval process for review, and more. Stay tuned!

The post How to connect your Excel workbook to the Dataverse via Dataflows first appeared on Stringfest Analytics.

How to sum an Excel column in Power Automate https://stringfestanalytics.com/how-to-sum-an-excel-column-in-power-automate/ Fri, 17 Oct 2025 18:17:29 +0000 https://stringfestanalytics.com/?p=16066 As Excel users, we know our data doesn’t always just stay in Excel. We often want to push it out, maybe to send totals in an email, post updates to Teams, or trigger a notification when spending passes a certain amount. Power Automate is great for that kind of integration, but even doing something as […]

The post How to sum an Excel column in Power Automate first appeared on Stringfest Analytics.

As Excel users, we know our data doesn’t always just stay in Excel. We often want to push it out, maybe to send totals in an email, post updates to Teams, or trigger a notification when spending passes a certain amount. Power Automate is great for that kind of integration, but even doing something as simple as summing a column inside a flow isn’t as straightforward as it sounds.

In this post, we’ll look at two ways to do it:

  1. Setting up a loop in Power Automate to add up your values.
  2. Using an Excel-specific method that runs an Office Script to calculate the total and return it to your flow.

We’ll use a small expenses table as our example dataset. Download it below to follow along.

 

For this post, I’m assuming you already know how to set up basic trigger-based flows, connect to Excel workbooks in Power Automate (remember: your data needs to be stored as a table!), and work with dynamic content and Power Fx expressions.

If you’d like a refresher on these topics before diving in, check out my LinkedIn Learning course on Power Automate for Excel:

Our goal here is to add up the values in the Amount column of the expenses table in the workbook provided. Save the file somewhere easy to find in your OneDrive or SharePoint, then head over to make.powerautomate.com and create a new trigger-based flow. We’ll use it to explore how to sum this column.

Method 1: Build a loop in Power Automate

Our first step is to set up an empty variable to hold the running total. When using a loop, we need a place to store the sum as each value is added. Here we create a variable called TotalAmount, set the type to Float, and give it an initial value of 0. We use the Float type because expense amounts often include decimals, and this ensures the total is calculated accurately.

Initialize variable Power Automate

Next, we use the List rows present in a table action to pull in all the data from our Excel table. Power Automate needs access to each row so it can loop through them and add up the values. By connecting to the workbook and selecting the correct table name, the flow can read every record in that table and use those values in later steps to calculate the total.

List rows present in table

Next we use the “Increment variable” action to add each value in the Amount column to our running total. This action increases the TotalAmount variable by the specified amount every time the loop runs.

The expression

float(items('For_each')?['Amount'])

tells Power Automate to take the Amount from the current row and treat it as a number. Using float() ensures the value is recognized as a decimal rather than text. Each time the loop runs, the flow adds that number to TotalAmount, giving us the full sum once all rows are processed.
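To see the mechanics end to end, here’s a rough TypeScript sketch of what the flow is doing behind the scenes. The sample rows and the Amount column name are stand-ins for the expenses table; in the real flow, Power Automate supplies the rows and manages the variable for you:

```typescript
// Sample rows, shaped like the output of "List rows present in a table":
// one object per row, keyed by column header, with values as strings.
const rows = [
  { Category: "Travel", Amount: "412.50" },
  { Category: "Meals", Amount: "86.20" },
  { Category: "Software", Amount: "735.86" },
];

// "Initialize variable": TotalAmount starts at 0 as a Float.
let totalAmount = 0;

// The loop with "Increment variable": coerce each Amount to a
// number (the float() expression) and add it to the running total.
for (const row of rows) {
  totalAmount += parseFloat(row.Amount);
}

// totalAmount now holds the column total (≈ 1234.56 for this sample).
```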

Finally, we’ll post the result to Teams so the total shows up right where the team works:

Add amount to Teams loop

In the body of the message, we can insert our total and format it neatly using this Power Fx expression:

formatNumber(variables('TotalAmount'), 'N2')

This tells Power Automate to take the value stored in TotalAmount and display it with two decimal places. It makes the output look like a proper currency figure instead of a long or uneven number, giving a clean result such as $1,234.56 in the Teams message.
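Power Fx’s formatNumber() only exists inside Power Automate, but an equivalent in TypeScript (shown purely for illustration, with the locale hardcoded to en-US) makes clear what the 'N2' format string does: add group separators and force exactly two decimal places.

```typescript
// Mimic Power Automate's formatNumber(value, 'N2'):
// thousands separators plus exactly two decimal places.
function formatN2(value: number): string {
  return value.toLocaleString("en-US", {
    minimumFractionDigits: 2,
    maximumFractionDigits: 2,
  });
}

console.log(formatN2(1234.56)); // "1,234.56"
console.log(formatN2(1000));    // "1,000.00"
```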

Now we can test the flow and see the result in Teams, like the message shown below:

Total amount test

From here, you can easily expand this flow for real business uses. For example, you might schedule it to run daily or weekly to post expense updates, or add conditions to send an alert if spending passes a certain threshold. You could also connect it to other flows to log totals in a dashboard or send summaries to managers by email.

The looping approach represents the classic Power Automate method for totaling values. For instance, if your data were stored in a SharePoint list, this would be the standard and most efficient way to handle the calculation, since Power Automate is built to iterate over list items easily.

Method 2: Run Office Script

However, because our data lives in Excel, we have another option: Office Scripts. With Office Scripts, we can perform the same calculation directly inside Excel, then call that script from Power Automate without relying on loops.

If you haven’t explored Office Scripts before and want a solid introduction to how to create, store, and share scripts, check out my LinkedIn Learning course on Office Scripts for Excel.

At a high level, this Office Script gets the first worksheet in the workbook, finds the first table on that sheet, and looks for a column called Amount. It then reads all the values from that column, converts them to numbers, adds them up, and returns the total.

We’re storing this script in our Office Scripts library, which means it lives in the cloud and can be accessed directly from Power Automate. Unlike traditional VBA macros that run locally on your computer, Office Scripts are completely cloud-based, making them easy to run, share, and automate across different devices and users.
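The heart of that script is simply “find the Amount column and add it up.” Here’s that logic as a standalone TypeScript function you can reason about on its own. In the actual Office Script, the values would come from the ExcelScript table API (something like getColumnByName("Amount") followed by getRangeBetweenHeaderAndTotal().getValues()); treat the wiring here as a sketch rather than a drop-in script.

```typescript
// Sum a column of cell values the way the Office Script does:
// cells may arrive as numbers or numeric strings, so coerce each
// one and skip anything that isn't a valid number.
function sumColumn(values: (number | string)[][]): number {
  let total = 0;
  for (const row of values) {
    const n = Number(row[0]); // each row is a one-cell array
    if (!Number.isNaN(n)) {
      total += n;
    }
  }
  return total;
}

// getValues() returns a 2D array, one inner array per cell row.
const total = sumColumn([[412.5], [86.2], ["735.86"]]); // ≈ 1234.56
```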

Back in Power Automate, we’ll create another trigger-based flow, but this time we’ll call an Office Script to calculate the total. All we need to do is point Power Automate to the workbook’s location and specify the name of the script we want to run.

Run script total column

The next step is the same as before: we’ll send the result to Teams.

Total amount office scripts

This flow only takes three steps compared to five in the looping method. Does that make it better? Like most things in data, the answer is: it depends.

Conclusion

Both the loop and Office Script approaches get the job done, but they each come with tradeoffs. The loop method is the more classic Power Automate approach. It’s flexible, transparent, and easier to troubleshoot step by step. If your data were stored in a SharePoint list or another system that Power Automate natively integrates with, looping through rows would be the clear choice. However, it does take more setup and a few extra actions to build and maintain.

The Office Script method, on the other hand, is leaner and often faster. Because the logic runs inside Excel itself, Power Automate only needs to call one script and retrieve a result. The tradeoff is that Office Scripts can feel a bit more brittle if the script or table structure changes, and debugging them means switching between environments.

At a high level:

| Feature | Loop in Power Automate | Office Script |
| --- | --- | --- |
| Setup time | More actions and configuration | Quick once script is created |
| Performance | Slower on large datasets | Typically faster, runs in Excel |
| Maintainability | Easier to edit or extend | Requires script updates if structure changes |
| Integration | Works great with SharePoint or non-Excel data | Best for Excel-based workflows |
| Transparency | Each step visible in the flow | Logic contained in script code |

If you’d like help implementing next-generation automation workflows like these for your organization’s Excel setup, or want to explore how tools like Power Automate, Office Scripts, and Copilot can streamline your reporting, get in touch:


The future of analytics may be “low/no-code” … at first https://stringfestanalytics.com/no-code-future-at-first/ https://stringfestanalytics.com/no-code-future-at-first/#comments Sat, 16 Jan 2021 19:42:17 +0000 https://georgejmount.com/?p=6894 You can’t get through much datascrolling these days without hearing about the rise to dominance of so-called “low- and no-code” data tools. “It’s bringing data to the people.” the typical post reads. “You no longer need to learn how to code to perform sophisticated insert analytics project here.” I absolutely see the draw to low- and no-code data solutions: after how many tedious rounds of import pandas as pd and object of type 'closure' is not subsettable errors could you not? That said, I don’t see this trend as revolutionary or as permanent as it’s often claimed to be. […]

The post The future of analytics may be “low/no-code” … at first first appeared on Stringfest Analytics.

You can’t get through much datascrolling these days without hearing about the rise to dominance of so-called “low- and no-code” data tools. “It’s bringing data to the people,” the typical post reads. “You no longer need to learn how to code to perform sophisticated insert analytics project here.”

I absolutely see the draw to low- and no-code data solutions: after how many tedious rounds of import pandas as pd and object of type 'closure' is not subsettable errors could you not? That said, I don’t see this trend as revolutionary or as permanent as it’s often claimed to be. Here’s why.

It’s an expected part of the adoption cycle

Take the ability to build Azure machine learning models right from Excel as an example of low- and no-code data products. It’s wild stuff, and there’s no question that it’s democratizing machine learning like never before. What’s not asked, though, is what came before this innovation, and what comes next.

Innovation often arrives in waves, with each trend building on and ultimately supplanting the last. Analytics software is no exception. Each wave has been built on code, rolled out through low- and no-code graphical user interfaces (GUIs), and then supplanted again by code.

The waves of analytics software: low- and no-code is arrived at, then supplanted.

This is a greatly simplified adoption wave: doing it justice would require a dissertation (and I’ve studied innovation at the doctoral level, so I’m not kidding). The upshot is that low- and no-code has come and gone in analytics tools; let’s look at some examples.

Case study: SPSS

SPSS (Statistical Package for the Social Sciences) began in the late 1960s and by the next decade was joined by S and SAS as a new wave of tools for exploratory data analysis and statistical programming.

Back then, computer scripts generally needed first to be compiled into a machine-readable file and then run; this made it difficult to manipulate, visualize, and analyze data on the fly. These tools were novel in that they allowed bits and pieces of a script to be executed and printed immediately, which greatly enabled iteration. Analysts could now focus on the data analysis rather than on compiling the code.

At some point (and I’m not able to find the exact launch, so if someone does know, please get in touch!) SPSS went a step further and added menu-driven options for writing programs. All menu choices generated syntax that could be saved, but the idea was that analysts could focus even less on the code and more on the data, hence democratizing analysis. Technically savvy statisticians no longer had a monopoly on working with data.

Low- and no-code can get messy, as SPSS can attest. (Source: Greg Elvers, University of Dayton)

One fruit of this “no- and low-code” implementation is the above menu screenshot. There’s no one-size-fits-all answer to working with data, so trying to accommodate all needs into a single menu can result in, let’s say, a bloated interface. I used SPSS in grad school, and while it was great to be able to point-and-click my way to data analysis, hence focusing on the conceptual bits of the work, I quickly found it easier just to write the code than to navigate the menus. So, the SPSS syntax generation is a blessing… but it’s not the be-all, end-all.

SPSS’s menu was just one product of the computing revolution driven by the GUI. As point-and-click options, GUIs offered relatively low- or no-code features to data users. Another result of this revolution was the spreadsheet, which in the opinion of many was the first “killer app” of the personal computer. Business users now had computing ability at their fingertips, without necessarily needing to code.

Some assembly always required

Let’s stick with spreadsheets, because they’re facing the same GUI dilemma as SPSS in the age of what I’m calling “personalized cloud computing”: computer applications that rely on cloud capabilities for storing and working with data.

Excel’s Power Query is a show-stopping innovation allowing users to build extract, transform, load (ETL) pipelines right from a spreadsheet. (Similar tools for low/no-code data prep include Alteryx, Tableau Prep, etc.). While based on the M programming language, it includes menu-driven syntax generation, much like SPSS. Not a cloud application per se, Power Query is part of Microsoft’s larger “Power Platform” which is largely marketed as a cloud (and no/low code) solution.

Its menus can be used for most of a user’s needs… but not all. And indeed, a rite of passage for Power Query users is that they begin writing their own code:

Recently, I was trying to add an index number by sub-group in Power Query; this took quite a bit of doing between Power Query menus and M custom coding. By the end, I asked myself, Was this really any easier than just strictly coding? After all, Power Query doesn’t offer a dedicated integrated development environment with package management, like R or Python. It’s an odd soup of GUI and code, somewhat like SPSS.
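To make that concrete: the “index by sub-group” task that took real effort across Power Query’s menus and custom M comes out to a few lines in a general-purpose language. A TypeScript sketch (field names and sample data invented for illustration):

```typescript
// Add a 1-based index within each sub-group, restarting the
// count whenever a new group value appears.
interface Row {
  group: string;
  value: number;
}

function addGroupIndex(rows: Row[]): (Row & { index: number })[] {
  const counts = new Map<string, number>();
  return rows.map((row) => {
    const next = (counts.get(row.group) ?? 0) + 1;
    counts.set(row.group, next);
    return { ...row, index: next };
  });
}

const indexed = addGroupIndex([
  { group: "East", value: 10 },
  { group: "East", value: 20 },
  { group: "West", value: 30 },
]);
console.log(indexed.map((r) => r.index)); // [ 1, 2, 1 ]
```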

Working with data is messy in more ways than any of us can count. And it’s this ambiguity that makes building a rigid set of menu options so difficult. I’ve yet to see a GUI that easily accommodates everything I want to do with my data in a simple user experience. Can it be done? Possibly, given future technologies. But learning to code has opened up far more possibilities for me than any GUI ever has. So why wait for some idealized UX?

Code and GUIs, yin and yang

It’s at least worth pointing out that many claim the rise of R and Python is due in part to their being purely code-based tools. By offering great development environments and large open-source code bases, these two languages made it possible to do nearly everything… if you would learn to code. There’s little debate that code should be the primary artifact for how analytics is done, even if it isn’t always the layer the user interacts with.

So, why the change of heart toward low- and no-code analytics solutions? Like I said earlier, it can get frustrating to write the same calls and receive the same trivial errors time and again. So I get that these could be seen as roadblocks to greater data democracy. GUIs have had their time and place in the waves of analytics innovation, often once an application has hit a certain level of maturity. Code also plays its part in building applications out so they can reach that maturity.

I don’t know what the next wave will be, but I’m certain that the current wave of low- and no-code solutions won’t outlast it. It may be that fewer coders are needed to get an innovation to the low/no-code stage of the adoption cycle, and that the product can genuinely do everything required of its users.

Until that time, I recommend data professionals learn a bit about coding. Maybe not every data solution requires it; that’s fine. But given where we’ve come from in the data world, I’m not inclined to say that the future is all low and no code.

