Stringfest Analytics | Analytics & AI for Modern Excel
https://stringfestanalytics.com

How to validate AI-generated Excel reports with Office Scripts
https://stringfestanalytics.com/how-to-validate-ai-generated-excel-reports-with-office-scripts/
Mon, 02 Mar 2026 14:45:20 +0000

In my last few posts, I’ve been exploring how agents and AI-assisted workflows can support everyday tasks for Excel users.

In this one, I want to focus on what is arguably the biggest risk of using generative AI for data analysis and reporting: getting the numbers wrong.

One of the oldest and most reliable principles in Excel still applies here. If something matters, you cross-check it using a different calculation method. So we are going to compare the total sales returned from our prompt with the total calculated independently using an Office Script. If those two values match, we will treat that as a validation checkpoint and allow the report to move forward.

The dataset is intentionally simple so you can follow along. Upload it to your OneDrive and keep the workbook open, since we’ll need to create an Office Script for it. After that, head over to make.powerautomate.com and we’ll build the flow together from there.


If you’re just getting started with these tools for Excel automation, I’d recommend beginning with my LinkedIn Learning courses to get a solid foundation before diving into this workflow.

If you’d like a more step-by-step walkthrough of how this flow works and why we’re structuring it this way, take a look at this post.

Creating the Office Script

In the workbook, I created a simple Summary sheet where the Sales column is totaled and that result is stored in a named cell called TotalSales.

function main(workbook: ExcelScript.Workbook): number {

    // Pull the independently calculated total from the named range
    const TotalSales = workbook
        .getNamedItem("TotalSales")
        .getRange()
        .getValue() as number;

    // Log it for debugging in the Office Scripts editor
    console.log("TotalSales:", TotalSales);

    // Return the value to Power Automate for the validation check
    return TotalSales;
}
Sum sales Office Scripts

The Office Script retrieves the value from that named range, logs it for debugging, and returns it as a number to Power Automate so we can compare it against the AI-generated total as a validation step.
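Note that the named range here still holds a total that Excel itself computed, so for an even stronger cross-check you could have the script sum the Sales column directly. The core of that logic looks like this in plain TypeScript (the row shape and sample values are hypothetical, just to illustrate the idea outside of a live ExcelScript session):

```typescript
// Hypothetical row shape mirroring the sales table's columns
interface SalesRow {
  Sales: number;
}

// Sum the Sales column directly, independent of any SUM formula
// that already lives in the workbook
function sumSales(rows: SalesRow[]): number {
  return rows.reduce((total, row) => total + row.Sales, 0);
}

// Example: three transactions totaling 3760
const sampleRows: SalesRow[] = [{ Sales: 1200 }, { Sales: 1560 }, { Sales: 1000 }];
console.log(sumSales(sampleRows)); // 3760
```

In an actual Office Script you would get the rows from the table's range values rather than a hard-coded array, but the principle is the same: the validation total is computed by a path the AI never touches.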

OK, with the workbook ready, let’s move over to Power Automate. I’m going to create a manually triggered flow for now, though you could easily switch it to run on a schedule or another trigger later. Since our data lives in Excel, there are a few extra setup steps involved before everything is wired together.

Get the workbook content

First, search for the action called “Get file content using path.” For this parameter, you simply point it to your workbook. And remember, you can rename each step so the flow is easier to read later.

Get path

Run the custom prompt

Next, choose “Run a prompt” from AI Builder.

Open the Excel workbook [Excel workbook] and locate the table containing sales data.
The dataset contains transactional sales data with fields such as:
Date, OrderID, Salesperson, Region, Customer, Product, Category, Units, UnitPrice, and Sales.

Primary KPI:
- TotalSales = sum of the Sales column.

Contextual calculations (for internal use in commentary only):
- TransactionCount = number of rows contributing to TotalSales
- AverageSale = TotalSales / TransactionCount
- TopCategory = category contributing the highest share of TotalSales
- TopSalesperson = salesperson contributing the highest share of TotalSales

Flash Report Objective:

Return:
- TotalSales (numeric only, no formatting)
- Commentary (1–2 concise executive-style sentences)

Commentary Guidance:
- TotalSales is the anchor KPI.
- Use contextual metrics to highlight meaningful performance dynamics such as:
  • Category mix (Hardware vs Electronics vs Service)
  • Revenue concentration by salesperson or customer
  • Volume vs pricing effects (unit-heavy vs premium items)
  • Product mix influence on revenue
- Focus on business interpretation, not mechanics.
- Do NOT restate every metric mechanically.
- Do NOT mention column names, sheets, or calculation steps.
- Do NOT invent trends, time periods, comparisons, or benchmarks not present in the data.
- Neutral, confident executive tone.

Rules:
- Do not filter or modify the workbook.
- Do not fabricate data.
- Return valid JSON only.
- No text outside the JSON object.

Examples of acceptable commentary style:

- "Revenue reflects a balanced mix across Hardware and Electronics, with premium Service transactions meaningfully lifting overall performance."
- "Sales are moderately concentrated among top contributors, though revenue remains diversified across product categories."
- "Performance is supported by steady transaction volume, with higher-priced offerings contributing disproportionate revenue impact."
- "Revenue mix suggests balanced regional contribution without material dependence on a single product line."


Output format:

{
  "TotalSales": number,
  "Commentary": string
}

You can rename this prompt to something clearer, like ‘Sales Flash Report Generator.’ Before saving it, we should test it, and to do that we need to upload a sample workbook. Go to the three dots, open Settings, and turn on the code interpreter.

Next, upload a sample workbook. This file is only for this test run. In the prompt, locate the placeholder [Excel workbook], delete it, type ‘/’, and choose Image or document. That dynamic field is where you will upload the workbook for testing. Later, when we build the flow, we will pass this in as a variable instead of uploading it manually.

Sales report prompt generator

Also, under the output panel on the right where it says “Model Response,” we want the result returned as JSON. There are a few setup steps to get there, but once everything is configured you should see a test result come back. Each run may vary slightly, but ideally the sales total is correct. We will validate that by comparing it to the value returned from the Office Script, and the commentary field will just provide some additional narrative context.

We are structuring the output as JSON because it makes it much easier to extract specific pieces for a downstream report. It also gives us a clean way to verify that the TotalSales figure is accurate before pushing anything forward.
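Conceptually, the downstream steps depend on that JSON having exactly the shape we asked for, so it helps to think of the parsing as a small validation of its own. A minimal TypeScript sketch of that check (the sample payload below is made up for illustration):

```typescript
// The shape the prompt is instructed to return
interface FlashReport {
  TotalSales: number;
  Commentary: string;
}

// Parse the model's response and verify it has the expected fields
// before any comparison or posting happens
function parseFlashReport(raw: string): FlashReport {
  const parsed = JSON.parse(raw);
  if (typeof parsed.TotalSales !== "number" || typeof parsed.Commentary !== "string") {
    throw new Error("Prompt output missing TotalSales or Commentary");
  }
  return parsed as FlashReport;
}

// Example payload in the agreed-upon format
const samplePayload =
  '{"TotalSales": 3760, "Commentary": "Revenue reflects a balanced category mix."}';
console.log(parseFlashReport(samplePayload).TotalSales); // 3760
```

In the flow itself, Power Automate's structured output handles this extraction for us; the sketch just shows why valid, predictable JSON matters.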

Pass the file contents to the prompt

Go ahead and save the prompt. You will notice there is one more parameter we need to configure. For the purposes of the flow, we have to specify where the workbook will come from. The file we uploaded earlier was just a one-time sample for testing.

In the actual flow, the workbook should be passed in dynamically based on its location, which we can retrieve from the previous step. That said, this action expects the file content as raw base64. We can generate that using the base64() function and pass it directly into this step.

base64(body('Get_file_content_using_path'))
Convert to base 64
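If you are curious what the base64() function is actually doing, it is just standard base64 encoding of the file's raw bytes. The equivalent in Node-flavored TypeScript would look something like this (a sketch, not part of the flow):

```typescript
import { Buffer } from "buffer";

// Power Automate's base64() performs standard base64 encoding of the
// raw bytes; this is the Node equivalent
function toBase64(fileContent: Uint8Array | string): string {
  return Buffer.from(fileContent).toString("base64");
}

console.log(toBase64("hello")); // aGVsbG8=
```

The prompt action then decodes that string back into the workbook's bytes on its end, which is why the raw file content from the previous step has to pass through this conversion.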

Run the Office Script

The prompt has now been run against our workbook, but before we report anything out, we need to confirm the numbers are correct. There are a few ways to validate this, and if you are even slightly risk averse, you should absolutely be checking these totals yourself before sharing them. Even so, this automated cross-check is a smart safeguard.

Run Script Power Automate

Our goal is simple: verify that the Total Sales calculated by the generative AI matches the Total Sales calculated by the Office Script. What happens next in the flow will depend on whether those two values agree. Add the step called “Run script” to your flow, then specify which Office Script to execute and which workbook it should run against.

Set up the condition

Now we want to cross-check that the Total Sales calculated by the prompt’s Code Interpreter matches the Total Sales calculated by the Office Script. If the numbers tie out, we will send the flash report. If they do not, the report will not run and instead I will send myself an email to investigate.

To set this up, add a Condition step to the flow and configure it to compare the TotalSales value from the Office Script with the TotalSales value returned by the prompt. The expressions you will need for this comparison are shown below.

outputs('Run_script')?['body/result']
outputs('Run_a_prompt')?['body/responsev2/predictionOutput/structuredOutput/TotalSales']
Check total sales equality
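One caveat worth knowing: the Condition step compares the two values directly, and floating-point totals can occasionally differ by a sub-cent rounding amount even when both calculations are "right." If you hit that, a tolerance-based comparison is safer. Here is the logic as a TypeScript sketch (the tolerance value is an arbitrary choice, not anything prescribed by Power Automate):

```typescript
// Compare the AI-reported total against the Office Script total,
// allowing for sub-cent floating-point noise
function totalsMatch(
  aiTotal: number,
  scriptTotal: number,
  tolerance = 0.005
): boolean {
  return Math.abs(aiTotal - scriptTotal) <= tolerance;
}

console.log(totalsMatch(3760, 3760));     // true
console.log(totalsMatch(3760, 3760.001)); // true  (rounding noise)
console.log(totalsMatch(3760, 3759));     // false (real variance)
```

For whole-dollar sample data like ours, a plain equality check in the Condition step is fine; the tolerance version matters more once real-world decimals enter the picture.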

If false, send an email

There are plenty of ways to handle this, but I am going to keep it simple and just use the “Send an email” action. You will of course want to use your own email address here, not mine. The idea is simply to send yourself a quick, hard-coded message letting you know the numbers did not match so you can take a closer look.

If you wanted to get fancy, you could include the TotalSales value from the Office Script and the value from the prompt in the email body so you can immediately see the discrepancy. That could be a helpful starting point. Personally, I assume I will need to do some old-fashioned analyst digging anyway, so I am keeping it simple for now.

Send email if something goes wrong

If true, post to Teams

Now let’s look at the better outcome. If the two values do match, we will go ahead and post the Sales Flash to Teams.

We will post it as an Adaptive Card. An Adaptive Card is simply a structured JSON layout that Teams knows how to render as a clean, formatted message instead of plain text. Because the prompt’s output is already JSON, it is easier to stay in JSON the whole way through. That way we keep everything structured and consistently formatted from calculation all the way through presentation.

Post card to Teams

Of course, that means we need to define a bit more JSON to design the card itself. We will use the Adaptive Card schema to do this. Think of it as a template that tells Teams how to display titles, labels, and values. The full card definition is shown below.

You will also notice that I apply a small amount of formatting to the TotalSales variable so it renders as currency, for example $3,760.00 instead of 3760. That final polish makes the flash report feel much more like a proper financial summary.

{
  "type": "AdaptiveCard",
  "version": "1.5",
  "body": [
    {
      "type": "TextBlock",
      "text": "Sales Flash Report",
      "weight": "Bolder",
      "size": "Large"
    },
    {
      "type": "FactSet",
      "facts": [
        {
          "title": "Total Sales:",
          "value": "@{formatNumber(float(coalesce(outputs('Run_a_prompt')?['body/responsev2/predictionOutput/structuredOutput/TotalSales'], 0)), 'C2', 'en-US')}"
        }
      ]
    },
    {
      "type": "TextBlock",
      "text": "@{outputs('Run_a_prompt')?['body/responsev2/predictionOutput/structuredOutput/Commentary']}",
      "wrap": true,
      "spacing": "Medium"
    }
  ]
}
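The formatNumber(…, 'C2', 'en-US') expression in the card corresponds to ordinary two-decimal currency formatting. For reference, the same result in TypeScript comes from the standard Intl API (assuming US dollars, which is what the 'C2'/'en-US' combination produces):

```typescript
// formatNumber(value, 'C2', 'en-US') in Power Automate corresponds to
// en-US currency formatting with two decimal places
function formatCurrency(value: number): string {
  return new Intl.NumberFormat("en-US", {
    style: "currency",
    currency: "USD",
    minimumFractionDigits: 2,
  }).format(value);
}

console.log(formatCurrency(3760)); // $3,760.00
```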

Go ahead and run the flow. Ideally, you should see something like this. If not, you will receive an email alerting you that there was a variance.

Sales flash report

Conclusion

There is a lot more we could layer on top of this. Just because the total ties out does not mean the explanation is solid. The math could be right while the commentary is completely off. You might decide to add additional summary calculations to triangulate the results, or tighten the prompt so it is more deterministic and less free to hallucinate details that are not actually in the data.

You could also introduce a formal approval step so the report routes for signoff or rejection before it ever posts to Teams. That would make the entire process more controlled and auditable, especially in a finance setting.

I would love a clean way to directly guide or override the prompt’s commentary within the flow itself, but at least right now I am not seeing a simple, built-in way to do that.

Even so, this demonstrates that with a bit of structure and creativity, you can absolutely cross-check AI-generated reports. There is real opportunity here, and I will keep experimenting and sharing what I discover.

The post How to validate AI-generated Excel reports with Office Scripts first appeared on Stringfest Analytics.

How to connect your Excel workbook to the Dataverse via Dataflows
https://stringfestanalytics.com/how-to-connect-your-excel-workbook-to-the-dataverse-via-dataflows/
Mon, 20 Oct 2025 11:50:40 +0000

If you want to make your Excel data more useful across the Power Platform, connecting it to Dataflows within Power Apps is the right starting point. By moving data from your workbook into a Dataflow, you make it accessible to Dataverse and ready for automation through tools like Power Automate and Copilot Studio.

Once this connection is in place, you can start running prompts or reports on a schedule; for example, automatically generating a weekly variance report and posting it to Teams or emailing it. You could also build guardrails into your flows, such as approval steps or editable review stages, so your team can validate AI-generated insights before they’re published.

In this post we’ll connect a sales data table so that the data flows into Dataverse through Dataflows. I know that in the real world your sales data probably doesn’t live in Excel, or at least I hope it doesn’t. Excel often isn’t the best place for a single source of truth. But I also live in the real world, and I know that Excel has its advantages for powering those sources of truth. So here’s how to bridge that gap into a more developer- and enterprise-friendly way of doing things.


Save this workbook somewhere easy to find on your OneDrive or SharePoint. You’ll need it there for the next steps.

What is a Dataflow?

A Dataflow is a cloud-based way to extract, clean, and load data into the Power Platform. Power Apps will be our way in to set up these Dataflows, since that is where you can create and manage them to feed data into Dataverse. Dataflows use the same Power Query technology that Excel users already know, but instead of saving transformations inside a single workbook, they are stored in the cloud so multiple apps and services can reuse the same prepared dataset.

Once a Dataflow loads data into Dataverse, it becomes accessible across the Power Platform, including Power Apps for building interactive applications, Power Automate for scheduling and running flows, and Copilot Studio for creating generative AI experiences that draw on your Dataverse data.

A comparison table of using Power Query in Excel versus Dataflows in Power Apps follows.

| Feature | Power Query in Excel | Dataflows in Power Apps |
| --- | --- | --- |
| Where it runs | On your computer | In the Power Platform cloud |
| Purpose | Transform data for a single workbook | Prepare and share data for multiple apps |
| Storage | Inside the Excel file | In Dataverse or Azure Data Lake |
| Refresh | Manual or workbook-triggered | Scheduled automatic refresh |
| Collaboration | Local to one user | Centralized and reusable across apps |
| Integration | Excel only | Connects to Power Apps, Power Automate, and Copilot Studio |

Syncing the Excel data to Dataflows

To get started, go to make.powerapps.com and head to Tables. These data sources are actually Dataverse tables. You’ll notice that many of them are already there by default. You didn’t create them, and you can safely ignore them for now. Dataverse includes a set of standard tables out of the box, like Account, Contact, and Appointment, because they support core business functions used by many Power Apps solutions. For this walkthrough, we’ll focus on adding our own table.

At the top, select Import, then choose Import data with Dataflows. This is where we’ll set up our live connection between Excel and Dataverse. While there are several simple ways to import static data from Excel (for example, by uploading a CSV), Dataflows are what make the connection dynamic.

Import data from Dataflows

Great work! Now select Choose Data Source, then pick “Excel workbook.”

At this point, your workbook needs to be stored on OneDrive or SharePoint. Go ahead and sign in with your account, then browse to and select the workbook you want to connect.

Browse OneDrive

Now things should start to look more familiar if you’ve used Power Query before. You’ll first choose which parts of the workbook to connect to. In this case, I’m connecting only to the table I need, currently named “Table1” (I really should have named it better!). Fortunately, we’ll get a chance to give it a clearer name once it’s loaded into Dataverse.

Get data Dataflows

Now we are fully in Power Query, and you can see the label in the upper left corner confirming it. If you wanted to build data transformations, this is where you could do it. For now, I will skip that step and select Next to finalize the Dataflow.

Power Query dataflows

Now we have a few options for how to finalize this connection into Dataverse. I am going to have it create a completely new table.

Here we can name the table, set data types, define a primary key, rename columns, and make other adjustments under the Column mapping dropdown. Each column is assumed to have a single type, just like in Power Query.

For now, I will assume Dataflows and Dataverse handle this setup correctly. If something needs to be changed later, that is fine. This process is reproducible, so we can always go back and update it.

Load to Dataflows

Last, we’ll be asked how we want the Dataflow to refresh. We can set it to run on a schedule or trigger it manually. I am going to set it to refresh manually for now. Since there is a way to trigger a refresh directly within Power Automate, I’ll handle automation later as part of a broader Power Automate flow.

Refresh settings Dataflows

Great work. Now let’s test it with a manual refresh. Go back to your Excel workbook and add a test row. Then return to Power Apps and open Dataflows. You should see your active Dataflow listed, and it may take a moment to finish publishing. From there, select the three dots next to your flow. You can give it a name and choose Refresh to trigger the update.

Edit and rename dataflow

Amazing. The new record is there. We’ve successfully set up a Dataflow that brings data from an Excel workbook into Dataverse.

New record dataflow

Conclusion

In this post, we connected an Excel workbook to a Dataflow in Power Apps and saw how this simple step brings your spreadsheet data into the Power Platform ecosystem. Once the data is stored in Dataverse, it’s no longer limited to one workbook. It can power automations, apps, and even AI-driven summaries across your organization.

In a future post, we’ll build on this foundation by creating a Power Automate flow that pulls your Dataverse data and uses a prompt to generate a dynamic weekly summary. We’ll even have it post directly to Teams for an automated, AI-powered status update, set up an approval process for review and more. Stay tuned!

The post How to connect your Excel workbook to the Dataverse via Dataflows first appeared on Stringfest Analytics.

The future of analytics may be “low/no-code” … at first
https://stringfestanalytics.com/no-code-future-at-first/
Sat, 16 Jan 2021 19:42:17 +0000

You can’t get through much datascrolling these days without hearing about the rise to dominance of so-called “low- and no-code” data tools. “It’s bringing data to the people,” the typical post reads. “You no longer need to learn how to code to perform sophisticated [insert analytics project here].”

I absolutely see the draw of low- and no-code data solutions: after how many tedious rounds of import pandas as pd and object of type 'closure' is not subsettable errors could you not? That said, I don’t see this trend as being as revolutionary or as permanent as it’s often claimed to be. Here’s why.

It’s an expected part of the adoption cycle

Take, as an example of low- and no-code data products, the ability to build Azure machine learning models right from Excel. It’s wild stuff, and there’s no question that it’s democratizing machine learning like never before. What’s not asked, though, is what came before this innovation, and what comes next.

Innovation often arrives in waves, with one trend building on and ultimately supplanting the last. Analytics software is no exception. Each wave has been built on code, rolled out through low- and no-code graphical user interfaces (GUIs), and then supplanted again by code.

The waves of analytics software: low- and no-code is arrived at, then supplanted.

This is a greatly simplified adoption wave: doing it justice would require a dissertation (and I’ve studied innovation at the doctoral level, so I’m not kidding). The upshot is that low- and no-code has come and gone in analytics tools; let’s look at some examples.

Case study: SPSS

SPSS (Statistical Package for the Social Sciences) began in the late 1960s and by the next decade was joined by S and SAS as a new wave of tools for exploratory data analysis and statistical programming.

Back then, computer scripts generally needed first to be compiled into a machine-readable file, and then run; this made it difficult to manipulate, visualize and analyze data on the fly. These tools were novel in that they allowed for bits and portions of a script to be executed and printed immediately, which greatly enabled iteration. Analysts could now focus on the data analysis rather than compiling the code.

At some point (and I’m not able to find the exact launch, so if someone does know, please get in touch!) SPSS went a step further and added menu-driven options for writing programs. All menu choices generated syntax that could be saved, but the idea was that analysts could focus less on the code and more on the data, hence democratizing analysis. Technically savvy statisticians no longer had a monopoly on working with data.

Low- and no-code can get messy, as SPSS can attest. (Source: Greg Elvers, University of Dayton)

One fruit of this “no- and low-code” implementation is the above menu screenshot. There’s no one-size-fits-all answer to working with data, so trying to accommodate all needs into a single menu can result in, let’s say, a bloated interface. I used SPSS in grad school, and while it was great to be able to point-and-click my way to data analysis, hence focusing on the conceptual bits of the work, I quickly found it easier just to write the code than to navigate the menus. So, the SPSS syntax generation is a blessing… but it’s not the be-all, end-all.

SPSS’s menu was just one product of the computing revolution driven by the GUI. As point-and-click options, GUIs offered relatively low- or no-code features to data users. Another result of this revolution was the spreadsheet, which in the opinion of many was the first “killer app” of the personal computer. Business users now had computing ability at their fingertips, without necessarily needing to code.

Some assembly always required

Let’s stick with spreadsheets, because they’re facing the same GUI dilemma as SPSS in the age of what I am calling “personalized cloud computing”: computer applications that rely on cloud capabilities for storing and working with data.

Excel’s Power Query is a show-stopping innovation allowing users to build extract, transform, load (ETL) pipelines right from a spreadsheet. (Similar tools for low/no-code data prep include Alteryx, Tableau Prep, etc.). While based on the M programming language, it includes menu-driven syntax generation, much like SPSS. Not a cloud application per se, Power Query is part of Microsoft’s larger “Power Platform” which is largely marketed as a cloud (and no/low code) solution.

Its menus can be used for most of a user’s needs… but not all. And indeed, a rite of passage for Power Query users is that they eventually begin writing their own code.

Recently, I was trying to add an index number by sub-group in Power Query; this took quite a bit of doing between Power Query menus and custom M coding. By the end, I asked myself: was this really any easier than just writing the code outright? After all, Power Query doesn’t offer a dedicated integrated development environment with package management, like R or Python do. It’s an odd soup of GUI and code, somewhat like SPSS.

Working with data is messy in more ways than any of us can count. And it’s this ambiguity that makes building a rigid set of menu options so difficult. I’ve yet to see a GUI that easily accommodates everything I want to do with my data in a simple user experience. Can it be done? Possibly, given future technologies. But learning to code has unlocked far more possibilities for me than any GUI ever has. So why wait for some idealized UX?

Code and GUIs, yin and yang

It’s at least worth pointing out that many claim the rise of R and Python is due in part to their being purely code-based tools. By offering great development environments and large open-source code bases, they made it possible to do nearly everything with these two languages… if you would learn to code. There’s little debate that code, even if it isn’t the layer the user interacts with, should be the primary artifact for how analytics is done.

So, why the change of heart toward low- and no-code analytics solutions? Like I said earlier, it can get frustrating to write the same calls and receive the same trivial errors time and again. So I get that these could be seen as roadblocks to greater data democracy. GUIs have had their time and place in the waves of analytics innovation, often once an application has hit a certain level of maturity. Code also plays a part in building applications out so they can reach that maturity.

I don’t know what the next wave will be, but I’m certain that this current wave of low- and no-code solutions won’t be the last. It may be that fewer coders are needed to get an innovation to the low/no-code stage of adoption, and that a product can genuinely do everything required of its users.

Until that time, I recommend data professionals learn a bit about coding. Maybe not every data solution requires it; that’s fine. But given where we’ve come from in the data world, I’m not inclined to say that the future is all low- and no-code.

The post The future of analytics may be “low/no-code” … at first first appeared on Stringfest Analytics.
