Thursday, January 1, 2026

CORA Voice Agent: Using AI to Help Humans Sound More Human

Happy New Year all — I spent a bit of time over the holidays building something I’d been meaning to explore for a while, and figured it was worth sharing.

Customer service AI usually shows up in one of two forms:

  1. a chatbot that answers questions correctly but with the emotional range of a parking meter, or

  2. a demo that works great until someone speaks slightly faster than expected.

The CORA Voice Agent project sits between low-code and pro-code. It’s not trying to replace human agents, and it’s definitely not trying to win a “most autonomous AI” contest. Instead, it explores a more practical question: What if AI could help humans get better at customer conversations, in real time, using voice?

This project was built as a learning journey first, but one that intentionally lands on a real, usable solution — delivered in two parts:
  • A six-module training workshop that walks through the architecture and build-out step by step, using easily deployed Azure components
  • A GitHub repository you can fork, break, improve, or deploy using GitHub Actions and Azure Developer CLI

CORA isn't a new industry acronym to learn or standard to follow.

CORA stands for Customer-Oriented Response Agent; she’s the name we gave our AI training application! 🤖

CORA even has personality, with over 100 voice tones (in the Edge browser) and 6 selectable persona moods to simulate customer emotional states for a broad range of training experiences.



Module 1: Solution Overview

In Module One, we take a step back and compare low-code tools such as Copilot Studio with Azure-based solutions, exploring when it makes sense to reach for more advanced features without making things harder than they need to be. It’s a look at how Azure OpenAI and Microsoft Foundry can be combined to support real-time voice interaction and scoring.

You’ll walk through the core components, how they talk to each other, and why certain architectural choices were made. Think of it as the “map before the hike” — helpful context before you start provisioning things and wondering why nothing works yet.


Module 2: Infrastructure Setup

Here’s where the cloud work begins. This module focuses on standing up the required Azure infrastructure using Azure Developer CLI (azd).

Container Apps, storage, identities, telemetry — the usual cast of characters — but wired together with repeatability in mind. The goal isn’t just to get something running once, it’s to make sure you can tear it down, redeploy it, and still remember how it works the next day.


Module 3: Application Deployment

Now the voice agent starts to feel real.

This module walks through deploying the Flask-based application, containerizing it, and getting it running in Azure Container Apps. It also introduces how real-time voice flows move between the browser, the backend, and Azure services.

At this point, you’re no longer just reading diagrams. You’re deploying something that listens, responds, and occasionally surprises you — usually right before you check the logs.
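
To make that real-time flow concrete, here's a minimal sketch of the server side: a Flask app with a socket handler that receives a transcribed utterance from the browser and sends a reply back. It uses Flask-SocketIO purely as a stand-in for the WebSocket layer, and the event names, payload shape, and canned reply are illustrative assumptions rather than the actual CORA implementation.

```python
# Minimal sketch only: event names, payload shape, and the canned reply
# are illustrative assumptions, not the actual CORA implementation.
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")

@socketio.on("user_utterance")
def handle_user_utterance(data):
    """Receive text the browser transcribed (e.g., via the Web Speech API)."""
    text = data.get("text", "")
    # In the real app this is where the backend would call the model,
    # score the exchange, and persist metrics. Here we just acknowledge.
    emit("agent_reply", {"text": f"I heard: {text}"})

if __name__ == "__main__":
    # socketio.run wraps app.run with WebSocket support for local testing
    socketio.run(app, host="0.0.0.0", port=5000)
```

On the browser side, the Web Speech API handles recognition and synthesis; in this sketch the socket carries only text, though the real application may stream audio as well.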


Module 4: Azure OpenAI and Microsoft Foundry Integration

This is where the intelligence layer comes into focus.

You’ll configure a Microsoft Foundry project, deploy models, and wire them into the conversation flow. The emphasis here isn’t “look how smart the model is,” but rather how it’s used — scoring interactions based on tone, clarity, and empathy rather than just correctness.

It’s less about generating perfect answers and more about evaluating how those answers sound when spoken aloud by a human agent.
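
As a rough illustration of that "evaluate how it sounds" idea, here's a hedged sketch of a scoring call using the Azure OpenAI Python SDK. The deployment name, prompt wording, and score schema are assumptions for illustration, not the rubric or prompts the workshop ships.

```python
# Sketch only: deployment name, prompt, and rubric are hypothetical.
import json
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

def score_exchange(customer_line: str, agent_line: str) -> dict:
    """Ask the model to grade tone, clarity, and empathy (1-5) as JSON."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # your deployment name; an assumption here
        messages=[
            {"role": "system",
             "content": ("You coach customer service agents. Return JSON with "
                         "integer fields tone, clarity, empathy (1-5) and a "
                         "one-sentence coaching tip.")},
            {"role": "user",
             "content": f"Customer said: {customer_line}\nAgent replied: {agent_line}"},
        ],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

print(score_exchange("My order never arrived.",
                     "That's frustrating, let me track it down for you."))
```

The exact rubric in the workshop will differ; the point is that the score is about delivery, not just correctness.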


Module 5: Analytics Dashboard

Voice interactions are interesting. Metrics make them useful.

This module introduces a lightweight analytics dashboard built with Chart.js and Azure Table Storage. It visualizes conversation scores over time, helping surface patterns that are easy to miss when you’re just listening to recordings.
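
For a sense of how lightweight that persistence layer is, here's a sketch of writing one scored interaction to Azure Table Storage with the azure-data-tables SDK. The table name and entity fields are illustrative guesses, not necessarily the schema the workshop uses.

```python
# Sketch only: table name and entity shape are assumptions.
import os
import uuid
from datetime import datetime, timezone
from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string(
    os.environ["AZURE_STORAGE_CONNECTION_STRING"]
)
table = service.create_table_if_not_exists("ConversationScores")

def save_score(agent_id: str, scenario: str, scores: dict) -> None:
    """Persist one scored exchange; the dashboard later charts these rows."""
    table.create_entity({
        "PartitionKey": agent_id,        # group rows per human agent
        "RowKey": str(uuid.uuid4()),     # unique per interaction
        "Scenario": scenario,
        "Tone": scores["tone"],
        "Clarity": scores["clarity"],
        "Empathy": scores["empathy"],
        "RecordedUtc": datetime.now(timezone.utc).isoformat(),
    })

save_score("agent-42", "billing-dispute", {"tone": 4, "clarity": 5, "empathy": 3})
```

The dashboard can then query these rows through a small Flask endpoint and hand them to Chart.js for plotting.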

Nothing flashy. Just enough insight to answer questions like:

  • Are agents improving?

  • Are certain scenarios harder than others?

  • Is the AI being too generous… or too harsh?


Module 6: Advanced Topics (Optional, but Encouraged)

The final module is intentionally open-ended. Monitoring, observability, and production considerations all show up here, along with ideas for extending the system beyond the workshop.

If you’re the kind of person who can’t leave a project alone once it’s “done,” this section will feel very familiar.


The Repository: From Learning Exercise to Something You Can Actually Use

The training workshop is paired with a GitHub repository that mirrors exactly what you build across the modules. This isn’t reference code meant to be admired from a distance. It’s meant to be forked, modified, and occasionally broken while you figure out how the pieces fit together.

At a technical level, the repository brings together a fairly opinionated but approachable stack:

  • Python + Flask for the core web application

  • WebSockets to support real-time voice interaction

  • Web Speech API for speech recognition and speech synthesis

  • Azure OpenAI for language understanding and response generation

  • Microsoft Foundry for model orchestration and agent configuration

  • Azure Container Apps for scalable, managed runtime

  • Azure Storage (Tables) for lightweight analytics persistence

  • Chart.js for visualizing interaction metrics

  • Azure Developer CLI (azd) to glue infrastructure and app deployment together

The result is a system that feels modern without being overly abstracted. You can see where data flows, where models are called, and where decisions are made — which is exactly what you want when you’re learning or experimenting.

The repo includes:

  • The full Flask application used throughout the training

  • Infrastructure templates aligned with the workshop architecture

  • Predefined SDK usage for Azure OpenAI and Foundry to reduce setup friction

  • GitHub Actions workflows that automate Azure deployment end to end

One of the deliberate design choices here was local development first. You can run the app locally, iterate quickly, and only push to Azure when it’s ready. When you do, azd handles the heavy lifting — provisioning resources, wiring identities, and keeping environments consistent.
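
If you like seeing that pattern in code, a common local-first approach is to keep settings in environment variables so the same code runs unchanged on your laptop and in Container Apps. The variable names below are placeholders, not necessarily the ones the repo defines.

```python
# Sketch of a local-first config pattern; variable names are placeholders.
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # reads a local .env file in development; harmless in the cloud

class Settings:
    openai_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT", "")
    openai_deployment = os.getenv("AZURE_OPENAI_DEPLOYMENT", "gpt-4o-mini")
    storage_connection = os.getenv("AZURE_STORAGE_CONNECTION_STRING", "")
    debug = os.getenv("FLASK_DEBUG", "0") == "1"

settings = Settings()
```

The same variable names can then be supplied as Container Apps settings at deploy time.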

From there, GitHub workflows step in to make things repeatable. Whether you’re testing changes, sharing improvements, or just tired of clicking through the portal, the automation is there when you want it — and out of the way when you don’t.

The goal isn’t to enforce a single deployment path. It’s to give you options without adding complexity for complexity’s sake.


Why Share This at All?

This project didn’t start as a demo idea or a “let’s build something cool” exercise. It grew out of real customer conversations — questions around voice AI, agent coaching, and whether modern AI tooling could support human agents without getting in their way.

Voice AI is easy to demo and surprisingly hard to do well. Once you add humans into the loop — especially in customer service — things like tone, pacing, empathy, and consistency matter just as much as accuracy. That’s where the curiosity turned into experimentation, and eventually into something worth sharing.

Rather than keep the learning internal, this felt like a good opportunity to:

  • Document what worked (and what didn’t)

  • Show how Azure OpenAI and Microsoft Foundry behave in a real voice scenario

  • Share an approach that others can adapt, extend, or completely rethink

CORA is an open learning project — part experiment, part reference, and a practical starting point for anyone curious about:

  • Real-time voice agents beyond scripted demos

  • Coaching and scoring human interactions, not replacing them

  • Azure OpenAI and Microsoft Foundry in practical, hands-on scenarios

  • Balancing local development flexibility with cloud-scale automation

If someone forks the repo, learns something new, or walks away with a better mental model of how voice AI can support humans instead of replacing them — then the project has already succeeded.


Wrapping It Up

If you’re curious to see how this all comes together, the full training modules and deployment walkthrough are available on GitHub Pages. You can move through each module at your own pace, save your progress along the way, and if you make it to the end, there’s even a certificate of achievement waiting for you — mostly as proof that you survived a real-world voice AI build without losing your sanity.

👉 https://jbaart37.github.io/Cora-Voice-Agent-Training/

If reading site pages isn’t your thing and you’d rather learn by breaking code, you can jump straight into the repository. Clone it, run it locally, or use the included GitHub workflows to deploy everything into Azure with minimal technical acrobatics.

👉 https://github.com/jbaart37/Cora-Voice-Agent-Training

Whether you follow the training step by step or dive directly into the code, the goal is the same: learn by building, explore what’s possible with voice AI and human-in-the-loop systems, and hopefully come away with a few ideas of your own.

Sunday, November 16, 2025

Copilot - Graph Connectors vs Custom Agent Connectors with ServiceNow

You've started using Copilot, and your organization is interested in purpose-built agent solutions that help with organizational data for specific use cases. Ever been confused about which agent model to use, or which connector would be best for your scenario?


This is where intentional architecture matters.

Microsoft offers a wide variety of tools and choices, which can feel overwhelming at first, but ultimately enables better response accuracy, richer agent experiences, and more efficient cost management.

End users are becoming more comfortable with AI-powered search and retrieval solutions, while technical teams are eager to move further into custom-built experiences. Enter Copilot Studio — a low-code platform that makes agent development accessible and fast. However, ease of use can sometimes create blind spots, leading to architectural missteps if foundational planning isn’t addressed early.

Before jumping in, consider a few key questions:

  • Is your Power Platform governed correctly and supported with ALM-ready environments?

  • Do you truly need a Custom Engine agent or custom connectors, or would a Microsoft Graph connector meet the requirement more efficiently?

  • Are users already licensed for M365 Copilot, and could leveraging Graph connectors help manage cost while still delivering secure data access?

I frequently see the excitement around Copilot Studio spark immediate building — which is great — but without proper planning, this can quickly introduce unnecessary cost, confusion for end users, and complex licensing gaps between fully licensed and pay-as-you-go users.

In this post, we’ll explore connector strategies along with the architectural considerations you should validate before building your first agent. We’ll compare these decisions through the lens of the ServiceNow ecosystem, which is equally robust and full of solution design possibilities.

Real-world scenario with ServiceNow:

To set the stage: your organization is using M365 Copilot Chat — adoption may be growing, but not yet universal. You know valuable insights live inside ServiceNow, across Knowledge Bases, tickets and incidents, and service catalogs. The natural next step is figuring out how to bring that value into Copilot experiences — intentionally. 

As a builder, you’re aware there’s no shortage of paths to take — and one of your primary starting points lives right inside the M365 Admin Center under Copilot, where you can configure access, licensing, and foundational settings.

And naturally, curiosity leads you to explore Copilot Studio—an exciting space full of possibilities, but also one that can quickly increase confusion if the roadmap isn’t clear.


You might be wondering: Why are there three different Graph connector options, and how do they compare to the ServiceNow connectors available in Copilot Studio?

To clarify this, we’ll walk through Graph Connector–based agents vs. Custom Connector–based agents, which will make it easier to understand the purpose and value of each Graph connector type.

Graph Connector – Integrates ServiceNow data into Microsoft 365 Copilot experiences (e.g., search, summarization, answers) by indexing ServiceNow content through Microsoft Graph. This enables enterprise search-driven intelligence without requiring direct API calls during each interaction.

Custom Connector – Enables a custom agent to call ServiceNow APIs in real time, supporting workflow actions, transactional operations, and deeper conversational logic. Because it executes API calls on demand during each interaction, it does not rely on Graph indexing and requires consistent API availability and performance.
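
To make the distinction tangible, here's roughly what a custom connector does under the hood on each turn: a live query against the ServiceNow Table API. Python is used here just to show the shape of the request; in Copilot Studio the connector definition handles this, and the instance URL, credentials, and query values are placeholders.

```python
# Illustrative only: the kind of real-time query a custom connector performs
# per turn. Instance URL, credentials, and query values are placeholders.
import requests

INSTANCE = "https://your-instance.service-now.com"

def search_knowledge(term: str, limit: int = 5) -> list[dict]:
    """Query ServiceNow knowledge articles via the Table API."""
    resp = requests.get(
        f"{INSTANCE}/api/now/table/kb_knowledge",
        params={
            "sysparm_query": f"short_descriptionLIKE{term}",
            "sysparm_fields": "number,short_description,sys_id",
            "sysparm_limit": limit,
        },
        auth=("api_user", "api_password"),  # placeholder basic auth
        headers={"Accept": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["result"]

for article in search_knowledge("Apple"):
    print(article["number"], article["short_description"])
```

A Graph connector, by contrast, crawls this content into the Microsoft 365 index ahead of time, so Copilot answers from the index instead of calling ServiceNow on every turn.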

Both approaches are secure, but they rely on different permission models and can drive different cost implications depending on how data is accessed and used. The three Graph connector options help accommodate these varying permission scopes and governance requirements.


Comparison - M365 Copilot with Graph connector vs Custom Agent connector

Let's start with Knowledge Base research.

Using M365 Copilot Chat with the Graph connector enabled, let's search for any articles related to Apple products.


The same prompt, this time leveraging a Copilot Studio agent with a custom connector and API call. (Demonstrated using the Copilot Studio test pane to review the connector steps and the generative orchestration enabled through the agent configuration.)


While both approaches provide helpful responses, there are some key differences. The Studio agent calls the connector and directly queries the ServiceNow API, generating a more condensed set of responses. In contrast, the M365 Copilot Chat experience leverages Graph index data and generative orchestration to provide richer, more detailed answers within the conversation while also including reference links to the ServiceNow source.

In practice, even though this difference may not be immediately obvious, the user experience can be quite similar. For many scenarios—such as retrieving knowledge base articles—direct API calls aren’t always necessary. Using the Graph connector to index ServiceNow data can be more responsive and cost-effective, requiring only configuration of the connector without the need to build a fully interactive agent.

Using Graph-indexed data can optimize the experience when the source content is relatively static and not constantly changing. Knowledge base articles or reference materials are ideal candidates for this approach. In contrast, tickets, incidents, or catalog items tend to be more dynamic, frequently updated, or require additional user interaction to reach the correct information. In these cases, direct API queries via a custom connector or Studio agent may provide more accurate and timely responses.

Additionally, ticket or incident data may require different user permissions to ensure that critical or sensitive information isn’t inadvertently shared across the organization. Proper access controls and governance are essential when exposing dynamic or confidential data through either Graph-indexed solutions or custom connectors.

Topic integration for advanced scenarios like ServiceNow Catalogs

Where Copilot Studio agents truly shine is in their flexibility—for example, when working with catalog knowledge in ServiceNow. While catalogs can be accessed through both Graph-indexed data and custom connectors, the combination of Topics and Tools in Studio allows for a more intuitive and interactive end-user experience. This approach enables users to navigate complex catalogs more naturally and complete tasks with fewer steps.

In this example, the end user may not know which catalogs or catalog items are available through M365 Copilot Chat. Studio agents, with their flexible combination of Topics and Tools, can surface relevant options more intuitively, guiding users to the right catalog items without requiring prior knowledge of what’s available.

Copilot Chat typically responds with a list of items, requiring the user to read through the options to determine which catalog or topic is most appropriate. This is an example where a more targeted or extensive prompt could produce a more direct answer, helping the user reach the desired information faster and with less effort.

A custom-built agent in Copilot Studio can be designed to deliver a more intuitive sequence, guiding users efficiently and aligning responses more closely with their specific needs using generative orchestration.

Using a Topic to steer the conversation to the appropriate tool (GetCatalogs):

The end user is asked which information (catalog) they are interested in.


Once the catalog is selected, the topic calls an additional tool (GetCatalogItems) to gather the inventory and provide links to each catalog item.

To support an intuitive user flow, the agent is structured and designed in the following way:





The Topic guides the conversation flow between the ServiceNow connector tools and generative orchestration, while instructions define how the final output should be formatted for the end user. In this example, variable management (Set Variable Value Expression) is also leveraged to dynamically present the available catalog list directly within the agent’s chat experience, creating a more interactive and personalized workflow.

When working with multiple catalogs and large, detailed inventories, an interactive agent experience can significantly improve user satisfaction by helping users navigate options more efficiently and reach the right information with less friction.


Cost comparison - M365 Copilot with Graph connector vs Custom Agent connector

When using M365 Copilot with a Graph connector, licensing is straightforward — Copilot is priced at $30 per user per month, and Graph-based queries are included in that cost. For organizations with active M365 Copilot adoption, this becomes a predictable, all-inclusive experience.

In contrast, Copilot Studio agents that rely on custom connectors call external APIs (such as ServiceNow) during each conversation. Users who do not have an M365 Copilot license will consume pay-as-you-go (PAYG) credits, introducing variable usage-based costs. This can be very cost-effective at low usage but may scale significantly in high-volume scenarios.


Example Cost Model (at time of writing)

Copilot Studio is billed using Copilot Credits, the universal currency for agent execution.

Rate Options
  • Prepaid Pack: 25,000 credits for $200/month
  • PAYG: $0.01 per credit

Estimated Credit Use (ServiceNow-integrated agent)
  • Generative response: 2 credits
  • External API call (ServiceNow): 5 credits per call
  • Assume ~3 API actions per conversation → (3 × 5) + 2 = 17 credits per conversation

Sample Monthly Scenario

2,000 conversations × 17 credits = 34,000 credits / month

Component costs:
  • Prepaid 25,000-credit pack: $200
  • Remaining 9,000 credits (PAYG @ $0.01): $90
  • Total estimated monthly cost: ~$290
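
If you want to plug in your own volumes, the arithmetic is simple enough to script. The rates and per-action credit counts below are just the example figures from this post, so check current pricing before relying on them.

```python
# Back-of-the-envelope estimator using the example figures above.
# Rates and credit counts are illustrative, not official pricing.
PACK_CREDITS = 25_000          # prepaid pack size
PACK_PRICE = 200.00            # $/month for the prepaid pack
PAYG_RATE = 0.01               # $ per credit beyond the pack
CREDITS_PER_CONVO = 3 * 5 + 2  # 3 API calls @ 5 credits + 1 generative response @ 2

def monthly_cost(conversations: int) -> float:
    credits = conversations * CREDITS_PER_CONVO
    overflow = max(0, credits - PACK_CREDITS)
    return PACK_PRICE + overflow * PAYG_RATE

print(monthly_cost(2_000))  # -> 290.0 (34,000 credits: pack + 9,000 PAYG)
```

Swap in your own conversation volume or credit counts to see where a prepaid pack stops covering usage.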

Take a look at the agent usage estimator for more detail: Microsoft agent usage estimator

Closing

In short, choosing between Graph connectors and custom connectors in Copilot Studio isn’t just a technical decision — it’s a strategic one. Graph connectors let you securely bring ServiceNow data into Microsoft 365 Copilot experiences, while custom connectors unlock direct API-driven workflows, richer dialogues, and real-time actions. Each has its own security, permission, and cost trade-offs — and Microsoft’s three Graph connector variants give you flexibility to align with your governance and usage model. From a cost standpoint, leveraging existing M365 Copilot licenses can be very efficient, but building in Studio with custom connectors may introduce pay-as-you-go consumption for non-licensed users, which needs to be managed carefully.

If you want to dive deeper, here are a few great resources:


Stay tuned - in the next article I will walk through Topic building in the canvas editor vs. Code View, and the nuances between Power Fx and YAML formatting when building agents in Copilot Studio.

Saturday, September 13, 2025

Copilot Studio and VS Code - Start using the Copilot Studio Extension

Getting started with Copilot Studio is fast and approachable. Whether you begin by using the Describe interface to chat with the Studio Agent or dive straight into Configure, you can spin up your first agent framework in just minutes. The user experience in Copilot Studio is designed to be intuitive, but sometimes you may want more visibility into the agent’s structure—or the flexibility to design faster with AI-powered assistance. That’s where Visual Studio Code, the Copilot Studio extension, and GitHub Copilot come together to supercharge your workflow. 


With Copilot Studio’s ease of use paired with the added flexibility of VS Code and GitHub Copilot, you don’t need to be a pro coder to take your agents further—these tools can help you refine, customize, and build with confidence. Let’s explore how you can get started step by step.


To begin, in Copilot Studio create a basic agent out of the box—simple and ungrounded, with no knowledge sources, topics, or defined scope yet. This clean starting point gives you the flexibility to shape the agent exactly how you need it.



Ready to see it in action? Start by adding the Copilot Studio Extension to VS Code.


You may be working in multiple tenants or environments; a quick tip is to define your default identity to ensure access to the desired Copilot Studio environment.


Next, let's connect to the tenant and clone the agent locally to VS Code. (This process is very similar to GitHub development.)


The command palette in VS Code presents the environment selection based on the default account identity we defined earlier.

Now we can select the agent we initially started in Copilot Studio.


Pick a local folder to clone the agent configuration into. Additional tip: if selecting a cloud storage location, be aware you may not want agent configuration details to be synced to OneDrive or other cloud storage.


Now from the Explorer tab we can see the agent structure, settings, topics, and knowledge sources ... and we have access to GitHub Copilot within the VS Code IDE.


When changes are made in the Copilot Studio interface, source control allows for synchronization of remote changes, similar to GitHub.


Now we see the remote addition of a knowledge source. I recommend adding one knowledge source as a reference, in order to provide GitHub Copilot a reference format for naming and file structure.

From Studio to Code: Unlocking More with GitHub Copilot

The real magic happens when you bring the Copilot Studio extension into VS Code and pair it with GitHub Copilot’s agentic support. Together, they make enhancing your agent simple and approachable. For example, here’s a straightforward prompt that adds three new knowledge sources with ease.

(Notice I added the file reference in the prompt using the #file.name format).


The result:


What would have taken 5-10 minutes in Copilot Studio is now complete within seconds using GitHub Copilot agent mode (in my example, using Claude Sonnet 4 from Anthropic).

From here we can review the knowledge source format and select "Keep" from the chat window.

Lastly, we go back to source control to push our knowledge source additions from VS Code back to Copilot Studio.


Back in Copilot Studio


The new knowledge sources are available and ready to test in the Test Pane.



Give it a try

Adding knowledge sources is just the beginning—and as you’ve seen, it’s quick and straightforward in Copilot Studio. Once you’re comfortable, you can build on this foundation to tackle more complex scenarios like managing topics, handling multi-turn conversations, enabling agent-to-agent interactions, and triggering actions.

I hope this walkthrough was helpful—especially if you’re already familiar with working in VS Code. Now it’s your turn to explore the possibilities and see how far you can take your agents.

Friday, August 29, 2025

Copilot Enablement Options - Using Pay as You Go to share Copilot Agents

  Ever wished you could spin up your own Copilot agent without committing to a full subscription? Now you can, thanks to Copilot Pay-As-You-Go! This flexible option lets you create and share custom agents or simply enable users to tap into Copilot chat—while keeping costs predictable through Azure billing. No more over-provisioning or worrying about unused licensing; you pay only for what you use. It’s perfect for teams experimenting with AI or scaling solutions without upfront commitments.


While users with the M365 Copilot license will enjoy the most feature-rich experience, you may also want to empower your entire organization with a custom AI agent. This agent can be tailored to your needs—grounded in critical SharePoint content, trained on specific internal documents, and secured with enterprise-grade data protection.

I also want to recommend following Dewain Robinson for great content and guidance on all things Copilot and agent development in Copilot Studio.

In this post, we’ll guide you through enabling and managing Copilot with a pay-as-you-go model—ideal for organizations looking to extend AI capabilities without committing to full M365 Copilot licensing. Whether you're an IT admin, business leader, or platform owner, this guide is designed to help you get started quickly and confidently.

Here's what we'll cover:

  • Who this post is for – Understand the roles and scenarios where pay-as-you-go Copilot makes sense
  • Enabling pay-as-you-go – Step-by-step guidance for activating pay-as-you-go for Copilot Studio and Copilot users
  • Usage reporting and cost control – How to gain visibility into usage, monitor consumption, and manage costs effectively
  • Understanding message costs – A breakdown of how message-based billing works and what to expect
  • Creating and sharing a custom Copilot agent – How to build a custom AI agent grounded in your organization’s content, and share it within Copilot and Teams

By the end, you’ll have a clear path to delivering powerful AI experiences to your users—securely, flexibly, and at your own pace.

Who is this for?

  • IT admins and Power Platform admins who need clear prerequisites, steps, and knobs to manage risk and spend.
  • Makers & developers who want the fastest path to publish agents and let Azure pick up the bill only when users engage.
  • Finance & ops folks who live in Azure Cost Management and want budgets/alerts for AI usage.

Architecture at a glance

M365 Copilot Chat & SharePoint agents PAYG → Create a billing policy in Microsoft 365 admin center, scope it to users or groups, then connect it to services like Copilot Chat or SharePoint agents. Set up Microsoft 365 Copilot pay-as-you-go for IT admins | Microsoft Learn

Copilot Studio PAYG → Attach a billing plan to one or more environments in PPAC; agent message usage flows to your Azure subscription as metered consumption. Set up a pay-as-you-go plan - Power Platform | Microsoft Learn

Governance stays centralized: Integrated Apps (app/agent lifecycle) + PPAC (capacity & usage) + Azure Cost Management (billing). View usage and billing for pay-as-you-go plan - Power Platform | Microsoft Learn

Prerequisites & roles

Getting Started

Enable M365 Copilot Chat & SharePoint agents (PAYG) in the M365 admin center - this enables users to create/use agents in Copilot Chat or on SharePoint sites without seat licenses.
  • Set up a billing policy scoped to all users or a security group, then connect it to Copilot Chat and/or SharePoint agents
  • Select Services and include M365 Copilot Chat and SharePoint Agents
  • Set budget limits and users - users can be scoped to an Entra security group as needed.

Enable Copilot Studio (PAYG) in PPAC (optional overall, but required for building and sharing through Studio) - this enables building/hosting agents across channels with low-code + integrations.
This step allows defined users/builders to create custom agents in Copilot Studio and share them with others who wish to interact with the custom agent - users who may not have an M365 Copilot license.


In setting up this option, we align the Pay-as-you-go Billing plan to an existing Azure subscription and resource group. We also define the target Power Platform environment for agent development and sharing.

Important Note - Common Pitfall
If this is your first time managing environments in the Power Platform Admin Center (PPAC), the only existing environment is "default". The default environment is not eligible for pay-as-you-go capacity; only Sandbox and Production environments can be used. It is recommended to create a new environment, scoped to users, for pay-as-you-go capacity. If you followed the above steps and notice the Target Environments field is blank, or you are unable to select an environment during setup, this is your problem. (More detail HERE.)


Building and Sharing a Custom Agent

Here is where the fun begins, now that you have your environment enabled for pay-as-you-go. Proceed to https://copilotstudio.microsoft.com/

Pitfall 2 - Be sure to select the defined environment previously configured for pay-as-you-go capacity, in the upper right hand corner of the Studio UX.


Once the environment is selected and you choose New Agent, Copilot Studio presents the ability to create an agent by chatting with the "builder agent" and describing your intent, or you can proceed directly to Configure.

I won't go into depth regarding all of the options and capabilities when creating a custom agent for your organization - the possibilities are endless.
Try out building with Chat by describing your agent, and compare it to the direct configuration options.


A quick and easy agent to start with, a SharePoint-grounded knowledge finder:

I recommend selecting Generative Orchestration, which enriches the agent's capability to navigate through the knowledge sources. Also note you can define the response model used by your agent. This can be edited later, and is defined in the overall Copilot Studio Generative AI settings.

Under the Knowledge section, select a few SharePoint sites important for your users. Also note the option to include or exclude Web Search.

 Enabling web search allows the agent to traverse your defined content grounding, and leverage public web search if results are not available. Disabling web search only allows the agent to reason over the defined knowledge locations.

Give it a test in the test pane:

Share your agent for others to co-develop or begin using:

Before you can share, the agent needs to be published.

Here you can define who has access to your agent, or co-authors you wish to edit with, and options to publish the agent to Teams and Copilot Agent Store (Get Agents in M365 Copilot)

When selecting "show to everyone in my org", this triggers an approval process in the M365 Admin Center before the agent is made available in the Copilot Agent Store. Pending approvals appear here:


If you want to share your agent directly with users before publishing to the Agent Store, copy the link and share it in a Teams chat (shown above).

Invite co-authors to help test and edit in Copilot Studio - note these users must also be in the security group given access to the pay-as-you-go capacity defined in the initial setup steps.
Note the difference between the two links: the Copy Link in Manage Sharing shares the agent directly, while the Copy Link on this page (shown in the image above) shares a link to the agent builder in Copilot Studio.

Cost Management and Observability

Copilot Studio (PAYG)
Microsoft 365 Copilot Chat & SharePoint Agents (PAYG)
  • SharePoint agents: billed at $0.01/message; a “successful interaction” typically uses ~12 messages. M365 PAYG pricing
  • Copilot Chat agents: enable metered consumption for users without an M365 Copilot license; licensed users aren’t charged for eligible agent events. Agents in Copilot Chat · Billing scenarios
Additional resources

Thanks much to my rockstar peer Brandon Marcurella for guidance and help along the way.


Closing
Find your agent, once published and approved, in Teams and M365 Copilot.


Happy "agenting" in Copilot



 


Here is a set of useful links to bookmark:

  • Copilot Studio licensing — what’s included, PAYG vs packs, pricing: Learn
  • Billing rates & message scenarios — exactly what burns messages: Learn
  • Set up PAYG (PPAC) — billing plans & environment linking: Learn
  • Manage messages & capacity (PPAC) — allocation & monitoring: Learn
  • Set up PAYG for M365 Copilot (MAC) — billing policy + budgets: Learn
  • Set up or disconnect PAYG for Copilot services — end‑to‑end guide: Learn
  • Agents in Copilot Chat — enable, author, manage: Learn
  • M365 PAYG pricing for SharePoint agents — rate card: Learn
  • Manage agents (Integrated Apps) — centralized governance: Learn
  • PAYG overview (Power Platform) — how meters/policies work: Learn
  • View usage & billing — Azure Cost Management + PPAC reports: Learn