Friday, August 29, 2025

Copilot Enablement Options - Using Pay as You Go to share Copilot Agents

  Ever wished you could spin up your own Copilot agent without committing to a full subscription? Now you can, thanks to Copilot Pay-As-You-Go! This flexible option lets you create and share custom agents or simply enable users to tap into Copilot chat—while keeping costs predictable through Azure billing. No more over-provisioning or worrying about unused licensing; you pay only for what you use. It’s perfect for teams experimenting with AI or scaling solutions without upfront commitments.


While users with the M365 Copilot license will enjoy the most feature-rich experience, you may also want to empower your entire organization with a custom AI agent. This agent can be tailored to your needs—grounded in critical SharePoint content, trained on specific internal documents, and secured with enterprise-grade data protection.

I also want to recommend following Dewain Robinson for great content and guidance on all things Copilot and agent development in Copilot Studio.

In this post, we’ll guide you through enabling and managing Copilot with a pay-as-you-go model—ideal for organizations looking to extend AI capabilities without committing to full M365 Copilot licensing. Whether you're an IT admin, business leader, or platform owner, this guide is designed to help you get started quickly and confidently.

Here's what we'll cover:

  • Who this post is for – Understand the roles and scenarios where pay-as-you-go Copilot makes sense
  • Enabling pay-as-you-go – Step-by-step guidance for activating pay-as-you-go for Copilot Studio and Copilot users
  • Usage reporting and cost control – How to gain visibility into usage, monitor consumption, and manage costs effectively
  • Understanding message costs – A breakdown of how message-based billing works and what to expect
  • Creating and sharing a custom Copilot agent – How to build a custom AI agent grounded in your organization’s content, and share it within Copilot and Teams

By the end, you’ll have a clear path to delivering powerful AI experiences to your users—securely, flexibly, and at your own pace.

Who is this for?

  • IT admins and Power Platform admins who need clear prerequisites, steps, and knobs to manage risk and spend.
  • Makers & developers who want the fastest path to publish agents and let Azure pick up the bill only when users engage.
  • Finance & ops folks who live in Azure Cost Management and want budgets/alerts for AI usage.

Architecture at a glance

M365 Copilot Chat & SharePoint agents PAYG → Create a billing policy in Microsoft 365 admin center, scope it to users or groups, then connect it to services like Copilot Chat or SharePoint agents. Set up Microsoft 365 Copilot pay-as-you-go for IT admins | Microsoft Learn

Copilot Studio PAYG → Attach a billing plan to one or more environments in PPAC; agent message usage flows to your Azure subscription as metered consumption. Set up a pay-as-you-go plan - Power Platform | Microsoft Learn

Governance stays centralized: Integrated Apps (app/agent lifecycle) + PPAC (capacity & usage) + Azure Cost Management (billing). View usage and billing for pay-as-you-go plan - Power Platform | Microsoft Learn
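Budgets and alerts for the Azure side live in Cost Management. As a hedged sketch (the budget name, amount, and dates below are placeholder assumptions; run it after az login against the subscription that carries the PAYG meters):

```powershell
# Illustrative only: create a $200/month budget on the subscription
# billed for Copilot PAYG consumption. Alert notifications can then be
# attached in the portal under Cost Management > Budgets.
az consumption budget create `
    --budget-name "copilot-payg-budget" `
    --amount 200 `
    --category cost `
    --time-grain monthly `
    --start-date 2025-09-01 `
    --end-date 2026-08-31
```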

Prerequisites & roles

Getting Started

Enable M365 Copilot Chat & SharePoint agents (PAYG) in the M365 admin center - this enables users to create and use agents in Copilot Chat or on SharePoint sites without seat licenses.
  • Set up a billing policy scoped to all users or a security group, then connect it to Copilot Chat and/or SharePoint agents
  • Select Services and include M365 Copilot Chat and SharePoint Agents
  • Set budget limits and users - users can be scoped to an Entra security group as needed.

Enable Copilot Studio (PAYG) in PPAC (optional, but required for building and sharing through Studio) - this enables building and hosting agents across channels with low-code tooling and integrations.
This step allows defined users/builders to create custom agents in Copilot Studio and share them with others who want to interact with the agent - including users who don't have an M365 Copilot license.


In setting up this option, we align the Pay-as-you-go Billing plan to an existing Azure subscription and resource group. We also define the target Power Platform environment for agent development and sharing.

Important Note - Common Pitfall
If this is your first time managing environments in Power Platform Admin Center (PPAC), the only existing environment is "default". The default environment is not eligible for pay-as-you-go capacity; only Sandbox and Production environments can be used. It's recommended to create a new environment scoped to your pay-as-you-go users. If you followed the steps above and notice the Target Environments field is blank, or you're unable to select an environment during setup, this is why. (more detail HERE )


Building and Sharing a Custom Agent

Here is where the fun begins, now that you have your environment enabled for pay-as-you-go. Proceed to https://copilotstudio.microsoft.com/

Pitfall 2 - Be sure to select the environment previously configured for pay-as-you-go capacity, in the upper right-hand corner of the Studio UX.


Once you've selected the environment and chosen New Agent, Copilot Studio presents the ability to create an agent by chatting with the "builder agent" to describe your intent, or you can proceed directly to configuration.

I won't go into depth on all of the options and capabilities when creating a custom agent for your organization - the possibilities are endless.
Try building with chat by describing your agent, and compare it to the direct configuration options.


A quick and easy agent to start with: a SharePoint-grounded knowledge finder.

I recommend selecting Generative Orchestration, which enriches the agent's ability to navigate through the knowledge sources. Also note that you can define the response model used by your agent. This can be edited later, and defined in the overall Copilot Studio Generative AI settings.

Under the Knowledge section, select a few SharePoint sites important for your users. Also note the option to include or exclude web search.

Enabling web search allows the agent to search your defined grounding content first, then fall back to public web search if results are not found. Disabling web search restricts the agent to reasoning over the defined knowledge locations only.

Give it a test in the test pane:

Share your agent for others to co-develop or begin using:

Before you can share, the agent needs to be published.

Here you can define who has access to your agent, add co-authors you wish to edit with, and choose options to publish the agent to Teams and the Copilot Agent Store (Get Agents in M365 Copilot).

Selecting "show to everyone in my org" triggers an approval process in the M365 admin center before the agent is made available in the Copilot Agent Store. Pending approvals appear here:


If you want to share your agent directly with users before publishing to the Agent Store, copy the link and share it in Teams chat. (shown above)

Invite co-authors to help test and edit in Copilot Studio - note that these users must also be in the security group granted access to the pay-as-you-go capacity defined in the initial setup steps.
Note the difference between the two links: the Copy Link on this page shares a direct link to the agent builder in Copilot Studio, while the copy link in Manage Sharing shares the agent itself.

Cost Management and Observability

Copilot Studio (PAYG)
Microsoft 365 Copilot Chat & SharePoint Agents (PAYG)
  • SharePoint agents: billed at $0.01 per message; a “successful interaction” typically uses ~12 messages (about $0.12 per interaction). M365 PAYG pricing
  • Copilot Chat agents: metered consumption applies to users without an M365 Copilot license; licensed users aren’t charged for eligible agent events. Agents in Copilot Chat · Billing scenarios
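If it helps with planning, the rate card above implies a simple back-of-the-napkin model (the interaction volume below is a made-up assumption; substitute your own):

```powershell
# Rough cost model: $0.01/message, ~12 messages per successful interaction.
$costPerMessage         = 0.01
$messagesPerInteraction = 12
$interactionsPerMonth   = 5000   # assumption - replace with your estimate

$monthlyCost = $costPerMessage * $messagesPerInteraction * $interactionsPerMonth
"Estimated monthly cost: `$$monthlyCost"   # ~ $600 at these volumes
```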
Additional resources

Thanks much to my rockstar peer Brandon Marcurella for guidance and help along the way.


Closing
If you published your agent and it was approved, find it in Teams and in M365 Copilot.


Happy "agenting" in Copilot
Here is a set of useful links to bookmark:

  • Copilot Studio licensing — what’s included, PAYG vs packs, pricing: Learn
  • Billing rates & message scenarios — exactly what burns messages: Learn
  • Set up PAYG (PPAC) — billing plans & environment linking: Learn
  • Manage messages & capacity (PPAC) — allocation & monitoring: Learn
  • Set up PAYG for M365 Copilot (MAC) — billing policy + budgets: Learn
  • Set up or disconnect PAYG for Copilot services — end‑to‑end guide: Learn
  • Agents in Copilot Chat — enable, author, manage: Learn
  • M365 PAYG pricing for SharePoint agents — rate card: Learn
  • Manage agents (Integrated Apps) — centralized governance: Learn
  • PAYG overview (Power Platform) — how meters/policies work: Learn
  • View usage & billing — Azure Cost Management + PPAC reports: Learn
Monday, July 7, 2025

    Azure Automation for Shared Calling Enablement

    Automating Enterprise Voice Enablement for Teams Shared Calling: A Journey in Iteration

    This one’s a long read—because the work was iterative, the scope deceptively simple, and the edge cases... well, they were not shy.



    The goal? Automate Enterprise Voice (EV) enablement for users in Microsoft Teams Shared Calling scenarios. Many organizations are adopting Shared Calling to provide basic PSTN access to all users while reserving DIDs and calling plans for high-volume users. It’s cost-effective, scalable, and flexible. But there’s a catch: even with group-based licensing and policy assignment in Entra ID, Teams doesn’t automatically flip the Enterprise Voice bit. That still requires PowerShell or a manual toggle in the Teams Admin Center.

    So I built an automation to do just that.

    Why This Matters

    This model—what we affectionately call a “reverse migration” (credit to Matt Edlhuber)—lets organizations enable outbound and auto-attendant-based inbound calling for everyone. Then, based on usage or cost analysis, they can selectively assign DIDs and calling plans when porting timelines align. It’s a way to decouple enablement from carrier constraints.
    The Setup

    Picture this: you’ve just migrated hundreds of users to Shared Calling using PowerShell. High-fives all around. But now you need to ensure they’re EV-enabled. Manually? No thanks.

    Here’s the stack I used:
    • Entra ID: Security group membership drives license and policy assignment.
    • Microsoft Graph API: Subscribes to group membership changes.
    • Azure Logic App: The orchestration layer.
    • Webhook Trigger: Fires on group updates.
    • Azure Automation Account: Hosts the PowerShell runbook.
    • Runbook: Validates license and applies EV enablement.

    The Obvious Path

    Iteration 1: Sure, I could’ve scheduled a daily PowerShell job or used Power Automate to trigger the runbook. Shoutout to Laure Vanderhauert for the excellent documentation that got me started.
    But I wanted near-real-time enablement. Why wait a day when we can act in minutes?

    Challenge #1: Detecting Deltas
    The first hurdle: how do we detect only the new users added to the group? Most orgs already automate license and policy assignment, but EV enablement is often manual. I needed a way to isolate just the new additions.

    I’d previously worked with Graph API subscriptions and Azure Event Grid in Call Record Insights, so I figured I could apply a similar pattern here.

    Spoiler: Event Grid doesn’t give you the delta. It tells you a group changed, but not how. No user info in the payload = no go.

    Enter Copilot(s)

    This is where GitHub Copilot and M365 Copilot saved me hours. I’ll write more soon about using Claude Sonnet 4 in Agentic vs Ask mode in VS Code. TL;DR: Agentic mode is powerful, but Ask mode gave me the iterative control I needed to learn as I built.
    Iteration 2: Build the Runbook First

    I started with the end in mind: a runbook that accepts a user ID and group ID, validates licensing, and enables EV. I tested it locally in VS Code, then manually in the Azure Portal. It worked.

    Then life happened. I paused.
    Iteration 3: Logic App + Graph Subscription

    Back at it, I wired up the Logic App to the Graph subscription. It worked—until it didn’t.

    Challenge #2: Add ≠ Remove
    Turns out, Graph fires on any group membership change. Add or remove. My Logic App didn’t discriminate, so it happily re-enabled users who had just been removed. Oops.

    Fix: I added logic to filter for additions only. Most orgs remove licenses and policies when users leave the group, so I focused on the “add” path.

    Challenge #3: Bulk Adds
    What happens when multiple users are added at once? Is the payload an array? Do we get one notification per user? I had to build logic to handle both cases.

    Challenge #4: The Subscription That Multiplies
    When testing your Graph subscription and Logic App flow, it’s surprisingly easy to accidentally create multiple subscriptions. And when you do? Each one will happily fire off its own webhook, triggering your Logic App and runbook multiple times.


    I’ll go deeper into subscription setup in the next section, but this one deserves a spotlight.
    Here’s the key:
    • Make sure you only have one active subscription.
    • Only monitor the resource: /groups/{group-id}/members
    That last part—members—is critical. If you subscribe to just /groups/{group-id}, you’ll get notified on any group change (like metadata updates), not just membership changes. That’s a fast track to unintended runbook executions and potential chaos.
    So, before you hit “Deploy,” double-check:
    • You’re not stacking subscriptions.
    • You’re watching the right resource.
    • You’re not about to create a webhook-triggered infinite loop.

    Trust me, your future self will thank you.
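    A quick way to audit for stacked subscriptions, reusing the $accessToken from the creation snippet later in this post (a hedged sketch - adjust to your own auth flow):

    ```powershell
    # List all active Graph change-notification subscriptions owned by this app,
    # so you can confirm exactly one is watching /groups/{group-id}/members.
    $subs = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/subscriptions" `
        -Headers @{ Authorization = "Bearer $accessToken" } -Method GET

    $subs.value | Select-Object id, resource, notificationUrl, expirationDateTime

    # Delete a duplicate by id if you find one:
    # Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/subscriptions/<subscription-id>" `
    #     -Headers @{ Authorization = "Bearer $accessToken" } -Method DELETE
    ```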

    The Build: Where the Magic Happens

    Let’s talk about the build. The real magic lies in the Graph API subscription and the Azure Logic App with a webhook trigger. But first, let’s set the scene.


    Graph Subscription: Your Digital Bouncer

    Imagine you’re the bouncer at Club Entra. You don’t want to stand at the door all night checking who’s coming and going from the VIP group (say, “Teams Voice Users”). So you hire Microsoft Graph to do it for you.

    A Graph API subscription is your way of saying:

    “Hey Graph, tap me on the shoulder whenever someone joins or leaves this group.”

    Here’s what that looks like in practice:

    POST https://graph.microsoft.com/v1.0/subscriptions
    {
      "changeType": "updated",
      "notificationUrl": "https://yourlogicapp.azurewebsites.net/api/notify",
      "resource": "/groups/{group-id}/members",
      "expirationDateTime": "2025-07-07T11:00:00Z",
      "clientState": "secretSauce123"
    }

    What’s Going On Here?

    • changeType: "updated" — You care about membership changes.
    • resource: The Entra ID group you’re watching.
    • notificationUrl: Where Graph sends the “Yo, something changed!” message.
    • clientState: A secret handshake to verify the message is legit.
    Graph will first validate your notificationUrl to make sure it’s not a prank. Once that handshake is done, you’re officially subscribed.

    When someone joins or leaves the group, Graph sends a POST to your notificationUrl with a payload like this:
    {
      "value": [
        {
          "subscriptionId": "...",
          "changeType": "updated",
          "resource": "groups/{group-id}/members",
          "resourceData": {
            "id": "user-id"
          }
        }
      ]
    }
    
    It’s like getting a text that says, “Someone just walked into the VIP room,” and then checking the security cam to see who it was.

    Azure Logic App: Your Always-On Concierge

    Your Logic App is the concierge that handles these notifications:
    • Trigger: HTTP request from Graph hits your Logic App.
    • Parse: Extract the user-id from the payload.
    • Lookup: Call Graph to get full user details (/users/{user-id}).
    • Action: Trigger an Azure Automation runbook to enable Enterprise Voice.

    Flow Summary

    Here’s the full flow, start to finish:
    • Entra ID Group Membership Changes
      • A user is added to or removed from a group like “Teams Voice Users.”
    • Graph API Subscription Detects the Change
      • You’ve subscribed to /groups/{group-id}/members with changeType: "updated".
    • Graph Sends a Notification
      • A POST hits your Logic App’s HTTP trigger with metadata like resourceData.id.
    • Logic App is Triggered
      • Validates clientState (optional but smart).
      • Extracts the user-id.
      • Calls Graph to get full user details.
    • Triggers the runbook to take action (enable EV, log, alert, etc.).

    Note: Logic Apps don’t poll Entra ID. They rely on Graph’s webhook notifications. The subscription is the middleman that makes this reactive and efficient.

    Sample Code: Creating the Subscription

    Here’s a generic PowerShell snippet to create the subscription:
    # Step 0: Auth setup
    $tenantId = "<your-tenant-id>"
    $clientId = "<your-client-id>"
    $clientSecret = "<your-client-secret>"
    $scope = "https://graph.microsoft.com/.default"
    
    # Get token
    $body = @{
        grant_type    = "client_credentials"
        client_id     = $clientId
        client_secret = $clientSecret
        scope         = $scope
    }
    
    $tokenResponse = Invoke-RestMethod -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" -Method POST -Body $body
    $accessToken = $tokenResponse.access_token
    
    # Step 1: Create the subscription
    $subscriptionBody = @{
        changeType          = "updated"
        notificationUrl     = "https://yourlogicapp.azurewebsites.net/api/notify"
        resource            = "/groups/{group-id}/members"
        expirationDateTime  = (Get-Date).ToUniversalTime().AddHours(1).ToString("yyyy-MM-ddTHH:mm:ssZ")
        clientState         = "secretSauce123"
    } | ConvertTo-Json -Depth 3
    
    $response = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/subscriptions" `
        -Headers @{ Authorization = "Bearer $accessToken" } `
        -Method POST `
        -Body $subscriptionBody `
        -ContentType "application/json"
    
    $response
    
    


    In the full deployment guide, I’ll include an additional runbook designed to run independently on a scheduled basis—separate from the Logic App trigger. This daily run ensures that the Graph subscription remains active and properly connected to the Logic App. It’s a critical step, as the subscription must be able to communicate with the Logic App endpoint to deliver notifications reliably.
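    The renewal itself is a PATCH on the subscription with a new expiration. A sketch, assuming the same app-credential token flow as the creation snippet (Graph caps subscription lifetime for directory resources, so renew well before expiry):

    ```powershell
    # Extend an existing subscription; <subscription-id> comes from the create response.
    $renewBody = @{
        expirationDateTime = (Get-Date).ToUniversalTime().AddDays(2).ToString("yyyy-MM-ddTHH:mm:ssZ")
    } | ConvertTo-Json

    Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/subscriptions/<subscription-id>" `
        -Headers @{ Authorization = "Bearer $accessToken" } `
        -Method PATCH -Body $renewBody -ContentType "application/json"
    ```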

    Permissions Matter

    To make this work, your Logic App must be exposed as an Enterprise Application so you can assign the right API permissions—namely User.Read.All and Group.ReadWrite.All. I’ll cover this in more detail in the deployment guide.

    The Logic App: Lightweight, Serverless, and Smarter Than It Looks

    If you’ve worked with Power Automate (formerly known as Flow), Azure Logic Apps will feel familiar. Think of them as the grown-up, serverless cousin—deployed under a consumption plan, stateless, and built to handle logic flows with minimal overhead.

    In our case, the Logic App is triggered by an HTTP POST from the Microsoft Graph subscription. It’s the always-on listener that springs into action when someone joins (or leaves) our Entra ID group.

    Despite being lightweight, Logic Apps are surprisingly robust. They’re great at making decisions, branching logic, and calling downstream services—like our Azure Automation runbook.

    Here’s what we needed our Logic App to handle:

    1. Respond to Graph’s Token Validation
      • When you first create a Graph subscription, Microsoft sends a validation request to your notificationUrl. Your Logic App needs to recognize this and respond with the validationToken to complete the handshake. No token, no subscription.
    2. Handle Membership Deltas (Adds and Removes)
      • Graph sends a notification whenever group membership changes. That could mean one user or several. Your Logic App needs to:
        • Iterate through the payload (which might be a single user or an array).
        • Identify each user’s ID.
        • Decide what to do next.
    3. Ignore Removals, Focus on Adds
      • We don’t need to trigger the runbook when a user is removed from the group. Most orgs handle license and policy cleanup separately, and we’re not trying to disable Enterprise Voice here—just enable it.
      • So we added logic to:
        • Filter out removes.
        • Only process adds.
    This keeps the automation focused and avoids unnecessary runbook executions.
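    The add/remove filtering can be sketched in PowerShell terms (hedged - $payload stands for the parsed webhook body, and the members@delta shape should be verified against your tenant's actual notifications; removed members carry an '@removed' property, additions do not):

    ```powershell
    # Collect only the user ids that were ADDED to the group.
    $adds = @()
    foreach ($notification in $payload.value) {
        foreach ($member in $notification.resourceData.'members@delta') {
            if (-not $member.'@removed') {
                $adds += $member.id   # an added member - pass to the runbook
            }
        }
    }
    ```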
    When spinning up your Logic App, the first decision is the hosting plan. For this use case, Consumption is the way to go. It’s serverless, stateless, and perfect for low-volume, event-driven workflows—like ours, which only fires when Graph sends a webhook.

    Once deployed, you’ll land in the Azure Portal’s Logic App Designer. If you’ve used Power Automate before, this will feel familiar: a visual drag-and-drop interface for building workflows. Prefer code? You can switch to the JSON view, which is especially handy when working with Copilot to craft precise expressions and control flow logic.

    Whether you’re clicking or coding, the goal is the same: build a lightweight, reactive app that listens for Graph events and kicks off the right automation—without overcomplicating things.

    Here’s a common pitfall: don’t assume that a True condition always means “run the automation” and False means “don’t.” It’s not that binary.

    In our Logic App, the flow is designed to evaluate multiple conditions before ultimately reaching the step that triggers the HTTP webhook to the runbook. So while the final condition must evaluate to True to proceed, earlier branches might also return True or False depending on what you're filtering for - like whether the payload includes a validationToken, or if the user action was an add vs. a remove.
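    As one illustration of that first branch, the validation handshake can be handled with a condition on the trigger's query string, echoing the token back in plain text (a hedged WDL sketch - the action names are made up, and Graph expects the response within seconds):

    ```json
    {
      "Condition_Is_Validation": {
        "type": "If",
        "expression": "@not(empty(triggerOutputs()?['queries']?['validationToken']))",
        "actions": {
          "Echo_Validation_Token": {
            "type": "Response",
            "inputs": {
              "statusCode": 200,
              "headers": { "Content-Type": "text/plain" },
              "body": "@triggerOutputs()?['queries']?['validationToken']"
            }
          }
        }
      }
    }
    ```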

    In the upcoming deployment guide, I’ll include the full JSON view of the Logic App so you can see exactly how the expressions are structured. It’s not exactly human-readable prose—it’s written in Azure Logic Apps’ Workflow Definition Language (WDL), which takes some getting used to. But once you understand the flow, it becomes much easier to debug and extend.


    What’s Next?

    I’ll be publishing the full deployment guide and scripts to GitHub soon—both for my client and for the many others who’ve asked for this kind of automation. Hopefully, it saves you from the same toe-stubbing I ran into.

    Final Thoughts

    This project reminded me that automation isn’t just about writing scripts—it’s about designing resilient systems that handle real-world messiness. And sometimes, that means multiple trips to the hardware store. It took a few iterations to get things optimized.

    If you’re building something similar - or want to - stay tuned for more details and code snippets. Just don’t ask me to debug your webhook at 2 a.m.


    Prologue: The Prompt That Prompted Too Much


    I saved this part for the end because, well, it’s funny in hindsight. What I didn’t mention earlier was my actual first iteration. I sat down, opened GitHub Copilot, and figured I’d just “talk it out” to get the creative juices flowing. My prompt?

    “I would like to start a project to automate Enterprise Voice enablement for Teams Phone, based on security group membership. Please help with initial architecture concepts.”


    Sounds reasonable, right?

    I had Agentic mode enabled. Ten minutes later, I had 32 files across 26 directories—including .bat files and shell scripts to spin up a local Java app on my laptop. It was like asking for a sandwich recipe and getting a blueprint for a deli franchise.

    Lesson learned: prompt engineering is real. Ask a vague question, get a very enthusiastic answer. Ask a precise question, get something you can actually use.


    Deployment guidance coming in the next post later this week. Enjoy for now.

    Thursday, June 26, 2025

    Microsoft Purview - Protect Teams Voicemail in Exchange

    Securing sensitive content across platforms is no longer optional—it’s foundational. Microsoft Purview Information Protection offers a unified framework to classify, label, and protect data across Microsoft 365, including Exchange Online. One often-overlooked vector is voicemail: because Teams voicemail is stored in Exchange as an email attachment, it becomes subject to the same compliance and security policies as any other message.



    By leveraging Purview’s sensitivity labels and auto-labeling policies, organizations can enforce granular controls over voicemail content. This includes the ability to automatically apply encryption and restrict actions such as downloading, saving, or forwarding voicemail messages—ensuring that sensitive voice communications remain contained within the intended compliance boundary. These protections are enforced directly in Outlook clients, while Teams clients are intentionally excluded from accessing protected voicemail, further reducing the risk of data leakage. Message preview is allowed, so the voicemail can be reviewed and the transcription is available for preview, but the recorded voice cannot be downloaded, forwarded, or saved.

    Before:


    After:

    Sensitivity label applied - message encrypted - forward, download, copy/paste prevented


    Background:

    I put this post together for a few reasons. First, and most important, a client needed this solution guidance to align with current security, privacy, and compliance rules. Second, there have been changes in Purview and rights management, and I found the public documentation lacking, specifically for voicemail.

    Before we get started, I also want to point out how important it is for Teams admins to include their Exchange admin and compliance admin partners in the conversation. While Teams calling and voicemail are the target of our solution, M365 cloud voicemail "lives" in Exchange, and compliance and data protection rules are configured in Purview, so we need all parties involved to reach our goal.

    With regard to documentation around Exchange Rights Management and Purview sensitivity labels, much has changed, including the sender IP ranges used for cloud voicemail.

    Ingredients:

    To accomplish the outcome here we will use:

    • Microsoft Purview (available as part of E5 or stand-alone)
      • Sensitivity Label
      • Auto Label Policy - with 2 rules supporting both internal voicemail recordings and PSTN voicemail recording
    Exchange transport rules are not needed for this implementation, but it is recommended to review existing rules with your Exchange admins.

    Important note - because we are encrypting and securing the voicemail stored in Exchange, this also prevents the Teams client from presenting the voicemail and transcription, so listening to the voicemail or reviewing the transcript can only be performed from the Outlook client.




    Create a Sensitivity Label for protecting voicemail in Exchange.
    Once in the label-creation menu, give your label a name - something specific that identifies its purpose and keeps this auto-applied label differentiated from user-assigned labels used when sending emails or protecting other file content.

    Under scope, select emails.
    Under items, select control access - since we won't be applying content marking.

    Under access control, select assign permissions now, then proceed to the assign permissions selection at the bottom.


    Choose the users or groups you wish to protect (keep in mind any selected users or groups will need a Purview license assigned)

    And select the specific permissions to control. In my example, I used the Viewer permission, which prevents forwarding, download/save, and copy. You could also build custom permissions as needed.


    Once saved, you will see the users, groups, and permissions within the access control menu.
    Leave Auto-labeling off for now, as we will handle this with an auto-label policy.
    Skip Groups and Sites (this is an Exchange policy) and proceed to save the label.


    Build Auto Label policy to apply to voicemail objects in Exchange.

    Next, proceed to Auto-labeling policies and create a new policy. Auto-labeling is the method needed to protect voicemail, since this is an inbound message and we would not want users to have to select and apply a label manually.

    Give the auto-label policy a name, then on the label screen select the sensitivity label created in the previous step.

    Admin Units can be left at the default of Full Directory; we selected the applicable users and groups that match those targeted by the label itself.

    Select Exchange Email as the target


    Exchange Rules is where the magic happens - ensuring we label and secure voicemails, and not other unintended email. For those familiar with Exchange transport rules, this will be a familiar exercise.

    In this section, create 2 rules with 2 conditions in each rule. The rules are treated as "OR": if either rule is matched, the sensitivity label is applied. Within a rule, the conditions are treated as "AND" - which is why 2 rules are needed to support the "OR" condition.

    The first rule is designed to capture external/PSTN callers who leave voicemail for users; here we define the sender's address. PSTN calls, which present a phone number, result in voicemail from noreply@skype.voicemail.microsoft.com.

    The Content-Class=Voice-CA message header allows us to add a condition that ensures we only label voicemails.


    The second rule targets internal voicemail, left when the caller is a user inside the tenant. In this case, because the caller's identity is known to the tenant and Exchange, the sender is the caller rather than noreply@skype.voicemail.microsoft.com. Here we use the domains known to the tenant as the sender domain. (Your tenant domain names)
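    To summarize the two rules as configured (names are illustrative; conditions within a rule are AND'd, and the two rules are OR'd):

    ```
    Rule 1 - PSTN/external voicemail
      Sender address is:     noreply@skype.voicemail.microsoft.com
      AND header matches:    Content-Class contains "Voice-CA"

    Rule 2 - internal voicemail
      Sender domain is:      <your tenant domain(s)>
      AND header matches:    Content-Class contains "Voice-CA"
    ```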


    Often these rules, whether in Exchange transport or Purview sensitivity labels, are written with sender IP address conditions in an attempt to ensure accurate rule processing based on how voicemail enters Exchange. In my testing and implementation I could not find one complete set of sender IP addresses to cover voicemail filtering, so I chose to use sender/domain name with the header Content-Class=Voice-CA. Sensitivity auto-label rules require a sender or IP address condition to be met in order to add the header filter (i.e., you can't apply a "header contains" filter on its own).

    Next replace any existing labels, and apply encryption with a rights management owner. 


    Note on this final step: policy mode is set to simulation only. You can either let the policy run in simulation and enable it after 7 days, or activate it immediately once it is saved.
    DONE !!


    To report on auto-label success rates, in the Purview portal select Explorers -> Activity Explorer and filter on the newly created label to view a chart and report of all label activities.



    Some additional tips: review message headers to fine-tune or adjust your label criteria if desired. If an Exchange transport rule is already configured for rights protection (example below), that rule is processed before the message reaches Purview and will replace the Content-Class header with rpmsg.message. When this happens, the message may be rights protected, but the Purview label and rules are not applied because the Content-Class header no longer matches. Consider turning off Exchange transport rules for protection and encryption purposes and leveraging Purview instead, or update the auto-label policy with an additional rule that also looks for rpmsg.message.


    Exchange message trace can help ensure the label is being applied, or provide insight to why the message may not be evaluated.

    Just one last reminder - with this implementation, users will listen to (preview) their recording or review the transcript in Outlook. In Teams they will see that there was a missed call and a recorded voicemail, but will not be able to listen from the Teams client. If your organization needs to secure recorded voicemail, this guide will help.





    Copilot Studio and VS Code - Start using the Copilot Studio Extension

    Getting started with Copilot Studio is fast and approachable. Whether you begin by using the Describe interface to chat with the Studio Age...