Monday, July 7, 2025

Azure Automation for Shared Calling Enablement

Automating Enterprise Voice Enablement for Teams Shared Calling: A Journey in Iteration

This one’s a long read—because the work was iterative, the scope deceptively simple, and the edge cases... well, they were not shy.



The goal? Automate Enterprise Voice (EV) enablement for users in Microsoft Teams Shared Calling scenarios. Many organizations are adopting Shared Calling to provide basic PSTN access to all users while reserving DIDs and calling plans for high-volume users. It’s cost-effective, scalable, and flexible. But there’s a catch: even with group-based licensing and policy assignment in Entra ID, Teams doesn’t automatically flip the Enterprise Voice bit. That still requires PowerShell or a manual toggle in the Teams Admin Center.

So I built an automation to do just that.

Why This Matters

This model—what we affectionately call a “reverse migration” (credit to Matt Edlhuber)—lets organizations enable outbound and auto-attendant-based inbound calling for everyone. Then, based on usage or cost analysis, they can selectively assign DIDs and calling plans when porting timelines align. It’s a way to decouple enablement from carrier constraints.

The Setup

Picture this: you’ve just migrated hundreds of users to Shared Calling using PowerShell. High-fives all around. But now you need to ensure they’re EV-enabled. Manually? No thanks.

Here’s the stack I used:
  • Entra ID: Security group membership drives license and policy assignment.
  • Microsoft Graph API: Subscribes to group membership changes.
  • Azure Logic App: The orchestration layer.
  • Webhook Trigger: Fires on group updates.
  • Azure Automation Account: Hosts the PowerShell runbook.
  • Runbook: Validates license and applies EV enablement.

The Obvious Path

Iteration 1: Sure, I could’ve scheduled a daily PowerShell job or used Power Automate to trigger the runbook. Shoutout to Laure Vanderhauert for the excellent documentation that got me started.
But I wanted near-real-time enablement. Why wait a day when we can act in minutes?

Challenge #1: Detecting Deltas
The first hurdle: how do we detect only the new users added to the group? Most orgs already automate license and policy assignment, but EV enablement is often manual. I needed a way to isolate just the new additions.

I’d previously worked with Graph API subscriptions and Azure Event Grid in Call Record Insights, so I figured I could apply a similar pattern here.

Spoiler: Event Grid doesn’t give you the delta. It tells you a group changed, but not how. No user info in the payload = no go.

Enter Copilot(s)

This is where GitHub Copilot and M365 Copilot saved me hours. I’ll write more soon about using Claude Sonnet 4 in Agentic vs Ask mode in VS Code. TL;DR: Agentic mode is powerful, but Ask mode gave me the iterative control I needed to learn as I built.

Iteration 2: Build the Runbook First

I started with the end in mind: a runbook that accepts a user ID and group ID, validates licensing, and enables EV. I tested it locally in VS Code, then manually in the Azure Portal. It worked.
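
Here's a trimmed-down sketch of that runbook's core logic - a minimal version assuming the Automation account connects with a managed identity that's been granted the needed Graph and Teams permissions, and treating the MCOEV (Teams Phone) service plan as the licensing gate. The full runbook, with logging and error handling, will ship with the deployment guide.

param(
    [Parameter(Mandatory = $true)][string]$UserId,
    [Parameter(Mandatory = $true)][string]$GroupId
)

# Authenticate with the Automation account's managed identity
Connect-MgGraph -Identity
Connect-MicrosoftTeams -Identity

# Validate the user still holds a Teams Phone service plan (MCOEV)
$user = Get-MgUser -UserId $UserId -Property UserPrincipalName
$hasTeamsPhone = Get-MgUserLicenseDetail -UserId $UserId |
    Where-Object { $_.ServicePlans.ServicePlanName -contains "MCOEV" }

if ($hasTeamsPhone) {
    # Flip the Enterprise Voice bit
    Set-CsPhoneNumberAssignment -Identity $user.UserPrincipalName -EnterpriseVoiceEnabled $true
    Write-Output "Enterprise Voice enabled for $($user.UserPrincipalName)"
}
else {
    Write-Output "Skipped $($user.UserPrincipalName): no Teams Phone service plan found"
}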

Then life happened. I paused.

Iteration 3: Logic App + Graph Subscription

Back at it, I wired up the Logic App to the Graph subscription. It worked—until it didn’t.

Challenge #2: Add ≠ Remove
Turns out, Graph fires on any group membership change. Add or remove. My Logic App didn’t discriminate, so it happily re-enabled users who had just been removed. Oops.

Fix: I added logic to filter for additions only. Most orgs remove licenses and policies when users leave the group, so I focused on the “add” path.

Challenge #3: Bulk Adds
What happens when multiple users are added at once? Is the payload an array? Do we get one notification per user? I had to build logic to handle both cases.
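
Here's a PowerShell-flavored sketch of the filtering the Logic App expresses in its conditions. It assumes the shape Graph documents for group-membership notifications, where changed members arrive in a members@delta collection and removals carry an @removed annotation; Invoke-EvRunbook is a hypothetical wrapper for the webhook call to the runbook:

# $payload is the parsed JSON body of the Graph notification
foreach ($notification in $payload.value) {

    # Keep only additions; removed members carry an "@removed" annotation
    $added = @($notification.resourceData.'members@delta' |
        Where-Object { -not $_.'@removed' })

    # Handle one user or many - bulk adds arrive as an array
    foreach ($member in $added) {
        Invoke-EvRunbook -UserId $member.id   # hypothetical webhook wrapper
    }
}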

Challenge #4: The Subscription That Multiplies
When testing your Graph subscription and Logic App flow, it’s surprisingly easy to accidentally create multiple subscriptions. And when you do? Each one will happily fire off its own webhook, triggering your Logic App and runbook multiple times.


I’ll go deeper into subscription setup in the next section, but this one deserves a spotlight.
Here’s the key:
  • Make sure you only have one active subscription.
  • Only monitor the resource: /groups/{group-id}/members
That last part—members—is critical. If you subscribe to just /groups/{group-id}, you’ll get notified on any group change (like metadata updates), not just membership changes. That’s a fast track to unintended runbook executions and potential chaos.
So, before you hit “Deploy,” double-check:
  • You’re not stacking subscriptions.
  • You’re watching the right resource.
  • You’re not about to create a webhook-triggered infinite loop.

Trust me, your future self will thank you.
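
A quick way to keep yourself honest while testing: list the active subscriptions before creating a new one, and prune any strays. A minimal sketch, reusing the $accessToken pattern from the sample later in this post:

# List all active Graph subscriptions for this app
$subs = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/subscriptions" `
    -Headers @{ Authorization = "Bearer $accessToken" }

# Find duplicates watching the same group membership resource
$groupSubs = @($subs.value | Where-Object { $_.resource -like "*members*" })

# Keep one, delete the rest
$groupSubs | Select-Object -Skip 1 | ForEach-Object {
    Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/subscriptions/$($_.id)" `
        -Headers @{ Authorization = "Bearer $accessToken" } -Method DELETE
}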

The Build: Where the Magic Happens

Let’s talk about the build. The real magic lies in the Graph API subscription and the Azure Logic App with a webhook trigger. But first, let’s set the scene.


Graph Subscription: Your Digital Bouncer

Imagine you’re the bouncer at Club Entra. You don’t want to stand at the door all night checking who’s coming and going from the VIP group (say, “Teams Voice Users”). So you hire Microsoft Graph to do it for you.

A Graph API subscription is your way of saying:

“Hey Graph, tap me on the shoulder whenever someone joins or leaves this group.”

Here’s what that looks like in practice:

POST https://graph.microsoft.com/v1.0/subscriptions
{
  "changeType": "updated",
  "notificationUrl": "https://yourlogicapp.azurewebsites.net/api/notify",
  "resource": "/groups/{group-id}/members",
  "expirationDateTime": "2025-07-07T11:00:00Z",
  "clientState": "secretSauce123"
}

What’s Going On Here?

  • changeType: "updated" — You care about membership changes.
  • resource: The Entra ID group you’re watching.
  • notificationUrl: Where Graph sends the “Yo, something changed!” message.
  • clientState: A secret handshake to verify the message is legit.
Graph will first validate your notificationUrl to make sure it’s not a prank. Once that handshake is done, you’re officially subscribed.

When someone joins or leaves the group, Graph sends a POST to your notificationUrl with a payload like this:
{
  "value": [
    {
      "subscriptionId": "...",
      "changeType": "updated",
      "resource": "groups/{group-id}/members",
      "resourceData": {
        "id": "user-id"
      }
    }
  ]
}
It’s like getting a text that says, “Someone just walked into the VIP room,” and then checking the security cam to see who it was.

Azure Logic App: Your Always-On Concierge

Your Logic App is the concierge that handles these notifications:
  • Trigger: HTTP request from Graph hits your Logic App.
  • Parse: Extract the user-id from the payload.
  • Lookup: Call Graph to get full user details (/users/{user-id}).
  • Action: Trigger an Azure Automation runbook to enable Enterprise Voice.
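
That lookup step is a single Graph call. Conceptually, it's nothing more than this (sketched with a raw bearer token for brevity; the Logic App performs the same GET through its HTTP action):

# Resolve the id from the notification into a full user record
$user = Invoke-RestMethod `
    -Uri "https://graph.microsoft.com/v1.0/users/$userId" `
    -Headers @{ Authorization = "Bearer $accessToken" }

$user.userPrincipalName   # handed to the runbook for EV enablement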

Flow Summary

Here’s the full flow, start to finish:
  • Entra ID Group Membership Changes
    • A user is added to or removed from a group like “Teams Voice Users.”
  • Graph API Subscription Detects the Change
    • You’ve subscribed to /groups/{group-id}/members with changeType: "updated".
  • Graph Sends a Notification
    • A POST hits your Logic App’s HTTP trigger with metadata like resourceData.id.
  • Logic App is Triggered
    • Validates clientState (optional but smart).
    • Extracts the user-id.
    • Calls Graph to get full user details.
  • Triggers the runbook to take action (enable EV, log, alert, etc.).

Note: Logic Apps don’t poll Entra ID. They rely on Graph’s webhook notifications. The subscription is the middleman that makes this reactive and efficient.

Sample Code: Creating the Subscription

Here’s a generic PowerShell snippet to create the subscription:
# Step 0: Auth setup
$tenantId = "<your-tenant-id>"
$clientId = "<your-client-id>"
$clientSecret = "<your-client-secret>"
$scope = "https://graph.microsoft.com/.default"

# Get token
$body = @{
    grant_type    = "client_credentials"
    client_id     = $clientId
    client_secret = $clientSecret
    scope         = $scope
}

$tokenResponse = Invoke-RestMethod -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token" -Method POST -Body $body
$accessToken = $tokenResponse.access_token

# Step 1: Create the subscription
$subscriptionBody = @{
    changeType          = "updated"
    notificationUrl     = "https://yourlogicapp.azurewebsites.net/api/notify"
    resource            = "/groups/{group-id}/members"
    expirationDateTime  = (Get-Date).ToUniversalTime().AddHours(1).ToString("yyyy-MM-ddTHH:mm:ssZ")  # Graph expects a UTC timestamp
    clientState         = "secretSauce123"
} | ConvertTo-Json -Depth 3

$response = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/subscriptions" `
    -Headers @{ Authorization = "Bearer $accessToken" } `
    -Method POST `
    -Body $subscriptionBody `
    -ContentType "application/json"

$response


In the full deployment guide, I’ll include an additional runbook designed to run independently on a scheduled basis—separate from the Logic App trigger. This daily run ensures that the Graph subscription remains active and properly connected to the Logic App. It’s a critical step, as the subscription must be able to communicate with the Logic App endpoint to deliver notifications reliably.
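
The renewal itself is just a PATCH against the subscription with a fresh expiration - a minimal sketch, assuming $subscriptionId was captured from the create response and $accessToken is a valid app token:

# Extend the subscription before it lapses
$renewBody = @{
    expirationDateTime = (Get-Date).ToUniversalTime().AddDays(1).ToString("yyyy-MM-ddTHH:mm:ssZ")
} | ConvertTo-Json

Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/subscriptions/$subscriptionId" `
    -Headers @{ Authorization = "Bearer $accessToken" } `
    -Method PATCH `
    -Body $renewBody `
    -ContentType "application/json"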

Permissions Matter

To make this work, your Logic App must be exposed as an Enterprise Application so you can assign the right API permissions—namely User.Read.All and Group.ReadWrite.All. I’ll cover this in more detail in the deployment guide.

The Logic App: Lightweight, Serverless, and Smarter Than It Looks

If you’ve worked with Power Automate (formerly known as Flow), Azure Logic Apps will feel familiar. Think of them as the grown-up, serverless cousin—deployed under a consumption plan, stateless, and built to handle logic flows with minimal overhead.

In our case, the Logic App is triggered by an HTTP POST from the Microsoft Graph subscription. It’s the always-on listener that springs into action when someone joins (or leaves) our Entra ID group.

Despite being lightweight, Logic Apps are surprisingly robust. They’re great at making decisions, branching logic, and calling downstream services—like our Azure Automation runbook.

Here’s what we needed our Logic App to handle:

  1. Respond to Graph’s Token Validation
    • When you first create a Graph subscription, Microsoft sends a validation request to your notificationUrl. Your Logic App needs to recognize this and respond with the validationToken to complete the handshake. No token, no subscription.
  2. Handle Membership Deltas (Adds and Removes)
    • Graph sends a notification whenever group membership changes. That could mean one user or several. Your Logic App needs to:
      • Iterate through the payload (which might be a single user or an array).
      • Identify each user’s ID.
      • Decide what to do next.
  3. Ignore Removals, Focus on Adds
    • We don’t need to trigger the runbook when a user is removed from the group. Most orgs handle license and policy cleanup separately, and we’re not trying to disable Enterprise Voice here—just enable it.
    • So we added logic to:
      • Filter out removes.
      • Only process adds.
This keeps the automation focused and avoids unnecessary runbook executions.
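
For item 1, the contract is simple: Graph appends a validationToken query parameter to its probe and expects it echoed back as text/plain with an HTTP 200 within a few seconds. In PowerShell-flavored pseudologic - $request and $response here are hypothetical stand-ins for the Logic App's trigger output and Response action:

# Graph's handshake probe: echo the token back, plain text, HTTP 200
if ($request.Query.validationToken) {
    $response.StatusCode  = 200
    $response.ContentType = "text/plain"
    $response.Body        = $request.Query.validationToken
    return   # handshake complete; no runbook involvement
}

# Otherwise this is a real change notification - fall through to the
# clientState check and the add/remove filtering described above.
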
When spinning up your Logic App, the first decision is the hosting plan. For this use case, Consumption is the way to go. It’s serverless, stateless, and perfect for low-volume, event-driven workflows—like ours, which only fires when Graph sends a webhook.

Once deployed, you’ll land in the Azure Portal’s Logic App Designer. If you’ve used Power Automate before, this will feel familiar: a visual drag-and-drop interface for building workflows. Prefer code? You can switch to the JSON view, which is especially handy when working with Copilot to craft precise expressions and control flow logic.

Whether you’re clicking or coding, the goal is the same: build a lightweight, reactive app that listens for Graph events and kicks off the right automation—without overcomplicating things.

Here’s a common pitfall: don’t assume that a True condition always means “run the automation” and False means “don’t.” It’s not that binary.

In our Logic App, the flow is designed to evaluate multiple conditions before ultimately reaching the step that triggers the HTTP webhook to the runbook. So while the final condition must evaluate to True to proceed, earlier branches might also return True or False depending on what you're filtering for - like whether the payload includes a validationToken, or if the user action was an add vs. a remove.

In the upcoming deployment guide, I’ll include the full JSON view of the Logic App so you can see exactly how the expressions are structured. It’s not exactly human-readable prose—it’s written in Azure Logic Apps’ Workflow Definition Language (WDL), which takes some getting used to. But once you understand the flow, it becomes much easier to debug and extend.


What’s Next?

I’ll be publishing the full deployment guide and scripts to GitHub soon—both for my client and for the many others who’ve asked for this kind of automation. Hopefully, it saves you from the same toe-stubbing I ran into.

Final Thoughts

This project reminded me that automation isn’t just about writing scripts—it’s about designing resilient systems that handle real-world messiness. And sometimes, that means multiple trips to the hardware store. It took a few iterations to get things optimized.

If you’re building something similar - or want to - stay tuned for more details and code snippets. Just don’t ask me to debug your webhook at 2 a.m.


Epilogue: The Prompt That Prompted Too Much


I saved this part for the end because, well, it’s funny in hindsight. What I didn’t mention earlier was my actual first iteration. I sat down, opened GitHub Copilot, and figured I’d just “talk it out” to get the creative juices flowing. My prompt?

“I would like to start a project to automate Enterprise Voice enablement for Teams Phone, based on security group membership. Please help with initial architecture concepts.”


Sounds reasonable, right?

I had Agentic mode enabled. Ten minutes later, I had 32 files across 26 directories—including .bat files and shell scripts to spin up a local Java app on my laptop. It was like asking for a sandwich recipe and getting a blueprint for a deli franchise.

Lesson learned: prompt engineering is real. Ask a vague question, get a very enthusiastic answer. Ask a precise question, get something you can actually use.


Deployment guidance coming in the next post later this week. Enjoy for now.

Thursday, June 26, 2025

Microsoft Purview - Protect Teams Voicemail in Exchange

Securing sensitive content across platforms is no longer optional—it’s foundational. Microsoft Purview Information Protection offers a unified framework to classify, label, and protect data across Microsoft 365, including Exchange Online. One often-overlooked vector is voicemail: because Teams voicemail is stored in Exchange as an email attachment, it becomes subject to the same compliance and security policies as any other message.



By leveraging Purview’s sensitivity labels and auto-labeling policies, organizations can enforce granular controls over voicemail content. This includes the ability to automatically apply encryption and restrict actions such as downloading, saving, or forwarding voicemail messages—ensuring that sensitive voice communications remain contained within the intended compliance boundary. These protections are enforced directly in Outlook clients, while Teams clients are intentionally excluded from accessing protected voicemail, further reducing the risk of data leakage. Message preview is still allowed, so the voicemail and its transcription can be reviewed, but the recorded audio cannot be downloaded, forwarded, or saved.

Before:


After:

Sensitivity label applied - message encrypted - forward, download, copy/paste prevented


Background:

I put this post together for a few reasons. First, and most important - a client needed this solution guidance to align with current security, privacy and compliance rules. Second - there have been changes in Purview and rights management, and I found the public documentation lacking, specifically around voicemail.

Before we get started, I also wanted to point out how important it is for Teams Admins to include their Exchange Admin and Compliance Admin partners in the conversation. While Teams calling and voicemail are the target of our solution - M365 cloud voicemail "lives" in Exchange, and compliance and data protection rules are configured in Purview - so we need all parties involved to reach our goal.

With regard to documentation around Exchange Rights Management and Purview sensitivity labels, much has changed, including the sender IP ranges used for cloud voicemail.

Ingredients:

To accomplish the outcome here we will use:

  • Microsoft Purview (available as part of E5 or stand-alone)
    • Sensitivity Label
    • Auto Label Policy - with 2 rules supporting both internal voicemail recordings and PSTN voicemail recordings
Exchange transport rules are not needed for this implementation, but it's worth reviewing any existing ones with your Exchange Admins.

Important note - as we are encrypting and securing the voicemail stored in Exchange, this also prevents the Teams client from presenting the voicemail and transcription so that listening to the voicemail, or reviewing the transcript can only be performed from the Outlook client. 




Create a Sensitivity Label for protecting voicemail in Exchange.
Once in the menu for label creation, give your label a name, something specific that helps you identify the purpose and keeps this auto label differentiated from user-assigned labels when sending emails or protecting other file content.

Under scope select emails
Under items select control access - since we won't be applying content marking.

Under access control select assign permissions now, and then proceed to assign permissions selection at the bottom.


Choose the users or groups you wish to protect (keep in mind any selected users or groups will need a Purview license assigned)

And select the specific permissions to control - in my example here, I used the viewer permission which prevents forwarding, download/save, and copy. You could also build custom permissions as needed.


Once saved - you will see users, groups and permissions within the access control menu.
Leave Auto-Labeling off for now - we will handle that with an auto-label policy.
Skip Groups and Sites (this is an Exchange-only policy) and proceed to save the label.


Build Auto Label policy to apply to voicemail objects in Exchange.

Next proceed to Auto-labeling policies and create a new policy. Auto-labeling is the method needed to protect voicemail since this is an inbound message, and we would not want users to have to select and apply the label themselves.

Give the auto-label policy a name, then on the label screen select the sensitivity label created in the previous step.

Admin Units can be left at the default of Full Directory; we selected the applicable users and groups - the same ones targeted by the label itself.

Select Exchange Email as the target


Exchange Rules is where the magic happens, to ensure we are labeling and securing voicemails, and not other unintended email. For those familiar with Exchange transport rules, this will be a familiar exercise.

In this section, create 2 rules with 2 conditions in each rule. The rules are treated as "OR" so if either rule is matched the sensitivity label is applied. Within a rule, the conditions to be matched are treated as "AND" - so 2 rules are needed to support the "OR" condition.

The first rule is designed to capture external/PSTN callers who record voicemail for users; here we define the sender's address. PSTN calls, which present a phone number, result in voicemail from noreply@skype.voicemail.microsoft.com.

Content-Class=Voice-CA allows us to add a message header condition to ensure we only label voicemails.


The second rule is targeted to label internal voicemail, when the call comes from a user inside the tenant. In this example, because the caller identity is known to the tenant and Exchange, the sender is the caller and not noreply@skype.voicemail.microsoft.com. Here we use the domains known to the tenant as the sender domain. (Your tenant domain names)


Often these rules, whether in Exchange transport or Purview sensitivity labels, are written with sender IP address conditions in an attempt to ensure accurate rule processing based on how voicemail enters Exchange. In my testing and implementation I could not find one complete set of sender IP addresses covering voicemail, so I chose to use sender/domain name with the header Content-Class=Voice-CA. Sensitivity auto-label rules require a sender or IP address condition to be met in order to add the header filter (i.e., you can't apply a "header contains" filter on its own).
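
If you prefer scripting to the portal, the same policy and rules can be sketched in Security & Compliance PowerShell. A rough sketch - the label name and domain below are placeholders, and you should verify the predicate parameters against your module version before relying on this:

# Connect to Security & Compliance PowerShell
Connect-IPPSSession

# Policy targeting Exchange email with our label, starting in simulation mode
New-AutoSensitivityLabelPolicy -Name "Protect Teams Voicemail" `
    -ApplySensitivityLabel "Voicemail-Protect" `
    -ExchangeLocation All -Mode TestWithoutNotifications

# Rule 1: PSTN voicemail (sender is the cloud voicemail service)
New-AutoSensitivityLabelRule -Policy "Protect Teams Voicemail" -Name "PSTN Voicemail" `
    -FromAddressMatchesPatterns "noreply@skype.voicemail.microsoft.com" `
    -HeaderMatchesPatterns @{ "Content-Class" = "Voice-CA" } -Workload Exchange

# Rule 2: internal voicemail (sender is the caller, from your own domains)
New-AutoSensitivityLabelRule -Policy "Protect Teams Voicemail" -Name "Internal Voicemail" `
    -SenderDomainIs "contoso.com" `
    -HeaderMatchesPatterns @{ "Content-Class" = "Voice-CA" } -Workload Exchange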

Next replace any existing labels, and apply encryption with a rights management owner. 


Note on this final step - policy mode is set to simulation only. You can let the policy run in simulation for 7 days and then enable it, or activate it immediately once the policy is saved.
DONE !!


To report on auto-label success rates, in the Purview portal, select Explorers -> Activity Explorer and filter for the newly created label to view a chart and report of all label activities.



Some additional tips: review message headers to fine-tune or adjust your label criteria if desired. If Exchange transport rules are already configured for rights protection (example below), that rule is processed before the message gets to Purview and will replace the Content-Class header with rpmsg.message. When this happens, the message may be rights protected, but the Purview label and rules are not applied because the Content-Class header no longer matches. Consider turning off Exchange transport rules for protection and encryption purposes and leveraging Purview instead, or update the auto-label policy to include a rule that also looks for rpmsg.message.


Exchange message trace can help ensure the label is being applied, or provide insight to why the message may not be evaluated.

Just one last reminder - with this implementation, users will listen to (preview) their recording or review the transcript in Outlook. In Teams they will still see that there was a missed call with a recorded voicemail, but they will not be able to listen from the Teams client ... if your organization needs to secure recorded voicemail - this guide will help.





Saturday, April 12, 2025

Navigating the Rollout: Best Practices for Conditional Access and Device Code Flow

The rollout of Microsoft-managed policies to block Device Code Flow in conditional access will impact remote management and login processes for Teams devices, common area phones, and Teams Rooms devices. As the default policy will be set to block device code flow, administrators need to adopt best practices to leverage conditional access effectively. This blog post outlines the key impacts of this change and provides practical recommendations for allowing device code flow to ensure seamless and secure remote management.

Starting in February 2025 and continuing through May, Microsoft is implementing the block on device code flow (DCF) authentication to enhance security and protect tenants against potential threats. The new Microsoft-managed policy aims to secure accounts using DCF authentication by initially rolling out in report-only mode, allowing administrators to review the impact before enforcement. Administrators have at least 45 days to evaluate and configure the policies before they are automatically moved to the "On" state.

The policy changes are particularly important for Android-based shared Teams devices, such as Microsoft Teams Rooms on Android, IP Phones, and Panels. Without creating exclusion lists for these devices, administrators will lose the ability to remotely sign in and manage them after sign-out, as they will not be able to re-authenticate with DCF.

Take a moment to review the blog post covering this announcement and bookmark for future updates:

https://techcommunity.microsoft.com/blog/microsoftteamsblog/policy-changes-for-microsoft-teams-devices-using-device-code-flow-authentication/4399337

While we acknowledge the possibility of exclusions, additional conditional access rules can enable remote logins through device code flow. However, this approach must align with enhanced security protocols aimed at restricting device code flow. This includes measures like security group membership, trusted location verification or limiting device code access exclusively to administrators overseeing common areas, such as rooms and shared phone devices.

UPDATE - also take a look at Daryl's blog here for all of the device management updates that may impact you

https://darylhunter.me/blog/2025/04/upcoming-teams-devices-updates-spring-summer-2025.html

Options:

  • Trusted Locations: Permit device code flow only when initiated from known IP ranges (e.g., office subnets or VPN IPs).
  • Named Locations with MFA: Require MFA for named locations if allowing device code flow for users with devices outside of your main perimeter.
  • User/Group Scope: Limit the exclusion policy to only specific device accounts used for Teams devices.

Additional monitoring and reporting of DCF use:

  • Track usage of device code flow.
  • Set up alerts for unexpected geolocations, time-of-day logins, or unusual IPs.
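
For the tracking piece, sign-in events expose the protocol used. A hedged sketch with Graph PowerShell reporting, assuming the authenticationProtocol filter is supported in your tenant:

Connect-MgGraph -Scopes "AuditLog.Read.All"

# Recent sign-ins that used device code flow
Get-MgAuditLogSignIn -Filter "authenticationProtocol eq 'deviceCode'" -Top 50 |
    Select-Object UserPrincipalName, AppDisplayName, IPAddress, CreatedDateTime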

The default policy:

Initially the Microsoft managed policy will be implemented in report-only mode.

At the time of this post, the Microsoft-managed policy had not been deployed to this tenant, so the policy shown here was created manually.

The policy may look something like this:


Note the policy is applied to all users and the grant control is set to block.

In report-only mode, we can review sign-in logs, and select conditional access report-only to confirm this policy would block the DCF authentication flow. 

Authentication Redirect is used for Teams Room Device QR Code Join - so this may also be blocked by default.

If this policy is moved from report-only to enabled:

The device screen will still present a remote login prompt:


When navigating to authentication broker remote login, the remote authentication attempt will fail (an account can still log in directly from the device - but this may be difficult for admins remotely managing devices, or for local resources who would need to navigate a smaller device screen or a non-touchscreen device)



Remember the Microsoft managed policy default settings block DCF for all users with no excluded accounts or devices.

Modify the Policy:

Let's modify the policy to require a Compliant Device and allow from trusted location(s).

Create a named location and mark it as trusted, then exclude that trusted named location from the block-DCF policy - if your organization has not leveraged named locations before, the guidance can be found HERE

Navigate to the Device Code Flow policy - and exclude the named trusted location.

Additionally you can require only compliant devices. There are a few ways to accomplish this. One option is to build an additional exclusion device filter applied to the condition.


When configuring Conditional Access policies in Microsoft Entra ID (formerly Azure AD), the behavior of exclusions depends on how the conditions are set up. If you apply exclusions for both named trusted locations and devices marked as compliant, the exclusion will take place if either condition is met. This means that if a sign-in attempt comes from a named trusted location or from a device marked as compliant, the policy will exclude that attempt from the restrictions.

An alternate, more secure method is applying a grant control, requiring a compliant device.


Note - when selecting "Require device to be marked as compliant" the grant control changes from Block Access to 1 (or more) controls selected.

Requiring device compliance as a grant control is generally considered more secure because it enforces compliance checks at the point of access, ensuring that only devices meeting the organization's security standards can access resources. You may apply this grant control to all cloud applications within a specific policy, not nested within the DCF exclusion. This approach reduces the risk of unauthorized access and provides a higher level of security compared to excluding devices marked as compliant, which may leave gaps if not carefully managed.
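
To make that concrete, here is a rough Graph PowerShell sketch of a DCF policy built this way: report-only, targeting the device code flow authentication flows condition, excluding a trusted named location, and granting access only to compliant devices. The named location id is a placeholder, and you should validate the payload against your SDK version:

Connect-MgGraph -Scopes "Policy.ReadWrite.ConditionalAccess"

$policy = @{
    displayName = "Block DCF - allow trusted locations + compliant devices"
    state       = "enabledForReportingButNotEnforced"   # report-only first
    conditions  = @{
        applications        = @{ includeApplications = @("All") }
        users               = @{ includeUsers = @("All") }
        authenticationFlows = @{ transferMethods = "deviceCodeFlow" }
        locations           = @{
            includeLocations = @("All")
            excludeLocations = @("<trusted-named-location-id>")
        }
    }
    grantControls = @{
        operator        = "OR"
        builtInControls = @("compliantDevice")
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy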

Result:


It's important to note that while excluding trusted named locations is one way to permit Device Code Flow (DCF), alternative approaches may offer a stronger security posture. For example, instead of relying solely on location-based exclusions, consider limiting DCF access to only shared device accounts (such as Teams Rooms and common area phones) by using a dedicated security group. This ensures DCF is tightly scoped to specific, low-privilege identities used solely for device provisioning and management.


Closing Thoughts:

As Microsoft continues to tighten the security posture of identity and access management through managed policies like the DCF block, it’s essential for organizations to strike the right balance between usability and protection. While trusted named locations offer a straightforward method for allowing device code flow, more robust alternatives—such as scoping access to dedicated security groups for Teams Room and shared device accounts—provide greater assurance against misuse. By leveraging Conditional Access, enforcing least privilege, and monitoring sign-in activity, organizations can maintain a strong security posture without disrupting the legitimate provisioning and management of Teams devices.

Sunday, December 8, 2024

Passwordless Sign-in with MFA and Microsoft Authenticator

I wanted to take a moment to chat about something that's been on my mind lately: Microsoft's Secure Future Initiative (SFI). It's a pretty big deal around here, and I think it's worth diving into what it's all about and why it matters.

So, what exactly is SFI? In a nutshell, it's Microsoft's commitment to making sure our technology is as secure as possible. This isn't just a one-time thing; it's an ongoing effort to stay ahead of the ever-evolving threat landscape. The initiative is built on three core principles: Secure by design, secure by default, and secure operations.

As an architect working with customers and supporting demonstration and development environments with multiple user personas, MFA with passwordless sign-in optimizes and secures my activity and environment. It ensures that all user personas, whether for testing or demonstration purposes, are protected against unauthorized access. These principles don't just apply to demonstration and development environments; they extend to enterprise-wide enablement strategies, adding value by maintaining robust security standards and improving end-user interactions.


One of the coolest things about SFI is how it ties into our push for passwordless authentication combined with multi-factor authentication (MFA). If you haven't heard, passwordless authentication is a game-changer. It eliminates the need for traditional passwords, which are often the weakest link in security. Instead, we use things like biometrics or security keys, making it much harder for bad actors to get in.
Combining passwordless authentication with MFA adds an extra layer of security. Even if someone manages to get past one barrier, they've still got another to contend with. This approach not only boosts security but also makes life easier for users. No more juggling multiple passwords or dealing with the hassle of password resets - or even worse, storing passwords in less than optimal places.
While Authenticator can store passwords - better than keeping them in documents or cloud storage files - passwordless sign-in takes security to the next level. By using methods like biometrics (fingerprint or facial recognition) or security keys, it eliminates the need for traditional passwords altogether. This not only reduces the risk of password-related attacks but also simplifies the user experience. With passwordless sign-in, you can access your accounts quickly and securely.

Getting Started

Enabling passwordless authentication is different from enabling Microsoft Authenticator for multifactor authentication. Many tenants and users may already have MFA required (if not, I highly recommend it). The documentation and steps below walk through adding passwordless authentication on top of Authenticator-based MFA.


First we start in Entra ID under policies and authentication methods.
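
If you'd rather script this step than click through the portal, the same authentication methods policy can be updated with Graph PowerShell. A minimal sketch, assuming Microsoft Authenticator is being enabled for all users:

Connect-MgGraph -Scopes "Policy.ReadWrite.AuthenticationMethod"

$params = @{
    "@odata.type"  = "#microsoft.graph.microsoftAuthenticatorAuthenticationMethodConfiguration"
    state          = "enabled"
    includeTargets = @(
        @{ targetType = "group"; id = "all_users" }   # or a specific group object id
    )
}

Update-MgPolicyAuthenticationMethodPolicyAuthenticationMethodConfiguration `
    -AuthenticationMethodConfigurationId "MicrosoftAuthenticator" `
    -BodyParameter $params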




Note in these screen shots - application name and location are marked as Microsoft Managed - you have the option to set these to enabled as an additional control to ensure users approve the application and location they are accessing from.

User registration

Users register themselves for the passwordless authentication method of Microsoft Entra ID. Users who have already registered the Microsoft Authenticator app for multifactor authentication can skip to the next section, Enable phone sign-in.

It's important to note here that users may have already enabled the Authenticator app for MFA, which is important to do, but that doesn't complete the enablement of passwordless sign-in.

Guided registration with My Sign-ins

To register the Microsoft Authenticator app, follow these steps:

  1. Browse to https://aka.ms/mysecurityinfo.
  2. Sign in, then select Add method > Authenticator app > Add to add Microsoft Authenticator.
  3. Follow the instructions to install and configure the Microsoft Authenticator app on your device.
  4. Select Done to complete Microsoft Authenticator configuration.

Enable phone sign-in from your authenticator app

After users have registered themselves for the Microsoft Authenticator app, they need to enable phone sign-in:

  1. In Microsoft Authenticator, select the account registered.
  2. Select Enable phone sign-in.
  3. Follow the instructions in the app to finish registering the account for passwordless phone sign-in.


In our policy we administratively still allow password sign-in, but now the user can define passwordless sign-in as the default method.


Once enabled, the end users (or test users in our development environments) are no longer prompted for a password - or they can keep password as the default method and select the alternate passwordless option during the login process (as shown below)




I hope this post was helpful in walking through the different security and authentication practices. I plan to follow up with a future post discussing the use and enablement of Passkeys. Passkeys are a strong, phishing-resistant authentication method that completely replace the need for a password when logging into applications and websites. They are created and stored on a user's device, such as a smartphone or computer. Using a passkey is as easy as using your face, fingerprint, or device PIN. Passkeys are designed to be highly secure and user-friendly, making them the preferred way to sign in. Stay tuned.
