Thursday, September 5, 2024

Call Records Insights Update 2 - Export UPN from Entra ID and update Call Records

In the previous chapter of CRI, I discussed an option for inserting user UPNs into the 'Call Record Insights' query results. To further simplify the work of administrators, this post will explore how to automate the integration of user and UPN data from Entra ID. We'll cover how to automate the replacement of this table on a scheduled basis using Azure Automation (Runbooks) or PowerShell.


Recap

The previous posts on Teams Call Record Insights covered getting started and query updates to include additional data. Below are links to those chapters.

In the last session, we reviewed how to create and populate user UPNs in the CRI database, and then use a Kusto query join to return the UPN in the call records query.


The real takeaway from the last chapter—and this post—isn't just about adding UPNs to call records; it's about empowering your organization with complete ownership, management, and flexibility of your records database. Once implemented, this data is yours to control, allowing you to retain, modify, and adapt it as your needs evolve.

Previously, we created the user_UPN table through the ADX web interface, setting up a new table and importing users, GUIDs, and UPNs using the data import tool in ADX.

But what if you need to ensure this user UPN table stays up to date daily, reflecting changes as users are added or removed from your tenant?

Automation of table data

Given my familiarity with PowerShell and runbooks, and the added support from GitHub Copilot, this demonstration will focus on automating the process using PowerShell and Azure Automation (Runbooks). While this could also be achieved with C# or Python (I initially experimented with Python but ultimately chose PowerShell for the final solution), PowerShell’s robust capabilities made it the ideal choice for this task.

The Process

First, we'll connect to Entra ID and export the UPN, display name, and GUID into a CSV file. Then, we'll rename the existing user_UPN table to keep a backup without including it in future queries. After that, we'll create a new, blank user_UPN table and import the CSV data from Entra ID to populate it with updated user information. As in the previous session, I chose to export and populate UPN, GUID, and display name, but only the UPN and one of the other fields are required to match and join from the CRI table. Additionally, I've incorporated some error checking and output presentation for validation, which is helpful during setup but not necessary for the final runbook.

Step 1 - connect and export user data

# Define the CSV path with a date stamp
$dateStamp = Get-Date -Format "yyyyMMdd"
$csvPath = "[your directory path]\userExport_$dateStamp.csv"

# Ensure you're connected to Azure
Connect-AzAccount -Tenant "[your tenant ID]"

# Fetch user information
$users = Get-AzADUser | Select-Object DisplayName, Id, UserPrincipalName

# Export to CSV
$userCount = $users.Count
Write-Host -ForegroundColor Blue "Number of users: $userCount"
$users | Export-Csv -Path $csvPath -NoTypeInformation

# Display in a table if needed
$users | Format-Table -AutoSize

# Prompt the user to proceed to update the Kusto database
$proceed = Read-Host "CSV file generated at $csvPath. Do you want to proceed with updates to CRI Kusto Database (yes/no)"
if ($proceed -ne "yes") {
    Write-Host "Operation aborted by the user."
    exit
}

Note - in a runbook, the Write-Host output and the prompt to proceed should be omitted.

Step 2 - Import modules, libraries and set variables (Kusto libraries can be downloaded and locally referenced)
Reference THIS LINK for details on Kusto in PowerShell.

# Define the tenant ID, client ID, and client secret of the service principal
$tenantId = "[your tenant ID]"
$clientId = "[your client ID]"
$clientSecret = "[your client secret]"
$authority = "[your tenant domain name]" 

# Define the path to the Kusto client libraries
$kustoLibPath = "[your library path]\net6.0\"

# Load the Kusto client libraries in the correct order
Add-Type -Path "$kustoLibPath\Microsoft.Identity.Client.dll"
Add-Type -Path "$kustoLibPath\Kusto.Data.dll"

Note - ClientId, ClientSecret, and Authority are needed when not using a user login, or when running the script as a runbook where a service principal is required. This section of the documentation explains the variables needed for User, Application, or Azure CLI interaction with Kusto in PowerShell.

Step 3 - Create Kusto query with Kusto Custom String Builder and Client Request Properties

# Define Kusto connection string variables
$clusterUrl = "[your Azure Data Explorer cluster url]"
$databaseName = "CallRecordInsights"

# Create the Kusto connection string with the OAuth token if using service principal
$kcsb = [Kusto.Data.KustoConnectionStringBuilder]::new($clusterUrl, $databaseName)
$kcsb = $kcsb.WithAadApplicationKeyAuthentication($clientId, $clientSecret, $authority)

# Define client request properties
$crp = [Kusto.Data.Common.ClientRequestProperties]::new()
$crp.ClientRequestId = "UserUPNpsh.ExecuteQuery." + [Guid]::NewGuid().ToString()
$crp.SetOption([Kusto.Data.Common.ClientRequestProperties]::OptionServerTimeout, [TimeSpan]::FromSeconds(30))

Step 4a (run as user) or 4b (run as application)

Here is an example of a query to test connecting to and querying the manually created user_upn table and displaying the output to screen (this would not be needed or valuable in a runbook).

# Query and present the existing table to test
$queryProvider = [Kusto.Data.Net.Client.KustoClientFactory]::CreateCslQueryProvider($kcsb)
$query = "user_upn | take 10"
Write-Host "Executing query: '$query' to test access to Kusto with connection string: '$($kcsb.ToString())'"
$reader = $queryProvider.ExecuteQuery($query, $crp)
$dataTable = [Kusto.Cloud.Platform.Data.ExtendedDataReader]::ToDataSet($reader).Tables[0]
$dataView = New-Object System.Data.DataView($dataTable)
$dataView | Sort DisplayName | Format-Table -AutoSize 

Here are the steps to rename the existing table, create a new table, and ingest the previously created CSV file, ignoring the top header row (we don't want to ingest the column headers as data in the Kusto table). Also note: the Write-Host and proceed-prompt lines should be omitted in a runbook.

# Rename the existing user_upn table to user_upn_old to preserve the table
$renameQuery = ".rename table user_upn to user_upn_old"
Write-Host -ForegroundColor Blue "Executing rename query: '$renameQuery'"
# Execute the rename query
try {
    $queryProvider.ExecuteControlCommand($renameQuery, $crp)
    Write-Host "Table renamed successfully."
} catch {
    Write-Host "Error renaming table: $_"
}

# Prompt the user to proceed to update the Kusto database
$proceed = Read-Host "User_upn renamed. Do you want to proceed with updates to CRI Kusto Database (yes/no)"
if ($proceed -ne "yes") {
    Write-Host "Operation aborted by the user."
    exit
}
# Define the create table query to recreate user_upn table with new upn data
$createTableQuery = @"
.create table user_upn (
    DisplayName: string,
    Id: guid,
    UserPrincipalName: string
)
"@
Write-Host -ForegroundColor Blue "Executing create new table: '$createTableQuery'"
# Execute the create table query
try {
    $queryProvider.ExecuteControlCommand($createTableQuery, $crp)
    Write-Host "New table created successfully."
} catch {
    Write-Host "Error creating new table: $_"
}
# Ingest data into the new table
$rowCount = (Import-Csv -Path $csvPath | Measure-Object).Count
Write-Host -ForegroundColor Blue "Executing table data ingest with $rowCount records:"

# Construct the ingestion query to ingest the CSV file into the table, and remove the column headers from the table data
$csvContent = Get-Content -Path $csvPath -Raw
# Split on CRLF or LF so no stray carriage returns remain in the rows
$csvLines = $csvContent -split "`r?`n"
$csvLinesWithoutHeader = $csvLines | Select-Object -Skip 1 | Where-Object { $_ }
$csvContentWithoutHeader = $csvLinesWithoutHeader -join "`n"
$tableName = "user_upn"
$ingestQuery = @"
.ingest inline into table $tableName <|
$csvContentWithoutHeader
"@

# Execute the ingestion query
try {
    $queryProvider.ExecuteControlCommand($ingestQuery, $crp)
    Write-Host "CSV file ingested successfully into the table $tableName."
} catch {
    Write-Error "Failed to ingest CSV file: $_"
}
Write-Host -ForegroundColor Green "User UPN data updated successfully. Please check the Kusto database for the updated data."
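
As an optional sanity check (not part of the original script), the ingested row count can be compared against the CSV row count using the same query provider. A minimal sketch, reusing $queryProvider, $crp, and $rowCount from above:

# Verify the ingest by counting rows in the new table (optional validation step)
$verifyQuery = "user_upn | count"
$reader = $queryProvider.ExecuteQuery($verifyQuery, $crp)
if ($reader.Read()) {
    $ingestedCount = $reader.GetInt64(0)
    if ($ingestedCount -eq $rowCount) {
        Write-Host -ForegroundColor Green "Row counts match: $ingestedCount"
    } else {
        Write-Warning "Row count mismatch: CSV had $rowCount rows, table has $ingestedCount"
    }
}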

Azure Automation

Create Automation Account

Assign Roles and Perms for Automation Account
  1. Enable Managed Identity for the Azure Automation Account:

    • Navigate to your Azure Automation account in the Azure portal.
    • Under "Account Settings", select "Identity".
    • Enable the System-Assigned Managed Identity or add a User-Assigned Managed Identity.
  2. Assign Azure Roles:

    • Go to the "Access control (IAM)" section of your subscription.
    • Add a role assignment for the managed identity with the Contributor role at the subscription level. This ensures it has the necessary permissions to manage resources within the subscription.
    • Additionally, assign the Directory Readers role in Entra ID to the managed identity. This allows it to read user information.
  3. Grant Azure AD Permissions:

    • Navigate to the Azure AD section in the Azure portal.
    • Go to "App registrations" and find the managed identity.
    • Under "API permissions", add the Directory.Read.All permission.
    • Grant admin consent for the permission.
Or run a script to assign the permissions:
# Install the Microsoft Graph PowerShell module if not already installed
Install-Module Microsoft.Graph -Scope CurrentUser

# Connect to Microsoft Graph
Connect-MgGraph -Scopes "Application.ReadWrite.All", "AppRoleAssignment.ReadWrite.All", "Directory.Read.All"

# Find the managed identity
$DisplayNameOfMSI = "<Your-Managed-Identity-Name>"
$MSI = Get-MgServicePrincipal -Filter "displayName eq '$DisplayNameOfMSI'"

# Find the Microsoft Graph service principal
$GraphAppId = "00000003-0000-0000-c000-000000000000"
$GraphServicePrincipal = Get-MgServicePrincipal -Filter "appId eq '$GraphAppId'"

# Find the app role for Directory.Read.All
$PermissionName = "Directory.Read.All"
$AppRole = $GraphServicePrincipal.AppRoles | Where-Object {$_.Value -eq $PermissionName -and $_.AllowedMemberTypes -contains "Application"}

# Assign the app role to the managed identity
New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId $MSI.Id -PrincipalId $MSI.Id -ResourceId $GraphServicePrincipal.Id -AppRoleId $AppRole.Id
Let's build the automation account.







Runbooks can be environment specific. We can load the Kusto client DLLs from storage already created during the CRI deployment steps - from the NuGet file share in the CRI storage account. Optionally, you could create blob storage and upload the libraries from this link - https://www.nuget.org/packages/Microsoft.Azure.Kusto.Data/



# Define the path to the Kusto client libraries
$kustoLibPath = "[your library path]\net6.0\"

# Load the Kusto client libraries in the correct order
Add-Type -Path "$kustoLibPath\Microsoft.Identity.Client.dll"
Add-Type -Path "$kustoLibPath\Kusto.Data.dll"


These libraries are required for the script to construct the Kusto client query, used in this context:
$kcsb = [Kusto.Data.KustoConnectionStringBuilder]::new($clusterUrl, $databaseName)
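
A runbook sandbox has no pre-staged library path, so the DLLs must first be pulled down from storage. One approach (the storage account and container names below are placeholders, not values from the CRI deployment) is to download them into the sandbox's temp directory before the Add-Type calls:

# Download the Kusto client DLLs from blob storage into the runbook sandbox
$ctx = New-AzStorageContext -StorageAccountName "[your storage account]" -UseConnectedAccount
$kustoLibPath = Join-Path $env:TEMP "kustolibs"
New-Item -ItemType Directory -Path $kustoLibPath -Force | Out-Null
foreach ($dll in "Microsoft.Identity.Client.dll","Kusto.Data.dll") {
    Get-AzStorageBlobContent -Container "kustolibs" -Blob $dll -Destination $kustoLibPath -Context $ctx -Force | Out-Null
}
Add-Type -Path (Join-Path $kustoLibPath "Microsoft.Identity.Client.dll")
Add-Type -Path (Join-Path $kustoLibPath "Kusto.Data.dll")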

I also simplified the runbook, removing the dependency on writing and storing the user data in a CSV file, instead holding the data in memory within the script.

# Fetch user information
$users = Get-AzADUser | Select-Object DisplayName, Id, UserPrincipalName

# Shape the user data into objects for Kusto ingestion
$usersJson = $users | ForEach-Object {
    [PSCustomObject]@{
        DisplayName       = $_.DisplayName
        Id                = $_.Id
        UserPrincipalName = $_.UserPrincipalName
    }
}
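
With the data held in memory, the ingest command can be built directly from the objects rather than from a file. A minimal sketch, reusing the $queryProvider and $crp objects created earlier and assuming display names contain no commas:

# Build CSV-formatted rows in memory and ingest them inline (no file round-trip)
$rows = $usersJson | ForEach-Object {
    '{0},{1},{2}' -f $_.DisplayName, $_.Id, $_.UserPrincipalName
}
$ingestQuery = ".ingest inline into table user_upn <|`n" + ($rows -join "`n")
$queryProvider.ExecuteControlCommand($ingestQuery, $crp)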

In our query we now get UPNs in our call records, with an automated process to keep things up to date:



Next

I hope this post helps show the flexibility and specific ways to integrate additional data into call records with automation. You may not need this specific use case, but experiment with the data integration needs of your organization.

In our next chapter we will investigate visualizing Call Record Insights in Power BI, the value of CRI data retention in comparison to the Teams Call Quality Dashboard and the QER in Power BI, and ways to leverage CRI to investigate call issues and correlate call IDs between TAC and CRI.

Thanks
JB


Sunday, July 21, 2024

Call Record Insights Update - Add User Principal Name to Records

In my previous post, "Getting Started with Call Record Insights," I discussed the basics of working with Call Record Insights (CRI). Since then, I've been focusing on an important update: including the user UPN (User Principal Name) in each record. Currently, CRI provides the user display name and user ID (GUID) but doesn't support UPN yet. The CRI development team has assured me that UPN support is coming in a future release. However, I have a few clients who need a quick and easy solution now. In this post, I'll walk you through a few modifications that will enable you to surface the UPN for each record right away.


The Challenge:

A record query in Call Record Insights (CRI) only provides the UserDisplayName, Id (GUID), and roles such as Callee, Caller, and Organizer. However, many organizations require a common identifier for consistent reporting and analytics. The UserPrincipalName (UPN) is highly desired in call records to standardize identity references and streamline reporting outputs.



The Process:

First we need to gather the UPN data and create a table for CRI to map.

Start with a simple PowerShell to get users DisplayName, GUID and UserPrincipalName

# Ensure you're connected to Azure
Connect-AzAccount -Tenant "YourTenantID"

# Fetch user information
$users = Get-AzADUser | Select-Object DisplayName, Id, UserPrincipalName

# Export to CSV
$users | Export-Csv -Path "YourFileLocation\userExport.csv" -NoTypeInformation

# Display in a table if needed
$users | Format-Table -AutoSize

This is a one-time step to get started; later we will cover methods to keep the table up to date as new users are added to the tenant. In this example, I chose to export DisplayName, Id (GUID), and UserPrincipalName - only two of these three fields are required. The goal, later in our KQL steps, is to create a join matching either Id (GUID) or UserDisplayName between the new table and CRI to return the UPN.


Next, add this table to the Kusto database using Azure Data Explorer. There are options to do this through PowerShell as well, but in this guide we will leverage Azure Data Explorer (ADX).

Navigate to ADX and connect to the cluster. You can find the direct link (URI) to the CRI instance in the Azure portal: select the resource group, then the Azure Data Explorer instance, and copy the URI. Paste it into a new browser window, authenticate with the needed credentials, and select "Trust".



The next step is to add our CSV export data, including UserPrincipalName, as a new table in the existing CallRecords database. Select Get Data, as noted above, and then select Local File as the data source.


Expand the CallRecords database, and select New Table. Update the new table name to user_upn and in the right hand pane, drag/drop or browse for the CSV export generated from the PowerShell script. Click Next.


ADX shows a preview of the table creation from the CSV file. Note, toggle "First row is column header" to prevent the headers from inserting as rows in the table.

Click Finish - watch the table creation and record updates. When completed, you will find the new table.


The magic happens in our KQL query where we create a table join to match user IDs to UPNs and return the UPN in our output.

In this example we need to return UPN for caller and callee records. This query can also be updated to create a third join on organizer if needed.

In the Query window - the following KQL will select the desired fields, join the tables using Caller_UserId and Callee_UserId to the Id(GUID) field in user_upn and return the UPN for both callers and callees.

(Please modify the query below if table names, or column names differ in your deployment)

CallRecords
| sort by CallStartTime desc
| extend CallDuration = todatetime(CallEndTime) - todatetime(CallStartTime)
| project CallId, SessionId, StreamDirection, MediaLabel, CallStartTime, CallEndTime, CallDuration, Caller_UserDisplayName, Caller_UserId, Caller_PhoneId, Callee_UserDisplayName, Callee_PhoneId, Callee_UserId, Caller_UserAgentHeaderValue, Callee_UserAgentHeaderValue
| join kind=leftouter (user_upn | project Id, UserPrincipalName) on $left.Callee_UserId == $right.Id
| extend Callee_UPN = UserPrincipalName
| project-away Id, UserPrincipalName
| join kind=leftouter (user_upn | project Id, UserPrincipalName) on $left.Caller_UserId == $right.Id
| extend Caller_UPN = UserPrincipalName
| project-away Id, UserPrincipalName
| project CallId, SessionId, StreamDirection, MediaLabel, CallStartTime, CallEndTime, CallDuration, Caller_UserDisplayName, Caller_UPN, Caller_PhoneId, Callee_UserDisplayName, Callee_UPN, Callee_PhoneId,Caller_UserAgentHeaderValue, Callee_UserAgentHeaderValue

  • From the previous post, we used extend to calculate and create the call duration in the query results
  • The first join operation matches Callee_UserId from CallRecords with Id from user_upn and projects the UserPrincipalName into a new field called Callee_UPN.
  • The second join operation matches Caller_UserId from CallRecords with Id from user_upn and projects the UserPrincipalName into a new field called Caller_UPN.
  • The project-away operator is used after each join to remove the unnecessary Id and UserPrincipalName fields from the intermediate results.
  • The final project operator lists all the fields you want to include in the final output, including the new Callee_UPN and Caller_UPN fields.

The Outcome:

Comparing the basic Take 100 query noted above in the Challenge section, to the new output, the UPN fields are now present in the output.



In the next chapter, we will create an automation job to upkeep the user_upn table as new members are added to the tenant. Stay tuned.


Monday, July 15, 2024

Unleashing Creativity and Collaboration with Microsoft Whiteboard and Copilot

Whether you're brainstorming ideas, planning a project, or conducting a team meeting, having the right tools can make all the difference. Enter Microsoft Whiteboard and Copilot, a dynamic duo that enhances productivity and fosters innovation. In this blog post, we'll explore how these tools work together to create a seamless collaborative experience.


What is Microsoft Whiteboard?

Microsoft Whiteboard is a digital canvas that allows users to brainstorm, ideate, and collaborate in real-time. It's designed to be intuitive and easy to use, providing a range of tools that mimic traditional whiteboarding but with the added benefits of digital technology. You can draw, write, and add sticky notes, images, and diagrams to your whiteboard, making it a versatile tool for any collaborative task. For enhanced collaboration, integrate the whiteboard into Team chat or meetings, allowing participants to actively contribute, share ideas, and engage in real-time brainstorming sessions.

Introducing Copilot: Your AI Assistant

Copilot is an AI-powered assistant integrated within Microsoft's suite of productivity tools. It helps streamline tasks, offers suggestions, and automates routine processes, allowing you to focus on the creative and strategic aspects of your work. With Copilot, you can enhance your productivity and make the most of your collaborative sessions on Microsoft Whiteboard.

Welcome to Copilot in Whiteboard - Microsoft Support

Design Thinking - Retrospective Use Case

Recently our team was using a Design Thinking process. While Whiteboard has many retrospective templates, we chose the Rose, Bud and Thorn option, leveraging a framework which helps identify positives, opportunities, and challenges.





Consider multiple contributors collaboratively adding note cards over time, filling the whiteboard with numerous virtual 'sticky notes,' some of which overlap in concept. Now, it's time to categorize and summarize this information.




In steps Copilot in Whiteboard.

With a quick selection, we can remove duplicates, request additional ideas, categorize the topics, and summarize.



Similar to an 'Easy Button,' Copilot seamlessly integrates input and ideas from multiple collaborators, organizing and summarizing them directly within the Whiteboard app. This allows everyone to review the content in real-time, eliminating the need for separate discussion breaks for later review.

Conclusion

Combining Microsoft Whiteboard with Copilot creates a powerful environment for creativity and collaboration. By leveraging the strengths of both tools, you can enhance your productivity, streamline your workflows, and foster innovation within your team. Whether you're brainstorming new ideas, planning a project, or conducting a meeting, Microsoft Whiteboard and Copilot are here to help you succeed.

Get started with the template options in Whiteboard, and leverage Copilot to save time and simplify.


Thanks Don Donais, Mark DeRosia, Mike Rinner, and Scott Francis for navigating this exercise together.

Saturday, July 13, 2024

Teams SIP Gateway - Sign-In Options


Teams SIP Gateway: Bulk, Zero Touch, and Remote Login Options

Microsoft Teams has been revolutionizing the way we communicate and collaborate in the modern workplace. One of the key features that has recently seen significant updates is the Teams SIP Gateway, with a growing list of compatible devices and administration options designed to enhance the integration of legacy SIP devices with Microsoft Teams. In this blog post, we'll explore the latest updates to Teams SIP Gateway, focusing on bulk sign-in, Zero Touch sign-in, and remote login options.



What is Teams SIP Gateway?

The Teams SIP Gateway allows organizations to connect their existing Session Initiation Protocol (SIP) devices, such as desk phones, to Microsoft Teams. This integration ensures that users can continue using their familiar hardware while leveraging the powerful collaboration features of Teams, often extending the life of legacy SIP phones. Legacy SIP phones do not provide the rich experience of a native Teams phone, but they often meet the need for common area or shared-device use cases. For more detailed information, you can refer to the official Microsoft Teams SIP Gateway documentation.

SIP Gateway Compatible Devices

Bulk Sign-In

Bulk sign-in is an administrative feature that simplifies the process of registering multiple SIP devices with Teams. With bulk provisioning, administrators can now register and configure up to 100 devices per batch, with 3 concurrent batches, saving time and reducing the complexity associated with individual device setup.

  • Efficiency: Bulk provisioning automates the configuration process, allowing IT teams to quickly deploy a large fleet of SIP devices.
  • Consistency: Ensures consistent settings across all devices, reducing the risk of misconfiguration.
  • Scalability: Ideal for large organizations or those undergoing rapid expansion.

To take advantage of bulk provisioning, administrators can use configuration files or scripts to define the settings for multiple devices, streamlining the entire deployment process. More details on bulk provisioning can be found in the Microsoft Teams bulk provisioning documentation.

Zero Touch Common Area Sign-In

Zero Touch Common Area sign-in takes the hassle out of device setup, removing the need for on-site resources to capture the pairing code. This feature is particularly beneficial for remote or distributed administration teams. The portal can be accessed HERE - SIP Remote Login Portal

  • Ease of Use: Devices can be shipped directly to end users without requiring local IT intervention for setup.
    • Admin-Friendly: Users or local IT can simply plug in their devices, which can then be logged in remotely to retrieve and apply the necessary configuration settings.
  • Reduced Support: Minimizes the need for IT support during the initial setup phase, freeing up resources for other tasks.

Zero Touch sign-in allows an administration team to define the device region and hardware MAC address from a remote location and browser to obtain the device pairing code, without the need for local resources at the handset. When a device connects to the network, it contacts the provisioning server, downloads its configuration, and is ready to use within minutes. For a deeper dive, visit the Zero Touch Sign In Steps.


Remote Login

Remote login options enhance the flexibility and security of accessing SIP devices from different locations. Similar to Zero Touch sign-in, though not a bulk process, this option allows users and admins to remotely sign in Teams SIP-capable devices when a technician or user is present at the device. It is especially beneficial for analog devices with no physical screen. The portal can be accessed HERE - SIP Login

  • Security: Ensures secure authentication and authorization processes, protecting against unauthorized access.
  • Flexibility: Allows users to log in to any Teams compatible SIP device, from a browser.
    • Convenience: Users and admins can seamlessly transition between devices without needing to log in directly on the device.

With remote login, users can authenticate via their Teams credentials, ensuring a consistent and secure user experience across all their devices. For more information, check out the remote login documentation.



Conclusion

These options for Teams SIP Gateway and device administration are a game-changer for organizations looking to integrate legacy SIP devices with Microsoft Teams. Bulk provisioning, Zero Touch login, and remote login options provide the tools needed to simplify deployment, enhance user experience, and maintain security. These features are particularly valuable in today's dynamic work environment, where flexibility and efficiency are paramount.

By leveraging these flexible sign-in methods, organizations can ensure a smooth transition to modern communication solutions while maximizing the value of their existing hardware investments. Stay tuned for more updates and innovations from Microsoft Teams as they continue to redefine the future of workplace collaboration.

Thanks.

Tuesday, June 25, 2024

Teams Auto Attendant and Call Queue - Historical Reports Update

While many eagerly anticipate the general availability of the Queues App in Teams, a recent feature update is set to revolutionize the way organizations handle communication analytics. Authorized Users can now gain access to comprehensive historical reporting for Auto Attendants and Call Queues directly through Power BI. This update promises to empower teams with invaluable insights into call trends, performance metrics, and operational efficiencies. Read on as we explore how this enhancement can elevate your communication strategies and streamline your workflow, without compromising administrative boundaries.



For anyone interested in learning more about the Queues App - absolutely read more HERE 

In this post, I want to delve into the Voice App policy, a crucial component underpinning this functionality.

Have you ever wondered:

  • How can I view call queue history for my team without needing access to all Voice Applications across my tenant?
  • Is it possible to enable historical reporting for certain users in my tenant without granting them administrative roles (access to the Teams Admin Center)?
  • Can call supervisors access comprehensive historical reports without requiring elevated role assignments?

Microsoft recently updated the Voice Application Policies, allowing authorized users to access historical reporting specifically for designated Auto Attendants and Call Queues. This update ensures that supervisors can gain valuable insights without compromising security or administrative boundaries.

Documentation on this feature can be found HERE

Setup:

Currently, defining policy attributes to allow historical reporting is only available through Teams PowerShell - it is not yet available in the Teams Admin Center.

Once connected to the Teams PowerShell module (please make sure your Teams module is updated - PowerShell Gallery | MicrosoftTeams 6.4.0):

Get-CsTeamsVoiceApplicationsPolicy

will show the following policy attributes, noting the 3 reporting attributes at the bottom.
 

These 3 attributes each accept one of three string values:
  • Disabled
  • AuthorizedOnly
  • All
You may choose to create 2 policies -

One for AuthorizedOnly: allowing only those users defined as an authorized user in each Auto Attendant or Call Queue to access the historical reports.

A second for All: granting users with this policy access to all historical reports across the tenant.

Use the New-CsTeamsVoiceApplicationsPolicy cmdlet to create the policy needed.


Example (Historical Reporting Only):

New-CsTeamsVoiceApplicationsPolicy -identity "Historical Report Only" -HistoricalAutoAttendantMetricsPermission AuthorizedOnly -HistoricalCallQueueMetricsPermission AuthorizedOnly -HistoricalAgentMetricsPermission AuthorizedOnly
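
If you also create the broader policy described above, a second cmdlet call with the All value would look like this (the policy name here is an example):

# Companion policy granting access to all historical reports across the tenant
New-CsTeamsVoiceApplicationsPolicy -Identity "Historical Report All" -HistoricalAutoAttendantMetricsPermission All -HistoricalCallQueueMetricsPermission All -HistoricalAgentMetricsPermission All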


If users are currently assigned an administrative role - 




Once the policies are defined to your needs, there are 2 additional steps required:
  • Assign the Policy to Users
  • Define Authorized Users within each Auto Attendant and/or Call Queue
Select a user -> Select Policies -> Edit


Assign the desired Policy
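
The same assignment can also be done in PowerShell with the Grant cmdlet (the user and policy names below are examples):

# Assign the voice applications policy to a single user
Grant-CsTeamsVoiceApplicationsPolicy -Identity "user@contoso.com" -PolicyName "Historical Report Only"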

And then, if the policy is for Authorized Users - each user needs to be defined within the Auto Attendant or Call Queue.



Result:

With this policy setup, an Authorized User can open the PowerBI AA/CQ Historical Report Template and see ONLY the historical reports where they have access, no longer needing an administrative role.

The filter noted in the red box below will only show authorized resources.

For more information and a link to download the PowerBI template - 



Hope this helps everyone!

Thanks.




Note - at the time of this post - please reference the following guidance if you experience errors connecting in PowerBI



Wednesday, June 19, 2024

Unlocking Insights: Deploying and Utilizing Microsoft Teams Call Record Insights

The "Call Record Insights" application template on GitHub is a comprehensive solution for retrieving, parsing, and storing Microsoft Teams call records. Using the Microsoft Graph API, this template simplifies data retrieval and storage into Cosmos DB and Kusto in Azure, providing scalability and deep data analysis capabilities. It is fully deployable within your tenant, ensuring data processing and storage are managed according to your organizational needs. The repository includes detailed deployment instructions, architecture guidelines, and administrative functions, making it a valuable resource for administrators aiming to manage Teams call records efficiently.

https://github.com/OfficeDev/microsoft-teams-apps-callrecord-insights


Foundation:

In this post, we'll explore the Call Record Insights application: what it is, how to deploy it, and methods to access and enhance its data. We'll explain how these records differ from traditional Call Detail Records (CDRs) and discuss automation strategies for accessing records and gaining insights. By the end, you'll have an understanding of how to leverage this tool to maximize your Microsoft Teams data management and analysis capabilities.


Deployment:

  • Pre-requisites: Ensure you have an Azure subscription, access to Microsoft Teams, and required permissions.
  • Clone Repository: Clone the GitHub repository to your local machine (or download and extract the ZIP package)
  • Azure Resources and Parameters: Define the necessary Azure resource parameters following the GitHub guidance. Here is an example:

$DeploymentParameters = @{
    ResourceGroupName = 'cridemo1rg'
    BaseResourceName  = 'cridemo1'
    SubscriptionId    = 'YourSubscriptionID'
    TenantDomain      = 'YourTenantDomain.com'
    Location          = 'EASTUS'
}

  • Configure and Deploy: Use the provided scripts and templates to configure and deploy the solution in your Azure environment.

MWCRI\deploy> .\deploy.ps1 @DeploymentParameters

    Follow the deployment script output:


    Once Deployed your resource group will look similar to this:



    Note this post won't cover much troubleshooting of the deployment or its health once deployed, but you can leverage Function App Logs to review errors when needed.



    Accessing Records:

    With deployment complete, the first thing we want to do is "see" the data.

    There are multiple ways to access Call Record Insights data. Here we will outline a few.

    Option 1 - Direct from Azure Portal:

    Navigate to the defined resource group and select the Azure Data Explorer Cluster for Call Record Insights.


    Expand the Data section and select Query. In this view I also expanded the CallRecordsInsights database so we can see the table name "CallRecords". A simple KQL query, CallRecords | take 100, will return 100 records in the lower right quadrant of the Explorer panel (note KQL operators are case-sensitive, so "take" is lowercase). Click "Run" after inserting the query.
    Note in the upper right corner you can then export these records to Excel CSV.
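As copyable blocks, here is that starter query, plus a quick row count to confirm records are being ingested (count is a standard KQL operator):

```kusto
// Return a 100-record sample (order is not guaranteed)
CallRecords
| take 100

// Or just confirm how many records have been ingested
CallRecords
| count
```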

    Option 2 - Azure Data Explorer (ADX) web:

    From the overview section of the Azure Data Explorer Cluster, copy the URI and paste it into a browser.


    Note this method does require permissions to access the cluster: make sure your user has access permissions and is authenticated as the appropriate user and directory (this can be found in the upper right corner of the Data Explorer interface).

    Option 3 - Kusto Explorer

    More information and installation links can be found here - Kusto.Explorer installation and user interface - Azure Data Explorer | Microsoft Learn

    Kusto Explorer is designed with a focus on data exploration and ad-hoc querying, making it easier for users who need to quickly analyze data without the need for extensive management capabilities.

    Once Kusto Explorer (KE) is installed, select Add Connection.



    Insert the URI from ADX (noted in Option 2 above) into the Cluster Connection field. The Connection Alias can be set to any familiar name for Call Record Insights.
    Note if you are connecting KE to an ADX instance outside your home tenant, you can modify the Advanced Connection String to include the required tenant ID, which will introduce the interactive login prompt.

    Data Source=https://<cluster-name>.kusto.windows.net;Initial Catalog=<database-name>;AAD Federated Security=True;Authority=<tenant-ID>


    Once connected, select your Call Records Insights Database, right click, and launch a new query tab.

    Enhanced Data Collection:

    Once you have access to the records in Cosmos DB, it's crucial to understand the data. Each record contains up to 244 fields, but not all fields are populated for every record. The Microsoft Graph API populates fields based on call type, stream direction, and components used.

    Understanding call types is essential. Unlike traditional CDRs, modern cloud calling solutions include additional components like multiple participants, video, screen sharing, and IVR endpoints. This makes the data more complex, with streams representing various call aspects.

    To manage the extensive data, you can use KQL (Kusto Query Language) to focus on key fields for easier analysis and filtering. This helps streamline the process of sorting through thousands of records and extracting meaningful insights. 

    Below is a KQL example: select the past 24 hours of records from CallRecords, sort newest to oldest, and insert a calculation for call duration. This field is not natively provided by the Graph API endpoint, but the value can be calculated from CallEndTime and CallStartTime using "extend", and then the desired fields displayed using "project".


    CallRecords
    | where CallEndTime >= ago(24h)
    | sort by CallStartTime desc
    | extend CallDuration = todatetime(CallEndTime) - todatetime(CallStartTime)
    | project CallId, SessionId, StreamDirection, MediaLabel, CallStartTime, CallEndTime,
    CallDuration, Caller_UserDisplayName, Caller_PhoneId, Callee_UserDisplayName, Callee_PhoneId,
    Organizer_UserDisplayName, Caller_UserAgentHeaderValue, Callee_UserAgentHeaderValue


    In the "project" statement here - I selected 13 of the available fields in the database and added the calculated field of call duration.

    For an additional filter (only audio records for calls) we could modify the following KQL:

    | where CallEndTime >= ago(24h) and MediaLabel == "main-audio"
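Folded into the full query from above, the audio-only version reads:

```kusto
CallRecords
| where CallEndTime >= ago(24h) and MediaLabel == "main-audio"
| sort by CallStartTime desc
| extend CallDuration = todatetime(CallEndTime) - todatetime(CallStartTime)
| project CallId, SessionId, StreamDirection, MediaLabel, CallStartTime, CallEndTime,
    CallDuration, Caller_UserDisplayName, Caller_PhoneId, Callee_UserDisplayName, Callee_PhoneId,
    Organizer_UserDisplayName, Caller_UserAgentHeaderValue, Callee_UserAgentHeaderValue
```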

    Keep in mind there are potentially 244 fields available, depending on which insights you choose to evaluate, including stream type, audio and video codec, audio quality, and packet metrics.


    Record Example Types:

    Remembering that each record is a call leg, or stream, understanding the field terms will help in classifying and identifying individual calls. Reminder: in the examples below I have only selected certain fields using KQL; there are many more available for your particular analysis.

    CallID - Call Record Identifier

    SessionID - identifies individual sessions as part of the CallID

    MediaLabel - Identifies the payload type for each stream (for phone calls over IP or PSTN, we use "main-audio")
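To see which payload types actually appear in your own data, a quick sketch (assuming the same CallRecords table and MediaLabel field used throughout this post):

```kusto
CallRecords
| summarize Streams = count() by MediaLabel
```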

    -- Apologies for the format here - displaying long records doesn't fit well on the screen (Click the image for a full view) --

    Example: Teams User to Outbound PSTN call (Direct Routing)

    Note we see the caller (Caller_UserDisplayName) identified as Administrator, and the callee (Callee_PhoneId) identified by PSTN digits.


    Example: User to User (Click-to-Call within Teams)

    In this example we know the caller and callee display names, and there were no PSTN endpoints involved, so Caller_PhoneId and Callee_PhoneId are both blank.


    Example: Inbound PSTN (Direct Routing), to Auto Attendant, transfer to Teams User


    It's important to recognize how fields are populated based on call direction. The Caller fields (either Caller_UserID or Caller_PhoneID) identify the call originator.

    I plan to publish a follow-up post with additional call examples and explanations, including Auto Attendant, Call Queue, Call Transfer, and Meetings with PSTN dial-in/out. Stay tuned for chapter 2.


    Automate Data Collection:

    While Call Record Insights (CRI) already operates in near real-time as a function app, you might want to automate the collection of records at specific intervals. For example, you could collect records every 24 hours and export them to Excel, SQL, or Power BI for analysis or cost management. Cosmos DB stores all collected records, allowing you to query them as needed, but scheduled exports can enhance your data analysis and reporting capabilities.

    In PowerShell, a script can perform the KQL query and export the results to CSV.

    # Install the required modules (first run only)
    Install-Module -Name Az.CosmosDB
    Install-Module -Name Az.Kusto

    # Interactive Login
    Connect-AzAccount

    # Non-Interactive Login (Optional)
    # Uncomment and fill in the details if using a service principal
    # $tenantId = "<Your-Tenant-ID>"
    # $appId = "<Your-App-ID>"
    # $password = ConvertTo-SecureString "<Your-Password>" -AsPlainText -Force
    # Connect-AzAccount -ServicePrincipal -Tenant $tenantId -Credential (New-Object System.Management.Automation.PSCredential($appId, $password))

    # Set Output File Path with timestamp
    $timestamp = Get-Date -Format "yyyyMMdd_HHmmss"
    $filePath = "C:\yourfilepath_$timestamp.csv"
    # Define the Kusto cluster, database, and query
    $cluster = "https://yourclusterurl"
    $database = "CallRecordInsights"
    $query = @"
    CallRecords
    | where CallEndTime >= ago(24h)
    | sort by CallStartTime desc
    | extend CallDuration = todatetime(CallEndTime) - todatetime(CallStartTime)
    | project CallId, SessionId, StreamDirection, MediaLabel, CallStartTime, CallEndTime,
    CallDuration, Caller_UserDisplayName, Caller_PhoneId, Callee_UserDisplayName, Callee_PhoneId,
    Organizer_UserDisplayName, Caller_UserAgentHeaderValue, Callee_UserAgentHeaderValue
    "@
    # Run the query
    $results = Invoke-AzKustoQuery -Cluster $cluster -Database $database -Query $query

    # Convert to CSV
    $results | Export-Csv -Path $filePath -NoTypeInformation

    # Optional - Convert CSV to Excel
    $excelFilePath = "C:\yourfilepath_$timestamp.xlsx"
    $excel = New-Object -ComObject Excel.Application
    $workbook = $excel.Workbooks.Open($filePath)
    $workbook.SaveAs($excelFilePath, 51)  # 51 is the Excel file format code for .xlsx
    $workbook.Close()
    $excel.Quit()
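    To run this export on a daily cadence without Azure Automation, one option is a Windows scheduled task. A minimal sketch, assuming the script above has been saved to a hypothetical path C:\Scripts\Export-CallRecords.ps1:

```powershell
# Hypothetical path to the export script above
$scriptPath = 'C:\Scripts\Export-CallRecords.ps1'

# Run PowerShell non-interactively against that script
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' `
            -Argument "-NoProfile -ExecutionPolicy Bypass -File `"$scriptPath`""

# Fire once a day at 2:00 AM
$trigger = New-ScheduledTaskTrigger -Daily -At 2am

Register-ScheduledTask -TaskName 'CRI-DailyExport' -Action $action -Trigger $trigger
```

    In an Azure Automation runbook the schedule is attached in the portal instead, so the script body stays the same.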

    Additionally, you can use Power Automate with the Azure Data Explorer connection to export data to Excel, Power BI, or SQL. This method is particularly useful because the user UPN (User Principal Name) is not directly available from the Graph API. Power Automate, or a table join, can look up the UPN and return it to the chosen dataset.

    Update - Here is a method to provide UPN within Call Record Insights.

    More examples can be found at Usage examples for Azure Data Explorer connector to Power Automate - Azure Data Explorer | Microsoft Learn


    Boom !!



    Conclusion:

    In this post, we've explored the Call Record Insights application, from deployment to data enhancement and automation. Understanding the comprehensive data captured by the Microsoft Graph API, especially how it differs from traditional CDRs, is crucial for effective analysis. By leveraging tools like KQL and Power Automate, you can streamline data management and extract valuable insights. Stay tuned for the next chapter, where we'll dive deeper into call examples and scenarios, including Auto Attendant, Call Queue, and PSTN meetings. Get started with your deployment and dig into the insights.

    Big thanks to Andy Thompson and Justin Walker for the ongoing development effort!!


    Update (July): Here is a method to provide UPN within Call Record Insights.
