Transform work with autonomous agents across your business processes


We’re expanding our ambition to bring AI-first business processes to organizations. First, we’re announcing that the ability to create autonomous agents with Microsoft Copilot Studio will be available in public preview in November 2024. Learn more on the Copilot Studio blog.

Second, we’re introducing 10 new autonomous agents in Microsoft Dynamics 365 to build capacity for sales, service, finance, and supply chain teams. These agents are designed to help you accelerate your time to value and are configured to scale operational efficiency and elevate customer experiences across roles and functions. 


Rapidly drive impact with autonomous agents 

Microsoft Copilot is your AI assistant—it works for you—and Copilot Studio enables you to easily create, manage, and connect agents to Copilot. Think of agents as the new apps for an AI-powered world. We envision organizations will have a constellation of agents—ranging from simple prompt-and-response to fully autonomous. They will work on behalf of an individual, team, or function to execute and orchestrate business processes ranging from lead generation to sales order processing to confirming order deliveries. Copilot is how you’ll interact with these agents.

Introducing autonomous agents for Dynamics 365 

New autonomous agents enable customers to move from legacy line-of-business applications to AI-first business processes. AI is today’s return on investment (ROI) and tomorrow’s competitive edge. These new agents are designed to help sales, service, finance, and supply chain teams drive business value—and are just the start. We will create many more agents in the coming year that give customers the competitive advantage they need to help future-proof their organization. Today, we’re introducing ten of these autonomous agents, which will start to become available in public preview later in 2024 and continue into early 2025.

Sales: Help sellers focus time on building customer relationships to close deals faster

Agents will help sellers focus time on engaging customers to move through the sales cycle faster. The Sales Qualification Agent for Microsoft Dynamics 365 Sales can free up time for the seller to spend on higher value activities by researching and prioritizing inbound leads in the pipe and developing personalized sales emails to initiate a sales conversation.

For small to medium-sized businesses, the Sales Order Agent for Microsoft Dynamics 365 Business Central will automate the order intake process from entry to confirmation by interacting with customers and capturing their preferences. See Sales Order Agent in action.

Operations: Empower teams to grow the business, optimize process, and meet customer demand

To maintain smooth business operations, it’s crucial that processes in key areas such as finance, procurement, and supply chain are optimized to minimize cost, mitigate risks, and accelerate decisions. Autonomous agents operate around the clock to execute a range of processes, helping professionals spend less time on manual work and more time on strategic tasks like planning and decision making.

The Supplier Communications Agent for Microsoft Dynamics 365 Supply Chain Management autonomously manages collaboration with suppliers to confirm order delivery, while helping to preempt potential delays. With agents performing all the tasks related to confirming purchase orders, procurement specialists can focus on managing supplier relationships and improving overall supply chain resiliency.

Additional agents:  

  • Financial Reconciliation Agent for Microsoft 365 Copilot for Finance helps teams prepare and cleanse data sets to simplify and reduce time spent on the most labor-intensive part of the financial period close process that leads to financial reporting. Learn more in this brief video.
  • Account Reconciliation Agent for Microsoft Dynamics 365 Finance, designed for accountants and controllers, automates the matching and clearing of transactions between subledgers and the general ledger, helping them speed up the financial close process. This enhances cash flow visibility and can result in faster decisions to drive business performance. Watch this video to learn more.
  • Time and Expense Agent for Microsoft Dynamics 365 Project Operations autonomously manages time entry, expense tracking, and approval workflows. It helps get invoices to customers promptly, preventing revenue leakage and helps ensure projects stay on track and within budget. See Time and Expense Agent in action.

Service: Transform customer experiences across self- and human-assisted service  

Contact centers face interconnected, compounding challenges to successfully and efficiently serve customers. For example, keeping vital knowledge base articles current relies on manual processes. Valuable insights from seasoned customer service representatives are often locked away in chat logs, call recordings, case notes, and other data silos. And self-service tools rely on inflexible, hard-coded dialog with embedded knowledge that must be predefined for potential customer issues.

The Customer Intent and Customer Knowledge Management Agents, available for Microsoft Dynamics 365 Customer Service and Microsoft Dynamics 365 Contact Center, help contact centers transform customer experiences across self-service and human-assisted service. The Customer Intent Agent enables evergreen self-service by continuously discovering new intents from past and current customer conversations across all channels, mapping issues to the corresponding resolutions in a library maintained by the agent. The Customer Knowledge Management Agent helps ensure knowledge articles are kept perpetually up to date by analyzing case notes, transcripts, summaries, and other artifacts from human-assisted cases to uncover insights.

Additional agents:

  • Case Management Agent for Customer Service automates key tasks throughout the case lifecycle—creation, resolution, follow up, closure—to reduce handle time and alleviate the burden on service representatives. See Case Management Agent in action.
  • Scheduling Operations Agent for Microsoft Dynamics 365 Field Service enables dispatchers to provide optimized schedules for technicians, even as conditions change throughout the workday—for example, accounting for issues such as traffic delays, double bookings, or last-minute cancellations that often result in conflicts or gaps. 

Collectively, these agents are trained to autonomously learn to address new and emerging issues via self-service, improve the quality of issue resolution across channels, and help drive time and cost savings.

As agents become more prevalent in the enterprise, customers want to be confident that they have robust data governance and security. The agents coming to Dynamics 365 follow our core security, privacy, and responsible AI commitments. Agents built in Copilot Studio include guardrails and controls established by maker-defined instructions, knowledge, and actions. The data sources linked to the agent adhere to stringent security measures and controls—all managed in Copilot Studio. This includes data loss prevention, robust authentication protocols, and more. Once these agents are created, IT administrators can apply a comprehensive set of features to govern their use. 

Transform your business with agents 

Start your journey with agents by reading more about the full set of capabilities announced today, as well as a closer look at new ways to build autonomous agents with Copilot Studio.  


Routing options for VMs from Private Subnets


Virtual machines deployed in Azure used to have Default Outbound Internet Access. Until today, this has allowed virtual machines to connect to resources on the internet (including public endpoints of Azure PaaS services) even if cloud administrators have not explicitly configured any outbound connectivity method for them. Implicitly, Azure’s network stack performed source network address translation (SNAT) with a public IP address provided by the platform.

As part of its commitment to increasing the security of customer workloads, Microsoft will deprecate Default Outbound Internet Access on 30 September 2025 (see the official announcement here). From that date, customers will need to explicitly configure an outbound connectivity method if their virtual machines require internet connectivity. Customers will have the following options:



  • Attach a dedicated Public IP Address to a virtual machine.

  • Deploy a NAT gateway and attach it to the VNet subnet the VM is connected to.

  • Deploy a Load Balancer and configure Load Balancer Outbound Rules for virtual machines.

  • Deploy a Network Virtual Appliance (NVA) to perform SNAT, such as Azure Firewall, and route internet-bound traffic to the NVA before egressing to the internet.


Today, customers can start preparing their workloads for the updated platform behavior. By setting the property defaultOutboundAccess to false during subnet creation, VMs deployed to this subnet will not use the conventional default outbound access method but will already adhere to the new behavior. Subnets with this configuration are also referred to as ‘private subnets’.
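
For illustration, such a private subnet can also be created imperatively. The following is a minimal sketch (not part of the sample repository): the resource names are placeholders, and the --default-outbound-access flag assumes a reasonably recent Azure CLI version.

    # Create a subnet that opts out of default outbound access (a 'private subnet').
    $ az network vnet subnet create \
        --resource-group my-rg \
        --vnet-name my-vnet \
        --name my-private-subnet \
        --address-prefixes 10.0.1.0/24 \
        --default-outbound-access false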

In this article, we demonstrate (a) the limited connectivity of virtual machines deployed to private subnets. We also explore different options to (b) route traffic from these virtual machines to the public internet and to (c) optimize the communication path for management and data plane operations targeting public endpoints of Azure services.

We will be focusing on connectivity with Azure services’ public endpoints. If you use Private Endpoints to expose services to your virtual network instead, routing in a private subnet remains unchanged.





Overview



The following architecture diagram presents the sample setup that we’ll use to explore the network traffic with different components. 


 

[Architecture diagram: the sample setup described below]


 


The setup comprises the following components:



  • A virtual network with a private subnet (i.e., a subnet that does not offer default outbound connectivity to the internet).

  • A virtual machine (running Ubuntu Linux) connected to this subnet.

  • A Key Vault with a stored secret, as a sample Azure PaaS service to explore Azure-bound connectivity.

  • A Log Analytics Workspace, storing audit information (i.e., metadata of all control and data plane operations) from that Key Vault.

  • A Bastion Host to securely connect to the virtual machine via SSH.


In the following sections, we will integrate the following components to control the network traffic and explore the effects on the communication flow:



  • An Azure Firewall as a central Network Virtual Appliance to route outbound internet traffic.

  • An Azure Load Balancer with Outbound Rules to route Azure-bound traffic through the Azure Backbone (we’ll use the Azure Resource Manager in this example).

  • A Service Endpoint to route data plane operations directly to the service.


We’ll use the following examples to illustrate the communication paths:



  • A simple HTTP call to ifconfig.io, which (if successful) returns the public IP address used for calls to public internet resources.

  • An invocation of the Azure CLI to get Key Vault metadata (az keyvault show), which (if successful) returns information about the Key Vault resource. This call to the Azure Resource Manager represents a management plane operation.

  • An invocation of the Azure CLI to get a secret stored in the Key Vault (az keyvault secret show), which (if successful) returns the secret. This represents a data plane operation.

  • A query against the Key Vault’s audit log (stored in the Log Analytics Workspace) to reveal the caller’s IP address for management and data plane operations.



Prerequisites



The repository Azure-Samples/azure-networking_private-subnet-routing on GitHub contains all required Infrastructure as Code assets, allowing you to easily reproduce the setup and exploration in your own Azure subscription.



Tools



The implementation uses the following tools:



  • bash as Command Line Interpreter (consider using Windows Subsystem for Linux if you are on Windows)

  • git to clone the repository (find installation instructions here)

  • Azure Command-Line Interface to interact with deployed Azure components (find installation instructions here)

  • HashiCorp Terraform (find installation instructions here).

  • jq to parse and process JSON input (find installation instructions here)



Git repository




  • Clone the Git repository and cd into its repository root.

    $ git clone https://github.com/Azure-Samples/azure-networking_private-subnet-routing
    $ cd azure-networking_private-subnet-routing





Azure subscription




  • Login to your Azure subscription via Azure CLI and ensure you have access to your subscription.

    $ az login

    $ az account show






Getting ready: Deploy infrastructure.



We kick off our journey by deploying the infrastructure depicted in the architecture diagram above; we’ll do that using the IaC (Infrastructure as Code) assets from the repository.

  • Open the file terraform.tfvars in your favorite code editor and adjust the values of the variables location (the region to which all resources will be deployed) and prefix (the shared name prefix for all resources). Also, don’t forget to provide login credentials for your VM by setting values for admin_username and admin_password.




  • Set the environment variable ARM_SUBSCRIPTION_ID to point Terraform to the subscription you are currently logged on to.



    $ export ARM_SUBSCRIPTION_ID=$(az account show --query "id" -o tsv)




  • Using your CLI and Terraform, deploy the demo setup:



    $ terraform init
    Initializing the backend…
    […]
    Terraform has been successfully initialized!

    $ terraform apply
    […]
    Do you want to perform these actions?
    Terraform will perform the actions described above.
    Only 'yes' will be accepted to approve.
    Enter a value: yes
    […]

    Apply complete!
    […]




    ☝️ In case you are not familiar with Terraform, this tutorial might be insightful for you.





  • Explore the deployed resources in the Azure Portal. Note that although the network infrastructure components shown in the architecture drawing above are already deployed, they are not yet configured for use from the Virtual Machine:



    • The Azure Firewall is deployed, but the route table attached to the VM subnet does not (yet) have any route directing traffic to the firewall (we will add this in Scenario 2).

    • The Azure Load Balancer is already deployed, but the virtual machine is not yet a member of its backend pool (we will change this in Scenario 3).




  • Log in to the Virtual Machine using the Bastion Host.



    $ ./scripts/ssh-bastion.sh
    azureuser@localhost's password:
    Welcome to Ubuntu 20.04.6 LTS (GNU/Linux 5.15.0-1064-azure x86_64)
    azureuser@no-doa-demo-vm:~$





Scenario 1: Access from private subnet.



At this point, our virtual machine is deployed to a private subnet. As we do not have any outbound connectivity method set up, all calls to public internet resources as well as to the public endpoints of Azure resources will time out.




  • Test 1: Call to public internet



    $ curl ifconfig.io --connect-timeout 10
    curl: (28) Connection timed out after 10004 milliseconds




  • Test 2: Call to Azure Resource Manager



    $ curl https://management.azure.com/ --connect-timeout 10
    curl: (28) Connection timed out after 10001 milliseconds




  • Test 3: Call to Azure Key Vault (data plane)



    $ curl https://no-doa-demo-kv.vault.azure.net/ --connect-timeout 10
    curl: (28) Connection timed out after 10002 milliseconds





Scenario 2: Route all traffic through Azure Firewall.



Typically, customers deploy a central firewall in their network to ensure that outbound traffic is consistently SNATed through the same public IPs and is centrally controlled and governed. In this scenario, we therefore modify our existing route table and add a default route (i.e., for CIDR range 0.0.0.0/0) directing all outbound traffic to the private IP of our Azure Firewall.
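
The Terraform step below enables exactly such a route. For reference, a roughly equivalent Azure CLI call is sketched here; the route table name is a placeholder, and 10.254.1.4 is the firewall’s private IP in this demo setup.

    # Sketch: send all traffic (0.0.0.0/0) to the firewall's private IP.
    $ az network route-table route create \
        --resource-group no-doa-demo-rg \
        --route-table-name <route-table-name> \
        --name default-to-firewall \
        --address-prefix 0.0.0.0/0 \
        --next-hop-type VirtualAppliance \
        --next-hop-ip-address 10.254.1.4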




  • Add Firewall and routes.



    • Browse to network.tf, uncomment the definition of azurerm_route.default-to-firewall.

    • Update your deployment.

      $ terraform apply
      Terraform will perform the following actions:
      # azurerm_route.default-to-firewall will be created
      […]






  • Test 1: Call to public internet, revealing that outbound calls are routed through the firewall’s public IP.



    $ curl ifconfig.io
    4.184.163.38




  • Now that you have access to the internet, install the Azure CLI.



    $ curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash




  • Log in to Azure with the virtual machine’s managed identity.



    $ az login --identity




  • Test 2: Call to Azure Resource Manager (you might need to change the Key Vault name if you changed the prefix in your terraform.tfvars)



    $ az keyvault show --name "no-doa-demo-kv" -o table
    Location            Name            ResourceGroup
    ------------------  --------------  --------------
    germanywestcentral  no-doa-demo-kv  no-doa-demo-rg




  • Test 3: Call to Azure Key Vault (data plane)



    $ az keyvault secret show --vault-name "no-doa-demo-kv" --name message -o table
    ContentType    Name     Value
    -------------  -------  -------------
                   message  Hello, World!




  • Query Key Vault Audit Log.



    ☝️ The ingestion of audit logs into the Log Analytics Workspace might take some time. Please make sure to wait for up to ten minutes before starting to troubleshoot.





    • Get Application ID of VM’s system-assigned managed identity:



      $ ./scripts/vm_get-app-id.sh
      AppId for Principal ID f889ca69-d4b0-45a7-8300-0a88f957613e is: 8aa9503c-ee91-43ee-96c7-49dc005ebecc




    • Go to the Log Analytics Workspace and run the following query.



      AzureDiagnostics
      | where identity_claim_appid_g == "[Replace with App ID!]"
      | project TimeGenerated, Resource, OperationName, CallerIPAddress
      | order by TimeGenerated desc


      Alternatively, run the prepared script kv_query-audit.sh:



      $ ./scripts/kv_query-audit.sh
      CallerIPAddress  OperationName  Resource        TableName      TimeGenerated
      ---------------  -------------  --------------  -------------  ----------------------------
      4.184.163.38     VaultGet       NO-DOA-DEMO-KV  PrimaryResult  2024-06-14T08:25:29.4821689Z
      4.184.163.38     SecretGet      NO-DOA-DEMO-KV  PrimaryResult  2024-06-14T08:26:07.0067419Z







🗣 Note that both calls to the Key Vault succeed as they are routed through the central Firewall; both requests (to the Azure management plane and the Key Vault data plane) hit their endpoints from the Firewall’s public IP.




Scenario 3: Bypass Firewall for traffic to Azure management plane.



At this point, all internet and Azure-bound traffic to public endpoints is routed through the Azure Firewall. Although this allows you to centrally control all traffic, you might have good reasons to offload some communication from this component by routing traffic targeting a specific IP address range through a different component for SNAT — for example to optimize latency or reduce load on the firewall component for communication with well-known hosts.

☝️ As mentioned before, dedicated Public IP addresses, NAT Gateways and Azure Load Balancers are alternative options to configure SNAT for outbound access. You can find a detailed discussion about all options here.



In this scenario, we assume that we want network traffic to the Azure management plane to bypass the central Firewall (we pick this service for demonstration purposes). Instead, we want to use the SNAT capabilities of an Azure Load Balancer with outbound rules to route traffic to the public endpoints of the Azure Resource Manager. We can achieve this by adding a more-specific route to the route table, directing traffic targeting the corresponding service tag (a symbolic name for a set of IP ranges) to a different destination.

The integration of outbound load balancing rules into the communication path works differently than integrating a Network Virtual Appliance: while we integrated the latter by setting the NVA’s private IP address as the next hop in our user-defined route in scenario 2, the Load Balancer is only integrated implicitly into our network flow — by specifying Internet as the next hop in our route table. (Essentially, next hop ‘Internet’ instructs Azure to use either (a) the Public IP attached to the VM’s NIC, (b) the Load Balancer associated with the VM’s NIC through an outbound rule, or (c) a NAT Gateway attached to the subnet the VM’s NIC is connected to.) Therefore, we need to take two steps to send traffic through our Load Balancer:



  • Deploy a more-specific user-defined route for the respective service tag.

  • Add our VM’s NIC to a load balancer’s backend pool with an outbound load balancing rule.


In our scenario, we’ll do this for the service tag AzureResourceManager, which (amongst others) comprises the IP addresses for management.azure.com, the endpoint for the Azure control plane. This will affect the az keyvault show operation that retrieves the Key Vault’s metadata.
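
Conceptually, the route we are about to add via Terraform corresponds to the following Azure CLI sketch (names are placeholders); note the service tag used as address prefix and Internet as the next hop type.

    # Sketch: route traffic for the AzureResourceManager service tag to next hop 'Internet'.
    $ az network route-table route create \
        --resource-group no-doa-demo-rg \
        --route-table-name <route-table-name> \
        --name azurerm-to-internet \
        --address-prefix AzureResourceManager \
        --next-hop-type Internet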



Deploy more-specific route for AzureResourceManager service tag.





  • Add a more-specific route for AzureResourceManager




    • Browse to network.tf, uncomment the definition of azurerm_route.azurerm_2_internet.



      ☝️ Note that this route specifies Internet (!) as next hop type for any communication targeting IPs of service tag AzureResourceManager.





    • Update your deployment.



      $ terraform apply
      Terraform will perform the following actions:
      # azurerm_route.azurerm_2_internet will be created
      […]






  • (optional) Repeat test 1 (call to public internet) and test 3 (call to Key Vault’s data plane) to confirm behavior remains unchanged.



    $ curl ifconfig.io
    4.184.163.38

    $ az keyvault secret show --vault-name "no-doa-demo-kv" --name message -o table
    ContentType    Name     Value
    -------------  -------  -------------
                   message  Hello, World!





  • Test 2: Call to Azure Resource Manager



    $ az keyvault show --name "no-doa-demo-kv" -o table
    : Failed to establish a new connection: [Errno 101] Network is unreachable





🗣 While the call to the Key Vault data plane succeeds, the call to the resource manager fails: route azurerm_2_internet directs traffic to next hop type Internet. However, as the VM’s subnet is private, defining the outbound route is not sufficient, and we still need to attach the VM’s NIC to the Load Balancer’s outbound rule.




Instruct Azure to send internet-bound traffic through Outbound Load Balancer





  • Add virtual machine’s NIC to a backend pool linked with an outbound load balancing rule.




    • Browse to vm.tf, uncomment the definition of azurerm_network_interface_backend_address_pool_association.vm-nic_2_lb.




    • Update your deployment.



      $ terraform apply
      Terraform will perform the following actions:
      # azurerm_network_interface_backend_address_pool_association.vm-nic_2_lb will be created
      […]






  • (optional) Repeat test 1 (call to public internet) and test 3 (call to Key Vault’s data plane) to confirm behavior remains unchanged.




  • Repeat Test 2: Call to Azure Resource Manager



    $ az keyvault show --name "no-doa-demo-kv" -o table
    Location            Name            ResourceGroup
    ------------------  --------------  --------------
    germanywestcentral  no-doa-demo-kv  no-doa-demo-rg




  • Re-run the prepared script kv_query-audit.sh:



    $ ./scripts/kv_query-audit.sh
    CallerIPAddress  OperationName  Resource        TableName      TimeGenerated
    ---------------  -------------  --------------  -------------  ----------------------------
    4.184.163.38     SecretGet      NO-DOA-DEMO-KV  PrimaryResult  2024-06-17T12:49:30.7165964Z
    4.184.161.169    VaultGet       NO-DOA-DEMO-KV  PrimaryResult  2024-06-17T12:44:35.6599439Z
    […]





🗣 After adding the NIC to the backend of the outbound load balancer, routes with next hop type Internet will use the load balancer for outbound traffic. As we specified Internet as next hop type for AzureResourceManager, the VaultGet operation is now hitting the management plane from the load balancer’s public IP. (Communication with the Key Vault data plane remains unchanged; the SecretGet operation still hits the Key Vault from the Firewall’s Public IP.)
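
For reference, the association created by the Terraform step above could also be made imperatively. This is a hedged sketch; the NIC, load balancer, backend pool, and IP configuration names are placeholders you would need to adapt to your deployment.

    # Sketch: add the VM NIC's IP configuration to the backend pool that the
    # load balancer's outbound rule references.
    $ az network nic ip-config address-pool add \
        --resource-group no-doa-demo-rg \
        --nic-name <vm-nic-name> \
        --ip-config-name <ip-config-name> \
        --lb-name <load-balancer-name> \
        --address-pool <backend-pool-name>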




☝️ We explored this path for the platform-defined service tag AzureResourceManager. However, it’s equally possible to define this communication path for your self-defined IP addresses or ranges.




Scenario 4: Add ‘shortcut’ for traffic to Key Vault data plane.



For communication with many platform services, Azure offers customers Virtual Network Service Endpoints to enable an optimized connectivity method that keeps traffic on its backbone network. Customers can use this, for example, to offload traffic to platform services from their network resources and increase security by enabling access restrictions on their resources.

☝️ Note that service endpoints are not specific to individual resource instances; they enable optimized connectivity for all deployments of this resource type (across different subscriptions, tenants, and customers). You may want to deploy complementing firewall rules on your resource as an additional layer of security.



In this scenario, we’ll deploy a service endpoint for Azure Key Vault. We’ll see that the platform will no longer SNAT traffic to our Key Vault’s data plane but will use the VM’s private IP for communication.
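
The Terraform step below enables the endpoint on the subnet; imperatively, the equivalent would look roughly like this (a sketch with placeholder network names):

    # Sketch: enable the Key Vault service endpoint on the VM subnet.
    $ az network vnet subnet update \
        --resource-group no-doa-demo-rg \
        --vnet-name <vnet-name> \
        --name <vm-subnet-name> \
        --service-endpoints Microsoft.KeyVault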




  • Deploy Service Endpoint for Key Vault




    • Browse to network.tf, uncomment the definition of serviceEndpoints in azapi_resource.subnet-vm.




    • Update your deployment.



      $ terraform apply
      Terraform will perform the following actions:
      # azapi_resource.subnet-vm will be updated in-place
      […]






  • (optional) Repeat test 1 (call to public internet) and test 2 (call to Azure management plane) to confirm behavior remains unchanged.




  • Test 3: Call to Azure Key Vault (data plane)



    $ az keyvault secret show --vault-name "no-doa-demo-kv" --name message -o table
    ContentType    Name     Value
    -------------  -------  -------------
                   message  Hello, World!




  • Re-run the prepared script kv_query-audit.sh:



    $ ./scripts/kv_query-audit.sh
    CallerIPAddress  OperationName  Resource        TableName      TimeGenerated
    ---------------  -------------  --------------  -------------  ----------------------------
    10.3.1.4         SecretGet      NO-DOA-DEMO-KV  PrimaryResult  2024-06-17T14:21:28.3388285Z
    […]





🗣 After deploying a service endpoint, we see that traffic is hitting the Azure Key Vault data plane from the virtual machine’s private IP address, i.e., not passing through the Firewall or the outbound load balancer.
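
As noted above, the service endpoint optimizes routing for Key Vault as a service, not just for our instance. To complement it with access restrictions on the vault itself, you could add a network rule that only admits traffic from the VM subnet. The following is a sketch with placeholder network names:

    # Sketch: restrict the vault to the VM subnet and deny other networks.
    $ az keyvault network-rule add \
        --resource-group no-doa-demo-rg \
        --name no-doa-demo-kv \
        --vnet-name <vnet-name> \
        --subnet <vm-subnet-name>
    $ az keyvault update \
        --resource-group no-doa-demo-rg \
        --name no-doa-demo-kv \
        --default-action Deny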




Inspect the NIC’s effective routes.



Finally, let’s explore how the different connectivity methods show up in the virtual machine NIC’s effective routes. Use one of the following options to show them:




  • In the Azure portal, browse to the VM’s NIC and open ‘Effective routes’ in the ‘Help’ section.




  • Alternatively, run the provided script (please note that the script will only show the first IP address prefix in the output for brevity).



    $ ./scripts/vm-nic_show-routes.sh
    Source   FirstIpAddressPrefix  NextHopType                    NextHopIpAddress
    -------  --------------------  -----------------------------  ----------------
    Default  10.0.0.0/16           VnetLocal
    User     191.234.158.0/23      Internet
    Default  0.0.0.0/0             Internet
    Default  191.238.72.152/29     VirtualNetworkServiceEndpoint
    User     0.0.0.0/0             VirtualAppliance               10.254.1.4





🗣 See that… 



  • …system-defined route 191.238.72.152/29 to VirtualNetworkServiceEndpoint is sending traffic to Azure Key Vault data plane via service endpoint.

  • …user-defined route 191.234.158.0/23 to Internet is implicitly sending traffic to AzureResourceManager via Outbound Load Balancer (by defining Internet as next hop type for a VM attached to an outbound load balancer rule).

  • …user-defined route 0.0.0.0/0 to VirtualAppliance (10.254.1.4) is sending all remaining internet-bound traffic to the Firewall.



Announcing the Public Preview of the new XML Compose and Parse with schema actions


Why XML?

XML is widely used across various industries due to its versatility and its ability to structure complex data. Some key industries that use XML include:



  • Finance: XML is used for financial data interchange, such as in SWIFT messages for international banking transactions and in various financial reporting standards.

  • Healthcare: XML is used in healthcare for data exchange standards like HL7, which facilitates the sharing of clinical and administrative data between healthcare providers.

  • Supply Chain: XML is used in supply chain management for data interchange, such as in Electronic Data Interchange (EDI) standards.

  • Government: Multiple government entities use XML for various data management and reporting tasks.

  • Legal: XML is used in the legal industry to organize and manage documents, making it easier to find and manage information.


To continuously support our customers in these industries, Microsoft has always provided strong capabilities for integration with XML workloads. For instance, XML was a first-class citizen in BizTalk Server. Now, despite the pervasiveness of the JSON format, we continue working to make Azure Logic Apps the best alternative for our BizTalk Server customers and for customers using XML-based workloads.


The XML Operations connector


 


We have recently added two actions to the XML Operations connector: Parse with schema and Compose with schema. With this addition, Logic Apps customers can now interact with the token picker at design time. The tokens are generated from the XML schema provided by the customer. As a result, the XML document and its contained properties can easily be accessed, created, and manipulated in the workflow.

XML parse with schema


 


The XML Parse with schema action allows customers to parse XML data using an XSD file (an XML schema file). XSD files need to be uploaded to the Logic App’s schema artifacts or to an Integration Account. Once they have been uploaded, you need to enter your XML content, the source of the schema, and the name of the schema file. The XML content may either be provided inline or selected from previous operations in the workflow using the token picker.

Based on the provided XML schema, tokens such as the following will be available to subsequent operations upon saving the workflow:

[Screenshot: dynamic content tokens generated from the provided XML schema]

In the output, the Body field contains a wrapper ‘json’ property, so that additional properties may be provided besides the translated XML content, such as any parsing warning messages. To ignore the additional properties, you may pick the ‘json’ property instead.


You may also select the token for each individual property of the XML document, as these tokens are generated from the provided XML schema.


 


XML compose with schema


 


The XML Compose with schema action allows customers to generate XML data using an XSD file. XSD files need to be uploaded to the Logic App’s schema artifacts or to an Integration Account. Once they have been uploaded, you select the XSD file and enter the JSON root element or elements of your input XML schema. The JSON input elements are dynamically generated based on the selected XML schema.


You can also switch to Array and pass an entire array for Customers and another for Orders:

[Screenshot: Array mode, passing an entire array for Customers and another for Orders]

Please watch the following video for a complete demonstration of this new feature.


In collaboration with @David_Burg.


 

From Challenges to Triumph: The Judge Group’s AI Transformation with Microsoft 365 Copilot


Welcome back to Grow Your Business with Microsoft 365 Copilot, your monthly resource for actionable insights to harness the power of AI at small and medium-sized businesses.


 


In today’s fast-paced small business environment, staying competitive means embracing innovative solutions that drive efficiency and growth. Microsoft 365 Copilot is designed to meet these needs, offering businesses a powerful tool to enhance productivity and streamline operations. Many small and medium-sized businesses (SMBs) face challenges in managing their digital presence, customer interactions, and internal processes. These hurdles can impede growth and limit the ability to respond swiftly to market changes.


 


We spoke with John T. Battaglia from The Judge Group, a mid-sized global professional services firm specializing in business technology consulting, learning, staffing, and offshore solutions. Established in 1970, the company has grown into an international leader in their industry. With over 30 locations across the United States, Canada, and India, The Judge Group partners with some of the most prominent companies in the world, including over 60 of the Fortune 100. After encountering significant challenges with employees using their own AI tools, The Judge Group made a strategic decision to transition to Microsoft 365 Copilot for its security features, unmatched efficiency, and AI-driven insights. Since adopting Copilot, The Judge Group has become power users, fully integrating the tool into their daily operations. 


 


In the following sections, we will explore some of the specific challenges The Judge Group faced and how Microsoft 365 Copilot helped them achieve better outcomes.


 


Managing complex projects with AI


One of the pain points The Judge Group faced was managing complex projects across multiple stakeholders. They often found it challenging to keep everyone on the same page and ensure timely completion of tasks. Copilot helped by providing insights and simplified workflows to streamline project management. For example, during a presentation to a nonprofit company’s board, Copilot was used to gather key details of the meeting with ease, such as generating real-time summaries, tracking action items, and providing instant access to relevant documents in Microsoft Teams with simple prompts. This impressed the board, who initially thought Copilot’s capabilities were similar to those of its competitors. They were amazed by the added bonus of AI-driven insights, which helped prioritize tasks and allocate resources more efficiently—far exceeding expectations.


[GIF: prompting Copilot for meeting insights in Microsoft Teams]


 


Security and Compliance


The Judge Group experienced notable challenges early on in their AI journey, particularly in the areas of data security and compliance. Employees independently opted to use their own AI tools, leading to substantial security concerns, especially for the legal team. The previous tools posed significant risks to sensitive information, making it challenging for The Judge Group to ensure their data was protected and adhered to the strict guidelines and regulations in place to safeguard their operations and instill confidence in their stakeholders.


 



“We realized that [employees independently opting to use their own AI tools] was a massive scare for our legal team. That’s when we decided to go on this AI journey with Copilot to protect our information and improve our processes.”


John T. Battaglia

Switching to Microsoft 365 Copilot provided enhanced security features and compliance management. Not only did Microsoft 365 Copilot adopt established data privacy and security policies and configurations, but its compliance approach also ensured that all AI capabilities were developed, deployed, and evaluated in a compliant and ethical manner. This comprehensive strategy not only safeguarded their operations but also provided peace of mind across the organization.


 


Decisive steps were also taken to enhance their data security and privacy settings in SharePoint to mitigate “oversharing.” They implemented SharePoint Advanced Management (SAM) to further simplify governing and protecting SharePoint data. This allowed them to control access to specific areas of their site or documents when collaborating with vendors, clients, or customers outside their organization. They emphasized the importance of knowing who they were sharing with, determining if the recipients could re-share the content, and whether they needed to protect the content to prevent re-use or re-sharing.


Management, Adoption, and Training


Another challenge was ensuring compliance with responsible AI policies. To address this, they adopted a phased rollout approach for Copilot, starting with a pilot group and gradually expanding while providing targeted training and support. This strategy ensured that all team members adhered to the guidelines, improving the organization’s workflow and productivity. The phased rollout allowed The Judge Group to identify and address any compliance issues early on, ensuring a smooth implementation of Copilot.


 


The rollout consisted of several training initiatives, including role-specific training sessions tailored to the unique responsibilities and workflows of different teams within the organization. This approach ensured that each team member received relevant, practical training that directly applied to their daily tasks. The sessions also served as a collaborative space where team members could share their experiences and best practices and learn from one another. The emphasis on social learning within teams helped accelerate the learning curve and adoption rate, creating a supportive environment that encouraged continuous improvement.


 


In the pilot phase, The Judge Group prioritized roles such as project managers, IT administrators, and customer service representatives. Each role faced unique challenges and required specific training to ensure success. Project managers focused on using Copilot for task management and resource allocation, while IT administrators concentrated on compliance and security aspects. Customer service representatives received training in leveraging Copilot for customer interactions and support.


 


The key takeaways were the importance of making training relevant to the role, providing continuous support, and fostering a culture of collaboration and learning.


 


Approach to Adoption


The Judge Group meticulously crafted a strategic adoption plan with fun, engaging campaign materials. They equipped the adoption team with all the necessary skills, including “Train the Trainer” sessions, to ensure they could effectively support the campaign for a successful launch.


 

Executive sponsors and champions were also actively engaged and trained within priority persona groups. Consistent and active communication was maintained across these groups to keep everyone informed and energized. By capturing and tracking adoption and utilization metrics for campaign target users, The Judge Group was then able to measure the impact of the adoption campaign using the Copilot Dashboard, helping them make informed, data-driven decisions for optimization. The Copilot Dashboard was used to track various adoption metrics, such as “Total actions taken,” “Copilot assisted hours,” “Total actions count,” “Total active Copilot users,” and “meetings summarized by Copilot.” This allowed The Judge Group to monitor the progress and impact of their AI initiatives effectively.


The ongoing training series serves as a collaborative space where team members can share their experiences and best practices and learn from one another. This emphasis on social learning within teams accelerated the learning curve and adoption rate, fostering a supportive environment that encouraged continuous improvement.


Tip: To further support SMBs, the Microsoft Copilot Success Kit can help set companies up for success from day one, providing all the resources needed to guide customers through deployment and adoption.

 


Join us next month for another inspiring story on how Copilot is driving growth and creating opportunities for small to mid-sized businesses. Have you experienced growth with Microsoft 365 Copilot? We’d love to hear your story! Comment below to express your interest, and a team member will reach out to you.


 


Angela Byers


Microsoft


Senior Director, Copilot & Growth Marketing for SMB
Let’s connect on LinkedIn 




Take action! 


Don’t miss out on the opportunity to transform your business with Microsoft 365 Copilot. Learn more about how Copilot can help you achieve your goals, sign up for a trial, or contact our team for a personalized demo.


Try out some of the ways The Judge Group used Microsoft 365 Copilot



Become a Copilot power user!







Here’s the latest news on how Microsoft 365 Copilot can accelerate your business


Just announced: a new Forrester Total Economic Impact (TEI) study has been published, examining the impact of Microsoft 365 Copilot for SMBs.


September was an epic month for Copilot with the Wave Two announcement made by Satya Nadella and Jared Spataro. Here is a rundown of additional features announced for every SMB not covered in last month’s blog:



  • Expanded Copilot Academy Access: Microsoft Copilot Academy is now available to all users with a Microsoft 365 Copilot license, enabling them to upskill their Copilot proficiency without needing a paid Viva license.

  • New Admin and Management Capabilities: Expanded controls for managing the availability of Copilot in Teams meetings have been introduced. IT admins and meeting organizers can now select an ‘Off’ value for Copilot, providing more flexibility in managing Copilot usage.

  • Copilot in Word Enhancements: On blank documents, Copilot in Word will now provide one-click examples of prompts that quickly help users get started on a new document. This feature is rolling out in October for desktop and Mac.

  • Copilot User Enablement Toolkit: To help users quickly realize the full benefit of Microsoft 365 Copilot, a new toolkit has been developed. It includes communication templates and resources designed to inspire better Copilot engagement, with role-specific prompt examples and use cases. 

  • Check out the full details of the announcement and launch event.


Self-Service Purchase: Did you know that over 80% of information workers in small and medium-sized businesses already bring their own AI tools to work? Microsoft 365 Copilot can now be purchased directly by users who have a Microsoft 365 Business Basic, Standard, or Premium license. If you’d like to purchase Microsoft 365 Copilot for yourself to use at work, click “Add Copilot to your Microsoft plan” on the Copilot for Microsoft 365 or Compare All Microsoft 365 Plans product pages. 




Meet the team 


The monthly series, Grow Your Business with Copilot for Microsoft 365, is brought to you by the SMB Copilot marketing team at Microsoft. From entrepreneurs to coffee connoisseurs, they work passionately behind the scenes, sharing the magic of Copilot products with small and medium businesses everywhere. Always ready with a smile, a helping hand, and a clever campaign, they’re passionate about helping YOUR business grow!  


[Image: the Microsoft SMB Copilot product marketing team. From left to right: Angela Byers, Mariana Prudencio, Elif Algedik, Kayla Patterson, Briana Taylor, and Gabe Ho.]




About the blog


Welcome to “Grow Your Business with Microsoft 365 Copilot,” where we aim to inspire and delight you with insights and stories on how AI is changing the game for business. This monthly series is designed to empower small and mid-sized businesses to harness the power of AI at work. Each month, we feature scenarios where an SMB is using AI to transform, scale, and grow.

Trusted Signing is now open for individual developers to sign up in Public Preview!


In the realm of software development, code signing certificates play a pivotal role in ensuring the authenticity and integrity of code. For individual developers, obtaining these certificates involves a rigorous identity validation process. This blog explores the challenges individual developers face and how Trusted Signing can streamline the code signing process, with a focus on how its individual validation process contributes to this efficiency. 


 


Challenges faced by Individual Developers in Code Signing 


Individual developers often face unique challenges when it comes to code signing. Here are some key issues: 



  • Identity Validation process: This includes challenges such as obtaining the necessary documentation, undergoing lengthy verification processes, and dealing with differing requirements from various certificate authorities (CAs).



  • Private Key Theft or Misuse: Private keys are crucial for the code signing process and must be protected at all times. If these keys are stolen, attackers can use the compromised certificates to sign malware, distributing harmful software under a verified publisher name. It is expensive for individual developers to invest in the infrastructure and operations required to manage and store the keys. 



  • Complexity and Cost: The process of obtaining and managing code signing certificates can be complex and expensive, especially for individual developers and small teams. This complexity can lead to incomplete signing or not signing at all. 



  • Integration with DevOps: Code signing needs to be integrated with DevOps processes, tool chains, and automation workflows. Ensuring that access to private keys is easy, seamless, and secure is a significant challenge. 



  • Code Integrity and Security: While code signing ensures the integrity of software, it does not guarantee that the signed code is free from vulnerabilities. Hackers can exploit unregulated access to code signing systems to get malicious code signed and distributed. 


 


What is the Trusted Signing service?  


Trusted Signing is a comprehensive code signing service backed by a Microsoft-managed certification authority. The identity validation process is designed to be robust. Certificates are issued from Microsoft-managed CAs and are subsequently protected and serviced, with seamless integration into leading developer toolsets. This eliminates the need for individual developers to invest in additional infrastructure and operations.


 


The Importance of Identity Validation 


Identity validation is crucial for securing code signing certificates. It ensures that the individual requesting the certificate is indeed who they claim to be, thereby preventing malicious actors from distributing harmful code under the guise of legitimate software. This process builds trust among users and stakeholders, as they can be confident that the signed code is authentic and has not been tampered with. 


 


Process for Identity Validation with Trusted Signing 


Trusted Signing utilizes Microsoft Entra Verified ID (VID) for identity validation of individual developers. This process ensures that developers receive a VID, which is accessible through the Authenticator app, offering enhanced security, a streamlined process, and seamless integration with Microsoft Entra.  


 


The verification process involves the following steps:  



  1. Submission of Government-Issued Photo ID: The first requirement is to provide a legible copy of a currently valid government-issued photo ID. This document must include the same name and address as on the certificate order. 

  2. Biometric/selfie check: Along with the photo ID, applicants need to submit a selfie. This step ensures that the person in the ID matches the individual applying for the certificate. 

  3. Additional Verification Steps: If the address is missing on the government-issued ID card, additional documents will be required to verify the applicant’s address.


This is how a successfully procured VID would appear in the Azure portal.

[Screenshot: a successfully issued Verified ID in the Azure portal]

Best Practices for a Smooth Validation Process 


To ensure a smooth and successful identity validation process, individual developers should adhere to the following best practices: 



  • Accurate Documentation: Ensure that all submitted documents are accurate and up-to-date and follow the guidelines. 



  • Stay Informed: Keep abreast of any changes in the validation requirements or processes of the CA you are working with. 



Costs of using Trusted Signing service 


Trusted Signing offers two pricing tiers, starting at $9.99/month, and you can pick a tier based on your usage. Both tiers are designed to provide optimal cost efficiency and cater to various signing needs. You can find the pricing details here. The costs for identity validation, certificate lifecycle management, secure key storage, and signing are all included in a single SKU, ensuring accessibility and predictable expenses.


 


Conclusion 


Identity validation is a critical step for individual developers seeking code signing certificates. By understanding the process, preparing in advance, and following best practices, developers can successfully navigate the validation process and secure their code signing certificates with Trusted Signing. This not only enhances the security of their software but also builds trust with users and stakeholders.