This article is contributed. See the original author and article here.
Virtual Machines deployed in Azure used to have Default Outbound Internet Access. Until today, this has allowed virtual machines to connect to resources on the internet (including public endpoints of Azure PaaS services) even if cloud administrators have not explicitly configured any outbound connectivity method for their virtual machines. Implicitly, Azure's network stack performed source network address translation (SNAT) with a public IP address provided by the platform.
As part of its commitment to increasing the security of customer workloads, Microsoft will deprecate Default Outbound Internet Access on 30 September 2025 (see the official announcement here). As of this day, customers will need to explicitly configure an outbound connectivity method if their virtual machine requires internet connectivity. Customers will have the following options:
Deploy a Network Virtual Appliance (NVA) to perform SNAT, such as Azure Firewall, and route internet-bound traffic to the NVA before egressing to the internet.
Associate a dedicated Public IP address with the virtual machine's network interface.
Deploy a NAT Gateway and attach it to the subnet.
Use an Azure Load Balancer with outbound rules.
Today, customers can start preparing their workloads for the updated platform behavior. By setting the property defaultOutboundAccess to false during subnet creation, VMs deployed to this subnet will not benefit from the conventional default outbound access method but adhere to the new conventions. Subnets with this configuration are also referred to as 'private subnets'.
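As a sketch, a private subnet can be declared at creation time by setting this property, for example via the AzAPI Terraform provider (resource names, API version, and the address prefix below are illustrative assumptions, not taken from a specific deployment):

```terraform
# Sketch: a subnet without default outbound access (a 'private subnet').
# Names, the parent_id reference, API version and address prefix are assumptions.
resource "azapi_resource" "subnet_private" {
  type      = "Microsoft.Network/virtualNetworks/subnets@2023-09-01"
  name      = "snet-private"
  parent_id = azurerm_virtual_network.demo.id

  body = {
    properties = {
      addressPrefix         = "10.0.0.0/24"
      defaultOutboundAccess = false # opt out of default outbound internet access
    }
  }
}
```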
In this article, we demonstrate (a) the limited connectivity of virtual machines deployed to private subnets. We also explore different options to (b) route traffic from these virtual machines to the public internet and to (c) optimize the communication path for management and data plane operations targeting public endpoints of Azure services.
We will be focusing on connectivity with Azure services' public endpoints. If you use Private Endpoints to expose services to your virtual network instead, routing in a private subnet remains unchanged.
Overview
The following architecture diagram presents the sample setup that we’ll use to explore the network traffic with different components.
The setup comprises the following components:
A virtual network with a private subnet (i.e., a subnet that does not offer default outbound connectivity to the internet).
A virtual machine (running Ubuntu Linux) connected to this subnet.
A Key Vault including a stored secret as sample Azure PaaS service to explore Azure-bound connectivity.
A Log Analytics Workspace, storing audit information (i.e., metadata of all control and data plane operations) from that Key Vault.
A Bastion Host to securely connect to the virtual machine via SSH.
In the following sections, we will integrate the following components to control the network traffic and explore the effects on the communication flow:
An Azure Firewall as central Network Virtual Appliance to route outbound internet traffic.
An Azure Load Balancer with Outbound Rules to route Azure-bound traffic through the Azure Backbone (we’ll use the Azure Resource Manager in this example).
A Service Endpoint to route data plane operations directly to the service.
We'll use the following examples to illustrate the communication paths:
A simple HTTP call to ifconfig.io which (if successful) will return the public IP address used to make calls to public internet resources.
An invocation of the Azure CLI to get Key Vault metadata (az keyvault show), which (if successful) will return information about the Key Vault resource. This call to the Azure Resource Manager represents a management plane operation.
An invocation of the Azure CLI to get a secret stored in the Key Vault (az keyvault secret show), which (if successful) will return a secret stored in the Key Vault. This represents a data plane operation.
A query to the Key Vault’s audit log (stored in the Log Analytics Workspace), to reveal the IP address of the caller for management and data plane operations.
Prerequisites
The repository Azure-Samples/azure-networking_private-subnet-routing on GitHub contains all required Infrastructure as Code assets, allowing you to easily reproduce the setup and exploration in your own Azure subscription.
jq to parse and process JSON input (find installation instructions here)
Git repository
Clone the Git repository and change into its repository root.
$ git clone https://github.com/Azure-Samples/azure-networking_private-subnet-routing
$ cd azure-networking_private-subnet-routing
Azure subscription
Log in to your Azure subscription via the Azure CLI and ensure you have access to your subscription.
$ az login
$ az account show
Getting ready: Deploy infrastructure.
We kick off our journey by deploying the infrastructure depicted in the architecture diagram above; we’ll do that using the IaC (Infrastructure as Code) assets from the repository.
Open the file terraform.tfvars in your favorite code editor and adjust the values of the variables location (the region to which all resources will be deployed) and prefix (the shared name prefix for all resources). Also, don't forget to provide login credentials for your VM by setting values for admin_username and admin_password.
Set the environment variable ARM_SUBSCRIPTION_ID to point Terraform to the subscription you are currently logged on to.
$ export ARM_SUBSCRIPTION_ID=$(az account show --query "id" -o tsv)
Using your CLI and Terraform, deploy the demo setup:
$ terraform init
Initializing the backend…
[…]
Terraform has been successfully initialized!
$ terraform apply
[…]
Do you want to perform these actions?
Terraform will perform the actions described above.
Only ‘yes’ will be accepted to approve.
Enter a value: yes
[…]
Apply complete!
[…]
☝️ In case you are not familiar with Terraform, this tutorial might be insightful for you.
Explore the deployed resources in the Azure Portal. Note that although the network infrastructure components shown in the architecture drawing above are already deployed, they are not yet configured for use from the Virtual Machine:
The Azure Firewall is deployed, but the route table attached to the VM subnet does not (yet) have any route directing traffic to the firewall (we will add this in Scenario 2).
The Azure Load Balancer is already deployed, but the virtual machine is not yet a member of its backend pool (we will change this in Scenario 3).
Log in to the Virtual Machine using the Bastion Host.
At this point, our virtual machine is deployed to a private subnet. As we do not have any outbound connectivity method set up, all calls to public internet resources as well as to the public endpoints of Azure resources will time out.
Test 1: Call to public internet
$ curl ifconfig.io --connect-timeout 10
curl: (28) Connection timed out after 10004 milliseconds
Test 2: Call to Azure Resource Manager
$ curl https://management.azure.com/ --connect-timeout 10
curl: (28) Connection timed out after 10001 milliseconds
Test 3: Call to Azure Key Vault (data plane)
$ curl https://no-doa-demo-kv.vault.azure.net/ --connect-timeout 10
curl: (28) Connection timed out after 10002 milliseconds
Scenario 2: Route all traffic through Azure Firewall.
Typically, customers deploy a central Firewall in their network to ensure all outbound traffic is consistently SNATed through the same public IPs and centrally controlled and governed. In this scenario, we therefore modify our existing route table and add a default route (i.e., for CIDR range 0.0.0.0/0), directing all outbound traffic to the private IP of our Azure Firewall.
Add Firewall and routes.
Browse to network.tf and uncomment the definition of azurerm_route.default-to-firewall.
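For orientation, the uncommented route will look roughly like the following sketch (the resource group and route table references are assumptions; the firewall's private IP matches the one shown in the effective routes later in this article):

```terraform
# Sketch: default route sending all outbound traffic to the Azure Firewall.
# Resource group / route table references are assumptions.
resource "azurerm_route" "default-to-firewall" {
  name                   = "default-to-firewall"
  resource_group_name    = azurerm_resource_group.demo.name
  route_table_name       = azurerm_route_table.demo.name
  address_prefix         = "0.0.0.0/0"
  next_hop_type          = "VirtualAppliance"
  next_hop_in_ip_address = "10.254.1.4" # the firewall's private IP
}
```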
Update your deployment.
$ terraform apply
Terraform will perform the following actions:
# azurerm_route.default-to-firewall will be created
[…]
Test 1: Call to public internet, revealing that outbound calls are routed through the firewall’s public IP.
$ curl ifconfig.io
4.184.163.38
Now that you have access to the internet, install the Azure CLI.
Test 2: Call to Azure Resource Manager (you might need to change the Key Vault name if you changed the prefix in your terraform.tfvars)
$ az keyvault show --name "no-doa-demo-kv" -o table
Location            Name            ResourceGroup
------------------  --------------  --------------
germanywestcentral  no-doa-demo-kv  no-doa-demo-rg
Test 3: Call to Azure Key Vault (data plane)
$ az keyvault secret show --vault-name "no-doa-demo-kv" --name message -o table
ContentType    Name     Value
-------------  -------  -------------
               message  Hello, World!
Query Key Vault Audit Log.
☝️ The ingestion of audit logs into the Log Analytics Workspace might take some time. Please make sure to wait for up to ten minutes before starting to troubleshoot.
Get Application ID of VM’s system-assigned managed identity:
$ ./scripts/vm_get-app-id.sh
AppId for Principal ID f889ca69-d4b0-45a7-8300-0a88f957613e is: 8aa9503c-ee91-43ee-96c7-49dc005ebecc
Go to Log Analytics Workspace, run the following query.
AzureDiagnostics
| where identity_claim_appid_g == "[Replace with App ID!]"
| project TimeGenerated, Resource, OperationName, CallerIPAddress
| order by TimeGenerated desc
Alternatively, run the prepared script kv_query-audit.sh:
🗣 Note that both calls to the Key Vault succeed as they are routed through the central Firewall; both requests (to Azure Management plane and Key Vault data plane) hit their endpoints with the Firewall’s public IP.
Scenario 3: Bypass Firewall for traffic to Azure management plane.
At this point, all internet and Azure-bound traffic to public endpoints is routed through the Azure Firewall. Although this allows you to centrally control all traffic, you might have good reasons to offload some communication from this component by routing traffic targeting a specific IP address range through a different component for SNAT, for example to optimize latency or to reduce load on the firewall for communication with well-known hosts.
☝️ As mentioned before, dedicated Public IP addresses, NAT Gateways, and Azure Load Balancers are alternative options for configuring SNAT for outbound access. You can find a detailed discussion of all options here.
In this scenario, we assume that we want network traffic to the Azure management plane to bypass the central Firewall (we pick this service for demonstration purposes here). Instead, we want to use the SNAT capabilities of an Azure Load Balancer with outbound rules to route traffic to the public endpoints of the Azure Resource Manager. We can achieve this by adding a more-specific route to the route table, directing traffic targeting the corresponding service tag (which is like a symbolic name comprising a set of IP ranges) to a different destination.
The integration of outbound load balancing rules into the communication path works differently than integrating a Network Virtual Appliance: While we defined the latter by setting the NVA's private IP address as next hop in our user-defined route in Scenario 2, we only integrate the Load Balancer implicitly into our network flow, by specifying Internet as next hop in our route table. (Essentially, next hop 'Internet' instructs Azure to use either (a) the Public IP attached to the VM's NIC, (b) the Load Balancer associated with the VM's NIC with the help of an outbound rule, or (c) a NAT Gateway attached to the subnet the VM's NIC is connected to.) Therefore, we need to take two steps to send traffic through our Load Balancer:
Deploy a more-specific user-defined route for the respective service tag.
Add our VM’s NIC to a load balancer’s backend pool with an outbound load balancing rule.
In our scenario, we'll do this for the service tag AzureResourceManager, which (amongst others) also comprises the IP addresses for management.azure.com, the endpoint for the Azure control plane. This will affect the az keyvault show operation to retrieve the Key Vault's metadata.
Browse to network.tf and uncomment the definition of azurerm_route.azurerm_2_internet.
☝️ Note that this route specifies Internet (!) as next hop type for any communication targeting IPs of service tag AzureResourceManager.
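Roughly, such a route can use the service tag itself as the address prefix (a sketch; the resource group and route table references are assumptions):

```terraform
# Sketch: more-specific route for the AzureResourceManager service tag.
# Traffic matching the tag's IP ranges egresses via next hop type 'Internet'.
resource "azurerm_route" "azurerm_2_internet" {
  name                = "azurerm-to-internet"
  resource_group_name = azurerm_resource_group.demo.name
  route_table_name    = azurerm_route_table.demo.name
  address_prefix      = "AzureResourceManager" # service tag as address prefix
  next_hop_type       = "Internet"
}
```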
Update your deployment.
$ terraform apply
Terraform will perform the following actions:
# azurerm_route.azurerm_2_internet will be created
[…]
(optional) Repeat test 1 (call to public internet) and test 3 (call to Key Vault's data plane) to confirm behavior remains unchanged.
$ curl ifconfig.io
4.184.163.38
$ az keyvault secret show --vault-name "no-doa-demo-kv" --name message -o table
ContentType    Name     Value
-------------  -------  -------------
               message  Hello, World!
Test 2: Call to Azure Resource Manager
$ az keyvault show --name "no-doa-demo-kv" -o table
: Failed to establish a new connection: [Errno 101] Network is unreachable
🗣 While the call to the Key Vault data plane succeeds, the call to the Resource Manager fails: Route azurerm_2_internet directs traffic to next hop type Internet. However, as the VM's subnet is private, defining the outbound route is not sufficient; we still need to attach the VM's NIC to the Load Balancer's outbound rule.
Instruct Azure to send internet-bound traffic through Outbound Load Balancer
Add the virtual machine's NIC to a backend pool linked with an outbound load balancing rule.
Browse to vm.tf and uncomment the definition of azurerm_network_interface_backend_address_pool_association.vm-nic_2_lb.
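The association resource looks roughly like this (a sketch; the IP configuration name and the referenced NIC and backend pool resources are assumptions):

```terraform
# Sketch: attach the VM's NIC to the load balancer's backend pool so that
# outbound rules apply to traffic using next hop type 'Internet'.
resource "azurerm_network_interface_backend_address_pool_association" "vm-nic_2_lb" {
  network_interface_id    = azurerm_network_interface.vm.id
  ip_configuration_name   = "internal" # must match the NIC's IP configuration
  backend_address_pool_id = azurerm_lb_backend_address_pool.outbound.id
}
```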
Update your deployment.
$ terraform apply
Terraform will perform the following actions:
# azurerm_network_interface_backend_address_pool_association.vm-nic_2_lb will be created
[…]
(optional) Repeat test 1 (call to public internet) and test 3 (call to Key Vault's data plane) to confirm behavior remains unchanged.
Repeat Test 2: Call to Azure Resource Manager
$ az keyvault show --name "no-doa-demo-kv" -o table
Location            Name            ResourceGroup
------------------  --------------  --------------
germanywestcentral  no-doa-demo-kv  no-doa-demo-rg
🗣 After adding the NIC to the backend pool of the outbound load balancer, routes with next hop type Internet will use the load balancer for outbound traffic. As we specified Internet as next hop type for AzureResourceManager, the VaultGet operation now hits the management plane from the load balancer's public IP. (Communication with the Key Vault data plane remains unchanged; the SecretGet operation still hits the Key Vault from the Firewall's public IP.)
☝️ We explored this path for the platform-defined service tag AzureResourceManager. However, it's equally possible to define this communication path for your self-defined IP addresses or ranges.
Scenario 4: Add ‘shortcut’ for traffic to Key Vault data plane.
For communication with many platform services, Azure offers customers Virtual Network Service Endpoints to enable an optimized connectivity method that keeps traffic on its backbone network. Customers can use this, for example, to offload traffic to platform services from their network resources and to increase security by enabling access restrictions on their resources.
☝️ Note that service endpoints are not specific to individual resource instances; they enable optimized connectivity for all deployments of this resource type (across different subscriptions, tenants, and customers). You may want to deploy complementing firewall rules on your resource as an additional layer of security.
In this scenario, we'll deploy a service endpoint for Azure Key Vault. We'll see that the platform will no longer SNAT traffic to our Key Vault's data plane but use the VM's private IP for communication.
Deploy Service Endpoint for Key Vault
Browse to network.tf and uncomment the definition of serviceEndpoints in azapi_resource.subnet-vm.
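Conceptually, the uncommented block adds a service endpoint for Key Vault to the subnet definition, roughly like this sketch (surrounding arguments are omitted, and the address prefix is illustrative):

```terraform
# Sketch: enable the Key Vault service endpoint on the VM subnet (AzAPI notation).
resource "azapi_resource" "subnet-vm" {
  # ...type, name, parent_id and other properties as before...
  body = {
    properties = {
      addressPrefix = "10.0.0.0/24" # illustrative
      serviceEndpoints = [
        { service = "Microsoft.KeyVault" }
      ]
    }
  }
}
```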
Update your deployment.
$ terraform apply
Terraform will perform the following actions:
# azapi_resource.subnet-vm will be updated in-place
[…]
(optional) Repeat test 1 (call to public internet) and test 2 (call to Azure management plane) to confirm behavior remains unchanged.
Test 3: Call to Azure Key Vault (data plane)
$ az keyvault secret show --vault-name "no-doa-demo-kv" --name message -o table
ContentType    Name     Value
-------------  -------  -------------
               message  Hello, World!
🗣 After deploying a service endpoint, we see that traffic hits the Azure Key Vault data plane from the virtual machine's private IP address, i.e., it does not pass through the Firewall or the outbound load balancer.
Inspect NIC’s effective routes.
Finally, let's explore how the different connectivity methods show up in the virtual machine's NIC's effective routes. Use one of the following options to display them:
In the Azure portal, browse to the VM's NIC and open 'Effective routes' in the 'Help' section.
Alternatively, run the provided script (note that, for brevity, the script will only show the first IP address prefix of each route in the output).
$ ./scripts/vm-nic_show-routes.sh
Source    FirstIpAddressPrefix    NextHopType                    NextHopIpAddress
--------  ----------------------  -----------------------------  ------------------
Default   10.0.0.0/16             VnetLocal
User      191.234.158.0/23        Internet
Default   0.0.0.0/0               Internet
Default   191.238.72.152/29       VirtualNetworkServiceEndpoint
User      0.0.0.0/0               VirtualAppliance               10.254.1.4
🗣 See that…
…the system-defined route 191.238.72.152/29 to VirtualNetworkServiceEndpoint is sending traffic to the Azure Key Vault data plane via the service endpoint.
…the user-defined route 191.234.158.0/23 to Internet is implicitly sending traffic to AzureResourceManager via the Outbound Load Balancer (by defining Internet as next hop type for a VM attached to an outbound load balancer rule).
…the user-defined route 0.0.0.0/0 to VirtualAppliance (10.254.1.4) is sending all remaining internet-bound traffic to the Firewall.
Why XML?
XML is widely used across various industries due to its versatility and ability to structure complex data. Some key industries that use XML:
Finance: XML is used for financial data interchange, such as in SWIFT messages for international banking transactions and in various financial reporting standards.
Healthcare: XML is used in healthcare for data exchange standards like HL7, which facilitates the sharing of clinical and administrative data between healthcare providers.
Supply Chain: XML is used in supply chain management for data interchange, such as in Electronic Data Interchange (EDI) standards.
Government: Multiple government entities use XML for various data management and reporting tasks.
Legal: XML is used in the legal industry to organize and manage documents, making it easier to find and manage information.
To continuously support our customers in these industries, Microsoft has always provided strong capabilities for integration with XML workloads. For instance, XML was a first-class citizen in BizTalk Server. Now, despite the pervasiveness of the JSON format, we continue working to make Azure Logic Apps the best alternative for our BizTalk Server customers and customers using XML-based workloads.
The XML Operations connector
We have recently added two actions to the XML Operations connector: Parse with schema and Compose with schema. With this addition, Logic Apps customers can now interact with the token picker at design time. The tokens are generated from the XML schema provided by the customer. As a result, the XML document and its contained properties can easily be accessed, created, and manipulated in the workflow.
XML parse with schema
The XML parse with schema action allows customers to parse XML data using an XSD file (an XML schema file). XSD files need to be uploaded to the Logic App's schema artifacts or an Integration account. Once they have been uploaded, you need to enter your XML content, the source of the schema, and the name of the schema file. The XML content may either be provided inline or selected from previous operations in the workflow using the token picker.
Based on the provided XML schema, tokens such as the following will be available to subsequent operations upon saving the workflow:
In the output, the Body field contains a wrapper 'json' property, so that additional properties, such as any parsing warning messages, may be provided alongside the translated XML content. To ignore the additional properties, you may pick the 'json' property instead.
You may also select the token for each individual property of the XML document, as these tokens are generated from the provided XML schema.
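For illustration, consider a minimal document (the element names here are invented for this example and are not taken from the demo): given a schema declaring a Customer element with Name and City children, parsing the following document would surface Name and City as individual tokens in the designer.

```xml
<!-- Illustrative input document; element names are assumptions for this example -->
<Customer>
  <Name>Contoso</Name>
  <City>Berlin</City>
</Customer>
```

The parsed output would then expose these values as JSON properties, selectable via the token picker in subsequent operations.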
XML compose with schema
The XML compose with schema action allows customers to generate XML data using an XSD file. XSD files need to be uploaded to the Logic App's schema artifacts or an Integration account. Once they have been uploaded, you should select the XSD file and enter the JSON root element or elements of your input XML schema. The JSON input elements will be dynamically generated based on the selected XML schema.
You can also switch to Array and pass an entire array for Customers and another for Orders:
Please watch the following video for a complete demonstration of this new feature.
Welcome back to Grow Your Business with Microsoft 365 Copilot, your monthly resource for actionable insights to harness the power of AI at small and medium-sized businesses.
In today’s fast-paced small business environment, staying competitive means embracing innovative solutions that drive efficiency and growth. Microsoft 365 Copilot is designed to meet these needs, offering businesses a powerful tool to enhance productivity and streamline operations. Many small and medium-sized businesses (SMBs) face challenges in managing their digital presence, customer interactions, and internal processes. These hurdles can impede growth and limit the ability to respond swiftly to market changes.
We spoke with John T. Battaglia from The Judge Group, a mid-sized global professional services firm specializing in business technology consulting, learning, staffing, and offshore solutions. Established in 1970, the company has grown into an international leader in their industry. With over 30 locations across the United States, Canada, and India, The Judge Group partners with some of the most prominent companies in the world, including over 60 of the Fortune 100. After encountering significant challenges with employees using their own AI tools, The Judge Group made a strategic decision to transition to Microsoft 365 Copilot for its security features, unmatched efficiency, and AI-driven insights. Since adopting Copilot, The Judge Group has become power users, fully integrating the tool into their daily operations.
In the following sections, we will explore some of the specific challenges The Judge Group faced and how Microsoft 365 Copilot helped them achieve better outcomes.
Managing complex projects with AI
One of the pain points The Judge Group faced was managing complex projects across multiple stakeholders. They often found it challenging to keep everyone on the same page and ensure timely completion of tasks. Copilot provided insights and simplified workflows to streamline project management. For example, during a presentation to a nonprofit company's board, Copilot was used in Microsoft Teams with simple prompts to gather key details of the meeting with ease, such as generating real-time summaries, tracking action items, and providing instant access to relevant documents. This impressed the board, who had initially assumed Copilot's capabilities were similar to those of its competitors. They were amazed by the added bonus of AI-driven insights, which helped prioritize tasks and allocate resources more efficiently, far exceeding expectations.
GIF showing how you can easily prompt Copilot to provide meeting insights
Security and Compliance
The Judge Group experienced notable challenges early on in their AI journey, particularly in the areas of data security and compliance. Employees independently opted to use their own AI tools, leading to substantial security concerns, especially for the legal team. The previous tools posed significant risks to sensitive information, making it challenging for The Judge Group to ensure their data was protected and adhered to the strict guidelines and regulations they have in place to safeguard their operations and instill confidence in their stakeholders.
“We realized that [employees independently opting to use their own AI tools] was a massive scare for our legal team. That’s when we decided to go on this AI journey with Copilot to protect our information and improve our processes.”
Switching to Microsoft 365 Copilot provided enhanced security features and compliance management. Not only did Microsoft 365 Copilot adopt established data privacy and security policies and configurations, but their compliance approach also ensured that all AI capabilities were developed, deployed, and evaluated in a compliant and ethical manner. This comprehensive strategy not only safeguarded their operations but also provided peace of mind across the organization.
Decisive steps were also taken to enhance their data security and privacy settings in SharePoint to mitigate “oversharing.” They implemented SharePoint Advanced Management (SAM), to further simplify governing and protecting SharePoint data. This allowed them to control access to specific areas of their site or documents when collaborating with vendors, clients, or customers outside their organization. They emphasized the importance of knowing who they were sharing with, determining if the recipients could re-share the content, and whether they needed to protect the content to prevent re-use or re-sharing.
Management, Adoption, and Training
Another challenge was ensuring compliance with responsible AI policies. To address this, they adopted a phased rollout approach for Copilot, starting with a pilot group and gradually expanding while providing targeted training and support. This strategy ensured that all team members adhered to the guidelines, improving the organization’s workflow and productivity. The phased rollout allowed The Judge Group to identify and address any compliance issues early on, ensuring a smooth implementation of Copilot.
The rollout consisted of several training initiatives, including role-specific training sessions tailored to the unique responsibilities and workflows of different teams within the organization. This approach ensured that each team member received relevant and practical training that directly applied to their daily tasks and served as a collaborative space where team members could share their experiences, best practices, and learn from one another. The emphasis on social learning within teams helped accelerate the learning curve and adoption rate, creating a supportive environment that encouraged continuous improvement.
In the pilot phase, The Judge Group prioritized roles such as project managers, IT administrators, and customer service representatives. Each role faced unique challenges and required specific training to ensure success. Project managers focused on using Copilot for task management and resource allocation, while IT administrators concentrated on compliance and security aspects. Customer service representatives received training in leveraging Copilot for customer interactions and support.
The key takeaways were the importance of making the training relevant to the role, continuous support, and fostering a culture of collaboration and learning.
Approach to Adoption
The Judge Group meticulously crafted a strategic adoption plan, with fun engaging campaign materials. They equipped the adoption team with all the necessary skills including “Train the Trainer” sessions to ensure they could effectively support the campaign for a successful launch.
Executive sponsors and champions were also actively engaged and trained within priority persona groups. Consistent and active communication was maintained across these groups to keep everyone informed and energized. By capturing and tracking adoption and utilization metrics for campaign target users, The Judge Group was able to measure the impact of the adoption campaign with the Copilot Dashboard and make informed, data-driven decisions for optimization. The Copilot Dashboard was used to track various adoption metrics, such as "Total actions taken," "Copilot assisted hours," "Total actions count," "Total active Copilot users," and "meetings summarized by Copilot." This allowed The Judge Group to monitor the progress and impact of their AI initiatives effectively.
The ongoing training series serves as a collaborative place where team members can share their experiences and best practices and learn from one another. This emphasis on social learning within teams accelerated the learning curve and adoption rate, fostering a supportive environment that encouraged continuous improvement.
Tip: To further support SMBs, the Microsoft Copilot Success Kit can help set companies up for success from day one—providing all the resources needed to guide customers through deployment and adoption.
Join us next month for another inspiring story on how Copilot is driving growth and creating opportunities for small to mid-sized businesses. Have you experienced growth with Microsoft 365 Copilot? We’d love to hear your story! Comment below to express your interest, and a team member will reach out to you.
Don’t miss out on the opportunity to transform your business with Microsoft 365 Copilot. Learn more about how Copilot can help you achieve your goals, sign up for a trial, or contact our team for a personalized demo.
Try out some of the ways The Judge Group used Microsoft 365 Copilot
(New every week) Try a new proven Copilot productivity tip to get time back for more important work
(New) Get your one-stop guide to successfully deploy Copilot with the Copilot Success Kit
See the Mechanics video that goes over simple steps to check your data permissions are in place
Join the SMB Copilot Community to learn directly from Microsoft experts and your peers
Here’s the latest news on how Microsoft 365 Copilot can accelerate your business
Just announced: a new Forrester Total Economic Impact (TEI) study was published, examining the impact of Microsoft 365 Copilot.
September was an epic month for Copilot with the Wave Two announcement made by Satya Nadella and Jared Spataro. Here is a rundown of additional features announced for every SMB not covered in last month’s blog:
Expanded Copilot Academy Access: Microsoft Copilot Academy is now available to all users with a Microsoft 365 Copilot license, enabling them to upskill their Copilot proficiency without needing a paid Viva license.
New Admin and Management Capabilities: Expanded controls for managing the availability of Copilot in Teams meetings have been introduced. IT admins and meeting organizers can now select an ‘Off’ value for Copilot, providing more flexibility in managing Copilot usage.
Copilot in Word Enhancements: On blank documents, Copilot in Word will now provide one-click examples of prompts that quickly help users get started on a new document. This feature is rolling out in October for desktop and Mac.
Copilot User Enablement Toolkit: To help users quickly realize the full benefit of Microsoft 365 Copilot, a new toolkit has been developed. It includes communication templates and resources designed to inspire better Copilot engagement, with role-specific prompt examples and use cases.
Self-Service Purchase: Did you know that over 80% of information workers in small and medium-sized businesses already bring their own AI tools to work? Microsoft 365 Copilot can now be purchased directly by users who have a Microsoft 365 Business Basic, Standard, or Premium license. If you’d like to purchase Microsoft 365 Copilot for yourself to use at work, click “Add Copilot to your Microsoft plan” on the Copilot for Microsoft 365 or Compare All Microsoft 365 Plans product pages.
Meet the team
The monthly series, Grow Your Business with Copilot for Microsoft 365, is brought to you by the SMB Copilot marketing team at Microsoft. From entrepreneurs to coffee connoisseurs, they work passionately behind the scenes, sharing the magic of Copilot products with small and medium businesses everywhere. Always ready with a smile, a helping hand, and a clever campaign, they’re passionate about helping YOUR business grow!
Microsoft SMB Copilot Product Marketing Team (from left to right): Angela Byers, Mariana Prudencio, Elif Algedik, Kayla Patterson, Briana Taylor, and Gabe Ho.
About the blog
Welcome to “Grow Your Business with Microsoft 365 Copilot,” where we aim to inspire and delight you with insights and stories on how AI is changing the game for business. This monthly series is designed to empower small and mid-sized businesses to harness the power of AI at work. Each month, we feature scenarios where an SMB is using AI to transform, scale, and grow.
In the realm of software development, code signing certificates play a pivotal role in ensuring the authenticity and integrity of code. For individual developers, obtaining these certificates involves a rigorous identity validation process. This blog explores the challenges individual developers face and how Trusted Signing can streamline the code signing process, with a focus on how its individual validation process contributes to this efficiency.
Challenges faced by Individual Developers in Code Signing
Individual developers often face unique challenges when it comes to code signing. Here are some key issues:
Identity Validation process: This includes challenges such as obtaining the necessary documentation, undergoing lengthy verification processes, and dealing with differing requirements from various CAs.
Private Key Theft or Misuse: Private keys are crucial for the code signing process and must be protected at all times. If these keys are stolen, attackers can use the compromised certificates to sign malware, distributing harmful software under a verified publisher name. It is expensive for individual developers to invest in the infrastructure and operations required to manage and store the keys.
Complexity and Cost: The process of obtaining and managing code signing certificates can be complex and expensive, especially for individual developers and small teams. This complexity can lead to incomplete signing or not signing at all.
Integration with DevOps: Code signing needs to be integrated with DevOps processes, tool chains, and automation workflows. Ensuring that access to private keys is easy, seamless, and secure is a significant challenge.
Code Integrity and Security: While code signing ensures the integrity of software, it does not guarantee that the signed code is free from vulnerabilities. Hackers can exploit unregulated access to code signing systems to get malicious code signed and distributed.
What is the Trusted Signing service?
Trusted Signing is a comprehensive code signing service backed by a Microsoft-managed certification authority. The identity validation process is designed to be robust. Certificates are issued from Microsoft-managed CAs, protected throughout their lifecycle, and serviced through seamless integration with leading developer toolsets. This eliminates the need for individual developers to invest in additional infrastructure and operations.
The Importance of Identity Validation
Identity validation is crucial for securing code signing certificates. It ensures that the individual requesting the certificate is indeed who they claim to be, thereby preventing malicious actors from distributing harmful code under the guise of legitimate software. This process builds trust among users and stakeholders, as they can be confident that the signed code is authentic and has not been tampered with.
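The tamper-evidence that signing provides can be pictured with a short sketch. The example below is purely conceptual: it uses Python’s standard `hashlib` and `hmac` modules as a stand-in for real public-key signing (Trusted Signing actually issues X.509 certificates and uses asymmetric keys), and the key material and helper names are assumptions for illustration:

```python
import hashlib
import hmac

# Illustrative only: real code signing uses asymmetric keys and X.509
# certificates; an HMAC over the file's digest stands in for a signature here.
SIGNING_KEY = b"developer-private-key"  # hypothetical key material

def sign(code: bytes) -> str:
    """Produce a tamper-evident tag over the code's SHA-256 digest."""
    digest = hashlib.sha256(code).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).hexdigest()

def verify(code: bytes, signature: str) -> bool:
    """Recompute the tag and compare; any byte change breaks verification."""
    return hmac.compare_digest(sign(code), signature)

original = b"print('hello, world')"
tag = sign(original)
assert verify(original, tag)                 # untampered code verifies
assert not verify(b"print('malware')", tag)  # modified code fails
```

The same property holds for real signatures: a verifier recomputes the file’s digest and checks it against the signed value, so any modification after signing is detected.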
Process for Identity Validation with Trusted Signing
Trusted Signing utilizes Microsoft Entra Verified ID (VID) for identity validation of individual developers. This process ensures that developers receive a VID, which is accessible through the Authenticator app, offering enhanced security, a streamlined process, and seamless integration with Microsoft Entra.
The verification process involves the following steps:
Submission of Government-Issued Photo ID: The first requirement is to provide a legible copy of a currently valid government-issued photo ID. This document must include the same name and address as on the certificate order.
Biometric/selfie check: Along with the photo ID, applicants need to submit a selfie. This step ensures that the person in the ID matches the individual applying for the certificate.
Additional Verification Steps: If the address is missing on the government-issued ID, additional documents will be required to verify the applicant’s address.
This is how a successfully procured VID would appear in the Azure portal.
Best Practices for a Smooth Validation Process
To ensure a smooth and successful identity validation process, individual developers should adhere to the following best practices:
Accurate Documentation: Ensure that all submitted documents are accurate and up-to-date and follow the guidelines.
Stay Informed: Keep abreast of any changes in the validation requirements or processes of the CA you are working with.
Costs of using Trusted Signing service
Trusted Signing offers two pricing tiers, starting at $9.99/month; you can pick a tier based on your usage. Both tiers are designed to provide optimal cost efficiency and cater to various signing needs. You can find the pricing details here. The costs for identity validation, certificate lifecycle management, secure key storage, and signing are all included in a single SKU, ensuring accessibility and predictable expenses.
Conclusion
Identity validation is a critical step for individual developers seeking code signing certificates. By understanding the process, preparing in advance, and following best practices, developers can successfully navigate the validation process and secure their code signing certificates with Trusted Signing. This not only enhances the security of their software but also builds trust with users and stakeholders.
Combining SCM Pricing Management with Commerce and Retail Discounts
In today’s competitive market, businesses need to leverage every possible advantage to stay ahead. Pricing is one of the most powerful tools at their disposal. Enter Unified Pricing Management, an innovative solution that merges attribute-based pricing with advanced Supply Chain Management (SCM) pricing management and comprehensive commerce and retail discount solutions.
The Need for Unified Pricing Management
Companies across industries face numerous challenges when it comes to pricing management. Traditional pricing methods often fall short in addressing the complexities of modern commerce, where factors such as customer segmentation, product attributes, market demand, and competitive actions all influence pricing decisions. The Unified Pricing Management solution addresses these challenges by providing a holistic and integrated framework that empowers businesses to make informed and strategic pricing decisions.
What is Unified Pricing Management?
Unified Pricing Management is a sophisticated attribute-based pricing solution designed to transform the way businesses approach pricing strategies. By considering a myriad of attributes and integrating data from SCM and other sources, this solution allows businesses to set more accurate, competitive prices that enhance profitability and customer satisfaction.
The Power of Attribute-Based Pricing
At the heart of Unified Pricing Management lies attribute-based pricing. This approach takes into account various attributes, such as product characteristics, customer demographics, and purchasing behaviors, to tailor pricing strategies that resonate better with the market. The ability to customize prices based on specific attributes ensures that businesses can meet diverse customer needs while maximizing revenue.
Converging SCM Pricing Management with Commerce and Retail Discounts
Unified Pricing Management supports SCM pricing management, enabling businesses to harness the full potential of their supply chain data. This convergence allows for more informed pricing decisions, ensuring that prices reflect real-time supply chain dynamics. The result is a more responsive and agile pricing strategy that can adapt to changing market conditions.
In addition to SCM pricing management, Unified Pricing Management also incorporates commerce and retail discount capabilities. This means that businesses can manage discounts more efficiently, ensuring consistency across all promotional efforts. The unified platform streamlines discount management, making it easier to implement and track various discount strategies.
Unified Pricing data model
Unified Pricing Management introduces the concept of price trees, which are essential for structuring pricing models. These price trees support multiple versions, allowing businesses to maintain and compare different pricing strategies over time. This capability ensures that businesses can continuously optimize their pricing approaches to achieve the best possible outcomes.
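The versioning idea can be pictured as keeping several dated snapshots of the same pricing structure side by side and selecting whichever is effective on a given date. The sketch below is an assumed illustration, not the product’s actual data model; the node names, dates, and prices are invented:

```python
from datetime import date

# Hypothetical price tree versions: each version maps product-tree nodes
# to prices, so two strategies can be maintained and compared over time.
price_tree_versions = {
    date(2024, 1, 1): {"bikes": 500.0, "bikes/electric": 1500.0},
    date(2024, 7, 1): {"bikes": 520.0, "bikes/electric": 1400.0},
}

def active_version(on: date) -> dict:
    """Pick the most recent version whose effective date is not after `on`."""
    effective = max(d for d in price_tree_versions if d <= on)
    return price_tree_versions[effective]

print(active_version(date(2024, 8, 15))["bikes"])  # 520.0
print(active_version(date(2024, 3, 1))["bikes"])   # 500.0
```

Keeping whole versions rather than patching prices in place is what makes side-by-side comparison of strategies straightforward.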
Activating and Utilizing Price Groups for Pricing Calculations
A crucial element of Unified Pricing Management is the concept of price groups. This common concept exists in both SCM pricing management and commerce and retail solutions. In Finance and Operations, as well as POS machines, price groups can be associated with channels, loyalty programs, affiliations, and attributes from sales order headers. They can also be defined as an attribute within the header attribute group, providing even greater flexibility in pricing strategies.
Set Up Price Groups: Define price groups based on relevant criteria such as customer segments or geographical regions. This initial setup is critical, as it lays the foundation for your pricing strategy.
Assign Price Rules: Assign specific pricing rules to each price group. These rules can include base prices, margin components, and discounts. This ensures that each segment is priced according to its unique characteristics and market conditions.
Apply Price Groups to Sales Orders and POS: When creating or modifying sales orders, apply the relevant price group to ensure that the correct pricing rules are used. This step is essential for maintaining consistency and accuracy in your pricing.
Monitor and Adjust: Use the unified pricing management system to monitor the performance of your pricing strategy. Make adjustments as needed based on real-time data and analytics to ensure that your pricing remains competitive and profitable.
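The four steps above can be sketched as a small data model. The example below is an illustrative Python sketch, not the actual Dynamics 365 API; the group names, rule fields, and helper function are assumptions for demonstration:

```python
from dataclasses import dataclass

@dataclass
class PriceRule:
    base_price: float     # step 2: base price assigned to the group
    discount_pct: float   # step 2: discount assigned to the group

# Step 1: define price groups for customer segments / regions (names assumed).
price_rules = {
    "retail-emea": PriceRule(base_price=100.0, discount_pct=10.0),
    "loyalty-gold": PriceRule(base_price=100.0, discount_pct=20.0),
}

def price_for_order(price_group: str, quantity: int) -> float:
    """Step 3: apply the order's price group to compute the line amount."""
    rule = price_rules[price_group]
    unit = rule.base_price * (1 - rule.discount_pct / 100)
    return unit * quantity

# Step 4: monitor outcomes, e.g. compare groups on the same order.
print(price_for_order("retail-emea", 3))   # 270.0
print(price_for_order("loyalty-gold", 3))  # 240.0
```

Because every order resolves its price through a group rather than an ad-hoc price, adjusting a rule in one place (step 4) consistently updates pricing for the whole segment.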
Conclusion
Unified Pricing Management represents a significant advancement in the field of pricing management. By combining attribute-based pricing, convergent SCM pricing management, and comprehensive commerce and retail discount capabilities, it provides businesses with a powerful tool to navigate the complexities of modern pricing. The enablement of price groups and the flexibility to use single and multiple price trees further enhance its adaptability and effectiveness.
In an era where pricing can make or break a business, Unified Pricing Management offers a strategic advantage that can drive profitability, customer satisfaction, and long-term success. Embrace this innovative solution and unlock the full potential of your pricing strategies.