Sellers often need to manage numerous long documents to understand customers’ unique requirements and deliver the personalized attention that advances deals. Reviewing and understanding these documents takes significant time, leaving sales teams less time for their customers. With the summarization capabilities in Copilot in Dynamics 365 Sales, sellers can now obtain pertinent details from sales documents in minutes. Sellers quickly gain insights into customer requirements, budget details, timelines, and key decision makers, enabling them to prepare for prospect meetings more effectively and engage in more informed, meaningful conversations.
Close deals faster with seller-specific document summary
Copilot in Dynamics 365 Sales can create summaries of sales documents that include insights specific to your sales team’s needs. For example, sellers can quickly get details about the issuing organization, items to be procured, deadlines and timelines for submitting a proposal, contact information and meetings with stakeholders mentioned in the document, and other important information needed to move a deal forward, such as compliance and legal considerations.
Sellers can ask Copilot to summarize sales documents stored in SharePoint or associated with any lead, contact, opportunity, or account in Dynamics 365 Sales.
Next steps
Learn more about summarizing documents linked to records in Dynamics 365 Sales.
For Microsoft and our customers, work is changing at the speed of AI. To help you stay ahead, we’ll share monthly highlights of new Microsoft Copilot innovations, plus the latest from our customers on how they’re getting the most value from Copilot.
In an era where seamless network management and enhanced performance are paramount, the release of Network ATC, Network HUD, and AccelNet for Windows Server 2025 marks a significant milestone. These groundbreaking innovations are designed to optimize the way we manage, monitor, and accelerate network operations, promising unprecedented efficiency and reliability.
Network ATC
Historically, deployment and management of networking for failover clusters has been complex and error-prone. The configuration flexibility of the host networking stack means there are many moving parts that can easily be misconfigured or overlooked. Keeping up with the latest best practices is also a challenge, as improvements are continuously made to the underlying technologies. Additionally, configuration consistency across failover cluster nodes is vital for reliability.
Network ATC simplifies the deployment and network configuration management for Windows Server 2025 clusters. It provides an intent-based approach to host network deployment. Customers specify one or more intents (management, compute, or storage) for a network adapter, and we automate the deployment of the intended configuration.
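As a rough sketch of what an intent-based deployment might look like with the NetworkATC PowerShell cmdlets (the intent and adapter names below are placeholders, not prescribed values):

```powershell
# Declare a single converged intent: management, compute, and storage traffic
# share two physical adapters. Network ATC deploys the rest of the host
# networking configuration (vSwitch, VLANs, QoS/DCB, and so on) for you.
Add-NetIntent -Name "ConvergedIntent" -Management -Compute -Storage -AdapterName "pNIC01", "pNIC02"

# Review the intents defined for this cluster.
Get-NetIntent
```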
Network ATC helps to:
Reduce host networking deployment time, complexity, and errors
Deploy the latest Microsoft-validated and supported best practices
Ensure configuration consistency across the cluster
Eliminate configuration drift
One of the greatest benefits of Network ATC is its ability to remediate configuration drift. Have you ever wondered “who changed that?” or said, “we must have missed this node”? You’ll never worry about this again with Network ATC at the helm. Expanding the cluster to add new nodes? Simply install the feature on the new node and join the cluster, and within minutes the expected configuration will be deployed.
For more details about deploying and managing Network ATC on Windows Server 2025, please check here: Deploy host networking with Network ATC. You can manage Network ATC through PowerShell cmdlets or Windows Admin Center.
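As a sketch of the cluster-expansion flow described above (node, cluster, and intent names are placeholders), the status cmdlets can confirm that a newly joined node has picked up the expected configuration:

```powershell
# On the new node: install the Network ATC feature, then join the existing cluster.
Install-WindowsFeature -Name NetworkATC
Add-ClusterNode -Cluster "Cluster01" -Name "Node05"

# Network ATC applies the cluster's existing intents to the new node automatically;
# check per-node provisioning and configuration status.
Get-NetIntentStatus -Name "ConvergedIntent" |
    Format-Table Host, IntentName, ConfigurationStatus, ProvisioningStatus
```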
Figure: Network ATC management in Windows Admin Center
Network HUD (Coming Soon)
Network HUD is an upcoming Windows Server 2025 feature that will proactively identify and remediate operational network issues.
Managing a network for business applications is challenging. Ensuring stability and optimization requires coordination across the physical network (switches, cabling, NICs), host operating system (virtual switches, virtual NICs), and the applications running in VMs or containers. Each component has its own configurations and capabilities, often managed by different teams. Even with a perfect setup, a bad configuration elsewhere in the network can degrade performance.
The complexity of managing these components has reached an all-time high, with numerous tools and technologies involved. Windows Server OS provides a wealth of information through event logs, performance counters, and tools, but analyzing this data when issues arise requires expertise and time, often after the problem has occurred.
Network HUD excels by analyzing real-time data from event logs, performance counters, tools like Pktmon, network traffic, and physical devices to identify issues before they happen. In many cases, it prevents issues by adjusting your system to avoid exacerbating problems. When prevention isn’t possible, Network HUD alerts you with actionable messages to resolve the issue.
Network HUD leverages capabilities in the physical switch to ensure that your configuration matches the physical network. For example, it can determine whether the locally connected switchports have the correct VLAN settings and the correct data center bridging configuration required for RDMA storage traffic to function.
Network HUD is built as a true cloud service that runs on-premises. It will ship as an Arc extension and will be part of Windows Server Azure Arc Management (WSAAM) services. This allows us to bring in more capabilities and make these available to you as soon as they are ready.
AccelNet
Accelerated Networking (AccelNet) simplifies the management of single root I/O virtualization (SR-IOV) for virtual machines hosted on Windows Server 2025 clusters. SR-IOV provides a high-performance data path that bypasses the host, which reduces latency, jitter, and CPU utilization for the most demanding network workloads. This is particularly useful in high-performance computing (HPC) environments, real-time applications such as financial trading platforms, and virtualized network functions.
The following figure illustrates how two VMs communicate with and without SR-IOV.
Without SR-IOV, all networking traffic in and out of the VM traverses the host and the virtual switch. With SR-IOV, network traffic that arrives at the VM’s network interface (NIC) is forwarded directly to the VM.
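For context, enabling SR-IOV by hand with the standard Hyper-V cmdlets looks roughly like the sketch below (switch, adapter, and VM names are placeholders):

```powershell
# Create an external vSwitch with SR-IOV enabled (IOV must be set at switch creation time).
New-VMSwitch -Name "SRIOV-Switch" -NetAdapterName "pNIC01" -EnableIov $true

# Give the VM's network adapter an IOV weight greater than 0 so it requests a virtual function.
Set-VMNetworkAdapter -VMName "VM01" -IovWeight 100

# Check whether the virtual function data path is actually in use.
Get-VMNetworkAdapter -VMName "VM01" | Select-Object VMName, IovWeight, Status, VFDataPathActive
```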
SR-IOV has been available in Windows Server since the 2012 R2 days. So, what benefit does AccelNet provide?
Prerequisite checking: Informs users if the Windows Server cluster hosts support SR-IOV, checking for OS version and hyperthreading status among other things.
Host Configuration: Ensures SR-IOV is enabled on the correct vSwitch that hosts virtual machine workloads and allows configuration of reserve nodes in case of failover to prevent resource over-subscription.
Simplified VM performance settings: It can be overwhelming to identify how many queue pairs a VM being enabled for SR-IOV may need. AccelNet abstracts performance settings into “Low,” “Medium,” and “High” to simplify configuration.
Health Monitoring and Diagnostics: Leverages Network HUD to identify and remediate configuration/performance related issues. Examples include NIC SR-IOV support, Live migration management, etc. (Coming Soon)
Simplified management with Windows Admin Center: All AccelNet management functionality is available through PowerShell, and an easy-to-use UI in Windows Admin Center is coming soon.
AccelNet is part of Windows Server Azure Arc Management (WSAAM) services. To learn more about AccelNet, please check here.
As organizations continue to navigate the challenges of an ever-evolving digital landscape, the integration of these advanced features into Windows Server 2025 ensures they are equipped with the tools needed to achieve excellence in network management and performance. Embrace the future of networking with Windows Server 2025 and experience the transformative power of Network ATC, Network HUD, and AccelNet.
We are excited to share all these innovations with you. Upgrade to Windows Server 2025 to try out these features, we look forward to your feedback. For any suggestions, opinions or issues, please reach out to us at edgenetfeedback@microsoft.com.
We are excited to introduce the Modern Web App (MWA) pattern for .NET. MWA is part of our Enterprise App Patterns (EAP), which offer guidance to accelerate app modernization to the cloud. MWA provides developer patterns, prescriptive architecture, a reference implementation, and infrastructure guidance that aligns with the principles of the Azure Well-Architected Framework (WAF) and the 12-factor app methodology, so you can be assured the guidance is proven in the real world.
The Modern Web App (MWA) pattern marks the next stage in transforming monolithic web applications toward cloud-native architecture, with a focus on the Refactor modernization strategy. Building on the Reliable Web App (RWA) pattern, which helped organizations transition to the cloud with minimal changes under a Replatform approach, MWA guides teams further by encouraging the decoupling and decomposition of key functions into microservices. This enables high-demand areas to be optimized for agility and scalability, providing dedicated resources for critical components and enhancing reliability and performance. Decoupling also allows independent versioning and scaling, delivering cost efficiency and the flexibility to evolve individual app components without affecting the entire system.
Key Features of Modern Web App pattern
The Modern Web App pattern provides detailed guidance for decoupling critical parts of a web application, enabling independent scaling, greater agility, and cost optimization. This decoupling ensures that high-demand components have dedicated resources and can be versioned and scaled independently, improving the reliability and performance of the application and the agility to enhance features separately. By separating services, the risk that degradation in one part of the app affects other parts is minimized. Here are some strategies MWA adopts:
Modernization through Refactoring: Built on top of the Reliable Web App pattern, MWA focuses on optimizing high-demand areas of web applications by decoupling critical components.
Incremental modernization using the Strangler Fig pattern: Guidance for incremental refactoring from a monolith to decoupled services, reducing risk during modernization and improving agility for new features.
Embracing Cloud-Native architectures: Leverages Azure services such as Azure App Service, Azure Container Apps, Azure Container Registry, Azure Service Bus, Azure Monitor, and more to build independently scalable, resilient cloud-native applications.
Independent scaling using Azure Container Apps: Allows key parts of the app to scale independently, optimizing resource usage and reducing costs.
Enhanced security and availability: A hub-and-spoke architecture for the production infrastructure improves security and isolates workloads, and multi-region deployment supports a 99.9% business service-level objective (SLO).
What’s covered in the reference implementation?
In this context, we use the evolving business needs of a fictional company, Relecloud, to illustrate the Modern Web App (MWA) pattern, which takes scalability further through decoupling and refactoring of a monolithic line-of-business web app. This architecture enables independent scaling of high-demand functions via microservices, supporting Relecloud’s growth while enhancing security, agility, and reliability and meeting the 99.9% business SLO uptime requirement.
Azure Services: Azure Front Door, Microsoft Entra ID, Azure App Service, Azure Container Apps, Azure Container Registry, Azure Cache for Redis, Azure SQL, Azure Storage, Azure Key Vault, Azure App Configuration, Azure Service Bus, Azure Monitor and App Insights
Developer Patterns: Strangler Fig, Queue-based Load Leveling, Competing Consumers, Health Endpoint Monitoring
Best Practices: Feature rollouts using Feature Flags, Distributed Tracing, Managed Identities, Private endpoints, Hub and Spoke network architecture
Patterns from RWA: Retry, Circuit-breaker, Cache-aside
Other awesomeness: Azure Developer CLI (azd), Reusable modular IaC assets (Bicep), Resource Tagging, Multi-region support with 99.9% business SLO, Dev and Prod Environments & SKUs, and more!
Get started
We created a full production-grade application that you can deploy easily to Azure to see all of the principles of MWA in action. Visit the MWA GitHub repo for more information.
Introduction
When gathering SharePoint data through Microsoft Graph Data Connect, you are billed through Azure. As I write this blog, the price to pull 1,000 objects from Microsoft Graph Data Connect in the US is $0.75, plus the cost for Azure infrastructure like Azure Storage and Azure Synapse.
That is true for all datasets except the SharePoint Files dataset, which has a different billing rate. Because of its typically high volume, the SharePoint Files dataset is billed at $0.75 per 50,000 objects.
I wrote a blog about what counts as an object, but I frequently get questions about how to estimate the overall Azure bill for Microsoft Graph Data Connect for SharePoint for a specific project. Let me try to clarify things…
Before we start, here are a few notes and disclaimers:
These are estimates and your specific Azure bill will vary.
Check the official Azure links provided. Rates may vary by country and over time.
These are Azure pay-as-you-go list prices in the US as of October 2024.
You may benefit from Azure discounts, like savings using a pre-paid plan.
How many objects?
To estimate the number of objects, you start by finding out the number of sites in the tenant. This should include all sites (not just active sites) in your tenant. You can find this number easily in the SharePoint Admin Center. That will be the number of objects in your SharePoint Sites dataset.
Finding the number of SharePoint Groups and SharePoint Permissions will require some estimation. I recently collected some telemetry and saw that the average number of SharePoint Groups per Site for a sample of large tenants was around 31. The average SharePoint permissions per site was around 61. The average number of files per site was 2,874.
Delta pulls (gathering just what changed) will be smaller, but that also varies depending on how much collaboration happens in your tenant (in the Delta numbers below, I am estimating a 5% change for an average collaboration level).
Here’s a table to help you estimate your Microsoft Graph Data Connect for SharePoint costs:
Notes for the table above:
* Higher collaboration level assumes twice the average in terms of groups, permissions, and files.
** Security scenario includes Sites, Groups, and Permissions. Capacity scenario includes Sites and Files.
*** Delta assumes 5% change for average collaboration and 10% change for high collaboration. These are on the high side for one week’s worth of changes. Your numbers will likely be smaller.
As you can see, smaller tenants with an average collaboration will see costs below $10 for the smaller Sites dataset and below $1,000 for larger datasets like Permissions or Files.
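To make the data-pull arithmetic concrete, here is a rough PowerShell sketch that combines the per-site averages and billing rates quoted above; the site count is an illustrative assumption and the result is only an estimate:

```powershell
# Per-site averages quoted earlier in this post (your tenant will differ).
$sites = 10000
$groupsPerSite = 31
$permissionsPerSite = 61
$filesPerSite = 2874

$ratePerObject = 0.75 / 1000      # Sites, Groups, and Permissions datasets
$ratePerFile   = 0.75 / 50000     # Files dataset is billed at a lower per-object rate

$fullPull = ($sites * (1 + $groupsPerSite + $permissionsPerSite)) * $ratePerObject +
            ($sites * $filesPerSite) * $ratePerFile
$weeklyDelta = $fullPull * 0.05   # ~5% change assumed for average collaboration

"Full pull: {0:C2}   Weekly delta: {1:C2}" -f $fullPull, $weeklyDelta
```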
The SharePoint information you get from Microsoft Graph Data Connect will be stored in an Azure Storage account. That also incurs some cost, but it’s usually small when compared to the Microsoft Graph Data Connect costs for data pulls. The storage will be proportional to the number of objects and to the size of these objects.
Again, this will vary depending on the amount of collaboration in the tenant. More sharing means more members in groups and more people in the permissions, which will result in more objects and also larger objects.
I also did some estimating of object size and arrived at around 2KB per SharePoint Site object, 20KB per SharePoint Group object, 3KB per Permission object and 1KB per file object. There are several Azure storage options including Standard vs. Premium, LRS vs. GRS, v1 vs. v2 and Hot vs. Cool. For Microsoft Graph Data Connect, you can go with a Standard + LRS + V2 + Cool blob storage account, which costs $0.01 per GB per month.
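To make that concrete, here is a rough PowerShell sketch that applies those per-object sizes and the ~$0.01/GB/month rate; the tenant size is an illustrative assumption:

```powershell
# Estimated dataset size in KB, using the per-object sizes above and the
# per-site averages quoted earlier (10,000 sites is an illustrative tenant size).
$sites  = 10000
$sizeKB = ($sites * 2) + ($sites * 31 * 20) + ($sites * 61 * 3) + ($sites * 2874 * 1)

$sizeGB  = $sizeKB / 1MB          # 1MB = 1,048,576, i.e. the number of KB in a GB
$monthly = $sizeGB * 0.01         # Standard + LRS + V2 + Cool blob storage, ~$0.01/GB/month

"~{0:N1} GB of data, roughly {1:C2} per month" -f $sizeGB, $monthly
```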
Here’s a table to help you estimate your Azure Storage costs:
The same notes from the previous table apply here.
As you can see, smaller tenants with average collaboration will see storage costs below $1000/month, most of it going to storing the larger Files dataset. The cost for delta dataset storage is also fairly small, even for the largest of tenants. There are additional costs per storage operation like read and write but those are negligible at this scale (for instance, $0.065 per 10,000 writes and $0.005 per 10,000 reads).
You will also typically use Azure Synapse to move the SharePoint data from Microsoft 365 to your Azure account. You could run a pipeline daily to get the information and do some basic processing, like computing deltas or creating aggregations.
Here are a few of the items that are billed for Azure Synapse when running Microsoft Graph Data Connect pipelines:
Azure Hosted – Integration Runtime – Data Movement – $0.25/DIU-hour
Azure Hosted – Integration Runtime – Orchestration Activity Run – $1 per 1,000 runs
vCore – $0.15 per vCore-hour
As with Azure Storage, the costs here are small. You will likely need one pipeline run per day and it will typically run in less than one hour for a small tenant. Large tenants might need a few hours per run to gather all their SharePoint datasets. You should expect less than $10/month for smaller tenants and less than $100/month for larger and/or more collaborative tenants.
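For example, here is a rough sketch of how those Synapse meters combine for a small tenant running one short daily pipeline; the run duration, DIU allocation, and activity count are all assumptions, not measured values:

```powershell
# Illustrative assumptions for one daily pipeline run in a small tenant.
$runsPerMonth     = 30
$hoursPerRun      = 0.25    # assumed; small tenants typically finish well under an hour
$diusPerRun       = 4       # assumed Data Integration Unit allocation for the copy activities
$activitiesPerRun = 20      # assumed orchestration activity runs per pipeline run

$dataMovement  = $runsPerMonth * $hoursPerRun * $diusPerRun * 0.25   # $0.25 per DIU-hour
$orchestration = ($runsPerMonth * $activitiesPerRun / 1000) * 1      # $1 per 1,000 activity runs
# vCore-hours ($0.15 per vCore-hour) apply only if the pipeline also runs Spark or data flow work.

"Estimated Azure Synapse cost: ~{0:C2} per month" -f ($dataMovement + $orchestration)
```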
These are the main meters in Azure to get you started with costs related to Microsoft Graph Data Connect for SharePoint. I suggest experimenting with a small test/dev tenant to get familiar with Azure billing.
For more information about Microsoft Graph Data Connect for SharePoint, see the links at https://aka.ms/SharePointData.
We previously announced that support would end for retired Azure classic storage accounts on 31 August 2024. Now that we are past the retirement date and Azure classic storage accounts are not supported, we have some updates to share on our plans and the considerations for customers who have not migrated their remaining classic storage accounts to Azure Resource Manager.
On or after 1 November 2024:
Your ability to perform write operations using the classic service model APIs, including PUT and PATCH, will be limited. You will only be able to perform read and list operations using the classic service model APIs.
Your remaining classic storage accounts will be migrated to Azure Resource Manager on your behalf on a rolling schedule. Your data will continue to be stored, but any applications that use the classic service model APIs to perform management plane operations will experience disruptions if you’re actively using any write operations. Write operations will only be available through the Azure Resource Manager APIs after your account(s) have been migrated.
Note: There are no impacts on the availability of the data plane APIs before, during, or after the migration of classic storage accounts.
Azure storage accounts under Azure Resource Manager provide the same capabilities as well as new features, including:
A management layer that simplifies deployment by enabling you to create, update, and delete resources.
Resource grouping, which allows you to deploy, monitor, manage, and apply access control policies to resources as a group.
To avoid service disruptions, you’ll need to migrate your classic storage accounts to Azure Resource Manager as soon as possible. Our recommendation is that customers self-service their migrations using the existing migration capabilities instead of waiting for Azure Storage to migrate any remaining classic storage accounts on your behalf. This ensures that you can migrate on your schedule and minimize impacts. We cannot guarantee that you will not experience service interruptions if you further delay your migration.
Required action
To avoid service disruptions, migrate your classic storage accounts as soon as possible. Additionally, update any management operations in code or applications that target the classic deployment model. Read our FAQ and migration guidance for more information.
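As a sketch of the self-service migration flow, the classic Azure PowerShell module exposes validate/prepare/commit cmdlets for storage accounts (the account name below is a placeholder):

```powershell
$accountName = "mylegacystorage"   # placeholder classic storage account name

# Validate and prepare the migration to Azure Resource Manager.
Move-AzureStorageAccount -Validate -StorageAccountName $accountName
Move-AzureStorageAccount -Prepare  -StorageAccountName $accountName

# Review the prepared resources, then commit (or abort to roll back).
Move-AzureStorageAccount -Commit   -StorageAccountName $accountName
# Move-AzureStorageAccount -Abort  -StorageAccountName $accountName
```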
Help and support
If you have questions, get answers from community experts in Microsoft Q&A. If you have a support plan and you need technical help, open the Azure portal and select the question mark icon at the top of the page.
FAQ
Q: I want to delete my account, but the operation is blocked. How do I delete my account if write operations are blocked on the classic control plane?
A: You can delete your account using the Azure Resource Manager APIs for Azure Storage after your account has been migrated from the classic service model.
Q: Will I be able to use the legacy PowerShell cmdlets to manage my classic storage accounts?
A: No. After your accounts have been migrated, they can be managed with the modern PowerShell cmdlets for Azure Storage and Azure Resource Manager.
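For example, once an account has been migrated, the modern Az cmdlets manage it like any other Resource Manager resource (a sketch; the resource group and account names are placeholders):

```powershell
# List migrated storage accounts and the resource groups they now live in.
Get-AzStorageAccount | Select-Object StorageAccountName, ResourceGroupName, Location

# Delete a migrated account through Azure Resource Manager.
Remove-AzStorageAccount -ResourceGroupName "MyResourceGroup" -Name "mylegacystorage"
```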
Q: What resource group will my migrated resources appear in?