ADX Web updates – May 2024


This article is contributed. See the original author and article here.

Introducing an Enhanced Connections Explorer


We are pleased to introduce a new look and feel to our Connections explorer, designed to help you manage your list of data sources more efficiently.
While maintaining the familiar functionality of the old experience, this updated interface features a modern design, improved performance and an enhanced way to manage and view your Favorites.


 


We encourage you to turn on the new Connections pane to try its smoother, more intuitive experience:


 




 


If you are managing a long list of connections in your ADX web UI, you’ll notice a performance improvement as soon as you turn the new experience on.


Moreover, the new Connections pane features:


1. Multiple actions to help you manage the connection tree, accessible via an intuitive menu


2. Easy access to Get data actions


3. Easy access to each table's data profile


 




 


 


Please share your thoughts and feedback on this new enhancement at KustoWebExpFeedback@service.microsoft.com!


 


Easily Favorite and Find Your Important Dashboards


We are happy to announce that you can now add dashboards to your favorites list from two convenient locations: both from the catalog and, as a newly introduced feature, directly from the dashboard itself! This enhancement, driven by your feedback, makes it easier than ever to quickly mark and access your most-used dashboards.




 


 




 


 


The Azure Data Explorer Web UI team looks forward to your feedback at KustoWebExpFeedback@service.microsoft.com


You’re also welcome to add more ideas and vote for them at https://aka.ms/adx.ideas


Read more:


PostgreSQL for your AI app’s backend | Azure Database for PostgreSQL Flexible Server


Use PostgreSQL as a managed service on Azure. As you build generative AI apps, explore advantages of Azure Database for PostgreSQL Flexible Server such as integration with Azure AI services, as well as extensibility and compatibility, integrated enterprise capabilities to protect data, and controls for managing business continuity.


 




 


Charles Feddersen, product team lead for PostgreSQL on Azure, joins host Jeremy Chapman to share how Flexible Server is a complete PostgreSQL platform for enterprise and developers.


 


 


Generate vector embeddings for data and images.


 




 


Enhance search accuracy and semantic matches. Watch how to use the Azure AI extension with Azure Database for PostgreSQL here.


 


 


Leverage the Azure AI extension.


 




 


Calculate sentiment and show a summarization of reviews using PostgreSQL. See it here.


 


 


Simplify disaster recovery for enterprise apps.


 




 


Achieve multi-zone high availability, zero data loss, and planned failover with GeoDR.


 


 


Watch our video here:


 


 







QUICK LINKS:


00:00 — Azure Database for PostgreSQL Flexible Server
00:51 — Microsoft and PostgreSQL
01:40 — Open-source PostgreSQL
03:18 — Vector embeddings for data
04:32 — How it works with an app
06:59 — Azure AI Vision
08:14 — Azure AI extension using PostgreSQL
09:37 — Text generation using Azure AI extension
10:30 — High availability and disaster recovery
12:45 — Wrap up


 


 


Link References


Get started with the Azure Database for PostgreSQL flexible server at https://aka.ms/postgresql


Stay current with all the updates at https://aka.ms/AzurePostgresBlog


 


 


Unfamiliar with Microsoft Mechanics?


As Microsoft’s official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.



 


 













Video Transcript:


– Postgres is one of the most popular open-source databases in use today, and with its built-in vector index, plays a vital role in powering natural language generative AI experiences by searching across billions of data points to find similarity matches to support the generation of more accurate responses. But did you know that you can also use Postgres as a managed service on Azure? Today, in fact, as you build generative AI apps, we’re going to explore Azure Database for Postgres flexible server and the unique advantages such as integration with Azure AI services, as well as extensibility and compatibility, integrated enterprise capabilities to protect your data, controls for managing business continuity and more. And to walk us through all this, I’m joined, once again, by Charles Feddersen who leads the product team for Postgres on Azure. Welcome back to the show.


 


– Thanks for having me back, Jeremy. It’s great to be here.


 


– And it’s great to have you back on. You know, before we get into this, it’s probably worth explaining what Microsoft’s role is as part of the Postgres community. We’re not just putting an instance of Postgres on Azure, right?


 


– Yeah, what a lot of people don’t realize actually is Microsoft is a really significant contributor to Postgres, both through major contributions to open-source Postgres and to the surrounding ecosystem of features. We’ve contributed to many of the features that you’re probably using every day in Postgres, which include optimizations that speed up queries over highly partitioned tables. Perhaps the single largest contribution we’re making to Postgres is to enable asynchronous and direct I/O for more efficient read and write operations in the database. We’ve learned a lot from running really demanding Postgres workloads in Azure, and this has inspired many of the performance optimizations that we’ve contributed upstream to open-source Postgres, so that everybody benefits.


 


– So given the pace of innovation then for the open-source community with Postgres, how do we make sure that, on Azure, we’ve got all the features and that they’re compatible with Azure Database for Postgres?


 


– Well, the first thing I really want to emphasize is that it’s pure open-source Postgres, and that’s by design. And this means you can run normal tools like pgAdmin, as you can see here. And there’s a really high level of compatibility with Postgres throughout the stack. And we ship new major versions of Postgres on Azure within weeks of the community release, which lets you test those latest features really quickly. Flexible Server supports over 60 of the most common extensions, including PostGIS for geospatial workloads and Postgres FDW, which allows you to access data in external Postgres servers. It also supports a great community-built extension called pgvector that enables Postgres to store, index, and query embeddings. And last year, we added the Azure AI extension, which provides direct integration between Postgres and the Azure OpenAI Service to generate vector embeddings from your data. And it also enables you to hook into capabilities like sentiment analysis, summarization, language detection and more. In fact, Azure AI support for Postgres is a major advantage of running Postgres on Azure. And this is in addition to several enterprise capabilities, such as built-in support for Microsoft Entra’s identity and access management, as well as broader security controls, like networking over private endpoints to better protect your data in transit, along with Key Vault encryption, using your own keys, including managed hardware security modules, or HSM, and more.


 


– Right, and this means basically that your Postgres implementation is natively integrated with your security policies for enterprise workloads, but you also mentioned that AI is a major benefit here in terms of Postgres on flexible server in Azure. So can you show us or walk through an example?


 


– Sure. Let me walk you through one using a travel website where the Azure AI extension has been used to generate vector embeddings for the travel site’s data. And this also works for images, where we can use the Azure AI Vision service to convert images to text and vectorize that information, all of which is stored and indexed in Postgres flexible server. And if you’re new to vectors, they’re a coordinate-like way to refer to chunks of data in your database and are used to search for semantic matches. So when users submit natural language searches, those are converted into vector embeddings too. And unlike traditional keyword searches, similarity lookups find the closest semantic meaning between the vector embeddings from the user’s prompt and the embeddings stored in the database. Now additionally, the travel website uses Azure OpenAI’s GPT large language model itself to generate natural language responses using the data presented from Postgres as its context. So let’s try this out with a real app. Here’s our travel website and I’m going to book a much-needed vacation. So I’ll search for San Diego and I’ve got over 120 accommodation options that I need to scroll through or filter. But now, I’m also traveling with my dog Mabel as well. So I need to find places where she can also stay. I’m going to add “allow small dogs” to my search and this is going to use semantic search with embeddings to find suitable accommodations. And now, we’re down to about 90 results. So let’s look at the code to see how this works. Now, to perform the semantic similarity searches, we first need to generate text embeddings stored in a vector type in Postgres. I’ll create a new generated column of type vector and name it lodging_embedding. And this is going to store the text embeddings in our lodgings table that are based on the text descriptions column.
Every time a new record is inserted, the Azure AI extension will call the OpenAI embedding model ada-002, pass the description text and return the embedding to be stored. So I’ll run that query and now I’ll add an index to this new column to improve query efficiency. This is a special vector index called hnsw. It’s not your regular B-tree. And so I’ll run that and now we can do a test query against the embeddings. So I’ll switch to the vector similarity tab. And this query does a couple of interesting things. If you look at the order by clause, you can see that we’re ordering by the result of the comparison between the lodging_embedding column and the embedding we dynamically created from the search term to find the best result for “allow small dogs”. Now, we’re also using the PostGIS extension to add geospatial capabilities to find relevant lodging within 30 miles of a point of interest in San Diego. So I’ll run this query and you can see the top six results within 30 miles of a point of interest, ranked in order of the best semantic results for my small dog.
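The sequence just described can be sketched in SQL. This is a hedged sketch rather than the exact demo code: the azure_openai.create_embeddings function comes from the Azure AI (azure_ai) extension, the <=> cosine-distance operator and hnsw index come from pgvector, ST_DWithin comes from PostGIS, and the table name, column names, deployment name, dimensions, and coordinates are illustrative assumptions.

```sql
-- Add a vector column and populate it from the text descriptions
-- (deployment name 'ada-002' and 1536 dimensions are assumptions).
ALTER TABLE lodgings ADD COLUMN lodging_embedding vector(1536);

UPDATE lodgings
   SET lodging_embedding =
       azure_openai.create_embeddings('ada-002', description)::vector;

-- An HNSW index (approximate nearest neighbor), not a regular B-tree.
CREATE INDEX ON lodgings USING hnsw (lodging_embedding vector_cosine_ops);

-- Rank by cosine distance (<=>) to the embedding of the search phrase,
-- restricted to lodgings within 30 miles of a point of interest (PostGIS).
SELECT name, description
  FROM lodgings
 WHERE ST_DWithin(location::geography,
                  ST_MakePoint(-117.16, 32.72)::geography, -- illustrative point
                  30 * 1609.34)                            -- 30 miles in meters
 ORDER BY lodging_embedding <=>
          azure_openai.create_embeddings('ada-002', 'allow small dogs')::vector
 LIMIT 6;
```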


 


– So I get it, instead of creating another table or database, what you’re showing here is actually that Postgres provides a native type for embeddings, so that you can actually incorporate your semantic search into your existing relational SQL workload.


 


– Exactly, and that’s the power of it. You don’t need a different database to handle embeddings. If you’ve got any existing Postgres apps, adding embeddings and semantic search in flexible server is as easy as adding a column and running a SQL function to call the Azure OpenAI service. So let’s go back to our hotel booking example. We also want to book a room with a beach view. I’ll add that to the search, and how this works, as I’m going to show you next, is really cool. So I’ll head back over to a notebook and I’ve got one of the images from a property listing. Let’s take a look at the notebook cell. I can use the Azure AI Vision service to extract the embeddings from this image. And if I run this, you can see the embedding has been created and I can go ahead and store that in Postgres as well. And if we check our app again, you can see that we’re doing a text search for beach view, which is actually returning property images with a beach visible from the rooms. And the results are further refined with the suitability for my small dog. And as we can see on the left, it’s in the right distance range, within 30 miles of San Diego, which we’ve specified using geospatial in Postgres. And the amazing thing is we do it all with open text search, which is infinitely flexible, and not predefined filters. So I don’t need to hunt around for that often-hidden pets allowed filter.


 


– And the neat thing here is, as you mentioned, all of this is happening at the database layer, because we’ve actually converted all the text and all the images into vector embeddings, as part of data ingest and that’s all using Azure AI services.


 


– That’s right. That’s exactly right. And next, I’ll show you how you can make the search experience even richer by bringing Azure AI to summarize reviews and measure sentiment on a property. One of the most time-consuming parts of finding a great place to stay is reading the reviews. Here, we can use the Azure AI extension to calculate the sentiment and show a summarization of the reviews using Postgres. This is the output for the Coastal View Cottage, with a 98% favorable sentiment and summary of reviews. So let’s take a look at the code. In this query, you can see we’re calling the azure_cognitive.analyze_sentiment function and passing the review_text that we want to score. I’ll run that and here you can see a positive sentiment of 98% returned. Now I’ll switch to the summary example. It’s a similar query pattern, except this time, we’re using the summarize_abstractive function to summarize the reviews into a small amount of easily-consumable text. So I’ll run this query, and here, you can see that summarized text.
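The two query patterns just described can be sketched as follows. The azure_cognitive functions are provided by the Azure AI extension; the exact signatures are hedged here, and the reviews table, review_text column, property id, and language argument are illustrative assumptions, not the demo's actual schema.

```sql
-- Sentiment for the reviews of one property, scored by the
-- Azure AI Language service through the azure_ai extension.
SELECT azure_cognitive.analyze_sentiment(review_text, 'en') AS sentiment
  FROM reviews
 WHERE property_id = 42;   -- illustrative property id

-- Abstractive summary: condense the reviews into a small amount
-- of easily consumable text.
SELECT azure_cognitive.summarize_abstractive(review_text, 'en') AS summary
  FROM reviews
 WHERE property_id = 42;
```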


 


– Right, and what you’ve shown here is more than just using embeddings, but also how the database can leverage other Azure capabilities to improve your app.


 


– That’s right. I’ve shown SQL queries that are returning results directly from the AI services, but alternatively, you could return those and store them in Postgres to reuse later. It’s really up to you, as a developer, about how you want to architect your app. Flexible server with the Azure AI extension just makes it easy to do it all using SQL. Now let’s move on to text generation, which is another area where we can use the Azure AI extension. I’m back in the website and I’ve selected the Coastal View Cottage for my stay. On the right, I can ask a freeform question about the property, but I’ve got a suggested prompt to look for hidden fees. These always seem to get me. So here, we’re using the Davinci model in the Azure OpenAI service to generate a response and it’s found a hidden fee buried in the fine print. So moving back to VS Code, I’ll run another query with the hidden fees prompt and I’ll capture those results. Now that I have the relevant context from the database, I’ll pass that to the Azure OpenAI Service Completion API and the prebuilt Davinci model to compose a response based on the results I took from the database. And this is how everything works.


 


– And this is a really great example of harnessing all of the AI capabilities. But something else that’s really important for an enterprise app is high availability and also disaster recovery.


 


– It is, and flexible server has all of those covered as well. This includes multi-zone high availability with zero data loss, zone-redundant backups across regions, and recently we announced the general availability of planned failover, GeoDR. Here’s how you can configure that. I’m going to start in the portal on the Overview blade, and you can see I’ve got the Postgres flexible server called geodr running in the East US 2 region. I’ll scroll down on the left nav panel and head over to Replication where I’ve got two options: either create an endpoint, or create a read replica. Let’s create the read replica first. I’ll enter the replica server name and I’ll go create that in Australia Southeast, because that’s pretty much as far from East US 2 as you can get. I’ll click Review and create, and that’s now submitted. So once the replica is created on the other side of the planet, I need to create a virtual endpoint, which gives me a single endpoint for my application, so that when I do fail over, I don’t need to make any application changes to update connection strings. This time, I’ll create an endpoint. I’ll head over to the right panel and give it a name, geodrvip, and you can see that the name has been appended to each of the writer and reader endpoint names below. And the reader server is the replica I just created. I’ll hit Create. And now, you can see I’ve got my virtual endpoint. So let’s test the failover using promotion. I’ll click the small Promote icon next to my replica server name. Now I’ve got some options. I can either promote this to the primary server, which means I reverse the roles of my two servers, so that the replica becomes the writer and the current writer becomes the replica. Or alternatively, I can promote this server to Standalone. I can also select if this is Planned, which means all data is synchronized to the replica prior to failover, or Forced, which executes immediately and doesn’t wait for the asynchronous replication to finish.
I’ll leave everything as is and I’ll click Promote. And now, once this is finished, my geodr server that was the primary is now the replica under the reader endpoint and geodrausse is now the primary.


 


– Okay, so now you’ve got all your enterprise-grade data protections in place. You’ve got native vector search support and also GenAI capabilities for your apps, all powered by Postgres flexible server on Azure on the backend. So what’s next?


 


– So I’ve shown you how Flexible Server is a complete Postgres platform for enterprise and developers, and it’s only going to get better. We’ve got really big plans for the future, so stay tuned.


 


– So for everyone who’s watching right now, what do you recommend for them to get started?


 


– So to get started with the Azure Database for Postgres flexible server, go to aka.ms/postgresql, and to stay current with all the updates that we’re constantly shipping, check out our blog at aka.ms/AzurePostgresBlog.


 


– Thanks so much for joining us today, Charles. Always great to have you on to share all the updates to Postgres. Looking forward to having you back on the show. Of course, keep checking back to Microsoft Mechanics. We’ll see you next time and thanks for watching.


 





 

Tech Presentations: Key Strategies for Success


 


Navigating the intricate world of technology requires more than just expertise; it demands the ability to share that knowledge effectively. This article embarks on a journey to uncover the most effective strategies for crafting presentations tailored for the tech-savvy audience. We will dissect the elements that make a presentation not just informative, but memorable and engaging. From leveraging the latest tools to understanding the nuances of audience engagement, we aim to provide a comprehensive guide that empowers tech professionals to deliver their message with precision and impact. We’ll discuss these strategies with U.S. M365 MVP Melissa Marshall.


 




MVP Melissa Marshall


 


Share with us your journey into becoming a top-level presenter.


I started my career as a professor at Penn State University, where I taught public speaking courses for engineering students. While I was there, I had the good fortune to give a TED Talk entitled “Talk Nerdy to Me” about the importance of science communication, and that really launched my ideas on scientific presentations into global prominence. I began to receive additional invitations to speak at conferences and provide training workshops at companies and institutions.  In 2015, I went full-time into my speaking, training, and consulting business, Present Your Science.  I now help the leading tech professionals and companies in the world present their work in a meaningful and compelling way that inspires stakeholders to take action.


 


Could you please share with us the three main tools to improve presentations?



  1. Be audience-centric. Your ability to be successful as a speaker depends upon your ability to make your audience successful.
    TIP: Be an interpreter of your work, not a reporter. Always connect each piece of technical info to a “So what?” point.



  2. Filter and Focus. When you try to share everything, you share nothing.
    TIP: Start your planning with the ‘view’ you want your audience to have at the end of the talk. Then ask yourself “What would they need to know in order to get there?”



  3. Show Your Science. Your slides should do something for you that your words cannot. This means make your slides VISUAL not VERBAL.
    TIP: Avoid bullet points (seriously!). Have a brief take-away message at the top of each slide and support it with visual evidence.


 


What are the main challenges that presenters face during a presentation?  


Presenters often allow the “status quo” of how slides are typically designed in their industry or at their company to dictate their choices. Unfortunately, this status quo is often rooted in text-heavy, bulleted slides which are not successful for an audience. Instead of designing slides the way you have always seen them done, I think presenters need to use a more strategic, evidence-based approach for their slide design. That’s why I worked with the MS PowerPoint team to create this slide design template for technical presenters. This template is fully customizable, but it helps to lead the presenter to a design strategy that focuses on take-away messages supported by visual evidence, which is a big step in the right direction for technical slides that are more successful for an audience.


Also, presenters often struggle to filter their details in presentations, and they overwhelm the audience with too much information.  This can be improved by beginning your preparation by identifying the most critical single message you must convey.  And then focus on including information that relates to that message.


When sharing data, it’s important to be very descriptive about not just what the data is, but why the data is significant. It is easy to get in the habit of simply sharing the information, without providing context for it.


 


How are you using AI today to help you with presentations?  


I love sharing PowerPoint Speaker Coach with my clients.  This is an awesome AI-Driven tool that provides the speaker with private feedback on presentation elements like rate of speaking, emphasis, verbal fillers, and inclusive language.  It’s a great way to add some structure and purpose to practicing a presentation. 


 


What advice would you have for tech professionals beginning their journey presenting?


Look for more opportunities to present! Most people have some anxiety associated with speaking in front of others, which causes them to avoid those situations as much as possible. However, the answer to becoming more comfortable speaking is to simply DO IT MORE. It’s counterintuitive to what we feel like we want to do, but if you embrace the discomfort of presenting more often, you will find quite quickly that you are all of a sudden becoming more comfortable and confident.


In summary, the journey through the landscape of technical presentations is one of continuous learning and adaptation. The strategies discussed here provide a roadmap for creating presentations that not only convey complex information but also engage and inspire the tech community. By weaving together a narrative that resonates with the audience, utilizing visual aids to clarify and emphasize key points, and delivering with confidence and passion, presenters can leave a lasting impact. As technology continues to advance, so must our approach to sharing it. Let this article serve as a catalyst for innovation in your presentation techniques, empowering you to illuminate the path forward in the ever-changing world of technology.

Using Phi-3 & C# with ONNX for text and vision samples


Hi!


 


I’ve written several posts about how to use Local Large Language Models with C#.

But what about Small Language Models?



Well, today’s demo is an example of how to use SLMs with ONNX. So let’s start with a quick intro to Phi-3 and ONNX, and why to use ONNX. Then, let’s showcase some interesting code samples and resources.


 


In the Phi-3 C# Labs sample repo we can find several samples, including a Console Application that loads a Phi-3 Vision model with ONNX, and analyzes and describes an image.


 




 


Introduction to Phi-3 Small Language Model


The Phi-3 Small Language Model (SLM) represents a groundbreaking advancement in AI, developed by Microsoft. It’s part of the Phi-3 family, which includes the most capable and cost-effective SLMs available today. These models outperform others of similar or even larger sizes across various benchmarks, including language, reasoning, coding, and math tasks. The Phi-3 models, including the Phi-3-mini, Phi-3-small, and Phi-3-medium, are designed to be instruction-tuned and optimized for ONNX Runtime, ensuring broad compatibility and high performance.


 


You can learn more about Phi-3 in the Phi-3 Cookbook GitHub repository.



 


Introduction to ONNX


ONNX, or Open Neural Network Exchange, is an open-source format that allows AI models to be portable and interoperable across different frameworks and hardware. It enables developers to use the same model with various tools, runtimes, and compilers, making it a cornerstone for AI development. ONNX supports a wide range of operators and offers extensibility, which is crucial for evolving AI needs.


 


Why Use ONNX for Local AI Development


Local AI development benefits significantly from ONNX due to its ability to streamline model deployment and enhance performance. ONNX provides a common format for machine learning models, facilitating the exchange between different frameworks and optimizing for various hardware environments. 


 


For C# developers, this is particularly useful because we have a set of libraries specifically created to work with ONNX models. For example:



Microsoft.ML.OnnxRuntime, https://github.com/microsoft/onnxruntime

 

C# ONNX and Phi-3 and Phi-3 Vision


 


The Phi-3 Cookbook GitHub repository contains C# labs and workshop sample projects that demonstrate the use of Phi-3 mini and Phi-3-Vision models in .NET applications.


 


It showcases how these powerful models can be utilized for tasks like question-answering and image analysis within a .NET environment.




 
































  • LabsPhi301 – A sample project that uses a local Phi-3 model to ask a question. The project loads a local ONNX Phi-3 model using the Microsoft.ML.OnnxRuntime libraries. Location: .\src\LabsPhi301
  • LabsPhi302 – A sample project that implements a console chat using Semantic Kernel. Location: .\src\LabsPhi302
  • LabsPhi303 – A sample project that uses a local Phi-3 Vision model to analyze images. The project loads a local ONNX Phi-3 Vision model using the Microsoft.ML.OnnxRuntime libraries. Location: .\src\LabsPhi303
  • LabsPhi304 – A sample project that uses a local Phi-3 Vision model to analyze images. The project loads a local ONNX Phi-3 Vision model using the Microsoft.ML.OnnxRuntime libraries. The project also presents a menu with different options to interact with the user. Location: .\src\LabsPhi304

 


To run the projects, follow these steps:


 




  1. Clone the repository to your local machine.




  2. Open a terminal and navigate to the desired project. For example, let’s run LabsPhi301.



    cd .\src\LabsPhi301

     




  3. Run the project with the command



    dotnet run

     




  4. The sample project asks for user input and replies using the local model.


    The running demo is similar to this one:






 


Sample Console Application to use an ONNX model


 


Let’s take a look at the first demo application. The following code snippet is from /src/LabsPhi301/Program.cs. The main steps to use a model with ONNX are:



  • The Phi-3 model, stored in modelPath, is loaded into a Model object.

  • This model is then used to create a Tokenizer, which will be responsible for converting our text inputs into a format that the model can understand.


And this is the chatbot implementation.



  • The chatbot operates in a continuous loop, waiting for user input.

  • When a user types a question, the question is combined with a system prompt to form a full prompt.

  • The full prompt is then tokenized and passed to a Generator object.

  • The generator, configured with specific parameters, generates a response one token at a time.

  • Each token is decoded back into text and printed to the console, forming the chatbot’s response.

  • The loop continues until the user decides to exit by entering an empty string. 


 


 


 

using Microsoft.ML.OnnxRuntimeGenAI;

var modelPath = @"D:\phi3\models\Phi-3-mini-4k-instruct-onnx\cpu_and_mobile\cpu-int4-rtn-block-32";
var model = new Model(modelPath);
var tokenizer = new Tokenizer(model);

var systemPrompt = "You are an AI assistant that helps people find information. Answer questions using a direct style. Do not share more information than requested by the user.";

// chat start
Console.WriteLine(@"Ask your question. Type an empty string to Exit.");

// chat loop
while (true)
{
    // Get user question
    Console.WriteLine();
    Console.Write(@"Q: ");
    var userQ = Console.ReadLine();    
    if (string.IsNullOrEmpty(userQ))
    {
        break;
    }

    // show phi3 response
    Console.Write("Phi3: ");
    var fullPrompt = $"{systemPrompt}{userQ}";
    var tokens = tokenizer.Encode(fullPrompt);

    var generatorParams = new GeneratorParams(model);
    generatorParams.SetSearchOption("max_length", 2048);
    generatorParams.SetSearchOption("past_present_share_buffer", false);
    generatorParams.SetInputSequences(tokens);

    var generator = new Generator(model, generatorParams);
    while (!generator.IsDone())
    {
        generator.ComputeLogits();
        generator.GenerateNextToken();
        var outputTokens = generator.GetSequence(0);
        var newToken = outputTokens.Slice(outputTokens.Length - 1, 1);
        var output = tokenizer.Decode(newToken);
        Console.Write(output);
    }
    Console.WriteLine();
}

 


 



This is a great example of how you can leverage the power of Phi-3 and ONNX in a C# application to create an interactive AI experience. Please take a look at the other scenarios, and if you have any questions, we are happy to receive your feedback!


Best


Bruno Capuano


 


Note: Part of the content of this post was generated by Microsoft Copilot, an AI assistant.


 


 

Transition from unified routing diagnostics to Azure Application Insights


Azure Application Insights is now our comprehensive solution for end-to-end conversation diagnostics. As part of this advancement, we are phasing out unified routing diagnostics and integrating its capabilities into Application Insights. 

Key dates

  • Disclosure date: May 9, 2024
    We sent communications to affected customers that we are deprecating unified routing diagnostics in Dynamics 365 Customer Service.
  • End of support: July 1, 2024
    After this date, no new diagnostics records will be generated for routed conversations and records.
  • End of life: July 15, 2024
    After this date, unified routing diagnostics will be taken out of service.

Next steps

We strongly encourage customers to leverage Azure Application Insights, which will be enriched with all conversation and routing diagnostics events. Application Insights offers a flexible and cost-effective alternative with the added advantage of customization to meet business needs. We aim to iteratively improve the diagnostic capabilities for conversation lifecycle events. This ensures reliability and alignment with our commitment to cost-efficiency and user-centric innovation. Learn more about conversation diagnostics in Azure Application Insights.

Please contact your Success Manager, FastTrack representative, or Microsoft Support if you have any additional questions. 

The post Transition from unified routing diagnostics to Azure Application Insights appeared first on Microsoft Dynamics 365 Blog.

Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.