This article is contributed. See the original author and article here.
Hello, Mike Bazarewsky writing again, now on our shiny new ISV blog! My topic today is a product that hasn’t gotten a huge amount of press but actually brings some really nice capabilities to the table, especially with respect to IoT scenarios as we look to the future with Azure IoT Operations. That product is AKS Edge Essentials, or AKS-EE for short.
What did Microsoft have before AKS-EE?
AKS-EE is intended to be the “easy button” for running Linux-based and/or Windows-based containers on a Windows host, including a Windows IoT Enterprise host. It’s been possible to run Docker-hosted containers on Windows for a long time, and it’s even been possible to run orchestrators including Kubernetes on Windows for some time now. There’s even formal documentation on how to do so in Microsoft Learn.
Meanwhile, in parallel, and specific to IoT use cases, Microsoft offers Azure IoT Edge for Linux on Windows, or EFLOW for short. EFLOW offers the Azure IoT Edge container orchestrator on a Windows host by leveraging a Linux virtual machine. That virtual machine runs a customized deployment of CBL-Mariner, Microsoft’s first-party Linux distribution designed for secure, cloud-focused use cases. As an end-to-end Microsoft offering on a Microsoft platform, EFLOW is updated through Microsoft Update and, as such, “plays nice” with the rest of the Windows ecosystem, bringing the benefits of that ecosystem while allowing targeted Linux containers to run with a limited amount of “ceremony”.
What does AKS-EE bring to the table?
Taking this information all into account, it’s reasonable to ask “What are the gaps? Why would it make sense to bring another product into the space?” The answer is two-fold:
For some ISVs, particularly those coming from traditional development models (e.g. IoT developers, web service developers), the move to “cloud native” technologies such as containers is a substantial shift on its own, before worrying about deployment and management of an orchestrator. However, an orchestrator is still something those ISVs need in order to achieve scalability and observability as they work through their container “modernization” journey.
EFLOW works very, very well for its intended target, which is Azure IoT Edge. However, that is a specialized use case that does not extend well to general application workloads.
There is a hidden point here as well. Windows containers are a popular option in many organizations, but Linux containers are more common. At the same time, many enterprises (and thus, ISV customers) prefer the management, hardware support, and long-term OS support paths that Windows offers. Although technologies such as Windows container hosting, Windows Subsystem for Linux, and Hyper-V allow Linux containers to run on a Windows host, they have different levels of complexity and management overhead, and in some situations, they are not practical.
The end result of all of this is that there is a need in the marketplace for a low-impact, easily-deployed, easily-updated container hosting solution for Linux containers on Windows hosts that supports orchestration. This is especially true as we look at a solution like Azure IoT Operations, which is the next-generation, Kubernetes-centric Azure IoT platform, but is also true for customers looking to move from the simpler orchestration offered by EFLOW to the more sophisticated orchestration offered by Kubernetes.
Besides bringing that to the table, AKS-EE builds on top of the standard k3s or k8s implementations, which means that popular Kubernetes management tools such as k9s can be used.
It can be Azure Arc enabled, allowing centralized management of the solution in the Azure Portal, Azure PowerShell, or Azure CLI. Azure Arc supports this through an outgoing connection from the cluster to the Azure infrastructure, which means it’s possible to remotely manage the environment, including deploying workloads, collecting telemetry and metrics, and so on, without needing incoming access to the host or the cluster. And, because it’s possible to manage Windows IoT Enterprise using Azure Arc, even the host can be connected to remotely, with centrally managed telemetry and updates (including AKS-EE through Microsoft Update). This means that it’s possible to have an end-to-end centrally managed solution across a fleet of deployment locations, and it means an ISV can offer “management as a service”. An IoT ISV can even offer packaged hardware offerings with Windows IoT Enterprise, AKS-EE, and their workload, all centrally managed through Azure Arc, which is an extremely compelling and powerful concept!
What if I am an IoT Edge user using EFLOW today?
As you might be able to determine from the way I’ve presented AKS-EE, one possible way to think about AKS-EE is as a direct replacement for EFLOW in IoT Edge scenarios. The AKS-EE Product Group is finishing guidance on migrating from EFLOW to AKS-EE and it will be published as soon as it is completed.
Conclusion
Hopefully, this short post gives you a better understanding of the “why” of AKS-EE as an offering and how it relates to some other offerings in the Microsoft space. If you’re looking to evaluate AKS-EE, the next step would be to review the Quickstart guide to get started!
Looking forward, if you are interested in production AKS-EE architecture, FastTrack ISV and FastTrack for Azure (Mainstream) have worked with multiple AKS-EE customers at this point, from single host deployments to multi-host scale-out deployments, including leveraging both the Linux and the Windows node capabilities of AKS-EE and leveraging the preview GPU support in the product. Take a look at those sites to learn more about how we can help you with derisking your AKS-EE deployment, or help you decide if AKS-EE is in fact the right tool for you!
This article is contributed. See the original author and article here.
Prologue – The creation of a new proxy with Linux, Rust, and OSS
In this introductory blog to the new Azure Front Door next generation platform, we will go over the motivations, design choices and learnings from this undertaking which helped us successfully achieve massive gains in scalability, security and resiliency.
Introduction
Azure Front Door is a global, scalable, and secure entry point for caching and acceleration of your web content. It offers a range of features such as load balancing, caching, web application firewall, and a rich rules engine for request transformation. Azure Front Door operates at the edge of Microsoft’s global network and handles trillions of requests per day from millions of clients around the world.
Azure Front Door, originally built upon a Windows-based proxy, has been a critical component in serving and protecting traffic for Microsoft’s core internet services. As the commercial offering of Azure Front Door expanded, and with the ever-evolving landscape of security and application delivery, we recognized the need for a new platform. This new platform would address the growing demands of scale, performance, cost-effectiveness, and innovation, ensuring we are able to meet the challenging scale and security demands from our largest enterprise customers. For our next-generation Azure Front Door platform, we opted to build it on Linux and embrace the open-source software community. The new edge platform was designed to incorporate learnings from the previous proxy implementation, while allowing us to accelerate innovation and deliver enhanced value to our customers. We will delve into the key design and development decisions that shaped the next generation proxy, and a modern edge platform that meets innovation, resiliency, scale and performance requirements of Azure and Microsoft customers.
Why Linux and Open Source?
A key choice that we made during the development of the new proxy platform was to use Linux as the operating system for the proxy. Linux offers a mature and stable platform for running high-performance network applications and it has a rich ecosystem of tools and libraries for network programming which allows us to leverage the expertise and experience of the open-source community.
Another reason for choosing Linux was that it offers a vibrant ecosystem with containers and Kubernetes for deploying and managing the proxy instances. The use of containers and Kubernetes offers many benefits for cloud-native applications, such as faster and easier deployment, scaling, and updates, as well as better resource utilization and isolation. By using containers and Kubernetes, we were also able to take advantage of the existing infrastructure and tooling that Microsoft has built for running Linux-based services on Azure.
The next decision that we made was to use open-source software as the basis of the platform. We selected high-quality and widely used open-source software for tasks like TLS termination, caching, and basic HTTP proxying capabilities. By using existing and reliable open-source software as the foundation of the new edge platform, we can concentrate on developing the features and capabilities that are unique to Azure Front Door. We also gain from continuous development and enhancement by the open-source community.
How did we build the next generation proxy?
While open-source software provides a solid foundation for the new proxy, it does not cover all the features and capabilities that we need for Azure Front Door. Azure Front Door is a multi-tenant service that supports many custom proxy features that are not supported by any open-source proxy. Building the proxy from scratch presented multiple design challenges, but in this blog we will focus on the top two that helped build the foundation of the new proxy. We will discuss other aspects such as resilient architecture and protection features in later parts of this blog series.
Challenge 1: Multi-Tenancy
The first major challenge in developing Azure Front Door as a multi-tenant service was ensuring that the proxy could efficiently manage the configurations of hundreds of thousands of tenants, far surpassing the few hundred tenants typically supported by most open-source proxies. Each tenant’s configuration dictates how the proxy handles their HTTP traffic, making the configuration lookup an extremely critical aspect of the system. This requires all tenant configurations to be loaded into memory for high performance.
Processing configuration for hundreds of thousands of tenants means that the system needs to handle hundreds of config updates every second, which requires dynamic updates to the data path without disrupting any traffic. To address this, Azure Front Door adopted a binary configuration format which supports zero-copy deserialization and ensures fast lookup times. This choice is crucial not only for efficiently managing current tenant configurations but also for scaling up to accommodate future growth, potentially increasing the customer base tenfold. Additionally, to handle dynamic updates to the customer configuration delivered by Azure Front Door’s configuration pipeline, a custom module was developed to asynchronously monitor and update the config in-memory.
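The binary format and configuration pipeline themselves are internal to Azure Front Door, but the general shape of the pattern described above — all tenant configuration resident in memory, cheap per-request lookups, and updates applied out of band without touching in-flight requests — can be sketched in plain Rust. The names below (`TenantConfig`, `ConfigStore`) are invented for illustration and are not the real data structures:

```rust
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

/// Illustrative per-tenant routing configuration (not the real AFD schema).
#[derive(Debug)]
struct TenantConfig {
    origin_host: String,
    caching_enabled: bool,
}

/// In-memory store: lookups clone an Arc (cheap), updates swap entries
/// behind a short write lock so in-flight requests keep their old snapshot.
#[derive(Default)]
struct ConfigStore {
    tenants: RwLock<HashMap<String, Arc<TenantConfig>>>,
}

impl ConfigStore {
    /// Hot-path lookup: a read lock plus an Arc clone, no per-request deserialization.
    fn lookup(&self, tenant_id: &str) -> Option<Arc<TenantConfig>> {
        self.tenants.read().unwrap().get(tenant_id).cloned()
    }

    /// Control-path update: applied asynchronously by a config-watcher task,
    /// never touching requests that already hold a config snapshot.
    fn upsert(&self, tenant_id: &str, config: TenantConfig) {
        self.tenants
            .write()
            .unwrap()
            .insert(tenant_id.to_string(), Arc::new(config));
    }
}

fn main() {
    let store = ConfigStore::default();
    store.upsert(
        "contoso",
        TenantConfig { origin_host: "origin.contoso.example".into(), caching_enabled: true },
    );

    // A request thread resolves the tenant once and keeps the snapshot.
    if let Some(cfg) = store.lookup("contoso") {
        println!("route to {} (caching: {})", cfg.origin_host, cfg.caching_enabled);
    }
}
```

A production data path would pair this with zero-copy deserialization of the binary config blob and lock-free swapping rather than a plain `RwLock`, but the ownership model is the same: request handlers hold an `Arc` snapshot of a tenant’s config, so a concurrent update never disrupts a request that is already in flight.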
Challenge 2: Customer business logic
One of the most widely adopted features of Azure Front Door is our Rules Engine, which allows our customers to set up custom rules tailored for their traffic. Building the proxy from scratch meant that we had to enable this extremely powerful use case on top of the open-source proxy, which brings us to our second challenge. Rather than creating fixed modules for each rule, we chose to innovate.
We developed a new domain-specific language (DSL) named AXE (Arbitrary eXecution Engine), specifically designed to add and evolve data plane capabilities swiftly. AXE is declarative and expressive, enabling the definition and execution of data plane processing logic in a structured yet flexible manner. It represents the rules as a directed acyclic graph (DAG), where each node signifies an operation or condition, and each edge denotes data or control flow. This allows AXE to support a vast array of operations and conditions, including:
Manipulating headers, cookies, and query parameters
Regex processing
URL rewriting
Filtering and transforming requests and responses
Invoking external services
These capabilities are integrated at various phases of the request processing cycle, such as parsing, routing, filtering, and logging.
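AXE itself is an internal Azure Front Door component, but the core idea of representing a rule as a graph of condition and operation nodes and walking that graph per request can be illustrated with a minimal sketch. The node types and the sample rule below are hypothetical and far simpler than what AXE actually supports:

```rust
use std::collections::HashMap;

/// Minimal request model for the sketch.
struct Request {
    path: String,
    headers: HashMap<String, String>,
}

/// A node either tests the request and picks the next edge, or mutates the
/// request and moves on. (Invented for illustration; this is not AXE.)
enum Node {
    Condition { check: fn(&Request) -> bool, on_true: usize, on_false: usize },
    Operation { apply: fn(&mut Request), next: Option<usize> },
}

/// Walk the graph from node 0 until a node has no outgoing edge.
fn evaluate(nodes: &[Node], request: &mut Request) {
    let mut current = Some(0);
    while let Some(idx) = current {
        current = match &nodes[idx] {
            Node::Condition { check, on_true, on_false } => {
                Some(if check(request) { *on_true } else { *on_false })
            }
            Node::Operation { apply, next } => {
                apply(request);
                *next
            }
        };
    }
}

fn main() {
    // Rule: if the path starts with /legacy, rewrite it and add a header.
    let rules = vec![
        Node::Condition {
            check: |r| r.path.starts_with("/legacy"),
            on_true: 1,
            on_false: 2,
        },
        Node::Operation {
            apply: |r| r.path = r.path.replacen("/legacy", "/v2", 1),
            next: Some(2),
        },
        Node::Operation {
            apply: |r| {
                r.headers.insert("x-rules-engine".into(), "applied".into());
            },
            next: None,
        },
    ];

    let mut req = Request { path: "/legacy/catalog".into(), headers: HashMap::new() };
    evaluate(&rules, &mut req);
    println!("{} {:?}", req.path, req.headers);
}
```

The real engine interprets a declarative script rather than hard-coded function pointers, and it runs sandboxed with asynchronous operations, but the traversal idea is the same: each node either tests the request or transforms it, and the edges determine what runs next.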
AXE is implemented as a custom module in the new proxy, where it interprets and executes AXE scripts for each incoming request. The module is built on a fast, lightweight interpreter that operates in a secure, sandboxed environment, granting access to necessary proxy variables and functions. It also supports asynchronous and non-blocking operations, vital for non-disruptive external service interactions and timely processing.
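One aspect worth calling out from the description above is the handling of external calls: a rule that invokes an external service must not be able to stall request processing indefinitely. AXE does this on the proxy’s asynchronous runtime; as a rough stand-in (invented for illustration, not the AXE implementation), the sketch below bounds an external lookup with an explicit time budget and falls back to a default outcome when the budget expires:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Hypothetical outcome of calling an external service from a rule.
enum ExternalCall {
    Completed(String),
    TimedOut,
}

/// Invoke an external lookup but never wait longer than the given budget.
fn call_with_budget(budget: Duration) -> ExternalCall {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // Stand-in for an external service call (e.g., a geo or auth lookup).
        thread::sleep(Duration::from_millis(20));
        let _ = tx.send("allow".to_string());
    });

    match rx.recv_timeout(budget) {
        Ok(decision) => ExternalCall::Completed(decision),
        Err(_) => ExternalCall::TimedOut,
    }
}

fn main() {
    match call_with_budget(Duration::from_millis(50)) {
        ExternalCall::Completed(d) => println!("external decision: {d}"),
        ExternalCall::TimedOut => println!("fall back to the default rule outcome"),
    }
}
```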
This innovative approach to building and integrating the Rules Engine using AXE ensures that Azure Front Door remains a cutting-edge solution, capable of meeting and exceeding the dynamic requirements of our customers. Though AXE was developed to support the Rules Engine feature of Azure Front Door, it proved so flexible that we now use it to power our WAF module as well.
Why Rust?
Another important decision that we made while building the next generation proxy was to write new code in Rust, a modern and safe systems programming language. All the components we mentioned in the section above are either written in Rust or being actively rewritten in Rust. Rust is a language that offers high performance, reliability, and productivity, and it is gaining popularity and adoption in the network programming community. Rust has several features and benefits that make it a great choice for the next generation proxy, such as:
Rust has a powerful and expressive type system that helps us write correct and robust code. Rust enforces strict ownership and borrowing rules, largely at compile time, to prevent common errors and bugs, such as use-after-free bugs, buffer overflows, null pointer dereferences, and data races. Rust also supports advanced features found in modern high-level languages, such as generics, traits, and macros, that allow us to write generic and reusable code.
Rust has a concise and consistent syntax that avoids unnecessary boilerplate and encourages common conventions and best practices. Rust also has a rich standard library that provides a wide range of useful and high-quality functionality with an emphasis on safety and performance, such as collections, iterators, string manipulation, error handling, networking, threading, and asynchronous execution abstractions.
Rust has a strong and vibrant community that supports and contributes to the language and its ecosystem. It has a large and growing number of users and developers who share their feedback, experience, and knowledge through various channels, such as forums, blogs, podcasts, and conferences. Rust also has a thriving and diverse ecosystem of tools and libraries that enhance and extend the language and its capabilities, such as IDEs, debuggers, test frameworks, web frameworks, network libraries, and AI/ML libraries.
We used Rust to write most of the new code for the proxy. By using Rust, we were able to write highly performant and reliable code for the proxy, while also improving our development velocity by leveraging existing Rust libraries. Rust helped us avoid many errors and bugs that could have compromised the security and stability of the proxy, and it also made our code more readable and maintainable.
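As a small, hedged illustration of the kind of generic, reusable code these language features encourage (not taken from the Azure Front Door codebase; all names are invented), the sketch below uses a trait plus generics so the same pipeline code works for any request transform, with failures surfaced as `Result` values that the compiler forces callers to handle:

```rust
/// A reusable abstraction over anything that can transform a request path.
/// Illustrative only; these names do not come from the Azure Front Door codebase.
trait PathTransform {
    fn transform(&self, path: &str) -> Result<String, String>;
}

struct PrefixRewrite {
    from: &'static str,
    to: &'static str,
}

impl PathTransform for PrefixRewrite {
    fn transform(&self, path: &str) -> Result<String, String> {
        if path.starts_with(self.from) {
            Ok(path.replacen(self.from, self.to, 1))
        } else {
            Err(format!("path {path} does not start with {}", self.from))
        }
    }
}

/// Generic over any transform type: errors short-circuit via try_fold.
fn apply_all<T: PathTransform>(transforms: &[T], path: &str) -> Result<String, String> {
    transforms
        .iter()
        .try_fold(path.to_string(), |acc, t| t.transform(&acc))
}

fn main() {
    let transforms = [
        PrefixRewrite { from: "/legacy", to: "/v2" },
        PrefixRewrite { from: "/v2", to: "/v3" },
    ];
    match apply_all(&transforms, "/legacy/catalog") {
        Ok(path) => println!("rewritten to {path}"),
        Err(e) => eprintln!("rule skipped: {e}"),
    }
}
```

Because `apply_all` is generic, the compiler monomorphizes it for each concrete transform type, so the abstraction costs nothing at runtime — a property that matters on a hot proxy data path.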
Conclusion
The Azure Front Door team embarked on this journey to overhaul the entire platform a few years ago by rewriting the proxy and changing the infrastructure hosting the proxy. This effort enabled us to more than double our density and throughput, with significant enhancements to our resiliency and scalability. We have successfully completed the transition of Azure Front Door customers from the old platform to the new one without any disruption. This challenging task was like changing the wings of a plane while it is airborne.
In this blog post, we shared some of the design and development challenges and decisions that we made while building the next generation edge platform for Azure Front Door that is based on Linux and uses Rust and OSS to extend and customize its functionality. We will share more details about AXE and other data plane and infrastructure innovations in later posts.
If you want to work with us and help us make the internet better and safer, we have some great opportunities for you. The Azure Front Door team is looking to hire more engineers in several locations, including the USA, Australia, and Ireland. You can see more details and apply online at the Microsoft careers website. We hope to hear from you and welcome you to our team.
This article is contributed. See the original author and article here.
One year ago, generative AI burst onto the scene and for the first time since the smartphone, people began to change the way they interact with technology. People are bringing AI to work at an unexpected scale — and now the big question is, how’s it going?
As AI becomes ubiquitous in the workplace, employees and businesses alike are under extreme pressure. The pace and intensity of work, which accelerated during the pandemic, has not eased, so employees are bringing their own AI to work. Leaders agree AI is a business imperative — and feel the pressure to show immediate ROI — but many lack a plan and vision to go from individual impact to applying AI to drive the bottom line.
At the same time, the labor market is set to shift and there’s a new AI economy. While some professionals worry AI will replace their job, the data tells a more nuanced story — of a hidden talent shortage, more employees eyeing a career change, and a massive opportunity for those willing to skill up.
“AI is democratizing expertise across the workforce,” said Satya Nadella, Chairman and Chief Executive Officer, Microsoft. “Our latest research highlights the opportunity for every organization to apply this technology to drive better decision-making, collaboration — and ultimately business outcomes.”
For our fourth annual Work Trend Index, out today, we partnered with LinkedIn for the first time on a joint report so we could provide a comprehensive view of how AI is not only reshaping work, but the labor market more broadly. We surveyed 31,000 people across 31 countries, identified labor and hiring trends from LinkedIn, analyzed trillions of Microsoft 365 productivity signals and conducted research with Fortune 500 customers. The data points to insights every leader and professional needs to know — and actions they can take — when it comes to AI’s implications for work.
1. Employees want AI at work — and won’t wait for companies to catch up.
Three in four knowledge workers (75%) now use AI at work. Employees, overwhelmed and under duress, say AI saves time, boosts creativity and allows them to focus on their most important work. While 79% of leaders agree AI adoption is critical to remain competitive, 59% worry about quantifying the productivity gains of AI and 60% worry their company lacks a vision and plan to implement it. While leaders feel the pressure to turn individual productivity gains into organizational impact, employees aren’t waiting to reap the benefits: 78% of AI users are bringing their own AI tools to work. The opportunity for every leader is to channel this momentum into ROI.
2. For employees, AI raises the bar and breaks the career ceiling.
We also see AI beginning to impact the job market. While AI and job loss are top of mind for some, our data shows more people are eyeing a career change, there are jobs available, and employees with AI skills will get first pick. The majority of leaders (55%) say they’re worried about having enough talent to fill open roles this year, with leaders in cybersecurity, engineering, and creative design feeling the pinch most.
And professionals are looking. Forty-six percent across the globe are considering quitting in the year ahead — an all-time high since the Great Reshuffle of 2021 — and a separate LinkedIn study found U.S. numbers to be even higher, with 85% eyeing career moves. While two-thirds of leaders wouldn’t hire someone without AI skills, only 39% of users have received AI training from their company. So, professionals are skilling up on their own. As of late last year, we’ve seen a 142x increase in LinkedIn members adding AI skills like Copilot and ChatGPT to their profiles and a 160% increase in non-technical professionals using LinkedIn Learning courses to build their AI aptitude.
In a world where AI mentions in LinkedIn job posts drive a 17% bump in application growth, it’s a two-way street: Organizations that empower employees with AI tools and training will attract the best talent, and professionals who skill up will have the edge.
3. The rise of the AI power user — and what they reveal about the future.
In the research, four types of AI users emerged on a spectrum — from skeptics who rarely use AI to power users who use it extensively. Compared to skeptics, AI power users have reoriented their workdays in fundamental ways, reimagining business processes and saving over 30 minutes per day. Over 90% of power users say AI makes their overwhelming workload more manageable and their work more enjoyable, but they aren’t doing it on their own.
Power users work for a different kind of company. They are 61% more likely to have heard from their CEO on the importance of using generative AI at work, 53% more likely to receive encouragement from leadership to consider how AI can transform their function and 35% more likely to receive tailored AI training for their specific role or function.
“AI is redefining work and it’s clear we need new playbooks,” said Ryan Roslansky, CEO of LinkedIn. “It’s the leaders who build for agility instead of stability and invest in skill building internally that will give their organizations a competitive advantage and create more efficient, engaged and equitable teams.”
The prompt box is the new blank page
We hear one consistent piece of feedback from our customers: talking to AI is harder than it seems. We’ve all learned how to use a search engine, identifying the right few words to get the best results. AI requires more context — just like when you delegate work to a direct report or colleague. But for many, staring down that empty prompt box feels like facing a blank page: Where should I even start?
Today, we’re announcing Copilot for Microsoft 365 innovations to help our customers answer that question.
If you’ve got the start of a prompt, Copilot will offer to auto-complete it to get to a better result, suggesting something more detailed to help ensure you get what you’re looking for. That not only speeds things up, it offers you new ideas for how to leverage Copilot’s power.
Other times, you know exactly what you want — you’re just not sure how to ask. With its new rewrite feature, Copilot turns a basic prompt into a rich one with the click of a button, turning everyone into a prompt engineer.
Catch Up, a new chat interface that surfaces personal insights based on your recent activity, provides responsive recommendations, like “You have a meeting with the sales VP on Thursday. Let’s get you prepared — click here to get detailed notes.”
We also know that every role, team and function has unique needs and ways of working. To help create prompts for exactly the work you do, you’ll soon be able to create, publish and manage prompts in Copilot Lab that are expressly tailored to your closest teams.
These features will be available in the coming months, and in the future, we’ll take it a step further, with Copilot asking you questions to get to your best work yet.
LinkedIn has also made more than 50 learning courses free to empower professionals at all levels to advance their AI aptitude.
Head to WorkLab for the full Work Trend Index Report, and head to LinkedIn to hear more from LinkedIn’s Chief Economist, Karin Kimbrough, on how AI is reshaping the labor market.
And for all the blogs, videos and assets related to today’s announcements, please visit our microsite.
This article is contributed. See the original author and article here.
Cloud computing and the rapid pace of emerging technologies have made identifying and upskilling talent an increased challenge for organizations. And AI is further widening this skills gap. A recent IDC infographic, commissioned by Microsoft, highlights that organizations are adopting AI, but a shortage of skilled employees is hindering their AI-based initiatives, with 52% citing a lack of skilled workers as the top blocker. [1]
We’re seeing more organizations use a skills-first approach to address the challenge of attracting, hiring, developing, and redeploying talent. This new shift in talent management emphasizes a person’s skills and competencies—in addition to degrees, job histories, and job titles.
At Microsoft, we value our people, their skills, and the impact they make. We follow the skills-first approach for our employees’ development, and we want to enable you to do the same as your organization pursues the opportunities for growth and innovation presented by cloud and AI. That’s why we’ve evolved our Microsoft Credentials, to give you the tools you need as you invest in and expand your workforce. Our credentials offer organizations the flexibility to grow the skills needed for critical roles with Microsoft Certifications, and the agility to expand the skills needed for real-world business opportunities with Microsoft Applied Skills.
Take on high priority projects with Applied Skills
We developed Microsoft Applied Skills, new verifiable credentials that validate specific real-world skills, to help you address your skills gaps and empower your employees with the in-demand expertise they need. Applied Skills credentials are earned through interactive lab-based assessments on Microsoft Learn, offering flexibility with optional training that accommodates individual learning journeys. Your team members can earn credentials at their own pace, aligning with project timelines.
Recently, we’ve received outstanding feedback regarding the significant value-add of Applied Skills from Telstra, Australia’s leading telecommunications and technology company: “There are so many opportunities for us to leverage this across our skilled workforce at Telstra,” notes Cloud and Infrastructure Lead Samantha Davies.
Applied Skills can also be useful for preparing teams before they start work on highly technical new projects. Charlyn Tan, Senior Chapter Lead, Cloud Engineering, points out that, “Being in a company with multiple technology stacks integrating and interacting with each other, it is important to have multiple scenario-based learnings for our people to upskill and experiment before they jump into the actual production environment.”
Watch this video to see how Telstra plans to integrate Applied Skills as part of their broader skilling strategy moving forward.
Here are a few more ways Microsoft Applied Skills can help your organization:
Identify talent for projects: Whether you want to maximize the skill sets of your own team members or recruit new talent, Applied Skills helps you identify the right people with the specific skills required for critical projects.
Accelerate the release of new projects or products: Applied Skills can help your team quickly acquire, prove, and apply in-demand skills so projects move forward with increased success and reduced cost.
Retain and upskill talent: Applied Skills can help team members demonstrate their technical expertise so they can advance in their careers and make an impact on projects that involve emerging technologies, including AI.
Snapshot of Microsoft Applied Skills benefits to organizations
Upskill your teams in AI—and more—with Applied Skills
Applied Skills credentials provide a new way for employees to upskill for key AI transformation projects and help you assess how your organization can best leverage AI.
Explore our current portfolio of Applied Skills and certifications that are focused specifically on AI (with more to come):
Snapshot of Microsoft Applied Skills credentials currently available
Learn more
Explore Applied Skills today and invest in a nimble and resilient workforce ready to take on new projects, no matter how specialized. Through a variety of resources available on Microsoft Learn, we ensure that we’re partnering with organizations like yours to help you address challenges and maximize opportunities with comprehensive credentials and skilling solutions.
Be sure to follow us on X and LinkedIn, and subscribe to “The Spark,” our LinkedIn newsletter, to stay updated on new Applied Skills as they are released.
This article is contributed. See the original author and article here.
Microsoft Defender for Cloud becomes the first CNAPP to protect enterprise-built AI applications across the application lifecycle
The AI transformation has accelerated with the introduction of generative AI (GenAI), unlocking a wide range of innovations with intelligent applications. Organizations are choosing to develop new GenAI applications and embed AI into existing applications to increase business efficiency and productivity.
Attackers are increasingly looking to exploit applications to alter the designed purpose of the AI model with new attacks like prompt injections, wallet attacks, model theft, and data poisoning, while increasing susceptibility to known risks such as data breaches and denial of service. Security teams need to be prepared and ensure they have the proper security controls for their AI applications and detections that address the new threat landscape.
As a market-leading cloud-native application protection platform (CNAPP), Microsoft Defender for Cloud helps organizations secure their hybrid and multicloud environments from code-to-cloud. We are excited to announce the preview of new security posture and threat protection capabilities to enable organizations to protect their enterprise-built GenAI applications throughout the entire application lifecycle.
With the new security capabilities to protect AI applications, security teams can now:
Continuously discover GenAI application components and AI artifacts from code to cloud
Explore and remediate risks to GenAI applications with built-in recommendations to strengthen security posture
Identify and remediate toxic combinations in GenAI applications using attack path analysis
Hunt and investigate attacks in GenAI apps with built-in integration with Microsoft Defender
Start secure with AI security posture management
With 98% of organizations using public cloud embracing a multicloud strategy[1], many of our customers use Microsoft Defender Cloud Security Posture Management (CSPM) in Defender for Cloud to get visibility across their multicloud environments and address cloud sprawl. With the complexities of AI workloads and its configurations across models, SDKs, and connected datastores – visibility into their inventory and the risks associated with them is more important than ever.
To enable customers to gain a better understanding of their deployed AI applications and get ahead of potential threats, we’re announcing the public preview of AI security posture management (AI-SPM) as part of Defender CSPM.
Defender CSPM can automatically and continuously discover deployed AI workloads with agentless and granular visibility into presence and configurations of AI models, SDKs, and technologies used across AI services such as Azure OpenAI Service, Azure Machine Learning, and Amazon Bedrock.
The new AI posture capabilities in Defender CSPM discover GenAI artifacts by scanning code repositories for Infrastructure-as-Code (IaC) misconfigurations and scanning container images for vulnerabilities. With this, security teams have full visibility of their AI stack from code to cloud and can detect and fix vulnerabilities and misconfigurations before deployment. In the example below, the cloud security explorer can be used to discover several running containers across clouds using LangChain libraries with known vulnerabilities.
Using the cloud security explorer in Defender for Cloud to discover container images with CVEs on their AI-libraries that are already deployed in containers in Azure, AWS and GCP.
By mapping out AI workloads and synthesizing security insights such as identity, data security, and internet exposure, Defender CSPM continuously surfaces contextualized security issues and suggests risk-based security recommendations tailored to prioritize critical gaps across your AI workloads. Relevant security recommendations also appear within the Azure OpenAI resource itself in Azure portal, providing developers or workload owners direct access to recommendations and helping remediate faster.
Recommendations and alerts surfaced directly in the resource page of Azure OpenAI in the Azure portal, aiming to meet business users and resource owners directly.
Grounding and fine-tuning are top of mind for organizations to infuse their GenAI with the relevant business context. Our attack path analysis capability can identify sophisticated risks to AI workloads, including data security scenarios where grounding or fine-tuning data is exposed to the internet through lateral movement and is susceptible to data poisoning.
This attack path has identified that a VM with vulnerabilities has access to a data store that was tagged as a grounding resource for GenAI applications. This opens the data store to risks such as data poisoning.
A common oversight around grounding happens when the GenAI model is grounded with sensitive data, which can open the door to sensitive data leaks. It is important to follow architecture and configuration best practices to avoid unnecessary risks such as unauthorized or excessive data access. Our attack paths will find sensitive data stores that are linked to AI resources and grant wide privileges, allowing security teams to focus their attention on the top recommendations and remediations to mitigate this risk.
This attack path has captured that the GenAI application is grounded with sensitive data and is internet exposed, making the data susceptible to leakage if proper guardrails are not in place.
Furthermore, attack path analysis in Defender CSPM can discover risks for multicloud scenarios, such as an AWS workload using an Amazon Bedrock model, and cross-cloud, mixed stacks that are typical architectures where the data and compute resources are in GCP or AWS and leverage Azure OpenAI model deployments.
An attack path surfacing vulnerabilities in an Azure VM that has access to an Amazon account with an active Bedrock service. These kinds of attack paths are easy to miss given their hybrid cloud nature.
Stay secure in runtime with threat protection for AI workloads
With organizations racing to embed AI in their enterprise-built applications, security teams need to be prepared with threat protection tailored to the emerging threats facing AI workloads. The potential attack techniques targeting AI applications do not revolve around the AI model alone, but rather the entire application as well as the training and grounding data it can leverage.
To complement our posture capabilities, today we are thrilled to announce the limited public preview of threat protection for AI workloads in Microsoft Defender for Cloud. The new threat protection offering leverages a native integration with Azure OpenAI Service, Azure AI Content Safety prompt shields, and Microsoft threat intelligence to deliver contextual and actionable security alerts. Threat protection for AI workloads allows security teams to monitor their Azure OpenAI-powered applications in runtime for malicious activity associated with direct and indirect prompt injection attacks, sensitive data leaks, and data poisoning, as well as wallet abuse or denial of service attacks.
GenAI applications are commonly grounded with organizational data; if sensitive data is held in the same data store, it can accidentally be shared or solicited via the application. In the alert below we can see an attempt to exfiltrate sensitive data using direct prompt injection on an Azure OpenAI model deployment. By leveraging the evidence provided, SOC teams can investigate the alert, assess the impact, and take precautionary steps to limit users’ access to the application or remove the sensitive data from the grounding data source.
The sensitive data that was passed in the response was detected and surfaced as an alert in Defender for Cloud.
Defender for Cloud has built-in integrations with Microsoft Defender XDR, so security teams can view the new security alerts related to AI workloads in the Defender XDR portal. This gives more context to those alerts and allows correlation across alerts for cloud resources, devices, and identities. Security teams can also use Defender XDR to understand the attack story and related malicious activities associated with their AI applications by exploring correlations of alerts and incidents.
An incident in Microsoft Defender XDR detailing 3 separate Defender for Cloud alerts originating from the same IP targeting the Azure OpenAI resource – sensitive data leak, credential theft and jailbreak detections.
Learn more about securing AI applications with Defender for Cloud
Get started with AI security posture management in Defender CSPM
Get started with threat protection for AI workloads in Defender for Cloud
Get access to threat protection for AI workloads in Defender for Cloud in preview
Read more about securing your AI transformation with Microsoft Security