This article is contributed. See the original author and article here.
Microsoft Defender for Cloud becomes the first CNAPP to protect enterprise-built AI applications across the application lifecycle
The AI transformation has accelerated with the introduction of generative AI (GenAI), unlocking a wide range of innovations with intelligent applications. Organizations are choosing to develop new GenAI applications and embed AI into existing applications to increase business efficiency and productivity.
Attackers are increasingly looking to exploit applications to alter the designed purpose of the AI model with new attacks like prompt injections, wallet attacks, model theft, and data poisoning, while increasing susceptibility to known risks such as data breaches and denial of service. Security teams need to be prepared and ensure they have the proper security controls for their AI applications and detections that address the new threat landscape.
As a market-leading cloud-native application protection platform (CNAPP), Microsoft Defender for Cloud helps organizations secure their hybrid and multicloud environments from code-to-cloud. We are excited to announce the preview of new security posture and threat protection capabilities to enable organizations to protect their enterprise-built GenAI applications throughout the entire application lifecycle.
With the new security capabilities to protect AI applications, security teams can now:
- Continuously discover GenAI application components and AI-artifacts from code to cloud.
- Explore and remediate risks to GenAI applications with built-in recommendations to strengthen security posture
- Identify and remediate toxic combinations in GenAI applications using attack path analysis
- Detect on GenAI applications powered by Azure AI Content Safety prompt shields, Microsoft threat intelligence signals, and contextual activity monitoring
- Hunt and investigate attacks in GenAI apps with built-in integration with Microsoft Defender
Start secure with AI security posture management
With 98% of organizations using public cloud embracing a multicloud strategy[1], many of our customers use Microsoft Defender Cloud Security Posture Management (CSPM) in Defender for Cloud to get visibility across their multicloud environments and address cloud sprawl. With the complexities of AI workloads and its configurations across models, SDKs, and connected datastores – visibility into their inventory and the risks associated with them is more important than ever.
To enable customers to gain a better understanding of their deployed AI applications and get ahead of potential threats – we’re announcing the public preview of AI security posture management (AI-SPM) as part of Defender CSPM.
Defender CSPM can automatically and continuously discover deployed AI workloads with agentless and granular visibility into presence and configurations of AI models, SDKs, and technologies used across AI services such as Azure OpenAI Service, Azure Machine Learning, and Amazon Bedrock.
The new AI posture capabilities in Defender CSPM discover GenAI artifacts by scanning code repositories for Infrastructure-as-Code (IaC) misconfigurations and scanning container images for vulnerabilities. With this, security teams have full visibility of their AI stack from code to cloud and can detect and fix vulnerabilities and misconfigurations before deployment. In the example below, the cloud security explorer can be used to discover several running containers across clouds using LangChain libraries with known vulnerabilities.
By mapping out AI workloads and synthesizing security insights such as identity, data security, and internet exposure, Defender CSPM continuously surfaces contextualized security issues and suggests risk-based security recommendations tailored to prioritize critical gaps across your AI workloads. Relevant security recommendations also appear within the Azure OpenAI resource itself in Azure portal, providing developers or workload owners direct access to recommendations and helping remediate faster.
Grounding and fine tuning are top of mind for organizations to infuse their GenAI with the relevant business context. Our attack path analysis capability can identify sophisticated risks to AI workloads including data security scenarios where grounding or fine-tuning data is exposed to the internet through lateral movement and is susceptible to data poisoning.
A common oversight around grounding happens when the GenAI model is grounded with sensitive data and could pose an opening to sensitive data leaks. It is important to follow architecture and configuration best practices to avoid unnecessary risks such as unauthorized or excessive data access. Our attack paths will find sensitive data stores that are linked to AI resources and extend wide privileges. This will allow security teams to focus their attention on the top recommendations and remediations to mitigate this.
Furthermore, attack path analysis in Defender CSPM can discover risks for multicloud scenarios, such as an AWS workload using an Amazon Bedrock model, and cross-cloud, mixed stacks that are typical architectures where the data and compute resources are in GCP or AWS and leverage Azure OpenAI model deployments.
Stay secure in runtime with threat protection for AI workloads
With organizations racing to embed AI as part of their enterprise-built applications, security teams need to be prepared with tailored threat protection to emerging threats to AI workloads. The potential attack techniques targeting AI applications do not revolve around the AI model alone, but rather the entire application as well as the training and grounding data it can leverage.
To complement our posture capabilities, today we are thrilled to announce the limited public preview of threat protection for AI workloads in Microsoft Defender for Cloud. The new threat protection offering leverages a native integration Azure OpenAI Service, Azure AI Content Safety prompt shields and Microsoft threat intelligence to deliver contextual and actionable security alerts. Threat protection for AI workloads allows security teams to monitor their Azure OpenAI powered applications in runtime for malicious activity associated with direct and in-direct prompt injection attacks, sensitive data leaks and data poisoning, as well as wallet abuse or denial of service attacks.
GenAI applications are commonly grounded with organizational data, if sensitive data is held in the same data store, it can accidentally be shared or solicited via the application. In the alert below we can see an attempt to exfiltrate sensitive data using direct prompt injection on an Azure OpenAI model deployment. By leveraging the evidence provided, SOC teams can investigate the alert, assess the impact, and take precautionary steps to limit users access to the application or remove the sensitive data from the grounding data source.
Defender for Cloud has built-in integrations into Microsoft Defender XDR, so security teams can view the new security alerts related to AI workloads using Defender XDR portal. This gives more context to those alerts and allows correlations across cloud resources, devices, and identities alerts. Security teams can also use Defender XDR to understand the attack story, and related malicious activities associated with their AI applications, by exploring correlations of alerts and incidents.
Learn more about securing AI applications with Defender for Cloud
- Get started with AI security posture management in Defender CSPM
- Get started with threat protection for AI workloads in Defender for Cloud
- Get access to threat protection for AI workloads in Defender for Cloud in preview
- Read more about securing your AI transformation with Microsoft Security
- Learn about Defender for Cloud pricing
Additional resources
- Learn more about Azure OpenAI Service
- Learn more about Microsoft Defender for Cloud
- Learn about OWASP top 10 for LLM Applications
Ron Matchoro, Principal Group Product Manager, Microsoft Defender for Cloud
Shiran Horev, Principal Product Manager, Microsoft Defender for Cloud
[1] 451 Research, Multicloud in the Mainstream, 2023
Brought to you by Dr. Ware, Microsoft Office 365 Silver Partner, Charleston SC.
Recent Comments