Welcome to the conclusion of our series on OpenAI and Microsoft Sentinel!  Back in Part 1, we introduced the Azure Logic Apps connector for OpenAI and explored the parameters that influence text completion from the GPT-3 family of OpenAI Large Language Models (LLMs) with a simple use case: describing the MITRE ATT&CK tactics associated with a Microsoft Sentinel incident.  Part 2 covered another useful scenario: summarizing a KQL analytics rule extracted from Sentinel using its REST API.  In Part 3, we revisited the first use case and compared the Text Completion (DaVinci) and Chat Completion (Turbo) models.  What’s left to cover?  Quite a lot, so let’s get started!

Microsoft employees, MVPs, partners, and independent researchers are doing incredible work every day to harness the power of generative AI everywhere.  Within the security field, though, one of the most important topics for AI researchers is data privacy.  We could easily extract all entities from a Microsoft Sentinel incident and send them through OpenAI’s API for ChatGPT to summarize and draw conclusions; in fact, I’ve seen half a dozen new GitHub projects doing exactly that just this week.  It’s certainly a fun project for development and testing, but no enterprise SOC wants to export potentially sensitive file hashes, IP addresses, domains, workstation hostnames, and security principals to a third party without strictly defined data-sharing agreements (or at all, if they can help it).  How can we keep sensitive information private to the organization while still benefiting from innovative AI solutions such as ChatGPT?

Enter Azure OpenAI Service!

Azure OpenAI Service provides REST API access to the same GPT-3.5, Codex, DALL-E 2, and other LLMs that we worked with earlier in this series, but with the security and enterprise benefits of Microsoft Azure.  The service is deployed within your Azure subscription, with encryption of data at rest and data privacy governed by Microsoft’s Responsible AI principles.  Text completion models, including DaVinci, have been generally available on Azure OpenAI Service since December 14, 2022, and as this article was being written, ChatGPT powered by the gpt-3.5-turbo model was added in preview.  Access is limited right now, so be sure to apply for access to Azure OpenAI!
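
If you’d like to experiment with the service once you have access, the chat completion endpoint can be called in a few lines of Python.  Here is a minimal sketch, assuming a hypothetical resource named contoso-openai, a hypothetical deployment named gpt-35-turbo, and the preview API version available at the time of writing; substitute your own endpoint, deployment, and key.

```python
import os
import requests

# Hypothetical Azure OpenAI resource and deployment names; replace with
# your own values from the Azure portal.
endpoint = "https://contoso-openai.openai.azure.com"
deployment = "gpt-35-turbo"
api_version = "2023-03-15-preview"  # preview API version at the time of writing

url = f"{endpoint}/openai/deployments/{deployment}/chat/completions?api-version={api_version}"
headers = {
    "api-key": os.environ["AZURE_OPENAI_KEY"],  # never hard-code the key
    "Content-Type": "application/json",
}
body = {
    "messages": [
        {"role": "system", "content": "You are a security operations assistant."},
        {"role": "user", "content": "Summarize the MITRE ATT&CK tactic 'Lateral Movement' in two sentences."},
    ],
    "temperature": 0.7,
    "max_tokens": 250,
}

response = requests.post(url, headers=headers, json=body, timeout=30)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

The same request shape works from an HTTP action in a Logic App, so it can slot into the Sentinel playbooks we built earlier in this series with minimal changes.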

ChatGPT on Azure solves a major challenge in operationalizing generative AI LLMs for use in an enterprise SOC.  We’ve already seen automation for summarizing incident details, related entities, and analytics rules; if you’ve followed this series, we’ve built several of those examples ourselves!  What’s next?  I’ve compiled a few scenarios that I think highlight where AI will bring the most value to a security team in the coming weeks and months.

  • As an AI copilot for SOC analysts and incident responders, ChatGPT could power a natural language assistant interfacing with security operators through Microsoft Teams to provide a common operating picture of an incident in progress.  Check out Chris Stelzer’s innovative work with #SOCGPT for an example of this capability.

  • ChatGPT could give analysts a head start on hunting for advanced threats in Microsoft 365 Defender Advanced Hunting by transforming Sentinel analytics rules into product-specific hunting queries; a rough sketch of this idea follows the list.  A Microsoft colleague has done some pioneering work with ChatGPT for purple-teaming scenarios, both generating and detecting exploit code, and the possibilities here are endless.

  • ChatGPT’s ability to summarize large amounts of information could make it invaluable for incident documentation.  Imagine an internal SharePoint site with summaries of every closed incident from the past two years!
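
As promised above, here is a rough sketch of asking the same hypothetical Azure OpenAI deployment to translate an analytics rule into a Microsoft 365 Defender Advanced Hunting query.  The rule text below is a made-up placeholder; in practice it would be retrieved from the Sentinel REST API as in Part 2, and any generated KQL must be reviewed before use.

```python
import os
import requests

# Placeholder analytics rule query; in practice, retrieve this from the
# Sentinel REST API as shown in Part 2 of this series.
rule_query = """
SigninLogs
| where ResultType == 50126
| summarize FailedLogons = count() by UserPrincipalName, IPAddress
| where FailedLogons > 10
"""

prompt = (
    "Rewrite the following Microsoft Sentinel analytics rule as a "
    "Microsoft 365 Defender Advanced Hunting query, and note any tables "
    "or fields that do not map directly:\n" + rule_query
)

# Same hypothetical endpoint and deployment as the earlier sketch.
url = (
    "https://contoso-openai.openai.azure.com/openai/deployments/"
    "gpt-35-turbo/chat/completions?api-version=2023-03-15-preview"
)
response = requests.post(
    url,
    headers={"api-key": os.environ["AZURE_OPENAI_KEY"]},
    json={
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # keep the output close to a literal translation
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```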

There are still a few areas where ChatGPT, as innovative as it is, won’t replace human expertise and purpose-built systems.  Entity research is one such example; it’s absolutely crucial to have fully defined, normalized telemetry for security analytics and entity mapping.  ChatGPT’s models are trained on a very large but still finite set of data and cannot be relied on for real-time threat intelligence.  Similarly, ChatGPT’s generated code must always be reviewed before being implemented in production.

I can’t wait to see what happens with OpenAI and security research this year!  What security use cases have you found for generative AI?  Leave a comment below!
