HOW TO AUTHOR SCCM REPORTS LIKE A SUPERHERO

Hi there, I am Matt Balzan, and I am a Microsoft Modern Workplace Customer Engineer who specializes in SCCM, APP-V and SSRS/PowerBI reporting.

Today I am going to show you how to write up SQL queries, then use them in SSRS to push out beautiful dashboards, good enough to send to your boss and hopefully win that promotion you’ve been gambling on!

INGREDIENTS

  • Microsoft SQL Server Management Studio (download from here)
  • A SQL query
  • Report Builder
  • Your brain

Now that you have checked all the above, let us first go down memory lane and do a recap on SQL.

WHAT EXACTLY IS SQL?

SQL (Structured Query Language) is a standard language created in the early 70s for storing, manipulating and retrieving data in databases.

Does this mean that if I have a database with data stored in it, I can use the SQL [or T-SQL] language to grab the data based on conditions and filters I apply to it?   Yep, you sure can!

A database is made up of tables, views and many other items, but to keep things simple we are only interested in the tables and views.

[Image: DB-BLOG.png]

Your table or view will contain columns with headers, and the rows are where all the data is stored.

[Image: ROWS-BLOG.png]

The rows are made up of these columns, which contain cells of data. Each cell can be designed to hold text, datetime values or integers. Some cells can have no value (a NULL value) and some are mandatory to have data in them. These settings are usually set up by the database developer, but thankfully, for reporting we need not worry about this. What we need is data for our reports, and to look for data, we use T-SQL!
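
For reference, here is a minimal sketch of how such a table might be defined (the table and column names are hypothetical examples, not taken from the SCCM database):

-- Hypothetical table: each column has a data type, and NULL / NOT NULL
-- controls whether a cell is allowed to be empty.
CREATE TABLE Devices (
    DeviceID   INT           NOT NULL,  -- mandatory integer
    Name0      NVARCHAR(64)  NOT NULL,  -- mandatory text
    LastBoot   DATETIME      NULL       -- optional; may be left empty (NULL)
);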

But wait! – I have seen these queries on the web, and they all look double-Dutch to me!

No need to worry, I will show you how to read these bad boys with ease!

ANATOMY OF A SQL QUERY

We have all been there before, someone sent you a SQL query snippet or you were browsing for a specific SQL query and you wasted the whole day trying to decipher the darn thing!

Here is one just for simplicity:

select Name0,Operating_System_Name_and0 from v_R_System where Operating_System_Name_and0 like '%server%'

My quick trick here is to go to an online SQL formatter, there are many of these sites but I find this one to be perfect for the job: https://sql-format.com/

Simply paste the code in and press the FORMAT button and BOOM!

[Image: query.png]

Wow, what a difference!!! Now that all the syntax has been highlighted and arranged, copy the script, open SSMS, click New Query, paste the script in it and press F5 to execute the query.

This is what happens:

[Image: ANATOMY-BLOG.png]

  1. The SELECT command tells the query to grab data, however in this case it will only display 2 columns (use asterisk * to get all the columns in the view)
  2. FROM the v_R_System view
  3. WHERE the column name contains the keyword server, using the LIKE operator (the % wildcard tells the filter to match zero, one, or multiple characters)
  4. The rows of data are then returned in the Results pane. In this example, only the server names are returned.
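
To see how each numbered step maps onto the code, here is the same query again with inline comments:

SELECT Name0,                      -- 1. grab only these two columns
       Operating_System_Name_and0
FROM v_R_System                    -- 2. from this view
WHERE Operating_System_Name_and0
      LIKE '%server%'              -- 3. keep rows whose OS name contains 'server'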

TIPS!

Try to keep all the commands in CAPITALS – this is good practice and helps make the code stand out for easier reading!

ALWAYS use straight single quotes for anything you are passing to SQL as a value. One common mistake is to use code copied from a browser or an email, where the quotes get silently converted to curly 'smart' quotes. Paste your code into Notepad first and then copy it out again.
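
To illustrate, using the v_R_System view from the earlier example (the commented-out line shows the kind of curly quotes that sneak in from web pages and will not parse):

-- Correct: straight single quotes around the literal
SELECT Name0
FROM v_R_System
WHERE Operating_System_Name_and0 LIKE '%server%'

-- Broken: curly quotes pasted from a web page
-- WHERE Operating_System_Name_and0 LIKE ‘%server%’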

Here is another example:   Grab all the rows of data and all the columns from the v_Collections view.

SELECT * FROM v_Collections

The asterisk * means give me every man and his dog. (Please be mindful when using this on huge databases as the query could impact SQL performance!)
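
If you just want a quick look at the data without dragging back every row, a safer habit (a small sketch using the standard T-SQL TOP clause) is:

-- Preview the view without returning every row
SELECT TOP (100) *
FROM v_Collections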

Sometimes you need data from different views. This query contains a JOIN of two views:

SELECT vc.CollectionName
      ,vc.MemberCount
FROM v_Collections AS vc
INNER JOIN v_Collections_G AS cg ON cg.CollectionID = vc.CollectionID
WHERE vc.CollectionName LIKE '%desktop%'
ORDER BY vc.CollectionName DESC

OK, so here is the above query breakdown:

  1. Grab only the data in the columns vc.CollectionName and vc.MemberCount from the v_Collections view
  2. But first JOIN the view v_Collections_G using the common column CollectionID (this is the relating column that both views have!)
  3. HOWEVER, only keep the rows that have the word 'desktop' in the CollectionName column.
  4. Finally, order the list of collection names in descending order.

SIDE NOTE: The command AS is used to create an alias of a table, view or column name – this can be anything, but admin folk generally use acronyms of the names (example: v_Collections becomes vc). Also noteworthy: when you JOIN tables or views you will need to create aliases, because the joined tables may well have the same column names – the alias also solves the problem of the joined columns getting mixed up.
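
For example, both views in the query above contain a CollectionID column, so without aliases SQL could not tell which one you mean; the prefixes make it explicit:

-- Both views have a CollectionID column; the vc. / cg. prefixes
-- make it clear which view each column comes from.
SELECT vc.CollectionName
      ,cg.CollectionID
FROM v_Collections AS vc
INNER JOIN v_Collections_G AS cg ON cg.CollectionID = vc.CollectionID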

T-SQL reference guide: https://docs.microsoft.com/en-us/sql/t-sql/language-reference?view=sql-server-ver15

OK, SO WHAT MAKES A GOOD REPORT?

It needs the following:

  • A script that runs efficiently and does not impact the SQL server performance.
  • The script needs to be written so that if you decide to leave the place where you work, others can understand and follow it!
  • Finally, it needs to make sense to the audience who is going to view it – try not to over-engineer it – keep it simple, short and sweet.

SCENARIO: My customer wants to have a Task Sequence report which contains their company logo, the task sequence steps and output result for each step that is run, showing the last step run.

In my lab I will use the following scripts. The first one is the main one: it joins the v_R_System view to the vSMS_TaskSequenceExecutionStatus view so that I can grab the task sequence data and the name of the device.

SELECT vrs.Name0
      ,[PackageID]
      ,[AdvertisementID]
      ,vrs.ResourceID
      ,[ExecutionTime]
      ,[Step]
      ,[GroupName]
      ,[ActionName]
      ,[LastStatusMsgID]
      ,[LastStatusMsgName]
      ,[ExitCode]
      ,[ActionOutput]
FROM [vSMS_TaskSequenceExecutionStatus] AS vtse
JOIN v_R_System AS vrs ON vrs.ResourceID = vtse.ResourceID
WHERE AdvertisementID = @ADVERTID
ORDER BY ExecutionTime DESC

The second script is for the @ADVERTID parameter – When the report is launched, the user will be prompted to choose a task sequence which has been deployed to a collection. This @ADVERTID parameter gets passed to the first script, which in turn runs the query to grab all the task sequence data rows.

Also highlighted below, the second script concatenates the column pkg.Name (the name of the task sequence) with the word ' to ' and with the column col.Name (the name of the collection), then returns it all together as a new column called AdvertisementName.

So, for example purposes, the output will be: INSTALL SERVER APPS to ALL DESKTOPS AND SERVERS – this is great, as we now know which task sequence is being deployed to what collection!

SELECT DISTINCT
       adv.AdvertisementID,
       col.Name AS Collection,
       pkg.Name AS Name,
       pkg.Name + ' to ' + col.Name AS AdvertisementName
FROM v_Advertisement adv
JOIN (SELECT PackageID,
             Name
      FROM v_Package) AS pkg
  ON pkg.PackageID = adv.PackageID
JOIN v_TaskExecutionStatus ts
  ON adv.AdvertisementID = (SELECT TOP 1 AdvertisementID
                            FROM v_TaskExecutionStatus
                            WHERE AdvertisementID = adv.AdvertisementID)
JOIN v_Collection col
  ON adv.CollectionID = col.CollectionID
ORDER BY Name

LET US BEGIN BY CREATING THE REPORT

  1. Open Microsoft Endpoint Configuration Manager Console and click on the Monitoring workspace.
  2. Right-click on REPORTS then click Create Report.
  3. Type in the name field – I used TASK SEQUENCE REPORT
  4. Type in the path field – I created a folder called _MATT (this way it sits right at the top of the folder list)
  5. Click Next and then Close – now the Report Builder will automatically launch.
  6. Click Run on the Report Builder popup and wait for the UI to launch. This is what it looks like:

[Image: REPORT-BUILDER-BLOG.png]

STEP 1 – ADD THE DATA SOURCE

[Image: ADD-DS-BLOG.gif]

To get the data for our report, we need to make a connection to the server data source.

  1. Right-click on Data Sources and click Add Data Source…
  2. Click Browse… and click on ConfigMgr_<yoursitecode>.
  3. Scroll down to the GUID starting with {5C63 and double-click it.
  4. Click Test Connection, then click OK and OK.

STEP 2 – ADD THE SQL QUERIES TO THEIR DATASETS

[Image: ADD-DATASETS-BLOG.gif]

  1. Copy your scripts.
  2. Right-click on Datasets and click Add Dataset…
  3. Type in the name of your dataset (no spaces allowed).
  4. Select the radio button 'Use a dataset embedded in my report'.
  5. Choose the data source that was added in the previous step.
  6. Paste your copied script in the Query window and click OK.
  7. Click the radio button 'Use the current Windows user' and click OK.

STEP 3 – ADJUST THE PARAMETER PROPERTIES

[Image: PARAMS-PROPS-GEN-BLOG.png]

  1. Expand the Parameters folder.
  2. Right-click the parameter ADVERTID.
  3. Select the General tab and, under the Prompt: field, type DEPLOYMENT – leave all the other settings as they are.

[Image: PARAMS-PROPS-BLOG.png]

  1. Click on Available Values.
  2. Click the radio button 'Get values from a query'.
  3. For Dataset: choose ADVERTS, for Value field choose AdvertisementID and for Label field choose AdvertisementName.
  4. Click OK.

Now when the report first runs, the parameter properties will prompt the user with the DEPLOYMENT label and grab the results of the ADVERTS query – this will appear on the top of the report and look like this (remember the concatenated column?):

[Image: DeploymentLabel.png]

OK cool – but we are not done yet. Now for the fun part – adding content!

STEP 4 – ADDING A TITLE / LOGO / TABLE

[Image: ADD-TITLE-BLOG.gif]

  1. Click the label to edit your title.
  2. Change the font to SEGOE UI, then move its position to the centre.
  3. Adjust some canvas space, then remove the [&ExecutionTime] field.

[Image: ADD-LOGO-BLOG.gif]

  1. From the ribbon, select the Insert tab and click Image.
  2. Find a space on your canvas, then drag-click an area.
  3. Click Import…, choose ALL files (*.*) as the image type, then find and add your logo.
  4. Click OK.

[Image: ADD-TABLE-BLOG.gif]

  1. Click on Table and choose Table Wizard…
  2. Select the TASKSEQ dataset and click Next.
  3. Hold down the Shift key and select all the fields except PackageID, AdvertisementID and ResourceID.
  4. Drag the highlighted fields to the Values box and click Next.
  5. Click Next to skip the layout options.
  6. Choose the Generic style and click Finish.
  7. Drag the table under the logo.

STEP 5 – SPIT POLISHING YOUR REPORT

[Image: TS-SPITPOLISH-BLOG.png]

A Placeholder is a field where you can apply an expression for labels or text you wish to show in your report.

In my example, I would like to show the Deployment name which is next to the Task Sequence title:

[Image: ADD-PLACEHOLDER-BLOG.gif]

  1. In the title text box, right-click at the end of the text TASK SEQUENCE:.
  2. Click Create Placeholder…
  3. Under Value: click on the fx button.
  4. Click on Datasets, click on ADVERTS and choose First(AdvertisementName).

Finally, I wanted the value in UPPERCASE and bold. To do this I changed the text value to =UCASE(First(Fields!AdvertisementName.Value, "ADVERTS")) and clicked OK.

I then selected the <<Expr>> and changed the font to BOLD.

Do not forget to save your report into your folder of choice!

Once you have finished all the above settings and tweaks, you should be good to run the report. Click the Run button from the top left corner in the Home ribbon.

If you followed all the above step by step, you should now have a report fit for your business – this is the output after a Deployment has been selected from the combo list:

[Image: FINAL-REPORT-BLOG.png]

  • To test the Action Output column, I created a task sequence where I intentionally left some errors in the Run Command Line steps, just to show errors and draw out some detail.
  • To avoid massive row height sizes, I set the cell for this column to font size 8.
  • To give it that Azure report style look and feel, I set only the top border on the second row of the table. You can change this to your own specification.

Please feel free to download my report RDL file as a reference guide (Attachment on the bottom of this page)

LESSONS LEARNED FROM THE FIELD

  • The best advice I give my customers is to start by creating a storyboard. Draw it out on a sheet of blank or grid paper to get an idea of where to begin.
  • What data do you need to monitor or report on?
  • How long do these scripts take to run? Test them in SQL SERVER MANAGEMENT STUDIO and note the time in the Results pane (see the timing sketch after this list).
  • Use the free online SQL formatting tools to create proper readable SQL queries for the rest of the world to understand!
  • Who will have access to these reports? Ensure proper RBAC is in place.
  • What is the target audience? You need to keep in mind some people will not understand the technology of the data.
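
As a quick way to time a query in SSMS, here is a minimal sketch using the standard T-SQL SET STATISTICS TIME option (the timings appear in the Messages tab; the TOP clause keeps the test lightweight):

-- Report parse/compile and execution times in the Messages tab
SET STATISTICS TIME ON;

SELECT TOP (100) Name0
FROM v_R_System;

SET STATISTICS TIME OFF;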

COMING NEXT TIME…

My next blog will be a deep dive into report design. I will show you how to manipulate content based on values using expressions, conditional formatting, design tips, best practices and much more… Thanks for reading, keep smiling and stay safe!

DISCLAIMER

The sample files are not supported under any Microsoft standard support program or service. The sample files are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample files and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the files be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

Matt Balzan | Modern Workplace Customer Engineer | SCCM, Application Virtualisation, SSRS / PowerBI Reporting

Azure Stack Hub Support: Getting a quick answer to an issue you are having with Azure Stack Hub

As an Azure Stack Hub operator, you can quickly find solutions to many issues in the Support Portal in global Azure. In this blog post, we’ll join Shireen Isab (PM on the Azure Stack Hub team) as we walk through how to quickly find an answer to an Azure Stack Hub problem.

In the Support Portal, you can find solutions created for specific problems customers have encountered in the past. This is a simple way of potentially identifying a solution for your individual problem, without actually creating a support ticket.

In addition to the Support Portal, you can also find troubleshooting topics and step-by-step instructions in the Azure Stack Hub product documentation, both for cloud operator tasks and for developers using the Azure Stack Hub user portal. Information about new features and current issues can be found in the release notes section of the documentation. If you can't find the answer to your problem through the support portal or the documentation, you can ask questions in the Microsoft Q&A for Azure Stack Hub or take part in the Azure Stack World Wide Community on Yammer. You can also find us using the tag #azurestackhub on Twitter, or 'azure-stack' on Stack Overflow to either post or find answers.

However, in this scenario, we will use the Support Portal. Let's assume I am having trouble connecting to a virtual machine that was created in Azure Stack Hub. I'm getting this message when I try to use Remote Desktop on my management computer to connect to my VM. I've already installed things on the VM, but now I can't connect.


[Image: rctibi_0-1607982570140.png]

Before I went to support, I tried restarting the VM, redeploying it, and taking a look at the settings. I'm still unable to connect.

  1. From the Azure Stack Hub administrative portal, select the ? icon for Help + support.


[Image: rctibi_1-1607982570164.png]

  2. Select Create support request.

  3. Add the information about your issue in the New Support request form.


[Image: rctibi_2-1607982570175.png]

  4. Select Technical for the issue type.

  5. Select your subscription.

  6. Select My services.

  7. Select Azure Stack Hub for the Service.

  8. Select the Resource group associated with your issue.

  9. Type a summary of your issue. In this case, I type, “I cannot connect to my VM.” I then see a list of recommended solutions.

     [Image: rctibi_3-1607982570183.png]

  10. Select the Problem type from the list of options – here, I am having trouble connecting to my Windows VM, and so I select Virtual machine running Windows.

  11. Select the problem subtype: I cannot connect to my VM.

  12. Select Next: Solutions.

[Image: rctibi_4-1607982570191.png]

  13. I review the options in the recommended solutions. I check that my ports are open: I open my Azure Stack Hub portal, open the VM, and select Networking.

[Image: rctibi_5-1607982570197.png]

  14. I see that the port rule for RDP, port 3389, is set to Deny. I update it to Allow.

  15. I try with my RDP client again and I can connect.

[Image: rctibi_6-1607982570199.png]

I resolved this problem.


Now available: Azure Storage logs

Azure resource logs for Azure Storage are now in public preview in the Azure public cloud. These logs provide detailed information about successful and failed requests to a storage service. This information can be used to monitor individual requests and to diagnose issues with a storage service. Compared with classic storage logs, Azure Monitor logs for Storage have been extended to support:

  • Resource logs for Azure Files
  • Resource logs for premium block blob storage and premium file shares
  • Resource logs for Azure Data Lake Storage Gen2
  • More enhancements including authentication context, TLS version, and other attributes
  • Native query and analysis experience with Log Analytics


You can now configure diagnostic settings to consolidate storage logs in central storage accounts, send them to Event Hub for streaming ingestion into a SIEM tool or custom integration, or export them to Log Analytics to query and analyze natively. Log export is charged per streaming volume. See Azure Monitor Pricing for details.

[Image: Screenshot of the configuration screen to set up the Storage logs]

Learn more about this exciting new feature, and let us know what you think right here on the Tech Community!

Secure Application Lifecycle – Part 1 – Using CredScan

Cyber security is one of the most important topics on our minds when we develop applications and systems, whether on-premises or in the cloud.


It is important to perform security validations on applications frequently. There are two important aspects to these validations. First, developers should be able to detect any credentials or secrets in the code so they can move them to a safe place. Second, the DevOps and infrastructure teams should be able to perform frequent security health checks on Azure subscriptions.


In this series, I will go over very useful tools that help improve the security of applications and cloud resources. In Part 1, I will discuss CredScan. Part 2 will focus on the Secure DevOps Kit for Azure (AzSK) and Part 3 will focus on Azure Sentinel and security health.

Managing Credentials in the Code with CredScan

The first aspect, as we mentioned, is the ability to detect any creds or secrets. Developers should be able to catch them before committing the code to the repo, or during the pipeline process itself.


We all know it is easy to leave credentials in the code, especially in large projects. A team can always try to check for credentials manually, but that is not the recommended way to look for sensitive information.


Credential Scanner (aka CredScan) is a tool developed and maintained by Microsoft to identify credential leaks, such as those in source code and configuration files. Some of the commonly found types of credentials are default passwords, SQL connection strings and certificates with private keys.


There are two versions of CredScan, client side and server side, as shown in the following diagram:

[Image: Picture1.png]

The client side


It is an extension that currently supports VS 2017, and you can download it from here.


After installing the extension, we are ready to code and build; if our code contains a credential, the tool will catch it as follows.

[Image: Picture2.png]


The downside of the client side is that there is no extension for VS 2019 or VS Code yet. As an alternative, developers who are interested in installing a first line of defense for creds scanning can refer to my blog on git secrets.

CredScan Server Side implementation


To use the server-side version, the developer needs to include the "CredScan Build" task in the project pipeline. For more information about obtaining the Microsoft Security Code Analysis Extension, please review this document.

In Azure DevOps, we can add the task in the Classic build editor, where CredScan can be added directly using the search box:

[Image: Picture3.png]

After adding the task, the developer or DevOps engineer can fill in the details of the task.


Available options include:



  • Display Name: Name of the Azure DevOps task. The default value is Run Credential Scanner.
  • Tool Major Version: Available values include CredScan V2 and CredScan V1. We recommend customers use the CredScan V2 version.
  • Output Format: Available values include TSV, CSV, SARIF, and PREfast.
  • Tool Version: We recommend you select Latest.
  • Scan Folder: The repository folder to be scanned.
  • Searchers File Type: The options for locating the searchers file that is used for scanning.
  • Suppressions File: A JSON file can suppress issues in the output log. For more information about suppression scenarios, see the FAQ section of this article.
  • Verbose Output: Self-explanatory.
  • Batch Size: The number of concurrent threads used to run Credential Scanner. The default value is 20. Possible values range from 1 through 2,147,483,647.
  • Match Timeout: The amount of time in seconds to spend attempting a searcher match before abandoning the check.
  • File Scan Read Buffer Size: The size in bytes of the buffer used while content is read. The default value is 524,288.
  • Maximum File Scan Read Bytes: The maximum number of bytes to read from a file during content analysis. The default value is 104,857,600.
  • Control Options > Run this task: Specifies when the task will run. Select Custom conditions to specify more complex conditions.
  • Version: The build task version within Azure DevOps. This option isn't frequently used.

[Image: Picture4.png]

In the YAML pipeline editor, here is an example of a CredScan YAML task:

parameters:
  pool: 'Hosted VS2017'      # agent pool the job runs on
  jobName: 'credscan'
  displayName: Secret Scan

jobs:
- job: ${{ parameters.jobName }}
  pool:
    name: ${{ parameters.pool }}

  displayName: ${{ parameters.displayName }}

  steps:
  # CredScan V2 build task from the Microsoft Security Code Analysis extension
  - task: securedevelopmentteam.vss-secure-development-tools.build-task-credscan.CredScan@2
    displayName: 'Scan for Secrets'
    inputs:
      suppressionsFile: tools/credScan/suppress.json  # known false positives to suppress
      toolMajorVersion: V2
      debugMode: false

After adding the task, the pipeline will pass successfully only if the CredScan task passes.

Summary


In this Part 1, we discussed the importance of implementing a first line of defense against credential leaks by using the CredScan client-side extension or the CredScan build task. In the next blog I will explore using AzSK to secure DevOps.

Investigate Azure Security Center alerts using Azure Sentinel

Azure Security Center performs continuous assessment of your cloud workloads and provides recommendations concerning the security of the environment. Azure Security Center covers these scenarios by offering Cloud Security Posture Management (CSPM) and Cloud Workload Protection Platform (CWPP) capabilities (read this article for more details).

To cover threat detection for the CWPP scenario, you need to upgrade Security Center to Azure Defender. Azure Defender uses a variety of detection capabilities to alert you of potential threats to your environment. Azure Defender's threat protection includes fusion kill-chain analysis, which automatically correlates alerts in your environment based on cyber kill-chain analysis, to help you better understand the full story of an attack campaign. Each alert can tell you what triggered it, what in your environment was targeted, the source of the attack, and remediation steps. You also have the flexibility to set up custom alerts to address specific needs in your environment.

Azure Sentinel is a cloud-native Security Information and Event Management (SIEM) and Security Orchestration, Automation and Response (SOAR) solution. Azure Sentinel’s role is to ingest data from different data sources and perform data correlation across these data sources. On top of that, Azure Sentinel leverages intelligent security analytics and threat intelligence to help with alert detection, threat visibility, proactive hunting, and threat response.  

When Azure Defender detects and triggers alerts, you can stream these alerts to your own SIEM solution. By doing this you can quickly view what needs your attention from one management interface and take appropriate action.

In this blog, we will walk you through how alerts from Azure Defender integrate with Azure Sentinel, providing Sentinel with security recommendations, alerts, and analytics, and how the two operate in a better-together scenario.

Integration 


Azure Sentinel leverages data connectors, which give you that holistic, rich view across multiple data sources. To stream Azure Defender alerts into Azure Sentinel, the first step is to configure this integration by adding the Azure Security Center connector. You can connect the Azure Security Center data connector by following the steps in this article.


After following the steps from the article mentioned in the previous paragraph, you can confirm the connectivity (as shown in the figure below). 


[Image 1: Confirming the connectivity of the Azure Security Center connector in Azure Sentinel]


Investigating an Azure Defender alert in Azure Sentinel


In this example, we are analyzing an alert that uses Fusion analytics, which automatically correlates alerts in the environment based on cyber kill-chain analysis to help you better understand the full attack story: where it started and what kind of impact it had on the resources. To learn more about cloud smart alert correlation (also known as Fusion), please read our documentation here.


[Image 2: Security Center Alerts page]


[Image 3: Fusion Analytics correlation]


As you can see in image 3, Fusion technology has correlated alerts of different severities and contextual signals together.


The left pane of the security incident page shows high-level information about the security incident, such as the alert description, severity, activity time and affected resources. The right pane contains information about the alerts and their descriptions.


[Image 4: Take action tab of Azure Security Center]


Switch to the Take action tab (as shown in image 4) for more information on how to mitigate the threat, and review the related recommendations identified for the affected resource under Prevent future attacks.

The Trigger automated response option lets you trigger a Logic App as a response to this security alert. Setting up automation reduces overhead and helps you take care of incidents automatically and quickly. Review our GitHub repository to find automation artifacts that you can leverage to resolve alerts or recommendations in Security Center.

The Suppress similar alerts option lets you suppress future alerts with similar characteristics if the alert isn't relevant for your organization. Please review this article to understand how you can benefit from this feature.

To further investigate this alert, let's navigate to Azure Sentinel. One of the benefits of triaging incidents in Azure Sentinel is that you can start from a holistic, high-level view and zoom in on relationships, often found in related entities or notable events. Azure Sentinel helps you identify and correlate those in a couple of ways. In addition, Azure Sentinel offers powerful hunting capabilities to find that needle in the haystack.


In this blogpost, we will provide a couple of examples of those options.

Triaging the Azure Defender incident in Azure Sentinel


When we pivot over to Azure Sentinel, we can see the same incidents appearing in our triage view:


[Image 5: Incidents pane of Azure Sentinel]


Looking at the details of our incident, we can see the affected entities and can start our investigations from there, or we can pivot over to the sending source, in our case Azure Security Center:

[Image: Picture6.png]


When we look more closely, we quickly see that more sources are reporting suspicious activities related to our affected entities, which we need to investigate:


[Image 6: Example of open incidents from Azure Sentinel]


We can run investigations in several ways; one is a visual investigation.


Zooming in on our suspicious entity, our server, we can see a lot more to investigate, including a timeline of events:


[Image 8: Timelines of events]


This is a clear signal that this is a true positive, which we should escalate for further investigation. We can add our findings to the incident comment section and take a couple of countermeasures to isolate our servers, such as a block-IP action against the IP address we have discovered to be malicious. You can find the Logic App playbook here. This playbook was authored to block a brute-force attack IP address, but it can also be used to block any IP address.


In our investigation we also saw that an Azure Active Directory user was affected:


[Image 9: Azure Active Directory user that was affected]


The second preventive countermeasure we will take is to confirm that this is a risky user, by leveraging a confirm-risky-user playbook, and to reset the user's password.


Before we escalate the incident to our investigations team, we create a ticket in ServiceNow, add our findings to the incident comments and continue our triage.

A common question we receive at this point is, “When I use the Azure Security Center data connector in Azure Sentinel and generate incidents, what happens when I close my Azure Sentinel incident? Does it close the related Azure Security Center alert?”


The short answer is that it does not. The Azure Sentinel incident closes, but the ASC alert remains active. Some customers prefer to keep the alerts active in ASC while closing the incident in Azure Sentinel, and some prefer to close the incident or alerts at both ends. You also have the option to suppress the ASC alert manually in the ASC portal. If your incident triage is complete and you decide to close the incident in Azure Sentinel, you can invoke this Logic App playbook, which closes the incident in Azure Sentinel and dismisses the alert in Azure Security Center. This article describes how to run a Logic App playbook. Azure Sentinel also allows you to invoke a Logic App playbook automatically upon creation of an incident, which is described here.

Hope you enjoyed learning about our better-together story of Azure Security Center and Azure Sentinel.

Acknowledgements


This blog was written in collaboration with my super talented colleague Tiander Turpijn, Senior Program Manager, and reviewed by the God of Azure Security Center, Yuri Diogenes, Principal Program Manager.