Secrets from the Deep – The DNS Analytical Log – Part 4

Hi Team, it's Eric Jansen again, here today to continue where we left off in Part 3 of the series.  In the last episode, we discussed how to parse the DNS Analytical Log using a sample scenario: I had deployed a DNS black hole solution, and my goal was to harvest specific data of interest.  Today we're going to take the data that we previously collected and report our findings.  I wanted to cover this because I've seen a number of customers employ DNS block lists as a security measure without ever looking at what's getting blocked, which I find somewhat shocking.  If you don't check what's actually being blocked, other than fielding the occasional end-user complaint about not being able to get to something, how would you know if the block list is providing any value at all?


 


There are many ways to report the data of interest, but today I'm going to show a simple means of doing some basic reporting: writing just the data that you care about to its own event log.  We'll take the millions of events that we parsed through in the last part of this series and report the findings in just a handful of 'higher level' (arguably more useful) events, while still giving you the ability to find any of the more specific data that you're interested in.


   


Note: For ease of discussion, I'm taking small snips of code from a much larger function that I've written (where additional events are parsed) and showing just the necessary info in the code blocks below.  In some cases, I'm changing the code from the function so that it makes more sense when seen independently.  Towards the end of this series, I'll likely post all of the complete functions to a GitHub repository for folks to use.


 


And with that, here’s the usual disclaimer that all my code samples come with: 


 


The code snippets in this article are considered sample scripts.  Sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.


 


Ok, so in the last episode, we ran some snippets of code, which resulted in data being stored in a variable called $HitData.  That variable contained query data that was blocked by the server as a result of DNS policy.  The first thing that we’re going to do is create a new event log to put this ‘data of interest’ into: 


 


 

#Create custom Event Log for parsed DNS Analytic data.

$CustomEventlogName = "DNS-Server-AnalyticLog-ParseData"
$EventSource_PAL = "Parse-AnalyticLog"
$EventSource_DD = "DENIED_Domains"
New-EventLog -LogName $CustomEventlogName -Source $EventSource_PAL,$EventSource_DD
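
One caveat: New-EventLog must be run from an elevated session, and it will error if the log or one of the sources already exists. If you need to start over while testing, the standard Remove-EventLog cmdlet tears the custom log (and its registered sources) back down:

#Remove the custom event log and its registered sources (run elevated; this deletes the log's data).
Remove-EventLog -LogName $CustomEventlogName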

 


 


The snippet above creates a new event log called DNS-Server-AnalyticLog-ParseData and defines two event sources that we'll be using later on.  Our mission now is to distill that data into even more 'boiled down', higher-level info.  To do that, we're going to pick and choose the data that we want out of $HitData to build the messages that we'll write to our shiny new custom event log.  Here's a sample of how we can get that done:


 


 

#If hits are found, process them.
If($HitData){

    #Define the .csv log file path for ALL hit data to be dumped to. Ideally this would be ingested into SQL.
    $CSVLogFilePath = "D:\Logs\$($ENV:COMPUTERNAME)_BHDomainHits_$(Get-Date -Format "ddMMMyyyy_HHmm").csv"

    $HitData | sort timestamp | Export-Csv $CSVLogFilePath -NoTypeInformation

    #Variable containing the higher level information of interest.
    $Message += "The HitData has been sorted and dumped to $($CSVLogFilePath).`n" | Out-String

    #Collect the QRP count for use in calculating the percentage of blocked domains that are hit.
    $QRPCount = (Get-Item 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\DNS Server\Policies').SubKeyCount

    #Sort EventID 258 data.
    $258HitData = $HitData | where EventID -eq 258
    $Unique258Queries = $258HitData | sort XID -Unique | select -ExpandProperty XID
    $Unique258Domains = $258HitData | sort Query -Unique | select -ExpandProperty Query
    $Unique258QuerySources = $258HitData | sort QuerySource -Unique | select QuerySource

    $UQCount = $Unique258Queries.Count
    $UDCount = $Unique258Domains.Count
    $UQSCount = $Unique258QuerySources.Count

    #If there are 258 events that were a result of 'Policy', the 'high level' findings breakdown is output to the DNS-Server-AnalyticLog-ParseData event log, in Event ID 2.
    #Note: $j and $l are counts that are calculated earlier in the full function (see Part 3 of the series); they're left as-is here.
    $Message += "There were $($j) RESPONSE_FAILURE events; out of those there were $l that were not a result of a Query Resolution Policy match." | Out-String
    $Message += "Out of those $($j) events, $($UQCount) were unique, based on the Transaction ID (XID)." | Out-String
    $Message += "The $($UQCount) unique queries originated from $($UQSCount) clients, and boiled down to $($UDCount) unique domains queried.`n" | Out-String
    $Message += "The $($UQSCount) clients are:" | Out-String
    $Message += $($Unique258QuerySources) | sort QuerySource -Descending | Format-Wide -Property QuerySource -Column 5 | Out-String
    $Message += "This represents $($($UDCount/$QRPCount)*100) percent of what's being blocked via the $($QRPCount) QRPs." | Out-String
    $Message += "The corresponding Domains that were flagged as Policy Hits and DENIED can be found in Event ID 3.`n" | Out-String

    If($Unique258Domains){

        #The -Message parameter for Write-EventLog allows a maximum of 32766 characters, so the data must fit within that limit.
        If($(Get-CharacterCount -InputObject $Unique258Domains) -lt 32500){

            #Response Failure Domain Message Data.
            $RFMessage = "Black Hole Domain Hits - DENIED Domains:`n" | Out-String
            $RFMessage += $Unique258Domains | Out-String

            #Domains that were blocked are output to the DNS-Server-AnalyticLog-ParseData event log as its own event, in Event ID 3.
            Write-EventLog -LogName $CustomEventlogName -Source $EventSource_DD -EventId 3 -Message $RFMessage
        }
        Else{
            $CharLength = $(Get-CharacterCount -InputObject $Unique258Domains)
            Write-Warning "The maximum allowable message size for the event has been exceeded; the message has a length of $($CharLength), but only 32766 characters are allowed."
            Write-Host "`nPrinting the DENIED Domains to the screen instead:`n"
            $Unique258Domains
        }
    }

    #Send the data of interest to the DNS-Server-AnalyticLog-ParseData event log located in "Application and Services Logs".
    Write-EventLog -LogName $CustomEventlogName -Source $EventSource_PAL -EventId 2 -Message $Message

    #Record the timestamp of the last event to be parsed and output it to the DNS-Server-AnalyticLog-ParseData event log located in "Application and Services Logs".
    Write-EventLog -LogName $CustomEventlogName -EventId 1 -Source $EventSource_PAL -Message "LastIterationLastEventTimeStamp - $($LastEventTimestamp)"
}
Else{
    $WarningMsg = "There was no hit data found. This could be good news, or it could be due to one of the following scenarios - but this is by no means an all inclusive list:"
    $NoHitsMessage = $WarningMsg | Out-String
    $NoHitsMessage += "`n"
    $NoHitsMessage += " - The Parse-DNSAnalyticLog function was run without elevated credentials (the account didn't have permissions to the log to be able to parse it)." | Out-String
    $NoHitsMessage += " - The DNS analytic log was just recently cleared." | Out-String
    $NoHitsMessage += "`t This is usually due to a reboot, but it could have also been administratively done." | Out-String
    $NoHitsMessage += " - The DNS analytic log was just recently turned on." | Out-String
    $NoHitsMessage += " - There are no Query Resolution Policies on the server." | Out-String
    $NoHitsMessage += " - If there are Query Resolution Policies, then they could be Disabled. Check the Policies to see if they're disabled { Get-DNSServerQueryResolutionPolicy }." | Out-String
    $NoHitsMessage += " - If there are Query Resolution Policies, then they could be getting Ignored. Check the DNS Server Settings { (Get-DNSServerSetting -All).IgnoreServerLevelPolicies }." | Out-String
    $NoHitsMessage += " - The inbound queries aren't meeting the criteria defined within the Query Resolution Policies." | Out-String

    Write-Warning $WarningMsg
    #Write the list of possible scenarios to the console (everything after the warning text).
    $NoHitsMessage.Substring($($WarningMsg.Length + 2))
    Write-EventLog -LogName $CustomEventlogName -Source $EventSource_PAL -EventId 0 -Message $NoHitsMessage
}

 


 


In the code above, you'll notice a non-native function called Get-CharacterCount, which can be found here:


 


 

#Helper function that returns the total character count of its input (arrays are cast to a single space-joined string).
function Get-CharacterCount ([String]$InputObject)
{
    $($InputObject | Measure-Object -Character).Characters
}
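
As a quick usage sketch (the domain list here is hypothetical, standing in for $Unique258Domains in the script above):

#Hypothetical sample input; the [String] parameter cast joins the array with spaces before counting.
$SampleDomains = "baddomain1.com","baddomain2.net","baddomain3.org"
Get-CharacterCount -InputObject $SampleDomains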

 


 


I have notes in the code, but in summary, it further picks apart the data that's stored in the $HitData variable and stores it in separate variables for use in building the messages that are written to the different events.  For this scenario, I have four events defined (a quick way to read them back from the new log follows the list):


 


Event ID 0 – This event is triggered if a parsing cycle occurs and there are no hits, which is possible, but unlikely (depending on the block list used).  The event lists possible reasons for it being triggered.


Event ID 1 – This event is triggered on every parsing cycle in which events are found in the $HitData variable.  It logs the timestamp of the last event that was parsed during that iteration.


Event ID 2 – This event shows the “Bottom-Line Up-Front” information that can be customized to whatever you decide is of value. 


Event ID 3 – This event shows just the domains that were DENIED as a result of query attempts to the resolver.  The list tends to be long, so I store it as its own event.
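
Once the parser has run, pulling any of these back out is a one-liner. A minimal sketch using the same classic event log cmdlets the script itself uses, with the log name created earlier (for events written via Write-EventLog, the InstanceId matches the EventId):

#Pull the most recent 'Bottom-Line Up-Front' summary (Event ID 2) from the custom log.
(Get-EventLog -LogName "DNS-Server-AnalyticLog-ParseData" -InstanceId 2 -Newest 1).Message

#Pull the most recent list of DENIED domains (Event ID 3).
(Get-EventLog -LogName "DNS-Server-AnalyticLog-ParseData" -InstanceId 3 -Newest 1).Message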


 


One thing to consider: in Part 3 of the series, I showed how to build a hash table for use in filtering the log for just the 258 events, but you can add more filters to further reduce the amount of data that needs to get parsed.  That's the reason I've included Event ID 1 above – when employed, it tells the next parse attempt to start parsing at the timestamp of the last event that was parsed.  This can save a considerable amount of time, depending on traffic, how long you go between parsing cycles, and other factors.  The snippet below shows how I collect the timestamp and how to use it in the $FilterHashTable variable – essentially, just adding to what I previously showed:


 


 

#Try to collect the LastIterationLastEventTimeStamp value from the latest Event ID 1.
$LastEventID1 = Get-EventLog $CustomEventlogName -Newest 1 -InstanceId 1 -ErrorAction SilentlyContinue

If($LastEventID1){

    #If there is one, store the content as a datetime so the event search can start after the last parsed timestamp.
    $LastIterationLastEventTimeStamp = Get-Date $($LastEventID1.Message.Split('-').Trim()[1])

}

 


 


The updated Hash Table would just include a new value – StartTime:


 


 

#Build the hash table for use with Get-WinEvent.
$FilterHashTable = @{
    Path      = $DNSAnalyticalLogPath
    ID        = $EventIDs
    StartTime = $LastIterationLastEventTimeStamp
}
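
For completeness, here's how the updated hash table gets consumed. This is a sketch assuming the same Get-WinEvent call used in Part 3; note that when reading a .etl file directly via Path, Get-WinEvent also requires the -Oldest switch:

#Collect only the events logged since the last parsed timestamp.
$Events = Get-WinEvent -FilterHashtable $FilterHashTable -Oldest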

 


 


At this point, we've further parsed the collected data into subsets of information for building messages to write to our custom event log, showing only the "Bottom-Line Up-Front" information that we're interested in.  At the same time, however, we're writing ALL hit data to a .csv file during each parsing cycle, so we still have the ability to investigate any of the more specific details behind the new events that we're writing.  We have the best of both worlds!  The .csv files (I say plural, because ideally this parser would be set up to run on a schedule) can be further leveraged by ingesting them into SQL or other data repositories for use in analyzing historical trends, among other metrics.
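
For example, once a few cycles' worth of .csv files exist, a quick offline roll-up is trivial. A minimal sketch, assuming the same D:\Logs path and the Query column used by the parser above:

#Roll up every parsing cycle's .csv and show the ten most frequently hit domains.
Get-ChildItem 'D:\Logs\*_BHDomainHits_*.csv' |
    ForEach-Object { Import-Csv $_.FullName } |
    Group-Object Query |
    Sort-Object Count -Descending |
    Select-Object Count, Name -First 10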


 


So let's take a look at what I have in my lab, to show you what the final product looks like, in Event ID order:

Event ID 0: 



Possible explanations of why there are no Query Resolution Policy hits. 



 


I didn't have any examples of this event, so I had to reproduce it artificially to be able to get a screenshot of it.


Event ID 1: 



Timestamp of the last event that was parsed during the last parsing cycle.  


Event ID 2: 



Bottom-Line Up-Front Data – This can be whatever you think is important. 


Event ID 3: 



List of domains that were blocked as a result of a Query Resolution Policy. 




 


Well, team... as you can see, we've managed to successfully parse the DNS Analytic Log, define some data of interest, and build some report data that we've shipped to a custom event log called DNS-Server-AnalyticLog-ParseData.  It's not the prettiest solution, and there's plenty of room for innovation, but it's better than just putting a block list in place and not knowing whether it's effective.  Now you just need to remember to check the logs, like any other log that you deem important.


 


You know what would be super cool, though?  What if, instead of just shipping this data to an event log, we were able to use Power BI to give us some visuals of the data that we deemed important?  Maybe some charts and graphs showing important data on a per-resolver basis.  That'd be pretty cool.


 


Until next time! 

Common policy configuration mistakes for managing Windows updates

Misconfigured policies can prevent devices from updating and negatively affect monthly patch compliance. Explore common policy configuration mistakes that can hinder update adoption and result in a poor experience for your end users—and get guidance on how to review your Windows update policies to confirm your devices are configured correctly. Alternatively, you can leverage the Update Baseline tool to automatically apply the recommended set of Windows Update policies to your devices.


Set deadlines (with a grace period)


One of the most powerful resources that IT admins can use to support patch compliance is setting deadlines. A deadline is the number of days before a device is forced to restart to ensure compliance. Deadlines provide a balance between keeping devices secure and providing a good end user experience.


Deadlines work in coordination with pause and deferral settings. For example, if you set a quality update deadline of 2 days and a quality update deferral of 7 days, users will not get offered the quality update until day 7 and the deadline will not force restart until day 9. Similarly, if you (or the end user) pause quality updates, the deadline will not kick in until after the pause has elapsed and a quality update is offered to the device. For example, if the end user pauses all updates for 7 days and the quality update deadline is set to 2 days, as soon as the pause period is over on day 7, the deadline kicks in and the device will have 2 days to download, install, and restart to complete the update.


To ensure a good user experience for devices that have been shut off for some time, as when a user of a device is on vacation, we strongly recommend setting a grace period. The grace period is a buffer that prevents deadlines from immediately forcing a restart as soon as a device is turned on.


We also recommend leveraging the default automatic restart behavior. Windows has heuristics to analyze when the user interacts with the device to find the optimal time to automatically download, install, and restart. Allowing auto-restart can therefore improve your patch compliance while maintaining a good end user experience.


We recommend the following settings for deadline policies. You can find these policies in Group Policy under Computer Configuration > Administrative Templates > Windows Components > Windows Update > Specify deadlines for automatic updates and restarts, or via the CSP name listed for each policy setting below. (A small audit sketch follows the list.)



  • Quality updates (days): 0-7 (3 days is the recommended configuration)
    CSP name: Update/ConfigureDeadlineForQualityUpdates

  • Feature updates (days): 0-14 (7 days is the recommended configuration)
    CSP name: Update/ConfigureDeadlineForFeatureUpdates

  • Grace period (days): 0-3 (2 days is the recommended configuration)
    CSP name: Update/ConfigureDeadlineGracePeriod









    Note: We strongly recommend that the sum of a feature update/quality update deadline and the grace period be no less than 2. Setting a lower value can cause a poor end user experience due to the aggressive timeline.




  • Auto-restart: Disabled is the recommended configuration.
    CSP name: Update/ConfigureDeadlineNoAutoReboot
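
When these settings are delivered via Group Policy, they land in the registry. A small audit sketch, assuming the standard policy registry path used by the Windows Update ADMX (verify the path and value names in your own environment before relying on this):

#Read the configured deadline policy values (assumed GPO-managed registry location).
$WUPolicyKey = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'

Get-ItemProperty -Path $WUPolicyKey -ErrorAction SilentlyContinue |
    Select-Object ConfigureDeadlineForQualityUpdates, ConfigureDeadlineForFeatureUpdates, ConfigureDeadlineGracePeriod, ConfigureDeadlineNoAutoReboot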


How to set deadlines for automatic updates and restarts using Group Policy


For more information, see Enforcing compliance deadlines for updates in Windows Update for Business.


Make sure automatic updates are set up correctly


Automatic updates are another policy where misconfigurations affect patch compliance. Within the Configure Automatic Updates policy in Group Policy (see below for the Configuration service provider (CSP) equivalent), you can define when and if to require end user interaction during the update process.


As a rule of thumb, requiring end user approval of updates negatively impacts patch compliance and success rates by a significant percentage. To simplify the update process, we therefore recommend either not configuring this policy at all or, if configured, selecting “4 – Auto download and schedule the install.” This allows the update to download and install silently in the background and only notifies the user once it is time to restart.


We also recommend setting the scheduled installation time to “Automatic,” rather than a specific time to restart, as the device will then fall back to the configured restart policies, such as active hours, to find the optimal time to schedule the restart (like when the user is away).


For the smoothest update process, we strongly recommend not requiring the end user to approve updates, as this can create bottlenecks. This includes avoiding configuring “2 – Notify for download and auto install” and “3 – Auto download and notify for install.”


If you do choose to use values 2 or 3, the following policy conflicts may arise and prevent updates from successfully being applied or may significantly degrade the end user experience.



  • If “Configure Automatic Updates” is set to 2 or 3 and
    “Remove access to use all Windows Update features” is set to Enabled:

    The end user will be unable to take action on Windows Update notifications and will, therefore, be unable to download or install the update before the deadline. When the deadline is reached, the update will automatically be downloaded and installed.


  • If “Configure Automatic Updates” is set to 2 or 3 and
    “Display options for update notifications” is set to (2) Disable all notifications including restart notifications:

    The end user will not see notifications and, therefore, cannot take action without going to the Windows Update Settings page. Thus, the user will be unlikely to download or install the update before the deadline, at which time the device will be forced to restart without any warning.


How to configure automatic update settings using Group Policy


The CSP equivalents of the above recommended configurations for “Configure Automatic Updates” in Group Policy would be (a quick registry audit sketch follows the list):



  • Update/AllowAutoUpdate = 2 (Auto install and restart. Updates are downloaded automatically and installed when the device is not in use.)

  • Update/ScheduledInstallDay = 0 (Every day)

  • Update/ScheduledInstallEveryWeek = 1

  • Or simply not configuring the policy
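
To see what a device currently has for this policy, here is a registry audit sketch (assuming the standard GPO-managed AU subkey; MDM-managed devices store these values elsewhere):

#Read the Configure Automatic Updates policy values (assumed GPO registry location).
#AUOptions 4 = 'Auto download and schedule the install'; ScheduledInstallDay 0 = every day.
$AUKey = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU'

Get-ItemProperty -Path $AUKey -ErrorAction SilentlyContinue |
    Select-Object AUOptions, ScheduledInstallDay, ScheduledInstallTime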


Ensure that devices can check for updates


Frequently, we see the policies listed in this section misconfigured during the initial setup of a device and then never revisited. If left misconfigured, they can prevent devices from updating. As a result, we recommend that you review the following policies to ensure they are configured correctly. You can find these policies in Group Policy under Computer Configuration > Administrative Templates > Windows Components > Windows Update or the Update Policy CSP listed for each policy setting below.


Do not allow update deferral policies to cause scans against Windows Update
CSP name: Update/DisableDualScan


If a device is configured to receive updates from Windows Update, including DualScan devices, make sure this policy is not set to Enabled. To clean out the policy, we recommend you set this policy to Disabled.


Specify intranet Microsoft update service location
CSP name: Update/AllowUpdateService


We see a significant number of devices unable to update due to bad Windows Server Update Services (WSUS) server addresses, often because a replacement WSUS server is provisioned with a different name. As a result, double-check to ensure that the WSUS server address is up to date. Devices that are misconfigured to ping an address that is expired or no longer in service won’t receive updates.
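
A quick way to sanity-check a device is to read its configured WSUS address and confirm the host still resolves and listens. A sketch assuming the standard WUServer policy registry value; adjust for your environment:

#Read the configured WSUS server address (assumed GPO registry location).
$WUServer = (Get-ItemProperty 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate' -ErrorAction SilentlyContinue).WUServer

If($WUServer){
    $Uri = [Uri]$WUServer
    #Confirm the WSUS host resolves and is listening on the configured port.
    Test-NetConnection -ComputerName $Uri.Host -Port $Uri.Port
}
Else{
    Write-Warning "No WSUS server address is configured on this device."
}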


Additionally, scan failures can occur when devices require a user proxy to scan their configured WSUS server and do not have the proper proxy policy configured. For proxy scenarios, see Scan changes and certificates add security for Windows devices using WSUS for updates.


Configure policies to ensure a good end user experience


The following policies can negatively affect your end user experience:


Remove access to “Pause updates” feature
CSP name: Update/SetDisablePauseUXAccess


Enabling this policy can benefit your monthly adoption rate, but also prevents users from pausing updates when necessary. If you have set deadlines that may cause a mandatory restart, disabling this policy will give end users the power to protect their work. For example, if a device is in the middle of a process that must run for consecutive days, the end user may want to make sure that their device is not forced to restart during the process.


Remove access to use all Windows Update features
CSP name: Update/SetDisableUXWUAccess


Leveraging this policy will remove the end user’s ability to use the Windows Update Settings page. This includes their ability to check for updates or interact with notifications, and can lead to policy conflicts. Note that, if this policy is left alone, checking for updates simply prompts a scan and does not change what the user is offered if you have configured Windows Update for Business offering policies on the device.


Display options for update notifications
CSP Name: Update/UpdateNotificationLevel


We recommend configuring this setting to 0 (the default). If set to 1 or 2, this setting will disable all restart warnings or all restart notifications, limiting the end user's visibility into an upcoming restart. This can result in restarts happening while the end user is present, with no notifications, due to the deadline being reached. Enabling this policy is recommended for kiosk or digital signage devices only.


Apply recommendations with the Update Baseline


To implement the recommendations outlined above, we provide the Update Baseline tool, which allows you to import our full set of Windows update policy recommendations into the Group Policy Management Console. The Update Baseline toolkit is currently only available for Group Policy. A Microsoft Intune solution to apply the Update Baseline is coming soon. Download the Update Baseline and review the documentation included to view and apply our full list of policy configuration recommendations.


Try out our policy recommendations for yourself to see how you can improve your patch velocity. Let us know what you think by commenting below or reaching out to me directly on Twitter @ariaupdated.


 

Secure DevOps Kit for Azure (AzSK)

In my previous blog, I addressed the issue of managing credentials in code and presented two different alternatives to secure them. In this post, I will focus on Azure subscription security health and its challenges, which I could summarize as follows:



  • New resources are deployed to Azure subscriptions all the time, especially if the company has many developers and DevOps engineers working on the same subscription.

  • Subscription security health checks need to be conducted frequently, across many subscriptions, to make sure all the resources within them follow security best practices.

  • For a big subscription, a manual check can be challenging.


Of course, there is the option to develop manual security checks as scripts and run them against the subscription. However, maintaining and updating such a tool would be a nightmare, especially as the company adopts more resource types in the subscription.


Wouldn't it be nice if there was a tool that I could run against the subscription, and it could come back with a list of security issues and even give me the option to fix them automatically? Fortunately, this tool exists, and it is called the Secure DevOps Kit for Azure, or AzSK.




What is Secure DevOps kit (AzSK)?



  • The Secure DevOps Kit was an internal Microsoft tool, developed to help internal teams move to Azure more quickly and easily.

  • It is an open-source tool and not an official product.

  • Microsoft released this tool so it can share Azure cloud security best practices with the community.

  • Both the code and documentation can be found on GitHub.

  • It allows the option to customize the scripts so they can match company needs.


AzSK Focus Area:



  • Secure the subscription: It can run global subscription health checks and come back with a list of improvements. I can then go ahead and apply these improvements manually or automatically.

  • Integrate Security into CICD: The security tests can be integrated into the company's continuous integration and continuous delivery pipelines.

  • Enable Secure Development: There is a Visual Studio extension that can be installed on developers' machines. It adds security intelligence to the developers' IDE, so developers are presented with security best practices at development time.

  • Continuous Assurance: The security tests can be integrated with Azure Automation, so they run automatically as part of an Azure Automation process.

  • Alerting & Monitoring: It provides alerting and monitoring data that users can work with as part of Azure Monitor.

  • Security Telemetry in App Insights: It can write security telemetry information to an instance of Application Insights.


How does it work?


In a subscription, I can have different Azure services like Azure SQL Database, virtual machines, storage accounts, Azure Key Vault instances, API Management instances, etc. There are a few options when using the AzSK tool:



  • Running a few security tests against the subscription.

  • Performing all of the tests and coming back with the test results as a .csv file.

  • Automatically fixing the issues found for us.


NOTE: Not all of the issues found can be fixed automatically. The DevOps admin will need to fix some of them manually.




Setting up AzSK


All the requirements and step-by-step instructions can be found at https://azsk.azurewebsites.net/.




I will need the following Pre-requisites:



  • PowerShell 5.0 or higher.

  • Windows OS




To install the Secure DevOps Kit for Azure (AzSK) PowerShell module:


 


 

Install-Module AzSK -Scope CurrentUser
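
To confirm the module landed and to see its version plus the scan cmdlets used below, standard PowerShellGet commands are enough (nothing AzSK-specific is assumed here beyond the module name):

#Confirm the module is installed and check its version.
Get-InstalledModule -Name AzSK | Select-Object Name, Version

#List the security scan cmdlets referenced in this post.
Get-Command -Module AzSK -Name *SecurityStatus*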

 


 


Demo: Running Security Validation Tests (SVT) with AzSK


I am going to use the tool to scan an Azure subscription for security health. I will also use the tool to scan a resource group for security health. The resource group will have the following:



  • Storage Account

  •  Azure SQL Database.

  • Virtual Machine

  •  Azure Key Vault

  •  Azure Cosmos DB


The following AzSK command will run a security scan against the subscription.


 


 

Get-AzSKSubscriptionSecurityStatus -SubscriptionId '<subscriptionId>' -GenerateFixScript

 


 


 




Once the command is finished, it will open an Explorer window and show the results.




The SecurityReport file shows the list of subscription-level tests that were executed and their results. For example, there is a test on the subscription's admins and owners.




Also, since I used the -GenerateFixScript flag, there is a folder called "FixControlScript", and if I open this folder I can see the PowerShell fix script.




In this second test, I will run the AzSK command against a resource group:


 


 

Get-AzSKAzureServicesSecurityStatus -SubscriptionId <SubscriptionId> -ResourceGroupNames <rgname>

 


 


 




As with the last test, the AzSK command will open the results folder, where I can find the security scan report.




Also, there will be a “FixControlScript” folder where I will find the “RunFixScript” file




If I edit the file, I can see the script and how it will attempt to fix the problem.




Also, if I check the results folder again, I can see a "security-validation-rg" folder that contains logs for each Azure service that exists under the resource group. The logs contain information about the tests that were executed against each resource and their results.




Summary


AzSK enables us to run security health checks against our subscriptions or resource groups. The tool will produce a report and also offer the option to automatically fix issues that are found. In my next blog, I will discuss Azure Sentinel.


 


 

Microsoft 365 Information Protection and Compliance Deployment Acceleration Guides


The guides can be used independently, but we recommend using all the solutions together for your deployment needs. We are not recommending that one solution be implemented before another, but we have included information in each guide to tie all the solutions together, with features to consider during your implementation. The guides cover currently released features as of today and will be updated as additional features progress from beta or private preview to general availability.



In summary, the deployment guides provide:

  • One Compliance Story, covering how each solution's features complement the others.

  • Best Practices, based on the CxE team's experience with customer roadblocks.

  • Considerations to weigh and research before starting your deployment.

  • Help Resources, with links to additional readings and topics to gain a deeper understanding.

  • An Appendix with additional information on licensing.


 


We have included five ZIP files: one with all four guides, and a separate ZIP for each guide.


 


This documentation was written by the global CxE team; thank you all for your hard work to produce the documents.


 


If you have any questions or suggestions about the guides, please reach out to our Yammer group at aka.ms/askmipteam.

Azure RTOS 6.1.3 is now released

This is our first release of Azure RTOS in 2021. The 6.1.3 patch release incorporates many updates, including new ThreadX ports for new microcontrollers and chips, the addition of LwM2M and PTP to NetX Duo, updates to GUIX, and more.


 


Azure RTOS is an embedded development suite including ThreadX, a small but powerful real-time operating system, as well as several libraries for connectivity, file system, user interfaces and all that’s needed to build optimized resource-constrained devices.


 


Development is continuous, and you can track the updates on GitHub in the Azure RTOS repositories.


 


New ThreadX ports


Our team delivers new features regularly in response to customers' needs and asks. This release brings ThreadX to more microcontrollers and chips, with new ports to Renesas's RXv2 for use with the IAR, GNU, and Xtensa XCC compilers and tools.



New module ports are also now available for the Cortex-A35 with the AC6 workbench and the GNU compiler, making ThreadX even more pervasive.



In addition to the new ports, we added new sample projects for Renesas RZ and RX development boards: https://github.com/azure-rtos/samples


 


Bringing new protocol support to NetX Duo


NetX Duo is Azure RTOS's advanced, industrial-grade TCP/IP network stack, which also provides a rich set of protocols, including security and cloud protocols.


 


In this release, the team added support for Lightweight M2M (LwM2M). The addition of this protocol allows for seamless integration of IoT devices powered by Azure RTOS into solutions that have invested in LwM2M to connect, service, and manage their devices.


 


Precision Time Protocol (PTP) has also been added to NetX Duo, giving the option to synchronize IoT device clocks at the sub-microsecond level on a local area network.


 


GUIX enhanced support for screen rotation


As mobile IoT devices become more sophisticated, they are also more often getting a graphical interface on a screen that will be used in changing orientations.


 


GUIX, the embedded graphical user interface library of Azure RTOS, now supports a screen rotation feature for 16bpp color, plus APIs for handling bi-directional text reordering.


 


GUIX Studio and TraceX now in the Windows Store


Finding, installing and keeping developer tools up to date is not always trivial. We added the GUIX Studio and TraceX tools to the Windows Store to make your life a little bit simpler.


 


You can now grab and install them from: https://aka.ms/azrtos-guix-installer and https://aka.ms/azrtos-tracex-installer.


 




 

For more details about improvements and bug fixes in this release, you can check the change log within repository of each Azure RTOS component: https://github.com/azure-rtos.


 


If you encounter any bugs or have suggestions for new features, you can create issues on GitHub or post a question to Stack Overflow using the azure-rtos tag and component tags such as threadx.