4 ways to level up your Power Automate flows


This article is contributed. See the original author and article here.

Intro


A while back, I wrote about How to use a custom connector in Power Automate, showing how easily you can create a connector to a cloud service that isn’t already in the very long list of connectors in Power Automate. I chose to create a connector for Spotify and connected a Get_Current_Song action with an IoT button and Twitter. As a result, information about the song I was listening to would be tweeted.


Now I stumbled upon a really great blog post by fellow MVP Loryan Strant, who used this Spotify connector to change your pinned status message in Microsoft Teams. To get the most value out of this post, go read Loryan’s post first – it is written with great clarity, and I also love this guy’s taste in music :musical_notes:! Please make sure you understand his flow first. I love the idea and creativity! The result of such a flow looks like this:


 

Teams-status.png


 


While some would debate whether this flow is necessary, I feel it shows that custom connectors are a great way to extend Microsoft 365. Also: #MusicWasMyFirstLove – case closed :)


However, when reading this blog post, I saw some patterns that I often see in flows and that could be improved – and as I have already learned so much from Loryan, this time I hope to return the favor :)


 


Loryan created 13 actions, and as I seem to be even lazier than he is, I thinned out his awesome idea to just 5–6 actions. This is what it looks like:


 

flow-overview.png


 


The result is about the same – except that I also display a message if I am currently not listening to music (yes, this happens!).


Parse JSON


The first thing I wanted to get rid of was the Parse JSON action. While it is super powerful and lets you easily access properties of the objects you get as a response, it is sometimes unnecessary: we can also write the flow without it if we take a look at how we can select properties and return their values in expressions.


To be successful with that, it is crucial to understand the JSON schema of the response we are interested in. The easiest way to achieve that:


a) copy the body of the output of that action and paste it into a code editor – I work with Visual Studio Code


 


flow-output.png


 


b) make sure that JSON is selected as the language – VS Code will then color everything nicely for us and, for example, highlight the beginning and end of each object


c) have a look at the code. For the sake of better readability – this schema is about 450 lines long – I already collapsed two arrays called available_markets; they are long lists of country codes in which a particular song is available, and we don’t need them here. If you aim to rebuild this, it’s highly recommended to copy the code from YOUR output, not from this blog post, as I shortened it.


 


{
  "timestamp": 1631969547352,
  "progress_ms": 85903,
  "item": {
    "album": {
      "album_type": "album",
      "artists": [
        {
          "external_urls": {
            "spotify": "https://open.spotify.com/artist/3CQIn7N5CuRDP8wEI7FiDA"
          },
          "href": "https://api.spotify.com/v1/artists/3CQIn7N5CuRDP8wEI7FiDA",
          "id": "3CQIn7N5CuRDP8wEI7FiDA",
          "name": "Run–D.M.C.",
          "type": "artist",
          "uri": "spotify:artist:3CQIn7N5CuRDP8wEI7FiDA"
        }
      ],
      "available_markets": [],
      "external_urls": {
        "spotify": "https://open.spotify.com/album/7AFsTiojVaB2I58oZ1tMRg"
      },
      "href": "https://api.spotify.com/v1/albums/7AFsTiojVaB2I58oZ1tMRg",
      "id": "7AFsTiojVaB2I58oZ1tMRg",
      "images": [
        {
          "height": 640,
          "url": "https://i.scdn.co/image/ab67616d0000b273894ae4df775c6b47438991af",
          "width": 640
        },
        {
          "height": 300,
          "url": "https://i.scdn.co/image/ab67616d00001e02894ae4df775c6b47438991af",
          "width": 300
        },
        {
          "height": 64,
          "url": "https://i.scdn.co/image/ab67616d00004851894ae4df775c6b47438991af",
          "width": 64
        }
      ],
      "name": "Raising Hell",
      "release_date": "1986-05-15",
      "release_date_precision": "day",
      "total_tracks": 12,
      "type": "album",
      "uri": "spotify:album:7AFsTiojVaB2I58oZ1tMRg"
    },
    "artists": [
      {
        "external_urls": {
          "spotify": "https://open.spotify.com/artist/3CQIn7N5CuRDP8wEI7FiDA"
        },
        "href": "https://api.spotify.com/v1/artists/3CQIn7N5CuRDP8wEI7FiDA",
        "id": "3CQIn7N5CuRDP8wEI7FiDA",
        "name": "Run–D.M.C.",
        "type": "artist",
        "uri": "spotify:artist:3CQIn7N5CuRDP8wEI7FiDA"
      },
      {
        "external_urls": {
          "spotify": "https://open.spotify.com/artist/7Ey4PD4MYsKc5I2dolUwbH"
        },
        "href": "https://api.spotify.com/v1/artists/7Ey4PD4MYsKc5I2dolUwbH",
        "id": "7Ey4PD4MYsKc5I2dolUwbH",
        "name": "Aerosmith",
        "type": "artist",
        "uri": "spotify:artist:7Ey4PD4MYsKc5I2dolUwbH"
      }
    ],
    "available_markets": [],
    "disc_number": 1,
    "duration_ms": 310386,
    "explicit": false,
    "external_ids": {
      "isrc": "USAR19900334"
    },
    "external_urls": {
      "spotify": "https://open.spotify.com/track/6qUEOWqOzu1rLPUPQ1ECpx"
    },
    "href": "https://api.spotify.com/v1/tracks/6qUEOWqOzu1rLPUPQ1ECpx",
    "id": "6qUEOWqOzu1rLPUPQ1ECpx",
    "is_local": false,
    "name": "Walk This Way (feat. Aerosmith)",
    "popularity": 69,
    "preview_url": "https://p.scdn.co/mp3-preview/c7a8010bbddcd0d793a832de76a24d2cae5ab497?cid=2e75e650d1e74b6a994734ed4aea2ef7",
    "track_number": 4,
    "type": "track",
    "uri": "spotify:track:6qUEOWqOzu1rLPUPQ1ECpx"
  },
  "currently_playing_type": "track",
  "actions": {
    "disallows": {
      "resuming": true,
      "toggling_repeat_context": true,
      "toggling_repeat_track": true,
      "toggling_shuffle": true
    }
  },
  "is_playing": true
}


 

d) look for the properties you are interested in – for example, we want to know whether a song is playing right now. We find the is_playing property, which returns either true or false, which makes it perfect for our condition:


 

flow-condition.png


 


The expression is outputs('Get_Current_Song')['body']['is_playing'].


Why is that? Let’s deconstruct this: from the output of Get_Current_Song, we are interested in the ['body'], and inside of this we want the value of the ['is_playing'] property.
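

If you would rather evaluate the whole check as one expression inside the condition, a minimal sketch – using the same Get_Current_Song action – could be:

equals(outputs('Get_Current_Song')?['body']?['is_playing'], true)

The ? operator makes the lookups null-safe, so the expression simply returns false instead of failing if a property happens to be missing.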


Now if we are also interested in the name of the song, we would do a quick search in that file for name and get four results:



  1. in line 14: this name property sits in the artists array inside the album object; it refers to the name of the album’s artist, not to the name of the song that we are interested in.

  2. in line 221: this name property also sits in the album object and refers to the name of the album.

  3. line 235: this name property sits in the artists array directly inside the item object and again refers to the name of the artist.

  4. finally, in line 432, we find the name property we were looking for; it sits in the item property.


Therefore, we will access the song name with:


outputs('Get_Current_Song')['body']?['item']?['name']


If we now also want to have the name of the artist, we get this with:


outputs('Get_Current_Song')['body']?['item']?['album']?['artists'][0]?['name']


Wait, what? These are a lot of properties, so let’s slow down for a bit to take a closer look:



  1. we get the output of the Get_Current_Song action with outputs('Get_Current_Song')

  2. now we go ahead and use the ? operator to select the first-level property we are interested in: item

  3. next up is taking a look inside the item property: what do we want to get here? It’s the album property. We do this as before with ? and the name of the property in []: ?['album']

  4. inside the album property we want to get the artists property, and yet again we do this with ? and the name of the property in []: ?['artists']

  5. Now remember that artists is an array? You can see this by the brackets [] in the code. We want to return the first element of this array, therefore we append [0]. It’s a zero because arrays in JSON are zero-based, which means that the first element of an array has index 0, the second one has index 1, and so on.

  6. Now that we have returned the first element of the artists array (it’s only one, but Power Automate will yell at you if you return the entire array instead of selecting a single element), we finally select the name property from it, which refers to the artist. Both expressions are combined in the sketch after this list.
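

Putting the song and artist expressions together, a minimal sketch of the message text – assuming the Get_Current_Song action from above and whatever action you use to set the Teams status message – could look like this:

concat(outputs('Get_Current_Song')?['body']?['item']?['name'], ' by ', outputs('Get_Current_Song')?['body']?['item']?['album']?['artists'][0]?['name'])

With the sample output above, this would produce “Walk This Way (feat. Aerosmith) by Run–D.M.C.”.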


You see, it’s all about understanding the underlying JSON schema and seeing which properties are part of which objects. If you use the Parse JSON action, you don’t need to write these expressions, but you face some disadvantages:



  1. you now have to choose between four name properties in your dynamic content – and need to pick blindly

  2. you have no clue WHY you get four of them if you don’t understand the data structure

  3. Parse JSON is yet another action that bloats your flow


unnecessary Apply-to-each loops


You know that moment when you are creating a flow and all of a sudden Power Automate automatically adds an Apply to each for you, and you wonder why that happened? And then you face some issues later on? Wherever possible, it’s a good idea to avoid loops that are not necessary.


The fact that we didn’t just parse the JSON output of our Get_Current_Song action but understood its schema gives us a way to avoid a loop: instead of returning an array of (one) artist, which would have triggered Power Automate to insert an Apply to each loop, we only returned the first element of the artists array. This way we don’t need to loop over a one-element array, which means we got rid of another action! An alternative way to write this is sketched below.
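

If you find indexing with [0] hard to read, the built-in first() function is an equivalent way to pick that single element – a minimal sketch, again assuming the Get_Current_Song action from above:

first(outputs('Get_Current_Song')?['body']?['item']?['album']?['artists'])?['name']

Either form returns a single object rather than an array, so Power Automate has no reason to wrap the following action in an Apply to each.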


variables and expressions


Power Automate has some nice actions for variables – the most important one is Initialize variable: in Power Automate, all variables need to be initialized (with or without a value) before we can use them.


Now that we have successfully skipped the Parse JSON action and can also access the artist name and song name in expressions without using variables, I want to minimize the other Initialize variable and Compose actions from Loryan’s flow:


Instead of several actions and calculations to


– get the timestamp when the song started
– get the current time
– add the duration of the currently playing song to the current time,


we could have one variable called duration with this expression:


 


addSeconds(utcnow(), div(sub(outputs('Get_Current_Song')?['body']?['item']?['duration_ms'], outputs('Get_Current_Song')?['body']?['progress_ms']), 1000))


Explanation:



  1. This adds seconds to utcnow(), which is the current time.

  2. How many seconds? The result of subtracting the current progress in milliseconds ['progress_ms'] from the duration in milliseconds ['duration_ms']

  3. With the div function this value is divided by 1000, as we want seconds instead of milliseconds. A quick worked example follows after this list.
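

To make the arithmetic concrete, here is the calculation with the values from the sample output above (duration_ms = 310386, progress_ms = 85903) – a sketch of what the expression evaluates step by step:

sub(310386, 85903)        returns 224483 (milliseconds left in the song)
div(224483, 1000)         returns 224 (integer division, so seconds left)
addSeconds(utcnow(), 224) returns the UTC timestamp at which the song will end

That single expression replaces the separate get-timestamp, get-current-time, and add-duration actions.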


understand the API you are working with


The Get_Current_Song action returns the LAST song that was played – and the is_playing property tells us whether that song is currently (still) playing or not. This means that we need to distinguish between a song that was playing before I had to turn off the music and a song that is playing right now. You might say this doesn’t really matter – but if we take a closer look at which data is returned when, we see why we would otherwise need to redesign our flow: the fact that we get an output from Get_Current_Song even if the is_playing property is false means that we don’t get a null where our subsequent actions expect an object, a string, an array, or anything else that is NOT null. Yet again, understanding what happens behind the scenes – because we understand the output of an action – makes it easier to create flows.
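

This is also where the “not listening” message from the beginning of the post comes from. A minimal sketch of how the status text could be built in a single expression – assuming the Get_Current_Song action and the song expression from above – might be:

if(equals(outputs('Get_Current_Song')?['body']?['is_playing'], true), concat('Listening to ', outputs('Get_Current_Song')?['body']?['item']?['name']), 'Not listening to music right now')

Because the API still returns the last played track when is_playing is false, the lookups never hit a null, and the else branch cleanly handles the “no music” case.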

Distributing virtual machines across multiple cluster shared volumes in AKS on Azure Stack HCI


This article is contributed. See the original author and article here.

In the July update of Azure Kubernetes Service (AKS) on Azure Stack HCI, we introduced automatic distribution of virtual machine data across multiple cluster shared volumes, which makes clusters more resilient to shared-storage outages. This post covers how this works and why it’s important for reliability.


 


Just to recap: AKS-HCI is a turn-key solution that lets administrators easily deploy and manage Kubernetes clusters in datacenters and edge locations, and lets developers run and manage modern applications much as they would with the cloud-based Azure Kubernetes Service. The architecture seamlessly supports running virtualized Windows and Linux workloads on top of Azure Stack HCI or Windows Server 2019 Datacenter. It comprises several layers, including a management cluster, a load balancer, workload clusters that run customer workloads, and the Cluster Shared Volumes (CSVs) backing them, as shown in the image below. For detailed information on each of these layers, visit here.


 


baziwane_0-1631889721470.png


Figure 1: AKS-HCI cluster components.


 


Cluster Shared Volumes allow multiple nodes in a Windows Server failover cluster or Azure Stack HCI to simultaneously have read-write access to the same disk that is provisioned as an NTFS volume. In AKS-HCI, we use CSVs to persist virtual hard disk (VHD/VHDX) files and other configuration files required to run clusters.


 


In past releases of AKS-HCI, virtual machine data was saved on a single volume in the system. This architecture created a single point of failure – the volume hosting all VM data, as shown in Figure 2a. In the event of an outage or failure of this volume, the entire cluster would become unreachable, impacting application/pod availability as illustrated in Figure 2b.


 


baziwane_1-1631889721484.png


 Figure 2: Virtual machines on a single volume.


 


Starting with the July release, for customers running multiple Cluster Shared Volumes (CSVs) in their Azure Stack HCI clusters, virtual machine data is automatically spread across all available CSVs in the cluster by default during a new installation of AKS-HCI. What you will notice is a list of folders prefixed with the name auto-config-container-N created on each cluster shared volume in the system.


 


baziwane_2-1631889721498.png


Figure 3: Sample of an auto-config-container-X folder generated by AKS-HCI deployment.


 


Most customers may not have noticed this behavior, as it required no changes to the cluster creation user experience; it happens behind the scenes during the initial cluster installation. Note that for customers running clusters based on the June or earlier releases, an update and clean installation is required for this functionality to become available.


 


To illustrate how this improves the reliability of the system, assume you have three volumes and deploy a cluster with VM data spread out as illustrated in Figure 4a. In the event of an outage or failure of volume 2, the cluster would still be operational, as workloads would continue running in the remaining VMs (Figure 4b).


baziwane_3-1631889721506.png


Figure 4: Virtual machines distributed across multiple cluster shared volumes.


 


To learn more about high availability on AKS-HCI, please visit our documentation for a range of topics.


 


Useful links:


Try for free: https://aka.ms/AKS-HCI-Evaluate
Tech Docs: https://aka.ms/AKS-HCI-Docs
Issues and Roadmap: https://github.com/azure/aks-hci
Evaluate on Azure: https://aka.ms/AKS-HCI-EvalOnAzure


 


 


 

ACSC Releases Annual Cyber Threat Report

This article is contributed. See the original author and article here.

The Australian Cyber Security Centre (ACSC) has released its annual report on key cyber security threats and trends for the 2020–21 financial year.  
 
The report lists the exploitation of the pandemic environment, the disruption of essential services and critical infrastructure, ransomware, the rapid exploitation of security vulnerabilities, and the compromise of business email as last year’s most significant threats.
 
CISA encourages users and administrators to review ACSC’s Annual Cyber Threat Report July 2020 to June 2021 and CISA’s Stop Ransomware webpage for more information. 

Introducing the Windows Server Hybrid Administrator Associate certification

This article is contributed. See the original author and article here.

Today at Windows Server Summit, Microsoft announced a new Windows Server Hybrid Administrator Associate certification, a certification that members of the team responsible for this blog have been highly involved in developing.



To obtain this certification you need to pass two exams: AZ-800 (Administering Windows Server Hybrid Core Infrastructure) and AZ-801 (Configuring Windows Server Hybrid Advanced Services). The objectives associated with the exams address knowledge of configuring and administering core and advanced Windows Server roles and features, from AD DS, DNS, DHCP, File, Storage, and Compute through to Security, High Availability, DR, Monitoring, and Troubleshooting. The exam objectives cover both the traditional on-premises elements of these Windows Server roles and features and the interaction of these elements with hybrid cloud technologies.


 


We’ve created two study guides to help you prepare for each exam. In these study guides you will find links to relevant MS Learn modules and learning paths, as well as docs.microsoft.com articles. You can find them here:



https://aka.ms/az-800studyguide (Administering Windows Server Hybrid Core Infrastructure)
https://aka.ms/az-801studyguide (Configuring Windows Server Hybrid Advanced Services)


 


If you just want a good overview of the content of each exam, I ran through the contents of each in briefings to Jeff Woolsey from the Windows Server & Azure Stack HCI product team. Each briefing is about 20 minutes long, and watching both should give you a great idea of what each exam and the certification are all about:


 


AZ-800 https://youtu.be/yI8BRar8xJY
AZ-801 https://youtu.be/T-JSpxZp8xk


 


How these exams and the certification came about is directly related to this team’s role as Cloud Advocates and our responsibility of advocating to, and on behalf of, the IT operations audience. Certification has always been important to us, and many of us got our grounding in core Microsoft technologies by preparing for certification exams.


 


A good number of us first got certified on Windows NT 4, and my first book was a Microsoft Press training kit for the Windows Server 2003 admin exam. When Rick Claus made the first post on this blog introducing the team back in 2018, one of the first comments we got asked about future Windows Server training and certification. We know the topic is important to you, our audience, because it regularly comes up when presenting to audiences at Ignite or user groups, on Twitter, or in casual conversation at the supermarket.


 


Over the last 18 months, Cloud Advocates have worked with World Wide Learning, Marketing, and the Windows Server and Azure Stack HCI product teams to design and develop MS Learn and instructor-led training content covering the fundamental technologies addressed by the AZ-800 and AZ-801 exams. These modules, paths, and courses laid the groundwork for the certification announced today.


 


It’s not a stretch to say that over the last few years cloud technologies have increasingly interacted with the on-premises world. Just as WINS was critical to NT 4, AD was critical to Windows 2000, and virtualization was critical to Windows Server 2008 and Windows Server 2012, cloud technologies are an important element of today’s on-premises Windows Server deployments.


 


Role-based certifications address the tasks that people perform in the course of their jobs. Any new certification around Windows Server not only had to address the core on-premises roles, but also how those roles are extended by technologies hosted in the cloud. Through our regular interactions with our audience, we’ve seen time and time again that we’re all living in a hybrid world, even if the degree to which we’re living in that world varies from organization to organization.


 


Windows Server 2022 has been designed to work in hybrid cloud environments, something you can see from Windows Admin Center through to the extended capabilities made available through Azure Arc and Azure File Sync. The description for each exam indicates that candidates should have experience with the technologies they are being tested on. Whereas a few years ago the hybrid story wasn’t as comprehensive or compelling, the release of Windows Server 2022 provided an opportunity to return to a certification that attests to how people do and will use the operating system today and into the future.


 


The AZ-800 and AZ-801 exams will go into beta towards the end of 2021. An announcement will be made when the betas are available, and we expect that uptake of the available beta seats will be swift. The exams are likely to RTM early in 2022. By providing you with a lot of information now, we hope you’ll have a good amount of time to prepare for this brand-new certification.

APT Actors Exploiting Newly Identified Vulnerability in ManageEngine ADSelfService Plus

This article is contributed. See the original author and article here.

Summary

This Joint Cybersecurity Advisory uses the MITRE Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK®) framework, Version 8. See ATT&CK for Enterprise for the referenced threat actor tactics and techniques.

This joint advisory is the result of analytic efforts between the Federal Bureau of Investigation (FBI), United States Coast Guard Cyber Command (CGCYBER), and the Cybersecurity and Infrastructure Security Agency (CISA) to highlight the cyber threat associated with active exploitation of a newly identified vulnerability (CVE-2021-40539) in ManageEngine ADSelfService Plus—a self-service password management and single sign-on solution.

CVE-2021-40539, rated critical by the Common Vulnerability Scoring System (CVSS), is an authentication bypass vulnerability affecting representational state transfer (REST) application programming interface (API) URLs that could enable remote code execution. The FBI, CISA, and CGCYBER assess that advanced persistent threat (APT) cyber actors are likely among those exploiting the vulnerability. The exploitation of ManageEngine ADSelfService Plus poses a serious risk to critical infrastructure companies, U.S.-cleared defense contractors, academic institutions, and other entities that use the software. Successful exploitation of the vulnerability allows an attacker to place webshells, which enable the adversary to conduct post-exploitation activities, such as compromising administrator credentials, conducting lateral movement, and exfiltrating registry hives and Active Directory files.

Zoho ManageEngine ADSelfService Plus build 6114, which Zoho released on September 6, 2021, fixes CVE-2021-40539. FBI, CISA, and CGCYBER strongly urge users and administrators to update to ADSelfService Plus build 6114. Additionally, FBI, CISA, and CGCYBER strongly urge organizations to ensure ADSelfService Plus is not directly accessible from the internet.

The FBI, CISA, and CGCYBER have reports of malicious cyber actors using exploits against CVE-2021-40539 to gain access [T1190] to ManageEngine ADSelfService Plus as early as August 2021. The actors have been observed using various tactics, techniques, and procedures (TTPs), including:

  • Frequently writing webshells [T1505.003] to disk for initial persistence
  • Obfuscating and Deobfuscating/Decoding Files or Information  [T1027 and T1140]
  • Conducting further operations to dump user credentials [T1003]
  • Living off the land by only using signed Windows binaries for follow-on actions [T1218]
  • Adding/deleting user accounts as needed [T1136]
  • Stealing copies of the Active Directory database (NTDS.dit) [T1003.003] or registry hives
  • Using Windows Management Instrumentation (WMI) for remote execution [T1047]
  • Deleting files to remove indicators from the host [T1070.004]
  • Discovering domain accounts with the net Windows command [T1087.002]
  • Using Windows utilities to collect and archive files for exfiltration [T1560.001]
  • Using custom symmetric encryption for command and control (C2) [T1573.001]

The FBI, CISA, and CGCYBER are proactively investigating and responding to this malicious cyber activity.

  • FBI is leveraging specially trained cyber squads in each of its 56 field offices and CyWatch, the FBI’s 24/7 operations center and watch floor, which provides around-the-clock support to track incidents and communicate with field offices across the country and partner agencies.
  • CISA offers a range of no-cost cyber hygiene services to help organizations assess, identify, and reduce their exposure to threats. By requesting these services, organizations of any size could find ways to reduce their risk and mitigate attack vectors.
  • CGCYBER has deployable elements that provide cyber capability to marine transportation system critical infrastructure in proactive defense or response to incidents.

Sharing technical and/or qualitative information with the FBI, CISA, and CGCYBER helps empower and amplify our capabilities as federal partners to collect and share intelligence and to engage with victims while working to unmask and hold accountable those conducting malicious cyber activities. See the Contact section below for details.

Click here for a PDF version of this report.

Technical Details

Successful compromise of ManageEngine ADSelfService Plus, via exploitation of CVE-2021-40539, allows the attacker to upload a .zip file containing a JavaServer Pages (JSP) webshell masquerading as an x509 certificate: service.cer. Subsequent requests are then made to different API endpoints to further exploit the victim’s system.

After the initial exploitation, the JSP webshell is accessible at /help/admin-guide/Reports/ReportGenerate.jsp. The attacker then attempts to move laterally using Windows Management Instrumentation (WMI), gain access to a domain controller, dump the NTDS.dit and SECURITY/SYSTEM registry hives, and then continue to operate with the compromised access.

Confirming a successful compromise of ManageEngine ADSelfService Plus may be difficult—the attackers run clean-up scripts designed to remove traces of the initial point of compromise and hide any relationship between exploitation of the vulnerability and the webshell.

Targeted Sectors

APT cyber actors have targeted academic institutions, defense contractors, and critical infrastructure entities in multiple industry sectors—including transportation, IT, manufacturing, communications, logistics, and finance. Illicitly obtained access and information may disrupt company operations and subvert U.S. research in multiple sectors.

Indicators of Compromise

Hashes:

068d1b3813489e41116867729504c40019ff2b1fe32aab4716d429780e666324
49a6f77d380512b274baff4f78783f54cb962e2a8a5e238a453058a351fcfbba

File paths:

C:\ManageEngine\ADSelfService Plus\webapps\adssp\help\admin-guide\reports\ReportGenerate.jsp
C:\ManageEngine\ADSelfService Plus\webapps\adssp\html\promotion\adap.jsp
C:\ManageEngine\ADSelfService Plus\work\Catalina\localhost\ROOT\org\apache\jsp\help
C:\ManageEngine\ADSelfService Plus\jre\bin\SelfSe~1.key (filename varies with an epoch timestamp of creation, extension may vary as well)
C:\ManageEngine\ADSelfService Plus\webapps\adssp\Certificates\SelfService.csr
C:\ManageEngine\ADSelfService Plus\bin\service.cer
C:\Users\Public\custom.txt
C:\Users\Public\custom.bat
C:\ManageEngine\ADSelfService Plus\work\Catalina\localhost\ROOT\org\apache\jsp\help (including subdirectories and contained files)

Webshell URL Paths:

/help/admin-guide/Reports/ReportGenerate.jsp

/html/promotion/adap.jsp

Check log files located at C:\ManageEngine\ADSelfService Plus\logs for evidence of successful exploitation of the ADSelfService Plus vulnerability:

  • In access* logs:
    • /help/admin-guide/Reports/ReportGenerate.jsp
    • /ServletApi/../RestApi/LogonCustomization
    • /ServletApi/../RestAPI/Connection
  • In serverOut_* logs:
    • Keystore will be created for "admin"
    • The status of keystore creation is Upload!
  • In adslog* logs:
    • Java traceback errors that include references to NullPointerException in addSmartCardConfig or getSmartCardConfig

TTPs:

  • WMI for lateral movement and remote code execution (wmic.exe)
  • Using plaintext credentials acquired from compromised ADSelfService Plus host
  • Using pg_dump.exe to dump ManageEngine databases
  • Dumping NTDS.dit and SECURITY/SYSTEM/NTUSER registry hives
  • Exfiltration through webshells
  • Post-exploitation activity conducted with compromised U.S. infrastructure
  • Deleting specific, filtered log lines

Yara Rules:

rule ReportGenerate_jsp {
   strings:
      $s1 = "decrypt(fpath)"
      $s2 = "decrypt(fcontext)"
      $s3 = "decrypt(commandEnc)"
      $s4 = "upload failed!"
      $s5 = "sevck"
      $s6 = "newid"
   condition:
      filesize < 15KB and 4 of them
}

rule EncryptJSP {
   strings:
      $s1 = "AEScrypt"
      $s2 = "AES/CBC/PKCS5Padding"
      $s3 = "SecretKeySpec"
      $s4 = "FileOutputStream"
      $s5 = "getParameter"
      $s6 = "new ProcessBuilder"
      $s7 = "new BufferedReader"
      $s8 = "readLine()"
   condition:
      filesize < 15KB and 6 of them
}

Mitigations

Organizations that identify any activity related to ManageEngine ADSelfService Plus indicators of compromise within their networks should take action immediately.

Zoho ManageEngine ADSelfService Plus build 6114, which Zoho released on September 6, 2021, fixes CVE-2021-40539. FBI, CISA, and CGCYBER strongly urge users and administrators to update to ADSelfService Plus build 6114. Additionally, FBI, CISA, and CGCYBER strongly urge organizations to ensure ADSelfService Plus is not directly accessible from the internet.

Additionally, FBI, CISA, and CGCYBER strongly recommend domain-wide password resets and double Kerberos Ticket Granting Ticket (TGT) password resets if any indication is found that the NTDS.dit file was compromised.

Actions for Affected Organizations

Immediately report as an incident to CISA or the FBI (refer to Contact Information section below) the existence of any of the following:

  • Identification of indicators of compromise as outlined above.
  • Presence of webshell code on compromised ManageEngine ADSelfService Plus servers.
  • Unauthorized access to or use of accounts.
  • Evidence of lateral movement by malicious actors with access to compromised systems.
  • Other indicators of unauthorized access or compromise.

Contact Information

Recipients of this report are encouraged to contribute any additional information that they may have related to this threat.

For any questions related to this report or to report an intrusion and request resources for incident response or technical assistance, please contact:

  • To report suspicious or criminal activity related to information found in this Joint Cybersecurity Advisory, contact your local FBI field office at https://www.fbi.gov/contact-us/field-offices, or the FBI’s 24/7 Cyber Watch (CyWatch) at (855) 292-3937 or by e-mail at CyWatch@fbi.gov. When available, please include the following information regarding the incident: date, time, and location of the incident; type of activity; number of people affected; type of equipment used for the activity; the name of the submitting company or organization; and a designated point of contact.
  • To request incident response resources or technical assistance related to these threats, contact CISA at Central@cisa.gov.
  • To report cyber incidents to the Coast Guard pursuant to 33 CFR Subchapter H, Part 101.305 please contact the USCG National Response Center (NRC) Phone: 1-800-424-8802, email: NRC@uscg.mil.

Revisions

September 16, 2021: Initial Version

This product is provided subject to this Notification and this Privacy & Use policy.