OpenFOAM (Open-source Field Operation And Manipulation) is an open-source computational fluid dynamics (CFD) software package. It provides a comprehensive set of tools for simulating and analyzing complex fluid flow and heat transfer phenomena, and it is widely used in academia and industry for applications such as aerodynamics, hydrodynamics, chemical engineering, environmental simulations, and more.
Azure offers services like Azure Batch and Azure CycleCloud that help individuals and organizations run OpenFOAM simulations effectively and efficiently. Both services allow users to create and manage clusters of VMs, enabling parallel processing and scaling of OpenFOAM simulations. While CycleCloud provides an experience similar to on-premises clusters thanks to its support for common schedulers like OpenPBS or SLURM, Azure Batch provides a cloud-native resource scheduler that simplifies the configuration, maintenance, and support of the required infrastructure.
This article covers a step-by-step guide to a minimal Azure Batch setup to run OpenFOAM simulations. Further analysis should be performed to identify the right sizing, both in terms of compute and storage; a previous article on how to identify the recommended VM for your HPC workloads could be helpful.
Step 1: Provisioning required infrastructure
To get started, create a new Azure Batch account. A pool, job, or task is not required at this point. In our scenario, the pool allocation method is configured as “User Subscription” and public network access as “All Networks”.
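For reference, the account can also be provisioned from the Azure CLI. The following is a minimal sketch with placeholder resource names; note that the “User Subscription” allocation mode requires a Key Vault that the Batch service is allowed to access, as described in the Batch documentation.
# Placeholder names; adjust resource group, location and account names to your environment.
az group create --name rg-openfoam --location westeurope
# "User Subscription" pool allocation requires a Key Vault linked to the Batch account.
az keyvault create --name kv-openfoam-batch --resource-group rg-openfoam \
    --enabled-for-deployment true --enabled-for-template-deployment true
az batch account create --name openfoambatch --resource-group rg-openfoam \
    --location westeurope --keyvault kv-openfoam-batch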
Shared storage accessible from all nodes is also required to share the input model and store the outputs. In this guide, an Azure Files NFS share is used. Alternatives like Azure NetApp Files or Azure Managed Lustre could also be an option based on your scalability and performance needs.
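As a sketch, such a share can be created with the Azure CLI. NFS 4.1 shares require a premium FileStorage storage account with secure transfer (HTTPS-only) disabled and network access restricted to the virtual network used by the pool; the names below are placeholders.
# Premium FileStorage account; secure transfer must be disabled for NFS shares.
az storage account create --name stopenfoamnfs --resource-group rg-openfoam \
    --location westeurope --sku Premium_LRS --kind FileStorage --https-only false
# Create the NFS share that the pool will mount as "data".
az storage share-rm create --storage-account stopenfoamnfs --resource-group rg-openfoam \
    --name data --enabled-protocols NFS --quota 1024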
Step 2: Customizing the virtual machine image
OpenFOAM provides pre-compiled binaries packaged for Ubuntu that can be installed through its official APT repositories. If Ubuntu is your distribution of choice, you can follow the official documentation on how to install it; a pool’s start task is a good place to do so, as sketched below. As an alternative, you can create a custom image with everything already pre-configured.
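If you go down the Ubuntu route, the start task could run something along these lines (a sketch based on the openfoam.com packaging instructions; verify the exact repository script and package name for your target version):
#!/bin/bash
# Sketch of a pool start task for an Ubuntu-based pool (run with elevated privileges).
# Adds the official openfoam.com APT repository and installs the packaged binaries.
curl -s https://dl.openfoam.com/add-debian-repo.sh | bash
apt-get update
apt-get install -y openfoam2212-default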
This article covers the second option, using CentOS 7.9 as the base image, to show the end-to-end configuration and compilation of the software from source code. To simplify the process, it relies on the available HPC images, which come with the required prerequisites already installed. The reference URN for those images is OpenLogic:CentOS-HPC:7_9-gen2:latest. The VM SKU used both to create the custom image and to run the simulations is HBv3.
Start the configuration by creating a new VM; a minimal Azure CLI sketch is shown below. After the VM is up and running, execute the script that follows to download and compile the OpenFOAM source code.
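A minimal Azure CLI sketch to create such a build VM could look as follows (placeholder names, using the HPC image and HBv3 size mentioned above):
az vm create \
    --resource-group rg-openfoam \
    --name openfoam-build \
    --image OpenLogic:CentOS-HPC:7_9-gen2:latest \
    --size Standard_HB120rs_v3 \
    --admin-username azureuser \
    --generate-ssh-keys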
## Download and extract the OpenFOAM source code
sudo mkdir /openfoam
sudo chmod 777 /openfoam
cd /openfoam
wget https://dl.openfoam.com/source/v2212/OpenFOAM-v2212.tgz
wget https://dl.openfoam.com/source/v2212/ThirdParty-v2212.tgz
tar -xf OpenFOAM-v2212.tgz
tar -xf ThirdParty-v2212.tgz
module load mpi/openmpi
module load gcc-9.2.0
## The OpenFOAM build requires CMake 3. CentOS 7.9 comes with an older version.
sudo yum install epel-release.noarch -y
sudo yum install cmake3 -y
sudo yum remove cmake -y
sudo ln -s /usr/bin/cmake3 /usr/bin/cmake
source OpenFOAM-v2212/etc/bashrc
foamSystemCheck
cd OpenFOAM-v2212/
./Allwmake -j -s -q -l
The last command compiles with all cores (-j), with reduced output (-s, -silent), with queuing (-q, -queue), and logs (-l, -log) the output to a file for later inspection. After the initial compilation, review the output log or re-run the last command to make sure that everything compiled without errors; the output is verbose enough that errors can be missed in a quick review of the logs.
The compilation takes a while to finish. Afterwards, you can delete the installers and any other folders not required in your scenario and capture the image into a Shared Image Gallery, as sketched below.
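The exact capture procedure depends on your environment; the following Azure CLI sketch (placeholder names) outlines one way to generalize the build VM and publish it as a gallery image version:
# Deprovision the VM first from inside the guest: sudo waagent -deprovision+user
az vm deallocate --resource-group rg-openfoam --name openfoam-build
az vm generalize --resource-group rg-openfoam --name openfoam-build
# Capture a managed image from the generalized VM (the HPC image is generation 2).
az image create --resource-group rg-openfoam --name openfoam-v2212-image \
    --source openfoam-build --hyper-v-generation V2
# Publish the image into a gallery so the Batch pool can reference it by resource ID.
az sig create --resource-group rg-openfoam --gallery-name openfoamgallery
az sig image-definition create --resource-group rg-openfoam --gallery-name openfoamgallery \
    --gallery-image-definition openfoam-centos79-hpc --publisher Contoso --offer OpenFOAM \
    --sku v2212 --os-type Linux --hyper-v-generation V2
az sig image-version create --resource-group rg-openfoam --gallery-name openfoamgallery \
    --gallery-image-definition openfoam-centos79-hpc --gallery-image-version 1.0.0 \
    --managed-image openfoam-v2212-image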
Step 3. Batch pool configuration
Add a new pool to the Azure Batch account created earlier. You can create the pool using the standard wizard (Add), filling in the required fields with the values from the following JSON, or you can copy and paste this file into the Add (JSON editor).
Make sure you customize the empty placeholder values (the image reference ID, the NFS mount source, and the subnet ID) with your own resource IDs.
{
"properties": {
"vmSize": "STANDARD_HB120rs_V3",
"interNodeCommunication": "Enabled",
"taskSlotsPerNode": 1,
"taskSchedulingPolicy": {
"nodeFillType": "Pack"
},
"deploymentConfiguration": {
"virtualMachineConfiguration": {
"imageReference": {
"id": ""
},
"nodeAgentSkuId": "batch.node.centos 7",
"nodePlacementConfiguration": {
"policy": "Regional"
}
}
},
"mountConfiguration": [
{
"nfsMountConfiguration": {
"source": "",
"relativeMountPath": "data",
"mountOptions": "-o vers=4,minorversion=1,sec=sys"
}
}
],
"networkConfiguration": {
"subnetId": "",
"publicIPAddressConfiguration": {
"provision": "BatchManaged"
}
},
"scaleSettings": {
"fixedScale": {
"targetDedicatedNodes": 0,
"targetLowPriorityNodes": 0,
"resizeTimeout": "PT15M"
}
},
"targetNodeCommunicationMode": "Simplified"
}
}
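As an alternative to the portal wizard, the pool can be created from the Azure CLI. Note that az batch pool create expects the Batch service API schema (a top-level "id", "vmSize", and so on) rather than the ARM-style "properties" wrapper shown above, so the JSON needs to be adapted accordingly; a sketch:
# Sign in to the Batch account and create the pool from a JSON definition file.
az batch account login --resource-group rg-openfoam --name openfoambatch
az batch pool create --json-file pool.json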
Wait until the pool is created and the nodes are available to accept new tasks.
Step 4. Batch Job Configuration
Once the pool allocation state value is “Ready”, continue with the next step: create a new job. The default configuration is enough in this case. Here the job is called “flange” because we will use the flange example from the OpenFOAM tutorials.
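If you prefer the CLI, the equivalent command is a one-liner (a sketch, assuming the pool created in the previous step has the ID openfoam-pool):
az batch job create --id flange --pool-id openfoam-pool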
Step 5. Task Configuration
Once the job state value changes to “Active”, it is ready to admit new tasks. You can create a new task using the standard wizard (Add), filling in the required fields with the values from the following JSON, or you can copy and paste this file into the Add (JSON editor).
Make sure you customize the empty placeholder value (in this case, the task ID).
{
"id": "",
"commandLine": "/bin/bash -c '$AZ_BATCH_NODE_MOUNTS_DIR/data/init.sh'",
"resourceFiles": [],
"environmentSettings": [],
"userIdentity": {
"autoUser": {
"scope": "pool",
"elevationLevel": "nonadmin"
}
},
"multiInstanceSettings": {
"numberOfInstances": 2,
"coordinationCommandLine": "echo "Coordination completed!"",
"commonResourceFiles": []
}
}
The task’s commandLine parameter is configured to execute a Bash script stored in the Azure Files share that Batch mounts automatically under the ‘$AZ_BATCH_NODE_MOUNTS_DIR/data’ folder. You first need to copy the two scripts below (init.sh and run.sh) and the flange example mentioned above, packaged as flange.zip, into that directory.
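As a sketch, assuming the share is also mounted on your workstation at /mnt/data and the task definition above is saved as task.json, the copy and the task submission could look like this:
# Copy the scripts and the packaged flange case to the root of the NFS share.
cp init.sh run.sh flange.zip /mnt/data/
# Submit the task to the "flange" job created in the previous step.
az batch task create --job-id flange --json-file task.json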
Command Line Task Script
This script configures the environment variables and pre-processes the input files before launching the mpirun command that executes the solver in parallel across all available nodes: in this case, two nodes with 120 cores each, 240 processes in total.
#! /bin/bash
source /etc/profile.d/modules.sh
module load mpi/openmpi
# Azure Files is mounted automatically in this directory based on the pool configuration
DATA_DIR="$AZ_BATCH_NODE_MOUNTS_DIR/data"
# OpenFOAM was installed in this folder
OF_DIR="/openfoam/OpenFOAM-v2212"
# Create the case folder and extract the input data there
# (flange.zip is assumed to contain the case files at its root).
CASE_DIR="$DATA_DIR/flange"
mkdir -p "$CASE_DIR"
unzip -o "$DATA_DIR/flange.zip" -d "$CASE_DIR"
# Configure the OpenFOAM environment
source "$OF_DIR/etc/bashrc"
source "$OF_DIR/bin/tools/RunFunctions"
# Pre-process the input files
cd "$CASE_DIR"
runApplication ansysToFoam "$OF_DIR/tutorials/resources/geometry/flange.ans" -scale 0.001
runApplication decomposePar
# Configure the host file: one line per node with 120 slots (cores) each
echo $AZ_BATCH_HOST_LIST | tr ',' '\n' > hostfile
sed -i 's/$/ slots=120/g' hostfile
# Launch the secondary script to perform the parallel computation.
mpirun -np 240 --hostfile hostfile "$DATA_DIR/run.sh" > solver.log
Mpirun Processing Script
This script is launched by mpirun on every available node. It configures the environment variables and folders the solver needs to access. If this script were skipped and the solver invoked directly in the mpirun command, only the primary task node would have the right configuration applied and the rest of the nodes would fail with file-not-found errors.
#! /bin/bash
source /etc/profile.d/modules.sh
module load gcc-9.2.0
module load mpi/openmpi
DATA_DIR="$AZ_BATCH_NODE_MOUNTS_DIR/data"
OF_DIR="/openfoam/OpenFOAM-v2212"
source "$OF_DIR/etc/bashrc"
source "$OF_DIR/bin/tools/RunFunctions"
# Execute the code across the nodes.
laplacianFoam -parallel > solver.log
Step 6. Checking the results
The mpirun output is redirected to a file called solver.log in the directory where the model is stored inside the Azure Files share. Checking the first lines of the log, it is possible to validate that the execution started properly and is running on two HBv3 nodes with 240 processes.
/*---------------------------------------------------------------------------*\
| =========                 |                                                 |
| \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox           |
|  \\    /   O peration     | Version:  2212                                  |
|   \\  /    A nd           | Website:  www.openfoam.com                      |
|    \\/     M anipulation  |                                                 |
\*---------------------------------------------------------------------------*/
Build : _66908158ae-20221220 OPENFOAM=2212 version=v2212
Arch : "LSB;label=32;scalar=64"
Exec : laplacianFoam -parallel
Date : May 04 2023
Time : 15:01:56
Host : 964d5ce08c1d4a7b980b127ca57290ab000000
PID : 67742
I/O : uncollated
Case : /mnt/resource/batch/tasks/fsmounts/data/flange
nProcs : 240
Hosts :
(
(964d5ce08c1d4a7b980b127ca57290ab000000 120)
(964d5ce08c1d4a7b980b127ca57290ab000001 120)
)
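A few quick checks on solver.log from any machine with the share mounted can confirm the run is healthy; for example (a completed laplacianFoam run typically ends with “Finalising parallel run” followed by “End”):
head -n 30 solver.log               # header shown above: hosts, nProcs, case path
grep -iE "error|fatal" solver.log   # no output means no reported errors
tail -n 5 solver.log                # look for "Finalising parallel run" / "End"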
Conclusion
By leveraging Azure Batch’s scalability and flexible infrastructure, you can run OpenFOAM simulations at scale, achieving faster time-to-results and increased productivity. This guide demonstrated how to configure Azure Batch, customize the CentOS 7.9 image, install dependencies, compile OpenFOAM, and run simulations efficiently across multiple nodes. With Azure’s powerful capabilities, researchers and engineers can unleash the full potential of OpenFOAM in the cloud.