Azure Linux RDMA Setup Tips

Microsoft’s announcement of Azure Linux RDMA support last year was great news for those looking to run tightly coupled HPC workloads in the cloud. Unfortunately, there still isn’t a lot of documentation out there describing how to set it up. This tutorial appears to be the main source of information for configuring Azure Linux RDMA. However, there are a couple of omissions in there that can trip you up when setting up your cluster for the first time. In this post, we’ll cover a few gotchas that you might encounter and some workarounds.
First, the tutorial uses the older ASM model for deploying virtual machines. Microsoft recommends that new projects use ARM for deployment. One big reason for switching is that ARM deployments will provision virtual machines in parallel, whereas ASM will deploy them serially. For larger clusters, this can make a big difference in startup time. This simple ARM template, which launches a standalone MPI cluster with the recommended vanilla SLES 12 HPC VHD, can be used as a starting point.
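The parallel provisioning mentioned above comes from ARM's copy loop on the VM resource. A minimal sketch of what that fragment might look like is below; the resource name, the `nodeCount` parameter, and the exact image reference values are assumptions to verify against your own template (A9 was one of the RDMA-capable sizes at the time):

```json
{
  "type": "Microsoft.Compute/virtualMachines",
  "apiVersion": "2015-06-15",
  "name": "[concat('compute', copyIndex())]",
  "location": "[resourceGroup().location]",
  "copy": {
    "name": "nodeLoop",
    "count": "[parameters('nodeCount')]"
  },
  "properties": {
    "hardwareProfile": { "vmSize": "Standard_A9" },
    "storageProfile": {
      "imageReference": {
        "publisher": "SUSE",
        "offer": "SLES-HPC",
        "sku": "12",
        "version": "latest"
      }
    }
  }
}
```

By default, ARM deploys all instances of a copy loop in parallel, which is what gives the startup-time win over serial ASM deployments.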
After the cluster launches, you will likely want to install some common packages like, say, git.
However:
# zypper install git
Loading repository data...
Reading installed packages...
'git' not found in package names. Trying capabilities.
No provider of 'git' found.
Resolving package dependencies...

Nothing to do.
The reason for this is that the vanilla SLES VHD is missing a number of repos out of the box. You can re-add them by running the following:
# cd /etc/zypp/repos.d
# mv sldp-msft.repo sldp-msft.repo.bak
# rm -f *.repo
# systemctl restart guestregister.service
# mv sldp-msft.repo.bak sldp-msft.repo
# zypper addrepo sldp-msft.repo
# zypper refresh

Now, you should have access to a much wider range of packages to install. As described in the tutorial guide, after you've installed any custom packages and set up Intel MPI, you can capture your custom VHD and use it as the starting point for your MPI clusters instead.
Once you’ve launched a cluster with the custom VHD, you may need to install a VM extension that will update the RDMA drivers. The tutorial states that you should not update the RDMA driver in the US West, West Europe, and Japan East regions. However, this notice appears to be out of date: when we ran the Intel MPI ping-pong test in those regions, we hit the same DAPL errors that are described here. After updating the drivers, the ping-pong test ran without error.
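The DAPL path that the ping-pong test exercises is selected through Intel MPI environment variables. A sketch of the settings commonly used for Azure Linux RDMA on the SLES HPC images is below; treat the provider name as an assumption to check against the entries in your /etc/dat.conf:

```shell
# Intel MPI settings for RDMA over Azure's InfiniBand-backed fabric.
# Assumption: ofa-v2-ib0 is the DAPL provider name registered in
# /etc/dat.conf on the SLES 12 HPC image -- verify on your VMs.
export I_MPI_FABRICS=shm:dapl
export I_MPI_DAPL_PROVIDER=ofa-v2-ib0
export I_MPI_DYNAMIC_CONNECTION=0
```

With these exported, a two-node ping-pong run looks something like `mpirun -hosts <node1>,<node2> -ppn 1 -n 2 IMB-MPI1 pingpong` (hostnames are placeholders for your cluster's nodes).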
As far as installing the OSTC Extension goes, there is one small wrinkle to be aware of: if you SSH into the VM immediately after installing the extension, you will notice that your connection is dropped shortly after logging in.
azureadmin@n1:~> Connection to 13.93.144.56 closed by remote host.
Connection to 13.93.144.56 closed.

The reason for this is that the VM is rebooted about 2-3 minutes after the extension deployment completes. It would be nicer if the VM were ready for use when the extension installation finishes, but unfortunately, that doesn't seem to be the case here. This is something you'll need to take into account if you are trying to automate the cluster deployment.
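One way to handle the reboot in a deployment script is to poll the SSH port until the VM is reachable again. A rough sketch, assuming a Linux control host with bash and coreutils `timeout` (the function name and arguments are illustrative):

```shell
#!/bin/bash
# Poll a TCP port (e.g. sshd on a rebooted VM) until it accepts
# connections or the timeout expires.
# Usage: wait_for_port <host> <port> <timeout_seconds>
wait_for_port() {
  local host=$1 port=$2 timeout=$3
  local start elapsed
  start=$(date +%s)
  # /dev/tcp is a bash pseudo-device; cap each attempt at 2 seconds.
  until timeout 2 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; do
    elapsed=$(( $(date +%s) - start ))
    if [ "$elapsed" -ge "$timeout" ]; then
      return 1  # gave up waiting
    fi
    sleep 2
  done
  return 0
}
```

Since the reboot only starts 2-3 minutes after the extension reports success, you'd want to sleep past that window first; otherwise the still-running VM answers the probe and you reconnect just in time to be dropped again.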
Hopefully, once Azure Linux RDMA support is added to the Azure Batch service you won’t have to deal with any of the above. Of course, launching the cluster is just the starting point. You still need to install and tune your simulation software, set up a connection to your license server, and securely transfer your input and output files to and from the cluster. Rescale’s support team is ready to work with you to accomplish this on Azure using our web, API, or CLI tools.
