Access and Setup
The LRZ Compute Cloud allows you to set up one or more virtual machines (VMs) from hosted images or from your own. You can then log in to your VM(s) via ssh and use them with sudo rights.
You can configure and set up your VMs either via the web interface or via the OpenStack command-line client.
- Login: https://cc.lrz.de/ (Domain: ADS)
- Documentation: https://doku.lrz.de/display/PUBLIC/Compute+Cloud
- FAQ: https://doku.lrz.de/display/PUBLIC/FAQ
- OpenStack client docs: https://docs.openstack.org/python-openstackclient/train/
Only two things are needed to access the LRZ Cloud:
- You need a valid LRZ user ID
- Your project's master user has to enable LRZ Cloud usage for your account
The master user to contact is most likely Günter Duckeck, so just ask him if you are interested in using these computing resources.
Once access has been granted, you should be able to log in via the portal https://cc.lrz.de/.
Although you can in principle use the web interface for all use cases, it is often more convenient to use the Python-based command-line client. For this, install the OpenStack client via pip:
pip install python-openstackclient --user
Note: executables of user-installed Python packages are placed under
$HOME/.local/bin. If this directory is not yet contained in your
PATH environment variable, you have to add it.
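For example, with a bash-style shell you can append the directory to your PATH like this (add the same lines to your ~/.bashrc to make the change permanent):

```shell
# Add ~/.local/bin to PATH only if it is not already there
case ":$PATH:" in
  *":$HOME/.local/bin:"*) ;;                    # already present, do nothing
  *) export PATH="$PATH:$HOME/.local/bin" ;;    # append it
esac
```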
The OpenStack client is configured by a set of environment variables. The easiest way to set these up is to download a configuration file from the web interface. Simply log into the portal (Domain: ADS), click on the drop-down menu in the upper right corner and select "OpenStack RC File v3" to download a configuration file. This can then be sourced
to set up the needed variables, and after you enter the password for your LRZ ID, the OpenStack client should be ready to use.
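For illustration, the downloaded RC file sets variables roughly along these lines. The OS_* names are the standard OpenStack client variables; the values below are placeholders, your RC file contains the real ones for your project:

```shell
# Placeholder values -- source the RC file you downloaded instead of copying these
export OS_AUTH_URL=https://cc.lrz.de:5000/v3   # endpoint URL (assumption)
export OS_PROJECT_NAME="myproject"
export OS_USERNAME="myuser"
# The RC file prompts for your LRZ password and exports it as OS_PASSWORD
read -sr OS_PASSWORD_INPUT
export OS_PASSWORD=$OS_PASSWORD_INPUT
```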
To allow an ssh connection to your VMs, an additional rule in your security group is required. You can add it via the web interface by following the instructions of step 3 in this tutorial. To test your newly configured command-line interface, you may also do it via the API:
openstack security group rule create --dst-port 22 --protocol tcp default
Finally, to access the VM via ssh you have to upload your public key. Assuming your public key is located under
~/.ssh/id_rsa.pub, you can do this by
openstack keypair create --public-key ~/.ssh/id_rsa.pub TestVM
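If you do not have a key pair yet, one can be generated first. A minimal sketch, using the common default path and an RSA key (adjust type and passphrase to your needs):

```shell
# Create ~/.ssh and generate an RSA key pair if none exists yet
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
```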
Setting up a VM via the Command Line
Several tutorials on setting up a VM either via the web interface or the command line can be found here. In the following, a quick example using the command line is given.
First, a volume (i.e. disk space) for the VM has to be created:
openstack volume create --image CentOS-7 --bootable --size 50 myVolume
--size specifies the volume size in GB, and --image the name of the image to use. A list of the available images is given by
openstack image list. In case you are planning to run ATLAS software on your VM, you need an image that has cvmfs installed (see #Running ATLAS Software).
Next, a server (aka VM) that uses this particular volume is created by
openstack server create --volume myVolume --flavor lrz.small --network MWN --key-name TestVM --wait myVM
--flavor defines the number of CPUs and the amount of main memory the VM has access to (see here for a list of flavors), and
--key-name should specify the name of the key pair you uploaded during the setup step (TestVM in the example above).
To connect to the VM a floating IP address is required. Thus create one and link it to the VM you just created:
openstack floating ip create MWN_pool
openstack server add floating ip myVM 10.195.2.217
Of course you have to use the IP address returned by the first command. You can list your available floating IPs via
openstack floating ip list.
Then you can log into your freshly created VM, e.g. via
ssh centos@10.195.2.217
The username depends on the image: for CentOS images it is "centos", while for Ubuntu images it is "ubuntu". An overview can be found here.
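Putting the steps above together, a complete session could look roughly like this. The names, flavor, and pool match the examples above; the `-f value -c` options are the standard OpenStack CLI way to extract a single field, so the floating IP does not have to be copied by hand. This is a sketch and requires a sourced RC file and cloud access:

```shell
# Create a bootable volume and boot a VM on it (--wait blocks until it is active)
openstack volume create --image CentOS-7 --bootable --size 50 myVolume
openstack server create --volume myVolume --flavor lrz.small \
    --network MWN --key-name TestVM --wait myVM

# Allocate a floating IP, capture it, and attach it to the VM
FIP=$(openstack floating ip create MWN_pool -f value -c floating_ip_address)
openstack server add floating ip myVM "$FIP"

# Log in (username depends on the image, "centos" for CentOS images)
ssh centos@"$FIP"
```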
In case you do not need your VM for a while, it is good practice to "shelve" your server (all associated data and resources are kept, but anything still in memory is not retained):
openstack server shelve myVM
You can bring your VM back later within a couple of seconds by the corresponding unshelve command:
openstack server unshelve myVM
Running ATLAS Software
The public images do not have cvmfs installed. You could of course install and set up cvmfs yourself, as you have sudo rights on your VMs. To speed this up, an image based on Ubuntu 18.04 that already contains cvmfs can be found here:
Before you can use it to create a volume, you have to upload it (this may take a minute or so) via
openstack image create --file /project/etp3/miholzbo/LRZ-Cloud-Images/Ubuntu-18.04-ATLAS-v01.img Ubuntu-18.04-ATLAS-v01
Using this image, you can directly run a singularity container as you are used to via
slc6 bash. Within this environment, any ATLAS software should run as usual.
Creating an Image
In case you need additional software that is not contained in one of the available images, you can create your own image. The easiest approach is to start from one of the existing images, set up a VM and install everything you need. Then you can create an image from the volume that is associated with this instance. Before doing that, the volume must not be in use, i.e. delete the VM that is running on it. Once the volume is no longer in use, you can create an image from it by
openstack image create --volume myVolume myImage
After some time, the image should be uploaded and ready to use. Keep in mind that the image size will be at least the size of the volume. Thus it may make sense to create a dedicated, smaller volume for image creation to keep the image size at a reasonable level.
In case you want to distribute your new image, you can save it to a file via
openstack image save --file myImage.img myImage
SSH Connection Issues
It may happen that a VM cannot be reached via ssh anymore. There is a section in the LRZ Cloud FAQ about this. What worked for me is to delete the VM and create a new instance from the volume, i.e. on the web interface: volumes overview page --> "Launch as instance". After assigning a floating IP to the new VM, the previous configuration should be recovered.