KVM Node Installation

This page shows you how to install OpenNebula from the binary packages.

Using the packages provided on our site is the recommended method, to ensure the installation of the latest version and to avoid possible package divergences in different distributions. There are two alternatives here: you can add our package repositories to your system, or visit the software menu to download the latest package for your Linux distribution.

Step 1. Add OpenNebula Repositories


To add the OpenNebula repository, execute the following as root:

CentOS/RHEL 7

cat << "EOT" > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=https://downloads.opennebula.org/repo/5.10/CentOS/7/$basearch
enabled=1
gpgkey=https://downloads.opennebula.org/repo/repo.key
gpgcheck=1
repo_gpgcheck=1
EOT

CentOS/RHEL 8

cat << "EOT" > /etc/yum.repos.d/opennebula.repo
[opennebula]
name=opennebula
baseurl=https://downloads.opennebula.org/repo/5.10/CentOS/8/$basearch
enabled=1
gpgkey=https://downloads.opennebula.org/repo/repo.key
gpgcheck=1
repo_gpgcheck=1
EOT

To add the OpenNebula repository on Debian/Ubuntu, execute as root:

wget -q -O- https://downloads.opennebula.org/repo/repo.key | apt-key add -

Debian 9

echo "deb https://downloads.opennebula.org/repo/5.10/Debian/9 stable opennebula" > /etc/apt/sources.list.d/opennebula.list

Debian 10

echo "deb https://downloads.opennebula.org/repo/5.10/Debian/10 stable opennebula" > /etc/apt/sources.list.d/opennebula.list

Ubuntu 16.04

echo "deb https://downloads.opennebula.org/repo/5.10/Ubuntu/16.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list

Ubuntu 18.04

echo "deb https://downloads.opennebula.org/repo/5.10/Ubuntu/18.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list

Ubuntu 19.04

echo "deb https://downloads.opennebula.org/repo/5.10/Ubuntu/19.04 stable opennebula" > /etc/apt/sources.list.d/opennebula.list

Ubuntu 19.10

echo "deb https://downloads.opennebula.org/repo/5.10/Ubuntu/19.10 stable opennebula" > /etc/apt/sources.list.d/opennebula.list

Step 2. Installing the Software

Installing on CentOS/RHEL

Execute the following commands to install the node package and restart libvirt to use the OpenNebula-provided configuration file:

sudo yum install opennebula-node-kvm
sudo systemctl restart libvirtd

Newer QEMU/KVM (only CentOS/RHEL 7)

You may benefit from using the more recent and feature-rich enterprise QEMU/KVM release. The differences between the base (qemu-kvm) and enterprise (qemu-kvm-rhev on RHEL or qemu-kvm-ev on CentOS) packages are described on the Red Hat Customer Portal.

On CentOS 7, the enterprise packages are part of a separate repository. To replace the base packages, follow these steps:

sudo yum install centos-release-qemu-ev
sudo yum install qemu-kvm-ev

On RHEL 7, you need a paid subscription to the Red Hat Virtualization (RHV) or Red Hat OpenStack (RHOS) products; a license only for Red Hat Enterprise Linux isn’t enough! Check the RHV Installation Guide for your licensed version. Usually, the following commands should enable and install the enterprise packages:

sudo subscription-manager repos --enable rhel-7-server-rhv-4-mgmt-agent-rpms
sudo yum install qemu-kvm-rhev

For further configuration, check the specific guide: KVM.

Installing on Debian/Ubuntu

Execute the following commands to install the node package and restart libvirt to use the OpenNebula-provided configuration file:

sudo apt-get update
sudo apt-get install opennebula-node
sudo service libvirtd restart # debian
sudo service libvirt-bin restart # ubuntu

For further configuration check the specific guide: KVM.

Step 3. SELinux on CentOS/RHEL


If you are performing an upgrade, skip this and the next steps and go back to the upgrade document.

SELinux can block some operations initiated by the OpenNebula Front-end, resulting in all the node operations failing completely (e.g., when the oneadmin user’s SSH credentials are not trusted) or only individual failures for particular operations with virtual machines. If the administrator isn’t experienced in SELinux configuration, it’s recommended to disable this functionality to avoid unexpected failures. You can enable SELinux anytime later when you have the installation working.

Enable SELinux

Change the following line in /etc/selinux/config to enable SELinux in enforcing state:

SELINUX=enforcing
When changing from the disabled state, it’s necessary to trigger filesystem relabel on the next boot by creating a file /.autorelabel, e.g.:

touch /.autorelabel

After the changes, you should reboot the machine.


Depending on your OpenNebula deployment type, the following may be required on your SELinux-enabled KVM nodes:

  • package util-linux newer than 2.23.2-51 installed
  • SELinux boolean virt_use_nfs enabled (with datastores on NFS):

    sudo setsebool -P virt_use_nfs on

Follow the SELinux User’s and Administrator’s Guide for more information on how to configure and troubleshoot SELinux.

Step 4. Configure Passwordless SSH

The OpenNebula Front-end connects to the hypervisor Hosts using SSH. You must distribute the public key of the oneadmin user from all machines to the file /var/lib/one/.ssh/authorized_keys on all machines. There are many ways to distribute the SSH keys; ultimately, the administrator should choose a method (the recommendation is to use a configuration management system). In this guide, we will manually scp the SSH keys.

When the package was installed on the Front-end, an SSH key was generated and authorized_keys was populated. We will sync id_rsa, id_rsa.pub and authorized_keys from the Front-end to the nodes. Additionally, we need to create a known_hosts file and sync it to the nodes as well. To create the known_hosts file, execute the following command as user oneadmin on the Front-end, with all the node names and the Front-end name as parameters:

ssh-keyscan <frontend> <node1> <node2> <node3> ... >> /var/lib/one/.ssh/known_hosts

Now we need to copy the directory /var/lib/one/.ssh to all the nodes. The easiest way is to set a temporary password for oneadmin in all the hosts and copy the directory from the Front-end:

scp -rp /var/lib/one/.ssh <node1>:/var/lib/one/
scp -rp /var/lib/one/.ssh <node2>:/var/lib/one/
scp -rp /var/lib/one/.ssh <node3>:/var/lib/one/
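With many nodes, these copies can be scripted. A minimal sketch in shell (the sync_ssh_dir helper is our own naming, not part of OpenNebula; run it as oneadmin on the Front-end):

```shell
#!/bin/sh
# Copy the Front-end's oneadmin SSH directory to each node given as argument.
# Node names are placeholders; substitute your real hostnames.
sync_ssh_dir() {
    for node in "$@"; do
        scp -rp /var/lib/one/.ssh "${node}:/var/lib/one/"
    done
}
```

For example, `sync_ssh_dir <node1> <node2> <node3>` performs the same copies as the three commands above.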

You should verify that connecting from the Front-end, as user oneadmin, to the nodes and the Front-end itself, and from the nodes to the Front-end, does not ask for a password:

ssh <frontend>
ssh <node1>
ssh <frontend>
ssh <node2>
ssh <frontend>
ssh <node3>
ssh <frontend>
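These checks can also be run non-interactively: with BatchMode=yes, ssh fails instead of prompting for a password. A small sketch (the helper name is ours; hostnames are placeholders):

```shell
#!/bin/sh
# Report which hosts accept passwordless SSH as the current user.
# BatchMode=yes disables password prompts, so a misconfigured host fails fast.
check_passwordless_ssh() {
    for host in "$@"; do
        if ssh -o BatchMode=yes "$host" true 2>/dev/null; then
            echo "${host}: OK"
        else
            echo "${host}: FAILED"
        fi
    done
}
```

Run it as oneadmin, e.g. `check_passwordless_ssh <frontend> <node1> <node2> <node3>`.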

If an extra layer of security is needed, it’s possible to keep the private key just on the Front-end node instead of copying it to all the hypervisors. That way, the oneadmin user on the hypervisors won’t be able to access other hypervisors. This is achieved by modifying /var/lib/one/.ssh/config on the Front-end and adding the ForwardAgent option for the hypervisor hosts to forward the key:

cat /var/lib/one/.ssh/config
 Host host1
    User oneadmin
    ForwardAgent yes
 Host host2
    User oneadmin
    ForwardAgent yes


Remember that it is necessary to have ssh-agent running with the corresponding private key imported before OpenNebula is started. You can start ssh-agent by running eval "$(ssh-agent -s)" and add the private key by running ssh-add /var/lib/one/.ssh/id_rsa.

Step 5. Networking Configuration


A network connection is needed by the OpenNebula Front-end daemons to access the Hosts, to manage and monitor them, and to transfer the Image files. It is highly recommended to use a dedicated network for this purpose.

There are various network models. Please check the Networking chapter to find out the networking technologies supported by OpenNebula.

You may want to use the simplest network model, which corresponds to the bridged driver. For this driver, you will need to set up a Linux bridge and include a physical device in the bridge. Later on, when defining the network in OpenNebula, you will specify the name of this bridge, and OpenNebula will know to connect the VM to it, giving the VM connectivity with the physical network device attached to the bridge. For example, a typical host with two physical networks, one for public IP addresses (attached to NIC eth0, for example) and the other for private virtual LANs (NIC eth1, for example), should have two bridges:

ip link show type bridge
4: br0: ...
5: br1: ...
ip link show master br0
2: eth0: ...
ip link show master br1
3: eth1: ...


Remember that this is only required on the Hosts, not on the Front-end. Also remember that the exact names of the resources (br0, br1, etc.) are not important; however, it is important that the bridges and NICs have the same names on all Hosts.
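As an illustration, a bridge like br0 can be created manually with the ip tool. A sketch assuming eth0 is the NIC to attach (run as root on the Host; the names are examples, and a bridge created this way is not persistent across reboots, so use your distribution's network configuration for a permanent setup):

```shell
#!/bin/sh
# Create a Linux bridge and attach a physical NIC to it (example names).
make_bridge() {
    bridge=$1
    nic=$2
    ip link add name "$bridge" type bridge   # create the bridge device
    ip link set dev "$bridge" up             # bring it up
    ip link set dev "$nic" master "$bridge"  # enslave the physical NIC
}
```

For example, `make_bridge br0 eth0` followed by `make_bridge br1 eth1` would reproduce the layout shown above.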

Step 6. Storage Configuration

You can skip this step entirely if you just want to try out OpenNebula, as it will come configured by default in such a way that it uses the local storage of the Front-end to store Images, and the local storage of the hypervisors as storage for the running VMs.

However, if you want to set-up another storage configuration at this stage, like Ceph, NFS, LVM, etc, you should read the Open Cloud Storage chapter.

Step 7. Adding a Host to OpenNebula

In this step we will register the node we have installed in the OpenNebula Front-end, so OpenNebula can launch VMs on it. This step can be done via the CLI or via Sunstone, the graphical user interface. Follow just one method, not both, as they accomplish the same thing.

To learn more about the host subsystem, read this guide.

Adding a Host through Sunstone

Open Sunstone as documented here. In the left side menu go to Infrastructure -> Hosts. Click on the + button.


Then fill in the FQDN of the node in the Hostname field.


Finally, return to the Hosts list and check that the Host has switched to ON status. It should take between 20s and 1m. Try clicking the refresh button to check the status more frequently.


If the host turns to err state instead of on, check /var/log/one/oned.log. Chances are it’s a problem with SSH!

Adding a Host through the CLI

To add a node to the cloud, run this command as oneadmin in the Front-end:

onehost create <node01> -i kvm -v kvm
onehost list
  ID NAME            CLUSTER   RVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   1 localhost       default     0                  -                  - init

# After some time (20s - 1m)
onehost list
  ID NAME            CLUSTER   RVM      ALLOCATED_CPU      ALLOCATED_MEM STAT
   0 node01          default     0       0 / 400 (0%)     0K / 7.7G (0%) on

If the host turns to err state instead of on, check /var/log/one/oned.log. Chances are it’s a problem with SSH!

Step 8. Import Currently Running VMs (Optional)

You can skip this step, as importing VMs can be done at any moment. However, if you wish to see your previously-deployed VMs in OpenNebula you can use the import VM functionality.

Step 9. Next steps

You can now jump to the optional Verify your Installation section in order to launch a test VM.

Otherwise, you are ready to start using your cloud or you could configure more components: