Platform Notes

This page describes the specific considerations when using an OpenNebula cloud with each of the supported platforms.

Below is the list of the individual platform components that have passed the complete OpenNebula Quality Assurance and Certification Process.

Certified Component Versions

Front-End Components

Component | Version | More information
Red Hat Enterprise Linux | 8, 9 | Front-End Installation
AlmaLinux | 8, 9 | Front-End Installation
Ubuntu Server | 22.04 (LTS), 24.04 (LTS) | Front-End Installation
Debian | 11, 12 | Front-End Installation. Not certified to manage VMware infrastructures
MariaDB or MySQL | Version included in the Linux distribution | MySQL Setup
SQLite | Version included in the Linux distribution | Default DB, no configuration needed

KVM Nodes

Component | Version | More information
Red Hat Enterprise Linux | 8, 9 | KVM Driver
AlmaLinux | 8, 9 | KVM Driver
Ubuntu Server | 22.04 (LTS), 24.04 (LTS) | KVM Driver
Debian | 11, 12 | KVM Driver
KVM/Libvirt | Version included in the Linux distribution. For RHEL the packages from qemu-ev are used. | KVM Node Installation

LXC Nodes

Component | Version | More information
Ubuntu Server | 22.04 (LTS), 24.04 (LTS) | LXC Driver
Debian | 11, 12 | LXC Driver
AlmaLinux | 8, 9 | LXC Driver
LXC | Version included in the Linux distribution | LXC Node Installation

Linux and Windows Contextualization Packages

Refer to: one-apps release

More information: one-apps wiki

Open Cloud Networking Infrastructure

Component | Version | More information
8021q kernel module | Version included in the Linux distribution | 802.1Q VLAN
Open vSwitch | Version included in the Linux distribution | Open vSwitch
iproute2 | Version included in the Linux distribution | VXLAN

Open Cloud Storage Infrastructure

Component | Version | More information
iSCSI | Version included in the Linux distribution | LVM Drivers
LVM2 | Version included in the Linux distribution | LVM Drivers
Ceph | Reef v18.2.x, Squid v19.2.x | The Ceph Datastore

Authentication

Component | Version | More information
net-ldap ruby library | 0.19.0 | LDAP Authentication
openssl | Version included in the Linux distribution | x509 Authentication

Monitoring and Backups

Component | Version | More information
Prometheus monitoring toolkit | 2.53.1 | Monitoring and Alerting Installation
Restic backup backend | 0.17.3 | Backup Datastore: Restic
Veeam | 12.3.1 | Veeam Backup (EE)

Sunstone

Browser | Version
Chrome | 61.0 - 94.0
Firefox | 59.0 - 92.0

To disable touch events:

  • Chrome: chrome://flags -> #touch-events: disabled.
  • Firefox: about:config -> dom.w3c_touch_events: disabled.

Certified Infrastructure Scale

A single instance of OpenNebula (i.e., a single oned process) has been stress-tested to cope with 500 hypervisors without performance degradation. This is the maximum recommended configuration for a single instance; depending on the underlying storage and networking configuration, switching to a federated scenario is recommended for any larger number of hypervisors.

However, several OpenNebula users manage significantly higher numbers of hypervisors (on the order of two thousand) with a single instance. As mentioned, this largely depends on the storage, networking, and monitoring configuration.

Front-End Platform Notes

The following applies to all Front-Ends:

  • XML-RPC tuning parameters (MAX_CONN, MAX_CONN_BACKLOG, KEEPALIVE_TIMEOUT, KEEPALIVE_MAX_CONN and TIMEOUT) are only available with packages distributed by us, as they are compiled with a newer xmlrpc-c library.
  • Only Ruby versions >= 2.0 are supported.
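These XML-RPC parameters are set in /etc/one/oned.conf. A minimal sketch with illustrative values (the numbers are assumptions, not certified defaults; check the oned.conf shipped with your packages):

```
# /etc/one/oned.conf -- XML-RPC server tuning (illustrative values)
MAX_CONN           = 240  # maximum simultaneous TCP connections
MAX_CONN_BACKLOG   = 240  # maximum pending connections in the queue
KEEPALIVE_TIMEOUT  = 15   # seconds a keep-alive connection may stay idle
KEEPALIVE_MAX_CONN = 30   # requests served per keep-alive connection
TIMEOUT            = 15   # seconds to complete a request send/receive
```

Restart the OpenNebula service after editing for the changes to take effect.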

Nodes Platform Notes

The following items apply to all distributions:

  • When using qcow2 storage drivers, you can ensure that data is written to disk when taking snapshots by setting the cache parameter to writethrough. This makes writes slower than with other cache modes, but safer. To do this, edit /etc/one/vmm_exec/vmm_exec_kvm.conf and change the DISK line:
DISK = [ driver = "qcow2", cache = "writethrough" ]
  • Most Linux distributions using a 5.x kernel (e.g., Debian 11) mount cgroups v2 natively via systemd. If you have hosts running a previous version of the distribution that mount cgroups via fstab or in v1 compatibility mode (e.g., Debian 10) and their libvirt version is <5.5, you may experience errors like the following when migrating VMs from the older hosts to the new ones:
WWW MMM DD hh:mm:ss yyyy: MIGRATE: error: Unable to write to '/sys/fs/cgroup/machine.slice/machine-qemu/..../cpu.weight': Numerical result out of range Could not migrate VM_UID to HOST ExitCode: 1

This happens in every single VM migration from a host with the previous OS version to a host with the new one.

To solve this, there are two options: delete the VM and recreate it scheduled on a host with the new OS version, or mount cgroups in v1 compatibility mode on the nodes with the new OS version. To do the latter:

  1. Edit the bootloader defaults file (normally /etc/default/grub)
  2. Modify the default commandline for the nodes (usually GRUB_CMDLINE_LINUX="...") and add the option systemd.unified_cgroup_hierarchy=0
  3. Recreate the grub configuration file (in most cases executing a grub-mkconfig -o /boot/grub/grub.cfg)
  4. Reboot the host. The change persists across kernel updates.
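The steps above can be sketched as follows (a sketch assuming a Debian-style layout; run as root and verify the paths on your distribution):

```shell
# 1-2. Append the cgroup v1 compatibility flag to the kernel command line
sed -i 's/^GRUB_CMDLINE_LINUX="\([^"]*\)"/GRUB_CMDLINE_LINUX="\1 systemd.unified_cgroup_hierarchy=0"/' \
    /etc/default/grub

# 3. Regenerate the GRUB configuration (on some distributions: grub2-mkconfig)
grub-mkconfig -o /boot/grub/grub.cfg

# 4. Reboot for the change to take effect
reboot
```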

Red Hat Enterprise Linux 8 Platform Notes

Disable PolicyKit for Libvirt

It is recommended that you disable PolicyKit for Libvirt:

$ cat /etc/libvirt/libvirtd.conf
...
auth_unix_ro = "none"
auth_unix_rw = "none"
unix_sock_group = "oneadmin"
unix_sock_ro_perms = "0770"
unix_sock_rw_perms = "0770"
...
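After editing /etc/libvirt/libvirtd.conf, restart the daemon so the new socket settings take effect (service name may vary by distribution):

```shell
systemctl restart libvirtd.service
```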

AlmaLinux 9 Platform Notes

Disable Libvirtd’s SystemD Socket Activation

OpenNebula currently works only with the legacy libvirtd.service. You should disable libvirt's modular daemons and systemd socket activation for libvirtd.service. See this bug report for a detailed workaround procedure.
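One possible workaround can be sketched as follows, based on libvirt's documented procedure for switching between the modular and monolithic daemons (verify the unit names against your installed packages before running, as they may differ between libvirt versions):

```shell
# Stop and disable the modular per-driver daemons and their sockets
for drv in qemu interface network nodedev nwfilter secret storage; do
    systemctl disable --now "virt${drv}d.service" \
        "virt${drv}d.socket" "virt${drv}d-ro.socket" "virt${drv}d-admin.socket"
done

# Mask libvirtd's activation sockets and run the legacy daemon directly
systemctl mask libvirtd.socket libvirtd-ro.socket \
    libvirtd-admin.socket libvirtd-tls.socket libvirtd-tcp.socket
systemctl enable --now libvirtd.service
```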