Platform Notes 6.10

This page describes the specific considerations for using an OpenNebula cloud on each of the supported platforms.

This is the list of the individual platform components that have been through the complete OpenNebula Quality Assurance and Certification Process.

Certified Components Version

Front-End Components

Component | Version | More information
Red Hat Enterprise Linux | 8, 9 | Front-End Installation
AlmaLinux | 8, 9 | Front-End Installation
Ubuntu Server | 22.04 (LTS), 24.04 (LTS) | Front-End Installation
Debian | 11, 12 | Front-End Installation. Not certified to manage VMware infrastructures
MariaDB or MySQL | Version included in the Linux distribution | MySQL Setup
SQLite | Version included in the Linux distribution | Default DB, no configuration needed
Ruby Gems | Versions installed by opennebula-rubygems | Detailed information in /usr/share/one/Gemfile

vCenter Nodes

Important

The legacy vCenter driver is included in the distribution, but no longer receives updates or bug fixes.

Component | Version | More information
vCenter | 7.0.x managing ESX 7.0.x & 8.0.x managing ESX 8.0.x | vCenter Node Installation
NSX-T | 2.4.1+ | VMware compatibility. NSX Documentation
NSX-V | 6.4.5+ | VMware compatibility. NSX Documentation

Note

Debian front-ends are not certified to manage VMware infrastructures with OpenNebula.

KVM Nodes

Component | Version | More information
Red Hat Enterprise Linux | 8, 9 | KVM Driver
AlmaLinux | 8, 9 | KVM Driver
Ubuntu Server | 22.04 (LTS), 24.04 (LTS) | KVM Driver
Debian | 11, 12 | KVM Driver
KVM/Libvirt | Support for version included in the Linux distribution. For RHEL the packages from qemu-ev are used. | KVM Node Installation

LXC Nodes

Component | Version | More information
Ubuntu Server | 22.04 (LTS), 24.04 (LTS) | LXC Driver
Debian | 11, 12 | LXC Driver
AlmaLinux | 8, 9 | LXC Driver
LXC | Support for version included in the Linux distribution | LXC Node Installation

Linux and Windows Contextualization Packages

Refer to: one-apps release

More information: one-apps wiki

Open Cloud Networking Infrastructure

Component | Version | More information
8021q kernel module | Version included in the Linux distribution | 802.1Q VLAN
Open vSwitch | Version included in the Linux distribution | Open vSwitch
iproute2 | Version included in the Linux distribution | VXLAN

Open Cloud Storage Infrastructure

Component | Version | More information
iSCSI | Version included in the Linux distribution | LVM Drivers
LVM2 | Version included in the Linux distribution | LVM Drivers
Ceph | Quincy v17.2.x, Reef v18.2.x | The Ceph Datastore

Authentication

Component | Version | More information
net-ldap ruby library | 0.12.1 or 0.16.1 | LDAP Authentication
openssl | Version included in the Linux distribution | x509 Authentication

Sunstone

Browser | Version
Chrome | 61.0 - 94.0
Firefox | 59.0 - 92.0

Note

For Windows desktops using Chrome or Firefox, you should disable the touch-events option in your browser:

  • Chrome: chrome://flags -> #touch-events: disabled
  • Firefox: about:config -> dom.w3c_touch_events: disabled

Compatibility of Workloads on Certified Edge Clusters

Edge Clusters can be virtual or metal, depending on the instance type used to build the cluster. Note that not all providers offer both instance types.

Important

Providers based on virtual instances are disabled by default.

Edge/Cloud Provider | Edge Cluster | Hypervisor
Equinix | metal | KVM and LXC
AWS | metal | KVM and LXC
On-prem | metal | KVM and LXC

The Edge Cluster type determines the hypervisor and the workloads that can be run in the cluster. The following table summarizes the Edge Cluster type you need to run specific workloads:

Use Case | Edge Cluster | Hypervisor
I want to run virtual servers… | metal | KVM, LXC
I want to run a Kubernetes cluster… | metal | KVM

Certified Infrastructure Scale

A single instance of OpenNebula (i.e., a single oned process) has been stress-tested to cope with 500 hypervisors without performance degradation. This is the maximum recommended configuration for a single instance; for any larger number of hypervisors, and depending on the underlying storage and networking configuration, switching to a federated scenario is recommended.

However, several OpenNebula users manage significantly higher numbers of hypervisors (on the order of two thousand) with a single instance. As mentioned, this largely depends on the storage, networking, and monitoring configuration.

Front-End Platform Notes

The following applies to all Front-Ends:

  • XML-RPC tuning parameters (MAX_CONN, MAX_CONN_BACKLOG, KEEPALIVE_TIMEOUT, KEEPALIVE_MAX_CONN and TIMEOUT) are only available with packages distributed by us, as they are compiled with a newer xmlrpc-c library (see the configuration sketch after this list).

  • Only Ruby versions >= 2.0 are supported.
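
As a reference, this is a minimal sketch of where those tuning parameters are set, assuming the stock /etc/one/oned.conf layout; the values mirror the commented defaults shipped with the package and are illustrative, not tuning advice:

# XML-RPC server section of /etc/one/oned.conf (illustrative values)
MAX_CONN           = 15
MAX_CONN_BACKLOG   = 15
KEEPALIVE_TIMEOUT  = 15
KEEPALIVE_MAX_CONN = 30
TIMEOUT            = 15

Restart the opennebula service (e.g. systemctl restart opennebula) after editing oned.conf for the new values to take effect.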

Nodes Platform Notes

The following items apply to all distributions:

  • Since OpenNebula 4.14 there is a monitoring probe that gathers information about PCI devices. By default it retrieves all the PCI devices in a Host. To limit which PCI devices are reported and shown in onehost show, refer to PCI Passthrough.

  • When using qcow2 storage drivers, you can make sure that the data is written to disk when taking snapshots by setting the cache parameter to writethrough. This makes writes slower than other cache modes, but safer. To do this, edit the file /etc/one/vmm_exec/vmm_exec_kvm.conf and change the DISK line:

DISK = [ driver = "qcow2", cache = "writethrough" ]
  • Most Linux distributions using a 5.x kernel (e.g. Debian 11) mount cgroups v2 natively via systemd. If you have hosts running a previous version of the distribution that mount cgroups via fstab or in v1 compatibility mode (e.g. Debian 10), and their libvirt version is < 5.5, you can experience errors like the following when migrating VMs from the older hosts to the new ones:

WWW MMM DD hh:mm:ss yyyy: MIGRATE: error: Unable to write to '/sys/fs/cgroup/machine.slice/machine-qemu/..../cpu.weight': Numerical result out of range Could not migrate VM_UID to HOST ExitCode: 1

This happens in every single VM migration from a host with the previous OS version to a host with the new one.

To solve this, there are two options: delete the VM and recreate it scheduled on a host with the new OS version, or mount cgroups in v1 compatibility mode on the nodes with the new OS version. To do the latter:

  1. Edit the bootloader default options (normally /etc/default/grub)

  2. Modify the default kernel command line for the nodes (usually GRUB_CMDLINE_LINUX="...") and add the option systemd.unified_cgroup_hierarchy=0

  3. Recreate the grub configuration file (in most cases by executing grub-mkconfig -o /boot/grub/grub.cfg)

  4. Reboot the host. The change will persist across kernel updates (a consolidated command sketch follows these steps)
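
As a consolidated reference, the steps above roughly map to the commands below on a Debian-style node; the grub paths are assumptions and may differ on your distribution, so treat this as a sketch rather than a literal recipe:

# 1-2. In /etc/default/grub (path may vary), extend the kernel command line,
#      keeping any options already present:
#        GRUB_CMDLINE_LINUX="... systemd.unified_cgroup_hierarchy=0"

# 3. Regenerate the grub configuration
grub-mkconfig -o /boot/grub/grub.cfg

# 4. Reboot the host for the change to take effect
reboot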

RedHat 8 Platform Notes

Disable PolicyKit for Libvirt

It is recommended that you disable PolicyKit for Libvirt:

cat /etc/libvirt/libvirtd.conf
...
auth_unix_ro = "none"
auth_unix_rw = "none"
unix_sock_group = "oneadmin"
unix_sock_ro_perms = "0770"
unix_sock_rw_perms = "0770"
...
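
After applying these settings, libvirtd has to be restarted for them to take effect; on a systemd-based host this is typically:

systemctl restart libvirtd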

AlmaLinux 9 Platform Notes

Disable Libvirtd’s SystemD Socket Activation

OpenNebula currently works only with the legacy libvirtd.service. You should disable libvirt's modular daemons and systemd socket activation for libvirtd.service. You can take a look at this bug report for a detailed workaround procedure.
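
As an illustration only (the bug report above contains the authoritative procedure, and the unit names below are assumptions based on a stock AlmaLinux 9 libvirt installation), switching back to the monolithic daemon typically looks like this:

# Disable the modular per-driver daemons and their activation sockets
for drv in qemu network nodedev nwfilter secret storage interface; do
    systemctl disable --now "virt${drv}d.service" \
        "virt${drv}d.socket" "virt${drv}d-ro.socket" "virt${drv}d-admin.socket"
done

# Mask the libvirtd sockets so systemd does not re-activate them on demand
systemctl mask libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket \
    libvirtd-tcp.socket libvirtd-tls.socket

# Run the legacy monolithic daemon directly
systemctl enable --now libvirtd.service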

vCenter 7.0 Platform Notes

Important

The legacy vCenter driver is currently included in the distribution, but no longer receives updates or bug fixes.

Problem with Boot Order

Currently, in vCenter 7.0 changing the boot order of Virtual Machines is only supported at deployment time.

Debian 10 and Ubuntu 18 Upgrade

When upgrading your nodes from Debian 10 or Ubuntu 18 you may need to update the opennebula sudoers file, due to the /usr merge feature implemented in Debian 11/Ubuntu 20. You can find more information and a recommended workaround in this issue.