Platform Notes

This page describes the specific considerations to keep in mind when using an OpenNebula cloud, according to the different supported platforms.

This is the list of the individual platform components that have been through the complete OpenNebula Quality Assurance and Certification Process.

Certified Component Versions

Front-End Components

| Component | Version | More information |
|-----------|---------|------------------|
| Red Hat Enterprise Linux | 7, 8 | Front-End Installation |
| CentOS | 7, 8 | Front-End Installation |
| Ubuntu Server | 18.04 (LTS), 20.04 (LTS), 20.10, 21.04 | Front-End Installation |
| Debian | 9, 10 | Front-End Installation |
| MariaDB or MySQL | Version included in the Linux distribution | MySQL Setup |
| PostgreSQL | 9.5+, version included in the Linux distribution (except RHEL/CentOS 7) | PostgreSQL Setup |
| SQLite | Version included in the Linux distribution | Default DB, no configuration needed |
| Ruby Gems | Versions installed by packages or the install_gems utility | Front-End Installation |
| Corosync+Pacemaker | Version included in the Linux distribution | Front-End HA Setup |

Containerized Front-End Components

| Component | Version | More information |
|-----------|---------|------------------|
| Podman | 2.0, 2.2 (on RHEL/CentOS 8) | Containerized Deployment |
| Podman Compose | 0.1.7-2.git20201120 | Containerized Deployment |
| Docker | 20.10 | Containerized Deployment |
| Docker Compose | 1.27.4 | Containerized Deployment |

vCenter Nodes

| Component | Version | More information |
|-----------|---------|------------------|
| vCenter | 6.5/6.7/7.0, managing ESX 6.5/6.7/7.0 | vCenter Node Installation |
| NSX-T | 2.4.1+ | VMware compatibility. NSX Documentation. |
| NSX-V | 6.4.5+ | VMware compatibility. NSX Documentation. |

KVM Nodes

| Component | Version | More information |
|-----------|---------|------------------|
| Red Hat Enterprise Linux | 7, 8 | KVM Driver |
| CentOS | 7, 8 | KVM Driver |
| Ubuntu Server | 18.04 (LTS), 20.04 (LTS), 20.10, 21.04 | KVM Driver |
| Debian | 9, 10 | KVM Driver |
| KVM/Libvirt | Support for the version included in the Linux distribution. For CentOS/RHEL the packages from qemu-ev are used. | KVM Node Installation |

LXC Nodes

| Component | Version | More information |
|-----------|---------|------------------|
| Ubuntu Server | 18.04 (LTS), 20.04 (LTS), 20.10, 21.04 | LXC Driver |
| Debian | 9, 10 | LXC Driver |
| CentOS | 8 | LXC Driver |
| LXC | Support for the version included in the Linux distribution | LXC Node Installation |

Firecracker Nodes

| Component | Version | More information |
|-----------|---------|------------------|
| Red Hat Enterprise Linux | 7, 8 | Firecracker Driver |
| CentOS | 7, 8 | Firecracker Driver |
| Ubuntu Server | 18.04 (LTS), 20.04 (LTS), 20.10, 21.04 | Firecracker Driver |
| Debian | 9, 10 | Firecracker Driver |
| KVM/Firecracker | Support for the KVM version included in the Linux distribution. For Firecracker/Jailer, version v0.21.1 is used. | Firecracker Node Installation |

Linux Contextualization Packages

| Component | Version | More information |
|-----------|---------|------------------|
| AlmaLinux | 8 | Linux Contextualization Packages |
| Alpine Linux | 3.10, 3.11, 3.12, 3.13, 3.14 | Linux Contextualization Packages |
| ALT Linux | p9, Sisyphus | Linux Contextualization Packages |
| Amazon Linux | 2 | Linux Contextualization Packages |
| CentOS | 7, 8, 8 Stream | Linux Contextualization Packages |
| Debian | 8, 9, 10 | Linux Contextualization Packages |
| Devuan | 2 | Linux Contextualization Packages |
| Fedora | 32, 33, 34 | Linux Contextualization Packages |
| FreeBSD | 11, 12, 13 | Linux Contextualization Packages |
| openSUSE | 15, Tumbleweed | Linux Contextualization Packages |
| Oracle Linux | 7, 8 | Linux Contextualization Packages |
| Red Hat Enterprise Linux | 7, 8 | Linux Contextualization Packages |
| Rocky Linux | 8 | Linux Contextualization Packages |
| Ubuntu | 14.04, 16.04, 18.04, 20.04, 20.10, 21.04 | Linux Contextualization Packages |

Windows Contextualization Packages

| Component | Version | More information |
|-----------|---------|------------------|
| Windows | 7+ | Windows Contextualization Packages |
| Windows Server | 2008+ | Windows Contextualization Packages |

Open Cloud Networking Infrastructure

| Component | Version | More information |
|-----------|---------|------------------|
| ebtables | Version included in the Linux distribution | Ebtables |
| 8021q kernel module | Version included in the Linux distribution | 802.1Q VLAN |
| Open vSwitch | Version included in the Linux distribution | Open vSwitch |
| iproute2 | Version included in the Linux distribution | VXLAN |

Open Cloud Storage Infrastructure

| Component | Version | More information |
|-----------|---------|------------------|
| iSCSI | Version included in the Linux distribution | LVM Drivers |
| LVM2 | Version included in the Linux distribution | LVM Drivers |
| Ceph | Jewel v10.2.x, Luminous v12.2.x, Mimic v13.2.x, Nautilus v14.2.x | The Ceph Datastore |

Authentication

| Component | Version | More information |
|-----------|---------|------------------|
| net-ldap ruby library | 0.12.1 or 0.16.1 | LDAP Authentication |
| openssl | Version included in the Linux distribution | x509 Authentication |

Application Containerization

| Component | Version |
|-----------|---------|
| Docker | 20.10.5 CE |
| Docker Machine | 0.14.0 |
| Appliance OS | Ubuntu 16.04 |

Sunstone

| Browser | Version |
|---------|---------|
| Chrome | 61.0 - 85.0 |
| Firefox | 59.0 - 80.0 |

Note

For Windows desktops using Chrome or Firefox you should disable the touch-events option for your browser:

  • Chrome: chrome://flags -> #touch-events: disabled.
  • Firefox: about:config -> dom.w3c_touch_events: disabled.

Internet Explorer is not supported with Compatibility Mode enabled, since that mode emulates IE7, which is not supported.

Note

Generally, for all Linux platforms, Ruby gems should be used from the packages shipped with OpenNebula, or installed with the install_gems utility. Avoid using the Ruby gem versions shipped with your platform.
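For example, on a Front-End installed from packages, the required gems can be installed with the bundled utility (a minimal sketch; the exact path may vary between OpenNebula versions):

$ sudo /usr/share/one/install_gems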

Compatibility of Workloads on Certified Edge Clusters

Edge Clusters can be virtual or metal, depending on the instance type used to build the cluster. Note that not all providers offer both instance types.

| Edge/Cloud Provider | Edge Cluster | Hypervisor |
|---------------------|--------------|------------|
| Equinix | metal | KVM, Firecracker and LXC |
| AWS | metal | KVM, Firecracker and LXC |
| AWS | virtual | LXC |
| Google | virtual | LXC |
| DigitalOcean | virtual | LXC |
| Vultr | metal (in progress) | KVM, Firecracker and LXC |
| Vultr | virtual | LXC |
| On-prem | metal | KVM, Firecracker and LXC |

The Edge Cluster type determines the hypervisor and the workloads that can be run in the cluster. The following table summarizes the Edge Cluster type you need in order to run specific workloads:

| Use Case | Edge Cluster | Hypervisor |
|----------|--------------|------------|
| I want to run application containers… | virtual | LXC |
| I want to run application containers… | metal | LXC, Firecracker |
| I want to run virtual servers… | metal | KVM, LXC |
| I want to run a Kubernetes cluster… | metal | KVM (k8s based), Firecracker (k3s based) |

In the above table, application containers are those imported from DockerHub, LinuxContainers, or TurnkeyLinux, as well as images created from Dockerfiles. Virtual servers, on the other hand, use full system disk images.

Certified Infrastructure Scale

A single instance of OpenNebula (i.e., a single oned process) has been stress-tested to cope with 500 hypervisors without performance degradation. This is the maximum recommended configuration for a single instance; for larger numbers of hypervisors, and depending on the underlying storage and networking configuration, switching to a federated scenario is recommended.

However, several OpenNebula users manage significantly higher numbers of hypervisors (on the order of two thousand) with a single instance. As mentioned, this largely depends on the storage, networking, and monitoring configuration.

Front-End Platform Notes

The following applies to all Front-Ends:

  • XML-RPC tuning parameters (MAX_CONN, MAX_CONN_BACKLOG, KEEPALIVE_TIMEOUT, KEEPALIVE_MAX_CONN, and TIMEOUT) are only available with the packages distributed by us, as they are compiled against a newer xmlrpc-c library; see the example after this list.
  • Only Ruby versions >= 2.0 are supported.
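
As an illustration, these parameters are set in /etc/one/oned.conf; the values below are only an example (the shipped defaults), not tuning recommendations:

# /etc/one/oned.conf (illustrative values)
MAX_CONN           = 240
MAX_CONN_BACKLOG   = 240
KEEPALIVE_TIMEOUT  = 15
KEEPALIVE_MAX_CONN = 30
TIMEOUT            = 15

Restart oned after changing them (e.g., systemctl restart opennebula).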

CentOS 7.0

When using Apache to serve Sunstone, it is required that you disable or comment out the PrivateTMP=yes directive in /usr/lib/systemd/system/httpd.service.

There is an automatic job that removes all data from /var/tmp/. In order to disable this, edit /usr/lib/tmpfiles.d/tmp.conf and remove the line that cleans /var/tmp, as shown below.
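A sketch of both changes (check the exact contents of these files on your system before editing; the directive value and the /var/tmp line may differ slightly between package versions):

# /usr/lib/systemd/system/httpd.service -- comment out the directive:
#PrivateTMP=yes

# /usr/lib/tmpfiles.d/tmp.conf -- remove (or comment) the /var/tmp line, typically:
#v /var/tmp 1777 root root 30d

# then reload systemd and restart Apache:
$ sudo systemctl daemon-reload
$ sudo systemctl restart httpd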

There is a bug in libvirt that prevents the use of the save/restore mechanism if cpu_model is set to 'host-passthrough' via RAW. The workaround, if needed, is described in this issue.

Ubuntu 20.04

When using Apache to serve Sunstone, it’s required to grant read permissions on /var/lib/one to the user running httpd.
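One way to do this, assuming Apache runs as the default www-data user on Ubuntu, is with a filesystem ACL (a sketch; adjust the user to match your setup):

$ sudo setfacl -R -m u:www-data:rX /var/lib/one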

Debian 9

Guacamole does not come with RDP support, due to the lack of availability of libfreerdp2 in Debian 9. Hence, this functionality won’t be present if the Front-End runs on this platform.

Nodes Platform Notes

The following items apply to all distributions:

  • Since OpenNebula 4.14 there is a new monitoring probe that gets information about PCI devices. By default it retrieves all of the PCI devices in a Host. To limit which PCI devices it reports and which appear in onehost show, refer to PCI Passthrough.
  • When using qcow2 storage drivers, you can make sure data is written to disk when doing snapshots by setting the cache parameter to writethrough. This makes writes slower than with other cache modes, but safer. To do this, edit /etc/one/vmm_exec/vmm_exec_kvm.conf and change the line for DISK:
DISK = [ driver = "qcow2", cache = "writethrough" ]

CentOS/RedHat 7 Platform Notes

Ruby Dependencies

In order to install Ruby dependencies on RHEL, the Server Optional channel needs to be enabled. Please refer to the RedHat documentation to enable the channel.

Alternatively, use CentOS 7 repositories to install Ruby dependencies.

Libvirt Version

The libvirt/QEMU packages used in the testing infrastructure are those in the qemu-ev repository. To add this repository on CentOS, you can install the following packages:

yum install centos-release-qemu-ev
yum install qemu-kvm-ev

Disable PolicyKit for Libvirt

It is recommended that you disable PolicyKit for Libvirt:

$ cat /etc/libvirt/libvirtd.conf
...
auth_unix_ro = "none"
auth_unix_rw = "none"
unix_sock_group = "oneadmin"
unix_sock_ro_perms = "0770"
unix_sock_rw_perms = "0770"
...
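
After editing /etc/libvirt/libvirtd.conf, restart the service so the changes take effect:

$ sudo systemctl restart libvirtd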

CentOS/RedHat 8 Platform Notes

Disable PolicyKit for Libvirt

It is recommended that you disable PolicyKit for Libvirt:

$ cat /etc/libvirt/libvirtd.conf
...
auth_unix_ro = "none"
auth_unix_rw = "none"
unix_sock_group = "oneadmin"
unix_sock_ro_perms = "0770"
unix_sock_rw_perms = "0770"
...

vCenter 7.0 Platform Notes

Problem with Boot Order

Currently, in vCenter 7.0, changing the boot order of a Virtual Machine is only supported at deployment time.