Platform Notes 6.10¶
This page covers the specific considerations to take into account when using an OpenNebula cloud, according to the different supported platforms.
Below is the list of the individual platform components that have been through the complete OpenNebula Quality Assurance and Certification Process.
Certified Components Version¶
Front-End Components¶
| Component | Version | More information |
|---|---|---|
| Red Hat Enterprise Linux | 8, 9 | |
| AlmaLinux | 8, 9 | |
| Ubuntu Server | 22.04 (LTS), 24.04 (LTS) | |
| Debian | 11, 12 | Front-End Installation. Not certified to manage VMware infrastructures |
| MariaDB or MySQL | Version included in the Linux distribution | |
| SQLite | Version included in the Linux distribution | Default DB, no configuration needed |
| Ruby Gems | Versions installed by opennebula-rubygems | Detailed information in |
vCenter Nodes¶
Important
The legacy vCenter driver is included in the distribution, but no longer receives updates or bug fixes.
| Component | Version | More information |
|---|---|---|
| vCenter | 7.0.x managing ESX 7.0.x & 8.0.x managing ESX 8.0.x | |
| NSX-T | 2.4.1+ | |
| NSX-V | 6.4.5+ | |
Note
Debian front-ends are not certified to manage VMware infrastructures with OpenNebula.
KVM Nodes¶
| Component | Version | More information |
|---|---|---|
| Red Hat Enterprise Linux | 8, 9 | |
| AlmaLinux | 8, 9 | |
| Ubuntu Server | 22.04 (LTS), 24.04 (LTS) | |
| Debian | 11, 12 | |
| KVM/Libvirt | Support for version included in the Linux distribution. For RHEL the packages from | |
LXC Nodes¶
| Component | Version | More information |
|---|---|---|
| Ubuntu Server | 22.04 (LTS), 24.04 (LTS) | |
| Debian | 11, 12 | |
| AlmaLinux | 8, 9 | |
| LXC | Support for version included in the Linux distribution | |
Linux and Windows Contextualization Packages¶
Refer to: one-apps release
More information: one-apps wiki
Open Cloud Networking Infrastructure¶
| Component | Version | More information |
|---|---|---|
| 8021q kernel module | Version included in the Linux distribution | |
| Open vSwitch | Version included in the Linux distribution | |
| iproute2 | Version included in the Linux distribution | |
Open Cloud Storage Infrastructure¶
| Component | Version | More information |
|---|---|---|
| iSCSI | Version included in the Linux distribution | |
| LVM2 | Version included in the Linux distribution | |
| Ceph | Quincy v17.2.x, Reef v18.2.x | |
Authentication¶
| Component | Version | More information |
|---|---|---|
| net-ldap ruby library | 0.12.1 or 0.16.1 | |
| openssl | Version included in the Linux distribution | |
Sunstone¶
| Browser | Version |
|---|---|
| Chrome | 61.0 - 94.0 |
| Firefox | 59.0 - 92.0 |
Note
For Windows desktops using Chrome or Firefox you should disable the touch-events option for your browser:
- Chrome: chrome://flags -> #touch-events: disabled.
- Firefox: about:config -> dom.w3c_touch_events: disabled.
Compatibility of Workloads on Certified Edge Clusters¶
Edge Clusters can be virtual or metal, depending on the instance type used to build the cluster. Note that not all providers offer both instance types.
Important
Providers based on virtual instances have been disabled by default.
| Edge/Cloud Provider | Edge Cluster | Hypervisor |
|---|---|---|
| | metal | KVM and LXC |
| | metal | KVM and LXC |
| | metal | KVM and LXC |
The Edge Cluster type determines the hypervisor and workloads that can be run in the cluster. The following table summarizes the Edge Cluster type you need to run specific workloads:
| Use Case | Edge Cluster | Hypervisor |
|---|---|---|
| | metal | KVM, LXC |
| | metal | KVM |
Certified Infrastructure Scale¶
A single instance of OpenNebula (i.e., a single oned process) has been stress-tested to cope with 500 hypervisors without performance degradation. This is the maximum recommended configuration for a single instance; depending on the underlying storage and networking configuration, switching to a federated scenario is recommended for any larger number of hypervisors.
However, several OpenNebula users manage significantly higher numbers of hypervisors (on the order of two thousand) with a single instance. This largely depends, as mentioned, on the storage, networking, and monitoring configuration.
Front-End Platform Notes¶
The following applies to all Front-Ends:
- XML-RPC tuning parameters (MAX_CONN, MAX_CONN_BACKLOG, KEEPALIVE_TIMEOUT, KEEPALIVE_MAX_CONN and TIMEOUT) are only available with packages distributed by us, as they are compiled with a newer xmlrpc-c library. An illustrative snippet follows this list.
- Only Ruby versions >= 2.0 are supported.
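For reference, these parameters live in /etc/one/oned.conf. A minimal sketch of the relevant section is shown below; the values are illustrative only, not recommended defaults:

# XML-RPC server tuning in /etc/one/oned.conf
# (values are illustrative, not recommended defaults)
MAX_CONN           = 15   # maximum simultaneous client connections
MAX_CONN_BACKLOG   = 15   # maximum queued connections
KEEPALIVE_TIMEOUT  = 15   # seconds a keep-alive connection may stay open
KEEPALIVE_MAX_CONN = 30   # requests served per keep-alive connection
TIMEOUT            = 15   # seconds allowed to process a single request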
Nodes Platform Notes¶
The following items apply to all distributions:
- Since OpenNebula 4.14 there is a new monitoring probe that gathers information about PCI devices. By default it retrieves all the PCI devices in a Host. To limit the PCI devices for which it gathers information and which appear in onehost show, refer to PCI Passthrough.
- When using qcow2 storage drivers you can make sure that the data is written to disk when doing snapshots by setting the cache parameter to writethrough. This makes writes slower than other cache modes but safer. To do this, edit the file /etc/one/vmm_exec/vmm_exec_kvm.conf and change the line for DISK:
DISK = [ driver = "qcow2", cache = "writethrough" ]
- Most Linux distributions using a kernel 5.X (e.g. Debian 11) mount cgroups v2 via systemd natively. If you have hosts with a previous version of the distribution mounting cgroups via fstab or in v1 compatibility mode (e.g. Debian 10) and their libvirt version is <5.5, during the migration of VMs from older hosts to new ones you can experience errors like:
WWW MMM DD hh:mm:ss yyyy: MIGRATE: error: Unable to write to '/sys/fs/cgroup/machine.slice/machine-qemu/..../cpu.weight': Numerical result out of range Could not migrate VM_UID to HOST ExitCode: 1
This happens in every single VM migration from a host with the previous OS version to a host with the new one.
To solve this, there are two options: delete the VM and recreate it scheduled on a host with the new OS version, or mount cgroups in v1 compatibility mode on the nodes with the new OS version. To do the latter (a minimal sketch follows the steps below):
1. Edit the bootloader default options (normally under /etc/default/grub).
2. Modify the default command line for the nodes (usually GRUB_CMDLINE_LINUX="...") and add the option systemd.unified_cgroup_hierarchy=0.
3. Recreate the grub configuration file (in most cases by executing grub-mkconfig -o /boot/grub/grub.cfg).
4. Reboot the host. The change will persist even if the kernel is updated.
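A minimal sketch of these steps on a GRUB-based node is shown below; paths and commands are assumptions and may differ on your distribution:

# Append the option to the default kernel command line (file path is an assumption).
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 systemd.unified_cgroup_hierarchy=0"/' /etc/default/grub

# Regenerate the GRUB configuration and reboot for the change to take effect.
grub-mkconfig -o /boot/grub/grub.cfg
reboot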
RedHat 8 Platform Notes¶
Disable PolicyKit for Libvirt¶
It is recommended that you disable PolicyKit for Libvirt:
cat /etc/libvirt/libvirtd.conf
...
auth_unix_ro = "none"
auth_unix_rw = "none"
unix_sock_group = "oneadmin"
unix_sock_ro_perms = "0770"
unix_sock_rw_perms = "0770"
...
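After editing libvirtd.conf, restart the libvirt daemon so the new settings take effect (assuming a systemd-managed host):

systemctl restart libvirtd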
AlmaLinux 9 Platform Notes¶
Disable Libvirtd’s SystemD Socket Activation¶
OpenNebula currently works only with the legacy libvirtd.service. You should disable libvirt's modular daemons and systemd socket activation for the libvirtd.service.
You can take a look at this bug report for a detailed workaround procedure; a rough sketch is shown below.
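As a sketch of that workaround, assuming the standard libvirt unit names shipped with AlmaLinux 9, the modular daemons and socket units can be disabled and the monolithic daemon enabled as follows:

# Disable the modular per-driver daemons and their sockets (unit names assumed).
for drv in qemu interface network nodedev nwfilter secret storage; do
    systemctl disable --now virt${drv}d.service
    systemctl disable --now virt${drv}d{,-ro,-admin}.socket
done

# Disable socket activation for libvirtd and run the legacy daemon directly.
systemctl disable --now libvirtd{,-ro,-admin,-tls,-tcp}.socket
systemctl enable --now libvirtd.service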
vCenter 7.0 Platform Notes¶
Important
The legacy vCenter driver is currently included in the distribution, but no longer receives updates or bug fixes.
Problem with Boot Order¶
Currently, in vCenter 7.0, changing the boot order of Virtual Machines is only supported at deployment time.
Debian 10 and Ubuntu 18 Upgrade¶
When upgrading your nodes from Debian 10 or Ubuntu 18 you may need to update the OpenNebula sudoers file because of the /usr merge feature implemented for Debian 11/Ubuntu 20. You can find more information and a recommended workaround in this issue.
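As a purely hypothetical illustration (the actual file name, alias names, and command list depend on your installation and on the linked issue), the fix amounts to referencing the post-merge /usr paths in the OpenNebula sudoers file:

# Hypothetical excerpt of /etc/sudoers.d/opennebula after the /usr merge:
# commands point at /usr/sbin and /usr/bin instead of /sbin and /bin.
Cmnd_Alias ONE_LVM = /usr/sbin/lvcreate, /usr/sbin/lvremove, /usr/sbin/lvs
oneadmin ALL=(ALL) NOPASSWD: ONE_LVM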