LXC version >= 3.0.3 installed on the host.
Hosts need cgroup version 1 or 2 to implement resource control operations (e.g. CPU pinning, memory or swap limits).
Considerations & Limitations
Privileged Containers and Security
To ensure security in a multi-tenant environment, containers created by the LXC driver are unprivileged by default. Unprivileged containers are deployed as root, but they use the 600100001-600165537 sub UID/GID range for mapping users/groups, which increases security in case a malicious agent manages to escape the container.
To create a privileged container, the attribute LXC_UNPRIVILEGED = "no" needs to be added to the VM Template. The generated container will include a file with a set of container configuration parameters for privileged containers. This file is located on the Front-end at /var/lib/one/remotes/etc/vmm/lxc/profiles/profile_privileged (see below for its contents). You can fine-tune this file if needed (don't forget to sync the file using the command onehost sync).
lxc.mount.entry = 'mqueue dev/mqueue mqueue rw,relatime,create=dir,optional 0 0'
lxc.cap.drop = 'sys_time sys_module sys_rawio'
lxc.mount.auto = 'proc:mixed'
lxc.mount.auto = 'sys:mixed'
lxc.cgroup.devices.deny = 'a'
lxc.cgroup.devices.allow = 'b *:* m'
lxc.cgroup.devices.allow = 'c *:* m'
lxc.cgroup.devices.allow = 'c 136:* rwm'
lxc.cgroup.devices.allow = 'c 1:3 rwm'
lxc.cgroup.devices.allow = 'c 1:5 rwm'
lxc.cgroup.devices.allow = 'c 1:7 rwm'
lxc.cgroup.devices.allow = 'c 1:8 rwm'
lxc.cgroup.devices.allow = 'c 1:9 rwm'
lxc.cgroup.devices.allow = 'c 5:0 rwm'
lxc.cgroup.devices.allow = 'c 5:1 rwm'
lxc.cgroup.devices.allow = 'c 5:2 rwm'
lxc.cgroup.devices.allow = 'c 10:229 rwm'
lxc.cgroup.devices.allow = 'c 10:200 rwm'
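In the VM Template itself, enabling this mode is a single attribute, as described above:

LXC_UNPRIVILEGED = "no"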
CPU and NUMA Pinning
You can pin containers to host CPUs and NUMA nodes simply by adding a TOPOLOGY attribute to the VM Template; see the Virtual Topology and CPU Pinning guide.
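As a minimal sketch, the VM Template below pins a two-vCPU container using the standard topology attributes; the specific counts and policy are illustrative, not required values:

# Illustrative: 2 vCPUs pinned as 1 socket x 2 cores x 1 thread
VCPU="2"
TOPOLOGY=[
  PIN_POLICY="CORE",
  SOCKETS="1",
  CORES="2",
  THREADS="1" ]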
Supported Storage Formats
Datablocks require formatting with a file system in order to be attached to a container.
Disk images must contain a file system only; they cannot have partition tables.
Some of the actions supported by OpenNebula for VMs are not yet implemented for LXC. The following are currently not supported:
Live disk resize
PCI passthrough
Importing wild containers (i.e. containers that weren't deployed by OpenNebula)
The LXC driver comes enabled by default, with reasonable defaults, in /etc/one/oned.conf on your Front-end Host. Read the oned Configuration to understand these configuration parameters and the Virtual Machine Drivers Reference to learn how to customize and extend the drivers.
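For reference, the driver is defined through a VM_MAD section in oned.conf along the lines of the excerpt below; the attributes shown are a sketch of the shipped defaults and may differ between OpenNebula versions:

VM_MAD = [
    NAME       = "lxc",
    EXECUTABLE = "one_vmm_exec",
    ARGUMENTS  = "-t 15 -r 0 lxc",   # illustrative default arguments
    TYPE       = "xml"
]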
LXC driver-specific configuration is available in
/var/lib/one/remotes/etc/vmm/lxc/lxcrc on the OpenNebula Front-end node. The following list contains the supported configuration attributes and a brief description:
Options to customize the VNC access to the container.
Default path for the datastores. This only needs to be changed if the corresponding value in oned.conf has been modified.
Path to the LXC default configuration file. This file will be included in the configuration of every LXC container
The mount options configuration section, also in lxcrc, supports the following attributes:
Comma-separated list of mount options used when shifting the UID/GID with bindfs. See the bindfs -o command help
Mount options for disk devices (in the host). Options are set per fs type (e.g. dev_xfs, dev_ext3…)
Mount options for data DISK in the container (lxc.mount.entry)
Mount options for root fs in the container (lxc.rootfs.options)
Default path to mount the data disk in the container. This can be set per DISK using the TARGET attribute
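As a sketch, this section of lxcrc is YAML; the keys below mirror the attributes just described, while the values are hypothetical examples rather than shipped defaults:

:mountopts:
  :bindfs: suid,dev               # hypothetical bindfs -o option list
  :dev_xfs: nouuid                # hypothetical host mount options per fs type
  :disk: defaults                 # hypothetical lxc.mount.entry options for data disks
  :rootfs: defaults               # hypothetical lxc.rootfs.options value
  :mountpoint: /media/one-disk.0  # hypothetical default mount path for data disks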
LXC containers need a root file system image in order to boot. This image can be downloaded directly to OpenNebula from the Docker Hub, Linux Containers and Turnkey Linux Marketplaces. Check the Public Marketplaces chapter for more information. You can use LXC with NAS (file-based), SAN (LVM) or Ceph datastores.
When using XFS images it is recommended to use images with a block size of 4K, as this is the default block size for mounting the file system. Otherwise you may get an error like the one below:
Mon Apr 4 22:20:25 2022 [Z0][VMM][I]: mount: /var/lib/one/datastores/0/30/mapper/disk.1: mount(2) system call failed: Function not implemented.
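For instance, when building an XFS image manually, the 4K block size can be set explicitly at creation time (the image path is illustrative):

# Create a sparse 1 GiB image file and format it with a 4K block size
truncate -s 1G /var/tmp/disk.img
mkfs.xfs -b size=4096 /var/tmp/disk.img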
Custom images can also be created using common Linux tools like the mkfs command for creating the file system and dd for copying an existing file system inside the new one. OpenNebula will also preserve any custom ID map present on the file system.
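A minimal sketch of that workflow (paths and sizes are illustrative):

# Create an empty 1 GiB image file
dd if=/dev/zero of=alpine.img bs=1M count=1024
# Format it with a file system; note there is no partition table
mkfs.ext4 alpine.img
# Mount it loopback and copy an existing root file system into it
sudo mount -o loop alpine.img /mnt
sudo cp -a /path/to/rootfs/. /mnt/
sudo umount /mnt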
LXC containers are fully integrated with every OpenNebula networking driver.
Container Templates can be defined by using the same attributes described in the Virtual Machine Template section.
CPU="1" MEMORY="146" CONTEXT=[ NETWORK="YES", SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ] DISK=[ IMAGE="Alpine Linux 3.11", IMAGE_UNAME="oneadmin" ] GRAPHICS=[ LISTEN="0.0.0.0", TYPE="VNC" ] NIC=[ NETWORK="vnet", NETWORK_UNAME="oneadmin", SECURITY_GROUPS="0" ]
The LXC driver will create a swap limit equal to the amount of memory defined in the VM Template. The attribute LXC_SWAP can be used to declare extra swap for the container.
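For example, to grant a container extra swap on top of its memory allocation (assuming the value is expressed in MB, like MEMORY):

MEMORY="1024"
LXC_SWAP="512"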
Containers support remote access via the VNC protocol. The following section must be added to the container template to configure VNC access:
GRAPHICS=[
  LISTEN="0.0.0.0",
  TYPE="VNC" ]
The RAW attribute allows you to add raw LXC configuration attributes to the final container deployment file. This permits setting configuration attributes that are not directly supported by OpenNebula:
RAW = [
  TYPE = "lxc",
  DATA = "lxc.signal.reboot = 9" ]
Each line of the
DATA attribute must contain only an LXC configuration attribute and its corresponding value. If a provided attribute is already set by OpenNebula, it will be discarded and the original value will take precedence.
The LXC_PROFILES attribute implements behavior similar to LXD profiles. It allows you to include pre-defined LXC configuration in a container. In order to use a profile, the corresponding LXC configuration file must be available at /var/lib/one/remotes/etc/vmm/lxc/profiles.
For example, if you want to use the profiles extra-performance and production, you need to create the corresponding files containing the LXC configuration attributes (using lxc config syntax):
ls -l /var/lib/one/remotes/etc/vmm/lxc/profiles
...
-rw-r--r-- 1 oneadmin oneadmin 40 abr 26 12:35 extra-performance
-rw-r--r-- 1 oneadmin oneadmin 35 abr 26 12:35 production
After defining the profiles, make sure the oneadmin user has permission to read them. Also, remember to use the onehost sync command to make sure the changes are synced to the Host. If a profile is not available on the Host, the container will be deployed without the corresponding profile configuration.
After defining the profiles, they can be used by adding the LXC_PROFILES attribute to the VM Template:
LXC_PROFILES = "extra-performance, production"
Profiles are implemented using the LXC include configuration attribute. Note that the profiles will be included in the provided order, and this order might affect the final configuration of the container.
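As an illustration, for the profiles above the generated container configuration would simply carry one include line per profile, in order. The host-side profiles path shown here is an assumption; it depends on where the remotes are synced on the Host:

lxc.include = '/var/tmp/one/etc/vmm/lxc/profiles/extra-performance'  # path is an assumption
lxc.include = '/var/tmp/one/etc/vmm/lxc/profiles/production'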