To ensure security in a multitenant environment, the containers created by the LXC driver are unprivileged by default. Unprivileged containers are deployed as root, and the sub UID/GID range 600100001-600165537 is used for mapping users/groups in order to increase security in case a malicious agent manages to escape the container.
To create a privileged container, the attribute LXC_UNPRIVILEGED = "no" needs to be added to the VM template (a minimal template snippet is shown after the listing below). The generated container will include a file with a set of configuration parameters specific to privileged containers. This file is located on the Front-end at /var/lib/one/remotes/etc/vmm/lxc/profiles/profile_privileged (see below for its contents). You can fine-tune this file if needed (don't forget to sync it with the onehost sync command).
lxc.mount.entry = 'mqueue dev/mqueue mqueue rw,relatime,create=dir,optional 0 0'
lxc.cap.drop = 'sys_time sys_module sys_rawio'
lxc.mount.auto = 'proc:mixed'
lxc.mount.auto = 'sys:mixed'
lxc.cgroup.devices.deny = 'a'
lxc.cgroup.devices.allow = 'b *:* m'
lxc.cgroup.devices.allow = 'c *:* m'
lxc.cgroup.devices.allow = 'c 136:* rwm'
lxc.cgroup.devices.allow = 'c 1:3 rwm'
lxc.cgroup.devices.allow = 'c 1:5 rwm'
lxc.cgroup.devices.allow = 'c 1:7 rwm'
lxc.cgroup.devices.allow = 'c 1:8 rwm'
lxc.cgroup.devices.allow = 'c 1:9 rwm'
lxc.cgroup.devices.allow = 'c 5:0 rwm'
lxc.cgroup.devices.allow = 'c 5:1 rwm'
lxc.cgroup.devices.allow = 'c 5:2 rwm'
lxc.cgroup.devices.allow = 'c 10:229 rwm'
lxc.cgroup.devices.allow = 'c 10:200 rwm'
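For example, a minimal VM template fragment requesting a privileged container only needs the attribute mentioned above:
# Request a privileged container (the default is unprivileged)
LXC_UNPRIVILEGED = "no"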
You can pin containers to Host CPUs and NUMA nodes simply by adding a TOPOLOGY attribute to the VM template; see the Virtual Topology and CPU Pinning guide. A sketch of such a template fragment is shown below.
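As an illustrative sketch (the available pin policies and related attributes are described in the Virtual Topology and CPU Pinning guide), a pinned container could be requested like this:
VCPU = "4"
CPU = "4"
TOPOLOGY = [ PIN_POLICY = "core" ]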
Some of the actions supported by OpenNebula for VMs are not yet implemented for LXC. The following actions are not currently supported:
migration
live migration
live disk resize
save/restore
snapshots
disk-saveas
disk hot-plugging
nic hot-plugging
PCI Passthrough is not currently supported for LXC containers.
Importing wild containers that weren’t deployed by OpenNebula is not currently supported.
The LXC driver is enabled by default in the OpenNebula configuration file /etc/one/oned.conf on your Front-end Host, with reasonable defaults. Read the oned Configuration guide to understand these configuration parameters, and the Virtual Machine Drivers Reference to learn how to customize and extend the drivers. A rough sketch of the driver stanza is shown below.
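For reference, the VM_MAD stanza for LXC in oned.conf looks roughly like the following sketch; treat the exact arguments and options as assumptions, since they may differ between OpenNebula versions:
VM_MAD = [
    NAME       = "lxc",
    EXECUTABLE = "one_vmm_exec",
    ARGUMENTS  = "-t 15 -r 0 lxc",
    TYPE       = "xml"
]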
LXC driver-specific configuration is available in /var/lib/one/remotes/etc/vmm/lxc/lxcrc on the OpenNebula Front-end node. The following table lists the supported configuration attributes with a brief description of each:
Parameter | Description |
---|---|
:vnc | Options to customize the VNC access to the container. The :width, :height, :timeout, and :command options can be set |
:datastore_location | Default path for the datastores. This only needs to be changed if the corresponding value in oned.conf has been modified |
:default_lxc_config | Path to the LXC default configuration file. This file will be included in the configuration of every LXC container |
The lxcrc file also contains a mount options configuration section:
Parameter | Description |
---|---|
:bindfs | Comma-separated list of mount options used when shifting the uid/gid with bindfs. See the bindfs -o command help |
:dev_<fs> | Mount options for disk devices (on the Host). Options are set per fs type (e.g. :dev_xfs, :dev_ext3…) |
:disk | Mount options for data DISKs in the container (lxc.mount.entry) |
:rootfs | Mount options for the root fs in the container (lxc.rootfs.options) |
:mountpoint | Default path to mount the data disk in the container. This can be set per DISK using the TARGET attribute |
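As a rough sketch, the lxcrc file is YAML and combines the attributes above; the values and paths below are assumptions for illustration only, not the shipped defaults:
# Illustrative lxcrc sketch; adjust to your installation
:vnc:
  :width: 800
  :height: 600
  :timeout: 300
  :command: /bin/login
:datastore_location: /var/lib/one/datastores
:default_lxc_config: /usr/share/lxc/config/common.conf   # hypothetical path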
LXC containers need a root file system image in order to boot. This image can be downloaded directly into OpenNebula from the Linux Containers Marketplace; check the Public Marketplaces chapter for more information. You can use LXC with NAS (file-based), SAN (LVM), or Ceph Datastores.
When using XFS images, it is recommended to use a block size of 4K, since that is the default block size used when mounting the file system. Otherwise you may get an error like the one below:
Mon Apr 4 22:20:25 2022 [Z0][VMM][I]: mount: /var/lib/one/datastores/0/30/mapper/disk.1: mount(2) system call failed: Function not implemented.
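As a sketch, an XFS image with the recommended 4K block size can be created with standard tools; the file name and size below are placeholders:
$ dd if=/dev/zero of=rootfs.img bs=1M count=1024   # create an empty 1 GiB image file
$ mkfs.xfs -b size=4096 rootfs.img                 # format it as XFS with 4K blocks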
Note that OpenNebula uses the mkfs command to create the file system and dd to copy an existing file system inside the new one. Also, OpenNebula will preserve any custom id map present on the filesystem.
LXC containers are fully integrated with every OpenNebula networking driver.
Container templates can be defined by using the same attributes described in Virtual Machine Template section.
CPU="1"
MEMORY="146"
CONTEXT=[
NETWORK="YES",
SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]
DISK=[
IMAGE="Alpine Linux 3.11",
IMAGE_UNAME="oneadmin" ]
GRAPHICS=[
LISTEN="0.0.0.0",
TYPE="VNC" ]
NIC=[
NETWORK="vnet",
NETWORK_UNAME="oneadmin",
SECURITY_GROUPS="0" ]
The LXC driver will create a swap limit equal to the amount of memory defined in the VM template. The LXC_SWAP attribute can be used to declare extra swap for the container, as in the sketch below.
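For example, a template fragment granting extra swap might look like this; the assumption here is that LXC_SWAP uses the same units as MEMORY (MB):
MEMORY = "1024"
LXC_SWAP = "512"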
Containers support remote access via the VNC protocol, which allows easy access to them. The following section must be added to the container template to configure VNC access:
GRAPHICS=[
LISTEN="0.0.0.0",
TYPE="VNC" ]
The RAW attribute allows us to add raw LXC configuration attributes to the final container deployment file. This makes it possible to set configuration attributes that are not directly supported by OpenNebula:
RAW = [
TYPE = "lxc",
DATA = "lxc.signal.reboot = 9" ]
The DATA attribute must contain only an LXC configuration attribute and its corresponding value. If a provided attribute is already set by OpenNebula, it will be discarded and the original value will take precedence.
The LXC_PROFILES attribute implements a behavior similar to LXD profiles. It allows users to include pre-defined LXC configuration in a container. In order to use a profile, the corresponding LXC configuration file must be available at /var/lib/one/remotes/etc/vmm/lxc/profiles.
For example, if you want to use the profiles production and extra-performance, you need to create the corresponding files containing the LXC configuration attributes (using LXC configuration syntax):
$ ls -l /var/lib/one/remotes/etc/vmm/lxc/profiles
...
-rw-r--r-- 1 oneadmin oneadmin 40 Apr 26 12:35 extra-performance
-rw-r--r-- 1 oneadmin oneadmin 35 Apr 26 12:35 production
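For illustration, such a profile file simply holds plain LXC configuration lines; the settings below are hypothetical examples of what an extra-performance profile might contain:
# /var/lib/one/remotes/etc/vmm/lxc/profiles/extra-performance (hypothetical content)
lxc.prlimit.nofile = 65536
lxc.sysctl.net.core.somaxconn = 4096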
Make sure the oneadmin user has enough permissions to read them. Also, remember to use the onehost sync command to make sure the changes are synced to the Host. If a profile is not available on the Host, the container will be deployed without the corresponding profile configuration.
After defining the profiles, they can be used by adding the PROFILES attribute to the VM template:
attribute to the VM template:
PROFILES = "extra-performance, production"
Profiles are implemented using the LXC include configuration attribute. Note that the profiles will be included in the provided order, and this order might affect the final configuration of the container; a conceptual sketch of the result is shown below.
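Conceptually, the generated container configuration ends up with one include per profile, in the given order; the paths below are illustrative, not the exact location used on the Host:
lxc.include = <profiles_path>/extra-performance
lxc.include = <profiles_path>/production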
On top of the regular OpenNebula logs at /var/log/one, the LXC driver generates additional logs for more specific LXC operations. Sometimes a container might fail to start or not behave as intended. You can find out more about what happened by inspecting the log files at /var/log/lxc/:
one-<vm_id>.console - Contains the console output seen when starting a container. This includes information about how the init process within the container starts. It can also help identify problems that occur after a successful start but a failed initialization.
one-<vm_id>.log - Contains information about how LXC handles the different container operations.
You can also verify the low-level configuration of the container generated by OpenNebula by inspecting the file /var/lib/lxc/one-<vm_id>/config.
For troubleshooting purposes, the AppArmor profile of a container can be set to unconfined through the RAW attribute, for example:
RAW = [
TYPE = "lxc",
DATA = "lxc.apparmor.profile=unconfined" ]