LXC Driver

Requirements

The OpenNebula LXC driver requires LXC >= 3.0.3 to be installed on the Host.

Considerations & Limitations

Security

To ensure security in a multi-tenant environment, only unprivileged containers are supported by the LXC drivers.

Unprivileged containers are deployed by the root user, using the sub UID/GID range 600100001-600165537 to map users and groups. This mapping increases security in case a malicious agent manages to escape the container.
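For reference, a sub UID/GID range like this corresponds to id map entries in the generated container configuration similar to the following (illustrative only; the exact entries are produced by the driver):

```
# Illustrative id map: container uid/gid 0 maps to host id 600100001,
# covering the sub UID/GID range reserved by the driver
lxc.idmap = u 0 600100001 65536
lxc.idmap = g 0 600100001 65536
```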

Storage Limitations

  • Datablocks require formatting with a file system in order to be attached to a container.
  • Disk images must contain a file system; they cannot have partition tables.
  • You can use LXC with NAS (file-based), SAN (LVM) or Ceph Datastores.

Container Actions

Some of the actions supported by OpenNebula for VMs are not yet implemented for LXC. The following actions are not currently supported:

  • migration
  • live migration
  • live disk resize
  • save/restore
  • snapshots
  • disk-saveas
  • disk hot-plugging
  • nic hot-plugging

PCI Passthrough

PCI Passthrough is not currently supported for LXC containers.

Wild Containers

Importing wild containers that weren’t deployed by OpenNebula is not currently supported.

Configuration

OpenNebula

The LXC driver is enabled by default with reasonable defaults in /etc/one/oned.conf on your Front-end Host. Read the oned Configuration to understand these configuration parameters and the Virtual Machine Drivers Reference to learn how to customize and extend the drivers.

Driver

LXC driver-specific configuration is available in /var/lib/one/remotes/etc/vmm/lxc/lxcrc on the OpenNebula Front-end node. The following list contains the supported configuration attributes and a brief description:

Parameter Description
:vnc Options to customize VNC access to the container. :width, :height, :timeout, and :command can be set
:datastore_location Default path for the datastores. This only needs to be changed if the corresponding value in oned.conf has been modified
:default_lxc_config Path to the default LXC configuration file. This file will be included in the configuration of every LXC container
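As an illustration, a minimal lxcrc could look like the following. The values shown here are examples, not defaults to copy verbatim; check the file shipped with your installation for the actual settings:

```yaml
# Illustrative /var/lib/one/remotes/etc/vmm/lxc/lxcrc fragment (example values)
:vnc:
  :width: 800
  :height: 600
  :timeout: 300
:datastore_location: /var/lib/one/datastores
:default_lxc_config: /var/lib/one/remotes/etc/vmm/lxc/config
```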

Storage

LXC containers need a root file system image in order to boot. This image can be downloaded directly to OpenNebula from Docker Hub, Linux Containers and Turnkey Linux Marketplaces. Check the Public Marketplaces chapter for more information.

Note

Custom images can also be created using common Linux tools, such as mkfs to create the file system and dd to copy an existing file system into the new one.
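As a sketch of that workflow (file name and size are arbitrary examples), the following creates an image file and formats it directly, without a partition table, as the LXC driver requires:

```shell
# Allocate an empty 256 MiB image file to hold the root file system
dd if=/dev/zero of=rootfs.img bs=1M count=256 status=none

# Format the image itself with ext4 -- no partition table; the file
# is the file system, as required by the LXC driver
mkfs.ext4 -q -F rootfs.img
```

An existing root file system can then be copied into the image, for example by loop-mounting it and copying the files in.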

Networking

LXC containers are fully integrated with every OpenNebula networking driver.

Usage

Container Template

Container Templates can be defined using the same attributes described in the Virtual Machine Template section.

CPU="1"
MEMORY="146"
CONTEXT=[
  NETWORK="YES",
  SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]
DISK=[
  IMAGE="Alpine Linux 3.11",
  IMAGE_UNAME="oneadmin" ]
GRAPHICS=[
  LISTEN="0.0.0.0",
  TYPE="VNC" ]
NIC=[
  NETWORK="vnet",
  NETWORK_UNAME="oneadmin",
  SECURITY_GROUPS="0" ]

Remote Access

Containers support remote access via the VNC protocol, which allows easy access to them. The following section must be added to the container template to configure VNC access:

GRAPHICS=[
  LISTEN="0.0.0.0",
  TYPE="VNC" ]

Additional Attributes

The RAW attribute allows adding raw LXC configuration attributes to the final container deployment file. This makes it possible to set configuration attributes that are not directly supported by OpenNebula.

RAW = [
  TYPE = "lxc",
  DATA = "lxc.signal.reboot = 9" ]

Note

Each line of the DATA attribute must contain a single LXC configuration attribute and its corresponding value. If a provided attribute is already set by OpenNebula, it will be discarded and the original value will take precedence.

The LXC_PROFILES attribute implements behavior similar to LXD profiles, allowing pre-defined LXC configuration to be included in a container. To use a profile, the corresponding LXC configuration file must be available at /var/lib/one/remotes/etc/vmm/lxc/profiles.

For example, to use the profiles production and extra-performance, create the corresponding files containing the LXC configuration attributes (using lxc config syntax):

ls -l /var/lib/one/remotes/etc/vmm/lxc/profiles
...
-rw-r--r-- 1 oneadmin oneadmin 40 Apr 26 12:35 extra-performance
-rw-r--r-- 1 oneadmin oneadmin 35 Apr 26 12:35 production
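For instance, the extra-performance file could contain plain LXC configuration attributes such as the following (illustrative content only; choose attributes appropriate for your workload):

```
# profiles/extra-performance -- example LXC configuration attributes
lxc.prlimit.nofile = 65536
```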

Warning

After defining the profiles, make sure the oneadmin user has sufficient permissions to read them. Also, remember to run the onehost sync command to make sure the changes are synced to the Host. If a profile is not available on the Host, the container will be deployed without the corresponding profile configuration.

After defining the profiles, they can be used by adding the LXC_PROFILES attribute to the VM Template:

LXC_PROFILES = "extra-performance, production"

Profiles are implemented using the LXC include configuration attribute. Note that the profiles are included in the provided order, and this order may affect the final configuration of the container.
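Conceptually, the template above would result in include directives like these in the generated container configuration (paths are illustrative and depend on where the profiles are synced on the Host):

```
# Profiles expand, in order, to LXC include directives (illustrative paths)
lxc.include = /var/tmp/one/etc/vmm/lxc/profiles/extra-performance
lxc.include = /var/tmp/one/etc/vmm/lxc/profiles/production
```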