Firecracker requires a Linux kernel version >= 4.14 and the KVM kernel module.
Specific information about the platforms supported by Firecracker can be found in the code repository.
Considerations & Limitations
microVM CPU Usage
There are two main limitations regarding CPU usage for microVMs:
- OpenNebula deploys microVMs by using Firecracker’s Jailer. The Jailer takes care of increasing the security and isolation of the microVM and is Firecracker’s recommended way of deploying microVMs in production environments. The Jailer forces the microVM to be isolated in a NUMA node; OpenNebula takes care of evenly distributing microVMs among the available NUMA nodes. One of the following policies can be selected in the driver configuration file:
  - `rr`: schedule the microVMs in a round-robin fashion across NUMA nodes, based on the VM ID.
  - `random`: schedule the microVMs randomly across NUMA nodes.
Currently, Firecracker only supports isolation at the NUMA level, so OpenNebula’s NUMA & CPU pinning options are not available for Firecracker microVMs.
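The two placement policies above can be sketched as follows. This is an illustrative model, not OpenNebula’s actual implementation; the function name and signature are hypothetical:

```python
import random

def pick_numa_node(vm_id, numa_nodes, policy="rr"):
    """Sketch of the two NUMA placement policies described above."""
    if policy == "rr":
        # Round-robin: the VM ID modulo the node count selects the node,
        # so consecutive VM IDs land on consecutive NUMA nodes
        return numa_nodes[vm_id % len(numa_nodes)]
    if policy == "random":
        return random.choice(numa_nodes)
    raise ValueError(f"unknown policy: {policy}")
```

With two NUMA nodes, for example, VM IDs 4 and 5 land on nodes 0 and 1 respectively under `rr`.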
- Firecracker microVMs support hyperthreading, but in a very specific way. When hyperthreading is enabled, the number of threads per core is always two (e.g., with `VCPU=8` the VM will have four cores with two threads each). In order to enable hyperthreading for a microVM, the `TOPOLOGY/THREADS` value can be set in the microVM template as shown below:

```
TOPOLOGY = [ CORES = "4", PIN_POLICY = "NONE", SOCKETS = "1", THREADS = "2" ]
```
- Firecracker only supports `raw` format images.
- The Firecracker driver is only compatible with NFS/NAS Datastores and Local Storage Datastores. Note that the `qcow2` format is not supported.
- As the Firecracker Jailer performs a `chroot` operation under the microVM location, persistent images are not supported when using `TM_MAD=shared`. In order to use persistent images, `TM_MAD` must be overridden to use `TM_MAD=ssh`; this can be easily achieved by adding `TM_MAD_SYSTEM=ssh` to the microVM template. More info on how to combine different `TM_MAD`s can be found here.
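For instance, the template of a microVM that needs persistent images on a shared datastore would include the following attribute:

```
TM_MAD_SYSTEM = "ssh"
```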
Some of the actions supported by OpenNebula for VMs and containers are not supported for microVMs due to Firecracker’s limitations. The following actions are not currently supported:
Driver Specific Configuration
Firecracker-specific configuration is available in the `/var/lib/one/remotes/etc/vmm/firecracker/firecrackerrc` file on the OpenNebula Front-end node. The following list contains the supported configuration attributes and a brief description of each one:
| Attribute | Description |
|-----------|-------------|
| | Options to customize the VNC access to the microVM |
| | Default path for the datastores. This only needs to be changed if the corresponding value in `oned.conf` has been modified |
| | UID for starting microVMs corresponds with |
| | GID for starting microVMs corresponds with |
| | Firecracker binary location |
| | Timeout (in seconds) for executing the cancel action if shutdown gets stuck |
| | Path where the cgroup file system is mounted |
| | If true, the `cpu.shares` value will be set according to the VM `CPU` value; if false, `cpu.shares` is left at its default, which means that all resources are shared equally across the VMs |
| | Timeout to wait for a cgroup to be empty after shutting down or canceling a microVM |
Firecracker only supports cgroup v1.
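To check which cgroup version a Host is using, the file system type mounted at the default cgroup location can be inspected (a quick sketch; on most distributions `tmpfs` indicates a cgroup v1 hierarchy, while `cgroup2fs` indicates cgroup v2):

```shell
# Print the file system type mounted at /sys/fs/cgroup:
# "tmpfs" -> cgroup v1 hierarchy, "cgroup2fs" -> cgroup v2
stat -fc %T /sys/fs/cgroup
```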
Drivers Generic Configuration
The Firecracker driver is enabled by default in OpenNebula in `/etc/one/oned.conf` on your Front-end Host. The configuration parameters are already preconfigured with reasonable defaults. If you change them, you will need to restart OpenNebula.
Unlike regular VMs, Firecracker microVMs do not use full disk images (with partition tables, MBR, etc.). Instead, Firecracker microVMs use a root file system image together with an uncompressed Linux kernel binary file.
Root File System Images
The root file system can be uploaded as a raw image (`OS` type) to any OpenNebula Image Datastore. Once the image is available, it can be added as a new disk to the microVM template.
Custom images can also be created using common Linux tools, such as the `mkfs` command to create the file system and `dd` to copy an existing file system into the new one.
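As a sketch of that workflow (sizes and paths are examples, and mounting the image requires root privileges):

```shell
# Allocate a sparse 300 MiB file to hold the root file system
truncate -s 300M rootfs.ext4

# Create an ext4 file system directly on the file
# (Firecracker boots a bare file system, so no partition table is needed)
mkfs.ext4 -q rootfs.ext4

# Mount it via a loop device and copy an existing file system tree into it
# (/path/to/existing-rootfs is a placeholder)
sudo mount -o loop rootfs.ext4 /mnt
sudo cp -a /path/to/existing-rootfs/. /mnt/
sudo umount /mnt
```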
The kernels must be uploaded to a Kernels & Files Datastore with the “Kernel” type. Once the kernel is available, it can be referenced by using the `KERNEL_DS` attribute in the `OS` section of the microVM template.
Kernel images can be built for the desired kernel version, with the configuration attributes required for the use case. In order to improve performance, the kernel image can be compiled with only the minimal options required. The Firecracker project provides a suggested configuration file in its official repository.
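For illustration, a minimal microVM kernel typically enables little more than virtio-over-MMIO devices and a serial console. An excerpt of such options might look like the following (the exact set depends on the kernel version; Firecracker’s suggested configuration file remains the authoritative starting point):

```
CONFIG_VIRTIO=y
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_NET=y
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_EXT4_FS=y
```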
Firecracker works with all OpenNebula networking drivers.
As Firecracker does not manage the tap devices used for microVM networking, OpenNebula takes care of managing these devices and plugs them into the pertinent bridge. In order to enable this functionality, the following actions have to be carried out manually when networking is desired for microVMs:
```
# In the frontend, for each driver to be used with Firecracker
$ cp /var/lib/one/remotes/vnm/hooks/pre/firecracker /var/lib/one/remotes/vnm/<networking-driver>/pre.d/firecracker
$ cp /var/lib/one/remotes/vnm/hooks/clean/firecracker /var/lib/one/remotes/vnm/<networking-driver>/clean.d/firecracker
$ onehost sync -f
```
Repeat the `cp` commands for every networking driver which is going to be used with microVMs, and make sure the `oneadmin` user has enough permissions to run the scripts.
Below there is a minimum microVM Template:

```
CPU="1"
MEMORY="146"
VCPU="2"
CONTEXT=[
  NETWORK="YES",
  SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ]
DISK=[
  IMAGE="Alpine Linux 3.11",
  IMAGE_UNAME="oneadmin" ]
GRAPHICS=[
  LISTEN="0.0.0.0",
  TYPE="VNC" ]
NIC=[
  NETWORK="vnet",
  NETWORK_UNAME="oneadmin",
  SECURITY_GROUPS="0" ]
OS=[
  BOOT="",
  KERNEL_CMD="console=ttyS0 reboot=k panic=1 pci=off i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd",
  KERNEL_DS="$FILE[IMAGE_ID=2]" ]
```
The `OS` section needs to contain a `KERNEL_DS` attribute referencing a Linux kernel from a Files & Kernels Datastore:
```
OS=[ BOOT="", KERNEL_CMD="console=ttyS0 reboot=k panic=1 pci=off i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd", KERNEL_DS="$FILE[IMAGE_ID=2]" ]
```
MicroVMs support remote access via the VNC protocol, which allows easy access to microVMs. The following section must be added to the microVM template to configure VNC access:

```
GRAPHICS=[ LISTEN="0.0.0.0", TYPE="VNC" ]
```
Apart from the system logs, Firecracker generates a microVM log inside the jailed folder. This log can be found in:
This log cannot be forwarded outside the VM folder: while a Firecracker microVM runs, the Firecracker process is isolated in its VM folder to increase security. More information on how Firecracker isolates the microVM can be found in the official Firecracker documentation.