Deploy OpenNebula on-prem with an ISO

Introduction

The OpenNebula ISO is a disk image that can be written to removable media and booted to install a pre-configured version of AlmaLinux 9 and OpenNebula Enterprise Edition, including the initial setup, on the target server.

Once the ISO has booted and finished installing and setting up the software, a pre-configured cloud based on OpenNebula will be ready for immediate use, installed on a single bare-metal server complete with the OpenNebula Front-end server and a KVM hypervisor node. The same ISO can be used to install additional KVM hypervisor nodes on the same infrastructure.

The installed OS also includes a menu and a small set of Ansible playbooks that simplify the management of the OpenNebula infrastructure.

[Image: onepoc architecture]

Requirements

The OpenNebula ISO is based on AlmaLinux 9 and thus shares its requirements. Note that the CPU must support at least the x86-64-v2 instruction set level (introduced with CPUs from 2008). The following table states the minimum requirements for installing the ISO.

Component  Requirement
---------  -----------
CPU        Recent CPU (2016 or later), with virtualization enabled at BIOS level
Memory     Over 32 GB, for both frontend and nodes
Disk       512 GB NVMe
Network    At least one NIC for management*; two NICs recommended (management and service)

*Not needed for installation
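
Before installing, the CPU-related requirements can be checked from any Linux system already running on the target server. A minimal sketch (both commands are standard on recent distributions):

# count CPU threads exposing hardware virtualization (VT-x/AMD-V); 0 means it
# is disabled at BIOS level or unsupported
grep -E -c '(vmx|svm)' /proc/cpuinfo

# glibc's loader reports the x86-64 microarchitecture levels this CPU supports
/lib64/ld-linux-x86-64.so.2 --help | grep 'x86-64-v'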

ISO Download and installation

Now it is time to download the OpenNebula ISO (based on AlmaLinux).

Once the image is downloaded, there are two ways to proceed:

  • If the server has IPMI, iLO, iDRAC, RSA, or some other way to mount remote media (ISO or USB), the ISO can be mounted directly through it
  • If, on the other hand, there is no remote console but there is physical access to the server's USB ports, a bootable USB drive can be created from the image

On Linux or macOS, the image can be written to the USB drive with the following command (replace /dev/sdX with the device node of the whole USB drive, not a partition):

sudo dd if=/path/to/your/opennebula-7.0.1-CE.iso of=/dev/sdX
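
On Linux, the correct device node can be identified before writing, for example with lsblk (USB drives show usb in the TRAN column):

lsblk -d -o NAME,SIZE,MODEL,TRAN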

On Windows, the bootable USB drive can be created with Rufus.

With the media inserted (or virtually mounted) on the server, reboot it and select the right boot device in the BIOS. Some BIOSes can boot the media in either MBR or UEFI mode; we recommend booting it as UEFI for compatibility reasons.

The bootloader will show the following screen:

[Images: UEFI boot screen and MBR boot screen]

The options are the following:

  • Install OpenNebula POC will install a full OpenNebula frontend plus the software needed to make the server an OpenNebula KVM hypervisor
  • Install OpenNebula Node will install only the KVM hypervisor packages

The installer runs in text mode and will only ask for confirmation before deleting all the data on the first disk it finds, on a screen that looks like the following:

[Image: validation script confirmation screen]

After the confirmation, the installation will start, showing some information about the default settings and the packages that will be installed:

[Image: Anaconda unattended installation]

Frontend configuration

Once the frontend installation finishes, no network card will be configured yet, so console access to the server is required. The console will look like the following (the colours and the font may vary):

Welcome to OpenNebula Proof of Concept (onepoc) !

- Please, log in as user `root`
- For a basic configuration of the server, please execute `onefemenu`
- After the network is configured, the sunstone interface will be running in

  http://this_server:2616

- Please, check the manual page onepoc-quickstart with a

  $ man onepoc-quickstart 7

Thank you!

The frontend menu will look like the following (the colours and the font may vary). The options can be navigated with the cursor keys and selected with ENTER:

                            ┌──────────────────────OpenNebula node Setup─────────────────────────┐
                            │ Setup menu                                                         │
                            │ ┌────────────────────────────────────────────────────────────────┐ │
                            │ │          check_host          Check host requirements           │ │
                            │ │          netconf             Configure network                 │ │
                            │ │          enable_fw           Enable firewalld                  │ │
                            │ │          disable_fw          Disable firewalld                 │ │
                            │ │          add_host            Add OpenNebula Host               │ │
                            │ │          proxy               Configure proxy settings          │ │
                            │ │          tmate               Remote console support            │ │
                            │ │          show_oneadmin_pass  Show oneadmin password            │ │
                            │ │          quit                Exit to Shell                     │ │
                            │ │                                                                │ │
                            │ │                                                                │ │
                            │ │                                                                │ │
                            │ │                                                                │ │
                            │ └────────────────────────────────────────────────────────────────┘ │
                            ├────────────────────────────────────────────────────────────────────┤
                            │                   <  OK  >          <Cancel>                       │
                            └────────────────────────────────────────────────────────────────────┘

Network and hostname setup

Now it is time to configure the network using the netconf option in the menu. This launches nmtui (the default ncurses configuration interface), which allows setting up the network and hostname, as well as more complex network configurations (bonding, VLANs, etc.).

The following menu will appear:

                                                   ┌─┤ NetworkManager TUI ├──┐
                                                   │                         │
                                                   │ Please select an option │
                                                   │                         │
                                                   │ Edit a connection       │
                                                   │ Activate a connection   │
                                                   │ Set system hostname     │
                                                   │ Radio                   │
                                                   │                         │
                                                   │ Quit                    │
                                                   │                         │
                                                   │                    <OK> │
                                                   │                         │
                                                   └─────────────────────────┘

To configure the network, select Edit a connection. The following menu will appear, showing all the available network interfaces. In this case the screen shows only one, but there may be more. Select the one that will be used for OpenNebula management.

                                                  ┌───────────────────────────┐
                                                  │                           │
                                                  │ ┌─────────────┐           │
                                                  │ │ Ethernet  ↑ │ <Add>     │
                                                  │ │   enp3s0  ▒ │           │
                                                  │ │ Loopback  ▒ │ <Edit...> │
                                                  │ │   lo      ▒ │           │
                                                  │ │           ▒ │ <Delete>  │
                                                  │ │           ▒ │           │
                                                  │ │           ▮ │           │
                                                  │ │           ▒ │           │
                                                  │ │           ▒ │           │
                                                  │ │           ▒ │           │
                                                  │ │           ▒ │           │
                                                  │ │           ↓ │ <Back>    │
                                                  │ └─────────────┘           │
                                                  │                           │
                                                  └───────────────────────────┘

After selecting the interface to configure for OpenNebula management access, navigate to “IPv4 CONFIGURATION”, press ENTER, and change the option to Manual. Then select the <Show> field on the right side.

                           │                     ┌────────────┐                                      │
                           │ ═ ETHERNET          │ Disabled   │                            <Show>    │
                           │ ═ 802.1X SECURITY   │ Automatic  │                            <Show>    │
                           │                     │ Link-Local │                                      │
                           │ ╤ IPv4 CONFIGURATION│ Manual     │                            <Hide>    │
                           │ │          Addresses│ Shared     │ ___________ <Remove>                 │
                           │ │                   └────────────┘                                      │

The IPv4 settings can then be configured. After setting the IP address/mask, default gateway, and DNS servers, check Require IPv4 addressing for this connection and Automatically connect.

                           ┌───────────────────────────┤ Edit Connection ├───────────────────────────┐
                           │                                                                         │
                           │         Profile name enp3s0__________________________________           │
                           │               Device enp3s0 (XX:XX:XX:XX:XX:XX)______________           │
                           │                                                                         │
                           │ ═ ETHERNET                                                    <Show>    │
                           │ ═ 802.1X SECURITY                                             <Show>    │
                           │                                                                         │
                           │ ╤ IPv4 CONFIGURATION <Manual>                                 <Hide>    │
                           │ │          Addresses 172.20.0.7/24____________ <Remove>                 │
                           │ │                    <Add...>                                           │
                           │ │            Gateway 172.20.0.1_______________                          │
                           │ │        DNS servers 172.20.0.1_______________ <Remove>                 │
                           │ │                    <Add...>                                           │
                           │ │     Search domains <Add...>                                           │
                           │ │                                                                       │
                           │ │            Routing (No custom routes) <Edit...>                       │
                           │ │ [ ] Never use this network for default route                          │
                           │ │ [ ] Ignore automatically obtained routes                              │
                           │ │ [ ] Ignore automatically obtained DNS parameters                      │
                           │ │                                                                       │
                           │ │ [X] Require IPv4 addressing for this connection                       │
                           │ └                                                                       │
                           │                                                                         │
                           │ ═ IPv6 CONFIGURATION <Automatic>                              <Show>    │
                           │                                                                         │
                           │ [X] Automatically connect                                               │
                           │ [X] Available to all users                                              │
                           │                                                                         │
                           │                                                           <Cancel> <OK> │
                           │                                                                         │
                           └─────────────────────────────────────────────────────────────────────────┘
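
For reference, the same IPv4 configuration could also be applied non-interactively with nmcli, assuming the connection profile is named enp3s0 and using the example addressing shown above:

# configure static IPv4 addressing on the enp3s0 profile
nmcli connection modify enp3s0 \
    ipv4.method manual \
    ipv4.addresses 172.20.0.7/24 \
    ipv4.gateway 172.20.0.1 \
    ipv4.dns 172.20.0.1 \
    connection.autoconnect yes
# re-activate the profile so the changes take effect
nmcli connection up enp3s0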

The default hostname is onepoc. If it needs to be changed, the Set system hostname option of the menu leads to the following text dialog, which allows changing the hostname:


                                                   ┌─┤ NetworkManager TUI ├──┐
                                                   │                         │
                                                   │ Please select an option │
                                      ┌─────────────────┤ Set Hostname ├──────────────────┐
                                      │                                                   │
                                      │ Hostname ________________________________________ │
                                      │                                                   │
                                      │                                     <Cancel> <OK> │
                                      │                                                   │
                                      └───────────────────────────────────────────────────┘

                                                   │                    <OK> │
                                                   │                         │
                                                   └─────────────────────────┘
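
Equivalently, the hostname can be set from the shell with hostnamectl (the name below is only an example):

# replace onepoc-fe1 with the desired hostname
hostnamectl set-hostname onepoc-fe1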

After modifying the configuration, choose Quit in the menu. An Ansible playbook will then configure the required services; it may take some minutes to finish.

....

PLAY RECAP *********************************************************************
172.20.0.7                 : ok=44   changed=2    unreachable=0    failed=0    skipped=10   rescued=0    ignored=0
frontend                   : ok=42   changed=8    unreachable=0    failed=0    skipped=28   rescued=0    ignored=0

Press any key to continue

Configuring the Hypervisor host

After the installation, the server runs only the frontend and needs to be added as an OpenNebula hypervisor in order to run VMs. The steps are:

  • Log in as root on the server
  • Execute onefemenu
  • Select the option add_host

After selecting the option add_host, the IP for the host will be requested. In this case we use the IP that was configured before, 172.20.0.7:

                                 ┌──────────────────────────────────────────────────────────┐
                                 │ Insert the IP for the node                               │
                                 │ ┌──────────────────────────────────────────────────────┐ │
                                 │ │172.20.0.7                                            │ │
                                 │ └──────────────────────────────────────────────────────┘ │
                                 │                                                          │
                                 ├──────────────────────────────────────────────────────────┤
                                 │               <  OK  >        <Cancel>                   │
                                 └──────────────────────────────────────────────────────────┘

Then the user to log into the node will be requested. It MUST be root, or a user with passwordless sudo root access:

                                 ┌──────────────────────────────────────────────────────────┐
                                 │ Insert the user for the node                             │
                                 │ ┌──────────────────────────────────────────────────────┐ │
                                 │ │root                                                  │ │
                                 │ └──────────────────────────────────────────────────────┘ │
                                 │                                                          │
                                 ├──────────────────────────────────────────────────────────┤
                                 │               <  OK  >        <Cancel>                   │
                                 └──────────────────────────────────────────────────────────┘
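
If a non-root user is entered here, it must be able to run sudo as root without a password. A minimal sketch of such a configuration, run as root on the node and assuming a hypothetical user named deploy:

# grant passwordless sudo to the 'deploy' user
echo 'deploy ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/deploy
chmod 440 /etc/sudoers.d/deploy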

A confirmation dialog like the following will be shown:

                       ┌──────────────────────────────────────────────────────────────────────────────┐
                       │ Add node  172.20.0.7 logging as user root (with nopasswd root permissions)?  │
                       │ Password will be asked. If not provided, an ssh connection using the ssh key │
                       │ of onepoc user will be used                                                  │
                       │                                                                              │
                       │                                                                              │
                       ├──────────────────────────────────────────────────────────────────────────────┤
                       │                         < Yes >             < No  >                          │
                       └──────────────────────────────────────────────────────────────────────────────┘

After that, an Ansible playbook will run to perform all the required operations on the frontend. This may take some minutes:

...
PLAY RECAP *********************************************************************
...
...
172.20.0.7                 : ok=52   changed=27   unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
frontend                   : ok=43   changed=11   unreachable=0    failed=0    skipped=27   rescued=0    ignored=0

Press any key to continue

Graphical User Interface

The GUI should be available at http://<frontend_ip>:2616

The oneadmin password can be obtained in onefemenu, via the option show_oneadmin_pass.
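
As a quick check, Sunstone can be probed from any machine with access to the frontend; on the frontend itself the oneadmin credentials are stored in the standard OpenNebula location:

# expect an HTTP response on the Sunstone port (replace <frontend_ip>)
curl -sI http://<frontend_ip>:2616 | head -n 1

# read the oneadmin credentials directly on the frontend
sudo cat /var/lib/one/.one/one_auth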

Networking

The ISO deployment does not automatically configure virtual networks. Instead, FRR is configured with BGP EVPN so that VXLANs work for internal communication. VXLAN is a technology that isolates virtual networks from each other based on an identifier between 1 and 16777215.

This means that a virtual network over VXLAN can be created from the web interface by selecting the VXLAN type, setting the physical device used for packet transit, and filling in the network contextualization (address, gateway, etc.).

A VNET can be set up in the Sunstone interface under Networks -> Virtual Networks -> Create Virtual Network, following the model below. In this case we are using VXLAN identifier 100 (a positive number up to 16777215) on the frontend interface enp3s0; adjust it to the external interface of your frontend.

[Image: Sunstone virtual network configuration]

An address range must be created; in this case we chose an IPv4 range starting at 172.16.100.8 with 100 consecutive IPs.

[Image: Sunstone virtual network address range]

The network contextualization has the following parameters:

[Image: Sunstone virtual network context]
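
For reference, an equivalent network could also be created from the CLI with a template like the following sketch. The VXLAN identifier, physical device, and address range are taken from the example above; the gateway and DNS values are assumptions to be replaced with your own:

# run as the oneadmin user on the frontend
cat > vxlan100.tmpl <<'EOF'
NAME    = "test_vnet"
VN_MAD  = "vxlan"
PHYDEV  = "enp3s0"   # external interface of the frontend
VLAN_ID = "100"      # the VXLAN identifier
AR      = [ TYPE = "IP4", IP = "172.16.100.8", SIZE = "100" ]
NETWORK_ADDRESS = "172.16.100.0"
NETWORK_MASK    = "255.255.255.0"
GATEWAY = "172.16.100.1"   # assumption
DNS     = "172.16.100.1"   # assumption
EOF
onevnet create vxlan100.tmpl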

Virtual network considerations

VXLAN networks are purely internal and have no access to external networks; by default they can be considered completely isolated. External access from/to these networks must be configured explicitly.

Determining the virtual network identifier

To determine the virtual network that needs external access, use onevnet list. This command lists the existing virtual networks, for instance:

# onevnet list
  ID USER     GROUP    NAME          CLUSTERS   BRIDGE    STATE    LEASES OUTD ERRO
   0 oneadmin oneadmin test_vnet     0          XXXXX     rdy           0    0    0

The ID and the NAME fields of each row can be used in all operations on virtual networks.
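
The details of a specific network (driver, physical device, address ranges, and leases) can be inspected with onevnet show:

# onevnet show 0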

Creating the virtual network gateway (access from the frontend)

To create the default gateway on a virtual network, execute the command onevnet_add_gw followed by the ID of the virtual network. For example, the following command creates the gateway for network 0:

# onevnet_add_gw 0

To delete the gateway and make the network unreachable again, reverting the behaviour, execute onevnet_del_gw <NETWORK_ID> in the same way.
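
For example, to revert the gateway created above:

# onevnet_del_gw 0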

Setting up NAT (access to the same networks as the frontend)

Virtual machines on this virtual network won’t be able to reach the same networks as the frontend because there is no NAT. A simple NAT can be created by executing the command enable_masquerade.
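
For example, on the frontend shell:

# enable_masquerade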

Adding a local route (access from external networks to the virtual network)

After the gateway has been created and NAT masquerade has been enabled, the VMs in the virtual network 172.16.100.0/24:

  • can communicate (bidirectionally) with the frontend
  • can access the same networks as the frontend (e.g. the Internet)

At this point, a machine outside the virtual network (even one with access to the frontend) still cannot reach it, because it does not know how to route to it. For that, a route via the frontend's external IP is needed; such a route can be added locally.

On a workstation with access to the frontend, a local route to the virtual network can be created with one of the following commands, depending on the operating system:

  • Linux: sudo ip route add 172.16.100.0/24 via <frontend_ip>
  • Windows: route add 172.16.100.0 MASK 255.255.255.0 <frontend_ip>
  • BSD: route add -net 172.16.100.0/24 <frontend_ip>

After the route exists, the workstation should be able to reach the virtual machines running on the frontend without further configuration.
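
For example, from a Linux workstation, assuming a VM holds the first lease of the range (172.16.100.8):

ping -c 3 172.16.100.8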

Next Steps

Additionally, we recommend checking Validate the environment, which describes how to explore the installed resources and how to download and run appliances from the OpenNebula Marketplace.

Finally, you can use your OpenNebula installation to Run a Kubernetes Cluster on OpenNebula with minimal steps: first download the OneKE Service from the OpenNebula Public Marketplace, then deploy a full-fledged K8s cluster with a test application.