Deploy OpenNebula Front-end on VMware

In this guide, we’ll go through the deployment of an OpenNebula Front-end environment, where all the OpenNebula services needed to use, manage, and run the cloud are deployed through an OVA and colocated on a single VM running on a vCenter instance. Afterwards, you can follow the Operations and Usage Basics guides of this same Quick Start to launch edge clusters based on open source hypervisors.


vOneCloud is a virtual appliance for vSphere that builds an OpenNebula cloud on top of your vCenter for development, testing, or product evaluation in five minutes. In a nutshell, it is an OVA file with a preconfigured AlmaLinux and OpenNebula installation. vOneCloud is free to download and use, and can also be used for small-size production deployments.

vOneCloud ships with the following components under the hood:

  • OpenNebula 6.8 (release notes)

  • Phusion Passenger, in the default version shipped in AlmaLinux 8 (used to run Sunstone)

vOneCloud comes with a Control Console, a text-based wizard available by opening the console of the vOneCloud appliance in vCenter. It requires no authentication, since only the vCenter administrator is able to open the vOneCloud console. It can be used to configure the network, set the root password, and change the password of the OpenNebula oneadmin user.



In order to follow the Running Kubernetes Clusters guide with your vOneCloud instance, you will need a publicly accessible IP address so that the deployed services can report to the OneGate server. See OneGate Configuration for more details.

The following components are needed to use the vOneCloud appliance:



vCenter 7.0

  • ESX hosts need to be grouped into clusters.

  • The IP or DNS needs to be known, as well as the credentials (username and password) of an admin user.

ESX 7.0

  • With at least 16 GB of free RAM and 100 GB of free space on a datastore.

Firefox (> 3.5) and Chrome

Other browsers, including Safari, are not supported and may not work well.

vOneCloud ships with a default of 2 vCPUs, 16 GiB of RAM, and 100 GB of disk, and as such it has been certified for infrastructures of the following dimensions:

  • Up to 1,000 VMs in total

  • Up to 100 users, with a limit of 10 users accessing the system simultaneously

Take into account that vOneCloud is shipped for evaluation purposes.


vOneCloud ships with several pre-created user accounts, described below:

  • root (appliance login): Appliance administrator. This user can log into the appliance (local login, no SSH).

  • oneadmin (appliance login): Service user. Used to run all OpenNebula services.

  • oneadmin (OpenNebula Sunstone): Cloud administrator. Can run any task in OpenNebula, including creating other users.

Download and Deploy

vOneCloud can be downloaded by completing the form here.

The OVA file can be imported into an existing vCenter infrastructure. It is based on AlmaLinux 8 with VMware tools enabled.

Follow the next steps to deploy a fully functional OpenNebula cloud.

Step 1. Deploying the OVA

Log in to your vCenter installation and select the appropriate datacenter and cluster where you want to deploy the appliance. Select Deploy OVF Template.


Browse to the path of the OVA file downloaded from the link above.

Select the name, folder, and a compute resource where you want vOneCloud to be deployed. Also, you’ll need to select the datastore in which to copy the OVA.

Select the network. You will need to choose a network that has access to the ESX hosts.

Review your settings selection and click Finish. Wait for the Virtual Machine Template to appear in the cluster.


After importing the vOneCloud OVA, it needs to be cloned into a Virtual Machine. Before powering it on, the vOneCloud Virtual Machine can be edited to, for instance, add a new network interface, or increase the amount of RAM or the number of available CPUs for performance. Now you can power on the Virtual Machine.

Step 2. vOneCloud Control Console - Initial Configuration

When the VM boots up, the VM console in vCenter will show the vOneCloud Control Console, with this wizard:


If you are presented instead with the following:


You are being presented with the wrong tty. Press Ctrl+Alt+F1 to access the Control Console.

In this wizard you first need to configure the network. If you are using DHCP you can simply skip to the next item.

If you are using a static network configuration, answer yes; you will then use an ncurses interface to:

  • “Edit a connection”

  • Select “System eth0”

  • Change IPv4 CONFIGURATION from <Automatic> to <Manual> and select “Show”

  • Input the desired IP address/24 in Addresses

  • Input Gateway and DNS Servers

  • Select OK and then quit the dialog

Here’s an example of a static network configuration on the available network interface on the 10.0.1.x class C network, with the gateway and the DNS server also configured:
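If you prefer a shell over the ncurses dialog, the same static configuration can typically be applied with nmcli. This is a sketch only: the addresses below are illustrative, and the connection name is assumed to be “System eth0” as shown in the dialog.

```shell
# Illustrative values for a 10.0.1.x class C network; adjust to your environment.
nmcli connection modify "System eth0" \
    ipv4.method manual \
    ipv4.addresses 10.0.1.10/24 \
    ipv4.gateway 10.0.1.1 \
    ipv4.dns 10.0.1.1

# Re-activate the connection so the new settings take effect.
nmcli connection up "System eth0"
```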


The second action needed is to set the oneadmin account password. You will need this to log in to OpenNebula. Check the Accounts section to learn more about vOneCloud roles and users.


In the third step, you need to define a root password. You won’t be using this very often, so write it down somewhere safe. It’s your master password to the appliance.

This password can be used to access the OpenNebula command-line interface; for that, you need to SSH into vOneCloud using the root account and password. In macOS and Linux environments, simply use ssh to log in to the root account at vOneCloud’s IP. For Windows environments you can use software like PuTTY, or even SFTP clients like WinSCP. Alternatively, open the console of the vOneCloud VM in vCenter and change the tty (Ctrl+Alt+F2).
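For example, from a Linux or macOS terminal (replace the placeholder IP with your appliance’s address; service names may vary between OpenNebula versions):

```shell
# Log in as root using the password set in the Control Console.
ssh root@<appliance_ip>

# Once logged in, verify that the OpenNebula services are running.
systemctl status opennebula opennebula-sunstone
```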

As the last step, you need to configure a public-facing address that will be used to access your vOneCloud instance by end-users. Enter the fully qualified domain name, hostname valid within your network, or the IP address.


Step 3. Check access to the Sunstone GUI

After opening the Sunstone interface (http://<appliance_ip>) and logging in with the oneadmin credentials, you are ready to add computing clusters to OpenNebula and start launching your first Virtual Machines!


If Sunstone greets you with an error while connecting to the public FireEdge endpoint, return to the Control Console from the previous step and configure a valid endpoint:


Next Steps

After reaching this point, you can continue importing vCenter resources if you are planning to manage them with OpenNebula.

If instead you want to try out OpenNebula’s public resource infrastructure provisioning, we recommend following the Operations Guide from the Quick Start after finishing this guide, to add computing power to your shiny new OpenNebula cloud.

Import Existing vCenter Resources

Importing a vCenter infrastructure into OpenNebula can be carried out easily through the Sunstone Web UI. Follow the next steps to import an existing vCenter cluster as well as any already defined VM Template and Networks.

You will need the IP or hostname of the vCenter server, as well as a user declared as Administrator in vCenter. There’s more info on the needed permissions in the vCenter node installation guide.


For security reasons, you may define different users to access different ESX clusters: a different user can be defined in OpenNebula per ESX cluster, which is encapsulated in OpenNebula as an OpenNebula Host.

Step 1. Sunstone login

Log in to Sunstone as oneadmin, as explained in the previous section.

The oneadmin account has full control of all the physical and virtual resources.

Step 2. Import vCenter Cluster

To import new vCenter clusters to be managed in OpenNebula, proceed in Sunstone to the Infrastructure --> Hosts tab and click on the “+” green icon.



OpenNebula does not support spaces in vCenter cluster names.

In the dialog that pops up, select vCenter as Type in the drop-down. You now need to fill in the following data:

  • vCenter hostname (FQDN) or IP address

  • Username of a vCenter user with administrator rights

  • Password for the above user

Select the vCenter cluster to import as OpenNebula Host and click on “Import”. After importing you should see a message indicating that the Host was successfully imported.
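The same import can also be performed from the appliance’s command line with the onevcenter tool, as a sketch assuming an SSH session like the one outlined earlier (exact flags may vary between OpenNebula versions):

```shell
# Import a vCenter cluster as an OpenNebula Host (run as oneadmin on the appliance).
onevcenter hosts --vcenter <vcenter_fqdn_or_ip> --vuser <admin_user> --vpass <password>

# Verify that the new Host appears and reaches the MONITORED state.
onehost list
```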

Now it’s time to check that the vCenter import has been successful. In Infrastructure --> Hosts check if the vCenter cluster has been imported, and if all the ESX Hosts are available in the ESX tab.


Take into account that one vCenter cluster (with all its ESX Hosts) will be represented as one OpenNebula Host. It’s not possible to import individual ESX Hosts; they need to be grouped in vCenter clusters.

Step 3. Import Datastores

Datastores can be imported from the Storage --> Datastores tab. Since datastores are going to be used to hold the images from VM Templates, all datastores must be imported before importing VM Templates.

vCenter datastores host VMDK files and other file types so that VMs and templates can use them. These datastores can be represented in OpenNebula as both an Images Datastore and a System Datastore:

  • Images Datastore. Stores the images repository. VMDK files are represented as OpenNebula images stored in this datastore.

  • System Datastore. Holds disks for running Virtual Machines, copied or cloned from the Images Datastore.

For example, if we have a vCenter datastore called “nfs”, when we import the vCenter datastore into OpenNebula, two OpenNebula datastores will be created, an Images Datastore and a System Datastore, both pointing to the same vCenter datastore.

First go to Storage --> Datastores, click on the “+” green icon and then on “Import”. Select the Host (vCenter cluster) and click on “Get Datastores”.


Select the datastore to import and click on “Import”. After importing you should see a message indicating that the datastore was successfully imported.
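As with clusters, datastores can alternatively be imported from the command line; a hedged sketch follows (exact flags may vary between OpenNebula versions):

```shell
# Import vCenter datastores (run as oneadmin on the appliance).
onevcenter datastores --vcenter <vcenter_fqdn_or_ip> --vuser <admin_user> --vpass <password>

# Each imported vCenter datastore is represented twice: as IMAGE and as SYSTEM.
onedatastore list
```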


If the vCenter instance features a read-only datastore, please be aware that you should disable the SYSTEM representation of the datastore after importing it to avoid OpenNebula trying to deploy VMs in it.

Step 4. Import Networks

Port Groups, Distributed Port Groups, and NSX-T / NSX-V logical switches can also be imported, using the Import button in Network --> Virtual Networks.

Select the Host and click on “Get Networks”. Select the Network and click on Import. After importing you should see a message indicating that the network was successfully imported.
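Networks, too, have a command-line import path; a sketch under the same assumptions as the previous CLI examples (flags may vary between versions):

```shell
# Import vCenter port groups as OpenNebula Virtual Networks (run as oneadmin).
onevcenter networks --vcenter <vcenter_fqdn_or_ip> --vuser <admin_user> --vpass <password>

# Verify the imported Virtual Networks.
onevnet list
```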


Virtual Networks can be further refined with the inclusion of different Address Ranges. This refinement can be done at import time, defining the size of the network using one of the following supported Address Ranges:

  • IPv4: Requires at least a starting IP address; a starting MAC address can be defined as well

  • IPv6: Can optionally define a starting MAC address, GLOBAL PREFIX, and ULA PREFIX

  • Ethernet: Does not manage IP addresses but rather MAC addresses. If a starting MAC is not provided, OpenNebula will generate one.
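For reference, an IPv4 Address Range like the one above is expressed in an OpenNebula Virtual Network template roughly as follows (the addresses and size are illustrative):

```
AR = [
  TYPE = "IP4",
  IP   = "10.0.1.100",
  SIZE = "20"
]
```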

Step 5. Import VM Templates


Since datastores are going to be used to hold the images from VM Templates, all datastores must be imported before importing VM Templates.

In OpenNebula, Virtual Machines are deployed from VMware VM Templates that must exist previously in vCenter and must be imported into OpenNebula. There is a one-to-one relationship between each VMware VM Template and the equivalent OpenNebula VM Template. Users will then instantiate the OpenNebula VM Template and OpenNebula will create a Virtual Machine clone from the vCenter template.

vCenter VM Templates can be imported and reacquired using the Import button in Templates --> VMs.


Select the Host and click on “Get Templates”. Select the template to import and click on “Import”.
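VM Templates can also be imported from the command line; a sketch under the same assumptions as the earlier CLI examples (flags may vary between versions):

```shell
# Import vCenter VM Templates (run as oneadmin on the appliance).
onevcenter templates --vcenter <vcenter_fqdn_or_ip> --vuser <admin_user> --vpass <password>

# Verify the imported OpenNebula VM Templates.
onetemplate list
```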

When a VMware VM Template is imported, OpenNebula will detect any virtual disk and network interface within the template. For each virtual disk, OpenNebula will create an image representing each disk discovered in the template. In the same way, OpenNebula will create a network representation for each standard or distributed port group associated with virtual network interfaces found in the template. The imported OpenNebula VM templates can be modified by selecting the VM Template in Virtual Resources --> Templates and clicking on the Update button.

If the vCenter infrastructure has running or powered off Virtual Machines, OpenNebula can import and subsequently manage them. To import vCenter VMs, proceed to the Wilds tab in the Host info tab representing the vCenter cluster the VMs are running in, select the VMs to be imported and click on the import button.

After the VMs are in the running state, you can operate on their life-cycle, assign them to particular users, attach or detach network interfaces, create snapshots, do capacity resizing (change CPU and MEMORY after powering the VMs off), etc.


Resources imported from vCenter will have their names appended with the name of the cluster where these resources belong in vCenter, to ease their identification within OpenNebula.

Step 6. Verification - Launch a VM

Let’s check out this OpenNebula installation doing what it does best: launching Virtual Machines. Go to your Instances --> VMs tab in Sunstone and click on the “+” green icon. Select the VM Template imported in the previous step (feel free to change any configuration aspect) and click on Create.
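The same verification can be done from the CLI; a sketch assuming the template ID reported by onetemplate list:

```shell
# Find the ID of the imported template.
onetemplate list

# Instantiate it (the VM name is illustrative).
onetemplate instantiate <template_id> --name test-vm

# Watch the VM reach the RUNNING state.
onevm list
```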


OK! Your VM should be up and running swiftly. Click the console icon to access your VM through VMRC within Sunstone.

Now you can proceed to Operations Basics to launch an edge cluster on a public cloud provider.