<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hosts and Clusters on</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/</link><description>Recent content in Hosts and Clusters on</description><generator>Hugo</generator><language>en</language><lastBuildDate>Thu, 16 Oct 2025 00:00:00 +0000</lastBuildDate><atom:link href="https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/index.xml" rel="self" type="application/rss+xml"/><item><title>Overview</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/overview/</link><pubDate>Mon, 17 Feb 2025 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/overview/</guid><description>&lt;p&gt;&lt;a id="hostsubsystem"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;!--# Overview --&gt;
&lt;p&gt;A &lt;strong&gt;Host&lt;/strong&gt; is a server, connected to OpenNebula’s Front-end server, that is able to run Virtual Machines. To learn how to prepare Hosts, read the &lt;a href="https://docs.opennebula.io/7.2/software/installation_process/"&gt;Installation&lt;/a&gt; guide. Hosts are usually grouped in &lt;strong&gt;Clusters&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id="how-should-i-read-this-chapter"&gt;How Should I Read This Chapter&lt;/h2&gt;
&lt;p&gt;The guides in this chapter describe these objects:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Host Management&lt;/strong&gt;: Host management is achieved through the &lt;code&gt;onehost&lt;/code&gt; CLI command or through the Sunstone GUI. You can read about Host Management in more detail in the &lt;a href="https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/hosts/#hosts-guide"&gt;Hosts&lt;/a&gt; guide.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cluster Management&lt;/strong&gt;: Hosts can be grouped in clusters. These clusters are managed with the &lt;code&gt;onecluster&lt;/code&gt; CLI command or through the Sunstone GUI. You can read about Cluster Management in more detail in the &lt;a href="https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/cluster_guide/#cluster-guide"&gt;Clusters&lt;/a&gt; guide.&lt;/li&gt;
&lt;/ul&gt;
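&lt;p&gt;As an illustration of the two management interfaces described above, a minimal CLI session might look like the following sketch (the host and cluster names are hypothetical, and the exact commands and output depend on your OpenNebula version):&lt;/p&gt;

```shell
# Register a KVM Host with the Front-end and list known Hosts
# (hostname "kvm-node01" is an example)
onehost create kvm-node01 --im kvm --vm kvm
onehost list

# Group Hosts into a cluster (cluster name is an example)
onecluster create production
onecluster addhost production kvm-node01
onecluster show production
```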
&lt;p&gt;You should read all the guides in this chapter to familiarize yourself with these objects. For small and homogeneous clouds you may not need to create new clusters.&lt;/p&gt;</description></item><item><title>Hosts</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/hosts/</link><pubDate>Mon, 17 Feb 2025 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/hosts/</guid><description>&lt;p&gt;&lt;a id="hosts"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a id="hosts-guide"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;!--# Hosts --&gt;
&lt;p&gt;In order to use your existing physical nodes, you have to add them to OpenNebula as Hosts. To add a Host, only its hostname and type are needed.&lt;/p&gt;
&lt;div class="alert alert-warning" role="alert"&gt;
 
 &lt;div class="alert-heading"&gt;
 &lt;i class="alert-icon fas fa-triangle-exclamation"&gt;&lt;/i&gt; Warning
 &lt;/div&gt;
 
 &lt;div class="alert-body"&gt;
 Before adding a Linux Host check that you can SSH to it without being prompted for a password.
 &lt;/div&gt; 
&lt;/div&gt;
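&lt;p&gt;The passwordless SSH requirement in the warning above can be set up and checked from the Front-end as the &lt;code&gt;oneadmin&lt;/code&gt; user; a sketch, assuming a hypothetical Host named &lt;code&gt;kvm-node01&lt;/code&gt;:&lt;/p&gt;

```shell
# Copy oneadmin's public key to the new Host (hostname is an example)
ssh-copy-id oneadmin@kvm-node01

# BatchMode makes ssh fail instead of prompting, so this must
# print the remote hostname without asking for a password
ssh -o BatchMode=yes oneadmin@kvm-node01 hostname
```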
&lt;h2 id="creating-and-deleting-hosts"&gt;Creating and Deleting Hosts&lt;/h2&gt;
&lt;p&gt;Hosts are the servers managed by OpenNebula responsible for running VMs. To use these Hosts in OpenNebula you need to register them so they are monitored and made available to the scheduler.&lt;/p&gt;</description></item><item><title>Clusters</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/cluster_guide/</link><pubDate>Mon, 17 Feb 2025 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/cluster_guide/</guid><description>&lt;p&gt;&lt;a id="cluster-guide"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;!--# Clusters --&gt;
&lt;p&gt;Clusters group together Hosts, datastores, and virtual networks that are configured to work together. A cluster is used to:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Ensure that VMs use resources that are compatible.&lt;/li&gt;
&lt;li&gt;Assign resources to user groups by creating Virtual Private Clouds.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Clusters should contain homogeneous resources. Note that some operations like live migrations are restricted to Hosts in the same cluster.&lt;/p&gt;
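&lt;p&gt;As a sketch of the cluster workflow, Hosts, datastores, and virtual networks can be grouped with the &lt;code&gt;onecluster&lt;/code&gt; command (all resource names below are examples):&lt;/p&gt;

```shell
# Create a cluster and attach resources to it (names are examples)
onecluster create production
onecluster addhost      production kvm-node01
onecluster adddatastore production default
onecluster addvnet      production private-net

# Inspect the resulting cluster
onecluster show production
```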
&lt;p&gt;To live migrate VMs between Hosts of the same cluster, the hypervisors must not differ in the following areas:&lt;/p&gt;</description></item><item><title>Virtual Topology and CPU Pinning</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/numa/</link><pubDate>Mon, 17 Feb 2025 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/numa/</guid><description>&lt;p&gt;&lt;a id="numa"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;!--# Virtual Topology and CPU Pinning --&gt;
&lt;h2 id="overview"&gt;Overview&lt;/h2&gt;
&lt;p&gt;In this guide you’ll learn to set up OpenNebula to control how VM resources are mapped onto hypervisor resources. These settings will help you fine-tune the performance of VMs. We will use the following concepts:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Cores, Threads, and Sockets&lt;/strong&gt;. A computer processor is connected to the motherboard through a &lt;em&gt;socket&lt;/em&gt;. A processor can pack one or more &lt;em&gt;cores&lt;/em&gt;, each implementing a separate processing unit that shares some cache levels, memory, and I/O ports. Core performance can be improved by simultaneous &lt;em&gt;multi-threading&lt;/em&gt; (SMT), which permits multiple execution flows to run simultaneously on a single core.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;NUMA&lt;/strong&gt;. Multi-processor servers are usually arranged in nodes or cells. Each &lt;em&gt;NUMA node&lt;/em&gt; holds a fraction of the overall system memory. In this configuration, a processor accesses memory and I/O ports local to its node faster than those of non-local ones.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Hugepages&lt;/strong&gt;. Systems with large amounts of physical memory also use a large number of virtual memory pages, which makes the virtual-to-physical translation caches inefficient. Hugepages reduce the number of virtual pages in the system and so optimize the virtual memory subsystem.&lt;/li&gt;
&lt;/ul&gt;
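&lt;p&gt;These concepts map onto a VM definition through the &lt;code&gt;TOPOLOGY&lt;/code&gt; attribute of the VM template. A minimal sketch, assuming the &lt;code&gt;VCPU&lt;/code&gt;/&lt;code&gt;TOPOLOGY&lt;/code&gt; syntax documented for OpenNebula VM templates (all values below are illustrative examples):&lt;/p&gt;

```
# 8 vCPUs arranged as 2 sockets x 2 cores x 2 threads,
# backed by 2 MB hugepages (values are examples)
VCPU = 8
TOPOLOGY = [ SOCKETS = 2, CORES = 2, THREADS = 2, HUGEPAGE_SIZE = 2 ]
```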
&lt;p&gt;In OpenNebula the virtual topology of a VM is defined by the number of sockets, cores, and threads. We assume that a NUMA node or cell is equivalent to a socket and they will be used interchangeably in this guide.&lt;/p&gt;</description></item><item><title>PCI Passthrough</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/pci_passthrough/</link><pubDate>Mon, 17 Feb 2025 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/pci_passthrough/</guid><description>&lt;p&gt;&lt;a id="kvm-pci-passthrough"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;!--# PCI Passthrough --&gt;
&lt;p&gt;It is possible to discover PCI devices in the Hosts and directly assign them to Virtual Machines in the KVM hypervisor.&lt;/p&gt;
&lt;p&gt;The setup and environment information is taken from &lt;a href="https://stewartadam.io/howtos/fedora-20/create-gaming-virtual-machine-using-vfio-pci-passthrough-kvm"&gt;this guide&lt;/a&gt;. You can safely ignore the VGA-related sections if you are passing through PCI devices that are not graphics cards, or if you don’t need to output a video signal from them.&lt;/p&gt;
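&lt;p&gt;Before following the linked guide, it is worth confirming that the IOMMU is enabled on the Host. A generic Linux sketch (the kernel parameters and paths below are the usual ones; adapt them to your distribution):&lt;/p&gt;

```shell
# The IOMMU is normally enabled on the kernel command line, e.g.
#   intel_iommu=on  (Intel)  or  amd_iommu=on  (AMD)
cat /proc/cmdline

# A populated iommu_groups tree means the IOMMU is active
ls /sys/kernel/iommu_groups/

# List PCI devices with their vendor:device IDs, as needed
# to identify candidates for passthrough
lspci -nn
```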









&lt;div class="alert alert-warning" role="alert"&gt;
 
 &lt;div class="alert-heading"&gt;
 &lt;i class="alert-icon fas fa-triangle-exclamation"&gt;&lt;/i&gt; Warning
 &lt;/div&gt;
 
 &lt;div class="alert-body"&gt;
 The overall setup state was extracted from a preconfigured Fedora 22 machine. &lt;strong&gt;Configuration for your distro may be different.&lt;/strong&gt;
 &lt;/div&gt; 
&lt;/div&gt;
&lt;h2 id="requirements"&gt;Requirements&lt;/h2&gt;
&lt;p&gt;Virtualization Host must&lt;/p&gt;</description></item><item><title>NVIDIA vGPU &amp; MIG</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/vgpu/</link><pubDate>Mon, 17 Feb 2025 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/vgpu/</guid><description>&lt;p&gt;&lt;a id="kvm-vgpu"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;!--# NVIDIA vGPU and MIG support --&gt;
&lt;p&gt;Configuring the hypervisor for NVIDIA® vGPU and MIG (Multi-Instance GPU) capabilities facilitates centralized management, flexibility in resource allocation, and enhanced security isolation across all virtualized GPU workloads.&lt;/p&gt;
&lt;h2 id="bios"&gt;BIOS&lt;/h2&gt;
&lt;p&gt;You need to check that the following settings are enabled in your BIOS configuration:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Enable SR-IOV&lt;/li&gt;
&lt;li&gt;Enable IOMMU&lt;/li&gt;
&lt;/ul&gt;
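&lt;p&gt;Once SR-IOV and the IOMMU are enabled in the BIOS, you can verify from the booted Host that the kernel picked them up; a generic Linux check (exact messages vary by vendor and kernel version):&lt;/p&gt;

```shell
# Non-empty output indicates the IOMMU initialized at boot
dmesg | grep -iE 'iommu|dmar|amd-vi'

# Each subdirectory is an IOMMU group; an empty directory
# usually means the IOMMU is disabled
ls /sys/kernel/iommu_groups/
```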
&lt;p&gt;Note that the specific menu options where you need to activate these features depend on the motherboard manufacturer.&lt;/p&gt;</description></item><item><title>NVIDIA GPU Passthrough</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/nvidia_gpu_passthrough/</link><pubDate>Thu, 16 Oct 2025 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/nvidia_gpu_passthrough/</guid><description>&lt;p&gt;&lt;a id="nvidia-gpu-passthrough"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;!--# NVIDIA GPU Passthrough --&gt;
&lt;p&gt;Here you will find detailed instructions for configuring PCI passthrough for high-performance NVIDIA® GPUs on x86_64 hypervisors. The procedures described here have been validated with NVIDIA H100 GPUs but can be adapted for other similar high-performance NVIDIA GPUs. This allows Virtual Machines to get exclusive access to the GPU, which is recommended for AI/ML workloads. It builds upon the concepts explained in the general &lt;a href="https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/pci_passthrough/"&gt;PCI Passthrough&lt;/a&gt; guide. This guide is not applicable to ARM-based systems.&lt;/p&gt;</description></item><item><title>OpenNebula NVIDIA Fabric Manager</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/one_fabricmanager/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/cluster_configuration/hosts_and_clusters/one_fabricmanager/</guid><description>&lt;p&gt;&lt;a id="one_fabricmanager"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2 id="introduction"&gt;Introduction&lt;/h2&gt;
&lt;p&gt;The OpenNebula NVIDIA® Fabric Manager integration provides a complete solution for managing NVIDIA NVSwitch fabric within a cloud environment.&lt;/p&gt;
&lt;h2 id="nvidia-shared-nvswitch-virtualization-model"&gt;NVIDIA Shared NVSwitch Virtualization Model&lt;/h2&gt;
&lt;p&gt;The OpenNebula integration with Fabric Manager follows the NVIDIA Shared NVSwitch Virtualization Model. This model uses a Service VM in each hypervisor to manage the NVSwitches. The NVSwitches are added to the Service VM as PCI passthrough devices to configure the selected GPU partitioning. The Guest VMs are configured with PCI passthrough for the GPUs only, without any visibility of the NVSwitches. See the reference architecture in the image below.&lt;/p&gt;</description></item></channel></rss>