<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hypervisor Configuration on</title><link>https://docs.opennebula.io/7.2/product/operation_references/hypervisor_configuration/</link><description>Recent content in Hypervisor Configuration on</description><generator>Hugo</generator><language>en</language><lastBuildDate>Mon, 17 Feb 2025 00:00:00 +0000</lastBuildDate><atom:link href="https://docs.opennebula.io/7.2/product/operation_references/hypervisor_configuration/index.xml" rel="self" type="application/rss+xml"/><item><title>KVM Driver</title><link>https://docs.opennebula.io/7.2/product/operation_references/hypervisor_configuration/kvm_driver/</link><pubDate>Mon, 17 Feb 2025 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/operation_references/hypervisor_configuration/kvm_driver/</guid><description>&lt;p&gt;&lt;a id="kvmg"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;!--# KVM Driver --&gt;
&lt;h2 id="requirements"&gt;Requirements&lt;/h2&gt;
&lt;p&gt;To support virtualization, the Hosts need a CPU with the Intel VT-x or AMD-V hardware virtualization extensions. KVM&amp;rsquo;s &lt;a href="http://www.linux-kvm.org/page/FAQ#Preparing_to_use_KVM"&gt;Preparing to use KVM&lt;/a&gt; guide will clarify any doubts you may have regarding whether your hardware supports KVM.&lt;/p&gt;
&lt;p&gt;KVM is installed and configured by following the &lt;a href="https://docs.opennebula.io/7.2/software/installation_process/manual_installation/kvm_node_installation/#kvm-node"&gt;KVM Host Installation&lt;/a&gt; section.&lt;/p&gt;
&lt;h2 id="considerations--limitations"&gt;Considerations &amp;amp; Limitations&lt;/h2&gt;
&lt;p&gt;Try to use &lt;a href="https://docs.opennebula.io/7.2/product/operation_references/hypervisor_configuration/kvm_driver/#kvmg-virtio"&gt;virtio&lt;/a&gt; whenever possible, for both network interfaces and disks. Using emulated hardware for networks and disks reduces performance and does not expose all the available functionality. For instance, if you don&amp;rsquo;t use &lt;code&gt;virtio&lt;/code&gt; for the disk drivers, the emulated controller supports only a small number of devices, limiting how many disks you can attach, and attaching a disk while the VM is running (live disk-attach) will not work.&lt;/p&gt;</description></item><item><title>LXC Driver</title><link>https://docs.opennebula.io/7.2/product/operation_references/hypervisor_configuration/lxc_driver/</link><pubDate>Mon, 17 Feb 2025 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/operation_references/hypervisor_configuration/lxc_driver/</guid><description>&lt;p&gt;&lt;a id="lxdmg"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&lt;a id="lxcmg"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;!--# LXC Driver --&gt;
&lt;h2 id="requirements"&gt;Requirements&lt;/h2&gt;
&lt;ul&gt;
&lt;li&gt;LXC version &amp;gt;= 3.0.3 installed on the Host.&lt;/li&gt;
&lt;li&gt;Hosts with cgroup version 1 or 2, required to implement resource control operations (e.g., CPU pinning, or memory and swap limits).&lt;/li&gt;
&lt;/ul&gt;
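Both requirements can be checked from the Host with a short snippet like the following (an illustrative sketch; the paths and commands are standard LXC and cgroup conventions, not taken from this guide). cgroup v2 mounts a unified hierarchy that exposes a <code>cgroup.controllers</code> file under <code>/sys/fs/cgroup</code>:

```shell
# Print the installed LXC version and detect the cgroup hierarchy;
# cgroup v2 exposes a unified mount with a cgroup.controllers file.
if command -v lxc-start >/dev/null; then
    lxc-start --version
else
    echo "lxc-start not found; install the LXC packages first"
fi
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
    echo "cgroup v2 (unified hierarchy)"
else
    echo "cgroup v1 (legacy hierarchy)"
fi
```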
&lt;h2 id="considerations--limitations"&gt;Considerations &amp;amp; Limitations&lt;/h2&gt;
&lt;h3 id="privileged-containers-and-security"&gt;Privileged Containers and Security&lt;/h3&gt;
&lt;p&gt;To ensure security in a multi-tenant environment, the containers created by the LXC driver are unprivileged by default. Although these containers are deployed as &lt;code&gt;root&lt;/code&gt; on the Host, the sub UID/GID range &lt;code&gt;600100001-600165537&lt;/code&gt; is used to map users and groups inside the container, limiting the damage a malicious agent can do if it manages to escape the container.&lt;/p&gt;</description></item></channel></rss>