<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>SAN-LVM Storage on</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/</link><description>Recent content in SAN-LVM Storage on</description><generator>Hugo</generator><language>en</language><lastBuildDate>Mon, 17 Feb 2025 00:00:00 +0000</lastBuildDate><atom:link href="https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/index.xml" rel="self" type="application/rss+xml"/><item><title>Overview</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/overview/</link><pubDate>Mon, 17 Feb 2025 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/overview/</guid><description>&lt;p&gt;This storage configuration assumes that Hosts have access to storage devices (LUNs) exported by an
Storage Area Network (SAN) server using a suitable protocol like iSCSI or Fibre Channel. The Hosts
will access the devices through the LVM abstraction layer. Virtual Machines run from an LV
(logical volume) device instead of plain files. This reduces the overhead of having a filesystem in
place and thus may increase I/O performance.&lt;/p&gt;</description></item><item><title>SAN/LVM: Everpure setup</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/everpure_guide/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/everpure_guide/</guid><description>&lt;p&gt;This setup assumes you are using an Everpure FlashArray with iSCSI and want to use it as a backend for one of OpenNebula&amp;rsquo;s &lt;a href="https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/"&gt;LVM datastore options&lt;/a&gt;. The configuration uses standard volume and Host mappings. If you are familiar with the Everpure interface, create the required resources as desired.&lt;/p&gt;
&lt;h2 id="everpure-configuration"&gt;Everpure Configuration&lt;/h2&gt;
&lt;h3 id="host-and-host-group"&gt;Host and Host Group&lt;/h3&gt;
&lt;p&gt;For each Host and the Front-end, you&amp;rsquo;ll need to either gather or define its iSCSI Initiator Name. If you have already started iscsid at least once on the machine, a name should have been generated in &lt;code&gt;/etc/iscsi/initiatorname.iscsi&lt;/code&gt;. If you prefer to define it yourself, set that file&amp;rsquo;s contents to something like &lt;code&gt;InitiatorName=iqn.2024-01.com.example.pure:some.host.id&lt;/code&gt;, then restart iscsid (and reconnect any active iSCSI sessions, if already connected). Each name must be unique.&lt;/p&gt;</description></item><item><title>SAN/LVM: NetApp setup</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/netapp_guide/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/netapp_guide/</guid><description>&lt;p&gt;This setup assumes you are using NetApp ONTAP with iSCSI and want to use it as a backend for one of OpenNebula&amp;rsquo;s &lt;a href="https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/"&gt;LVM datastore options&lt;/a&gt;. The configuration uses standard volume and Host mappings. If you are familiar with the NetApp ONTAP interface and its functionality, create the required resources as desired.&lt;/p&gt;
&lt;div class="alert alert-info" role="alert"&gt;
 
 &lt;div class="alert-heading"&gt;
 &lt;i class="alert-icon fa-sharp fa-solid fa-circle-info"&gt;&lt;/i&gt; Note
 &lt;/div&gt;
 
 &lt;div class="alert-body"&gt;
 This guide covers the prerequisites for using the LVM drivers with a NetApp appliance. It is not
needed if you are using the &lt;a href="https://docs.opennebula.io/7.2/product/cluster_configuration/san_storage/netapp/"&gt;native NetApp&lt;/a&gt; driver.
 &lt;/div&gt; 
&lt;/div&gt;
&lt;h2 id="netapp-configuration"&gt;NetApp Configuration&lt;/h2&gt;
&lt;h3 id="svm-storage-virtual-machine"&gt;SVM (Storage Virtual Machine)&lt;/h3&gt;
&lt;p&gt;Create or use an existing SVM which has iSCSI enabled. Navigate to &lt;strong&gt;Storage → Storage VMs&lt;/strong&gt; to see the list.&lt;/p&gt;</description></item><item><title>SAN/LVM: Generic setup</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/generic_guide/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/generic_guide/</guid><description>&lt;h3 id="hosts-san-configuration"&gt;Hosts SAN Configuration&lt;/h3&gt;
&lt;p&gt;Hosts access the LUNs as block devices. There are several ways to set these up, but the setup
usually involves a network block protocol such as iSCSI or Fibre Channel, together with a
redundancy layer such as DM Multipath.&lt;/p&gt;
&lt;p&gt;Here is a sample session for setting up access via iSCSI and multipath:&lt;/p&gt;
&lt;pre tabindex="0"&gt;&lt;code&gt;# === ISCSI ===

TARGET_IP=&amp;#34;192.168.1.100&amp;#34; # IP of SAN appliance
TARGET_IQN=&amp;#34;iqn.2023-01.com.example:storage.target1&amp;#34; # iSCSI Qualified Name

# === Install tools ===
# RedHat derivatives:
sudo dnf install -y iscsi-initiator-utils
# Ubuntu/Debian:
sudo apt update &amp;amp;&amp;amp; sudo apt install -y open-iscsi
# SLES/openSUSE:
sudo zypper install -y open-iscsi

# === Enable iSCSI services ===
# RedHat derivatives:
sudo systemctl enable --now iscsid
# Ubuntu/Debian:
sudo systemctl enable --now open-iscsi
# SLES/openSUSE:
sudo systemctl enable --now iscsid.socket iscsi

# === Discover targets ===
sudo iscsiadm -m discovery -t sendtargets -p &amp;#34;$TARGET_IP&amp;#34;

# === Log in to the target ===
sudo iscsiadm -m node -T &amp;#34;$TARGET_IQN&amp;#34; -p &amp;#34;$TARGET_IP&amp;#34; --login

# === Make login persistent across reboots ===
sudo iscsiadm -m node -T &amp;#34;$TARGET_IQN&amp;#34; -p &amp;#34;$TARGET_IP&amp;#34; \
 --op update -n node.startup -v automatic
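
# === Verify the connection (sketch; assumes the login above succeeded) ===
# List the active iSCSI session(s):
sudo iscsiadm -m session
# The exported LUNs should now appear as local block devices:
lsblk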
&lt;/code&gt;&lt;/pre&gt;&lt;pre tabindex="0"&gt;&lt;code&gt;# === MULTIPATH ===

# === Install tools ===
# RedHat derivatives:
sudo dnf install -y device-mapper-multipath
# Ubuntu/Debian:
sudo apt update &amp;amp;&amp;amp; sudo apt install -y multipath-tools
# SLES/openSUSE:
sudo zypper install -y multipath-tools

# === Enable multipath daemon ===
sudo systemctl enable --now multipathd

# === Create multipath config file ===
sudo tee /etc/multipath.conf &amp;gt; /dev/null &amp;lt;&amp;lt;EOF
defaults {
 user_friendly_names yes
 find_multipaths yes
}
# Optional: blacklist local boot disks if needed
# blacklist {
# devnode &amp;#34;^sd[a-z]&amp;#34;
# }
EOF

# === Reload multipath ===
sudo multipath -F # Flush unused multipath device maps (safe if not in use)
sudo multipath # Re-scan for multipath devices
sudo systemctl restart multipathd

# === Show current multipath devices ===
sudo multipath -ll
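
# === Locate the multipath device nodes (sketch; names assume the
# user_friendly_names setting above, e.g. /dev/mapper/mpatha) ===
ls -l /dev/mapper/
# Use these /dev/mapper paths, not the individual /dev/sdX paths,
# when creating LVM physical volumes on the LUNs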
&lt;/code&gt;&lt;/pre&gt;&lt;p&gt;&lt;a id="frontend-configuration"&gt;&lt;/a&gt;&lt;/p&gt;</description></item><item><title>LVM SAN Datastore (EE)</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/lvm/</link><pubDate>Mon, 17 Feb 2025 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/lvm/</guid><description>&lt;p&gt;With LVM SAN Datastore (EE), both disk images and the actual VM disks are stored as Logical Volumes
(LVs) in the SAN storage. This allows for fast and efficient VM instantiation, as no data needs to
be copied or moved.&lt;/p&gt;
&lt;p&gt;Use this option for high-end Storage Area Networks (SANs) when a dedicated driver for that hardware,
such as &lt;a href="https://docs.opennebula.io/7.2/product/cluster_configuration/san_storage/netapp/"&gt;NetApp&lt;/a&gt;, is not available. The same LUN can be exported to all the
Hosts, and Virtual Machines can run directly from the SAN.&lt;/p&gt;</description></item><item><title>LVM (File Mode) SAN Datastore</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/filemode/</link><pubDate>Mon, 17 Feb 2025 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/filemode/</guid><description>&lt;p&gt;In this setup, disk images are stored in a file format, such as raw or qcow2, in the Image Datastore,
and then dumped into an LVM Logical Volume on the SAN when a Virtual Machine is created. The image
files are transferred from Frontend to Hosts through the SSH protocol. Additionally, enable &lt;a href="https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/filemode/#lvm-thin"&gt;LVM
Thin&lt;/a&gt; to support creating thin snapshots of the VM disks.&lt;/p&gt;
&lt;h2 id="how-should-i-read-this-chapter"&gt;How Should I Read This Chapter&lt;/h2&gt;
&lt;p&gt;Before performing the procedures outlined in this chapter you must configure access to the SAN following one of the setup guides in the &lt;a href="https://docs.opennebula.io/7.2/product/cluster_configuration/lvm/overview/#san-appliance-setup"&gt;LVM Overview&lt;/a&gt; section.&lt;/p&gt;</description></item></channel></rss>