<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>SAN-Native Storage on</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/san_storage/</link><description>Recent content in SAN-Native Storage on</description><generator>Hugo</generator><language>en</language><lastBuildDate>Mon, 17 Feb 2025 00:00:00 +0000</lastBuildDate><atom:link href="https://docs.opennebula.io/7.2/product/cluster_configuration/san_storage/index.xml" rel="self" type="application/rss+xml"/><item><title>Everpure FlashArray SAN Datastore (EE)</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/san_storage/everpure/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/cluster_configuration/san_storage/everpure/</guid><description>&lt;p&gt;OpenNebula’s &lt;strong&gt;Everpure FlashArray SAN Datastore&lt;/strong&gt; delivers production-grade, native control of FlashArray block storage, from provisioning through cleanup, directly from OpenNebula. This integration exposes the full lifecycle of FlashArray Volumes, Snapshots, and Clones, and automates Host connectivity via Everpure’s Host/Host-group model with reliable iSCSI and multipath handling. All communication with the array uses authenticated HTTPS against the FlashArray REST API. This datastore driver is part of OpenNebula Enterprise Edition (EE).&lt;/p&gt;
&lt;h3 id="key-benefits"&gt;Key Benefits&lt;/h3&gt;
&lt;p&gt;With the native Everpure driver, OpenNebula users gain the performance consistency of FlashArray’s always-thin, metadata-driven architecture. Everpure’s zero-copy snapshots and clones complete instantly, without impacting write amplification or introducing snapshot-tree latency penalties typical of Host-side copy-on-write systems. Under mixed 4k/8k and fsync-heavy workloads, FlashArray maintains flat latency profiles even with deep snapshot histories, while LVM-thin commonly exhibits early degradation as CoW pressure increases. The result is higher, steadier IOPS and predictable latency for Virtual Machine disks at scale.&lt;/p&gt;</description></item><item><title>NetApp SAN Datastore (EE)</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/san_storage/netapp/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/cluster_configuration/san_storage/netapp/</guid><description>&lt;p&gt;OpenNebula’s &lt;strong&gt;NetApp SAN Datastore&lt;/strong&gt; provides production-grade, native control of NetApp ONTAP block storage, from provisioning to cleanup, directly from OpenNebula. This integration enables the full lifecycle of Volumes, LUNs, and Snapshots — including FlexClone-based cloning and SAN-snapshot incremental backups — with secure HTTPS control and robust iSCSI/multipath host orchestration. This datastore driver is part of OpenNebula Enterprise Edition (EE).&lt;/p&gt;
&lt;h3 id="key-benefits"&gt;Key Benefits&lt;/h3&gt;
&lt;p&gt;Using the native NetApp driver, mixed 4k/8k (70/30) workloads delivered up to ~3x higher IOPS with ~4 to 10x lower write latency than LVM-thin, especially as snapshots accumulate. In pgbench, the NetApp path sustained ~2x higher, steadier transactions per second (TPS). On 4k fsync micro-writes it showed ~10 to 25% lower write latency. In short, the NetApp array keeps latency flat under snapshot pressure while host-side LVM-thin degrades sooner due to copy on write.&lt;/p&gt;</description></item><item><title>iSCSI - Libvirt Datastore</title><link>https://docs.opennebula.io/7.2/product/cluster_configuration/san_storage/iscsi_ds/</link><pubDate>Mon, 17 Feb 2025 00:00:00 +0000</pubDate><guid>https://docs.opennebula.io/7.2/product/cluster_configuration/san_storage/iscsi_ds/</guid><description>&lt;p&gt;&lt;a id="iscsi-ds"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;This Datastore is used to register already existing iSCSI volumes that are available to the hypervisor Nodes.&lt;/p&gt;
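&lt;p&gt;As an illustrative sketch (the attribute names follow common OpenNebula Datastore template conventions, and the &lt;code&gt;ISCSI_HOST&lt;/code&gt; value is a placeholder assumption), a Datastore of this kind could be defined with a template along these lines:&lt;/p&gt;

```text
# iscsi.ds -- hypothetical example; adjust ISCSI_HOST to your iSCSI portal
NAME       = iscsi
TYPE       = "IMAGE_DS"
DISK_TYPE  = "ISCSI"
DS_MAD     = "dev"
TM_MAD     = "dev"
ISCSI_HOST = "iscsi-portal.example.com"
```

&lt;p&gt;and then registered with &lt;code&gt;onedatastore create iscsi.ds&lt;/code&gt;. Check the generated Datastore with &lt;code&gt;onedatastore show&lt;/code&gt; before using it.&lt;/p&gt;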
&lt;div class="alert alert-warning" role="alert"&gt;
 
 &lt;div class="alert-heading"&gt;
 &lt;i class="alert-icon fas fa-triangle-exclamation"&gt;&lt;/i&gt; Warning
 &lt;/div&gt;
 
 &lt;div class="alert-body"&gt;
 Only administrators should install and manage the iSCSI - Libvirt Datastore. Allowing users to create images in this Datastore implies a significant security risk.
 &lt;/div&gt; 
&lt;/div&gt;
&lt;h2 id="front-end-setup"&gt;Front-end Setup&lt;/h2&gt;
&lt;p&gt;No additional configuration is needed.&lt;/p&gt;
&lt;h2 id="node-setup"&gt;Node Setup&lt;/h2&gt;
&lt;p&gt;The Nodes need to meet the following requirements:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The devices you want to attach to a VM must be accessible from the hypervisor.&lt;/li&gt;
&lt;li&gt;QEMU needs to be compiled with libiscsi support.&lt;/li&gt;
&lt;/ul&gt;
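&lt;p&gt;One quick way to check the second requirement is to look for &lt;code&gt;iscsi&lt;/code&gt; among the formats reported by &lt;code&gt;qemu-img&lt;/code&gt;; a QEMU build without libiscsi will not list it. This is a hedged sanity check, not an official verification procedure, and it assumes &lt;code&gt;qemu-img&lt;/code&gt; is on the &lt;code&gt;PATH&lt;/code&gt;:&lt;/p&gt;

```shell
#!/bin/sh
# Hypothetical sanity check: QEMU builds linked against libiscsi list
# "iscsi" among the supported formats printed by `qemu-img --help`.
if qemu-img --help 2>/dev/null | grep -qw iscsi; then
    echo "QEMU appears to have libiscsi support"
else
    echo "no libiscsi support detected (or qemu-img not installed)"
fi
```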
&lt;h3 id="iscsi-chap-authentication"&gt;iSCSI CHAP Authentication&lt;/h3&gt;
&lt;p&gt;To use CHAP authentication, you need to create a libvirt secret on &lt;strong&gt;all&lt;/strong&gt; the hypervisors. Follow the &lt;a href="https://libvirt.org/formatsecret.html#iSCSIUsageType"&gt;Libvirt Secret XML format&lt;/a&gt; guide to register the secret, taking the following into consideration:&lt;/p&gt;</description></item></channel></rss>