Everpure FlashArray SAN Datastore (EE)

OpenNebula’s Everpure FlashArray SAN Datastore delivers production-grade, native control of FlashArray block storage, from provisioning through cleanup, directly from OpenNebula. This integration exposes the full lifecycle of FlashArray Volumes, Snapshots, and Clones, and automates Host connectivity via Everpure’s Host/Host-group model with reliable iSCSI and multipath handling. All communication with the array uses authenticated HTTPS against the FlashArray REST API. This datastore driver is part of OpenNebula Enterprise Edition (EE).

Key Benefits

With the native Everpure driver, OpenNebula users gain the performance consistency of FlashArray’s always-thin, metadata-driven architecture. Everpure’s zero-copy snapshots and clones complete instantly, without impacting write amplification or introducing snapshot-tree latency penalties typical of Host-side copy-on-write systems. Under mixed 4k/8k and fsync-heavy workloads, FlashArray maintains flat latency profiles even with deep snapshot histories, while LVM-thin commonly exhibits early degradation as CoW pressure increases. The result is higher, steadier IOPS and predictable latency for Virtual Machine disks at scale.

| Area | Benefit | Description |
|---|---|---|
| Automation | Full lifecycle control | End-to-end creation, cloning, resizing, renaming, and deletion of FlashArray Volumes directly from OpenNebula. |
| Efficiency | Instant, thin snapshots and clones | Everpure’s metadata-only snapshots allow immediate, zero-copy cloning for persistent and non-persistent VMs alike. |
| Performance | Latency-stable I/O path | FlashArray’s architecture keeps read/write latency flat even as snapshot chains grow; multipath iSCSI is configured automatically per Host. |
| Reliability | Synchronous REST orchestration | Operations use FlashArray’s synchronous REST API with explicit error handling and safe sequencing for Volume, Snapshot, and Host mapping tasks. |
| Data Protection | Incremental SAN-snapshot backups | Block-level incremental backups are generated by comparing FlashArray snapshot pairs via raw device attachment; no guest agents required. |
| Security | HTTPS control path | All FlashArray communication uses authenticated, encrypted HTTPS REST calls. |
| Scalability | Simplified Host-group mappings | Safe, concurrent attach/detach operations across Hosts using deterministic LUN IDs and a predictable multipath layout. |

Supported Everpure FlashArray Native Functionality

| Everpure Feature | Supported | Notes |
|---|---|---|
| Zero-copy Volume clone | Yes | Everpure clones are metadata-only and complete instantly. |
| Snapshot (manual) | Yes | Created and deleted directly from OpenNebula; mapped 1:1 to FlashArray snapshots. |
| Snapshot restore | Yes | Volume overwrite-from-snapshot is supported via the REST API. |
| Snapshot retention/policies | No | FlashArray snapshot schedules exist, but OpenNebula does not manage array-side policies; all snapshots remain under OpenNebula’s control. |
| Incremental backups (SAN snapshot diff) | Yes | Uses the FlashArray Volume Diff API to gather block differences, then copies the changed data. |
| Host management | Yes | Hosts are automatically created and mapped as needed. |
| Multipath I/O | Yes | Fully orchestrated; automatic detection, resize, and removal of maps. |
| Data encryption (at rest) | Yes | Supported transparently by the array (always-on AES-XTS); not managed by OpenNebula. |
| Replication (Protection Groups / ActiveCluster) | No (planned) | Not yet supported; may be added in a future release. |
| QoS policy groups | No | Not currently exposed through the datastore driver. |
| ActiveDR continuous replication | No | Supported by FlashArray, but not orchestrated by OpenNebula. |

Limitations and Unsupported Features

While the Everpure Storage FlashArray integration delivers full VM disk lifecycle management and the core SAN operations required by OpenNebula, it is deliberately scoped to primary datastore provisioning via iSCSI block devices. Several advanced FlashArray protection and VMware-specific capabilities are intentionally not surfaced through this driver.

| Category | Unsupported Feature | Rationale / Alternative |
|---|---|---|
| Replication & DR | Protection Groups / ActiveCluster | Planned for future releases; can be managed externally on the FlashArray. |
| Storage QoS / performance tiers | Bandwidth / IOPS limits | FlashArray supports QoS, but these controls are not integrated into the driver. |
| Storage efficiency analytics | Deduplication & compression metrics | Calculated internally by FlashArray; not displayed or consumed by OpenNebula. |
| Encryption management | Per-volume encryption toggling | FlashArray encryption is always-on and appliance-managed; there is no OpenNebula API exposure. |

Everpure FlashArray Setup

OpenNebula provides a set of datastore and transfer manager drivers to register an existing Everpure FlashArray SAN. These drivers use the Everpure FlashArray API to create Volumes, which are presented to Virtual Machines as disks over the iSCSI interface. Both the Image and System Datastores must use the same Everpure array and identical datastore configurations, because Volumes are either cloned or renamed depending on the image persistence type: persistent images are renamed into the System Datastore, while non-persistent images are cloned using Everpure’s zero-copy clone.

The Everpure Linux documentation and the Everpure iSCSI Setup with FlashArray blog post may be useful during this setup.

  1. Verify iSCSI Service Connections

    • In the FlashArray System Manager: Settings -> Network -> Connectors
    • Ensure the iSCSI connectors are enabled and note their IP addresses.
  2. Create an API User

    • In the FlashArray System Manager: Settings -> Access -> Users and Policies
    • Create a new user with the Storage Admin role; this role provides sufficient permissions for OpenNebula.
    • Create an API token for this user and note the API key. Leave the expiration date blank to create an indefinite API key.

Front-end Only Setup

The Front-end requires network access to the Everpure FlashArray API endpoint:

  1. API Access:
    • Ensure network connectivity to the Everpure FlashArray API interface. The datastore will be in an ERROR state if the API is not accessible or cannot be monitored properly.
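
Before creating the datastores, you can sanity-check connectivity from the Front-end. The snippet below only assembles the login URL; the /api/<version>/login path and the api-token header are assumptions based on common FlashArray REST conventions, so confirm them against your array’s REST reference:

```shell
# Hypothetical endpoint layout -- verify against the array's REST documentation.
PUREFA_HOST="10.1.234.56"        # array API IP (example value)
PUREFA_API_VERSION="2.9"
LOGIN_URL="https://${PUREFA_HOST}/api/${PUREFA_API_VERSION}/login"
echo "$LOGIN_URL"

# To actually test reachability, run manually with your real token:
#   curl -sk -X POST -H "api-token: <your-token>" "$LOGIN_URL"
```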

Front-end & Node Setup

The Everpure FlashArray drivers manage Hosts automatically, based on each Host’s hostname. You only need to discover the iSCSI targets on each Host, then configure multipath on each Host and on the Front-end. The driver logs into the iSCSI sessions automatically once they are created, which happens when the first Volume needs to be attached to that particular Front-end or Host.

  1. iSCSI:

    • Discover the iSCSI targets on the Hosts:
      iscsiadm -m discovery -t sendtargets -p <target_ip>   # for each iSCSI target IP from the FlashArray
      
  2. Persistent iSCSI Configuration:

    • Set node.startup = automatic in /etc/iscsi/iscsid.conf
    • Enable iscsid so it starts on boot:
      systemctl enable --now iscsid
      
    • Verify the service is running:
      systemctl status iscsid
  3. Multi-path Configuration: Update /etc/multipath.conf to something like the following, then reload the configuration with systemctl reload multipathd:

     defaults {
             polling_interval       10
     }
    
    
     devices {
         device {
             vendor                      "NVME"
             product                     "Pure Storage FlashArray"
             path_selector               "queue-length 0"
             path_grouping_policy        group_by_prio
             prio                        ana
             failback                    immediate
             fast_io_fail_tmo            10
             user_friendly_names         no
             no_path_retry               0
             features                    0
             dev_loss_tmo                60
         }
         device {
             vendor                   "PURE"
             product                  "FlashArray"
             path_selector            "service-time 0"
             hardware_handler         "1 alua"
             path_grouping_policy     group_by_prio
             prio                     alua
             failback                 immediate
             path_checker             tur
             fast_io_fail_tmo         10
             user_friendly_names      no
             no_path_retry            0
             features                 0
             dev_loss_tmo             600
         }
     }
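
The node.startup change from step 2 can also be scripted. The snippet below demonstrates the sed edit against a scratch copy, so it is safe to run anywhere; on real Hosts, point it at /etc/iscsi/iscsid.conf instead:

```shell
# Demonstration against a scratch copy -- target /etc/iscsi/iscsid.conf on real Hosts.
conf="${TMPDIR:-/tmp}/iscsid.conf.demo"
printf 'node.startup = manual\n' > "$conf"

# Flip node.startup to automatic so iSCSI sessions are restored on boot.
sed -i 's/^node\.startup = .*/node.startup = automatic/' "$conf"
grep '^node.startup = automatic' "$conf"
```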
    

OpenNebula Configuration

Create both datastores using the purefa (Everpure FlashArray) drivers, which provide instant cloning and moving capabilities:

  • System Datastore
  • Image Datastore

Create the System Datastore

Template required parameters:

| Attribute | Description |
|---|---|
| NAME | Datastore name |
| TYPE | SYSTEM_DS |
| TM_MAD | purefa |
| DISK_TYPE | BLOCK |
| PUREFA_HOST | Everpure FlashArray API IP address |
| PUREFA_API_TOKEN | API token key |
| PUREFA_TARGET | iSCSI target name |

Example template:

$ cat purefa_system.ds
NAME              = "purefa_system"
TYPE              = "SYSTEM_DS"
DISK_TYPE         = "BLOCK"
TM_MAD            = "purefa"
PUREFA_HOST       = "10.1.234.56"
PUREFA_API_TOKEN  = "01234567-89ab-cdef-0123-456789abcdef"
PUREFA_TARGET     = "iqn.1993-08.org.ubuntu:01:1234"

Create the datastore:

$ onedatastore create purefa_system.ds
ID: 101

Create the Image Datastore

Template required parameters:

| Attribute | Description |
|---|---|
| NAME | Datastore name |
| TYPE | IMAGE_DS |
| DS_MAD | purefa |
| TM_MAD | purefa |
| DISK_TYPE | BLOCK |
| PUREFA_HOST | Everpure FlashArray API IP address |
| PUREFA_API_TOKEN | API token key |
| PUREFA_TARGET | iSCSI target name |

Example template:

$ cat purefa_image.ds
NAME              = "purefa_image"
TYPE              = "IMAGE_DS"
DISK_TYPE         = "BLOCK"
DS_MAD            = "purefa"
TM_MAD            = "purefa"
PUREFA_HOST       = "10.1.234.56"
PUREFA_API_TOKEN  = "01234567-89ab-cdef-0123-456789abcdef"
PUREFA_TARGET     = "iqn.1993-08.org.ubuntu:01:1234"

Create the datastore:

$ onedatastore create purefa_image.ds
ID: 102

Datastore Optional Attributes

Template optional parameters:

| Attribute | Description |
|---|---|
| PUREFA_VERSION | Everpure FlashArray REST API version (default: 2.9) |
| PUREFA_SUFFIX | Suffix to append to all Volume names |
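
For example, either datastore template can be extended with these optional attributes (values are illustrative; the suffix is appended to every Volume name the driver creates):

```
PUREFA_VERSION = "2.9"
PUREFA_SUFFIX  = "-prod"
```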

Datastore Internals

Storage architecture details:

  • Images: Stored as a single Volume in Everpure FlashArray
  • Naming Convention:
    • Image datastore: one_<datastore_id>_<image_id>
    • System datastore: one_<vm_id>_disk_<disk_id>
  • Operations:
    • Non-persistent: Clone
    • Persistent: Rename
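
The naming convention can be mirrored with two small shell helpers (illustrative only; these functions are not part of the driver, and appending the optional PUREFA_SUFFIX at the end of the name is an assumption):

```shell
# Hypothetical helpers mirroring the documented naming convention.
image_volume_name()  { printf 'one_%s_%s%s\n' "$1" "$2" "${3:-}"; }      # <datastore_id> <image_id> [suffix]
system_volume_name() { printf 'one_%s_disk_%s%s\n' "$1" "$2" "${3:-}"; } # <vm_id> <disk_id> [suffix]

image_volume_name 102 7     # one_102_7
system_volume_name 55 0     # one_55_disk_0
```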

Hosts are automatically created in Everpure using the Everpure FlashArray API, with a name generated from their hostname.

Symbolic links from the System datastore will be created for each Virtual Machine on its Host once the Volumes have been mapped.

Backups process details:

Both Full and Incremental backups are supported by Everpure FlashArray. For Full Backups, a snapshot of the Volume containing the VM disk is taken and attached to the Host, where it is converted into a QCOW2 image and uploaded to the backup datastore.

Incremental backups are created using the Volume Diff feature of Everpure FlashArray, which returns a list of block offsets and lengths that have changed since a target snapshot. This list is then used to create a sparse QCOW2 file, which is uploaded to the backup datastore.
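
The mechanism can be sketched as follows: size an output file to the full volume, then copy only the changed extents, leaving untouched ranges as holes. The sketch below writes a raw sparse file instead of QCOW2 for simplicity; the paths and extent list are illustrative stand-ins for the attached device and the Volume Diff response:

```shell
SRC="${TMPDIR:-/tmp}/purefa_demo_src"   # stands in for the attached snapshot device
DST="${TMPDIR:-/tmp}/purefa_demo_inc"   # sparse incremental output (raw, not QCOW2)
VOLUME_SIZE=131072

# Fabricate demo source data; on a real Host SRC would be the mapped device.
head -c "$VOLUME_SIZE" /dev/zero | tr '\0' 'A' > "$SRC"

# Size the output without allocating blocks, then copy only the changed extents.
: > "$DST"
truncate -s "$VOLUME_SIZE" "$DST"
while read -r offset length; do
    dd if="$SRC" of="$DST" bs=1 skip="$offset" seek="$offset" \
       count="$length" conv=notrunc status=none
done <<'EOF'
0 4096
65536 4096
EOF
```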

System Considerations

Occasionally, under network interruptions or if a volume is deleted directly from Everpure, the iSCSI connection may drop or fail. This can cause the system to hang on a sync command, which in turn may lead to OpenNebula operation failures on the affected Host. Although the driver is designed to manage these issues automatically, it’s important to be aware of these potential iSCSI connection challenges.

You may wish to contact the OpenNebula Support Team to assist in this cleanup; however, here are a few advanced tips to clean these up if you are comfortable doing so:

  • If leftover devices from failed operations remain, run:
    rescan_scsi_bus.sh -r -m
    
  • If an entire multipath setup remains, run:
    multipath -f <multipath_device>
    
    Be very careful to target the correct multipath device.

If devices persist, follow these steps:

  1. Run dmsetup ls --tree or lsblk to see which mapped devices remain. You may see devices not attached to a mapper entry in lsblk.

  2. For each such device (not your root device), run:

    echo 1 > /sys/block/sdX/device/delete
    

    where sdX is the device name.

  3. Once those devices are gone, remove leftover mapper entries:

    dmsetup remove /dev/mapper/<device_name>
    
  4. If removal fails:

    • Check usage:
      fuser -v $(realpath /dev/mapper/<device_name>)
      
    • If it’s being used as swap:
      swapoff /dev/mapper/<device_name>
      dmsetup remove /dev/mapper/<device_name>
      
    • If another process holds it, kill the process and retry:
      dmsetup remove /dev/mapper/<device_name>
      
    • If you can’t kill the process or nothing shows up:
      dmsetup suspend /dev/mapper/<device_name>
      dmsetup wipe_table /dev/mapper/<device_name>
      dmsetup resume /dev/mapper/<device_name>
      dmsetup remove /dev/mapper/<device_name>
      

This should resolve most I/O lockups caused by failed iSCSI operations. Please contact the OpenNebula Support Team if you need assistance.