Everpure FlashArray SAN Datastore (EE)
OpenNebula’s Everpure FlashArray SAN Datastore delivers production-grade, native control of FlashArray block storage, from provisioning through cleanup, directly from OpenNebula. This integration exposes the full lifecycle of FlashArray Volumes, Snapshots, and Clones, and automates Host connectivity via Everpure’s Host/Host-group model with reliable iSCSI and multipath handling. All communication with the array uses authenticated HTTPS against the FlashArray REST API. This datastore driver is part of OpenNebula Enterprise Edition (EE).
Key Benefits
With the native Everpure driver, OpenNebula users gain the performance consistency of FlashArray's always-thin, metadata-driven architecture. Everpure's zero-copy snapshots and clones complete instantly, without increasing write amplification or introducing the snapshot-tree latency penalties typical of Host-side copy-on-write systems. Under mixed 4k/8k and fsync-heavy workloads, FlashArray maintains flat latency profiles even with deep snapshot histories, while LVM-thin commonly degrades early as CoW pressure increases. The result is higher, steadier IOPS and predictable latency for Virtual Machine disks at scale.
| Area | Benefit | Description |
|---|---|---|
| Automation | Full lifecycle control | End-to-end creation, cloning, resizing, renaming, and deletion of FlashArray volumes directly from OpenNebula. |
| Efficiency | Instant, thin snapshots and clones | Everpure’s metadata-only snapshots allow immediate, zero-copy cloning for persistent and non-persistent VMs alike. |
| Performance | Latency-stable I/O path | FlashArray’s architecture keeps read/write latency flat even as snapshot chains grow; multipath iSCSI is configured automatically per Host. |
| Reliability | Synchronous REST orchestration | Operations use FlashArray’s synchronous REST API with explicit error handling and safe sequencing for volume, snapshot, and Host mapping tasks. |
| Data Protection | Incremental SAN-snapshot backups | Block-level incremental backups are generated by comparing FlashArray snapshot pairs via raw device attachment — no guest agents required. |
| Security | HTTPS control path | All FlashArray communication uses authenticated, encrypted HTTPS REST calls. |
| Scalability | Simplified Host-group mappings | Safe, concurrent attach/detach operations across Hosts using deterministic LUN IDs and predictable multi-path layout. |
Supported Everpure FlashArray Native Functionality
| Everpure Feature | Supported | Notes |
|---|---|---|
| Zero-Copy Volume Clone | Yes | Everpure clones are metadata-only and complete instantly. |
| Snapshot (manual) | Yes | Created and deleted directly from OpenNebula; mapped 1:1 to FlashArray snapshots. |
| Snapshot restore | Yes | Volume overwrite-from-snapshot supported via the REST API. |
| Snapshot retention/policies | No | FlashArray snapshot schedules exist, but OpenNebula does not manage array-side policies; all snapshots remain under OpenNebula’s control. |
| Incremental backups (SAN snapshot diff) | Yes | Utilizes FlashArray Volume Diff API to gather block differences, then copies the data. |
| Host management | Yes | Hosts are automatically created and mapped as needed. |
| Multi-path I/O | Yes | Fully orchestrated; automatic detection, resize, and removal of maps. |
| Data encryption (at-rest) | Yes | Supported transparently by the array (always-on AES-XTS); not managed by OpenNebula. |
| Array-side replication (Protection Groups / ActiveCluster) | No (planned) | Not yet supported; on the future roadmap. |
| QoS policy groups | No | Not currently exposed through the datastore driver. |
| ActiveDR / ActiveCluster | No | Supported by FlashArray, but not orchestrated by OpenNebula. |
Limitations and Unsupported Features
While the Everpure FlashArray integration delivers full VM disk lifecycle management and the core SAN operations required by OpenNebula, it is deliberately scoped to primary datastore provisioning via iSCSI block devices. Several advanced FlashArray protection and array-management capabilities are intentionally not surfaced through this driver.
| Category | Unsupported Feature | Rationale / Alternative |
|---|---|---|
| Replication & DR | Protection Groups / Active Cluster | Planned for future releases; can be managed externally on the FlashArray. |
| Storage QoS / Performance tiers | Bandwidth / IOPS limits | FlashArray supports QoS, but these controls are not integrated into the driver. |
| Storage efficiency analytics | Deduplication & compression metrics | Calculated internally by FlashArray; not displayed or consumed by OpenNebula. |
| Encryption management | Per-volume encryption toggling | FlashArray encryption is always-on and appliance-managed; no OpenNebula API exposure. |
Everpure FlashArray Setup
OpenNebula provides a set of datastore and transfer manager drivers to register an existing Everpure FlashArray SAN. These drivers use the Everpure FlashArray API to create volumes, which are presented to Virtual Machines as disks over the iSCSI interface. Both the Image and System datastores must use the same Everpure array and identical datastore configurations, because volumes are either cloned or renamed depending on the image persistence type: persistent images are renamed into the System datastore naming scheme, while non-persistent images are zero-copy cloned.
The Everpure Linux documentation and the Everpure iSCSI Setup with FlashArray blog post may be useful during this setup.
Verify iSCSI Service Connections
- In the FlashArray System Manager: Settings -> Network -> Connectors
- Ensure the iSCSI connectors are enabled and note their IP addresses.
Create an API User
- In the FlashArray System Manager: Settings -> Access -> Users and Policies
- Create a new user with the Storage Admin role; this provides sufficient permissions for OpenNebula.
- Create an API token for this user and note the API key. Leave the expiration date blank to create an indefinite API key.
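The API token authenticates every control-path call the driver makes. As a rough illustration of how a client would authenticate, the sketch below builds (but does not send) a login request in the style of FlashArray-like REST 2.x APIs. The `/api/<version>/login` path and the `api-token` header name are assumptions for illustration; consult the Everpure REST reference for your array's exact endpoints.

```python
import urllib.request

def build_login_request(array_ip: str, api_token: str, api_version: str = "2.9"):
    """Build (but do not send) the HTTPS login request.

    NOTE: the endpoint path and the 'api-token' header are assumptions
    modeled on common FlashArray-style REST APIs, not a documented
    Everpure contract.
    """
    url = f"https://{array_ip}/api/{api_version}/login"
    return urllib.request.Request(url, method="POST",
                                  headers={"api-token": api_token})

req = build_login_request("10.1.234.56", "01234567-89ab-cdef-0123-456789abcdef")
print(req.full_url)  # https://10.1.234.56/api/2.9/login
```

In a real exchange, the array would answer with a session token that subsequent volume, snapshot, and host-mapping calls carry in their headers.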
Front-end Only Setup
The Front-end requires network access to the Everpure FlashArray API endpoint:
- API Access:
- Ensure network connectivity to the Everpure FlashArray API interface. The datastore will enter an ERROR state if the API is unreachable or cannot be monitored properly.
Front-end & Node Setup
The Everpure FlashArray drivers perform automatic Host management based on each Host's hostname. You only need to discover the iSCSI targets on each Host, then configure multipath on each Host and the Front-end. The driver logs into iSCSI sessions automatically after they are created, which happens when the first volume needs to be attached to that particular Front-end or Host.
iSCSI:
- Discover the iSCSI targets on the Hosts:
iscsiadm -m discovery -t sendtargets -p <target_ip> # for each iSCSI target IP from the FlashArray
Persistent iSCSI Configuration:
- Set `node.startup = automatic` in `/etc/iscsi/iscsid.conf`
- Ensure iscsid is running: `systemctl status iscsid`
- Enable iscsid on boot: `systemctl enable iscsid`
Multi-path Configuration: Update `/etc/multipath.conf` to something like:

```
defaults {
    polling_interval 10
}

devices {
    device {
        vendor "NVME"
        product "Pure Storage FlashArray"
        path_selector "queue-length 0"
        path_grouping_policy group_by_prio
        prio ana
        failback immediate
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 60
    }
    device {
        vendor "PURE"
        product "FlashArray"
        path_selector "service-time 0"
        hardware_handler "1 alua"
        path_grouping_policy group_by_prio
        prio alua
        failback immediate
        path_checker tur
        fast_io_fail_tmo 10
        user_friendly_names no
        no_path_retry 0
        features 0
        dev_loss_tmo 600
    }
}
```
OpenNebula Configuration
Create both the System and Image datastores with the PureFA (Everpure FlashArray) drivers, which provide instant cloning and moving capabilities:
- System Datastore
- Image Datastore
Create the System Datastore
Template required parameters:
| Attribute | Description |
|---|---|
| NAME | Datastore name |
| TYPE | SYSTEM_DS |
| TM_MAD | purefa |
| DISK_TYPE | BLOCK |
| PUREFA_HOST | Everpure FlashArray API IP address |
| PUREFA_API_TOKEN | API token key |
| PUREFA_TARGET | iSCSI target name |
Example template:
$ cat purefa_system.ds
NAME = "purefa_system"
TYPE = "SYSTEM_DS"
DISK_TYPE = "BLOCK"
TM_MAD = "purefa"
PUREFA_HOST = "10.1.234.56"
PUREFA_API_TOKEN = "01234567-89ab-cdef-0123-456789abcdef"
PUREFA_TARGET = "iqn.1993-08.org.ubuntu:01:1234"
Create the datastore:
$ onedatastore create purefa_system.ds
ID: 101
Create Image Datastore
Template required parameters:
| Attribute | Description |
|---|---|
| NAME | Datastore name |
| TYPE | IMAGE_DS |
| DS_MAD | purefa |
| TM_MAD | purefa |
| DISK_TYPE | BLOCK |
| PUREFA_HOST | Everpure FlashArray API IP address |
| PUREFA_API_TOKEN | API token key |
| PUREFA_TARGET | iSCSI target name |
Example template:
$ cat purefa_image.ds
NAME = "purefa_image"
TYPE = "IMAGE_DS"
DISK_TYPE = "BLOCK"
DS_MAD = "purefa"
TM_MAD = "purefa"
PUREFA_HOST = "10.1.234.56"
PUREFA_API_TOKEN = "01234567-89ab-cdef-0123-456789abcdef"
PUREFA_TARGET = "iqn.1993-08.org.ubuntu:01:1234"
Create the datastore:
$ onedatastore create purefa_image.ds
ID: 102
Datastore Optional Attributes
Template optional parameters:
| Attribute | Description |
|---|---|
| PUREFA_VERSION | Everpure FlashArray version (default: 2.9) |
| PUREFA_SUFFIX | Suffix to append to all volume names |
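For example, an Image Datastore template using both optional attributes might look like this (the suffix value is illustrative; a suffix can help distinguish volumes when several OpenNebula deployments share one array):

```
NAME = "purefa_image"
TYPE = "IMAGE_DS"
DISK_TYPE = "BLOCK"
DS_MAD = "purefa"
TM_MAD = "purefa"
PUREFA_HOST = "10.1.234.56"
PUREFA_API_TOKEN = "01234567-89ab-cdef-0123-456789abcdef"
PUREFA_TARGET = "iqn.1993-08.org.ubuntu:01:1234"
PUREFA_VERSION = "2.9"
PUREFA_SUFFIX = "-cluster1"
```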
Datastore Internals
Storage architecture details:
- Images: Stored as a single Volume on the Everpure FlashArray
- Naming convention:
  - Image Datastore: `one_<datastore_id>_<image_id>`
  - System Datastore: `one_<vm_id>_disk_<disk_id>`
- Operations:
  - Non-persistent images: Clone
  - Persistent images: Rename
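The naming convention can be summarized in a short sketch (hypothetical helper functions for illustration, not part of the driver; they ignore any PUREFA_SUFFIX handling):

```python
def image_volume_name(datastore_id: int, image_id: int) -> str:
    # Image Datastore convention: one_<datastore_id>_<image_id>
    return f"one_{datastore_id}_{image_id}"

def system_volume_name(vm_id: int, disk_id: int) -> str:
    # System Datastore convention: one_<vm_id>_disk_<disk_id>
    return f"one_{vm_id}_disk_{disk_id}"

print(image_volume_name(102, 7))   # one_102_7
print(system_volume_name(42, 0))   # one_42_disk_0
```

Deploying a non-persistent image thus clones `one_102_7` into `one_42_disk_0`, while a persistent image is simply renamed to `one_42_disk_0`.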
Hosts are automatically created in Everpure using the Everpure FlashArray API, with a name generated from their hostname.
Symbolic links from the System datastore will be created for each Virtual Machine on its Host once the Volumes have been mapped.
Backups process details:
Both Full and Incremental backups are supported by Everpure FlashArray. For Full Backups, a snapshot of the Volume containing the VM disk is taken and attached to the Host, where it is converted into a QCOW2 image and uploaded to the backup datastore.
Incremental backups are created using the Volume Difference feature of Everpure FlashArray, which returns a list of block offsets and lengths that have changed since a reference snapshot. This list is used to create a sparse QCOW2 file, which is uploaded to the backup datastore.
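Conceptually, the incremental backup applies a list of changed extents to a sparse target. The sketch below illustrates the idea with a plain sparse file; the real driver reads from raw device attachments and produces QCOW2 via nbdfuse, so this is a simplified model:

```python
import os

def write_sparse_from_diffs(src_path, dst_path, diffs):
    """Copy only the changed extents (offset, length) from src into dst,
    leaving untouched regions as holes so dst stays sparse.

    Simplified illustration of the SAN-snapshot diff approach; extent
    lists come from the array's Volume Difference API in the real driver.
    """
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        # Match the logical size of the source; the rest stays as holes.
        dst.truncate(os.path.getsize(src_path))
        for offset, length in diffs:
            src.seek(offset)
            dst.seek(offset)
            dst.write(src.read(length))
```

Because only changed extents are written, the output's on-disk footprint is proportional to the amount of changed data, not the volume size.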
Backup settings can be adjusted in `/var/tmp/one/etc/tm/san/backup.conf`. Backups require the `nbd` kernel module to be loaded and the `nbdfuse` package to be installed on all OpenNebula nodes.

System Considerations
Occasionally, under network interruptions or if a volume is deleted directly from Everpure, the iSCSI connection may drop or fail. This can cause the system to hang on a sync command, which in turn may lead to OpenNebula operation failures on the affected Host. Although the driver is designed to manage these issues automatically, it’s important to be aware of these potential iSCSI connection challenges.
You may wish to contact the OpenNebula Support Team to assist in this cleanup; however, here are a few advanced tips to clean these up if you are comfortable doing so:
- If you have extra devices left over from failures, run: `rescan_scsi_bus.sh -r -m`
- If an entire multipath setup remains, run `multipath -f <multipath_device>`. Be very careful to target the correct multipath device.
If devices persist, follow these steps:
- Run `dmsetup ls --tree` or `lsblk` to see which mapped devices remain. You may see devices not attached to a mapper entry in `lsblk`.
- For each such device (other than your root device), run `echo 1 > /sys/block/sdX/device/delete`, where `sdX` is the device name.
- Once those devices are gone, remove leftover mapper entries: `dmsetup remove /dev/mapper/<device_name>`
- If removal fails:
  - Check usage: `fuser -v $(realpath /dev/mapper/<device_name>)`
  - If it is being used as swap, run `swapoff /dev/mapper/<device_name>`, then `dmsetup remove /dev/mapper/<device_name>`.
  - If another process holds it, kill the process and retry `dmsetup remove /dev/mapper/<device_name>`.
  - If you cannot kill the process or nothing shows up, force-clear the table:

    ```
    dmsetup suspend /dev/mapper/<device_name>
    dmsetup wipe_table /dev/mapper/<device_name>
    dmsetup resume /dev/mapper/<device_name>
    dmsetup remove /dev/mapper/<device_name>
    ```
This should resolve most I/O lockups caused by failed iSCSI operations. Please contact the OpenNebula Support Team if you need assistance.