NetApp SAN Datastore (EE)
OpenNebula’s NetApp SAN Datastore provides production-grade, native control of NetApp ONTAP block storage, from provisioning to cleanup, directly from OpenNebula. This integration enables the full lifecycle of Volumes, LUNs, and Snapshots — including FlexClone-based cloning and SAN-snapshot incremental backups — with secure HTTPS control and robust iSCSI/multipath host orchestration. This datastore driver is part of OpenNebula Enterprise Edition (EE).
Key Benefits
| Area | Benefit | Description |
|---|---|---|
| Automation | Full lifecycle control | End-to-end creation, cloning, resizing, renaming, and cleanup of Volumes and LUNs directly from OpenNebula. |
| Efficiency | Space-efficient FlexClone and snapshot management | Uses NetApp’s native FlexClone for instant, zero-copy clones of non-persistent images. |
| Performance | Optimized host I/O path | Native multipath and iSCSI management per host for high throughput and availability. |
| Reliability | Job-aware REST orchestration | Each ONTAP operation is tracked via ONTAP’s job system with retries and error mapping. |
| Data Protection | Incremental SAN-snapshot backups (7.0.1+) | Enables block-level incremental backups based on array snapshots—no guest agents required. |
| Security | HTTPS control path | All ONTAP communication is performed securely via authenticated HTTPS REST calls. |
| Scalability | Parallel host operations | Safe concurrent attach/detach with per-path coordination and multipath support. |
Supported NetApp Native Functionality
| NetApp Feature | Supported | Notes |
|---|---|---|
| FlexClone | Yes | Used for instant cloning of non-persistent images; optional split for persistent ones. |
| Snapshot (manual) | Yes | Created, listed, and deleted directly from OpenNebula; job-aware. |
| Snapshot restore | Yes | Volume restoration from snapshot UUID or name. |
| Snapshot autogrow | Yes | Tunable via NETAPP_GROW_THRESHOLD and NETAPP_GROW_RATIO. |
| Incremental backups (SAN snapshot diff) | Yes | New in 7.0.1; generates block-level sparse deltas without guest agents. |
| Volume autosize | Yes | Automatic growth and shrink; configurable thresholds. |
| LUN mapping / igroup management | Yes | Handled automatically per host; safe detach and cleanup. |
| Multipath I/O | Yes | Fully orchestrated; automatic detection, resize, and removal of maps. |
| Snapshot policy management | Yes | Driver enforces “no automatic snapshots”; OpenNebula controls all snapshots. |
| Data encryption (at-rest) | Yes | Supported transparently if enabled in ONTAP; not managed by OpenNebula. |
| SnapMirror replication | No (planned) | Not yet supported; may be added in future roadmap. |
| QoS policy groups | No | Not currently exposed through the datastore driver. |
| SVM DR / MetroCluster | No | Supported by ONTAP, but not orchestrated by OpenNebula. |
Limitations and Unsupported Features
While the NetApp integration covers the full VM disk lifecycle and most SAN-level operations, it focuses strictly on primary datastore management using iSCSI block devices. Some advanced ONTAP-specific or VMware-exclusive capabilities are intentionally not part of this driver.
Important
This integration targets block-level provisioning for OpenNebula environments.
It does not expose advanced replication or NAS-protocol features available in other ecosystems (e.g. VMware vVols or SnapMirror).

| Category | Unsupported Feature | Rationale / Alternative |
|---|---|---|
| Replication & DR | SnapMirror, SnapVault | Planned for future releases; can be managed externally through ONTAP. |
| NAS protocols | NFS / CIFS shares | Driver focuses on iSCSI block storage only. |
| ONTAP-managed automatic snapshots | Automated snapshot schedules | OpenNebula requires full control of snapshot lifecycle. |
| Storage QoS / Performance tiers | Policy group integration | Manual setup possible in ONTAP, not exposed in driver. |
| Storage efficiency analytics | Deduplication & compression metrics | Handled internally by ONTAP, not shown in OpenNebula UI. |
| Encryption management | Per-volume encryption toggling | Configure at SVM level outside OpenNebula. |
| Advanced VMware features | VAAI offloads, Storage DRS, vVols | VMware-specific APIs, not applicable to OpenNebula. |
| Multi-instance sharing | Shared datastore IDs | Explicitly unsupported — datastore IDs must be unique per OpenNebula instance. |
| SVM HA features | MetroCluster, SyncMirror | Can be used under the SVM, but not managed by OpenNebula. |
NetApp ONTAP Setup
OpenNebula provides a set of datastore and transfer manager drivers to register an existing NetApp SAN appliance. These drivers use the NetApp ONTAP API to create volumes, each containing a single LUN, which are exposed as Virtual Machine disks over the iSCSI interface. Both the Image and System datastores should use the same NetApp SAN appliance with identical Storage VM configurations (aggregates, etc.), as volumes (disks) are either cloned or renamed depending on the image type. Persistent images are renamed into the System datastore, while non-persistent images are cloned using FlexClone and then split.
The NetApp ONTAP documentation may be useful during this setup.
Note
Sharing datastores between multiple OpenNebula instances is not supported and may cause issues if they share datastore IDs.

The NetApp system requires specific configuration. This driver operates using a Storage VM that provides iSCSI connections, with volumes/LUNs mapped directly to each Host after creation. Configure and enable the iSCSI protocol according to your infrastructure requirements.
- Define Aggregates/Local Tiers for your Storage VM:
- In ONTAP System Manager: Storage > Storage VMs > Select your SVM > Edit > Limit volume creation to preferred local tiers
- Assign at least one aggregate/tier and note their UUID(s) from the URL for later use
Note
The UUID of an object is often found in the URL when using the web interface; otherwise you can use the ONTAP CLI to gather them: vserver show -fields uuid, igroup show -fields uuid, etc.
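For example, the UUIDs and the iSCSI target name used later in the datastore templates can be gathered from the ONTAP CLI. This is a sketch only: the SVM name svm_one is a placeholder, and exact command and field names should be verified against your ONTAP version.

vserver show -vserver svm_one -fields uuid        # Storage VM UUID (NETAPP_SVM)
storage aggregate show -fields uuid               # aggregate UUID(s) (NETAPP_AGGREGATES)
lun igroup show -vserver svm_one -fields uuid     # initiator group UUID (NETAPP_IGROUP)
vserver iscsi show -vserver svm_one               # iSCSI target node name (NETAPP_TARGET)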
To enable capacity monitoring:
- Enable the Enable maximum capacity limit option on the same Edit Storage VM screen
- If this is not configured, set DATASTORE_CAPACITY_CHECK=no in both of the OpenNebula datastores’ attributes
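As a hedged example, once the datastores have been created (IDs 101 and 102 in the examples below), the attribute can be appended to both with the OpenNebula CLI; alternatively, include it directly in the datastore templates at creation time:

$ echo 'DATASTORE_CAPACITY_CHECK = "no"' > ds_capacity.txt
$ onedatastore update 101 --append ds_capacity.txt
$ onedatastore update 102 --append ds_capacity.txt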
This driver will manage the snapshots, so do not enable any automated snapshots for this SVM; they will not be picked up by OpenNebula unless created through OpenNebula.
If you do not plan to use the administrator account, create a new user with all API permissions and assign it to the SVM.
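A sketch of creating such a user on the ONTAP CLI follows. The user name one_driver and SVM name svm_one are placeholders, and the exact parameter names and required applications may differ by ONTAP version, so verify them before use.

# SVM-scoped account with the vsadmin role, allowed to use the HTTP (REST) application
security login create -user-or-group-name one_driver -application http -authentication-method password -role vsadmin -vserver svm_one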
Front-end Only Setup
The Front-end requires network access to the NetApp ONTAP API endpoint:
- API Access:
- Ensure network connectivity to the NetApp ONTAP API interface. The datastore will be in an ERROR state if the API is not accessible or the SVM cannot be monitored properly.
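A quick way to confirm connectivity is to query the ONTAP REST API directly from the Front-end. This is a minimal sketch: the IP address and credentials match the example templates below, and -k skips TLS verification, so use it only as a connectivity test.

curl -k -u admin:password "https://10.1.234.56/api/cluster?fields=version"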
Front-end & Node Setup
Configure both the Front-end and nodes with persistent iSCSI connections:
iSCSI Initiators:
- Configure initiator security in NetApp Storage VM:
  - Storage VM > Settings > iSCSI Protocol > Initiator Security
  - Add initiators from /etc/iscsi/initiatorname.iscsi (all nodes and Front-end)
- Discover and log in to the iSCSI targets:

  iscsiadm -m discovery -t sendtargets -p <target_ip>   # for each iSCSI target IP from NetApp
  iscsiadm -m node -l                                    # log in to all discovered targets
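To confirm that the logins succeeded, you can list the active sessions (an optional sanity check, not required by the driver):

iscsiadm -m session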
Persistent iSCSI Configuration:
- Set node.startup = automatic in /etc/iscsi/iscsid.conf
- Ensure iscsid is started with systemctl status iscsid
- Enable iscsid with systemctl enable iscsid
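As a sketch, assuming the distribution default of node.startup = manual, the change and service setup could be scripted on each node as follows (back up iscsid.conf first):

# Switch open-iscsi to automatic session restore on boot, then enable and start the daemon
sed -i 's/^node.startup = manual/node.startup = automatic/' /etc/iscsi/iscsid.conf
systemctl enable --now iscsid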
Multipath Configuration: Update /etc/multipath.conf to something like:

defaults {
    user_friendly_names yes
    find_multipaths yes
}

devices {
    device {
        vendor "NETAPP"
        product "LUN.*"
        no_path_retry queue
        path_checker tur
        alias_prefix "mpath"
    }
}

blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}
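After editing the file, reload the multipath configuration and check that NetApp paths are detected (service name as in the standard device-mapper-multipath packaging):

systemctl restart multipathd
multipath -ll   # mapped NETAPP LUNs should be listed with the "mpath" alias prefix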
OpenNebula Configuration
Create both datastores with the NetApp drivers to enable instant cloning and moving capabilities:
- System Datastore
- Image Datastore
Create System Datastore
Template required parameters:
| Attribute | Description |
|---|---|
| NAME | Datastore name |
| TYPE | SYSTEM_DS |
| TM_MAD | netapp |
| DISK_TYPE | BLOCK |
| NETAPP_HOST | NetApp ONTAP API IP address |
| NETAPP_USER | API username |
| NETAPP_PASS | API password |
| NETAPP_SVM | Storage VM UUID |
| NETAPP_AGGREGATES | Aggregate UUID(s) |
| NETAPP_IGROUP | Initiator group UUID |
| NETAPP_TARGET | iSCSI Target name |
Example template:
$ cat netapp_system.ds
NAME = "netapp_system"
TYPE = "SYSTEM_DS"
DISK_TYPE = "BLOCK"
DS_MAD = "netapp"
TM_MAD = "netapp"
NETAPP_HOST = "10.1.234.56"
NETAPP_USER = "admin"
NETAPP_PASS = "password"
NETAPP_SVM = "c9dd74bc-8e3e-47f0-b274-61be0b2ccfe3"
NETAPP_AGGREGATES = "280f5971-3427-4cc6-9237-76c3264543d5"
NETAPP_IGROUP = "27702521-68fb-4d9a-9676-efa3018501fc"
NETAPP_TARGET = "iqn.1993-08.org.debian:01:1234"
$ onedatastore create netapp_system.ds
ID: 101
Create Image Datastore
Template required parameters:
| Attribute | Description |
|---|---|
| NAME | Datastore name |
| TYPE | IMAGE_DS |
| DS_MAD | netapp |
| TM_MAD | netapp |
| DISK_TYPE | BLOCK |
| NETAPP_HOST | NetApp ONTAP API IP address |
| NETAPP_USER | API username |
| NETAPP_PASS | API password |
| NETAPP_SVM | Storage VM UUID |
| NETAPP_AGGREGATES | Aggregate UUID(s) |
| NETAPP_IGROUP | Initiator group UUID |
| NETAPP_TARGET | iSCSI Target name |
Example template:
$ cat netapp_image.ds
NAME = "netapp_image"
TYPE = "IMAGE_DS"
DISK_TYPE = "BLOCK"
TM_MAD = "netapp"
NETAPP_HOST = "10.1.234.56"
NETAPP_USER = "admin"
NETAPP_PASS = "password"
NETAPP_SVM = "c9dd74bc-8e3e-47f0-b274-61be0b2ccfe3"
NETAPP_AGGREGATES = "280f5971-3427-4cc6-9237-76c3264543d5"
NETAPP_IGROUP = "27702521-68fb-4d9a-9676-efa3018501fc"
NETAPP_TARGET = "iqn.1993-08.org.debian:01:1234"
$ onedatastore create netapp_image.ds
ID: 102
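Once both datastores exist, you can verify that they are monitored correctly and report capacity from the SVM (IDs 101 and 102 correspond to the examples above):

$ onedatastore list
$ onedatastore show 102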
Datastore Optional Attributes
Since Volumes contain the LUNs and snapshots, they are by default created with 10% extra space reserved for snapshots. You can override these settings with the following datastore attributes, which should be the same for both datastores:
Template optional parameters:
| Attribute | Description |
|---|---|
| NETAPP_SUFFIX | Volume/LUN name suffix. |
| NETAPP_GROW_THRESHOLD | Volume autogrow threshold in percent. Default: 96 |
| NETAPP_GROW_RATIO | Volume maximum autogrow ratio. Default: 2 |
| NETAPP_SNAPSHOT_RESERVE | Volume snapshot reserve in percent. Default: 10 |
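For example, to raise the snapshot reserve and autogrow limits on both datastores after creation, the attributes can be appended with the OpenNebula CLI (the values below are illustrative, not recommendations):

$ cat netapp_tuning.txt
NETAPP_GROW_THRESHOLD   = "90"
NETAPP_GROW_RATIO       = "3"
NETAPP_SNAPSHOT_RESERVE = "20"
$ onedatastore update 101 --append netapp_tuning.txt
$ onedatastore update 102 --append netapp_tuning.txt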
Note
Volumes will be created with the extra reservation space in mind, which will be size * ( 1 + NETAPP_SNAPSHOT_RESERVE / 100 ). For example, a 100 GB disk with the default 10% reserve results in a 110 GB volume.

Datastore Internals
Storage architecture details:
- Images: Stored as a volume with a single LUN in NetApp
- Naming Convention:
  - Image datastore: one_<datastore_id>_<image_id> (volume), one_<datastore_id>_<image_id>_lun (LUN)
  - System datastore: one_<vm_id>_disk_<disk_id> (volume), one_<datastore_id>_<vm_id>_disk_<disk_id>_lun (LUN)
- Operations:
- Non‐persistent: FlexClone, then split
- Persistent: Rename
Symbolic links from the System datastore will be created for each Virtual Machine on its Host once the LUNs have been mapped.
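For instance, on the Host these links can be inspected under the standard datastore location (the path assumes the default /var/lib/one layout; 101 is the System Datastore ID from the examples above and 23 a hypothetical VM ID):

# Each VM disk appears as a symlink pointing at the multipath device of its mapped LUN
$ ls -l /var/lib/one/datastores/101/23/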
Note
The minimum size for a NetApp volume is 20 MB, so any disk smaller than that will result in a 20 MB volume; however, the LUN inside will be the correct size.

Known Issues
Currently the NetApp password on the Datastore is not encrypted, due to a typo in the configuration file /etc/one/oned.conf. To encrypt this password, in the Encrypted Attributes section of this file change DATASTORE_ENCRYPTED_ATTR = "NETAPP_PASSWORD" to DATASTORE_ENCRYPTED_ATTR = "NETAPP_PASS" and then restart OpenNebula.
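The corrected entry looks as follows (only the relevant line is shown):

# /etc/one/oned.conf (Encrypted Attributes section)
DATASTORE_ENCRYPTED_ATTR = "NETAPP_PASS"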
System Considerations
Occasionally, under network interruptions or if a volume is deleted directly from NetApp, the iSCSI connection may drop or fail. This can cause the system to hang on a sync command, which in turn may lead to OpenNebula operation failures on the affected Host. Although the driver is designed to manage these issues automatically, it’s important to be aware of these potential iSCSI connection challenges.
You may wish to contact the OpenNebula Support team to assist in this cleanup; however, here are a few advanced tips to clean these up if you are comfortable doing so:
- If you have extra devices left over from failures, run:

  rescan_scsi_bus.sh -r -m

- If an entire multipath setup remains, run the following. Be very careful to target the correct multipath device:

  multipath -f <multipath_device>

Note
This behavior stems from the inherent complexities of iSCSI connections and is not exclusive to OpenNebula or NetApp.

If devices persist, follow these steps:
- Run dmsetup ls --tree or lsblk to see which mapped devices remain. You may see devices not attached to a mapper entry in lsblk.
- For each such device (not your root device), run the following, where sdX is the device name:

  echo 1 > /sys/bus/scsi/devices/sdX/device/delete

- Once those devices are gone, remove leftover mapper entries:

  dmsetup remove /dev/mapper/<device_name>

- If removal fails:
  - Check usage: fuser -v $(realpath /dev/mapper/<device_name>)
  - If it is being used as swap:

    swapoff /dev/mapper/<device_name>
    dmsetup remove /dev/mapper/<device_name>

  - If another process holds it, kill the process and retry:

    dmsetup remove /dev/mapper/<device_name>

  - If you cannot kill the process or nothing shows up:

    dmsetup suspend /dev/mapper/<device_name>
    dmsetup wipe_table /dev/mapper/<device_name>
    dmsetup resume /dev/mapper/<device_name>
    dmsetup remove /dev/mapper/<device_name>
This should resolve most I/O lockups caused by failed iSCSI operations. Please contact the OpenNebula Support team if you need assistance.