This datastore is used to register an existing NetApp SAN appliance. It utilizes the NetApp ONTAP API to create volumes with a single LUN, which will be treated as Virtual Machine disks using the iSCSI interface. Both the Image and System datastores should use the same NetApp SAN appliance with identical Storage VM configurations (aggregates, etc.), as volumes (disks) are either cloned or renamed depending on the image type. Persistent images are renamed to the System datastore, while non‐persistent images are cloned using FlexClone and then split.
The NetApp ONTAP documentation may be useful during this setup.
The NetApp system requires specific configurations. This driver operates using a Storage VM that provides iSCSI connections, with volumes/LUNs mapped directly to each Host after creation. Configure and enable the iSCSI protocol according to your infrastructure requirements.
The UUIDs required by the datastore templates can be obtained with ONTAP CLI commands such as `vserver show -fields uuid`, `igroup show -fields uuid`, etc.

To enable capacity monitoring, set `DATASTORE_CAPACITY_CHECK=no` in both of the OpenNebula datastores' attributes.

This driver will manage the snapshots, so do not enable any automated snapshots for this SVM; they will not be picked up by OpenNebula unless created through OpenNebula.
If you do not plan to use the administrator account, create a new user with all API permissions and assign it to the SVM.
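As an alternative to the ONTAP CLI, the UUIDs can also be looked up over the ONTAP 9 REST API. The sketch below only builds the request URLs (`svm/svms`, `storage/aggregates`, and `protocols/san/igroups` are standard ONTAP 9 REST collections); the host and credentials are placeholders, not values from this guide:

```shell
# Sketch only: build ONTAP REST API URLs for looking up the UUIDs that the
# datastore templates need. NETAPP_HOST is a placeholder address.
NETAPP_HOST="${NETAPP_HOST:-10.1.234.56}"

# Standard ONTAP 9 REST collections: svm/svms, storage/aggregates,
# protocols/san/igroups.
api_url() {
    echo "https://${NETAPP_HOST}/api/$1?fields=uuid,name"
}

# Against a real appliance you would run, for example:
#   curl -ks -u "$NETAPP_USER:$NETAPP_PASS" "$(api_url svm/svms)"
#   curl -ks -u "$NETAPP_USER:$NETAPP_PASS" "$(api_url storage/aggregates)"
#   curl -ks -u "$NETAPP_USER:$NETAPP_PASS" "$(api_url protocols/san/igroups)"
api_url svm/svms
```

Each response contains `uuid` and `name` fields you can copy straight into the datastore template.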
The Front-end requires network access to the NetApp ONTAP API endpoint.
Configure both the Front-end and nodes with persistent iSCSI connections:
iSCSI Initiators: the initiator name of each node and of the Front-end is set in `/etc/iscsi/initiatorname.iscsi`. Then discover and log in to the NetApp targets:

```shell
iscsiadm -m discovery -t sendtargets -p <target_ip>  # for each iSCSI target IP from NetApp
iscsiadm -m node -l                                  # log in to all discovered targets
```
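With several target portals, the discovery step is usually looped. A sketch, where the IP addresses are placeholder iSCSI LIFs (not values from this guide) and `DRY_RUN=1`, the default here, only prints the commands instead of running them:

```shell
# Sketch: discover every target portal, then log in once.
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

# Placeholder iSCSI LIF addresses of the Storage VM.
for ip in 10.1.234.60 10.1.234.61; do
    run iscsiadm -m discovery -t sendtargets -p "$ip"
done
run iscsiadm -m node -l
```

Set `DRY_RUN=0` on a real node to actually execute the commands.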
Persistent iSCSI Configuration: set `node.startup = automatic` in `/etc/iscsi/iscsid.conf`, and make sure the iSCSI daemon is enabled and running:

```shell
systemctl enable iscsid
systemctl status iscsid
```
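The `node.startup` change can be scripted. A sketch, demonstrated on a temporary copy rather than the live file; on a real node you would point `sed` at `/etc/iscsi/iscsid.conf` itself:

```shell
# Sketch: flip node.startup to "automatic" in an iscsid.conf-style file,
# shown against a temporary copy so it is safe to run anywhere.
conf="$(mktemp)"
printf 'node.session.timeo.replacement_timeout = 120\nnode.startup = manual\n' > "$conf"

sed -i 's/^node\.startup *=.*/node.startup = automatic/' "$conf"

startup_line="$(grep '^node.startup' "$conf")"
echo "$startup_line"
rm -f "$conf"
```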
Multipath Configuration: update `/etc/multipath.conf` to something like:

```
defaults {
    user_friendly_names yes
    find_multipaths yes
}

devices {
    device {
        vendor "NETAPP"
        product "LUN.*"
        no_path_retry queue
        path_checker tur
        alias_prefix "mpath"
    }
}

blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z][0-9]*"
    devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}
```
Create both datastores, pointing at the same appliance and Storage VM, for optimal performance (instant cloning and moving between them):
Template parameters:
| Attribute | Description |
|---|---|
| NAME | Datastore name |
| TYPE | SYSTEM_DS |
| TM_MAD | netapp |
| DISK_TYPE | BLOCK |
| NETAPP_HOST | NetApp ONTAP API IP address |
| NETAPP_USER | API username |
| NETAPP_PASS | API password |
| NETAPP_SVM | Storage VM UUID |
| NETAPP_AGGREGATES | Aggregate UUID(s) |
| NETAPP_IGROUP | Initiator group UUID |
| NETAPP_TARGET | iSCSI Target name |
Example template:
```shell
$ cat netapp_system.ds
NAME = "netapp_system"
TYPE = "SYSTEM_DS"
DISK_TYPE = "BLOCK"
DS_MAD = "netapp"
TM_MAD = "netapp"
NETAPP_HOST = "10.1.234.56"
NETAPP_USER = "admin"
NETAPP_PASS = "password"
NETAPP_SVM = "c9dd74bc-8e3e-47f0-b274-61be0b2ccfe3"
NETAPP_AGGREGATES = "280f5971-3427-4cc6-9237-76c3264543d5"
NETAPP_IGROUP = "27702521-68fb-4d9a-9676-efa3018501fc"
NETAPP_TARGET = "iqn.1993-08.org.debian:01:1234"

$ onedatastore create netapp_system.ds
ID: 101
```
Template parameters:
| Attribute | Description |
|---|---|
| NAME | Datastore name |
| TYPE | IMAGE_DS |
| DS_MAD | netapp |
| TM_MAD | netapp |
| DISK_TYPE | BLOCK |
| NETAPP_HOST | NetApp ONTAP API IP address |
| NETAPP_USER | API username |
| NETAPP_PASS | API password |
| NETAPP_SVM | Storage VM UUID |
| NETAPP_AGGREGATES | Aggregate UUID(s) |
| NETAPP_IGROUP | Initiator group UUID |
| NETAPP_TARGET | iSCSI Target name |
Example template:
```shell
$ cat netapp_image.ds
NAME = "netapp_image"
TYPE = "IMAGE_DS"
DISK_TYPE = "BLOCK"
DS_MAD = "netapp"
TM_MAD = "netapp"
NETAPP_HOST = "10.1.234.56"
NETAPP_USER = "admin"
NETAPP_PASS = "password"
NETAPP_SVM = "c9dd74bc-8e3e-47f0-b274-61be0b2ccfe3"
NETAPP_AGGREGATES = "280f5971-3427-4cc6-9237-76c3264543d5"
NETAPP_IGROUP = "27702521-68fb-4d9a-9676-efa3018501fc"
NETAPP_TARGET = "iqn.1993-08.org.debian:01:1234"

$ onedatastore create netapp_image.ds
ID: 102
```
Storage architecture details:

- Images: `one_<datastore_id>_<image_id>` (volume) and `one_<datastore_id>_<image_id>_lun` (LUN)
- VM disks: `one_<vm_id>_disk_<disk_id>` (volume) and `one_<datastore_id>_<vm_id>_disk_<disk_id>_lun` (LUN)

Symbolic links from the System datastore are created for each Virtual Machine on its Host once the LUNs have been mapped.
Occasionally, under network interruptions or if a volume is deleted directly from NetApp, the iSCSI connection may drop or fail. This can cause the system to hang on a `sync` command, which in turn may lead to OpenNebula operation failures on the affected Host. Although the driver is designed to handle these issues automatically, be aware of these potential iSCSI connection problems.
You may wish to contact the OpenNebula Support team to assist in this cleanup; however, here are a few advanced tips to clean these up if you are comfortable doing so:
First, rescan the SCSI bus and flush the stale multipath device:

```shell
rescan_scsi_bus.sh -r -m
multipath -f <multipath_device>
```

If devices persist, follow these steps:

1. Run `dmsetup ls --tree` or `lsblk` to see which mapped devices remain. You may see devices not attached to a mapper entry in `lsblk`.
2. Delete each stale SCSI device, where `sdX` is the device name:

   ```shell
   echo 1 > /sys/block/sdX/device/delete
   ```

3. Remove the leftover device-mapper entry:

   ```shell
   dmsetup remove /dev/mapper/<device_name>
   ```

4. If the removal fails because the device is busy, find what is using it, release it (for example, disable swap if the device is in use as swap), and retry:

   ```shell
   fuser -v $(realpath /dev/mapper/<device_name>)
   swapoff /dev/mapper/<device_name>
   dmsetup remove /dev/mapper/<device_name>
   ```

5. As a last resort, replace the device's table so that stuck I/O errors out instead of hanging, then remove it:

   ```shell
   dmsetup suspend /dev/mapper/<device_name>
   dmsetup wipe_table /dev/mapper/<device_name>
   dmsetup resume /dev/mapper/<device_name>
   dmsetup remove /dev/mapper/<device_name>
   ```
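If you need this cleanup often, the flush-and-remove escalation can be wrapped in one guarded script. A sketch: `DEV` is a placeholder multipath name, and `DRY_RUN=1` (the default here) only prints each command rather than executing it:

```shell
# Sketch: stale-device cleanup escalation with a dry-run guard.
DEV="${DEV:-mpatha}"            # placeholder multipath device name
DRY_RUN="${DRY_RUN:-1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run multipath -f "$DEV"
run dmsetup remove "/dev/mapper/$DEV"
# Last resort: detach the table so stuck I/O fails instead of hanging.
run dmsetup suspend "/dev/mapper/$DEV"
run dmsetup wipe_table "/dev/mapper/$DEV"
run dmsetup resume "/dev/mapper/$DEV"
run dmsetup remove "/dev/mapper/$DEV"
```

Review the printed commands first, then rerun with `DRY_RUN=0` as root only if you are comfortable with the consequences.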
This should resolve most I/O lockups caused by failed iSCSI operations. Please contact the OpenNebula Support team if you need assistance.