External NFS and iSCSI Storage Setup for oVirt/RHV Nodes

Mindwatering Incorporated

Author: Tripp W Black

Created: 04/03 at 02:04 PM

 

Category:
Linux
Other

Task
Setup External NFS and iSCSI for use with oVirt.



NFS Setup:


1. Install CentOS Stream 10 or Rocky Linux on the storage host.

Notes:
- Disks 0 and 1 are mirrored (RAID 1) as the "system" disk - "/". Disks 2, 3, 4, and 5 are set up as a RAID 10 array, mounted via /etc/fstab at /local/storage/
- Hostname: stornfs1.mindwatering.net
- First User: myadminid
- Network: 10.0.42.x/24
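
As an illustration of the fstab mapping mentioned above, a RAID 10 md device mounted at /local/storage might look like the entry below. The device name /dev/md0 and the xfs file system are assumptions for the sketch; use the actual array device and file system on your host:

```
# /etc/fstab - illustrative entry only; /dev/md0 and xfs are assumptions
/dev/md0    /local/storage    xfs    defaults    0 0
```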

2. Install NFS utils and software:
$ ssh myadminid@stornfs1.mindwatering.net
<enter pwd if not key based>

Install tools:
$ sudo su -
# dnf install nfs-utils -y
<wait>

Review NFS versions supported:
# cat /proc/fs/nfsd/versions
<review: e.g. +3 +4 +4.1 +4.2>
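
If older protocol versions are not wanted, they can be disabled. On recent RHEL-family releases this is done in /etc/nfs.conf; the fragment below is a sketch of an NFSv4-only configuration (confirm the option names against nfs.conf(5) on your release):

```
# /etc/nfs.conf - sketch: serve NFSv4.x only
[nfsd]
vers3=n
vers4=y
vers4.1=y
vers4.2=y
```

Restart nfs-server afterwards and re-check /proc/fs/nfsd/versions.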

Enable services:
# systemctl enable nfs-server
# systemctl enable rpcbind


3. Create the storage user and group:
Create the group with the desired GID (group ID). oVirt expects the kvm group as GID 36:
# groupadd -g 36 kvm

Create the user with the matching UID (36). oVirt expects the vdsm user as UID 36:
# useradd -u 36 -g kvm vdsm

Give the new user and group ownership of the storage location on the disk array already added/mapped:
Note: You can also use 750 so other local users have no access if the NFS storage server has any normal users.
# chmod 755 /local/storage
# chown vdsm:kvm /local/storage
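
To confirm the directory ends up with the ownership and mode oVirt expects (UID:GID 36:36, i.e. vdsm:kvm), stat can print them numerically. A sketch against a scratch directory; on the real storage host, point DIR at /local/storage instead:

```shell
# Sketch: verify ownership and permissions numerically. On the storage
# host, DIR would be /local/storage and the expected output "36:36 755".
DIR=$(mktemp -d)          # scratch stand-in for /local/storage
chmod 755 "$DIR"
stat -c '%u:%g %a' "$DIR"
rm -rf "$DIR"
```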

Update the exports file adding read/write access to storage location:
- minimal example:
# vi /etc/exports
...
/local/storage *(rw)
...
<esc>:wq (to save)

- more realistic example, allowing localhost and local network access only:
# vi /etc/exports
...
/local/storage 127.0.0.1(rw,async,no_root_squash,no_subtree_check) 10.0.42.0/24(rw,async,no_root_squash,no_subtree_check)
...
<esc>:wq (to save)
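
Since the same option string repeats for every allowed network, the exports line can also be generated from variables, which keeps the options consistent when the subnet list changes. A small sketch using the values from the example above:

```shell
# Sketch: generate the /etc/exports line from variables instead of
# hand-editing, to keep options consistent across networks.
EXPORT_PATH=/local/storage
OPTS="rw,async,no_root_squash,no_subtree_check"
SUBNETS="127.0.0.1 10.0.42.0/24"

line="$EXPORT_PATH"
for net in $SUBNETS; do
  line="$line $net($OPTS)"
done
echo "$line"
# -> /local/storage 127.0.0.1(rw,async,no_root_squash,no_subtree_check) 10.0.42.0/24(rw,async,no_root_squash,no_subtree_check)
```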

If using NFS 3 and we wish to restrict the ports used, we update NFS for static ports. For NFS 4, skip this step.
On RHEL-family distributions the static ports are set in /etc/nfs.conf (older releases used /etc/sysconfig/nfs; /etc/default/nfs-kernel-server is the Debian/Ubuntu location):
# vi /etc/nfs.conf
...
[mountd]
port=892

[statd]
port=662

[lockd]
port=32803
udp-port=32769
...
<esc>:wq (to save)

If rpc.rquotad is used, its port (875, matched by the firewall rules below) is configured separately; check the rquotad service's own sysconfig file on your release.

Activate the share:
# exportfs -ra

Verify:
# exportfs -v
<confirm /local/storage listed for each IP/subnet added>

Reload services:
# systemctl restart nfs-server
# systemctl status nfs-server
# systemctl restart rpcbind
# systemctl status rpcbind

Review ports listening:
# rpcinfo -p localhost | grep nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
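
To check which NFS versions are registered without eyeballing the full table, the rpcinfo output can be filtered (100003 is the well-known RPC program number for NFS). A sketch using a captured sample in place of a live rpcinfo call:

```shell
# Sketch: extract registered NFS versions from rpcinfo-style output.
# The sample variable stands in for `rpcinfo -p localhost` on a live host.
sample='100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs'
printf '%s\n' "$sample" | awk '$5 == "nfs" {print $2}' | sort -u
# -> prints 3 and 4, one per line
```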



4. Set-up the firewall ports:
Confirm the active zone (typically public):
# firewall-cmd --get-active-zones
public
interfaces: eno16712345

List current rules:
# firewall-cmd --list-all

Notes:
- When adding, you can add open for any clients anywhere or restrict to just the current network
- For NFS 4-only clients, strictly only port 2049 (NFS) is required; 111 (rpcbind) and 20048 (mountd) are still needed for NFS 3 clients and tools such as showmount
e.g. firewall-cmd --zone=public --add-port=111/tcp --permanent vs. firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.42.0/24" port protocol="tcp" port="111" accept'

For NFS 4+, open the 3 ports needed (single outer quotes keep the inner double quotes intact):
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.42.0/24" port protocol="tcp" port="111" accept'
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.42.0/24" port protocol="udp" port="111" accept'
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.42.0/24" port protocol="tcp" port="2049" accept'
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.42.0/24" port protocol="tcp" port="20048" accept'

For NFS 3, open the NFS ports to the local network:
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.42.0/24" port protocol="tcp" port="111" accept'
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.42.0/24" port protocol="udp" port="111" accept'
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.42.0/24" port protocol="tcp" port="2049" accept'
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.42.0/24" port protocol="tcp" port="32803" accept'
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.42.0/24" port protocol="udp" port="32769" accept'
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.42.0/24" port protocol="tcp" port="892" accept'
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.42.0/24" port protocol="udp" port="892" accept'
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.42.0/24" port protocol="tcp" port="875" accept'
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.42.0/24" port protocol="udp" port="875" accept'
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.42.0/24" port protocol="tcp" port="662" accept'
# firewall-cmd --permanent --zone=public --add-rich-rule='rule family="ipv4" source address="10.0.42.0/24" port protocol="udp" port="662" accept'
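
The NFS 3 rules above differ only in port and protocol, so they can be generated from a list, which avoids copy/paste slips. A sketch that prints the commands for review rather than executing them:

```shell
# Sketch: print the NFSv3 firewall-cmd rich rules from a port list.
# Review the output, then paste it (or pipe to sh) as root.
NET=10.0.42.0/24
PORTS="111/tcp 111/udp 2049/tcp 32803/tcp 32769/udp 892/tcp 892/udp 875/tcp 875/udp 662/tcp 662/udp"

for p in $PORTS; do
  port=${p%/*}     # part before the slash
  proto=${p#*/}    # part after the slash
  echo "firewall-cmd --permanent --zone=public --add-rich-rule='rule family=\"ipv4\" source address=\"$NET\" port protocol=\"$proto\" port=\"$port\" accept'"
done
# -> prints 11 firewall-cmd commands
```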

Reload the firewall to apply the new rules:
# firewall-cmd --reload

Confirm loaded and running:
# firewall-cmd --state
running

Review rules (rich rules do not appear under --list-services):
# firewall-cmd --zone=public --list-rich-rules



iSCSI Setup:


Notes for oVirt:
- oVirt does NOT support 4K block sizes. We must use legacy 512-byte blocks.
- If VMs use block storage on raw devices/direct LUNs managed with LVM (Logical Volume Manager), those volumes can be auto-activated when the host boots, which can cause corruption. Create a filter by running vdsm-tool config-lvm-filter.
- If connectivity to a back-end iSCSI SAN is lost intermittently, the storage file system goes read-only and remains read-only after the connection is restored. Configure multipath to queue I/O while paths are down (no_path_retry queue), as below.
- In this example, the iSCSI target backends are created from a backstores/ directory appropriately mapped to its disk array.
Add a drop-in multipath configuration:
# vi /etc/multipath/conf.d/host.conf
multipaths {
    multipath {
        wwid boot_LUN_wwid
        no_path_retry queue
    }
}

- Backend types:
- - fileio backstore: Use a fileio storage object when using regular files on the local file system as disk images (e.g. mapped files).
- - block backstore: Use a block storage object when using any local block device and logical device.
- - pscsi backstore: Use a pscsi storage object for direct pass-through of SCSI commands to a physical SCSI device.
- - ramdisk backstore: Create a ramdisk storage object for a temporary RAM backed device.


1. Create the iSCSI storage interconnect, target, and backend:
$ ssh myadminid@storiscsi1.mindwatering.net
<enter pwd if not key based>

Install targetcli tool:
$ sudo su -
# dnf install targetcli -y
<wait>

Enable and start:
# systemctl enable target
# systemctl start target
# systemctl status target
<view loaded/running>

View config and add first target:
# targetcli
/> ls
o- /........................................[...]
o- backstores.............................[...]
| o- block.................[Storage Objects: 0]
| o- fileio................[Storage Objects: 0]
| o- pscsi.................[Storage Objects: 0]
| o- ramdisk...............[Storage Objects: 0]
o- iscsi...........................[Targets: 0]
o- loopback........................[Targets: 0]

Navigate to the iscsi folder:
/> iscsi/

Create the target:
/iscsi> create iqn.2026-01.net.mindwatering
Created target iqn.2026-01.net.mindwatering.
Created TPG 1.
...
/iscsi> ls
<view the new target added>

Navigate to the backstores and create the storage object that will back the target:
/iscsi> /backstores/fileio
/backstores/fileio> create file1 /local/storage/disk1.img 239M write_back=false
Created fileio file1...
/backstores/fileio> ls
<review new storage added>
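
The backstore is not reachable by initiators until it is exported as a LUN under the target's TPG and an ACL grants the node access. The article does not show those steps; the session below is a typical sketch, and the initiator IQN iqn.2026-01.net.mindwatering:node1 is hypothetical - read the real value from /etc/iscsi/initiatorname.iscsi on the oVirt node:

```
# Sketch only - the initiator IQN below is hypothetical.
/backstores/fileio> cd /iscsi/iqn.2026-01.net.mindwatering/tpg1/luns
/iscsi/iqn.2026-01.net.mindwatering/tpg1/luns> create /backstores/fileio/file1
/iscsi/iqn.2026-01.net.mindwatering/tpg1/luns> cd ../acls
/iscsi/iqn.2026-01.net.mindwatering/tpg1/acls> create iqn.2026-01.net.mindwatering:node1
/iscsi/iqn.2026-01.net.mindwatering/tpg1/acls> cd /
/> saveconfig
```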


2. Open the firewall for client connections:
- For connections from anywhere:
# firewall-cmd --permanent --add-port=3260/tcp
- For connections restricted to the current network:
# firewall-cmd --permanent --zone=public --add-rich-rule="rule family="ipv4" source address="10.0.42.0/24" port protocol="tcp" port="3260" accept"

Reload the firewall to apply the new rules:
# firewall-cmd --reload

Confirm loaded and running:
# firewall-cmd --state
running

Review rules:
# firewall-cmd --list-services
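
From an oVirt node, the export can be sanity-checked before adding the storage domain in the engine. A sketch using iscsiadm (the target host name matches the example above; the IQN is whatever ls showed in targetcli; iscsi-initiator-utils must be installed on the node):

```
# Sketch: discover and log in from a client node (run as root).
iscsiadm -m discovery -t sendtargets -p storiscsi1.mindwatering.net
iscsiadm -m node -T iqn.2026-01.net.mindwatering -p storiscsi1.mindwatering.net --login
lsblk    # the new LUN should appear as an additional disk
```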




