All of the nodes are also running CentOS 7.6.1810 with the 'Infiniband Support' software group installed (because that release still supports NFSoRDMA).
On the head node, I have four Samsung 860 EVO 1 TB SATA 6 Gbps SSDs in RAID0 through the Marvell 9230 controller on an Asus P9X79-E WS motherboard.
Testing on the head node itself shows that I can get around 21.9 Gbps of total throughput when running:
Code: Select all
$ time -p dd if=/dev/zero of=10Gfile bs=1024k count=10240
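One caveat with that benchmark (my note, not part of the original run): without a sync flag, dd largely measures the page cache rather than the RAID0 array itself, so the 21.9 Gbps figure may be optimistic. A variant of the same command that forces the data to disk before the timer stops would rule that out:

```shell
# Same benchmark, but conv=fdatasync makes dd flush the file to disk
# before exiting, so the page cache can't inflate the reported rate.
# (oflag=direct would bypass the cache entirely, where supported.)
time -p dd if=/dev/zero of=10Gfile bs=1024k count=10240 conv=fdatasync
```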
But when I try to do the same thing over IB, I only get about 8.5 Gbps at best.
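For reference, here is the arithmetic relating those rates back to the 10 GiB dd run above (a sketch of the conversion, not output from the actual runs):

```shell
# Seconds implied for the 10 GiB test file at each observed rate:
# bits transferred = 10 * 1024^3 bytes * 8 = 85,899,345,920 bits.
awk 'BEGIN { bits = 10 * 1024^3 * 8
             printf "local:    %.1f s\n", bits / 21.9e9   # ~3.9 s
             printf "NFSoRDMA: %.1f s\n", bits / 8.5e9 }' # ~10.1 s
```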
NFSoRDMA is configured properly.
Here is /etc/exports:
Code: Select all
/home/cluster *(rw,async,no_root_squash,no_all_squash,no_subtree_check)
And here is /etc/rdma/rdma.conf:
Code: Select all
# Load IPoIB
IPOIB_LOAD=yes
# Load SRP (SCSI Remote Protocol initiator support) module
SRP_LOAD=yes
# Load SRPT (SCSI Remote Protocol target support) module
SRPT_LOAD=yes
# Load iSER (iSCSI over RDMA initiator support) module
ISER_LOAD=yes
# Load iSERT (iSCSI over RDMA target support) module
ISERT_LOAD=yes
# Load RDS (Reliable Datagram Service) network protocol
RDS_LOAD=no
# Load NFSoRDMA client transport module
XPRTRDMA_LOAD=yes
# Load NFSoRDMA server transport module
SVCRDMA_LOAD=yes
# Load Tech Preview device driver modules
TECH_PREVIEW_LOAD=no
Here is the entry in the client's /etc/fstab:
Code: Select all
aes0:/home/cluster /home/cluster nfs defaults,rdma,port=20049 0 0
And the resulting mount as seen on the client:
Code: Select all
aes0:/home/cluster on /home/cluster type nfs4 (rw,relatime,vers=4.1,rsize=1048576,wsize=1048576,namlen=255,hard,proto=rdma,port=20049,timeo=600,retrans=2,sec=sys,clientaddr=xxxxxxx,local_lock=none,addr=xxxxxxx)
On the head node itself, the exported filesystem is mounted locally:
Code: Select all
$ mount
...
/dev/sdb1 on /home/cluster type xfs (rw,relatime,attr2,inode64,noquota)
...
I don't really understand why the NFSoRDMA mount appears to be capped below 10 Gbps.
Your help is greatly appreciated.
Thank you.