Re: How to export tmpfs/ramfs over pNFS?
Posted: 2019/08/21 02:23:30
For those that might be following the saga, here's an update:
I was unable to mount tmpfs using pNFS.
Other people (here and elsewhere) suggested that I use GlusterFS, so I've deployed that and am testing it now.
On my compute nodes, I created a 64 GB RAM drive on each node:
Code: Select all
# mount -t tmpfs -o size=64g tmpfs /bricks/brick1
and edited my /etc/fstab likewise.
I then created the mount points for the GlusterFS volume and then created said volume:
Code: Select all
# gluster volume create gv0 transport rdma node{1..4}:/bricks/brick1/gv0
but that was a no-go when I tried to mount it, so I disabled SELinux (based on the error message that was being written to the log file), deleted the volume, and created it again with:
Code: Select all
# gluster volume create gv0 transport tcp,rdma node{1..4}:/bricks/brick1/gv0
I started the volume up, and I was then able to mount it with:
Code: Select all
# mount -t glusterfs -o transport=rdma,direct-io-mode=enable node1:/gv0 /mnt/gv0
Interestingly enough, when I try to do the same thing directly on /dev/shm, I only max out at around 2.8 GB/s.
Out of all of the test trials, here's the best result that I've been able to get so far. (The results are VERY sporadic and kind of all over the map; I haven't quite figured out why just yet.)
Code: Select all
Code: Select all
[root@node1 gv0]# for i in `seq -w 1 4`; do dd if=/dev/zero of=10Gfile$i bs=1024k count=10240; done
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 5.47401 s, 2.0 GB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 5.64206 s, 1.9 GB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 5.70306 s, 1.9 GB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 5.56882 s, 1.9 GB/s
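As a quick sanity check on the units (dd reports decimal gigabytes), the per-stream rate above converts to line rate by multiplying by 8 bits per byte:

```shell
# dd's best sustained rate above was ~2.0 GB/s per stream.
# 2.0 GB/s * 8 bits/byte = 16 Gbit/s.
awk 'BEGIN { printf "%.0f Gbps\n", 2.0 * 8 }'   # prints: 16 Gbps
```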
So at best right now, with GlusterFS, I'm able to get about 16 Gbps (2.0 GB/s) of throughput on four 64 GB RAM drives (for a total of 256 GB split across four nodes).
Note that this IS with a distributed volume for the time being.
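For anyone wanting to reproduce the dispersed test: I haven't shown the creation of the second volume (gv1, per the shell prompt below), but a four-brick dispersed volume would be created along these lines — the disperse and redundancy counts here are illustrative assumptions, not necessarily what I used:

```shell
# Hypothetical dispersed-volume creation for four bricks:
# disperse 4 with redundancy 1 = 3 data fragments + 1 parity fragment per write.
gluster volume create gv1 disperse 4 redundancy 1 transport tcp,rdma node{1..4}:/bricks/brick1/gv1
gluster volume start gv1
```

The erasure-coding work (splitting each write into data plus parity fragments) is a plausible reason the dispersed numbers below come in well under the distributed ones.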
Here are the results with the dispersed volume:
Code: Select all
[root@node1 gv1]# for i in `seq -w 1 4`; do dd if=/dev/zero of=10Gfile$i bs=1024k count=10240; done
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 19.7886 s, 543 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 20.9642 s, 512 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 20.6107 s, 521 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 21.7163 s, 494 MB/s
It's quite a lot slower.