I was unable to mount tmpfs using pNFS.
Other people (here and elsewhere) suggested that I use GlusterFS, so I've deployed that and am testing it now.
I created a 64 GB RAM drive on each of my compute nodes:
Code:
# mount -t tmpfs -o size=64g tmpfs /bricks/brick1
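To set the same RAM drive up on all four nodes in one shot, something like this should work (a sketch, assuming passwordless root ssh to node1 through node4):
Code:
# for n in node{1..4}; do ssh "$n" 'mkdir -p /bricks/brick1 && mount -t tmpfs -o size=64g tmpfs /bricks/brick1'; done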
I then created the mount points for the GlusterFS volume and created the volume itself, first trying RDMA-only transport and then both TCP and RDMA:
Code:
# gluster volume create gv0 transport rdma node{1..4}:/bricks/brick1/gv0
Code:
# gluster volume create gv0 transport tcp,rdma node{1..4}:/bricks/brick1/gv0
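One step not shown above: a GlusterFS volume has to be started before it can be mounted:
Code:
# gluster volume start gv0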
Code:
# mount -t glusterfs -o transport=rdma,direct-io-mode=enable node1:/gv0 /mnt/gv0
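To sanity-check that the client actually mounted the volume:
Code:
# df -h /mnt/gv0
# mount | grep gluster
Then, sequential writes from node1: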
Code:
[root@node1 gv0]# for i in `seq -w 1 4`; do dd if=/dev/zero of=10Gfile$i bs=1024k count=10240; done
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 5.47401 s, 2.0 GB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 5.64206 s, 1.9 GB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 5.70306 s, 1.9 GB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 5.56882 s, 1.9 GB/s
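A caveat on my own numbers here: dd from /dev/zero without oflag=direct or conv=fdatasync can partly measure the client's page cache rather than what actually reached the bricks (direct-io-mode=enable on the mount should mitigate this, but I haven't verified that). A more conservative variant would be:
Code:
# dd if=/dev/zero of=10Gfile bs=1024k count=10240 conv=fdatasync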
So at best right now, with GlusterFS, I'm able to get about 16 Gbps (2.0 GB/s) of write throughput to four 64 GB RAM drives (256 GB total, split across four nodes).
Note that this is with a distributed volume for the time being.
Here are the results with the dispersed volume:
Code:
[root@node1 gv1]# for i in `seq -w 1 4`; do dd if=/dev/zero of=10Gfile$i bs=1024k count=10240; done
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 19.7886 s, 543 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 20.9642 s, 512 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 20.6107 s, 521 MB/s
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 21.7163 s, 494 MB/s
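For reference, a four-brick dispersed volume like gv1 would be created along these lines (the disperse/redundancy counts here are an example, not necessarily what I used):
Code:
# gluster volume create gv1 disperse 4 redundancy 1 transport tcp,rdma node{1..4}:/bricks/brick1/gv1
The drop from ~2 GB/s to ~500 MB/s versus the distributed volume is expected, since a dispersed volume erasure-codes every write across all of the bricks.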