[tech] Molmol the slightly more fileserver

David Adam zanchey at ucc.gu.uwa.edu.au
Fri Aug 15 15:46:07 WST 2014


Thanks to [SZM] for getting Debian+ZFS configured on Molmol.

I won't be at the tech meeting tonight so here is what I've done in the 
last few weeks. The short story is that actual NFS performance on 
Debian+ZFS was worryingly terrible.

I made some changes to Molmol over the last couple of weeks. The initial
ZFS setup had everything in RAIDZ. I redid the partitioning and the RAID
setup as detailed on the wiki page (http://wiki.ucc.asn.au/Molmol).

I also installed FreeBSD and configured an identical array (the ZFS 
versions in Debian 7.6 and FreeBSD 10.0 are slightly different, so I had 
to destroy and recreate the array each time). This is also detailed on 
the wiki page.
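Each OS switch meant rebuilding the pool from scratch, i.e. something 
like the following (the pool name and disk names here are illustrative; 
the wiki page has the real layout):

    # under the outgoing OS, tear the old pool down:
    zpool destroy space
    # under the new OS, recreate the same layout:
    zpool create space raidz2 da0 da1 da2 da3 da4 da5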

I wasn't sure how to run iozone, and I wanted to run some slightly 
different benchmarks anyway, because I'm more interested in "real-world" 
tasks than in 4k writes. I'm on holiday overseas at the moment so these 
aren't terribly rigorous, but I tried untarring the Linux 3.16-rc7 
tarball in various ways. First I decompressed it and stowed it in /tmp 
on Motsugo, which (I think) is a RAM-backed tmpfs.

Then I ran `time tar xf /tmp/linux-3.16-rc7.tar` into the various storage 
options.
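For anyone repeating this, the whole procedure was roughly as follows 
(the tarball URL and target path are illustrative, not a record of 
exactly what I typed):

    # on Motsugo: put the decompressed tarball on tmpfs first, so
    # reading the source isn't a bottleneck
    cd /tmp
    wget https://www.kernel.org/pub/linux/kernel/v3.x/testing/linux-3.16-rc7.tar.xz
    unxz linux-3.16-rc7.tar.xz

    # then, in each filesystem under test (local disk or NFS mount):
    cd /path/to/target
    time tar xf /tmp/linux-3.16-rc7.tar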

With Molmol booted into FreeBSD, it untars locally (i.e. not over NFS) in 
6 seconds.

With Molmol booted into FreeBSD and mounted over NFSv3 to Motsugo (using 
wsize/rsize=8192), it untars in about 3m32s. Increasing wsize to 32768 
didn't change things much (3m26s).
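(The client-side mounts were along these lines; the second run just 
changed the wsize option:)

    # NFSv3 mount on Motsugo with 8k transfer sizes
    mount -t nfs -o vers=3,rsize=8192,wsize=8192 molmol:/space /mnt
    # second run: larger writes
    mount -t nfs -o vers=3,rsize=8192,wsize=32768 molmol:/space /mnt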

With Molmol booted into Debian and mounted over NFSv3 to Motsugo (using 
rsize/wsize=8192) performance was much, much worse: 22 minutes!

For comparison, the same operation against /services (from Nortel) was 
2m3s.

I'm not sure why the performance is so terrible on Debian. Obviously 
there are lots of gaps in the matrix I haven't filled in (what it's like 
locally on Debian+ZFS, what performance on the filesystems mounted from 
Mylah is like), and there are times when our network performance goes 
through the floor (down from gigabit to 200 megabit or so) which I 
haven't got to the bottom of.
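One theory worth testing when someone next has console access: untarring 
over NFS generates a stream of synchronous writes (the server commits 
each file to stable storage), which ZFS is known to handle badly without 
a separate log device, so the ZIL may be what's killing us on Debian. A 
quick (and unsafe, testing-only) way to check, assuming the pool is 
called space:

    # watch the pool while a benchmark runs - lots of tiny writes
    # would point at synchronous I/O being the problem
    zpool iostat -v space 1

    # temporarily ignore sync requests - UNSAFE, for testing only
    zfs set sync=disabled space
    # ... re-run the untar benchmark, then put it back:
    zfs set sync=standard space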

The other interesting thing on Debian+ZFS is a strange intersection of ZFS 
datasets (which are like sub-filesystems, I suppose) and NFS mounting.

Create a ZFS pool with a dataset (/space) and a dataset within it 
(/space/scratch), then mount the dataset /space over NFS (`mount -t nfs 
-o wsize=8192,rsize=8192,vers=3 molmol:/space /mnt` or so). The contents 
of `/mnt` will be `/mnt/scratch/`, but if you write any data into 
`/mnt/scratch/` it doesn't end up in the /space/scratch dataset. 
Instead, it's written into a plain subdirectory of `/space` called 
scratch, which sits hidden underneath the dataset's mountpoint and 
doesn't appear on the server unless you destroy the /space/scratch 
dataset.
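I assume this is because an NFSv3 mount of /space doesn't cross the 
dataset boundary on the server, so the client is really writing into the 
empty directory that the child dataset is mounted over. To spell out the 
reproduction (assuming the pool and datasets above):

    # on Molmol (server):
    zfs create space/scratch   # child dataset, mounted at /space/scratch

    # on the client:
    mount -t nfs -o wsize=8192,rsize=8192,vers=3 molmol:/space /mnt
    touch /mnt/scratch/hello   # succeeds, and is visible from the client
    ls /space/scratch          # ...but on the server this stays empty:
                               # the file went into the plain directory
                               # the dataset is mounted on top of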

It would be worth seeing whether this happens under FreeBSD as well, but I 
don't think it does. That would definitely need to be sorted out before we 
used Debian+ZFS on Molmol.
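If we do stick with Debian, I'd guess the fix is either to export the 
parent with crossmnt or to export and mount each dataset separately; 
untested, but something like:

    # option 1: /etc/exports on Molmol - crossmnt lets clients
    # descend into child datasets
    /space  motsugo(rw,no_subtree_check,crossmnt)

    # option 2: share each dataset in its own right, mount it directly
    zfs set sharenfs=on space/scratch
    mount -t nfs -o vers=3 molmol:/space/scratch /mnt/scratch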

Hopefully that made some sense.

David Adam (UTC+0100)
zanchey at ucc.gu.uwa.edu.au

On Sun, 27 Jul 2014, Sam Moore wrote:
> I have used iozone to generate some stats about /away (from 
> enron+stearns->mylah), /services (from nortel) and /there (from molmol) 
> on motsugo.
> 
> The results are at http://molmol.ucc.asn.au/iozone
> 
> Apparently there was a plan that I didn't follow, so after determining 
> what the plan actually was I might follow it and do this again. Feel 
> free to follow the plan if you do know what it is and I haven't done it yet.
> 
> [SZM]
> 
> On 27/07/14 somewhere near 6pm, Sam Moore wrote:
> > I have installed Debian 7.6 (Wheezy) on Molmol. It has the parts of the
> > SOE that upset me the least.
> >
> > It has ZFS and is exporting /there to motsugo. So I guess we can move
> > all the files over /there now*.
> >
> > Some more information is at http://wiki.ucc.asn.au/Molmol
> >
> > Thanks to [SLX] and [GEE] for helping and [BG3] for lifting things.
> >
> > [SZM]
> >
> > * Probably not. I just really wanted to say that.

Cheers,

David Adam
zanchey at ucc.gu.uwa.edu.au
Ask Me About Our SLA!