[tech] Staging server storage purchase, was Re: [ucc] Minutes of Meeting on 28th May 2020

Nick Bannon nick at ucc.gu.uwa.edu.au
Sun Aug 22 23:49:00 AWST 2021


On Thu, May 28, 2020 at 09:19:25PM +0800, Alexie Wallace wrote:
>                     UCC Committee Meeting Minutes 2020-05-28
[...]
>     Motion: Approve the request made at the Wheel meeting for $1300 for backup
>     drives
> 
>      • Passed 5-0-0
>      • Will need to email Tech@ with the ones we want to buy
>      • Cheapest one that will meet the specs that we want
>      • Action [MPT]: to email which ones to buy before purchase is made

I'm going to action this imminently and buy a pair of Seagate Exos X16
14TB (helium) drives.
https://www.seagate.com/au/en/enterprise-storage/exos-drives/exos-x-drives/exos-x16/

- There's some possibility of more/larger drives for an excellent price,
  if there's rapid support for the notion
  - The first pair ought to be barely adequate for now, but being able
    to try multiple of the below approaches without more human effort
    (the usual limiting factor) will be worthwhile
  - Some WDs may have the "3.3V SATA pin" issue, which is easy to work
    around - or perhaps they will just slot into a DELL R710 and run?
    - https://www.instructables.com/How-to-Fix-the-33V-Pin-Issue-in-White-Label-Disks-/

Nick.

On Wed, Mar 24, 2021 at 01:08:37PM +0800, Nick Bannon wrote:
[...]
> 1. Staging backups for in-clubroom server
> rebuilds/upgrades/rearrangements, separate to the offsite legacy
> backups. We've already budgeted for the bulk HDD storage we need to
> start this plan. Until now I was aiming to start with an old IBM/Lenovo
> x-series server and replace the drive controller/HBA with a secondhand
> LSI9207-8i I picked up.
[...]
> So, for 1. all we need is:
> 
> - at least two 10TB+ 3.5" HDDs, more like 14-18 TB at current prices:
>   - Careful to avoid the SMRs - only the smallest and now the very
>     largest have been SMR
>   - https://staticice.com.au/cgi-bin/search.cgi?q=14TB+exos
>   - https://www.ozbargain.com.au/node/612624 shuckables and the like
>     are almost irresistible; put the savings into spares
>   - room for a spare for mirror rebuilding, and/or expanding with
>     another pair
>   - No RAID-5 https://www.baarf.dk/
> - SSD or two for boot/L2ARC/ZIL/SLOG/bcache
> - Mounting for the above
>   (Looks like we can print the drive sleds/trays/caddies!
>    https://www.thingiverse.com/thing:2168447
>   )
> - Access through the DELL PERC6/i/H200/H700/H800... or the LSI9207-8i
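The shopping list above implies a pool layout along these lines. This is only a sketch, assuming ZFS, a mirrored 14TB pair, and one SSD partitioned for SLOG and L2ARC; the device names are hypothetical placeholders, and the `run` wrapper just prints each command (dry-run) rather than executing it:

```shell
#!/bin/sh
# Dry-run wrapper: print the command instead of executing it.
run() { printf '+ %s\n' "$*"; }

# Mirrored pair; /dev/disk/by-id names survive controller/port reshuffles.
# (Device names below are hypothetical.)
run zpool create staging mirror \
    /dev/disk/by-id/ata-ST14000NM001G-DRIVE_A \
    /dev/disk/by-id/ata-ST14000NM001G-DRIVE_B

# SSD split in two: a small SLOG partition for sync writes,
# the remainder as L2ARC read cache.
run zpool add staging log   /dev/disk/by-id/ata-SSD-part1
run zpool add staging cache /dev/disk/by-id/ata-SSD-part2

# Expanding with another pair later is just a second mirror vdev,
# which sidesteps RAID-5 entirely:
run zpool add staging mirror \
    /dev/disk/by-id/ata-ST14000NM001G-DRIVE_C \
    /dev/disk/by-id/ata-ST14000NM001G-DRIVE_D
```

A cold spare on the shelf, rather than in the pool, keeps its hours at zero until a mirror member actually fails.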
> 
> Most kinds of upgrades to motsugo or molmol (our fileservers), or
> our larger VMs, are disruptive. To help, we need a target we can
> send whole snapshot filesystems and system images to and from at
> gigabit/dual-gigabit/10G sorts of speeds. Some flexibility is needed;
> we'll probably want to at least try most of the following at
> multi-terabyte scale:
> 
> - dd if=/dev/...
> - zfs/btrfs send/receive
> - borg/borgmatic ( aggregates all the little files, makes an offsite
>   rclone much happier)
> - a target for https://pbs.proxmox.com/ ?
> - a https://tracker.debian.org/pkg/moosefs chunkserver ?
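The zfs send/receive item above is the one most sensitive to snapshot bookkeeping, so here is a sketch of the full-then-incremental cycle. Hostnames, pool and dataset names, and snapshot labels are all hypothetical; as before, `run` only prints the commands (dry-run):

```shell
#!/bin/sh
# Dry-run wrapper: print the command instead of executing it.
run() { printf '+ %s\n' "$*"; }

# Initial full replication: snapshot recursively, stream the whole
# dataset tree to the staging box, received but left unmounted (-u).
run zfs snapshot -r tank/vms@staging-1
run "zfs send -R tank/vms@staging-1 | ssh staging zfs receive -u staging/vms"

# Subsequent runs send only the delta between the common snapshot
# and the new one (-I includes intermediate snapshots).
run zfs snapshot -r tank/vms@staging-2
run "zfs send -R -I @staging-1 tank/vms@staging-2 | ssh staging zfs receive -u staging/vms"
```

The key operational constraint is that the old snapshot must survive on both ends until the incremental completes; a dd image or a borg repository has no such shared state to track.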
> 
> From outside, this is going to look a lot like molmol with extra space,
> but it's time we tried to pin down why molmol's latency goes up so much
> when one client starts a single big boring copy or VM clone.
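One way to start pinning down that latency question is to watch per-vdev latency and ARC behaviour while a big copy runs. A sketch of the observation commands (pool name hypothetical; `run` prints rather than executes):

```shell
#!/bin/sh
# Dry-run wrapper: print the command instead of executing it.
run() { printf '+ %s\n' "$*"; }

# Per-vdev request latencies, refreshed every 5 seconds: shows whether
# one disk, or the whole pool, is slow during the big copy.
run zpool iostat -l -v molmol 5

# Total-wait histograms: distinguishes time spent queued from time
# spent waiting on the disk itself.
run zpool iostat -w molmol 5

# ARC hit rates: a large streaming copy can evict everyone else's
# working set, which would show up as a hit-rate collapse here.
run arcstat 5
```

If the streaming copy is the culprit, capping its ARC footprint or scheduling it off-peak becomes a targeted fix rather than a guess.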
> 
> Nick.

-- 
   Nick Bannon   | "I made this letter longer than usual because
nick-sig at rcpt.to | I lack the time to make it shorter." - Pascal

