From frekk at ucc.asn.au  Sat Mar  2 20:57:16 2019
From: frekk at ucc.asn.au (Felix von Perger)
Date: Sat, 2 Mar 2019 20:57:16 +0800
Subject: [tech] Fwd: Prices for Stuff from Wholesale & Ebay Links
In-Reply-To:
References:
Message-ID: <2b0eb662-9f71-86ff-0ef6-6342a80bef69@ucc.asn.au>

Here are some sample prices for equipment from Melissa - note that the
wholesaler names have been redacted.

Regarding monitors, I'd be tempted to stick with the 24" variety, since
those are approximately the same size as the others we already have, and
they are only 1080p resolution anyway.

- [FVP]

-------- Forwarded Message --------
Subject: Prices for Stuff from Wholesale
Date: Fri, 1 Mar 2019 14:07:35 +0800
From: Melissa Star
To: president at ucc.asn.au

Hi Felix and Everyone,

Just reconfirming prices and stock availability for various things. Some
things are in stock, some are not - check the notes.

                                        Wholesaler 1   Wholesaler 2
SAMSUNG 860 QVO - 1TB (35K IOPS)        $160           $158
    Wholesaler 1 has Perth availability from 6th March only. Wholesaler 2
    has plenty of stock in Melbourne but can courier over.

SAMSUNG 860 EVO - 1TB (98/99K IOPS)     X              $189
    Wholesaler 2 currently has Perth stock; Wholesaler 1 does not.

SAMSUNG 860 QVO - 2TB                   X              $302 (!!!)
    Exactly 8 in stock in Perth, more in Melbourne.

SAMSUNG EVO M.2 - 1TB                   X              $189
    ETA late next week.

INTEL 660P M.2 - 1TB (150K+ IOPS)       $216
    2 in Perth, about 8 others around the place. Rare. More stock at
    Wholesaler 2 but $10 more expensive.

AOC 27" 1ms 144Hz Full HD FreeSync Frameless Gaming Monitor - G2790PX
$365 + GST. Stock in WA NOW (7 in stock)

AOC 24.5" 1ms 144Hz Full HD FreeSync Frameless Gaming Monitor - G2590PX
$300 + GST. Stock in WA on 15th March, we have to reserve.

------------------------------------------------------------------------

Here are some relevant Ebay links:

Dell R710:
https://www.ebay.com.au/itm/Dell-R710-2-x-E5520-2-27Ghz-48GB-RAM-2-x-PSU/173751309985?_trkparms=aid%3D111001%26algo%3DREC.SEED%26ao%3D1%26asc%3D20160908131621%26meid%3D6ceb75c482b9414fa39c8c113dc37e3f%26pid%3D100678%26rk%3D1%26rkt%3D15%26sd%3D173751309985%26itm%3D173751309985&_trksid=p2481888.c100678.m3607&_trkparms=pageci%3A221f2b2e-3be8-11e9-9945-74dbd18015f3%7Cparentrq%3A37dc542d1690ab67025673bdfffd48fa%7Ciid%3A1

Compatibility with XEON 56XX processors:
https://store.flagshiptech.com/dell-poweredge-r710-intel-xeon-5600-series-cpus-processors/?sort=featured&page=2

Rack shelves of various types:
https://www.ebay.com.au/itm/1U-SLIDE-SHELF-19-Adjustable-Depth-300mm-Suit-19-600mm-Deep-Server-Rack/122436251015?hash=item1c81c4ed87:g:u80AAOSwInxXOAKQ:rk:1:pf:1&frcectupt=true
https://www.ebay.com.au/itm/1U-FIXED-SHELF-700mm-Suit-19-Inch-1000mm-Deep-Free-Standing-Server-Cabinet/122436231008?_trkparms=aid%3D222007%26algo%3DSIM.MBE%26ao%3D2%26asc%3D20140106155344%26meid%3D84fd2bcf86f049a297719d11ab4f1f5f%26pid%3D100005%26rk%3D4%26rkt%3D12%26sd%3D122436251015%26itm%3D122436231008&_trksid=p2047675.c100005.m1851

-------------- next part --------------
An HTML attachment was scrubbed...
URL: https://lists.ucc.gu.uwa.edu.au/pipermail/tech/attachments/20190302/67ecfb06/attachment.htm

From zanchey at ucc.gu.uwa.edu.au  Sat Mar  2 21:27:24 2019
From: zanchey at ucc.gu.uwa.edu.au (David Adam)
Date: Sat, 2 Mar 2019 21:27:24 +0800 (AWST)
Subject: [tech] Dead disk in Molmol
Message-ID:

Hi all,

Molmol has dropped one of its SSDs:

Feb 26 14:15:10 molmol kernel: ahcich1: Timeout on slot 25 port 0
Feb 26 14:15:10 molmol kernel: ahcich1: is 00000000 cs 02000000 ss 00000000 rs 02000000 tfd c0 serr 00000000 cmd 0004d917
Feb 26 14:15:10 molmol kernel: (ada1:ahcich1:0:0:0): FLUSHCACHE48. ACB: ea 00 00 00 00 40 00 00 00 00 00 00
Feb 26 14:30:34 molmol kernel: (ada1:ahcich1:0:0:0): CAM status: Command timeout
Feb 26 14:30:34 molmol kernel: (ada1:ahcich1:0:0:0): Retrying command
Feb 26 14:30:34 molmol kernel: ahcich1: AHCI reset: device not ready after 31000ms (tfd = 00000080)
(etc.)

It's detached from the bus and won't reattach. The device is a Samsung SSD
840 PRO Series DXM05B0Q (s/n S1ATNSAD864731A) - note that there are two of
these in the machine! I'm not sure whether it is hotpluggable or not.

This SSD was providing one half of the SLOG mirror [1] and a RAID partition
for the root filesystem. The other half is provided by the other Samsung
840 PRO:

zfs pool status:

        NAME                     STATE     READ WRITE CKSUM
        logs
          mirror-4               DEGRADED     0     0     0
            5535644740799039914  REMOVED      0     0     0  was /dev/gpt/molmol-slog
            gpt/molmol-slog0     ONLINE       0     0     0

Checking status of gmirror(8) devices:

                   Name    Status  Components
        mirror/gmirror0  DEGRADED  ada0p2 (ACTIVE)

If one has gone, I suspect the other is not far behind (SLOG devices do a
lot of writing), so it is probably worth replacing at least one and
possibly both. This may be part of why performance has tanked recently
(although I have no evidence to support this statement).

They don't need to be big - we're currently using 80 GB of the 256 GB disk
- but they do need to be reliable and fast. I have zero idea what the best
part to pick is; any thoughts?

David Adam
zanchey@ UCC Wheel Member

[1]: https://pthree.org/2012/12/06/zfs-administration-part-iii-the-zfs-intent-log/
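Once a replacement part is chosen, swapping it in should be mechanical. A
sketch of the two repairs, assuming the pool is named "pool0" (the thread
never names it), the new SSD attaches as ada1, and the old partition layout
is reused - all of these names and sizes are illustrative:

    # partition the new disk to match the old scheme
    gpart create -s gpt ada1
    gpart add -t freebsd-zfs -l molmol-slog1 -s 16G ada1   # SLOG half
    gpart add -t freebsd-ufs ada1                          # root mirror half
    # replace the REMOVED log vdev member, identified by the GUID in the
    # zpool status output above
    zpool replace pool0 5535644740799039914 gpt/molmol-slog1
    # drop the dead gmirror component, then rebuild onto the new partition
    gmirror forget gmirror0
    gmirror insert gmirror0 ada1p2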
From bob at ucc.asn.au  Sun Mar  3 15:19:14 2019
From: bob at ucc.asn.au (Bob Adamson)
Date: Sun, 3 Mar 2019 15:19:14 +0800
Subject: [tech] [committee] Fwd: Prices for Stuff from Wholesale & Ebay Links
In-Reply-To: <2b0eb662-9f71-86ff-0ef6-6342a80bef69@ucc.asn.au>
References: <2b0eb662-9f71-86ff-0ef6-6342a80bef69@ucc.asn.au>
Message-ID: <003f01d4d191$6d0bf3f0$4723dbd0$@ucc.asn.au>

Sweet, can we please get two SAMSUNG 860 QVO (2TB) disks to replace the
currently failing pair in motsugo @ $302 each.

We would normally replace these with WD Black 2TB disks (WD2003FZEX) @ $189
each; however, motsugo is due for replacement soon and I envisage those two
SSDs becoming bulk data disks in clubroom desktops when motsugo is
upgraded.

Cheers,
Bob

-------------- next part --------------
An HTML attachment was scrubbed...
URL: https://lists.ucc.gu.uwa.edu.au/pipermail/tech/attachments/20190303/f50fcb6b/attachment-0001.htm

From bob at ucc.asn.au  Sun Mar 3 15:21:02 2019
From: bob at ucc.asn.au (Bob Adamson)
Date: Sun, 3 Mar 2019 15:21:02 +0800
Subject: [tech] Dead disk in Molmol
In-Reply-To:
References:
Message-ID: <004c01d4d191$ada21250$08e636f0$@ucc.asn.au>

Hi All,

I think we should look for something a bit more enterprisey for this task,
since it is such a critical component - this machine hosts a lot of club
and member VM storage, as well as the clubroom desktop home directories.
Given the issues we seem to be having with speeds, I don't think we should
skimp on disks this time around.

This page, though from 2014, details how we might check the performance of
SSDs as a Ceph journaling device, which (aiui) uses synchronous writes
similar to the requirements of NFS on ZFS:
https://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd-is-suitable-as-a-journal-device/
The results are somewhat scant, but what is apparent is the
order-of-magnitude difference in speed between consumer and enterprise
SSDs for this use.
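The test in that article boils down to small synchronous writes at queue
depth 1, which can be reproduced with fio. A sketch, with an illustrative
device name (this writes to the raw device, so only point it at a disk you
don't care about):

    # single-threaded O_DIRECT + O_SYNC 4k writes - the journal/SLOG pattern
    fio --name=slog-test --filename=/dev/ada2 \
        --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based --group_reporting

Consumer drives with big headline IOPS numbers often collapse to a few
hundred IOPS under this pattern, which is the gap the article is pointing
at.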
Despite having many bays on the front, molmol only supports 8 SAS disks, 2
SATA3 disks, and 4 SATA2 disks. The 8 SAS ports are taken up by spinning
disks at the moment, and the 2 SATA3 ports are used by the system/SLOG
disks. There's really no point in using the SATA2 ports, given their speed.

The mobo is a Supermicro X9SRH-7TF, so it has 1x PCIe 3.0 x16 slot and 1x
PCIe 3.0 x8 slot. Given that the mobo has 10G Ethernet onboard, I think
both of those slots should be free. The case itself is 2RU, so we could
support two low-profile PCIe SSD cards.

Anyway, what I'm thinking is to replace the failing system disk with
another similar SSD, then chuck a single, fast PCIe SSD in it for the SLOG
and L2ARC only. If it fails, aiui we don't have a corrupt filesystem, we
just lose the last 5 seconds of data (correct me if I'm wrong here?). This
is based on a few google results, like
https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSSLOGLossEffects

$409 plus delivery for an Intel Optane 900P:
https://www.scorptec.com.au/product/Hard-Drives-&-SSDs/SSD-2.5-&-PCI-Express/70481-SSDPED1D280GASX

Plus $90 to replace the failed system disk with a 250GB 860 EVO:
https://www.scorptec.com.au/product/Hard-Drives-&-SSDs/SSD-2.5-&-PCI-Express/71382-MZ-76E250BW

$15 delivery
$514 total

Thoughts? I'm happy to order it - just approve it at a committee meeting
(or outside of one via circular) and let me know.

Thanks,
Bob
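On the proposal itself: once the card is partitioned, adding it as separate
log and cache devices is one command each. A sketch, again assuming a pool
named "pool0" and that the Optane shows up as nvd0 (the labels and the
16 GB SLOG slice are illustrative):

    gpart create -s gpt nvd0
    gpart add -t freebsd-zfs -l molmol-slog-pcie -s 16G nvd0   # SLOG slice
    gpart add -t freebsd-zfs -l molmol-l2arc nvd0              # rest for L2ARC
    zpool add pool0 log gpt/molmol-slog-pcie
    zpool add pool0 cache gpt/molmol-l2arc

On the failure question: the linked post's analysis is that an unmirrored
SLOG only loses data if the device dies and the machine also crashes before
the in-flight synchronous writes reach the main pool; the pool itself stays
consistent. L2ARC is a pure read cache, so losing it costs nothing but
warm-up time.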
From melissa at netexperts.com.au  Tue Mar  5 15:36:14 2019
From: melissa at netexperts.com.au (Melissa Star)
Date: Tue, 5 Mar 2019 15:36:14 +0800
Subject: [tech] Responding to Server Room Heat, Ashera Shutdown and Remote Control
Message-ID:

Hi Everyone,

After discussion with Felix last night, I am proposing the following:

Remote Power-Off for Ashera (and optionally other servers)

There was some discussion last night regarding the need to be able to
remotely power down servers. For Ashera, I have created a shutdown at
ashera.ucc.asn.au account. Logging in to this account via ssh will cause
the machine to immediately power down. Note that there is no "confirmation
prompt", and once the machine is powered down it must be physically
restarted in the server room.

This was achieved by:

1. Creating the shutdown user and assigning it the group "operator".
2. Changing its shell (chsh) to /usr/local/bin/bash.
3. Placing the command "shutdown -P now" into the .bashrc file. This can
   also be done using the init file of any other shell of your choice.

I haven't set up SSH keys for the account yet, but could either use the
standard SSH authorized_keys file for wheel, or preferably I could
implement a command accessible to wheel that would cause ashera (and, if
desired, other servers) to shut down as needed. As Ashera is my own server
co-located at UCC, I've already implemented this feature.

Making Remote Power-Off possible for UCC servers on command (apocalypse command)

If you are interested, I could also write a bash script that would respond
to "apocalypse
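A minimal sketch of the account setup in steps 1-3 above, assuming a
FreeBSD-style host as the /usr/local/bin/bash shell path suggests (the pw
flags are illustrative; on FreeBSD the shutdown(8) power-off flag is
spelled lowercase -p, and membership of "operator" is what lets a non-root
user run shutdown at all):

    # create the login in group operator, with bash as its shell
    pw useradd shutdown -m -G operator -s /usr/local/bin/bash
    # any login to this account then powers the machine off immediately
    echo 'shutdown -p now' > /home/shutdown/.bashrc

Note that bash reads ~/.bash_profile rather than ~/.bashrc for an ssh login
shell, so depending on how the skeleton files are set up the same line may
need to go there instead.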