[tech] Updates from Tech Fixup :: Downtime Friday 2019-06-21 10:00-14:00

Felix von Perger frekk at ucc.asn.au
Thu Jun 20 12:49:20 AWST 2019


Hi all,

Since the tech "fixup" on Monday this week, [MPT] and I have been
working on various upgrades to the networking infrastructure. Primarily
this has been testing, but on Tuesday night I also performed some
upgrades to lard and bitumen. To complete the upgrades we will be
working on Friday morning (10:00-14:00), which will almost certainly
cause some downtime.

As a result, lard and bitumen are now both running the latest available
version of Cisco IOS (both now 15.0(2)SG11, up from 12.2(54)SG1 and
12.2(31)SG1 respectively). Testing the redundant supervisor
functionality in bitumen (at least for the purposes of software
upgrades) worked with minimal downtime, although the uplink did drop
out briefly during the switchover process.
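For anyone repeating this later, a redundant-supervisor upgrade on a
Catalyst 4500 goes roughly as follows (image filename, TFTP server and
exact command sequence are illustrative from memory, not a transcript
of what I actually typed):

  ! copy the new image onto both supervisors' flash
  copy tftp://192.168.1.10/cat4500-entservicesk9-mz.150-2.SG11.bin bootflash:
  copy bootflash:cat4500-entservicesk9-mz.150-2.SG11.bin slavebootflash:
  ! point the boot variable at the new image and save the config
  configure terminal
   no boot system
   boot system flash bootflash:cat4500-entservicesk9-mz.150-2.SG11.bin
   end
  copy running-config startup-config
  ! reload the standby so it boots the new image, then fail over to it
  redundancy reload peer
  redundancy force-switchover

The brief uplink drop is presumably because the supervisors can't do a
stateful (SSO) switchover while running mismatched software versions,
so the switchover falls back to a full line-card reload.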

Lard's supervisor was upgraded from a Supervisor II-Plus (WS-X4013+) to
a Supervisor II-Plus-10GE (WS-X4013+10GE), which has two 10-gigabit X2
uplink ports and four gigabit SFP ports instead of the two GBIC slots.
The UWA uplink uses a Cisco 1000BASE-LX GBIC transceiver (model
WS-G5486) and I couldn't find any other 1000BASE-LX modules, so I took
one of the two 6-port GBIC line cards from bitumen (noting that the
hot-swap functionality also seems to work) and configured it to work
with the new supervisor in lard.

From the parts purchased at the auction a few months ago, a new switch
was constructed in one of the new Cisco WS-C4506-E chassis. Called
kerosene, this contains a Cisco Supervisor 6-E (WS-X45-Sup6-E) with two
48-port gigabit PoE line cards (WS-X4648-RJ45V+E). The Sup6-E has two
10-gigabit X2 ports, and we have acquired the latest released version of
Cisco IOS for the E-series supervisors (15.2.2E8(MD)) along with upgrade
programs for the ROMMONs in all our devices. Hopefully on Friday we will
be ready to swap out bitumen for kerosene once it has been more fully
configured.

Some interesting notes from the last few days - the classic (not
E-series) line cards only support 6Gbps total bandwidth per slot,
whereas the E-series cards support 24Gbps. The classic supervisors also
don't seem to work in the E-series chassis (at least, the Supervisor
II-Plus-10GE didn't seem to power on at all when installed in a 4506-E
chassis). Hot-swap does seem to work, at least with modules and line cards.

Regarding servers, nothing much was done with molmol (according to [DAA]
the urgent part of the upgrade was already done, and updating to the
next major version would take a long time). The Cisco UCS server hasn't
been set up yet either, although the plan is to make it into another
Proxmox VM host.

Desktop-wise, I've put the 240GB SSD (spare from Christmas) into
Cichlid along with a fresh install of Windows 10. Attempting to install
the 500GB SSD (spare from Pinball) into Corydoras resulted in a SATA
cable (not power!) almost catching fire and one rather dead SSD - I
will try to chase up the warranty on that. An old spare 120GB SSD
(lying around in the machine room) made its way into Cobra, where it
now contains the previously HDD-bound Linux installation; the HDD has
been cursed with a fresh install of Windows 10, so the machine can now
be booted into either Windows or Linux. I've configured a scheduled
task to reboot the machine from Windows between 2-5am each day (with
Linux as the default boot option) and installed unattended-upgrades on
Linux.
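For reference (and so the same setup can be reproduced on the other
machines), it amounts to something like the following - task name,
time and file contents are illustrative from memory rather than copied
off Cobra:

  On Windows (from an administrator prompt):
    schtasks /Create /TN "NightlyRebootToLinux" /SC DAILY /ST 03:00 /RU SYSTEM /TR "shutdown /r /t 60"

  On Linux, /etc/apt/apt.conf.d/20auto-upgrades (as written by
  'dpkg-reconfigure unattended-upgrades'):
    APT::Periodic::Update-Package-Lists "1";
    APT::Periodic::Unattended-Upgrade "1";

With GRUB's default entry left pointing at Linux, the scheduled restart
drops the machine back into Linux every morning regardless of what it
was last booted into.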

Regarding dualboots, I'd like to suggest that we install Windows on at
least Corydoras plus one or both of Clownfish / Porcupine in a similar
configuration to Cobra (restarting automatically into Linux each morning
plus unattended-upgrades). In terms of storage, Windows can fit (just)
on a 200-300GB partition, and Linux can be comfortably shrunk to
100-200GB where necessary. To reduce the size required for Windows it
might be possible to store the Users directory or roaming profile cache
on another drive (although I haven't found any "proper" way to do this).
In any case, I'd suggest we purchase at least one or two more SATA SSDs
(for Corydoras and Corvo, which don't have SSDs yet).
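If we do go ahead with more dualboots, shrinking the existing Linux
installs is straightforward as long as the filesystem is resized before
the partition - a rough sketch from a live environment (device name and
sizes illustrative, and take a backup first):

  e2fsck -f /dev/sda2        # force a filesystem check before shrinking
  resize2fs /dev/sda2 150G   # shrink the ext4 filesystem to ~150GB
  # then shrink the partition to match (parted/gparted/fdisk), leaving
  # the freed space unallocated for the 200-300GB Windows partition

(This assumes plain ext4; LVM or other filesystems would need a
slightly different procedure.)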

For the Cisco server, we don't have any suitable drives on hand, so I'd
further propose buying three 500GB SSDs (two for the system RAID and one
for expanding ceph). Another task would be to reinstall Proxmox on
Loveday to fully utilise its own 500GB SSDs. In total that would be five
500GB SSDs (for servers and desktops). Budget of $150 per SSD (roughly
$750 in total) for at least 500GB each?

All the best,

Felix von Perger [FVP]
UCC President & Wheel Member
