From alfred.burgess95 at gmail.com  Thu Mar 2 10:18:27 2017
From: alfred.burgess95 at gmail.com (alfred burgess)
Date: Thu, 2 Mar 2017 10:18:27 +0800
Subject: [tech] Dispense Frozen Display
Message-ID:

Hi all,

Since this morning (possibly longer) the snack machine's display has been
frozen. The display reads *G********, where "*" represents a position with
no character displayed. All other functions seem to be intact, including
logging in and dispensing.

Sincerely,
- [TBB] Alfred Burgess

From matt at ucc.asn.au  Sun Mar 5 23:13:37 2017
From: matt at ucc.asn.au (Matt Johnston)
Date: Sun, 5 Mar 2017 23:13:37 +0800
Subject: [tech] Fail event on /dev/md/0:motsugo
In-Reply-To: <20170305150736.4B36624F92@motsugo.ucc.gu.uwa.edu.au>
References: <20170305150736.4B36624F92@motsugo.ucc.gu.uwa.edu.au>
Message-ID: <0201FCCC-FF65-4672-B500-7A227D6DA90F@ucc.asn.au>

trs80 noticed sdf on motsugo was bodgy: errors in dmesg, and it had 8
Current Pending Sectors according to smartctl.

I've removed it from the RAID and re-added it; hopefully rewriting those
sectors will make the drive remap them to fresh spare sectors that are OK.
If not, it might be under warranty.

Matt

> On Sun 5/3/2017, at 11:07 pm, mdadm monitoring wrote:
>
> This is an automatically generated mail message from mdadm
> running on motsugo
>
> A Fail event had been detected on md device /dev/md/0.
>
> It could be related to component device /dev/sdf1.
>
> Faithfully yours, etc.
>
> P.S. The /proc/mdstat file currently contains the following:
>
> Personalities : [raid1] [raid6] [raid5] [raid4]
> md0 : active raid6 sdc1[0] sdg1[4] sdf1[5](F) sde1[6] sdd1[1]
>       5860535808 blocks super 1.2 level 6, 512k chunk, algorithm 2 [5/4] [UUU_U]
>       [=================>...]  check = 86.0% (1681398528/1953511936) finish=904.9min speed=5010K/sec
>
> md1 : active raid1 sda1[3] sdb1[1]
>       117211608 blocks super 1.2 [2/2] [UU]
>
> unused devices: <none>
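For anyone repeating this later: the cycle Matt describes maps onto
standard smartctl and mdadm invocations. A rough sketch, assuming the same
device names as in the mdstat output above; note a plain --add after a
--remove triggers a full resync, which is what rewrites the pending
sectors:

    # Check the suspect drive's SMART attributes; look for
    # Current_Pending_Sector and Reallocated_Sector_Ct
    smartctl -a /dev/sdf

    # Mark the member failed (if the kernel hasn't already) and pull it
    # out of the array
    mdadm --manage /dev/md0 --fail /dev/sdf1
    mdadm --manage /dev/md0 --remove /dev/sdf1

    # Add it back; the rebuild rewrites every sector on the member,
    # giving the drive a chance to remap pending sectors to spares
    mdadm --manage /dev/md0 --add /dev/sdf1

    # Watch the resync progress
    cat /proc/mdstat

If the Current Pending Sector count doesn't drop back to zero after the
rebuild, a warranty claim is probably the right call.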
From bob at ucc.gu.uwa.edu.au  Tue Mar 7 13:28:10 2017
From: bob at ucc.gu.uwa.edu.au (Andrew Adamson)
Date: Tue, 7 Mar 2017 13:28:10 +0800 (AWST)
Subject: [tech] Proxmox upgrade status
Message-ID:

Hi All,

The Proxmox upgrade went fairly well, though the prep work took a lot
longer than I thought it would. Current status is:

Maltair: running Proxmox 4, now with mirrored SSDs as system disks.

Heathred: installed with Proxmox 4 and renamed to loveday. It now has only
two SSDs instead of four, since VMs can be stored centrally now.

Both of the above are set up in a two-machine cluster for now.

Medico: still has Proxmox 3, isn't in the cluster, and hasn't been
reinstalled yet.

TODO:

- Set up the old heathred machine image as a VM on loveday, so the VMs it
  was hosting can be de-nested and run directly on Proxmox.

- Set up wheel keys, fail2ban, disk schedulers, and other tidbits.

- Nuke and reinstall medico.

User logins and VM creation should all work as before, but container
creation is probably a bit different. If the next person to create one
could update wiki.ucc.asn.au/Proxmox, that would be great.

As usual, the web interface is only accessible from inside the UCC
network, at https://maltair.ucc.asn.au:8006 or
https://loveday.ucc.asn.au:8006

Andrew Adamson
bob at ucc.asn.au

| "If you can't beat them, join them, and then beat them."
|                                             ---Peter's Laws

From zanchey at ucc.gu.uwa.edu.au  Tue Mar 7 15:53:22 2017
From: zanchey at ucc.gu.uwa.edu.au (David Adam)
Date: Tue, 7 Mar 2017 15:53:22 +0800 (AWST)
Subject: [tech] Proxmox upgrade status
In-Reply-To:
References:
Message-ID:

On Tue, 7 Mar 2017, Andrew Adamson wrote:
> The Proxmox upgrade went fairly well, though the prep work took a lot
> longer than I thought it would. Current status is:
>
> Maltair: running Proxmox 4, now with mirrored SSDs as system disks.
>
> Heathred: installed with Proxmox 4 and renamed to loveday. It now has
> only two SSDs instead of four, since VMs can be stored centrally now.
>
> Both of the above are set up in a two-machine cluster for now.
>
> Medico: still has Proxmox 3, isn't in the cluster, and hasn't been
> reinstalled yet.
>
> TODO:
>
> - Set up the old heathred machine image as a VM on loveday, so the VMs
>   it was hosting can be de-nested and run directly on Proxmox.
>
> - Set up wheel keys, fail2ban, disk schedulers, and other tidbits.

I've set up wheel keys (the same way as Medico was set up; when we add
Medico to the cluster we will need to include its root key in
/home/wheel/uccroot/extra-maltair).

[DAA]
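For whoever does the Medico rebuild: on Proxmox 4 cluster membership is
managed with pvecm. A minimal sketch, assuming medico has been reinstalled
with Proxmox 4 and can resolve the existing nodes by name:

    # On medico, join the existing cluster via any current member
    pvecm add maltair.ucc.asn.au

    # Then, from any node, confirm membership and quorum
    pvecm nodes
    pvecm status

Remember to add medico's root key to /home/wheel/uccroot/extra-maltair at
the same time, as David notes above.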
From oxinabox at ucc.asn.au  Tue Mar 28 12:56:14 2017
From: oxinabox at ucc.asn.au (oxinabox at ucc.asn.au)
Date: Tue, 28 Mar 2017 12:56:14 +0800
Subject: [tech] Clearing Out Databooks
Message-ID:

I have added Texas Instruments databooks and National Semiconductor
databooks to the "free to a good home" pile in the corridor. They will
stay there for some weeks and then be thrown out.

They cover product lines from 1985-1995. As with all databooks, the data
is now available online for any parts that are still available. If you
are personally using parts that are 20 years old, then please take these
books into your personal storage.

I think this may be the last of the databooks, given that the first thing
put up this year was about six boxes of them (which I think have now all
been claimed).

Kind regards,
[*OX]
Wheel Member

From oxinabox at ucc.asn.au  Tue Mar 28 13:45:45 2017
From: oxinabox at ucc.asn.au (oxinabox at ucc.asn.au)
Date: Tue, 28 Mar 2017 13:45:45 +0800
Subject: [tech] Clearing Out Databooks
In-Reply-To:
References:
Message-ID: <97ba31f7fcbb4cca2d13f6135e2bfc80@ucc.asn.au>

I found another pile of them. Add to the list: Intel databooks, from
about the same years. They are with the others now.

[*OX]

On 28.03.2017 12:56, oxinabox at ucc.asn.au wrote:
> I have added Texas Instruments databooks and National Semiconductor
> databooks to the "free to a good home" pile in the corridor. They will
> stay there for some weeks and then be thrown out.
>
> They cover product lines from 1985-1995. As with all databooks, the
> data is now available online for any parts that are still available.
> If you are personally using parts that are 20 years old, then please
> take these books into your personal storage.
>
> I think this may be the last of the databooks, given that the first
> thing put up this year was about six boxes of them (which I think have
> now all been claimed).
>
> Kind regards,
> [*OX]
> Wheel Member

From adrian.chadd at gmail.com  Sun Mar 5 15:30:56 2017
From: adrian.chadd at gmail.com (Adrian Chadd)
Date: Sun, 05 Mar 2017 07:30:56 -0000
Subject: [tech] Fwd: Re: BBC LV-ROM player
In-Reply-To:
References:
Message-ID:

hiya,

hm, has anyone contacted the Computer History Museum here in the Bay
Area? (I have a BBC Master somewhere, but hm, doesn't it require some
other interface thing?)

-adrian

On 25 February 2017 at 05:26, Andrew Williams wrote:
> On 2017-02-25 7:51 PM, Frames wrote:
>
>>> However, we have not attempted to connect it to any kind of output
>>> device, because we do not have any compatible displays on hand.
>>>
>>> We also do not possess a remote for it, nor any media.
>
> For what it's worth, I have a couple of SCART adaptors (to composite
> video yellow/red/white RCA leads) if anyone has any media and wants to
> get it going. I suspect finding a BBC Master to drive it would be the
> hardest job...
>
> Have you contacted the Australian Computer Museum (WA, or any other
> branch) to see if they want it? Please don't throw it away; I'll pick
> it up and stick it in my shed if that's the only alternative to a
> dumpster.
>
> Andrew