From zanchey at ucc.gu.uwa.edu.au  Sat Jan 17 21:23:20 2009
From: zanchey at ucc.gu.uwa.edu.au (David Adam)
Date: Sat, 17 Jan 2009 21:23:20 +0900 (WST)
Subject: [tech] New gear
Message-ID: 

Courtesy of Fugro FSI, the UCC now possesses 16 1U servers. They are dual
single-core Xeon 2.4GHz machines with 3GB of ECC registered RAM.

They come with only single 40GB (or so) ATA drives, which are likely to be
unreliable, having spent their working lives as scratch disks. They also
have dual Gigabit Ethernet interfaces, which might be useful.

Two have gone to blinken@, I know adrian@ would like another couple for his
Squid cluster when he gets back in the country, and I'm considering
acquiring one.

There are also three 2U Dell PowerEdge 2650 servers (sans drives) and a 1U
IBM xSeries 335, which have shown up from (I think) iiNet, giving us a fair
chunk of new (to UCC) hardware. I have no real plans for what to do with it
all yet, although we could probably replace Mooneye (perhaps with a VM,
although not Xen, because the networking seems to suck hard).

David Adam
UCC Wheel Member
zanchey@

From adrian at ucc.gu.uwa.edu.au  Sun Jan 18 23:07:49 2009
From: adrian at ucc.gu.uwa.edu.au (Adrian Chadd)
Date: Sun, 18 Jan 2009 23:07:49 +0900
Subject: [tech] New gear
In-Reply-To: 
References: 
Message-ID: <20090118140749.GA19125@ucc.gu.uwa.edu.au>

On Sat, Jan 17, 2009, David Adam wrote:

> with it all yet, although we could probably replace Mooneye (perhaps with
> a VM, although not Xen because the networking seems to suck hard).

Remembered the magic setting? Always set dom0's RAM to something minimal
rather than trusting the balloon driver, so the network devices don't run
out of RAM before the balloon code can give it to them?
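The "magic setting" is usually dom0's boot-time memory cap. A minimal
sketch, assuming GRUB and the classic xend toolstack; the 512M figure is an
arbitrary example, and the exact option names vary between Xen versions:

```shell
# Sketch only: pin dom0's memory at boot instead of trusting the balloon
# driver. Paths and values are illustrative, not anyone's actual config.

# /boot/grub/menu.lst -- cap dom0 so domU creation never balloons it down:
#   kernel /boot/xen.gz dom0_mem=512M

# /etc/xen/xend-config.sxp -- stop xend from shrinking dom0 below the cap:
#   (dom0-min-mem 512)
#   (enable-dom0-ballooning no)
```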
:)

Adrian

From trs80 at ucc.gu.uwa.edu.au  Sun Jan 18 23:36:27 2009
From: trs80 at ucc.gu.uwa.edu.au (James Andrewartha)
Date: Sun, 18 Jan 2009 23:36:27 +0900 (WST)
Subject: [tech] New gear
In-Reply-To: <20090118140749.GA19125@ucc.gu.uwa.edu.au>
References: <20090118140749.GA19125@ucc.gu.uwa.edu.au>
Message-ID: 

On Sun, 18 Jan 2009, Adrian Chadd wrote:

> On Sat, Jan 17, 2009, David Adam wrote:
>
> > with it all yet, although we could probably replace Mooneye (perhaps with
> > a VM, although not Xen because the networking seems to suck hard).
>
> Remembered the magic setting? Always set dom0's RAM to something minimal
> rather than trusting the balloon driver, so the network devices don't run
> out of RAM before the balloon code can give it to them? :)

No, the problem is VLANs - configuring them in Xen is a right PITA.

-- 
# TRS-80              trs80(a)ucc.gu.uwa.edu.au #/ "Otherwise Bub here will do \
# UCC Wheel Member    http://trs80.ucc.asn.au/  #|  what squirrels do best     |
[ "There's nobody getting rich writing          ]|  -- Collect and hide your   |
[  software that I know of" -- Bill Gates, 1980 ]\  nuts." -- Acid Reflux #231 /

From adrian at ucc.gu.uwa.edu.au  Mon Jan 19 02:10:33 2009
From: adrian at ucc.gu.uwa.edu.au (Adrian Chadd)
Date: Mon, 19 Jan 2009 02:10:33 +0900
Subject: [tech] New gear
In-Reply-To: 
References: <20090118140749.GA19125@ucc.gu.uwa.edu.au>
Message-ID: <20090118171033.GB19125@ucc.gu.uwa.edu.au>

On Sun, 18 Jan 2009, James Andrewartha wrote:

> No, the problem is VLANs - configuring them in Xen is a right PITA.

The problem is distributions and the order in which they bring up
interfaces. What I do under Fedora Core and CentOS:

* eth0 stays management, for dom0. I let the silly Xen script work on that.
* eth1 is for VLANs. I have a custom script which just sets up the bridging
  separately, ignoring the Xen scripts.
* I then create bridges, name them xenbrX.Y, and Xen domUs just work with
  them.
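That bridging step can be sketched roughly like so, assuming 2009-era Linux
tooling (vconfig and brctl); the VLAN IDs 10 and 20 are made-up examples,
not anything from the actual setup:

```shell
#!/bin/sh
# Sketch of the sort of custom bridge script described above: eth1 carries
# tagged VLAN traffic only, with one bridge per VLAN named xenbrX.Y.
set -e

DEV=eth1
ip link set "$DEV" up                     # no address: tagged frames only

for VID in 10 20; do                      # example VLAN IDs
    vconfig add "$DEV" "$VID"             # creates the subinterface eth1.$VID
    brctl addbr "xenbr1.$VID"             # bridge named xenbrX.Y as above
    brctl addif "xenbr1.$VID" "$DEV.$VID"
    ip link set "$DEV.$VID" up
    ip link set "xenbr1.$VID" up
done
```

A domU config then just names the bridge it wants, e.g.
vif = [ 'bridge=xenbr1.10' ].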
The trouble is the order in which the VLAN interfaces and the Xen startup
happen. If the Xen scripts clone the interface and you then create VLAN
subinterfaces off the "virtual" ethernet (ie, not pethX), none of your
bridge interfaces work correctly.

Of course, you could also just use eth0 + VLANs: make eth0 management-only,
don't configure a network script (or configure a null one) so it doesn't set
up eth0 "magically", and then hard-code the VLAN subinterfaces and bridge
interfaces into your config.

Also, of course, some distributions (hai Debian!) don't like you bringing up
an interface with no IP address just so you can use it as a bridging
interface. I'd love to know how to tell ifup/ifdown to do this without
putting fake IPs on the interface. (Hint: 0.0.0.0 doesn't work in all
distributions.)

2c,

Adrian

From michael at ucc.gu.uwa.edu.au  Mon Jan 19 10:04:34 2009
From: michael at ucc.gu.uwa.edu.au (Michael Deegan)
Date: Mon, 19 Jan 2009 10:04:34 +0900
Subject: [tech] New gear
In-Reply-To: <20090118171033.GB19125@ucc.gu.uwa.edu.au>
References: <20090118140749.GA19125@ucc.gu.uwa.edu.au>
	<20090118171033.GB19125@ucc.gu.uwa.edu.au>
Message-ID: <20090119010434.GL1772@wibble.darktech.org>

On Mon, Jan 19, 2009 at 02:10:33AM +0900, Adrian Chadd wrote:

> Also, of course, some distributions (hai Debian!) don't like you bringing
> up an interface with no IP address just so you can use it as a bridging
> interface. I'd love to know how to tell ifup/ifdown to do this without
> putting fake IPs on the interface. (Hint: 0.0.0.0 doesn't work in all
> distributions.)

What about something like:

    auto bridge
    iface bridge inet manual
        up ifconfig ethX up
        down ifconfig ethX down || true

(Doing the same using iproute2 is left as an exercise for the reader.)

Debian's /etc/network/interfaces is actually quite flexible.
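For completeness, the iproute2 spelling of that stanza might look like the
following sketch; ethX remains a placeholder as in the ifconfig version, and
this assumes the iproute2 tools are installed:

```shell
# /etc/network/interfaces -- the same address-less bridge port, using
# iproute2 instead of ifconfig. Sketch only; ethX is a placeholder.
auto bridge
iface bridge inet manual
	up ip link set ethX up
	down ip link set ethX down || true
```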
-MD

-- 
-------------------------------------------------------------------------------
Michael Deegan          Hugaholic          http://wibble.darktech.org/gallery/
------------------------- Nyy Tybel Gb Gur Ulcabgbnq! -------------------------

From Adrian at ScreamingRoot.org  Mon Jan 19 15:11:32 2009
From: Adrian at ScreamingRoot.org (Adrian Woodley)
Date: Mon, 19 Jan 2009 15:11:32 +0900
Subject: [tech] New gear
Message-ID: <681149047d540c8b5f1574d004dcef92@localhost>

G'day David, et al,

I could do with a real machine to run bfb.asn.au on. The domain name is
dedicated to providing web and email services to volunteer bush fire
brigades in Australia. Currently it's hosted on one of Adrian's virtual
machines, but some real RAM would allow me to improve the spam/virus
filtering and host brigade webpages.

The service is currently being run by the Sawyers Valley Volunteer Bush
Fire Brigade, although there are plans to fork off a separate association
to house and manage the domain and its services. Either way, UCC would be
donating the machine to a not-for-profit association (and not just
AdrianW's collection of toys).

There are currently four volunteer brigades using this system, but with
suitable funding and support we intend to expand it nation-wide.

A UCC logo and link could be displayed on the http://bfb.asn.au/ site as a
sponsor.

Cheers,

Adrian Woodley

David Adam wrote:
> Courtesy of Fugro FSI, the UCC now possesses 16 1U servers. They are dual
> single-core Xeon 2.4GHz machines with 3GB ECC Registered RAM.
>
> They only come with single 40GB (or so) ATA drives, which are likely to be
> unreliable having spent working lives as scratch disks. Dual Gigabit
> Ethernet interfaces which might be useful.
>
> Two have gone to blinken@, I know adrian@ would like another couple for
> his Squid cluster when he gets back in the country, and I'm considering
> acquiring one.
>
> There's also three 2U Dell PowerEdge 2650 servers (sans drives) and a 1U
> IBM X-series 335, which have shown up from (I think) iiNet, which gives us
> a fair chunk of new (to UCC) hardware. I have no real plans for what to do
> with it all yet, although we could probably replace Mooneye (perhaps with
> a VM, although not Xen because the networking seems to suck hard).
>
> David Adam
> UCC Wheel Member
> zanchey@

From zanchey at ucc.gu.uwa.edu.au  Tue Jan 20 18:23:30 2009
From: zanchey at ucc.gu.uwa.edu.au (David Adam)
Date: Tue, 20 Jan 2009 18:23:30 +0900 (WST)
Subject: [tech] New gear
In-Reply-To: 
References: 
Message-ID: 

On Sat, 17 Jan 2009, David Adam wrote:

> Courtesy of Fugro FSI, the UCC now possesses 16 1U servers. They are dual
> single-core Xeon 2.4GHz machines with 3GB ECC Registered RAM.
>
> They only come with single 40GB (or so) ATA drives, which are likely to be
> unreliable having spent working lives as scratch disks. Dual Gigabit
> Ethernet interfaces which might be useful.

We now have an additional 12, bringing the total to 24 (as two have already
disappeared to blinken@). I think we can safely give one away to adrianw@
if it's for a good cause. Not sure about the rest - it might be worth
sitting down on Saturday evening post-cleanup and inventorying everything.
I was thinking about offering them on tech-contacts for a nominal sum if we
have plenty to spare.

[DAA]

From adrian at ucc.gu.uwa.edu.au  Wed Jan 21 03:01:48 2009
From: adrian at ucc.gu.uwa.edu.au (Adrian Chadd)
Date: Wed, 21 Jan 2009 03:01:48 +0900
Subject: [tech] New gear
In-Reply-To: <681149047d540c8b5f1574d004dcef92@localhost>
References: <681149047d540c8b5f1574d004dcef92@localhost>
Message-ID: <20090120180148.GC19125@ucc.gu.uwa.edu.au>

You could always ask me for more RAM. :)

On Mon, Jan 19, 2009, Adrian Woodley wrote:

> G'day David, et al,
>
> I could do with a real machine to run bfb.asn.au on.
> The domain name is dedicated to providing web and email services to
> volunteer bush fire brigades in Australia. Currently it's hosted on one
> of Adrian's virtual machines, but some real RAM would allow me to improve
> the spam/virus filtering and host brigade webpages.
>
> The service is currently being run by the Sawyers Valley Volunteer Bush
> Fire Brigade, although there are plans to fork off a separate association
> to house and manage the domain and its services. Either way, UCC would be
> donating the machine to a not-for-profit association (and not just
> AdrianW's collection of toys).
>
> There are currently four volunteer brigades using this system, but with
> suitable funding and support we intend to expand it nation-wide.
>
> A UCC logo and link could be displayed on the http://bfb.asn.au/ site as
> a sponsor.
>
> Cheers,
>
> Adrian Woodley
>
> David Adam wrote:
> > Courtesy of Fugro FSI, the UCC now possesses 16 1U servers. They are
> > dual single-core Xeon 2.4GHz machines with 3GB ECC Registered RAM.
> >
> > They only come with single 40GB (or so) ATA drives, which are likely to
> > be unreliable having spent working lives as scratch disks. Dual Gigabit
> > Ethernet interfaces which might be useful.
> >
> > Two have gone to blinken@, I know adrian@ would like another couple for
> > his Squid cluster when he gets back in the country, and I'm considering
> > acquiring one.
> >
> > There's also three 2U Dell PowerEdge 2650 servers (sans drives) and a
> > 1U IBM X-series 335, which have shown up from (I think) iiNet, which
> > gives us a fair chunk of new (to UCC) hardware. I have no real plans
> > for what to do with it all yet, although we could probably replace
> > Mooneye (perhaps with a VM, although not Xen because the networking
> > seems to suck hard).
> >
> > David Adam
> > UCC Wheel Member
> > zanchey@

From zanchey at ucc.gu.uwa.edu.au  Fri Jan 23 17:51:04 2009
From: zanchey at ucc.gu.uwa.edu.au (David Adam)
Date: Fri, 23 Jan 2009 17:51:04 +0900 (WST)
Subject: [tech] Manduba @ Arts
Message-ID: 

Thanks to the hard work of [TRS], [JCF], [LGM] and [PMC], Manduba is now
installed in one of the Faculty of Arts server rooms and accessible on the
Internet.

Manduba is a Sun Enterprise 4000 with 11.5 GB of RAM and 12 x 400 MHz
UltraSPARC II processors. Clearly, it is pretty good at parallel
processing. It runs OpenSolaris Nevada snv_101 and has a 64 GB ZFS array
attached.

It's sitting on our uplink VLAN (UWA VLAN 13) at 192.168.13.20, but also
has a globally-routable address (manduba.ucc.gu.uwa.edu.au, or
130.95.13.254). The 130.95 address is provided by an IPsec tunnel from
Musundo. An IPsec tunnel from Madako would have been preferable, but
unfortunately the documentation for Debian/Linux's IPsec implementations is
atrocious.

It has user authentication via LDAP and home directories mounted via NFS.
NFS mounts of /services and /away won't work until I fiddle with the
routing a bit more (possibly giving Musundo an additional IP to run IPsec
tunnels). [TRS] has also adjusted the firewall so that it has full Internet
access via the Bright link.

The plan is to attach a GNOME buildbot at some stage to help with
regression testing, so if you're interested in helping out, get in touch
with trs80@ or myself. There are also vague future plans to allow
interested members of the GNOME community access to help resolve Solaris
issues. If that happens, we'll probably switch to mounting the less-secure
/away directories.
David Adam
UCC Wheel Member
zanchey at ucc.gu.uwa.edu.au

From zanchey at ucc.gu.uwa.edu.au  Sat Jan 24 09:42:06 2009
From: zanchey at ucc.gu.uwa.edu.au (David Adam)
Date: Sat, 24 Jan 2009 09:42:06 +0900 (WST)
Subject: [tech] Martello downtime today
Message-ID: 

If you're keeping an eye on the UCC status page
(https://twitter.com/ucc_status), you will have noticed that there is a
chance of some downtime today due to the cleanup.

The current plan is to power Martello down and extract the dead drive. This
will affect /home, webmail, users' webspace and incoming mail, plus some
local services like the snack machine.

Give the clubroom a call on 64883901 after 1pm if you have any questions.

Thanks for your patience!

David Adam
UCC Wheel Member
zanchey at ucc.gu.uwa.edu.au