[tech] mylah and musundo repurposing, virtualisation and storage

Nick Bannon nick at ucc.gu.uwa.edu.au
Tue Feb 9 13:53:09 WST 2010

On Tue, Feb 09, 2010 at 12:08:03PM +0800, James Andrewartha wrote:
> To summarise: Move samba, /services and backups to mylah, turn musundo 
> into a reliable network storage server for VMs. [before O-Day]

That does sound good, thanks for the benchmarks.

> There's still some open questions:
>  o If we make mylah a Xen server, what OS should be used for the dom0? 
>    Debian, which may have problems with multiple CPU domUs, or CentOS 5.4 
>    which will be harder to maintain? Or OpenSolaris, which has its own 
>    well-integrated Xen support and fancy network management (Crossbow)?
>  o What virtualisation software do we use for the Sun servers from Arts, 
>    when they arrive? Do we want to virtualise Windows Server 2008, which
>    will require using VMWare, or dedicate an entire machine to it? Or run 
>    VMWare on one, and Xen on the others? Again, if Xen, what dom0 OS?

We're in the happy position that we have more decent hardware than we
know what to do with. It's nice to run a few platforms so UCC members
can learn about them: some for "production", with others out in the
clubroom to experiment on and plan upgrades with. I'm looking forward
to the next round of hand-me-downs having amd64 and VT support, to make
kvm (and xenner, which isn't great at SMP either, yet) decent options.
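A quick way to tell whether a hand-me-down box has the VT support kvm
needs is to look for the Intel "vmx" or AMD "svm" CPU flags. A minimal
Linux-only sketch (flag names are real; the wording of the echo is just
illustrative):

```shell
# Check for hardware virtualisation support (Intel VT-x "vmx" or
# AMD-V "svm"), which kvm requires; plain Xen paravirt does not.
if grep -qE 'vmx|svm' /proc/cpuinfo 2>/dev/null; then
    hw_virt=yes
else
    hw_virt=no
fi
echo "hardware virt: $hw_virt"
```

Note that some BIOSes ship with VT disabled, so a present flag still
needs the BIOS setting checked before kvm will actually start guests.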

I'd like to give Windows Server 2008 most of a machine under VMware;
that lets us share the console and snapshot it nicely. I'd also like
a couple of similar Xen machines with working live migration.
It's easy to get slack and never upgrade those, though, hence we might
need test machines in the clubroom that don't run all the time.
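For reference, live migration with the Xen 3.x "xm" toolstack we'd be
running looks roughly like this. Hostnames and the domU name below are
made up; the real prerequisites are xend's relocation server enabled on
both dom0s and the domU's disks on shared storage (e.g. iSCSI or NFS
from musundo):

```shell
# Both dom0s need relocation enabled in /etc/xen/xend-config.sxp:
#   (xend-relocation-server yes)
#   (xend-relocation-hosts-allow '^xen1$ ^xen2$')
# Then, from the dom0 currently hosting the guest:
xm migrate --live somedomu xen2
```

Without shared storage this degrades to a (much slower) save/restore
copy, which is another argument for getting musundo sorted first.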

Whoever reinstalls mylah can choose, but try to get them to promise to
involve others in the regular maintenance.

>  o Do we set up a separate network segment for storage traffic, which
>    would require another gigabit switch, or is our main network 
>    underutilised enough that it'd be overkill?

I think we need a gigabit switch in each rack for cabling sanity.

I don't think we need a second one just for SAN traffic yet, but it
really might be worth:
  * coloured SAN cables or cable tags;
  * physically placing the storage ports at one end of the switch; and
  * possibly not just throwing the traffic into the general VLAN trunk,
    but physically patching a few storage ports between each switch.
    That would give most of the real, tactile benefit of "here's where
    the storage traffic flows", and it's a bit less likely to explode
    if someone mistypes a config.
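On the server side that separation is cheap to express: give each box a
dedicated NIC on its own storage subnet with no gateway, so SAN traffic
can't leak onto the general network even if routing gets mistyped. A
Debian /etc/network/interfaces sketch (interface name and addresses are
made up):

```shell
# eth1 dedicated to storage, own RFC1918 subnet, no gateway line,
# so iSCSI/NFS traffic stays off the general VLAN trunk.
auto eth1
iface eth1 inet static
    address 10.10.10.2
    netmask 255.255.255.0
    mtu 9000   # jumbo frames help storage throughput, if the switch supports them
```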


   Nick Bannon   | "I made this letter longer than usual because
nick-sig at rcpt.to | I lack the time to make it shorter." - Pascal
