<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<p>Hi again,</p>
<p>A quick follow-up - we now have 4 additional <a
href="https://www.fs.com/au/products/74668.html">"generic" SFP+
modules</a> from FS.com and 2 <a
href="https://support.hpe.com/hpsc/doc/public/display?docId=emr_na-c02694717">HP
NC523SFP dual-port PCIe cards</a> to play with. Ceph has also been
configured across the 3 Proxmox hosts with a total of around 400GB
of SSD-backed, redundant storage.<br>
</p>
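<p>For anyone wondering where the ~400GB figure comes from, here is a
rough back-of-the-envelope calculation (a Python sketch - the 3 x 500GB
SSDs and 3-way replication are from the original plan quoted below, and
the ~15% headroom allowance is just an assumption, not a measured
number):</p>
<pre>
# Rough usable-capacity estimate for the 3-node Ceph pool (sketch only).
osd_sizes_gb = [500, 500, 500]        # one 500GB SSD OSD per Proxmox host
replication_size = 3                  # every object kept on all 3 hosts

raw_gb = sum(osd_sizes_gb)                 # 1500 GB raw
usable_gb = raw_gb / replication_size      # ~500 GB after replication
practical_gb = usable_gb * 0.85            # keep headroom for overhead/rebalancing

print(f"raw: {raw_gb} GB, replicated: {usable_gb:.0f} GB, "
      f"practically usable: ~{practical_gb:.0f} GB")
</pre>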
<p>Now that we have the parts, it's time to do the upgrade! [CFE] and
I will be coming in next Saturday to attempt to connect loveday and
motsugo to the 10G network. If anyone else is interested, feel free to
meet us in the clubroom around 10am.</p>
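<p>For context on why the 10G link is worth the effort (this follows
the reasoning in the quoted message below), gigabit Ethernet tops out
well below what even a single SATA SSD can stream. The numbers in this
little sketch are nominal line rates and a typical SSD figure, not
measurements of our hardware:</p>
<pre>
# Ballpark comparison of link speed vs. SSD throughput (nominal figures only).
def link_mb_per_s(gbps):
    """Convert a nominal link rate in Gbit/s to MB/s, ignoring protocol overhead."""
    return gbps * 1000 / 8

ssd_mb_per_s = 500        # typical SATA SSD sequential throughput (assumed, not measured)

for gbps in (1, 10):
    print(f"{gbps} Gbps link = ~{link_mb_per_s(gbps):.0f} MB/s "
          f"(one SSD can stream ~{ssd_mb_per_s} MB/s)")
</pre>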
<p>As a result of installing the cards and (re)configuring networking,
expect downtime for motsugo on 2019-01-12 between 10:00 and 18:00 AWST
(affecting email access via IMAP/POP3 and ssh.ucc.asn.au, and
terminating all running user sessions), and allow for the possibility
of (temporary) total catastrophic network failure during that
window.</p>
<p>Due to the failover capabilities of our Proxmox cluster, it is
unlikely that there will be any noticeable interruption to our VM
hosting and storage services during this time, except in the case of
the total network failure mentioned above.</p>
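<p>If anyone wants to keep an eye on things on the day, the quickest
sanity check is to watch Ceph's own status output from one of the
Proxmox nodes while loveday is down. The snippet below is only a rough
sketch: it shells out to the standard ceph CLI in JSON mode and assumes
it runs on a node with a working admin keyring (the exact JSON layout
varies a little between Ceph releases, hence the defensive lookups):</p>
<pre>
# Minimal health check to run during the maintenance window (sketch only).
import json
import subprocess

def ceph_status():
    """Return the parsed output of `ceph status --format json`."""
    out = subprocess.check_output(["ceph", "status", "--format", "json"])
    return json.loads(out.decode("utf-8"))

status = ceph_status()
print("health:", status["health"]["status"])      # e.g. HEALTH_OK / HEALTH_WARN

osdmap = status.get("osdmap", {})
osdmap = osdmap.get("osdmap", osdmap)             # Luminous nests this one level deeper
print("OSDs up/in:", osdmap.get("num_up_osds"), "/", osdmap.get("num_in_osds"))
</pre>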
<p>Best regards,</p>
<p>Felix von Perger [FVP]<br>
UCC President & Wheel member<br>
</p>
<p>On 16/11/18 10:40 pm, Felix von Perger wrote:<br>
</p>
<blockquote type="cite"
cite="mid:dbd7b277-637a-3c59-106e-bdd76fdb6464@ucc.asn.au">
<pre class="moz-quote-pre" wrap="">Hi tech,
I've looked into configuring ceph distributed storage for VM disks
(<a class="moz-txt-link-freetext" href="http://docs.ceph.com/docs/master/releases/luminous/">http://docs.ceph.com/docs/master/releases/luminous/</a>) on the Proxmox
cluster using the 3 existing 500GB SSDs.In order to ensure failover is
possible in case of one of the 3 hosts going offline, ceph requires a
minimum data redundancy of 3 leaving a total storage capacity of around
500TB (from the total raw storage space of 1.5TB). The idea is to have
at least our core VMs and filesystems (ie /services) on SSD-backed
storage to make things more snappy.
As per the documentation
(<a class="moz-txt-link-freetext" href="http://docs.ceph.com/docs/master/start/hardware-recommendations/">http://docs.ceph.com/docs/master/start/hardware-recommendations/</a>), Ceph
is limited by the bandwidth of the slowest network link, and given that
we are using SSDs there would be a noticeable improvement upgrading to
10Gbps from the current bottleneck of 1Gbps on loveday.

Hardware-wise, the cheapest option seems to be the Mellanox ConnectX-2
(such as <a class="moz-txt-link-freetext" href="https://www.ebay.com.au/itm/192421526775">https://www.ebay.com.au/itm/192421526775</a>) for around $50 each.
SFP+ cabling could either be passive (such as
<a class="moz-txt-link-freetext" href="https://www.fs.com/au/products/30856.html">https://www.fs.com/au/products/30856.html</a> for $17) or a somewhat
fancier active setup using fibre (such as 2 *
<a class="moz-txt-link-freetext" href="https://www.fs.com/au/products/74668.html">https://www.fs.com/au/products/74668.html</a> for $22 each).

It seems that loveday is fussy about booting when certain types of
PCIe cards are installed. Should this be an issue and the
above-mentioned hardware prove effectively unusable, the Ceph cluster
could instead be configured using the other machines with 10GbE (i.e.
murasoi/medico/maltair), albeit with the loss of the convenient Proxmox
Ceph configuration UI, and the spare 10GbE card could be put to use
elsewhere - BIOS/firmware upgrades on loveday permitting.

Let me know if you have any thoughts about this.

Best regards,
Felix von Perger
UCC President & Wheel member
</pre>
</blockquote>
</body>
</html>