[tech] Ceph vmstore-bigssd was Re: Proxmox 6 upgrade complete
Nick Bannon
nick at ucc.gu.uwa.edu.au
Mon Jun 15 09:54:32 AWST 2020
On Thu, May 07, 2020 at 12:13:25PM +0800, Coffee wrote:
[...]
> At this point I have not yet upgraded Ceph to the latest version as it is a
> somewhat involved procedure (see
> https://pve.proxmox.com/wiki/Ceph_Luminous_to_Nautilus). Until Ceph is
> upgraded, the Proxmox 6.x tools for Ceph (pveceph) should not be used to
> manage the Ceph storage, as they are not intended to work with Ceph
> Luminous.
> Regards, Zack [CFE]
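(For anyone following along: that wiki page is the authority, but the core of
the procedure is roughly the shape below - Nautilus package repos and an
apt full-upgrade on each node first, then daemons bounced in mon/mgr/osd
order. Treat it as a sketch, not a recipe.)

  ceph osd set noout                       # keep data from rebalancing while daemons restart
  systemctl restart ceph-mon.target        # one node at a time, waiting for quorum each time
  systemctl restart ceph-mgr.target        # then the managers
  systemctl restart ceph-osd.target        # then the OSDs, again node by node
  ceph osd require-osd-release nautilus    # once everything reports 14.2.x
  ceph mon enable-msgr2                    # switch on the new messenger protocol
  ceph osd unset noout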
Good news! I believe this happened:
On Sun 24 May 00:00:06 AWST 2020, dylanh333 [333] wrote at
https://discord.com/channels/264401248676085760/264401248676085760/713794597083545650
> Local storage is nice for performance, especially on
> Magikarp, but it can make it an absolute pain to live-migrate VMs
> between hosts in a timely manner, if we need to do maintenance on a host
> Status update for (not pinging) (at)wheel: James and I have seemingly
> successfully upgraded Ceph from "Luminous" to "Nautilus" on the cluster,
> and rebooted each host it was upgraded on to ensure it worked after a
> reboot (and also to apply the latest kernel update), but there is still
> a bit of weirdness at the moment, with the manager daemon supposedly
> crashing on Loveday, although that may have been transient
root@maltair:~# ceph mon dump | grep min_mon_release
dumped monmap epoch 7
min_mon_release 14 (nautilus)
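If anyone wants to double-check my work, the usual suspects are below (commands
only, no output pasted; "ceph versions" should show 14.2.x everywhere, and the
Nautilus crash module should say whether loveday's mgr weirdness left a record):

  ceph versions                            # every daemon should report 14.2.x (nautilus)
  ceph osd dump | grep require_osd_release
  ceph crash ls                            # e.g. the possibly-transient loveday mgr crash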
Upshot: we have 5 VM hosts contributing to the Ceph cluster,
all with quality, albeit not Enterprise, SSDs. (See sneer here
[1].) This is enough space to move the majority of the small VMs off
nas-vmstore/molmol onto vmstore-ssd, which makes it that much easier
to think about molmol's OS upgrade, as discussed (again ::-) ) in
/home/wheel/docs/meetings/2020-05-23.txt [2].
[1] https://openzfs.github.io/openzfs-docs/Project%20and%20Community/FAQ.html#other-general-guidelines
[2] Next one coming soon, Saturday 2020-06-27, RSVP's encouraged.
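Moving those small VMs is mostly a per-VM storage migration, something like the
below (scsi0 is just an example disk name, and <vmid>/<host> are whichever VM
and target host you're working on):

  qm move_disk <vmid> scsi0 vmstore-ssd --delete 1   # online move of a disk onto the Ceph pool
  qm migrate <vmid> <host> --online                  # after which live-migration between hosts is painless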
root@maltair:~# ceph -s
  cluster:
    id:     276217a2-99c8-47a7-950a-fdb27ddd3636
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum medico,maltair,loveday,mudkip,magikarp (age 33h)
    mgr: loveday(active, since 3w), standbys: medico, maltair, mudkip, magikarp
    osd: 5 osds: 5 up (since 2w), 5 in (since 2w)

  data:
    pools:   1 pools, 64 pgs
    objects: 72.40k objects, 280 GiB
    usage:   832 GiB used, 1.2 TiB / 2.1 TiB avail
    pgs:     64 active+clean
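(832 GiB used for 280 GiB of objects looks like the usual 3x replication, so
the 2.1 TiB of raw SSD works out to roughly 700 GiB usable - hence wanting a
separate bulk pool for anything big.)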
I've recently started a VM for a matrix.org Synapse server; it's currently
running a test instance at https://riot.gnuperth.org/ - come try! I'm
hoping it's the revival that https://www.uniirc.com/ could probably use...
It needs bulk space for long-term database growth, so I've got a couple
of 2TB Samsung 860 QVOs, which I plan to turn into a Ceph vmstore-bigssd
as discussed here (a rough sketch of the setup follows the excerpt below):
https://discord.com/channels/264401248676085760/264401248676085760/719033678063337492
*** The time is: Sun 7 Jun 11:00:02 AWST 2020
<Nick> ...the matrix server is going to need more storage, so it's
mostly for that, but if we want to add to it, that would be neat.
<Nick> dylanh333: magikarp/mudkip, if they have spare sleds?
<dylanh333 [333]> Yep, they certainly do!
<Nick> Cool.
<dylanh333 [333]> If it's on them, let me check what I named the existing VGs and volumes
<dylanh333 [333]> I'd name the VG "ssd<something>", then the volume under that "vm store"
<dylanh333 [333]> For comparison, the current RAID array with the Optane
is the VG "rustarray", and the volume for VM storage under that "vmstore"
<dylanh333 [333]> Then all that's left to do for Mudkip is get it 10Gbps networking
<dylanh333 [333]> (ie. get more SFP+ modules, or a 3m direct attach SFP+ cable or something)
<Nick> Sounds good... maybe ssdbulk or ssdqbulk, vs ssd?
<dylanh333 [333]> bigssd
<tec> bigssd
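The rough plan for those two QVOs, then - a sketch only, nothing below has
been run yet, and device names (sdX, osd.N) are placeholders: give the new
OSDs their own CRUSH device class and a pool pinned to it, so the bulk pool
stays off the existing vmstore-ssd disks, however the VG naming on the hosts
themselves shakes out.

  pveceph osd create /dev/sdX                     # on magikarp/mudkip, once the drives are sledded in
  ceph osd crush rm-device-class osd.N            # drop the auto-assigned "ssd" class
  ceph osd crush set-device-class bigssd osd.N    # and tag them as their own class
  ceph osd crush rule create-replicated bigssd-only default host bigssd
  ceph osd pool create vmstore-bigssd 64 64 replicated bigssd-only
  ceph osd pool application enable vmstore-bigssd rbd
  pvesm add rbd vmstore-bigssd --pool vmstore-bigssd --content images

With bigssd OSDs on only two hosts, the pool's size/min_size and failure
domain will need a think before anything important lands on it.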
Current clubroom restrictions mean... [TEC], [MPT] - can I get these two
QVOs to you for installation soon? When/where's good?
Nick.
--
Nick Bannon | "I made this letter longer than usual because
nick-sig at rcpt.to | I lack the time to make it shorter." - Pascal