[tech] [Conversation Starter]: GPU server

tec tec at ucc.gu.uwa.edu.au
Thu Oct 24 18:49:38 AWST 2019


That does sound better. I just assumed the obvious way to do things was access via VMs.
Does motsugo have any x16/x8 expansion slots?

On Thursday, October 24, 2019 15:47 AWST, Lyndon White <oxinabox at ucc.asn.au> wrote:
 Why would you put it on a VM? Why not the general user server (motsugo)? It's designed for compute: it has your home directory on a local disk and a ton of RAM. Putting it on a VM is just making more work, for no reason. You have to ensure everything passes through without overhead. [*OX]

On Oct 24, 2019 7:19 AM, tec <tec at ucc.gu.uwa.edu.au> wrote:

Hi All,
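Lyndon's suggestion amounts to skipping virtualisation entirely and running the job on motsugo directly. A minimal sketch of what that might look like for the GPT2 experiment, assuming Python 3 is available on the host and keeping everything under $HOME (package names are illustrative, not a confirmed setup):

```shell
# Sketch: GPT2 experiments directly on motsugo, no VM involved.
# Assumes Python 3 on the host; everything lives in $HOME, no root needed.
ssh motsugo                      # the general user server

python3 -m venv ~/gpt2-env       # per-user environment
. ~/gpt2-env/bin/activate
pip install torch transformers   # CUDA-enabled wheels are used if a GPU is visible

# Confirm whether PyTorch can actually see a GPU on this host:
python -c "import torch; print(torch.cuda.is_available())"
```

This avoids the passthrough overhead question entirely, at the cost of sharing the host with everyone else.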

With the general rise in GPU-accelerated compute tasks, and particularly ML, I think it could be a good idea to have VMs with access to GPU resources.

Give me your thoughts!

Also, here's some relevant copy-pasta from Discord:
______________________________________________________________________________
 tec 28/09/2019  I'm looking to install a bunch of stuff to try out GPT2, and hopefully not break anything. Have we got any GPUs I can access via a VM?   I would install on my desktop, but my distro doesn't have the nvidia cuda stuff packaged ______________________________________________________________________________
 coxy 28/09/2019  loveday has some shitty nvidia card   bah, can't get the dvb tuner working on Windows 10 ______________________________________________________________________________
 tec 28/09/2019  Just wanna check gpu ______________________________________________________________________________
 coxy 28/09/2019  think you need to ssh in as root, check uccpass   when we get the new proxmox hosts we should put something in heathred ______________________________________________________________________________
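As a sketch of the "ssh in and check" step: once you have a shell on loveday, these show what card it actually has (nvidia-smi only works if the NVIDIA driver is installed):

```shell
# List any graphics devices the host has, driver or not:
lspci -nn | grep -Ei 'vga|3d|display'

# If the NVIDIA driver is present, show model, VRAM and current load:
nvidia-smi
```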
 NickBOT 28/09/2019  @tec @DylanH[333]👌: If you're up for it, that would be pretty handy and I'm sure there would be club funding available for that sort of thing (and/or donations?). "HowToGPGPU at UCC" ______________________________________________________________________________
 tec 28/09/2019  I don't know how yet, but I'd like to ______________________________________________________________________________
 DylanH[333]👌 28/09/2019  I'd attend a session on something like that, but I'm not knowledgeable enough to run one   I could potentially help with GPU passthrough in ProxMox though   I've done PCIe passthrough before for a NIC ______________________________________________________________________________
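For reference, GPU passthrough on Proxmox follows much the same recipe as the NIC passthrough mentioned here. A hedged outline only, with the PCI address and VM ID as placeholders (details vary by Proxmox version and CPU vendor):

```shell
# 1. Enable the IOMMU via a kernel parameter (Intel shown; AMD uses amd_iommu=on):
#    add intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
update-grub

# 2. Load the vfio modules at boot:
printf '%s\n' vfio vfio_iommu_type1 vfio_pci >> /etc/modules

# 3. Find the GPU's PCI address:
lspci -nn | grep -i nvidia          # e.g. 01:00.0

# 4. Hand the device to a guest (VM ID 100 here) and restart it:
qm set 100 -hostpci0 01:00,pcie=1
```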
 NickBOT 28/09/2019  @DylanH[333]👌: Yep, step one is expanding the cluster and hardware installation - happy to help experiment ______________________________________________________________________________
 DylanH[333]👌 28/09/2019  Yeah, I need to get back onto that. I still haven't taken the time to ask committee about ordering SSDs   I'm now of the mind: ditch the SD cards and reinstall Proxmox on 500GB+ SSDs, and dedicate an LVM volume for Ceph   I just haven't had the time/motivation ______________________________________________________________________________
 NickBOT 28/09/2019  As for the OpenCL? software setup, @tec will no doubt help? :-) ______________________________________________________________________________
 DylanH[333]👌 28/09/2019  Also, we'd have to use one of our existing hosts in the cluster for a GPU, unless we have one lying around that's no thicker than one slot ______________________________________________________________________________
 NickBOT 28/09/2019  Sounds good to me, one big drive per machine now that we have >3 hosts? though the SD boot is neat, too. ______________________________________________________________________________
 DylanH[333]👌 28/09/2019  The 1RU HPs have a full-sized PCIe slot, but it's pretty thin ______________________________________________________________________________
 tec 28/09/2019  PCIe extension ribbon? ______________________________________________________________________________
 DylanH[333]👌 28/09/2019  Still doesn't solve where to actually fit the GPU ______________________________________________________________________________
 NickBOT 28/09/2019  Loveday should be good for physical space. ______________________________________________________________________________
 DylanH[333]👌 28/09/2019  @tec This is the sort of clearance the HPs have to work with ______________________________________________________________________________
 TRS-80 28/09/2019  also how do you get power to the gpu? ______________________________________________________________________________
 DylanH[333]👌 28/09/2019  PCIe slot - so it'd have to be 75W or less iirc   I suppose most GPUs that fit that power limit will probably fit that size limit as well
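On the 75 W point: that is the budget a PCIe slot supplies on its own, so any card without auxiliary power connectors has to fit under it. If a candidate card is already installed somewhere with the NVIDIA driver, its limits can be checked directly:

```shell
# Report the card's current power draw and enforced power limit:
nvidia-smi -q -d POWER | grep -E 'Power Draw|Power Limit'
```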


 