Osgeo7

From OSGeo
Revision as of 22:16, 5 June 2019 by Strk (talk | contribs) (Add link to gitea wiki)

Osgeo7 is an Ubuntu 18.04 LTS machine administered by SAC, hosted on the OSU OSL servers since June 2018. It is set up with LXD 3.12 (installed via snap install lxd), so it should stay at the latest stable version of LXD.

Up-to-date info about the containers can be found (password-protected) at https://git.osgeo.org/gitea/sac/osgeo7/wiki/

Hardware

Ordered from Silicon Mechanics in May 2018; delivered to OSUOSL in June 2018.

       1   1U X11DDW 815TQC-R706W                             $7232  $7232.00 
       Details:
       CPU:  2 x Intel Xeon Silver 4110, 2.1GHz (8-Core, HT, 2400 MT/s, 85W) 14nm 
       RAM:  128GB (8 x 16GB DDR4-2666 ECC Registered 1R 1.2V RDIMMs) Operating at 2666 MT/s Max 
       NIC:  Intel Dual-Port Ethernet Controller RJ45 - Integrated 
       Management:  IPMI 2.0 & KVM with Dedicated LAN - Integrated 
       Drive Controller:  14 Ports 6Gb/s SATA (Intel C621 Chipset) 
       Backplane:  12Gb/s SAS3 4-port direct connect backplane 
       NOTE:  For RAID with CacheVault, please select both controller and CacheVault kit below. 
       PCIe 3.0 x16 - 1: No Item Selected 
       LP PCIe 3.0 x8: No Item Selected 
       SATA DOM: No Item Selected 
       M.2 Drive: No Item Selected 
       NOTE:  Drives will be connected to onboard SATA3 controller unless otherwise specified 
       NOTE:  SED and 4Kn drives may have an extended lead time. To order, please contact our sales department. 
       Hot-Swap Drive - 1:  HGST 8TB Ultrastar He10 (6Gb/s, 7.2K RPM, 256MB Cache, 512e, ISE) 3.5" SATA 
       Hot-Swap Drive - 2:  HGST 8TB Ultrastar He10 (6Gb/s, 7.2K RPM, 256MB Cache, 512e, ISE) 3.5" SATA 
       Hot-Swap Drive - 3:  HGST 8TB Ultrastar He10 (6Gb/s, 7.2K RPM, 256MB Cache, 512e, ISE) 3.5" SATA 
       Hot-Swap Drive - 4:  HGST 8TB Ultrastar He10 (6Gb/s, 7.2K RPM, 256MB Cache, 512e, ISE) 3.5" SATA 
       Optical Drive:  Blanking Panel - No Optical Drive 
       Front Panel:  Blanking Panel - No Front Inputs 
       Power Cables:  IEC60320 C13 to C14 Power Cable, 16AWG, 240V/15A, Black - 6' 
       Power Supply:  Redundant 750W Power Supply with PMBus & PFC - 80 PLUS Platinum 
       Rail Kit:  Quick-Release Rail Kit for Square Holes, Outer Slide Extendable Length 25.6 - 33.05 Inches 
       OS:  Customer declined OS 
       Management SW:  Supermicro Update Manager (SUM) Out-of-Band Management Software 
       Standard Warranty:  5 Year Silicon Mechanics Standard Warranty - Tier 1 ($0 - 10,000) 
       NOTE:  Advanced Parts Replacement service covers the cross shipping of replacement parts. 
       Advanced Parts Replacement:  5 Year Advanced Parts Replacement 
       NOTE:  For onsite service, international coverage, or additional options please contact our Sales department.
       Notes:
       No RAID
               No OS
               SUM=YES
       
       **** Additional Components ****
       Optane:  2 x Intel 280GB 900P Series (3D XPoint, 10 DWPD) HHHL PCIe 3.0 x4 NVMe SSD
       Drive:  Samsung 512GB SM961 MLC (4GB/s, NVMe) PCIe 3.0 x4 M.2 2280 SSD

Setup

As of 2019-04-24, the SSH port of the main host (the physical server) is 2222. There is only one non-root account on it, and it can only be accessed via key-based login. At this time only wildintellect, strk, robe, martin, pramsey and jef have their keys installed.

So to SSH in: ssh tech_dev@osgeo7.osgeo.osuosl.org -p 2222
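For convenience the connection details can go in a local ~/.ssh/config entry. This is a sketch only; the key file name id_osgeo7 is a placeholder for whatever key SAC actually installed:

```
# ~/.ssh/config -- after this, plain "ssh osgeo7" picks up the
# host name, port, user and key automatically.
Host osgeo7
    HostName osgeo7.osgeo.osuosl.org
    Port 2222
    User tech_dev
    IdentityFile ~/.ssh/id_osgeo7
```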


 DONE: Installed Ubuntu 18.04.1 on the Samsung 512GB MZVKW512HMJP (whole drive), with only OpenSSH installed (140.211.15.30, 140.211.15.57)
    • ZFS DONE
  • The OS and LXD are installed on a regular ext4 partition of the Samsung drive (500GB).
  • DONE: Created an LXD ZFS pool called osgeo7 that takes up the other (non-Samsung) disks in a RAID 10 configuration (16 TB usable of the 32 TB raw). Note we went with a single ZFS pool spanning the remaining disks.
    • LXC/LXD storage options: we'll go with ZFS (need to decide how big the storage should be).
      • btrfs allows going back to an older snapshot; does ZFS have a comparable feature, or might we consider having two storage pools, one ZFS and one btrfs?
      • We'll have containers dedicated to databases: one for user databases, one for system stuff like LDAP, Gitea, Trac?
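The pool layout described above corresponds to commands of roughly this shape. This is a sketch, not the recorded history; the device names (sdb..sde) are assumptions:

```
# Create a ZFS pool named "osgeo7" as two mirrored pairs (RAID 10) of the
# four 8TB HGST drives, then register it as an LXD storage pool.
zpool create osgeo7 mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
lxc storage create osgeo7 zfs source=osgeo7
```

Passing source= to lxc storage create tells LXD to use the existing ZFS pool rather than creating a new one.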


Containers and Services

LXD based containers:

  • dronie-server [DONE] - https://dronie.osgeo.org runs Drone 1.0 in multi-machine mode with one local drone agent. No direct ssh access.
  • nginx [DONE] - used to proxy web traffic to all other containers on osgeo7
  • Download [DONE] - download.osgeo.org / upload.osgeo.org / bottle.download.osgeo.org -- all proxied through the nginx container -- config for download is under git
  • Secure (LDAP) [DONE] - rebuilt as Debian 9 in the new-secure container - now used by all OSGeo services that require LDAP, via ldaps://ldap.osgeo.org. No direct ssh access: go through download.osgeo.org (and then ssh new-secure.lxd), or connect to osgeo7 on port 2222 and then lxc exec new-secure bash
  • nextcloud-ubuntu - nextcloud.osgeo.org [DONE - now used by more than just board members] - no direct ssh access (only has the tech_dev account for ssh, with no LDAP for ssh, though nextcloud.osgeo.org itself does use OSGeo LDAP)
  • old-projects [DONE] - spatialreference.org, community-review.osgeo.org
  • old-adhoc [DONE] - adhoc.osgeo.osuosl.org, adhoc.osgeo.org, demo.mapserver.org
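Since the containers have no direct ssh access, the access path described above can be sketched as follows, with new-secure as the example container name:

```
# On the physical host (reached via ssh -p 2222, see Setup above):
lxc list                    # list all containers on osgeo7
lxc exec new-secure bash    # open a root shell inside the new-secure container
```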

Proposed

  • Webextra (Foss4g Archives)
  • Wiki [In progress] [lxd-p2c (tool to convert a VM or physical machine snapshot into an LXD container)] - wiki.osgeo.org is currently proxied through the osgeo7 nginx container, but is still hosted on osgeo3
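For the wiki migration, lxd-p2c runs on the source machine (here osgeo3) and pushes its filesystem into a new container on the LXD host. A sketch only; the target URL and the container name wiki are assumptions, and the exact invocation and trust-password setup should be checked against the lxd-p2c documentation:

```
# Run on the source machine; copies / into a new "wiki" container on osgeo7.
# Requires the LXD API on osgeo7 to be listening on the network and trusted.
lxd-p2c https://osgeo7.osgeo.osuosl.org:8443 wiki /
```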