'''Osgeo7''' is an Ubuntu 18.04 LTS machine administered by [[SAC]], hosted on [[SAC_Service_Status#Servers_at_OSL|OSU OSL servers]] since June 2018.
It is set up with LXD 3.12 (installed via <code>snap install lxd</code>), so it should stay at the latest stable version of LXD.

Up-to-date info about the containers can be found (password-protected) at https://git.osgeo.org/gitea/sac/osgeo7/wiki/

== Hardware ==
Ordered from Silicon Mechanics in May 2018; delivered to OSU OSL in June 2018.

 1  1U X11DDW 815TQC-R706W    $7232    $7232.00
 Details:
 CPU: 2 x Intel Xeon Silver 4110, 2.1GHz (8-Core, HT, 2400 MT/s, 85W) 14nm
 RAM: 128GB (8 x 16GB DDR4-2666 ECC Registered 1R 1.2V RDIMMs) Operating at 2666 MT/s Max
 NIC: Intel Dual-Port Ethernet Controller RJ45 - Integrated
      Intel Corporation Ethernet Connection X722 for 1GbE (rev 09)
 Management: IPMI 2.0 & KVM with Dedicated LAN - Integrated
 Drive Controller: 14 Ports 6Gb/s SATA (Intel C621 Chipset)
 Backplane: 12Gb/s SAS3 4-port direct connect backplane
 NOTE: For RAID with CacheVault, please select both controller and CacheVault kit below.
 PCIe 3.0 x16 - 1: No Item Selected
 LP PCIe 3.0 x8: No Item Selected
 SATA DOM: No Item Selected
 M.2 Drive: No Item Selected
 NOTE: Drives will be connected to onboard SATA3 controller unless otherwise specified
 NOTE: SED and 4Kn drives may have an extended lead time. To order, please contact our sales department.
 Hot-Swap Drive - 1: HGST 8TB Ultrastar He10 (6Gb/s, 7.2K RPM, 256MB Cache, 512e, ISE) 3.5" SATA
 Hot-Swap Drive - 2: HGST 8TB Ultrastar He10 (6Gb/s, 7.2K RPM, 256MB Cache, 512e, ISE) 3.5" SATA
 Hot-Swap Drive - 3: HGST 8TB Ultrastar He10 (6Gb/s, 7.2K RPM, 256MB Cache, 512e, ISE) 3.5" SATA
 Hot-Swap Drive - 4: HGST 8TB Ultrastar He10 (6Gb/s, 7.2K RPM, 256MB Cache, 512e, ISE) 3.5" SATA
 Optical Drive: Blanking Panel - No Optical Drive
 Front Panel: Blanking Panel - No Front Inputs
 Power Cables: IEC60320 C13 to C14 Power Cable, 16AWG, 240V/15A, Black - 6'
 Power Supply: Redundant 750W Power Supply with PMBus & PFC - 80 PLUS Platinum
 Rail Kit: Quick-Release Rail Kit for Square Holes, Outer Slide Extendable Length 25.6 - 33.05 Inches
 OS: Customer declined OS
 Management SW: Supermicro Update Manager (SUM) Out-of-Band Management Software
 Standard Warranty: 5 Year Silicon Mechanics Standard Warranty - Tier 1 ($0 - 10,000)
 NOTE: Advanced Parts Replacement service covers the cross shipping of replacement parts.
 Advanced Parts Replacement: 5 Year Advanced Parts Replacement
 NOTE: For onsite service, international coverage, or additional options please contact our Sales department.
 Notes:
 No RAID
 No OS
 SUM=YES
 
 **** Additional Components ****
 Optane: 2 x Intel 280GB 900P Series (3D XPoint, 10 DWPD) HHHL PCIe 3.0 x4 NVMe SSD
 Drive: Samsung 512GB SM961 MLC (4GB/s, NVMe) PCIe 3.0 x4 M.2 2280 SSD
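
If you need to confirm that drive layout from the running host, generic commands like the following are sufficient (device names are examples, not taken from the machine itself):

 # Spinning disks vs. SSDs (ROTA=1 means rotational)
 lsblk -o NAME,SIZE,MODEL,ROTA
 # NVMe devices (the two Optane cards and the Samsung M.2); requires the nvme-cli package
 sudo nvme list
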
== Setup ==
As of 2019-04-24 the SSH port of the main host (the physical server) is 2222. There is only one non-root account on it, and it can only be accessed with SSH keys. At this time only wildintellect, strk, robe, martin, pramsey and jef have their keys installed.

So to SSH in: <code>ssh tech_dev@osgeo7.osgeo.osuosl.org -p 2222</code>

As of 2020-12-20 some of the configuration of this machine is deployed using [[AnsibleDeployment]].

* Ubuntu [http://releases.ubuntu.com/18.04/ 18.04] [https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes (release notes)]. [https://git.osgeo.org/gitea/sac/osgeo7 More details about the install steps.]
'''DONE''': Installed 18.04.1 on the Samsung 512GB MZVKW512HMJP (whole drive), and only installed OpenSSH (140.211.15.30, 140.211.15.57).

** ZFS '''DONE'''
* The OS and LXD are installed on a regular ext4 partition of the Samsung drive (500 GB).

* '''DONE''' Created an LXD ZFS pool called osgeo7 that uses the other (non-Samsung) disks in a RAID 10 configuration: 16 TB usable out of 32 TB raw. Note: we went with a single, simple ZFS pool in RAID 10 across the remaining disks.

** Additional references:
**** [https://github.com/zfsonlinux/zfs/wiki/Ubuntu-18.04-Root-on-ZFS Installing Ubuntu on a ZFS root using the terminal]
**** [https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Ubuntu-18.04-to-a-Whole-Disk-Native-ZFS-Root-Filesystem-using-Ubiquity-GUI-installer Installing Ubuntu with a ZFS root using the Ubiquity GUI installer]
** Earlier planning notes for the pool layout:
*** Root pool on the spinning disks, either as raidz2 or as a pair of two-device mirrors? (How do we decide?)
*** ZIL SLOG on mirrored Optane drives; consider placing the boot pool ('bpool') there as well (see the sketch after this list).
*** L2ARC on the M.2 SSD drive.
*** If using legacy BIOS boot mode, install GRUB to all devices; UEFI requires additional partitioning.
** LXC/LXD - [https://lxd.readthedocs.io/en/stable-3.0/storage/ storage options]: we will go with ZFS (still need to decide how big the storage should be).
*** btrfs allows rolling back to an older snapshot; does ZFS have a comparable feature, or should we consider having two storage pools, one ZFS and one btrfs?
*** We will have containers dedicated to databases: one for user databases and one for system services such as LDAP, Gitea and Trac.

==== Containers and Services ====

Refer to [[SAC Service Status#osgeo_7]]

== Proposed ==

* Webextra (FOSS4G archives)
* '''DONE''' Wiki [in progress; lxd-p2c is a tool to convert a VM or physical machine snapshot into a container] - wiki.osgeo.org is currently proxied through the osgeo7 nginx container, but is still hosted on osgeo3
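If/when the wiki is migrated with lxd-p2c, the tool is run on the source machine and pushes its filesystem into a new container over the target's LXD API. A sketch, assuming the API is reachable on the default port and using an illustrative container name:

 # Run on the source host (e.g. osgeo3); creates a container named "wiki" on osgeo7 from the local root filesystem
 lxd-p2c https://osgeo7.osgeo.osuosl.org:8443 wiki /
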
[[Category:Infrastructure]]