Infrastructure Transition Plan 2014


Revision as of 09:03, 21 April 2014

This is a draft document for collaboratively planning the new server transition. This notice will be removed once SAC has determined its final course of action.

Background

The current physical machines hosted at OSL are entering the latter part of their life expectancy. With the recent replacement of hard drives and RAID card batteries affecting performance, it's time to start planning for the next 3-5 years of computing needs. We have a recently acquired large backup machine at OSL with 9 TB of usable space. OSGeo1 at Peer will be off as of May 2014.

Past Performance

Current hardware has, for the most part, met the original stated goals of hosting websites for projects, issue tracking, version control, and mailing lists. Uptime has been generally good; performance has occasionally suffered when things aren't configured right (an open proxy, excessive WMS requests, large numbers of 404s from bots). Most services were not configured with redundancy, as small amounts of downtime were deemed acceptable, which may no longer be the case.
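
As an illustration, problem patterns like bots generating floods of 404s can be spotted with a quick pass over the web server's access log. This is only a sketch, assuming a combined-format log; the log path below is a hypothetical example, not a real OSGeo location:

```shell
# Count 404 responses per client IP in a combined-format access log
# (field 1 = client IP, field 9 = HTTP status) and list the top offenders.
# The log path is a placeholder.
awk '$9 == 404 { hits[$1]++ } END { for (ip in hits) print hits[ip], ip }' \
    /var/log/apache2/access.log | sort -rn | head -n 10
```

A recurring offender in that output is a candidate for rate limiting or blocking at the firewall.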

Our biggest dilemma has been a lack of people power. We currently have only about 4-5 people who take part in core system administration. Several other people kindly manage Nabble and some other external resources. Ideas on how to balance the workload and recruit more help are important to keeping the systems running.

Future Needs

  • More projects are using static websites built from version control, primarily with Sphinx.
  • Some projects have expressed interest in continuous integration services.
  • There's renewed interest in a global mirroring or GeoCDN-type setup for redundancy and speed, similar to OSM's, or perhaps even swapping space with OSM.
  • More redundancy to increase uptime of important websites.
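
The first bullet, static sites rebuilt from version control with Sphinx, is commonly automated as a scheduled checkout-and-build step. The crontab fragment below is only a sketch; the schedule, checkout path, and output path are all hypothetical, not real OSGeo locations:

```shell
# Crontab fragment: rebuild a project's Sphinx site nightly at 03:00.
# Paths are placeholders; adjust to the project's actual checkout and webroot.
0 3 * * *  cd /srv/project-docs && git pull -q && sphinx-build -b html source/ /var/www/project/html/
```

Because the output is plain static HTML, the same build product is also easy to push to mirrors, which ties in with the GeoCDN idea above.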

Projects, please list specific needs you would like met.

Ideas

  • Buy new hardware
    • Possibly use SSDs
  • Take advantage of various free hosting services
    • e.g. Readthedocs.org
  • Pay for external hosting
    • GitHub Pro
    • Hetzner (QGIS is currently renting a server)
    • Bluehost
    • DigitalOcean
    • Rackspace
    • Linode
    • etc...
  • Pool resources with projects that have bigger budgets
  • Leverage OSGeo-ICA labs for hosting nodes