Infrastructure Transition Plan

This is a reduced form of the Transition Plan available at https://board.osgeo.org/servlets/GetAttachment?list=board&msgId=22236&attachId=2.

Hosting Provider

Because OSUOSL needs to retain its non-commercial status, we are unable to obtain an SLA through them, so we will be looking at other hosting options. See the provider matrix at the bottom of this page.

The proposed hosting solution is to use our selected provider to host most of the components listed in the Proposed Tools section below: web site, code repository, wiki, mailing lists, and bug tracker. The additional items could be moved there as well, but some movement has already been made to host them on Telascience services, which seems like a good use of those resources and fits the goals of Telascience as a group. It is proposed to use Telascience hardware as the primary North American mirror for offsite backups of our other hosted services.

It is proposed that we hire a systems administrator for a few months to help set up our services and migrate content over to the new provider. This will help address the tight timeline of migrating by the end of the calendar year. Afterward, we would keep an administrator on retainer (e.g. ¼ time) to maintain our system over time. It will be critical to maintain a Service Level Agreement (SLA) with OSL to guarantee access to engineering resources and timely resolution of issues. Much of the day-to-day work would be handled by volunteers through WebCom and SAC.

Proposed Tools

The following appear in their approximate order of priority and implementation.

  1. DNS Management - tool undecided - location undecided - Outsource DNS services to a third party; set up a temporary DNS name (e.g. osgeo.net) to work under during the migration (see the resolution-check sketch after this list).
  2. Mailing Lists - Mailman - OSL - Need to ensure migration of archives (see the archive rebuild sketch after this list). Use of forums needs to be assessed.
  3. Security and Authentication - OpenLDAP - OSL - Note we also need to obtain the SSL certificate for osgeo.org (and perhaps osgeo.net); osgeo.org is currently held by CN (see the bind-test sketch after this list).
  4. Source Code Control - SVN - OSL - Need to ensure migration of history (see the dump/load sketch after this list).
  5. Web Pages - Drupal CMS - OSL - WebCom supports movement to this CMS and has experience maintaining it. Serves as a powerful base for other web reporting and membership management needs.
  6. Wiki - MediaWiki - OSL - Move from Terrestris.de to OSL. Some types of content could be migrated into the CMS for more official management. Not a high priority as it is working well. We will also likely want some project-specific wiki instances (e.g. for GDAL).
  7. Bug / Issue Tracking - Trac - OSL - Trac is proposed as the bug/issue tracking tool. It has several methods for tying into other parts of the infrastructure (e.g. SVN). Unclear how easy it will be to extract data from the current CN tracker.
  8. Download Server (source, binary, data files) - TBD - OSL & Telascience - Code on OSL; data on Telascience. Will need to estimate our bandwidth requirements (see the rough estimate after this list).
  9. Automated build and smoke test - CruiseControl and BuildBot - Various processes are currently in use; unclear how much work migration will take.
  10. Demo site - n/a - Telascience - Build demonstration apps to run.
  11. IRC - n/a - freenode.net - Become an official Freenode project and make a donation for use of services. Move logging of IRC from the QGIS host to OSL.
  12. Language Translation Tools - TBD - TBD
  13. Communication servers - TBD - TBD
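
For item 1, a quick resolution check helps confirm that the temporary name points at the new server before the main domain is cut over. A minimal sketch, assuming placeholder host names and a placeholder address:

```python
# Sketch: verify that the temporary migration name (e.g. osgeo.net) resolves
# to the new server before repointing osgeo.org. The address and names below
# are placeholders, not real values.
import socket

EXPECTED_ADDRESS = "192.0.2.10"                  # placeholder (documentation range)
NAMES_TO_CHECK = ["osgeo.net", "www.osgeo.net"]  # temporary migration names

for name in NAMES_TO_CHECK:
    try:
        address = socket.gethostbyname(name)
    except socket.gaierror as exc:
        print(f"{name}: lookup failed ({exc})")
        continue
    status = "OK" if address == EXPECTED_ADDRESS else "MISMATCH"
    print(f"{name} -> {address} [{status}]")
```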
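
For item 2, Mailman 2 keeps each list's raw archive as an mbox file, so one common approach is to copy that file to the new host and regenerate the HTML archives with the stock bin/arch tool. A sketch with a placeholder list name and paths (install locations vary by distribution):

```python
# Sketch: rebuild a Mailman 2 list archive on the new host from its mbox file.
# The list name and filesystem paths are placeholders for illustration.
import shutil
import subprocess

LIST_NAME = "discuss"               # placeholder list name
OLD_MBOX = "/backups/discuss.mbox"  # mbox copied over from the old host
NEW_MBOX = (
    f"/var/lib/mailman/archives/private/{LIST_NAME}.mbox/{LIST_NAME}.mbox"
)

# Put the mbox where Mailman expects it, then regenerate the HTML archive.
# --wipe discards any half-built archive before rebuilding from scratch.
shutil.copy(OLD_MBOX, NEW_MBOX)
subprocess.run(
    ["/usr/lib/mailman/bin/arch", "--wipe", LIST_NAME, NEW_MBOX],
    check=True,
)
```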
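
For item 3, a simple bind test can confirm that migrated accounts authenticate against the new OpenLDAP server. A sketch using the third-party ldap3 library; the host, DN, and credentials are placeholders:

```python
# Sketch: check that a migrated account can bind to the new OpenLDAP server.
# Host, DN, and password are placeholders, not real OSGeo values.
from ldap3 import Server, Connection  # third-party: pip install ldap3

server = Server("ldap://ldap.osgeo.org")            # placeholder host
user_dn = "uid=testuser,ou=people,dc=osgeo,dc=org"  # placeholder DN
conn = Connection(server, user=user_dn, password="changeme")

if conn.bind():
    print("bind OK: migrated credentials work")
    conn.unbind()
else:
    print("bind failed:", conn.result)
```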
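
For item 4, full history is normally carried over with svnadmin's dump/load cycle. A sketch with placeholder repository paths, shown as one script for brevity (in practice the dump runs on the old host and the load on the new one):

```python
# Sketch: migrate a Subversion repository with full history via
# svnadmin dump/load. Repository paths are placeholders.
import subprocess

OLD_REPO = "/srv/svn/old-repo"   # placeholder path on the current host
NEW_REPO = "/srv/svn/new-repo"   # placeholder path on the new OSL server
DUMP_FILE = "repo.dump"

# Full-history dump of the existing repository.
with open(DUMP_FILE, "wb") as out:
    subprocess.run(["svnadmin", "dump", OLD_REPO], stdout=out, check=True)

# Create the target repository and replay the dump into it.
subprocess.run(["svnadmin", "create", NEW_REPO], check=True)
with open(DUMP_FILE, "rb") as dump:
    subprocess.run(["svnadmin", "load", NEW_REPO], stdin=dump, check=True)
```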
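
For item 8, the bandwidth requirement can be roughed out from expected download volume. The figures below are invented placeholders, not measured OSGeo numbers:

```python
# Back-of-envelope estimate of monthly outbound transfer for the download
# server. All figures are assumed placeholders.
downloads_per_day = 2_000    # assumed average downloads per day
average_download_mb = 15     # assumed average file size in MB

monthly_gb = downloads_per_day * average_download_mb * 30 / 1024
print(f"Estimated outbound transfer: {monthly_gb:.0f} GB/month")
# 2000 downloads/day * 15 MB * 30 days ≈ 900,000 MB ≈ 879 GB/month
```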

Integration

One question about these services is how tightly we will be able to draw them together. For example, it would be ideal to bring together the CMS, project issue tracking, and mailing lists. We will also want them all to be searchable and to feed into each other easily (a minimal sketch follows below). Initially this will be done through the CMS as much as possible. However, in the longer term a management framework such as GForge may need to be considered. Having multi-project management tools through one common set of services is an ideal end goal, providing an infrastructure that makes code easier to track, documentation easier to contribute, people easier to communicate with, and software easier to repackage.
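
As a minimal sketch of services "feeding into each other", the snippet below merges recent-activity feeds from several services into one chronological stream. The feed URLs and the use of the third-party feedparser library are assumptions for illustration only:

```python
# Sketch: aggregate recent-activity feeds from several services into a
# single newest-first stream. The URLs below are hypothetical placeholders,
# not real OSGeo endpoints.
import feedparser  # third-party: pip install feedparser

FEEDS = [
    "https://trac.osgeo.org/timeline?format=rss",     # hypothetical Trac feed
    "https://wiki.osgeo.org/recentchanges/feed.rss",  # hypothetical wiki feed
    "https://lists.osgeo.org/pipermail/discuss.rss",  # hypothetical list feed
]

def aggregate(feed_urls):
    """Fetch each feed and merge entries, newest first."""
    entries = []
    for url in feed_urls:
        parsed = feedparser.parse(url)
        for entry in parsed.entries:
            entries.append({
                "source": url,
                "title": entry.get("title", ""),
                "updated": entry.get("updated_parsed"),
            })
    # Entries lacking a timestamp sort last.
    entries.sort(key=lambda e: e["updated"] or (0,), reverse=True)
    return entries

if __name__ == "__main__":
    for item in aggregate(FEEDS)[:20]:
        print(item["title"], "--", item["source"])
```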

The other side of the migration is bringing more of the OSGeo projects under one roof, providing a common presence that will enhance "branding" and co-distribution. To date, most projects have chosen to stay with their current (external) infrastructure because of the effort required or comfort with their existing stack of tools. It is hoped that the proposed ideas can be debated and a happy medium found for all projects. It is critical that projects coming on board later have the option of moving to a well-supported infrastructure as described in this proposal. The collective volunteer effort spent maintaining each project's systems could be reduced by introducing further cross-project efficiencies.

Persistence of previous services (URLs, protocols, etc.) is an important feature to aim for, particularly for documentation, list archives and distribution facilities - anything indexed by a search engine. A mapping between projects' existing services and the new ones needs to be maintained wherever possible (a minimal sketch follows below). Having a thoughtful plan for this will make migration into or out of OSGeo-hosted infrastructure less painful.
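
As a rough illustration of maintaining such a mapping, the stdlib WSGI sketch below issues permanent redirects from a few old-style URLs to their assumed new homes; the paths and targets are invented for the example:

```python
# Sketch: preserve old URLs by issuing HTTP redirects to new locations.
# The mapping entries here are invented examples, not the real migration map.
from wsgiref.simple_server import make_server

REDIRECT_MAP = {
    "/gdal/index.html": "https://www.osgeo.org/projects/gdal/",        # hypothetical
    "/pipermail/discuss/": "https://lists.osgeo.org/pipermail/discuss/",  # hypothetical
}

def app(environ, start_response):
    path = environ.get("PATH_INFO", "")
    target = REDIRECT_MAP.get(path)
    if target:
        # A 301 keeps search-engine indexes pointing at the new location.
        start_response("301 Moved Permanently", [("Location", target)])
        return [b""]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"Not found\n"]

if __name__ == "__main__":
    make_server("", 8000, app).serve_forever()
```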

Key Milestones

Timelines are highly dependent on the resources available. The list of milestones below is a very simple example based on using volunteers. The values are somewhat meaningless except that they show the general timelines required to meet the year-end migration deadline. At present, timelines depend ultimately on the capacity of SAC and WebCom. When the Executive Director (E.D.) begins, there will be more dedicated focus on successful implementation within these timelines.

Hiring a dedicated systems administrator will help set up, test and migrate services more quickly. The following milestones outline a rough potential timeline, assuming the help of a sysadmin and the E.D.

  1. Approve service provider - 22-Sep-06
  2. Contract set up for sys. admin - 29-Sep-06
  3. Fine-tune migration plan - 29-Sep-06
  4. Approve migration plan - 29-Sep-06
  5. Server purchase and set up - 6-Oct-06
  6. Install tools - 13-Oct-06
  7. Migrate content & services complete - 30-Nov-06

Status Notes

  • Board has approved the overall plan and authorized purchase of a server for OSL.
  • Jason Birch is leading the server specification and purchase activity.
  • Members of the Indictrans team are available now to work on the Drupal portal, notably the membership application and service provider directory, as well as a more general Drupal front end for managing the LDAP groups for users.
  • A preliminary BuildBot configuration for build and smoke tests has been rolled out at Telascience by Mateusz Loskot.

Provider Matrix

RackSpace
  • Package / Cost: Basic package, starting at $425
  • Network/Physical Plant SLA: 100% uptime, or 5% of monthly fee credited per ½ hour of downtime
  • Hardware SLA: 100% uptime, or 5% of monthly fee credited per 1 hour of downtime
  • Managed/Colo details: Managed hardware, OS, and ???
  • Root Access: Yes?
  • Disk Space: 80 GB EIDE
  • Bandwidth (In/Out): Unlimited / 150 GB
  • CPU / Memory: Athlon 64 3200, 1 GB
  • OS: RHEL
  • Connectivity: Tier 1 multihomed
  • Comments: These guys have a reputation for excellent service, but also for charging a premium for it. I would have had to actually phone them to get a quote.

LiquidWeb
  • Package / Cost: Pro Dedicated 2 Managed Server, $339 for this configuration
  • Network/Physical Plant SLA: 100%, exclusive of maintenance and malicious attacks; 10x downtime credit
  • Hardware SLA: Replacement within 2 hours of problem identification; 10x downtime credit
  • Managed/Colo details: Managed hardware, OS, and common daemons
  • Root Access: Yes
  • Disk Space: 120 GB SATA RAID1 + 120 GB SATA single
  • Bandwidth (In/Out): 1 TB / 1 TB
  • CPU / Memory: Dual Xeon or Athlon 64, 2 GB
  • OS: CentOS
  • Connectivity: Tier 1 multihomed
  • Comments: I hadn't run across LiquidWeb before, but they have good reviews and reasonable prices. A tech answered my technical presales questions within a couple of minutes. Excerpt: [JB] OK, so with managed hosting, you would take care of OS and standard application patches and upgrades, but we would still have the freedom to do our own application installs as root? [LW] Yes, this is correct. We handle everything related to the Operating System and a lot of popular daemons (MySQL, Apache, Exim, etc.) but do not provide support for all of them (we're smart, but don't have time to learn about every piece of software developed extensively).

ev1servers
  • Package / Cost: Dedicated server (unmanaged), starting at $299
  • Network/Physical Plant SLA: Could not find SLA
  • Hardware SLA: Could not find SLA
  • Managed/Colo details: Managed hardware
  • Root Access: Yes
  • Disk Space: 2x 73 GB SCSI
  • Bandwidth (In/Out): 2 TB
  • CPU / Memory: Dual Xeon, 2 GB
  • OS: RHEL
  • Connectivity: Tier 1 multihomed
  • Comments: Looks like a reasonable value offering. We could optionally hire a management company to do OS and basic software management.

EasyStreet
  • Package / Cost: Co-Managed Software, ?
  • Network/Physical Plant SLA: Variable with management plan; 1 week credit for each hour of downtime caused by EasyStreet
  • Hardware SLA: Variable with management plan; same credit terms
  • Managed/Colo details: Managed hardware and OS
  • Root Access: Yes (sort of)
  • Disk Space / Bandwidth / CPU / Memory / OS / Connectivity: ?
  • Comments: Recommended by the OSL sysadmin. Waiting for contact from the CEO. Change process seems rigorous, and they can arbitrarily refuse installation.

iweb.ca
  • Package / Cost: X-Intense, $349
  • Managed/Colo details: Managed hardware
  • Root Access: Yes
  • Disk Space: 2x 80 GB IDE
  • Bandwidth (In/Out): 4x1 GB / 2000 GB per month
  • CPU / Memory: Dual Xeon 2.4 GHz, 1 GB
  • OS: Fedora 4
  • Connectivity: Tier 1 peers; PEER1 is part of iWeb's carrier network
  • Comments: Good experience with gdal.org hosting. In business since 1996; 19,916 sites hosted.

Blue Genesis
  • Package / Cost: VM Server, starting at $40
  • Network/Physical Plant SLA: 99.9% service uptime; 99.7% critical-services uptime guarantee, with credit if not met
  • Hardware SLA: 99.9% service uptime; 99.7% critical-services uptime guarantee, with credit if not met
  • Root Access: Yes (VM server)
  • Disk Space: 120 GB
  • Bandwidth (In/Out): 30 GB burstable
  • CPU / Memory: Intel P4 2.4 GHz, 1 GB
  • OS: RHEL
  • Comments: Adding software is limited.

1and1
  • Package / Cost: Root Server, $169
  • Network/Physical Plant SLA: 99.9% uptime
  • Hardware SLA: 99.9% uptime
  • Managed/Colo details: Managed hardware and OS
  • Disk Space: 120 GB
  • CPU / Memory: Intel P4 3.06 GHz, 2 GB
  • OS: FC4
  • Connectivity: 40 Gbit of external carrier-class connectivity
  • Comments: Root Servers are "sold out". 5.87 million customers on paid services.

PEER1
  • Package / Cost: Need quote
  • Network/Physical Plant SLA: 100% uptime; PEER 1 will apply a credit equal to five percent (5%) for each hour of downtime
  • Hardware SLA: 100% uptime; same credit terms
  • Connectivity: 4 tier-1 upstream providers

Hurricane Electric
  • Package / Cost: Dedicated server; charged according to bandwidth
  • Network/Physical Plant SLA: 100% uptime
  • Hardware SLA: 100% uptime
  • Disk Space: Charged based upon average space consumed throughout the month
  • OS: Our choice
  • Connectivity: Multiple full gig-e
  • Comments: In business since 1994; concentrates on business needs.