Benchmarking 2013
Revision as of 05:24, 4 June 2013
Benchmarking Session at FOSS4G
The following text was submitted as a regular presentation simply to get a foot in the door. It would probably make sense to hold it in the plenary at the end of the conference.
Abstract
In recent years, FOSS4G has set the scene for performance shootouts between some of the most powerful and renowned mapping software around - both open source and proprietary. Each year the teams gave their best and unanimously agreed that they had learned a lot and were able to improve the software considerably. So far the goal has been to accelerate the process of grabbing a geometry, rendering it and pushing it out. This has been optimized to a degree that makes differences hard to distinguish; we can honestly say that the contestants are really, really fast.
The next level of speed can only be achieved by fine-tuning data stores and kernels and by twisting virtual arms at a very low level. But this would at the same time defeat the point of comparability. So what to do? The Benchmarking team will find an answer before FOSS4G starts - or rather, before it ends, which gives us three more days to excel at speeding things up.
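The core of such a shootout is simply timing the same map request against each contestant many times. The benchmark itself uses its own dedicated harness, but the idea can be sketched in a few lines of Python; the `fetch` callable stands in for an actual WMS GetMap request, which in a real run would hit a server URL of your choosing:

```python
import time

def time_requests(fetch, runs=10):
    """Call fetch() `runs` times and return the elapsed seconds per call.

    `fetch` is any zero-argument callable; in a real benchmark it would
    issue a WMS GetMap request (e.g. via urllib) and read the response.
    """
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fetch()
        timings.append(time.perf_counter() - start)
    return timings

def summarize(timings):
    """Return (min, mean, max) seconds for a list of timings."""
    return min(timings), sum(timings) / len(timings), max(timings)
```

A real comparison would of course also control for caching, concurrency and warm-up, which is exactly the kind of methodological detail that makes the exercise hard.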
But like everything in this world, we do not want to stagnate, wither and die, but to evolve. Therefore the 2013 edition of the benchmark proposes to evaluate three aspects:
- The well-known dedicated map server contest
  - run as in previous years, on comparable platforms
  - plus (potentially) a session where experts tune kernel parameters and everything else they can get their heads around to the highest level the team is capable of or interested in, including readable documentation of what they did
- Ease of use / flexibility
  - How easy is it to set up a web map service (tile rendering engine)? Again including documentation.
  - How easy is it to create a map with a desktop GIS package using the Open Data published by the Ordnance Survey, plus an OSM overlay (probably the pubs) and a randomly chosen CSV file with some coordinates and attributes? In 2006 the TU Delft ran a GML relay contest in which software development teams / vendors received a GML file and had to load, modify, save and then pass it on to the next contestant, again reporting on what worked and what didn't. This exercise could follow a similar pattern.
- Cartography
  - using the OGC SLD/SE styles published by the Ordnance Survey as a starting point
  - then going into the nitty-gritty details and enhancements, possibly together with some interested cartographers from the ICA
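For orientation, an SLD document of the kind the cartography track would start from looks roughly like this. This is a minimal sketch, not one of the Ordnance Survey styles; the layer name and colours are invented:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<StyledLayerDescriptor version="1.0.0"
    xmlns="http://www.opengis.net/sld"
    xmlns:ogc="http://www.opengis.net/ogc">
  <NamedLayer>
    <!-- hypothetical layer name -->
    <Name>pubs</Name>
    <UserStyle>
      <Title>Simple pub markers</Title>
      <FeatureTypeStyle>
        <Rule>
          <PointSymbolizer>
            <Graphic>
              <Mark>
                <WellKnownName>circle</WellKnownName>
                <Fill>
                  <CssParameter name="fill">#aa3333</CssParameter>
                </Fill>
              </Mark>
              <Size>8</Size>
            </Graphic>
          </PointSymbolizer>
        </Rule>
      </FeatureTypeStyle>
    </UserStyle>
  </NamedLayer>
</StyledLayerDescriptor>
```

The published styles are far larger, with scale-dependent rules and filters per feature class, which is precisely where the nitty-gritty details come in.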
We will probably not be able to do all of this, but it will at least highlight the complexity of the exercise and give some hints as to which parameters are relevant when choosing a mapping platform, whether on a green field or as a replacement for an existing solution.
Links
The following links point to interesting external resources:
- http://www.esdm.co.uk/mapserver-and-geoserver-and-tilecache-comparison-serving-ordnance-survey-raster-maps
- http://www.esdm.co.uk/further-load-testing-of-geoserver-and-mapserver-and-tilecache