Talk:Benchmarking 2013

Arnulf's notes for the closing plenary talk at FOSS4G 2013

5 minutes

The Shootout that did not happen. 
The Benchmark ... that did not happen.
The WPS comparison: 
How can we do more? 
How can we do better? 
We need dedication. Throughout the year. Involve research and academia. 
Involve sponsors. 
Make a committee? 

Please discuss, add ideas, etc.:


A few points to discuss (in no particular order):

The good:

  • There is great interest in a benchmark from the broader community, both because it is fun to see projects have a friendly fight and because it is genuinely informative.
  • From the project developers' point of view, it is really helpful, and all the developers involved could see huge benefits from it.
  • From a marketing point of view, the winning teams (those at the top of the chart) have one more tool to promote their project.

The bad:

  • This exercise has become (i.e. has always been) very demanding for the developers involved. To my mind, the 2010 edition was such a success because people didn't know what they were getting into. I don't think we can ask the proprietary vendors to come back for a similar exercise.
  • Project champions all told us that they were burnt out at the end. This is not good.
  • The scope (speed) may be too narrow as a yearly subject to stay interesting to all parties involved. (Not sure of this one)
  • The benefit of a FOSS4G-oriented exercise is not that appealing to projects. Do projects want more from this investment of time and energy than a cool presentation with charts?

The potential solutions:

  • The time required to take part in the benchmark exercise should be reduced for this to work. We can't ask Jeff or Andrea (to mention only those two) to spend four months (or more) of their (unfunded) time on this.
  • We need sponsors for both hardware and money.
  • We need a sprint to get all project champions together and get this out the door in a week.
  • This could/should be a service offered to participating projects, where they can run the benchmark and see the results. This would help show how projects evolve performance-wise over time. Thinking about it, that would make a really interesting presentation.
  • The test cases need to be kept minimal. Start small: render 10 million points with no styling, for example (a minimal sketch of such a test follows this list). I would see between one and three tests the first year so that the steps to join are not an issue.
  • We may have to drop the hardcore tuning and optimisation from the benchmark if that is what is taking the time.
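
As a rough illustration of how small a "start small" test could be, the sketch below times a single WMS GetMap request with Python's standard library. The server URL, map file and the unstyled layer name "points_10m" are assumptions for the example, not part of any agreed benchmark specification:

  # Minimal sketch of a "start small" test case, assuming a WMS endpoint at
  # BASE_URL exposing a hypothetical unstyled layer named "points_10m".
  # URL, map file, layer name and bounding box are placeholders.
  import time
  import urllib.parse
  import urllib.request

  BASE_URL = "http://localhost/cgi-bin/mapserv?map=/data/benchmark.map"

  params = {
      "SERVICE": "WMS",
      "VERSION": "1.1.1",
      "REQUEST": "GetMap",
      "LAYERS": "points_10m",
      "STYLES": "",
      "SRS": "EPSG:4326",
      "BBOX": "-180,-90,180,90",
      "WIDTH": "1024",
      "HEIGHT": "512",
      "FORMAT": "image/png",
  }

  url = BASE_URL + "&" + urllib.parse.urlencode(params)

  start = time.perf_counter()
  with urllib.request.urlopen(url) as response:
      body = response.read()
  elapsed = time.perf_counter() - start

  print(f"GetMap returned {len(body)} bytes in {elapsed:.2f} s")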

Other ideas (Replace those that you don't agree with):

  • It would be cool to have a simple test suite that anyone could run on any computer to see the performance of their project. Just like I have a test suite to benchmark my video card, I would be able to test my MapServer installation. Maybe the benchmark could become only that... (a rough sketch follows below)
  • Our potential sponsors are not projects, but institutions or companies using the software (Ordnance Survey, for example). These are the people we have to aim at when planning what the exercise should become. What do they get out of this? Is it being able to choose the right software for the right job? Maybe we could let sponsors define their use case and add it to the test suite.
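
As a rough sketch of what such a self-service suite could look like (in the spirit of a video-card benchmark), the script below runs a list of pre-generated GetMap requests several times and reports basic timing statistics. The endpoint, map file and layer name are placeholders; a real suite would ship an agreed request list and reference data instead:

  # Rough sketch of a "run it on your own box" suite. The request URL,
  # map file and layer name are hypothetical placeholders.
  import statistics
  import time
  import urllib.request

  # Hypothetical list of pre-generated GetMap URLs (one per test case).
  REQUESTS = [
      "http://localhost/cgi-bin/mapserv?map=/data/benchmark.map&SERVICE=WMS"
      "&VERSION=1.1.1&REQUEST=GetMap&LAYERS=points_10m&STYLES=&SRS=EPSG:4326"
      "&BBOX=-180,-90,180,90&WIDTH=1024&HEIGHT=512&FORMAT=image/png",
  ]
  RUNS = 10  # timed runs per request, after one warm-up run

  for url in REQUESTS:
      urllib.request.urlopen(url).read()  # warm-up (fills OS and app caches)
      timings = []
      for _ in range(RUNS):
          start = time.perf_counter()
          urllib.request.urlopen(url).read()
          timings.append(time.perf_counter() - start)
      print(f"min {min(timings):.3f}s  median {statistics.median(timings):.3f}s  "
            f"max {max(timings):.3f}s  <- {url[:60]}...")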