Back to Conference_Committee
This page is a collection of information, comments, and suggestions, based on the planning, organizing, and hosting of the FOSS4G 2007 conference. The intent is to provide a resource that can be used by organizers of future FOSS4G conferences. This is not a forum for conference attendees to comment on the conference.
= Timetable =
Anything related to the timetable of the conference, from the initial RFP response through to post-conference 'wrap up' should be added here. Some items may also be repeated in the relevant sections below, to provide context.
= Sponsorships =
= Social Venues =
= Conference Venue =
= Accommodations =
= Transportation =
= Workshops =
In 2007, there were 12 slots for 3-hour workshops (on the workshop day) and 16 slots for 1.5-hour labs (spread over the remainder of the conference). All 120 computers were rented and divided into 6 labs of 20 computers each, with 2 attendees per computer, for a total capacity of 240 attendees on workshop day.
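As a planning aid, the capacity numbers above follow directly from the lab layout. The short sketch below (Python, with the 2007 figures hard-coded purely for illustration) shows the arithmetic and can be adjusted for a different lab setup.
<pre>
# Workshop-day capacity arithmetic, using the 2007 figures (illustrative only).
labs = 6                    # rented computer labs
computers_per_lab = 20      # machines in each lab
attendees_per_computer = 2  # attendees sharing one machine

total_computers = labs * computers_per_lab                        # 120 rented machines
workshop_day_capacity = total_computers * attendees_per_computer  # 240 attendees

print(total_computers, workshop_day_capacity)                     # 120 240
</pre>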
Workshop submissions were accepted by email, in a standard document template, and the information was entered into a summary spreadsheet (title, name, short abstract) and a master document (all information). Submitters were asked to indicate what physical infrastructure they required, and whether their submission was for a 3-hour or a 1.5-hour slot. Some submitters indicated they could do both formats; others indicated only one. It was not made clear to submitters that "both" was an acceptable answer, which led to problems with selection later on.
52 workshop submissions were received. 33 indicated they could do the 3-hour format, which made for almost 3-to-1 demand versus supply (12 slots). 22 indicated they could do the 1.5-hour format, which was much closer to the 16 slots available for labs.
The committee first selected the twelve 3-hour workshops via a ranking process (more on that below). Submissions accepted as 3-hour workshops were removed from the 1.5-hour list, and then the 1.5-hour workshops were selected using the same ranking process.
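The two-pass selection is easy to express in code. The sketch below is a minimal illustration only; the function name, data structures, and default slot counts are assumptions made for the example, not part of the committee's actual tooling.
<pre>
# Minimal sketch of the 2007 two-pass selection: fill the 3-hour slots first,
# then fill the 1.5-hour slots from the submissions that remain. Each ranking
# is assumed to be a list of submission IDs, sorted from most to least desirable.

def select_workshops(ranked_3hr, ranked_90min, slots_3hr=12, slots_90min=16):
    # Pass 1: take the top-ranked 3-hour submissions.
    accepted_3hr = ranked_3hr[:slots_3hr]

    # Pass 2: drop anything already accepted, then take the top 1.5-hour submissions.
    remaining = [s for s in ranked_90min if s not in accepted_3hr]
    accepted_90min = remaining[:slots_90min]

    return accepted_3hr, accepted_90min
</pre>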
The submissions that appeared only in the 3-hour list had a much harder time making the cut than those that were dual-listed. Some submitters would probably be fine with that, since their content would not fit into a 1.5-hour format. Others would have been happy to present in either format, had they known they had that option when submitting.
'''Lesson:''' Having two formats is useful for using the infrastructure more effectively, but submitters need to be clearly told about their options when submitting, so that every submission has the best chance to compete for a slot.
== Timing ==
Workshop selection forms part of the registration process, so workshops must be selected before registration opens. Workshops are also a good piece of web site content that helps convey what the conference is "about" to potential attendees. As with every other piece of conference content, the sooner it is selected and published, the easier it is to attract non-traditional attendees, who form their opinions based on the content provided on the web site.
In 2007, the Call for Workshops went out in early February (8 months prior to conference) and closed in early March (7 months prior to conference). Workshops were selected by the end of March (6 months prior to conference).
== Ranking ==
Workshop submissions were ranked using a multi-criteria system, with each criterion scored from 1 to 5, and each workshop was supposed to receive scores for all criteria from all committee members. In practice, some members found the process too onerous and gave only one score per workshop, based on a holistic reading of all the criteria. Some members scored within a range of 3-5; others used the entire 1-5 range. Because members followed different processes, the amount of "information" extracted from the ranking was lower than it could have been. Ideally, each committee member would contribute equal "information" to the decision, but in 2007 the members who used the whole scoring range and gave only a single score had more influence than the members who scored more conservatively and provided all the criteria separately.
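The uneven-influence problem is easy to see with made-up numbers. In the hypothetical sketch below, member A scores within 3-5 and member B uses the full 1-5 range; the spread in the resulting averages comes mostly from member B.
<pre>
# Hypothetical scores for three submissions from two committee members.
# Member A clusters in the 3-5 range; member B uses the full 1-5 range.
scores_a = {"sub1": 5, "sub2": 4, "sub3": 3}
scores_b = {"sub1": 5, "sub2": 3, "sub3": 1}

for sub in ("sub1", "sub2", "sub3"):
    average = (scores_a[sub] + scores_b[sub]) / 2
    print(sub, average)

# Output: sub1 5.0, sub2 3.5, sub3 2.0. Member A separates top from bottom by
# 2 points, member B by 4, so B contributes twice as much of the final spread.
</pre>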
'''Recommendation #1:''' Before publishing the Call for Workshops, agree on the decision criteria, and publish them along with the Call.
'''Recommendation #2:''' Do not attempt to gather individual scores for each criterion. Have members score the submissions holistically, keeping all the criteria in mind as they do so.
'''Recommendation #3:''' Do not use a 1-to-X ''scoring'' system; instead use an ordered ranking system, where each member returns a sorted list of submissions from most to least desirable. This approach ''maximizes'' the amount of information gathered about each submission and sidesteps "plumping" strategies (where a member gives every submission a "0" except for the two they really like, which get a "5", thereby accentuating that member's effect on the overall average).
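One concrete way to do this (an assumption for illustration, not something the 2007 committee used) is a Borda-style count: each member ranks every submission, and a submission earns points based on its position on each ballot. Because every ballot must rank everything, no single member can inflate their influence the way a plumped score sheet can.
<pre>
# Minimal Borda-count sketch. Each ballot is a full ordering of the submissions
# (best first); a submission earns (N - position) points per ballot, and the
# final order is by total points.

def borda(ballots):
    n = len(ballots[0])
    totals = {}
    for ballot in ballots:
        for position, submission in enumerate(ballot):
            totals[submission] = totals.get(submission, 0) + (n - position)
    return sorted(totals, key=totals.get, reverse=True)

# Hypothetical ballots from three committee members over four submissions.
ballots = [
    ["A", "B", "C", "D"],
    ["B", "A", "D", "C"],
    ["A", "C", "B", "D"],
]
print(borda(ballots))  # ['A', 'B', 'C', 'D']
</pre>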
'''Recommendation #4:''' Potentially, remove the workshop committee process entirely and move to a community scoring model. However, given the limited number of slots and the benefits for workshop presenters (free admission), a community model might be a tempting target for vote pooling and other forms of influence. Also, because workshops must be selected so far in advance of the conference, it would not be possible to bring in the opinions of registrants who are not members of the "usual" OSS community of interest.