

WorkSpace | RecentChanges | Preferences | Random | Index | Search

Evaluating Open Space Meetings

It's best to avoid this altogether, if at all possible. Not because Open Space doesn't work, but because it works so well that any attempt to squeeze the spirit and momentum of an event or meeting into those little 5-point scales will totally miss the point of the whole endeavor.

Perhaps even more importantly, asking participants to evaluate the meeting usually implies, in the end, that the meeting planners were somehow responsible for the quality of the participants' experiences. That runs directly counter to the message and structure of the opening and working sessions of the meeting, in which participants were given full responsibility for the quality of their own experience, learning, work, and results.

Sometimes giving people a sheet of paper with 'Evaluation' at the top of it absolutely cannot be avoided. In those cases, try this...


Or this, from ChrisCorrigan...

In lieu of an evaluation, pass out a page that requests the following:

Pass them out with the proceedings on day three, or at the end of the conference, so that participants can fill them out at the same time as they are ranking, voting, and action planning.

You could ask participants to bring them to the closing circle and pass them around with the talking stick, so that the pile of results grows as the stick progresses around the circle.

In some groups or settings, you may need to emphasize that these pages are for storytelling about the event: about participants' work during it and their expected progress after it, NOT an evaluation of the conference. These pages are about specific learnings and actions, not general comments and evaluations.

Some other questions that may be interesting...


Further Note on Evaluating Results

ChrisCorrigan reminds me that we have to be sure to use the SAME instrument BEFORE and AFTER the Open Space meeting. If we are doing the Open Space because sponsors look around and notice that the organization 'feels bad,' then we should evaluate how it 'feels' afterward. If cash on hand is 50 days and the purpose is to find more days of cash, then measure that. If we do Open Space to respond to and address issues raised in a survey, then repeat the survey. We have to use the same gauge before and after the session to get a fair baseline. If sponsors' "sense" of things beforehand is that people are "disconnected," and afterward they say "people got connected" but the results didn't show up, we have to ask what results they were measuring, and how. You can't get what you don't ask for, and it isn't fair to change the means of measurement or scoring after the game is over.

Last edited October 4, 2003 10:41 pm CentralTimeUSA by MichaelHerman
© 1998-2020 Michael Herman and www.michaelherman.com, unless signed by another author or organization. Please do not reprint or distribute for commercial purposes without permission and full attribution, including web address and this copyright notice. Permission has always been granted gladly to those who contact me and say something about themselves, their work, and their use of these materials. Thank you and good luck! - Michael