By Diane Warbutton, Sciencewise Evaluation Manager
Sciencewise recognises that processes designed to inform and influence public policy and decision-making – including public dialogue – need to be rigorous and impartial, relevant, accessible, legal and ethical, and that all such processes need to be assessed against agreed standards. At the most basic level, rigour and impartiality require quality assurance of these processes to guarantee the quality of the outputs.
A new edition of the Sciencewise Quality in Public Dialogue Framework, published in March 2016, is designed to provide an improved approach to a quality assurance process for public dialogue.
The Framework has been developed on the basis of learning from Sciencewise project evaluations over recent years. This new edition takes into account experience of using the Framework since the initial working paper was published in March 2015. It also builds on new input from a range of academics, government departments and practitioners.
The Framework provides a set of questions on the context, scope and design, delivery, impact and evaluation of public dialogue practice, designed to stimulate thinking and open up design options. It is not intended to be prescriptive, limiting or bureaucratic but to provide ways of addressing the basic questions that are very often asked of public dialogue including:
• How many participants or locations are 'enough'?
• Should the role of scientists and other specialists involved in dialogue events primarily be to provide information, or should they also be participants in the dialogue?
• What makes a dialogue 'deliberative' and how much time needs to be given to providing information to participants compared to time for discussion?
• To what extent should dialogue processes include non-deliberative techniques such as polling, and attempt quantitative analysis of what is inherently a qualitative process (e.g. measures of scale to demonstrate strength of feeling)?
• What forms of analysis and reporting are appropriate and what role do participants have in reporting dialogue results (e.g. reports based on agreements reached collectively among or with participants)?
• What counts as a sufficiently robust process, so that decision makers know how and when to use dialogue results with confidence alongside other forms of evidence?
We hope the Framework will be of use as an initial briefing on what public dialogue involves, as a checklist for those designing and delivering public dialogue, and as a tool for those who want to test the robustness of a dialogue project at every stage of planning, design, delivery and evaluation.