AI is a complex, technical policy area. Shouldn't experts make the decisions?
For the most part, deliberative processes are used to make normative or political decisions, not technical ones. Of course, the two cannot be completely disentangled. Most decisions involving technology, particularly at the scale of large AI systems or online platforms, have normative and political consequences because they affect how benefits and burdens are distributed. But to the extent possible, remits for deliberative processes focus on normative or political questions like:
- What kind of society do we want to live in?
- How should benefits and burdens be distributed?
- How should we proceed, given fundamental trade-offs?
- How should we proceed, given conflicting preferences?
- How should we adapt to achieve [outcome], given that the possible options distribute benefits and burdens differently?
rather than technical or empirical questions like:
- What options do we have to achieve [outcome]?
- What will happen if we do [intervention]?
That said, deliberative processes typically include a learning component during which participants are brought up to speed, as far as the available time allows, on technical and empirical knowledge relevant to the domain. This includes hearing from, and being able to question, people with relevant expertise.
In the context of well-designed deliberative processes, randomly selected groups of people have a track record of making good decisions, even in highly technical domains. See, for example, the processes run by Sciencewise, a publicly funded body in the UK.