How does this relate to similar efforts?
Several research and policy areas work towards similar aims: building AI systems that serve the public interest and governance systems that can work better for people in a rapidly changing world. In particular:
- pluralistic alignment is the project of ensuring that model behavior is not monolithic and instead, in various ways, represents a population in which there are many different preferences or perspectives;
- participatory AI is a broad research area that explores different ways in which stakeholders and affected parties can be involved in the design, development, and deployment of AI systems;
- public AI focuses primarily on public ownership of compute infrastructure, public oversight of the development process, and public ownership of the resulting models; and
- AGI institutions (or “AGI-ready institutions,” “full stack alignment,” or “Post-AGI equilibria”) is an interdisciplinary research area focused on “the robust co-alignment of AI systems and institutions with what people value, from each individual’s pursuit of their vision of the good life to the collective achievement of shared values and ideals.”
Our work overlaps with all of these, but with a particular focus on institutional design (how are decisions made?) and the practical goal of navigating the governance challenges posed by AI, ensuring that societal outcomes stay broadly aligned with a democratic notion of the public interest over the long term.