The ethics of using powerful new large language model (LLM) AI chatbot products is urgent and unclear. Millions of people around the world are now experimenting with technologies such as ChatGPT without knowing how to determine the boundary conditions of when the technology is defensible to use and when it should be eschewed.
When there is uncertainty around the ethics of using technology in the absence of regulation (Tannert, Elvers & Jandrig, 2007), the challenge is well expressed in the Collingridge Dilemma (Collingridge, 1982; Collingridge & Reeve, 1986; Genus & Stirling, 2018).
The Collingridge Dilemma presents a quandary in which efforts to control technology development face a double-bind problem:
🔭 an information problem: impacts cannot be easily predicted until the technology is extensively developed and widely used; and
👩‍⚖️ a power problem: control or change is difficult once the technology has become entrenched.
BetterBeliefs is a concrete digital tool for enabling critical citizen engagement and participatory deliberation to manage the ethical uncertainty of technology development.
As Genus & Stirling (2018) state, responsibility for new technologies lies not in engineering consensus, but in exploring dissensus.
BetterBeliefs' system affordances align with the Collingridge qualities of inclusion, openness, diversity, incrementalism, flexibility and reversibility.
By inviting diverse, inclusive and transparent participation; encouraging respectful dissent and change of belief; and facilitating evidence-based decisions, BetterBeliefs helps governments and organisations involved in the governance of research and innovation responsibly manage steep power gradients and strongly asserted interests.
Among the most important properties of responsibility are accountability, humility and pluralism in the face of ignorance and contending interests.
The most responsible way to govern innovation is through democracy (Genus & Stirling, 2018).
Read case studies on BetterBeliefs being used by the Governments of Australia and the Netherlands to engage a wide diversity of stakeholders in exploring how AI should be conceived of and deployed in military contexts of use.
References

Collingridge, D. (1982). The social control of technology. St. Martin's Press, New York.

Collingridge, D., & Reeve, C. (1986). Science speaks to power: The role of experts in policy making. St. Martin's Press, New York.

Genus, A., & Stirling, A. (2018). Collingridge and the dilemma of control: Towards responsible and accountable innovation. Research Policy, 47(1), 61-69.

Tannert, C., Elvers, H. D., & Jandrig, B. (2007). The ethics of uncertainty: In the light of possible dangers, research becomes a moral duty. EMBO Reports, 8(10), 892-896.