The OECD.AI Policy Observatory Catalogue of Tools & Metrics for Trustworthy AI helps AI actors build and deploy trustworthy AI systems. These tools and metrics are designed to help AI actors develop and use trustworthy AI systems and applications that respect human rights and are fair, transparent, explainable, robust, secure and safe. BetterBeliefs was selected and added to the catalogue as a trustworthy tool in March 2023.
The government of the Netherlands hosted the first global Summit on Responsible Artificial Intelligence in the Military Domain, REAIM 2023, on 15-16 February 2023 at the World Forum in The Hague. REAIM 2023 was a platform for all stakeholders to discuss the key opportunities, challenges and risks associated with military applications of AI. BetterBeliefs was a participatory research platform used at the REAIM Summit by multiple organisations to facilitate breakout events on weaponised drones, operationalising AI principles and the reduction of civilian harm.
BetterBeliefs helps project managers implement value-based engineering best practice by working with stakeholders to propose and systematically evaluate contexts of operation, values, requirements and design features, drawing on external evidence sources as well as the expertise of stakeholders. BetterBeliefs helps translate normative ethical theories and values into good engineering practices and system requirements within a risk management framework.
BetterBeliefs complements community engagement efforts with an intuitive, social media-like interface that requires every idea on the platform to be connected with evidence in support of or opposed to it. Stakeholders' interaction with evidence items (rating them out of five stars), together with the ability to add confirming and disconfirming evidence, sets our platform apart, allowing decision makers to confidently make decisions that are justified for action and transparent to the community.
The CSIRO megatrend ‘Unlocking the human dimension: The elevating importance of diversity, equity and transparency in business, policy and community decision making’ highlights the social drivers influencing future consumer, citizen and employee behaviours. BetterBeliefs is a digital platform for evidence-based, scientific policy that respectfully and responsibly addresses this megatrend.
BetterBeliefs offers governments a better mechanism for receiving direct feedback from people, organisations and communities. When a diversity of stakeholders and subject matter experts are invited to participate and given the opportunity to genuinely engage, BetterBeliefs realises the dream of a participatory, evidence-based democracy.
By inviting diverse, inclusive and transparent participation; encouraging respectful dissent and change of belief; and facilitating evidence-based decisions, BetterBeliefs helps governments and organisations involved in the governance of research and innovation responsibly manage steep power gradients and strongly asserted interests. One of the most important properties of responsibility is increasing accountability, humility and pluralism in the face of ignorance and contending interests. The most responsible way to govern innovation is through democracy.
Why do we call our company ‘BetterBeliefs’? Why are beliefs important, and what does it mean to make them better? Beliefs lie at the heart of what makes us human: beliefs shape the organisation and functioning of our minds; beliefs define the boundaries of our cultures; and beliefs guide our motivation and behaviour.
BetterBeliefs was sponsored to attend the BiiG Public Sector Innovation for Impact Festival 2023 by the Queensland Chief Entrepreneur and GovReady. BetterBeliefs enables evidence-based stakeholder engagement for justified and actionable government decision making, helping governments choose better ideas to move forward with.
BetterBeliefs was used by Defence to produce a technical report on ethical AI use in Defence contexts, ‘A Method for Ethical AI in Defence’. The report summarises the discussions from the workshop and outlines a pragmatic ethical methodology to enhance communication between software engineers, integrators and operators during the development and operation of AI projects in Defence. The report offers a practical methodology that could support AI project managers and teams in managing ethical risks. This methodology includes three tools: an Ethical AI for Defence Checklist, an Ethical AI Risk Matrix, and a Legal and Ethical Assurance Program Plan (LEAPP). The Responsible AI in Defence (RAID) Toolkit is promoted through the Defence AI Research Network (DAIRnet).