Advances in technologies such as artificial intelligence open up many opportunities for organisations. But this brave new world of technology also carries risks.
The BetterBeliefs platform helps organisations develop evidence-based solutions to manage risk during periods of significant change or development, including the adoption of AI technologies.
Organisations can have the utmost confidence that BetterBeliefs has the experience and know-how to work in high-risk, confidential environments – whether you're looking after your community or defending the entire nation.
Background
The Australian Department of Defence has recognised the significant potential of Artificial Intelligence (AI) technology to increase Defence capability, while acknowledging that more work needs to be done to ensure that introducing the technology does not result in adverse outcomes.
Defence’s challenge is that failure to adopt the emerging technologies in a timely manner may result in a military disadvantage, while premature adoption without sufficient research and analysis may result in inadvertent harms.
What happened?
To address this, Jericho Disruptive Innovation (Royal Australian Air Force), the Defence Science and Technology Group (DSTG) and the Trusted Autonomous Systems Defence Cooperative Research Centre (TASDCRC) ran a workshop in Canberra in 2019, bringing together thought leaders from Defence, academia, industry, government agencies, non-profits and the media.
The workshop used BetterBeliefs to elicit evidence-based hypotheses regarding ethical AI from a diverse range of perspectives and contexts, in order to produce pragmatic methods to manage ethical risks on AI projects in Defence.
Attendees contributed evidence-based hypotheses to the discussions, with a view to developing a report proposing principles, topics and methods for AI and autonomous systems in Defence contexts, as a starting point for informing military leadership and ethics.
Key outcomes
Using BetterBeliefs data from the workshop, DSTG produced a technical report on the ethical use of AI in Defence contexts – A Method for Ethical AI in Defence.
The report summarises the workshop discussions and outlines a pragmatic ethical methodology to facilitate communication between software engineers, integrators and operators during the development and operation of AI projects in Defence.
A further outcome of the workshop was the development of a practical methodology to support AI project managers and teams in managing ethical risks. This methodology includes three tools – an Ethical AI for Defence Checklist, an Ethical AI Risk Matrix and a Legal and Ethical Assurance Program Plan (LEAPP) – as well as the Responsible AI in Defence (RAID) Toolkit, promoted through the Defence AI Research Network (DAIRNet).
What we did
The ethics of AI and autonomous systems is an ongoing priority, and Defence used BetterBeliefs as part of its commitment to engaging a wide range of experts and to evidence-based practice, developing effective and practical methodologies for the use of AI in Defence contexts.
LISTEN: Our method was report of the week on the CNA 'AI with AI' podcast (from 30:22): https://www.cna.org/news/AI-Podcast
Contact us to find out more about how BetterBeliefs can work with you to use evidence-based decision making to tackle ethical challenges in your organisation.
The follow-up and engagement facilitated by Kate Devitt and her team is an excellent model for collaboration and innovation that could be applied in other areas of Defence thinking. An excellent report and absorbing topic for those in our community. – Workshop participant Katherine Ziesing, Managing Editor, Australian Defence Magazine Group.
Please note: A Method for Ethical AI in Defence does not represent the views of the Australian Government.