However, a flexible, case-by-case approach such as this complicates
matters for those looking to operationalise explainable AI decision
systems. It may therefore be useful to develop a list of explanation types
to support organisations in identifying appropriate ways of explaining
particular AI decisions, and of delivering those explanations, to the
individuals they concern.
2. Education and awareness
Project ExplAIn is focused on explanations of AI decisions. But findings
from the public and industry engagement research activities serve as a
welcome reminder that explanations alone cannot address all the
challenges associated with AI and its use in decision-making. Jurors in
particular signalled the importance of education, awareness-raising, and
involvement of the public in the development and application of AI.
This suggests that, as well as one-off engagement at the time an AI
decision is made, there should be broader public engagement. This may
help individuals gain a better understanding of how widespread AI
decision-making is in everyday life, leaving them better equipped to
anticipate its use and confident in interacting with such systems.
There is a risk that awareness-raising could simply serve to normalise
the use of AI decisions, disproportionately emphasising their benefits so
that individuals become less likely to question their use or to expect
explanations. A campaign focused purely on the risks and potential
negative consequences would be equally harmful. Although no clear
message emerged from the research about who should be responsible for
broad public education, it is important that diverse voices are behind
this work to ensure a balanced message.
Consideration will be given to how the planned guidance, and broader ICO
and Turing work on AI can support industry, government and other bodies
to increase awareness and better engage the public on this complex topic.
3. Challenges
In the public engagement research, jurors primarily focused on when and
why they did, or did not, prioritise explanations of AI decisions over
accuracy. It is therefore notable that a number of jurors considered
the cost and resource burden on the organisation delivering the
explanation. Although comments predominantly related to scenarios
involving the use of public money, several jurors also acknowledged
issues of cost and resource for the private sector organisation in the
recruitment scenario. This suggests that, although not an excuse for
failure to provide any explanation of an AI decision, individuals are