Occasional Paper
Royal United Services Instute
for Defence and Security Studies
Artificial Intelligence and UK
National Security

Alexander Babuta, Marion Oswald and Ardi Janjeva



Alexander Babuta, Marion Oswald and Ardi Janjeva
RUSI Occasional Paper, April 2020
Royal United Services Instute
for Defence and Security Studies
189 years of independent thinking on defence and security





Royal United Services Instute
for Defence and Security Studies
Whitehall

United Kingdom

www.rusi.org




             


Contents
Acknowledgements v
Executive Summary vii
Note on Sources ix
Introduction 1
I. National Security Uses of AI 7
II. Legal and Ethical Considerations 21
Machine Intrusion 24
III. Regulation, Guidance and Oversight 33
Conclusions 39
Acknowledgements
The authors are very grateful to all research participants who gave up their valuable
time to contribute to this study. The authors would also like to thank a number of
individuals who provided helpful feedback on an earlier version of this paper.

Executive Summary
This paper presents the findings of an independent RUSI research study into the use of
artificial intelligence (AI) for national security purposes. Its aim
is to establish an independent evidence base to inform future policy development
regarding national security uses of AI. The findings are based on in-depth consultation with
stakeholders from across the UK national security community, law enforcement agencies,
private sector companies, academic and legal experts, and civil society representatives. This
was complemented by a targeted review of existing literature on the topic of AI and national
security.
The research has found that AI offers numerous opportunities for the UK national security
community to improve the efficiency and effectiveness of existing processes. AI methods can rapidly
derive insights from large, disparate datasets and identify connections that would otherwise
go unnoticed by human operators. However, in the context of national security and the powers
given to UK intelligence agencies, use of AI could give rise to additional privacy and human
rights considerations which would need to be assessed within the existing legal and regulatory
framework. For this reason, enhanced policy and guidance is needed to ensure the privacy and
human rights implications of national security uses of AI are reviewed on an ongoing basis as new
analysis methods are applied to data.

Three categories of potential AI use cases were identified in the research:
1. The automation of administrative organisational processes could offer significant
efficiency savings, for instance to assist with routine data management tasks, or to improve
the efficiency of compliance and oversight processes.
2. For cyber security purposes, AI could proactively identify abnormal network traffic or
malicious software and respond to anomalous behaviour in real time.
3. For intelligence analysis, 'augmented intelligence' systems could support a range of tasks,
including:
a. Natural language processing and audiovisual analysis, such as machine
translation, speaker identification, object recognition and video summarisation.
b. Filtering and triage of material gathered through bulk collection.
c. Behavioural analytics to derive insights at the individual subject level.
None of the AI use cases identified in the research could replace human judgement. Systems
that attempt to 'predict' human behaviour at the individual level are likely to be of limited
value for threat assessment purposes. The role of AI will instead be to augment the work of
human analysts, supporting a range of analysis tasks focused on individual subjects.
The requirement for AI is all the more pressing when considering the need to counter AI-enabled
threats. Malicious actors will undoubtedly seek to use AI to attack the UK, and it is likely
that the most capable hostile state actors are already exploring such capabilities. In time,
other threat actors, including cybercriminal groups, will also be able to take advantage of these
same innovations:

Threats to digital security include the use of polymorphic malware that frequently
changes its identifiable characteristics to evade detection, or the automation of social
engineering attacks to target individual victims.

Threats to political security include the use of AI to generate
synthetic media and disinformation, with the objective of manipulating public opinion
or interfering with electoral processes.

Threats to physical security are a less immediate concern. However, increased
automation and interconnected critical national infrastructure will create numerous vulnerabilities which
could be exploited to cause damage or disruption.
There are opportunities and risks relating to privacy intrusion. AI arguably has the potential
to reduce intrusion, by minimising the volume of personal data that is subject to human review.
However, it has also been argued that the degree of intrusion is equivalent regardless of whether
data is processed by a machine or a human operator.

Algorithmic profiling could be considered more intrusive than manual analysis and would raise further
human rights concerns if it was perceived to be unfairly biased or discriminatory. Safeguarding against
these risks requires processes to identify and mitigate bias
at all stages of an AI project, as well as ensuring demographic diversity in AI development teams.
Much commentary has raised concern regarding the 'black box' nature of certain AI methods, which
may lead to a loss of accountability in the overall decision-making process. In order to ensure that
accountability is maintained, human operators will need to understand the limitations of algorithmic
outputs and retain meaningful control over the decisions they inform.

Existing regulation and guidance is unlikely to resolve all the questions raised by national
security uses of AI. The research suggests that
additional sector-specific guidance and oversight mechanisms may be needed, so that
the UK intelligence community can adapt in response to the rapidly evolving technological
environment and threat landscape.
Note on Sources
The findings presented in this paper are based on a combination of open and
closed-source research. The content is primarily derived from confidential interviews and
focus groups with respondents from across the UK national security community. Although
open-source references are included throughout, it is not always possible to provide a specific
source for research findings and conclusions.
Introduction
This paper presents the findings of an independent RUSI research study into the use of
artificial intelligence (AI) for national security purposes. The aim of the
project is to establish an independent evidence base to inform future policy development
and strategic thinking regarding national security uses of AI.
The project focused primarily on the use of AI within the UK intelligence
community (UKIC). The findings are based on
in-depth consultation with practitioners and policymakers from across UKIC, other government
departments, law enforcement agencies, military organisations, private sector companies,
academic and legal experts, and civil society representatives. This was complemented by a
targeted review of existing academic literature, research reports and government documents
on the topic of AI and national security.

Due to subject-matter sensitivities, certain content has been omitted or sanitised in
consultation with project partners. These revisions in no way influence the overall findings or
conclusions of the research.
This paper is structured as follows. The introduction provides a brief overview of the context
of the project and the issues under consideration. Chapter I examines potential uses of AI
in the national security context, as identified in the research. Chapter II summarises the
main legal and ethical considerations raised in the research. Chapter III provides an overview
of existing AI guidance, regulation and oversight frameworks, before considering what additional
sector-specific guidance and oversight mechanisms may be needed in the national
security context.
The Context
The UK continues to face serious national security threats from a range of sources.¹ There is
an expectation that the national security community will seek out new technologies and
methods that may allow them to counter these threats more effectively. At the same time, the public expects
that civil liberties and fundamental
freedoms are respected. Achieving this balance is a major challenge for those in the national
security community, particularly at a time of such considerable technological change. At the
same time, public discourse is increasingly focused on the governance and regulation of data
analytics, and there appears to be increasing concern that existing structures are not fit for
purpose in terms of the governance and oversight of AI.² The ongoing, exponential increase in digital data necessitates
the use of more sophisticated analytical tools to effectively manage risk and proactively respond
to emerging threats. This is all the more important when
considering hostile uses of AI that already pose a tangible threat to UK national security, such
as tools used to enhance cyber attacks
or automate disinformation campaigns. Against this backdrop, there is a clear driver for UKIC
to implement advanced data science techniques to effectively respond to future threats to the
UK.
While AI offers numerous opportunities for UKIC to improve the efficiency and effectiveness
of existing processes, these new capabilities raise additional privacy and human rights
considerations which would need to be assessed within the existing legal and regulatory
framework. Recent commentary has highlighted potential risks regarding the implementation of
AI and advanced analytics for surveillance purposes, particularly relating to the potential impact
on individual rights.³

Addressing these concerns is a high priority for the national security community.
One senior intelligence official has said that he is particularly interested in AI 'because of our need to be able to make sense of the
data', while stressing that
'[technology] will never replace our need to also have human insight'.
Most AI methods under consideration are rapidly becoming more prevalent throughout the
commercial sector. However, UKIC is subject to additional levels of scrutiny regarding the
way such technologies are deployed. The operational context of national security is also markedly different
than many commercial applications, and many capabilities will not be readily transferable from
other sectors.
Clear and evidence-based policy is needed to ensure that the UK national security community
can take full advantage of the opportunities offered by these new technologies, without
compromising societal and ethical values or undermining public trust.
What is AI?
There is no universally accepted definition of AI. However, a distinction is often made between
'Narrow AI', meaning systems designed to perform specific, well-defined tasks, and 'General AI',
meaning hypothetical systems that could replicate the full breadth of human cognitive abilities.
Most experts agree that General AI, if it is achievable at all, remains
many decades away.
Narrow AI can be understood as 'a set of advanced general-purpose digital technologies that
enable machines to perform complex tasks effectively'. AI is usually defined in terms
of the ability of machines to perform tasks that would otherwise require human intelligence,¹¹ and can be
applied to many different tasks.¹²
Recent progress in Narrow AI has been driven primarily by advances in the sub-field of ML. ML
enables computer systems to learn and improve through experience, and is characterised by the
use of statistical algorithms to find patterns, derive insights or make predictions. An algorithm
can be defined as 'a set of mathematical instructions or rules that, especially if given to a
computer, will help to calculate an answer to a problem'.¹³ ML is a specific category of algorithm
that is able to improve its performance at a certain task after being exposed to new data. There
are four main categories of ML:
In supervised learning, 'the agent observes some example input–output pairs and learns
a function that maps from input to output'. For example, the
training data could include many photographs of different types of fruit, and labels
defining which fruit is depicted in each photo. The trained model is considered to
perform well if it can correctly classify the fruit depicted in
new, unfamiliar photos.
In unsupervised learning, 'the agent learns patterns in the input even though no explicit
feedback is supplied'. For example, the training data could include
thousands of individual photographs of five types of animal but no labels identifying
the animals. The model is considered to perform well if it is able to correctly divide the
photographs into five piles, each containing the photos of one type of animal.
Reinforcement learning is a goal-oriented form of learning, where the agent improves at
a task by learning from a series of rewards and punishments. For example, in
recommender systems, a human listener may be recommended music based on
their previous listening habits. The user provides feedback indicating whether they
like the computer-recommended track. This feedback helps the algorithm to learn
the listener's preferences, so that its recommendations become more
accurate over time.
Semi-supervised learning is a fourth category of ML, involving datasets where some
examples are labelled but many are not. Returning to the
fruit classification example, the model can be pre-trained on the entire training set
of photographs before the labelled examples are used to refine its classifications.
Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (the source for all four definitions above).
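The supervised learning example above can be made concrete with a short sketch. The following toy classifier (not drawn from the paper; the 'features' standing in for fruit photographs — weight in grams and a 0–1 'yellowness' score — are invented for illustration) learns one average feature vector per label from input–output pairs, then classifies a new, unseen example by its nearest learned centroid:

```python
# Illustrative sketch of supervised learning: a toy nearest-centroid
# classifier. The feature values and labels are invented for illustration.

from collections import defaultdict
from math import dist

# Labelled training data: (features, label) pairs -- the 'example
# input-output pairs' of supervised learning.
TRAINING = [
    ((120, 0.9), "banana"), ((130, 0.85), "banana"),
    ((150, 0.1), "apple"),  ((170, 0.15), "apple"),
]

def train(examples):
    """Learn one centroid (mean feature vector) per label."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for (weight, yellowness), label in examples:
        s = sums[label]
        s[0] += weight; s[1] += yellowness; s[2] += 1
    return {label: (s[0] / s[2], s[1] / s[2]) for label, s in sums.items()}

def predict(model, features):
    """Classify an unfamiliar example by its nearest learned centroid."""
    return min(model, key=lambda label: dist(model[label], features))

model = train(TRAINING)
print(predict(model, (125, 0.8)))   # a new, unseen example -> "banana"
```

Real systems use far richer models (such as neural networks) and thousands of features, but the structure is the same: labelled examples in, a learned mapping out, evaluated on data the model has never seen.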
The use of ML has grown considerably in recent years, driven by an exponential growth in
computing power coupled with an increased availability of large datasets. In healthcare,
ML-based image recognition is used for complex tasks, such as predicting the risk of autism in
babies or detecting skin cancer. Local councils are deploying ML algorithms to assist social
workers in making safeguarding decisions. In
policing, ML algorithms are used to forecast demand in control centres, predict re-offending
risk and identify patterns in crime data. In city management,
algorithms are increasingly being used to streamline tasks, such as waste removal, traffic
management and sewerage systems. These trends are likely to continue in the coming years,
as these tools become more sophisticated and widely available.²¹
It is important to note, however, that most AI advancements have been made either in the private
sector or academia.²² The UK government is yet to take full advantage of these opportunities. As
summarised by the Committee on Standards in Public Life, ‘despite generating much interest and
commentary, our evidence shows that the adoption of AI in the UK public sector remains limited.
Most examples the Committee saw of AI in the public sector were still under development or
at a proof-of-concept stage'.²³ In the coming years, taking full advantage of the opportunities
presented by these technologies will be a high priority for the UK government.
 

I. National Security Uses of AI
Recent commentary has highlighted the acute challenges posed to intelligence
agencies by the exponential growth in digital data. In the words of Greg
Allen and Taniel Chan, 'there is more data to analyse and draw useful conclusions from'
than ever before. In his report of the Investigatory Powers
Review, David Anderson described how changing methods of communication, the fragmentation
of service providers, difficulties in attributing communications, ubiquitous encryption and the
sheer volume of digital material have complicated the work of the
intelligence agencies. These challenges call for the development of more sophisticated
analytical capabilities. As one research participant put it, the agencies can use 'machine
learning and AI to improve our operational outcomes. We can tackle these large problems and
potentially deliver intelligence and security solutions to help keep the UK safe, in ways which
would not otherwise be possible'.
There are numerous ways in which UKIC could apply AI to improve the efficiency and effectiveness
of existing processes. Potential use cases identified in this research are discussed in turn below,
and are summarised in Figure 1.
 

 

David Anderson, A Question of Trust: Report of the Investigatory Powers Review, June 2015.
Arcial Intelligence and UK Naonal Security
Figure 1: National Security Uses of AI.

The original figure maps three broad categories of use case and their components: organisational process automation (human resources; finance, accounting and logistics; compliance and oversight); cyber security (user authentication; antivirus; network detection); and augmented intelligence analysis (natural language processing; audiovisual analysis; filtering, flagging and triage; cognitive automation; behavioural analytics).

Source: Authors' research.
Organisational Process Automation
As for all large organisations, the most immediate benefit for UKIC in the use of AI will most
likely be the ability to automate organisational, administrative and data management processes
that are currently labour intensive. This could include assisting
with tasks such as human resources and personnel management, logistics optimisation, finance
and accounting.

Many such applications are already well established in the commercial sector, particularly in financial services.
These can be divided into front office and back office uses. In the front office, a combination
of computer vision and NLP can be used in processes such as handling insurance claim forms
and accompanying information like photographs, carrying out query resolutions more quickly
and efficiently by guiding users through repositories of information, and making chatbots act as
the first point of contact for enquiries on e-commerce websites. Back office functions include
the automation of data capture when scanning images for invoice processing, cross-referencing
data between application forms and supplementary documents when servicing loans, and
flagging discrepancies in
financial data.³¹ Similarly, the effective use of AI could significantly reduce administrative
workloads across the UK government, from improving the efficiency of room booking and diary
management systems, to managing job applications or conducting routine background checks.
AI could also be used to improve the efficiency of compliance and oversight processes.
The handling of data under investigatory powers involves numerous stages of authorisation,
review and reporting.³² AI could conceivably be applied to any one of
these stages.³³ Automating aspects of authorisation
and oversight processes could not only help to ensure compliance with relevant legislative
requirements, but would also free up staff time within oversight bodies to provide scrutiny
and advice regarding more complex technical issues.
Cyber Security
Modern-day cyber security threats require a speed of response far greater than human
operators can achieve. Given the increasing speed and sophistication of
attacks, AI cyber defence systems are increasingly being implemented to proactively detect and
respond to malicious activity. Whereas traditional antivirus software relies
on virus signatures, AI-based antivirus can recognise aspects of software that may be malicious
without the need to rely on a pre-defined list. As summarised in a recent report from Darktrace,
such systems can identify malicious activity without prior knowledge of a specific
cyber-threat. This capability has become necessary in recent years, as advanced cyber-criminals
continually modify their tools to evade signature-based detection.
Similarly, AI-based network detection systems could be trained to learn what constitutes
normal activity on a network, identify anomalies through analysis of log data and respond
in real time.
User authentication is another area of potential value to UKIC. Recent research has focused on
verifying users based on distinctive patterns in their digital activity, such as how they handle
their mouse or compose sentences in a document.
Such active authentication systems could enhance cyber security by ensuring ongoing user
authentication following an initial session login.
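Research prototypes of such active authentication systems typically learn a statistical baseline of a user's behaviour and flag sessions that deviate from it. The following is a minimal illustrative sketch, not any agency's actual system, using keystroke timing as the behavioural signal; the feature choice and threshold are assumptions for demonstration only:

```python
from statistics import mean, stdev

def build_profile(sessions):
    """Build a user profile from enrolment sessions, where each session is a
    list of inter-keystroke intervals in milliseconds."""
    intervals = [t for session in sessions for t in session]
    return {"mean": mean(intervals), "stdev": stdev(intervals)}

def anomaly_score(profile, observed):
    """Mean absolute z-score of observed intervals against the profile:
    higher values indicate behaviour unlike the enrolled user."""
    return mean(abs(t - profile["mean"]) / profile["stdev"] for t in observed)

def is_authentic(profile, observed, threshold=2.0):
    """Accept the session only if typing rhythm stays close to the baseline."""
    return anomaly_score(profile, observed) < threshold

# Enrolment: the legitimate user types with roughly 200 ms gaps
profile = build_profile([[195, 205, 210, 190], [200, 198, 207, 201]])

print(is_authentic(profile, [202, 196, 208, 199]))  # consistent rhythm: True
print(is_authentic(profile, [80, 95, 400, 60]))     # very different rhythm: False
```

In a real deployment the signal would be far richer (mouse dynamics, stylometry, application usage), but the underlying logic of continuous comparison against an enrolled baseline is the same.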
Augmented Intelligence Analysis
AI-assisted intelligence analysis could offer significant benefits in deriving insights from
unstructured and disparate datasets, thereby improving the efficiency of the intelligence
workflow and potentially reducing collateral intrusion by minimising the volume of content that
is subject to human review. Three broad categories of application can be identified:
1. Cognitive automation to improve the efficiency of routine data-processing and organisational tasks.
2. Filtering, flagging and triage of data gathered through bulk collection, as part of an intelligence analysis workflow.
3. Behavioural analytics to derive insights at the individual subject level.
Cognive Automaon


            
operators to interpret large volumes of data, while also potentially reducing intrusion by
minimising the volume of content that is subject to human review.
Audio processing is one example: automated speech-to-text transcription and subsequent language
analysis could offer clear benefits, either applied to transcribed text or directly to audio
data. In addition, speaker identification could make large quantities of voice data searchable
in a more efficient way.
Recent advances in natural language processing have been rapid: state-of-the-art language
models have achieved strong results on many language modelling benchmarks and performed basic
reading comprehension, machine translation, question answering and summarisation. Recent
research has also demonstrated
the potential uses of ML techniques for authorship attribution based on linguistic analysis of
stylometric features.
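As a toy illustration of the stylometric approach, authorship can be attributed by comparing frequencies of topic-independent function words across texts. The word list, sample texts and similarity measure below are illustrative assumptions, not drawn from the cited research:

```python
import math
from collections import Counter

# Function words carry little topical meaning, which is what makes them
# useful stylometric features: they reflect habit rather than subject matter.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "for"]

def style_vector(text):
    """Relative frequency of each function word in the text."""
    words = text.lower().split()
    counts = Counter(words)
    return [counts[w] / len(words) for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def attribute(unknown, candidates):
    """Return the candidate author whose known writing is stylistically
    closest to the unknown text."""
    target = style_vector(unknown)
    return max(candidates, key=lambda a: cosine(target, style_vector(candidates[a])))

candidates = {
    "A": "the cat sat on the mat and the dog barked at the cat",
    "B": "a bird flew in a tree in a storm in winter",
}
unknown = "the fox ran to the den and the hen hid in the barn"
print(attribute(unknown, candidates))  # "A": heavy use of "the" and "and"
```

Real stylometric systems use hundreds of features (character n-grams, syntax, punctuation) and trained classifiers, but the principle of matching an unknown text against per-author style profiles is the same.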
Hoshiladevi Ramnial, Shireen Panchoo and Sameerchand Pudaruth, 'Authorship Attribution', in Intelligent Systems Technologies and Applications.
 Arcial Intelligence and UK Naonal Security
AI could also improve the efficiency of video data processing. Object classification and facial
matching could substantially reduce the amount of time analysts spend manually trawling through
video footage. Another benefit is the ability to classify material in order to shield analysts
from unnecessary exposure to distressing content. Video summarisation is a further area of
interest. An example is the use of ML algorithms to generate a unique summary of a video by
selecting key frames which accurately capture the content and context of the original video.
This can be used to identify a change that has happened over time and create a video
highlighting that change to an analyst. In the military context, research programmes involving
US military organisations aim to notify operators of significant events, such as the planting
of explosive devices, and to search for patterns of life across multiple videos, with the
ultimate goal of predicting future activity.
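The key-frame selection idea described above can be sketched with a simple greedy heuristic: keep a frame only when it differs sufficiently from the last frame kept. Frames are represented here as flat lists of pixel intensities, and the threshold is an arbitrary assumption; production systems use learned representations rather than raw pixel differences:

```python
def frame_diff(a, b):
    """Mean absolute pixel difference between two equally sized frames."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def key_frames(frames, threshold=10.0):
    """Greedy key-frame selection: keep the first frame, then keep any frame
    that differs sufficiently from the most recently kept one."""
    if not frames:
        return []
    kept = [0]
    for i in range(1, len(frames)):
        if frame_diff(frames[i], frames[kept[-1]]) > threshold:
            kept.append(i)
    return kept

# Synthetic 4-pixel 'video': a static scene, a sudden change, then static again
video = [[0, 0, 0, 0], [1, 0, 0, 1], [90, 90, 90, 90], [91, 90, 89, 90]]
print(key_frames(video))  # [0, 2] — two frames summarise the whole sequence
```

The output indices point an analyst at the moments of change, which is exactly the change-over-time use case described above.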
Filtering, Flagging and Triage
It is publicly reported that bulk data gathered by UKIC is processed using a series of automated
volume reduction systems to filter, query and select material for examination. Incorporating
AI into these systems could improve the efficiency of filtering processes, ensuring that human
operators have access only to the information that is most relevant to the analytical task at
hand, while balancing the associated privacy and civil liberty concerns.
The Report of the Bulk Powers Review describes how a subset of bearers is first selected
according to the likely intelligence value of the communications they carry. A degree of
filtering is then applied to the traffic of selected bearers, which is 'designed to select
communications of potential intelligence
National Research Council, Bulk Collection of Signals Intelligence: Technical Options.
David Anderson, Report of the Bulk Powers Review.
value'. The remaining communications are then subjected to the application of queries, both
simple [relating to an individual target] and complex [combining several criteria], to draw
out communications of likely intelligence value. Where results relate to specific targets of
interest, a triage process is applied to determine which items are most useful. 'Analysts use
their experience and judgement to decide which of the results returned' by such queries warrant
examination. As one oversight review concluded,
'we are confident that the majority of data gathered by way of bulk collection is not reviewed
by analysts, although it will be automatically screened against specific criteria' to enable
the retrieval of material most likely to be of value.
If deployed effectively, AI could identify connections and correlations within and between
multiple bulk datasets more efficiently than human operators, improving the accuracy of this
screening and filtering process. However, a crucial distinction must be drawn between using AI
to identify content of interest to flag to a human operator, and applying behavioural analytics
to derive insights or predictions about individuals. When deriving
intelligence from bulk data, AI is likely to be most useful when deployed as part of an
interactive, human-led analysis process.
It could be argued that AI has the potential to reduce collateral intrusion when searching or
filtering data gathered through bulk collection, by minimising the volume of content that is
subject to human review. However, it has also been argued that machine analysis is not necessarily
inherently less intrusive than human review. This issue is discussed further in Chapter II.
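The filter–query–triage pipeline described above can be caricatured in a few lines. The selectors, keyword scoring and data below are entirely hypothetical and are intended only to show the volume-reduction logic, not any real system:

```python
def triage(items, selectors, keywords, top_k=2):
    """Three-stage sketch of a volume-reduction pipeline:
    1. filter: keep only items matching a selector (e.g. a target identifier);
    2. score: rank remaining items by the number of relevant keywords present;
    3. triage: surface only the top-k items for human review."""
    selected = [item for item in items if item["selector"] in selectors]
    for item in selected:
        item["score"] = sum(1 for kw in keywords if kw in item["text"])
    ranked = sorted(selected, key=lambda item: item["score"], reverse=True)
    return ranked[:top_k]

# Hypothetical collected items (invented for illustration)
items = [
    {"selector": "target-a", "text": "meeting at the port tomorrow"},
    {"selector": "other", "text": "family dinner plans"},
    {"selector": "target-a", "text": "routine weather chat"},
    {"selector": "target-b", "text": "transfer funds via the port courier"},
]
flagged = triage(items, {"target-a", "target-b"}, ["port", "courier", "transfer"])
```

At each stage the volume shrinks — four items collected, three selected, two reviewed — which is the sense in which automated screening could reduce the material subject to human examination, whether or not one accepts that machine processing is itself less intrusive.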
Behavioural Analycs
           
individual-level data to derive insights, generate forecasts or make predictions about future
human behaviour. There are various ways in which intelligence agencies could hypothetically
implement AI to make predictions about future behaviour. These include insider threat detection,
predicting threats to individuals in public life, identifying potential intelligence sources who may
be susceptible to persuasion and predicting potential terrorist activity before it occurs.
The use of behavioural analytics for counterterrorism purposes has attracted significant
public and political attention.
Following the 2017 terrorist attacks, the independent assessment of MI5 and police internal
reviews highlighted the need to improve the identification of risk not only in the management
of closed SOIs [subjects of interest] but in relation also to active SOIs and previously
unknown individuals, which
will be achieved by 'increasingly sophisticated use of artificial intelligence and behavioural
analytics'. At the same time, commentators have questioned the accuracy and ethics of
predictive approaches in this context, arguing that such techniques may
have their place when applied to persons who are already under suspicion.
Risk assessment methods are often categorised as either 'clinical' (based on professional
judgement) or 'actuarial' (based on statistical calculation), and debate about which approach is
more accurate, justified or informative is intense and ongoing. A number of empirical studies
have found that structured, statistical methods yield more
accurate predictions than unstructured clinical judgement, across many disciplines and in a
wide range of decision-making contexts. However, experts argue that aggregated 'predictive'
factors have limited meaning at the individual
level, and the evidence shows that violence risk assessment approaches that incorporate a
degree of professional judgement yield more successful results than relying purely on statistical
methods. Recent research into prediction of life outcomes using a mass collaboration approach
 David Anderson, Attacks in London and Manchester, March-June 2017: Independent Assessment of
MI5 and Police Internal Reviews
 

 Ibid.
 Ibid.
 
Legal and Criminological Psychology

 Clinical Versus Statistical Prediction: A Theoretical Analysis and a Review of the
Evidence
Science
Psychological
Assessment

Counseling Psychologist
 

Babuta, Oswald and Janjeva 

for prediction, the best predictions were not very accurate and were only slightly better than

Moreover, given the relative infrequency of terrorist violence, there is a significantly smaller
evidence base from which to derive statistical predictors. Individuals who commit acts of
terrorism differ widely in their backgrounds and
motivations, and in the precipitatory factors that ultimately lead them to commit an act
of violence. As
summarised by John Monahan, 'existing research has largely failed to find valid nontrivial
[statistically significant] risk factors for terrorism', and without the identification of
valid risk factors, individualised risk assessment is of limited value. Another concern with
incorporating statistical methods into terrorism risk assessment processes is the potential
loss of relevant contextual information which should be considered when making judgements
about individual subjects.
For these reasons, behavioural analytics tools are likely to be most valuable when used as a
form of augmented intelligence to support human analysis. This is achieved by collating
relevant information from multiple sources and flagging significant data items for human
review. A degree of human judgement is retained at each stage of the analytical process:

AI is overrated. The role of machines is not to replace but facilitate human reasoning. Augmented
intelligence enables analysts to make data-driven decisions in a more transparent and accountable
way. At an intelligence agency, this might mean turning data stored in documents, reports and
tables into a structured, searchable form.

International Journal of Forensic Mental Health.
Criminal Justice and Behavior.
Matthew J Salganik et al., 'Measuring the Predictability of Life Outcomes with a Scientific Mass
Collaboration', Proceedings of the National Academy of Sciences.
Paul Gill, Lone-Actor Terrorists: A Behavioural Analysis.
John Monahan, 'The Individual Risk Assessment of Terrorism', Psychology, Public Policy, and Law.
A similar view has been expressed in the policing context:
how technology can work to improve human intelligence rather than to replace it. That feels much
closer to how we in policing are using technology. I also believe a licence to operate technology in those
human terms feels much closer to what the public would expect and accept.
In sum, the evidence reviewed for this paper suggests that it is neither feasible nor desirable
to use AI to make fully automated predictions regarding future human behaviour. Instead, the
greatest value will lie in using augmented intelligence
systems to collate relevant information from multiple sources and flag significant data items for
human review, improving the efficiency
of analysis tasks focusing on individual subjects. Care will be needed, however, to ensure that
such systems do not place undue weight on factors that merely appear
statistically significant in historic data.
Adversarial AI
Malicious actors will undoubtedly seek to use AI to attack the UK. It is likely that the most
capable hostile state actors, which are not bound by an equivalent legal framework, are
developing or have already developed offensive AI-enabled capabilities. In time, other threat
actors, including cybercriminal groups, will also be able to take advantage of these same
innovations. The national security requirement for AI is therefore all the more pressing when
considering the need to combat potential future uses of AI by adversaries. This paper divides
the potential malicious uses of AI into three categories: digital security, political security
and physical security.
Digital Security
The threat from AI-enabled malware is likely to grow and evolve in the coming years. Specifically,
polymorphic malware that employs complex obfuscating algorithms and frequently changes its
identifiable characteristics could reach a level of adaptability that renders it virtually undetectable
to both signature- and behaviour-based antivirus software. AI-based malware could proactively
prioritise the most vulnerable targets on a network, iteratively adapt to the target environment
and self-propagate via a series of autonomous decisions, potentially eliminating the need for
direct operator control. A further concern is the use of domain-generation
algorithms to establish constantly shifting rendezvous
points between infected devices and C2 servers, which would make it considerably more difficult
to successfully shut down botnets.
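On the defensive side, one widely discussed heuristic against domain-generation algorithms is that machine-generated domain labels tend to exhibit higher character-level entropy than human-chosen names. The sketch below illustrates the idea; the threshold is an assumption, and real classifiers use far richer features than entropy alone:

```python
import math
from collections import Counter

def entropy(s):
    """Shannon entropy of the character distribution, in bits per character."""
    counts = Counter(s)
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def looks_generated(domain, threshold=3.5):
    """Crude heuristic: algorithmically generated labels tend towards a more
    uniform character distribution (higher entropy) than dictionary words."""
    label = domain.split(".")[0]
    return entropy(label) > threshold

print(looks_generated("google.com"))          # False: repeated, word-like characters
print(looks_generated("xq7zk2vb9mwt4r.com"))  # True: near-uniform character spread
```

A heuristic this simple is easy for an adversary to evade (for example by concatenating dictionary words), which is precisely why the text anticipates AI-enhanced generation outpacing static defences.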
AI could also enhance social engineering attacks such as spear phishing. By harvesting publicly
available online information, attackers can automatically generate malicious websites, emails
and links tailored to individual targets, while AI-driven chatbots could convincingly
impersonate trusted contacts
during longer and more creative online dialogues.
The increased adoption of AI across the UK economy will also create new vulnerabilities that
hostile actors could exploit. 'Data poisoning' attacks, in which an adversary tampers with a
system's training data,
could cause AI systems to behave in erratic and unpredictable ways, or allow attackers to
evade detection, for example by causing an AI-based security tool to
classify a particular malware as benign software.
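The malware-as-benign scenario can be illustrated with a toy data-poisoning example: injecting mislabelled points into the training data of a simple nearest-centroid classifier shifts its decision boundary. All features, data and labels below are invented for demonstration:

```python
def centroid(points):
    """Component-wise mean of a list of equal-length feature vectors."""
    dims = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(dims)]

def classify(sample, training):
    """Nearest-centroid classifier: assign the label whose class centroid
    is closest (squared Euclidean distance) to the sample."""
    dists = {}
    for label, points in training.items():
        c = centroid(points)
        dists[label] = sum((a - b) ** 2 for a, b in zip(sample, c))
    return min(dists, key=dists.get)

# Two invented features, e.g. (suspicious API calls, binary entropy)
clean_training = {
    "benign":  [[1, 2], [2, 1], [1, 1]],
    "malware": [[8, 9], [9, 8], [8, 8]],
}
sample = [7, 7]  # clearly malware-like
print(classify(sample, clean_training))  # "malware"

# Poisoning: the attacker injects malware-like points labelled 'benign',
# dragging the benign centroid towards the malicious region
poisoned = {
    "benign":  clean_training["benign"] + [[9, 9], [10, 10], [9, 10], [10, 9]],
    "malware": clean_training["malware"],
}
print(classify(sample, poisoned))  # "benign" — the same sample now evades detection
```

The same sample is classified correctly on clean data and misclassified after poisoning, which is the essence of the attack described above, independent of the specific model used.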
A command-and-control (C2) server is a computer controlled by an
attacker or cybercriminal which is used to send commands to systems compromised by malware.
Proceedings of the 25th USENIX Security Symposium.
Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security.
Polical Security
The use of AI to generate convincing synthetic media, commonly known as 'deepfakes', has
emerged as a significant concern. Deepfakes involve the use of ML algorithms to combine or
superimpose existing images and videos onto source content. Recent research has demonstrated
the ability to generate realistic footage
of a person speaking based only on a single photo of that person. The disruptive potential
of this technology was illustrated ahead of the 2019 UK general election, when deepfake videos
were released
showing candidates Boris Johnson and Jeremy Corbyn endorsing each other for prime minister.
This was intended to warn the public of how AI technology can be used to fuel disinformation,
erode trust and compromise democracy. Ahead of the 2020 US presidential election, experts
similarly warned of the potential for deepfakes to be used to spread disinformation and
manipulate voters.
At present, modified data can be readily identified by media forensic experts. Nevertheless, in
the time-sensitive context of an election, the identification of a fake video might simply come
too late to prevent real-world harm. There is also a
legitimate concern that individuals in positions of power could take reactive decisions based on
false information, with potentially catastrophic consequences.
Physical Security
At present, there are few real use cases of how AI may be weaponised to directly threaten
physical security. One area of concern could be the repurposing of commercial AI systems by
malicious actors; autonomous vehicles, for example, could be repurposed to deliver explosives
or cause serious crashes. These risks may increase as the use of AI becomes increasingly
widespread across society.
Moreover, it is likely that
AI will transform what would previously be classed as high-skill attack capabilities into tasks
which low-skill individuals can perform with little effort. This may take the form of 'swarming
attacks', in which distributed networks of autonomous systems coordinate at machine
speed, provide ubiquitous surveillance to monitor large areas and groups and execute rapid,
coordinated attacks. At the same time, the proliferation of increasingly automated
and interconnected critical national infrastructure will create numerous new vulnerabilities
which could be exploited by threat actors to cause damage or disruption. While these potential
physical threats are yet to materialise, this situation could change rapidly, requiring government
agencies to formulate proactive approaches to prevent and disrupt AI-enabled security threats
before they develop.
II. Legal and Ethical
Considerations
This chapter first summarises the legal framework regulating UKIC and its use of AI,
before considering potential legal and ethical issues that could arise from the use of AI for
national security purposes.

The statutory functions of the UK intelligence agencies are set out in the Security Service Act
1989 and the Intelligence Services Act 1994. The Investigatory Powers Act 2016 (IPA) governs
the exercise of the agencies' investigatory powers, including the interception of
communications, equipment interference, obtaining of communications data and the acquisition
of bulk personal datasets. The IPA regime
subjects the agencies to additional levels of scrutiny regarding their acquisition of data and use
of investigatory techniques, scrutiny and oversight to which the private sector is not subject.
The IPA's general duties in relation to privacy include a
requirement for the public authority to consider 'whether what is sought to be achieved by the
warrant, authorisation or notice could reasonably be achieved by other less intrusive means'.
Directed and intrusive surveillance and the use of covert human intelligence sources continue
to be governed by the Regulation of Investigatory Powers Act 2000.
The agencies' activities are further governed by
internal guidance and policies. Part 4 of the Data Protection Act 2018 sets out a separate
data protection regime for the intelligence services. There are a number of national security
The powers given to UKIC are subject to a specific oversight regime set out in intelligence and
investigatory powers legislation, in addition to the applicable data
protection frameworks.
22 Arcial Intelligence and UK Naonal Security
exemption certificates in place pursuant to the national security exemption in the Act,
although the agencies continue to be required to ensure that the use of personal data is both
lawful and secure.
The Human Rights Act 1998 gives effect in UK law to the European Convention on Human Rights,
which protects
fundamental human rights and political freedoms, subject to certain restrictions. Some of these
rights, such as the right to respect for private and family life, are qualified, meaning
that the state has the power to interfere with these rights provided that such interference is
in accordance with the law, necessary and proportionate. Conversely, the state has
positive obligations in respect of the right to life, including a duty to take
measures within the scope of their powers which, judged reasonably, might have been expected
to avoid a real and immediate risk to life. One could therefore also argue that the agencies
have a positive obligation to adopt new technological methods that would improve their ability
to protect the public from threats to their safety.






The law may not always adapt easily and quickly, but the mechanism to do so is there.

             
             
New capabilities will remain subject to the required ongoing human rights proportionality assessment. The UK Supreme Court has developed a four-stage proportionality test for assessing, pursuant to the Human Rights Act 1998, whether an interference with a qualified right can be justified.
 

 

 
This test was set out in the Bank Mellat judgment:
1. Is the objective of the measure pursued sufficiently important to justify the limitation of a fundamental right?
2. Is it rationally connected to the objective?
3. Could a less intrusive measure have been used without unacceptably compromising the objective?
4. In regard to these matters and to the severity of the consequences, has a fair balance been struck between the rights of the individual and the interests of the community?
This human rights proportionality test provides criteria that the agencies can use to assess the legitimacy of new uses of technology, including AI. However, because existing authorisation processes focus on the collection of data rather than its subsequent analysis, the agencies will need to continue to re-assess the necessity and proportionality of any potential intrusion if AI is subsequently applied to data previously obtained. This reflects a point made in a previous report: the privacy impact of analysis is not fixed, but will vary according to the people whose data it is and what other data is available and may be combined with the original data.
In oral evidence to the House of Commons Public Bill Committee on the Investigatory Powers Bill, it was noted that there is a stage after data is collected but before it is reviewed by an analyst, where additional safeguards may be needed to account for the analytical processes applied between the point of collection and human analysis.
Much concern over the acquisition of communications data focuses on the insights that can be derived from it. The type of analysis applied to a collected dataset has direct implications in this regard, implying the need for an additional assessment of the extent of any intrusion, focused specifically on the analytical processes which may be applied to collected data. This is particularly important considering the analysis of bulk datasets will include the processing of data about many individuals who are not of intelligence interest.
Any future policy or guidance for national security uses of AI must pay due regard to issues such as necessity and proportionality, transparency and accountability, and collateral intrusion risk. Such guidance should be flexible and principles-based, establishing standardised processes to ensure that AI projects follow recommended routes for empirical evaluation of algorithms within their operational context, and assess each project against legal requirements and ethical standards.
Machine Intrusion
The question of whether the use of AI represents increased privacy intrusion or a method by
which intrusion could be reduced remains a matter of debate. The use of AI arguably has the
potential to reduce intrusion, both in terms of minimising the volume of personal data that
needs to be reviewed by a human operator, and by resulting in more precise and efficient
targeting, thus minimising the risk of collateral intrusion. However, it has also been argued that
the degree of intrusion is equivalent regardless of whether data is processed by an algorithm or
a human operator. According to this view, the source of intrusion lies in the collection, storage and processing of personal data, which may constitute an infringement of human rights, regardless of whether it is reviewed by a human or a machine. The Anderson report highlighted differing views on this question.
In R (National Council for Civil Liberties) v Secretary of State for the Home Department, the court highlighted a fundamental difference of opinion between the claimant and the government: the claimant argued that there is an interference with the right to respect for private life at all material stages, including at the stage when data is obtained and retained. However, the government maintained that any meaningful interference occurs only when the data is subsequently selected for examination.
Some have argued that automated algorithmic analysis may in fact be more intrusive than parametric keyword searches. If the automatic collection and storage of data itself engages the right to private life, as held by the European Court of Human Rights, then the algorithmic analysis of data that goes beyond a simple keyword search may constitute an additional form of intrusion. Use of AI could result in additional material being processed which may not have previously been possible for technical or capacity-related reasons. This would need to be taken into account when assessing the proportionality of any potential intrusion, balanced against the increase in effectiveness of analysis that may result.
             

Equally, it should not be assumed that the use of automated data processing methods is inherently less intrusive than manual analysis. It is important to note, however, that standardised processes already exist to assess the necessity and proportionality of any potential intrusion when accessing previously collected data, with analysts required to record a justification and priority for each search.
 Arcial Intelligence and UK Naonal Security


Considering the potential privacy implications as new analysis methods are applied to previously collected datasets, such internal processes will need to continue to assess the necessity and proportionality of any potential intrusion if AI is subsequently applied to that data.


Compound intrusion may also arise when automated systems interact with each other, resulting in an interconnected network of systems that produces significantly greater levels of intrusion than each system in isolation. As discussed in the Anderson report, intrusions into privacy have been compared to environmental damage, in that their effects are cumulative. This suggests the need for internal processes to monitor the overall cumulative effects of automated data processing systems and any resulting compound intrusion risk, as well as the extent to which this is judged to be both necessary and proportionate.

Collecon
Large datasets may be needed to train ML algorithms, and much of the information contained
therein may not be of national security concern. Training data could come from a number of sources, including operational data collected under existing powers. For example, such data could be used to train a system to identify potential targets or relationships between entities within bulk datasets.
The privacy and human rights implications will vary considerably depending on the source of
training data used and how it is acquired.
           
For example, the Bulk Powers Review found that bulk powers have contributed to disrupting threats in Great Britain, Northern Ireland and further afield. Where alternative methods exist, they are often less effective, more dangerous, more resource-intensive, more intrusive or slower. In Big Brother Watch v UK, the court concluded that ‘it is clear that bulk interception is a valuable means to achieve the legitimate aims pursued’. Similarly,
a recent court ruling concluded that ‘in some areas, particularly pattern analysis and anomaly detection, no practicable alternative to the use of BPDs exists’. Where an agency does not have the ‘seed’ of intelligence usually needed to begin an investigation, analysis of bulk personal datasets may be the only practicable means of identifying subjects of interest. This conclusion was based on operational examples provided to the court, reflecting the importance of such evidence to any future determination of necessity.
The ongoing challenges by Privacy International and Liberty to the IPA bulk powers should also be noted, questioning whether the activities of intelligence agencies relating to bulk communications data are lawful. Challenges to the use of bulk data are likely to continue, with implications that would need to be considered carefully for the potential use of such data within AI systems.
Data Retenon and ‘Model Leakage’
There is an ongoing academic debate over whether a trained ML model may itself constitute personal data. The outcome of this debate will have implications for the retention, security classification and handling requirements of trained ML models.
Recent academic research has demonstrated that ML methods can be vulnerable to a range of cyber security attacks that may lead to breaches of confidentiality. Of particular concern in this regard are ‘model inversion’ and ‘membership inference’ attacks. Model inversion in effect turns a machine-learned model from a one-way one to a two-way one, permitting information about the training data to be recovered. Membership inference does not recover the training data itself, but instead recovers information about whether or not a particular individual’s data was included in the training dataset. One academic analysis concludes that where models are vulnerable to such attacks, a trained model may be not only a product potentially protected by intellectual property rights, but also a set of personal data in its own right.
It is important to note, however, that this is only one academic interpretation of the legal position. Others have argued that, as opposed to databases, models subject to inversion and membership inference can only ever yield probabilistic inferences about their training data, and conclude on this basis that there is no personal data contained within a model. The extent to which these concerns are relevant to UKIC is also unclear, as the models in question would be deployed in a secure environment and are therefore less likely to be vulnerable to confidentiality attacks from adversarial actors.
Where possible, differential privacy methods may protect models from potential confidentiality attacks. Privacy-preserving techniques could also be used to store data in such a way that it is possible to conduct analysis without inferring
properties about individuals, while homomorphic encryption could make it possible to perform
operations on a dataset without needing to decrypt the data. It is likely that the protections
required for a trained model will depend largely on the type of ML used, the extent to which
these methods are vulnerable to model inversion or membership inference attacks, and the
context and environment in which the models are used.
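As a minimal sketch of the differential privacy idea mentioned above (the function name, parameters and synthetic data are illustrative assumptions, not any agency’s implementation), a count query can be released with calibrated Laplace noise so that the published figure does not meaningfully reveal whether any single individual’s record was included:

```python
import math
import random

def dp_count(records, predicate, epsilon=0.5):
    # Differentially private count: the true count plus Laplace noise.
    # A counting query has sensitivity 1, so the noise scale is 1/epsilon.
    true_count = sum(1 for r in records if predicate(r))
    u = random.random() - 0.5  # uniform on (-0.5, 0.5)
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

# Example: release an approximate count over a synthetic population.
population = [{"flagged": i % 7 == 0} for i in range(1000)]
released = dp_count(population, lambda r: r["flagged"])
```

A smaller epsilon gives stronger privacy at the cost of a noisier released figure; choosing that trade-off is itself a proportionality judgement.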

Does it Work?
A potential risk to individuals is the reliability of AI systems used to process personal data. In
a national security context, the consequences of errors can be very high, particularly if an AI
system is integrated into a decision-making process which may result in direct action being taken against individuals. The agencies will therefore need to be confident that the capability they are seeking to deploy will deliver the desired outcomes while balancing the potential benefits against the level of intrusion arising from data collection and analysis.

This will require establishing context-specific evaluation processes that assess the real-world effectiveness of a
tool when deployed in a live operational context. As well as evaluating reliability and statistical
accuracy, this process should also include developing standardised terminology for how error
rates and other relevant technical information should be communicated to human operators.
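By way of illustration only (the function and its wording are hypothetical, not a proposed standard), such standardised terminology might translate raw error rates into operator-facing language along these lines:

```python
def operator_summary(tp: int, fp: int, fn: int, tn: int) -> str:
    # Convert raw confusion-matrix counts into plain-language figures
    # an operator can weigh alongside other sources of information.
    precision = tp / (tp + fp)     # flagged items that were genuine
    recall = tp / (tp + fn)        # genuine items that were flagged
    false_alarm = fp / (fp + tn)   # innocuous items wrongly flagged
    return (
        f"{precision:.0%} of flagged items were genuine leads; "
        f"{recall:.0%} of genuine leads were flagged; "
        f"{false_alarm:.1%} of innocuous items were wrongly flagged."
    )

summary = operator_summary(tp=80, fp=20, fn=40, tn=860)
```

The value of such a summary lies less in the arithmetic than in fixing one agreed vocabulary, so that different tools report uncertainty to analysts in comparable terms.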
Behavioural Proling, Bias and Discriminaon
Concerns have been raised regarding the ability of ML algorithms to build comprehensive behavioural profiles of individuals in a way that traditional analytical methods do not. Such profiling could be considered inherently more intrusive than manual analysis of collected data, and would raise further human rights concerns if it were perceived to be unfairly biased or discriminatory. Commentators have highlighted the risk of unfair bias in algorithmic decision-making, and the ‘lack of transparency, public knowledge, consent, and oversight in how data systems are used’. Predictive policing tools have attracted criticism in this regard, with claims that they over-predict individuals from certain racial groups, or particular geographic areas. In an intelligence context, there is also a risk that biases in historic data may result in important case-specific information being overlooked, and that the reliance on historic data may only reveal insights related to threats which appear similar to data items that have been encountered previously.
However, while much commentary has focused on the ability of AI systems to replicate or amplify biases inherent in collected data, it is often argued that these systems are likely to be no more biased than the human decision-making they support. Human judgement is subject to many cognitive
biases, and although some of these are more trivial than others, research has consistently
shown that human decision-makers do not have the insight into their own decisions that is
often assumed. More importantly, the use of AI could potentially reveal underlying biases in
datasets which would otherwise go unnoticed. As Helen Margetts summarised in evidence to the Committee on Standards in Public Life, one of the ‘good things about machine learning technologies is that they have exposed some bias which’ was already present in existing decision-making.
Nevertheless, law and regulation have developed over time to govern such human frailties and to safeguard against bias in human decision-making, but the same safeguards do not yet exist for algorithmic systems, and measures are needed to ensure fairness in algorithm-assisted decision-making. Throughout all stages of an AI project, teams should assess whether data and outputs display any evidence of unfair discrimination. Processes are needed for ongoing tracking and mitigation of discrimination risk. Commentators have also highlighted the importance of diversity in AI project teams, as ‘a workforce composed of a single demographic is less likely’ to identify problems that disproportionately affect other groups. Workforce diversity is not only important for identifying the risk of bias within datasets, but also for identifying operational impacts that may be more detrimental for certain demographic groups.
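A simple form of the ongoing tracking described above (a sketch under assumed inputs, not a complete fairness audit) is to compare the rate of positive model outputs across demographic groups and flag large gaps for human review:

```python
def outcome_rates_by_group(outcomes):
    # `outcomes` maps a group label to a list of 0/1 model outputs.
    # Returns each group's positive-outcome rate and the largest gap
    # between any two groups: a crude first signal for bias review.
    rates = {group: sum(vals) / len(vals) for group, vals in outcomes.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

rates, gap = outcome_rates_by_group({
    "group_a": [1, 0, 1, 0, 1, 0, 1, 0],  # half flagged
    "group_b": [1, 0, 0, 0, 0, 0, 0, 0],  # one in eight flagged
})
```

A large gap is not itself proof of unfair discrimination, since base rates may genuinely differ between groups, but it identifies where closer scrutiny of the data and the model is warranted.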
Transparency and Accountability
The use of complex ML methods gives rise to concerns regarding a lack of interpretability, which may lead to a loss of accountability of the overall decision-making process. Deep learning methods are generally inscrutable to human users, meaning it is not possible to assess the factors that were taken into account during computation. In some cases, the use of black-box systems may be deemed acceptable, depending on the decision-making process the system informs. In other cases, particularly when AI systems are deriving insights at the individual subject level, it may be unacceptable for human users to have no knowledge of the factors that were considered during computation. There is also a related risk that operators may become over-reliant on AI systems, accepting outputs without applying their own judgement, rendering the resultant decision a de facto automated one.

There is no single solution to the challenges of transparency and accountability. The extent to which it is necessary to explain the factors which were
considered when arriving at a certain output will depend largely on the context in which the
algorithm is applied and the overall decision-making process that it informs. In order to ensure
that human operators retain ultimate accountability for the overall decision-making process
informed by analysis, it will be essential to design systems in such a way that non-technically
skilled users can interpret key technical information, such as the margins of error and uncertainty
associated with a calculation. Intelligence professionals are trained to make decisions in
conditions of uncertainty. The output of an AI system should be treated as another source of
information for the user to consider in conjunction with their own professional judgement.
Context-sensitive internal oversight processes are needed to ensure AI tools are used to support, rather than replace, human judgement, recognising that different applications will give rise to different transparency and accountability challenges.
It will also be important to maintain senior organisational accountability for the development
and deployment of AI systems, ensuring those with management, monitoring and approval
responsibilities fully understand the limitations and risks associated with different methods.
Achieving this will require developers and technical experts to be able to translate complex technical concepts into language that non-specialist users can understand, so that senior decision-makers can assume overall accountability for the tool and how it is deployed operationally.

III. Regulation, Guidance and Oversight
This chapter reviews existing guidance and professional standards relating to the development and deployment of AI, and considers additional sector-specific guidance that may be needed in the context of national security. This is followed by a review of the roles and responsibilities regarding monitoring and oversight.

Although discussions on the ethical use of AI are now well-established, these are largely yet
to translate into operationally relevant guidance that stakeholders can implement in practice.
Without establishing clear boundaries regarding permissible and unacceptable uses of AI, the
fear of falling on the wrong side of the ethical divide may impede the potential of better results
from newer, often experimental methods.

The most relevant UK guidance has been provided by the Department for Digital, Culture, Media and Sport and the Alan Turing Institute. Cross-government publications such as the HM Treasury Aqua Book, which provides guidance on producing quality analysis for government, are also relevant. These focus on maximising the potential of data analytics projects in a responsible, proportionate way which is mindful of the potential limitations at each stage of a project lifecycle.
 


 


 

2020.
 HM Treasury, The Aqua Book: Guidance on Producing Quality Analysis for Government

 Arcial Intelligence and UK Naonal Security
 and the UN focus on aspects of

 Microsoft and IBM have
been proactive in setting out their recommendations for how companies should approach AI
projects in a responsible and ethical way. These recommendations focus on issues of unfair bias,
  
 and the AI Now Institute has
¹²¹

model, and together with additional explanatory documentation, recommends its use to chief
constables.¹²² Algocare aims to translate key public law and human rights principles into practical
considerations and guidance that can be addressed by public sector bodies when implementing
AI, and could also be a useful starting point for national security-specific AI guidance.
 

 

 




 

 CHI ’19: Proceedings of the 2019
CHI Conference on Human Factors in Computing Systems

 


 


 


 
Information and Communications Technology
Law 
Babuta, Oswald and Janjeva 

There are a number of stakeholders involved in the regulation and oversight
of AI, including¹²³ the Information Commissioner's
Office, the Office for AI, parliamentary and independent committees, bodies with
sector expertise or policymaking functions, and campaigning organisations. The roles and
responsibilities of these stakeholders, as well as their regulatory remit, will need to be more
clearly defined to ensure that work is not duplicated and they are able to provide meaningful
oversight of government AI projects.

UKIC operates within a highly specific regulatory framework. The agencies may wish to
implement AI systems in very different ways and for different purposes, and will therefore need
to consider a range of factors which may not be relevant for other sectors. This, in turn, demands
standardised processes to assess the risks and benefits of national security AI
deployments on an ongoing basis. An agile approach within the existing oversight regime to
anticipate and understand the opportunities and risks presented by new AI capabilities appears
essential to avoid creating excessive layers of oversight. Without finding this balance, there is
a risk that oversight will fail to keep pace with a
rapidly evolving technological environment and threat landscape.
Moreover, discussions regarding the potential risks of AI are often focused on extreme examples
of theoretical future uses, which are typically detached from the reality of how the technology
is currently being used. As a result, valid concerns regarding the ethical implications of AI
may be overshadowed by speculation over unrealistic worst-case-scenario outcomes. It may
therefore be difficult for organisations to develop clearer, operationally relevant guidance for
the legitimate use of AI if discussions do not consider the likely and realistic applications of AI
as an incremental development of existing capabilities and processes.
 

 
 

 


 


 


 Arcial Intelligence and UK Naonal Security
In developing a clearer policy framework for national security uses of AI, there is an opportunity
for UKIC to take a more active role in government AI policymaking more broadly. The current
approach to AI development across the UK government has been characterised as disjointed
and uncoordinated, for instance by the Committee on Standards in Public Life, which found
that ‘[p]ublic sector organisations are not sufficiently transparent about their use of AI and it
is too difficult to find out where machine learning is currently being used in government’.
Developing a more coherent cross-government approach will require drawing on diverse,
multi-disciplinary expertise from across the public sector, and there would be considerable
value in leveraging the deep technological expertise within the agencies for the benefit of wider
government policy development.
But policy and guidance can only go so far. The legitimate use of AI will also require complex
and context-specific judgements to be made by individuals on a case-by-case basis. In a context
where the regulatory framework is not yet fully established, this gives rise to the risk of increasing
responsibility being placed on individual decision-makers operating in a complex and uncertain
context. This raises further questions about the distribution of responsibility if mistakes were
made in the operationalisation of AI. In light of these issues, it is important to foster a culture
where users and decision-makers feel empowered to make informed ethical judgements,
supported by a collaborative environment in which open communication is actively encouraged.
Monitoring and Oversight

It will be important to reassure the public of the robustness and resourcing of oversight. Recent events have highlighted
the demands that compliance failures can place on the oversight system, with significant
resources needing to be devoted
both to the programme of work to fix the compliance problems identified and to service this
additional oversight.¹³¹
These recent public disputes illustrate the pressing need to ensure the regulatory apparatus is
appropriately equipped to provide robust and comprehensive oversight of complex technical
issues. IPCO has a central role to play as the appropriate regulatory body responsible for
overseeing the use of investigatory powers. Beyond its existing functions regarding the
authorisation and inspection of warrants, it will be important to ensure that specific technical
 
issues regarding the development and deployment of AI can be reviewed and discussed
on an ongoing basis. The legal and ethical issues discussed above can be highly subjective,
and mechanisms are needed to ensure that the national security community considers the
perspectives of a diverse range of stakeholders when making internal policy decisions. As
one recent study of intelligence accountability argues, in the
coming age of AI, there will need to be a strong system of vernacular accountability in place,
with contributions from individuals from a diverse range of backgrounds, questioning everyday
practice.¹³²
Beyond its statutory oversight and inspection roles, IPCO could also play an important role in
convening external experts to discuss these issues in a confidential environment. Its most recent
annual report details how it has been involved in various external engagement activities with
academia, civil society and industry.¹³³
In addition to IPCO, there are a number of other stakeholders to consider in the context of
monitoring and oversight, including the Intelligence and Security Committee of Parliament and
the Independent Reviewer of Terrorism Legislation.
This research has highlighted the importance of drawing on diverse multidisciplinary expertise
in understanding the opportunities and challenges posed by AI in the national security
context. This will need to be reflected both in the resourcing of oversight and the approach
to external stakeholder engagement. In addition, it is crucial to ensure that those responsible
for monitoring and oversight have access to sufficient technical expertise and the information
needed to make informed and context-specific judgements regarding acceptable uses of new
technology, including AI.
¹³² Secrets and Spies: UK Intelligence Accountability after Iraq and Snowden.
¹³³ IPCO, Annual Report 2018.
Conclusions
AI has the potential to enhance many aspects of intelligence work. Taking full
advantage of these opportunities requires establishing standardised processes for
developing, testing and evaluating new AI tools in their operational context. The
agencies may seek to deploy AI in numerous ways. These vary considerably in terms of their data
requirements, potential impact on decision-making and ethical implications. Many uses will be
uncontentious, if they simply reduce the time and effort required to work through large volumes
of data which would have previously been processed using less efficient manual methods. Other
uses may raise complex privacy and human rights concerns, requiring processes for regular
review and reassessment of the necessity and proportionality of any potential intrusion, the
choice of training data used to build a model and the decision-making process into which an
algorithm may be embedded. At the outset of any new AI project, internal processes are needed
to assess potential privacy and human rights implications and the level of oversight that will
therefore be needed.
UKIC operates within a tightly restricted legal framework. The IPA regime subjects the agencies to
additional levels of scrutiny regarding their acquisition of data and use of investigatory
techniques. The use of AI for intelligence purposes nevertheless
introduces a number of additional considerations, suggesting that enhanced policy and guidance
are needed to ensure that AI analysis capabilities are deployed in an ethical and responsible
way and with due regard to issues such as necessity and proportionality, transparency and
accountability, and collateral intrusion risk.
             
Experts continue to disagree over fundamental questions such as the relative level of intrusion of machine
analysis when compared with human review. Despite a proliferation of ethical principles, there
is a lack of clarity on how these should be operationalised in different sectors, and who should
be responsible for oversight and scrutiny.
Moreover, it is crucial for UKIC to continue to engage with external stakeholders to inform the
development of internal policy regarding its use of new technologies, including AI. In addition
to engaging with other government departments and those with oversight responsibilities,
this should also include incorporating views from civil society organisations and other public
interest groups, as well as drawing on lessons learned from other sectors in the development
and deployment of AI.
About the Authors
Alexander Babuta is a Research Fellow at RUSI, where his research focuses on the use of emerging
technologies for national security and policing. He publishes regularly on issues related to
surveillance policy, data ethics and artificial intelligence.
Marion Oswald is Vice-Chancellor's Senior Fellow in Law at Northumbria University. She chairs
the West Midlands Police and Crime Commissioner's data ethics committee and is a
member of a number of advisory boards concerned with data and technology.
Ardi Janjeva is a Research Analyst at RUSI. His research currently spans numerous areas within
Organised Crime and National Security, including the application of emerging technologies for
national security and law enforcement, intellectual property crime and counterfeiting, and
cyber-enabled fraud.
Annex: Selected AI Guidance and Ethical Principles

Organisation | Publication | Content
Department for Digital, Culture, Media and Sport (UK) | Data Ethics Framework | Seven principles against which to assess public sector data projects and plan their delivery.

Alan Turing Institute (UK) | Understanding Artificial Intelligence Ethics and Safety | Ethics guidance tailored to the design of AI systems.

National Cyber Security Centre (UK) | ‘Assessing Intelligent Tools for Cyber Security’ | Guidance for those involved in procuring or developing AI security tools. Includes a series of questions to consider, such as using data correctly to ensure the AI tool can learn its task, and assessing reliability and resilience.
HM Treasury (UK) | The Aqua Book: Guidance on Producing Quality Analysis for Government | A guide for those producing analysis for government. Includes principles of quality analysis.
44 Arcial Intelligence and UK Naonal Security
Organisaon
Publicaon Content

White House Office of Science and Technology Policy (US) | ‘Principles for the Stewardship of AI Applications’ | Principles for US federal agencies to apply when taking a regulatory approach or holding AI systems to account, including risk assessment.
US Department of Defense (US) | Summary of the 2018 Department of Defense Artificial Intelligence Strategy: Harnessing AI to Advance Our Security and Prosperity | Sets out how the department intends to accelerate delivery of AI-enabled capabilities.

European Commission (EU) | White Paper on Artificial Intelligence: A European Approach to Excellence and Trust | Outlines a focus on areas where legal frameworks intersect with AI, and measures which support a regulatory and investment-oriented approach.

United Nations (UN) | ‘High-Level Panel on Digital Cooperation’ | Recommendations on how to leverage digital technology for good, including raising awareness, exchanging best practice and synchronising stakeholder aims.
OECD | Recommendation of the Council on Artificial Intelligence | Principles to use AI to drive inclusive growth, include adequate safeguards, allow for responsible disclosure, ensure the robustness of AI systems throughout their lifecycles, and hold operators accountable for their proper functioning.
Google (private sector) | ‘AI at Google: Our Principles’ | Seven principles to assess AI applications against, including social benefit, avoiding unfair bias, safety, accountability and privacy.
Microsoft (private sector) | Saleema Amershi et al., ‘Guidelines for Human-AI Interaction’, CHI ’19: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems | Guidelines covering, among other things, helping users understand what an AI system can do, matching AI language and behaviours to the user’s context, and how the behaviours of an AI system should update in response to user actions.
IBM (private sector) | Ryan Hagemann and Jean-Marc Leclerc, ‘Precision Regulation for Artificial Intelligence’ | Recommendations for a risk-based approach to regulating AI, including testing AI systems for bias.


IEEE (academia / NGO / civil society) | Ethically Aligned Design (IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems) | Principles for embedding ethical considerations in the design and development of autonomous and intelligent systems, including ensuring the people behind these systems have the requisite competence.
 Arcial Intelligence and UK Naonal Security
Organisaon
Publicaon Content
Partnership on AI (academia / NGO / civil society) | Work on explainable AI | Applies themes including the nature of explanations and the needs of the target audiences of explainable AI.

World Economic Forum (academia / NGO / civil society) | ‘Guidelines for AI Procurement’ | Ten guidelines for AI procurement processes, covering the benefits and risks of using AI, mechanisms of algorithmic accountability and transparency, and the sharing of responsibility between provider and acquiring party.

AI Now Institute (academia / NGO / civil society) | Algorithmic Impact Assessments: A Practical Framework for Public Agency Accountability | Framework for public agencies to conduct self-assessment of automated decision systems, develop external researcher review processes, solicit public comments to clarify concerns, and provide avenues for people to challenge inadequate assessments or inappropriate system uses.