Imagining a global security framework for artificial intelligence

Katie Sadler
11/21/2017

AI and robotics are of paramount importance to future generations—but growth will not be without cost if challenges remain unaddressed, according to the UN

Photo by Scott Webb on Unsplash

On September 7, 2017, the United Nations Interregional Crime and Justice Research Institute (UNICRI) announced the launch of the Center for Artificial Intelligence and Robotics to enhance understanding of the risk-benefit duality posed by AI and robotics.

The center, established to improve awareness of potential threats, coordinate responses and help eliminate them, will monitor global developments and contribute to policymaking.

“Building consensus amongst concerned communities (national, regional, international, public and private) from theoretical and practical perspectives in a balanced and comprehensive manner is integral to the Center’s approach,” said Cindy Smith, director of UNICRI.

“Many of the challenges also present opportunities that can be developed if the implications involved by this technological revolution are addressed from the very beginning.”

Regulating for security: The tip of the iceberg

The launch comes as high-profile figures from the AI and robotics fields continue to raise concerns over the need for improved governance and legal frameworks.

In an open letter to the UN in August, over one hundred AI specialists urged the organization to ban further development of autonomous weapons and called for tighter security around the progress of intelligent technologies. “Once developed, [lethal autonomous weapons] will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the letter stated.

Professor Roman V. Yampolskiy, director of the Cyber Security Laboratory at the University of Louisville, echoes calls for greater regulation: “Advanced AI accidents present high risk of financial loss and potential fatalities. Concerns about AI safety are very timely.”

On the other hand, Tom Charman, CEO of the machine learning-based application KOMPAS and a TEDx speaker, calls for a shift in focus. “The focus should be on the long-lasting risks to security and society rather than the threat of ‘killer-robots’,” he said.

“Currently, we are working towards creating ‘intelligent’ machines, without considering the long-term risks such as security, control and verification.”

“Although those developing the technology may be doing so with the aim to create a positive impact, concerns around who controls the technology and risks surrounding state actors having an unfair advantage over others are areas that UNICRI should be focusing on,” said Charman.

Securing AI: Goals and values

The short-term risks posed by artificial intelligence technologies, such as unemployment and inequality, are not fundamentally different from the potential downsides of other disruptive technologies, said Professor Huw Price, academic director of the Leverhulme Centre for the Future of Intelligence. But in the long term, he said, very powerful AI systems may carry out many crucial jobs.

“We’ll put a lot of things that matter on autopilot. These autopilots will be extremely good at doing exactly what we tell them to do,” said Professor Price.

“We need to be very sure that we know how to give the right instructions—if we leave loopholes, the AIs will find them.”

Professor Price borrows an idea from Stuart Russell of the University of California, Berkeley:

“If your domestic robot doesn't know that pets are off-limits as food, it will make you a delicious low-cost meal from the neighbor’s cat!”

Discovering how to give AIs the right instructions—better known as the ‘value alignment problem’—is imperative in ensuring security and safety, Professor Price added.

“We want to make sure the AI’s goals or values are aligned with ours. It’s a serious issue, and we’ll need to solve it to make sure that AI is safe in the long run.”

AI and cybersecurity: Cure or curse?

Cybersecurity, AI and machine learning have become deeply entwined in recent years; a natural alliance to overcome and prevent cyber attacks. But according to Malcolm Harkins, chief security and trust officer at Cylance, AI presents no substantially greater risk than any other technology.

“There could be the risk of a vulnerability in the technology and subsequent exploitation which could cause harm,” said Harkins. “That’s why a strong security development lifecycle and privacy by design should be done for any and all technology and subsequent operation.”

Photo by Luca Bravo on Unsplash

For Harkins, it’s about recognizing that “risks of social/economic upheaval and instability are real across many sectors, but cybersecurity does not have the same sort of systemic foundational issues.”

However, the universal integration of AI within cybersecurity could also escalate the scale of a potential attack. Professor Price believes “cybersecurity is another arena in which too little cooperation means that everyone loses in the long run. Combining AI and cybersecurity makes this risk worse.”

Likewise, widespread development of the technology has opened the door to AI as an attack weapon. “AI is the best attack method. We are about to see some spectacular cyberwars. Such tools are very hard to govern,” warns Professor Yampolskiy.

Putting words into action

In the face of all this uncertainty, the new UNICRI office will be tasked to observe and predict AI-related threats and examine the plethora of legal, ethical, security and societal concerns—each of which is no small task.

"We certainly do not want to plead for a ban or a brake on technologies. We will explore how the new technology can contribute to the sustainable development goals of the UN,” said Irakli Beridze, senior strategic adviser at UNICRI, during an interview with Dutch newspaper de Telegraaf.


Monitoring development while encouraging the advancement of AI for the greater good will be a balancing act for UNICRI. According to Charman, regulatory bodies are best placed to weigh the risks surrounding AI by writing policy around the benefits it creates.

“It is their job to actively promote the safe development of the technology, without implementing draconian regulation,” he said. “Should this happen, the development of AI will be driven underground, only to develop in a way that’s even more unregulated than it is in today’s society.”

What does this mean in practice?

It’s a firm warning to both developers and regulators about the importance of cooperation in developing proactive safety regulations, said Professor Price. Regulatory bodies, he said, “should work in conjunction with technology developers to encourage a safety-first research culture. It is better to avoid the mistakes in the first place than to try to punish them when they happen.”

Photo by James Sutton on Unsplash

It’s not about being first over the line, but about encouraging international cooperation on the risks and challenges implicit in the AI-security debate.

“Wherever AI is first developed, its impacts will soon be global, and being first to develop it isn’t going to provide protection against the risks. It is in everyone’s interests to collaborate on the safety issues and avoid taking shortcuts,” Professor Price added.

This is a point on which Harkins agreed: “We have just gotten started. AI maturation coupled with the potential for quantum computing could unleash way more things that we haven’t even dreamt of before.”


