With AI advancing at pace, in the hands of both businesses and cyber attackers, how can we build trust into these systems and how can businesses keep up? These were just some of the questions explored in Airmic’s innovation panel debate at its conference in Liverpool.
AI presents three key risks to companies today, delegates heard. First, autonomous cyber attacks: AI will be able to generate attacks at a speed and scale “we can’t yet comprehend”. This is already starting to happen, with highly targeted, personalised and effective phishing attacks taking threats to a new level.
Second, deepfake technologies, which are becoming more sophisticated by the day. These are already fuelling social engineering attacks and, with such software now accessible to the masses, are likely to be “weaponised” for political gain or to create social unrest.
Third, data poisoning and manipulation. Cyber criminals are starting to corrupt the data fed into systems, creating biased outputs and poor decision-making. This technique can be used, for example, to bypass facial recognition security systems by embedding fake profiles within them.
The good news is AI is also advancing cyber security efforts. “The future of cyber warfare is AI versus AI,” said Kirsty Kelly, Global CISO, CFC. “It’s about knowing that AI-enabled attacks are coming and asking how can we implement that into the defensive side? We need to move from reactive to proactive – to predict the way an attack path is going so then we can get ahead of the threat.”
By way of example, Kelly cited automated incident response technology as a “fantastic opportunity” that is very close to becoming a reality. “Imagine your security team being able to open up the platform when there’s an incident and endpoints are already isolated,” she said.
Building trust and boardroom engagement
Building trust into AI processes and systems will be vital, especially as AI systems develop agency, the panel agreed. Businesses need to think carefully about how AI is being used, integrated and programmed to behave so it works effectively and doesn’t take autonomous decisions that are detrimental to the business.
The risks associated with cyber and AI need greater boardroom visibility, according to John Maguire, head of cyber resilience and incentives at the government’s Department for Science, Innovation and Technology (DSIT). “It’s not at the levels it should be,” he said. “Often there’s a lack of senior ownership of digital risk. It should be seen the same way as any other core risk.”
He noted that three quarters of large organisations in the UK experienced a cyber attack last year and that the recent cyber attack on M&S has served to raise the profile of the risk. “There is recognition that this needs to be treated as a fundamental business risk, not just a digital risk that only the CISO is responsible for.”
Risk and insurance takeaways:
AI is advancing both cyber attack and cyber defence capabilities at speed and scale;
Building trust into AI processes and systems will be vital, especially as AI systems develop agency;
Boardrooms must take greater ownership of cyber-related risks.
The panel: In a digital world with dynamic growth of risk – the relevance of innovation.
Moderator: Tom Hoad, Head of Howden Venture
Speakers: Tom Draper, UK Managing Director, Coalition; Kirsty Kelly, Global CISO, CFC; John Maguire, Head of Cyber Resilience and Incentives, DSIT.