In the World Economic Forum’s Global Risks Report 2021, cautionary words were shared: ‘Cybersecurity failure could be one of the greatest challenges of the next decade’. Just two years on, rapid developments in AI are seeing the technology become increasingly embedded worldwide, and cybersecurity professionals are making a clear case for balancing progress with protection.
At present, AI development is largely unconstrained by law or ethical boundaries, leaving cybercriminals free to use the technology to devise innovative new attacks. The Washington Post recently published a piece stating that “cybersecurity faces a challenge from artificial intelligence’s rise”. In it, journalist Joseph Menn details how savvy cybercriminals have been using the technology to orchestrate attacks by impersonating senior staff within organisations. In one example, a sales director at the security company Zscaler received a call that displayed the Chief Executive’s photo, and a voice resembling his said, “I need you to do something for me”. The call then dropped, and the employee received a WhatsApp message stating: “I am having bad network coverage as I am traveling. We can text in the meantime.” The texts that followed asked for assistance in moving money to a bank in Singapore. At that point, a manager intervened and handed the phone to internal investigators, who identified the exchange as a scam to steal company money. The criminals had recreated the Chief Executive’s voice using clips from remarks he had made in public.
This is just one example. As generative AI development continues to pick up pace, evidence of new kinds of cyber attacks is coming to the surface. These include Advanced Persistent Threats (APTs), AI-powered malware, Distributed Denial of Service (DDoS) attacks and deepfake attacks. An APT is a sophisticated, sustained attack in which an intruder enters a network undetected and remains inside for a long period, continuously accessing sensitive data; such attackers use AI to evade detection. AI-powered malware is designed to adapt its course of action to the situation it encounters, which is particularly challenging once it embeds itself in a victim’s system, because its behaviour is difficult to predict. DDoS attacks use AI to identify and exploit vulnerabilities within a network, allowing attackers to amplify and scale an attack. Finally, deepfake attacks impersonate real people using AI-generated synthetic media such as images and videos, and can be used to commit fraud or run disinformation campaigns.
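As a defensive counterpart to the volumetric DDoS attacks described above, floods of requests are often surfaced by simple rate-based anomaly detection. The sketch below is purely illustrative and is not drawn from the article: the class name, window size and threshold are assumptions chosen for the example.

```python
from collections import deque
import time


class RateAnomalyDetector:
    """Flags a source IP whose request count exceeds a threshold
    within a sliding time window (a basic volumetric-flood heuristic)."""

    def __init__(self, window_seconds=10, max_requests=100):
        self.window = window_seconds
        self.max_requests = max_requests
        self.events = {}  # source_ip -> deque of request timestamps

    def record(self, source_ip, timestamp=None):
        """Record one request; return True if the source now looks anomalous."""
        now = timestamp if timestamp is not None else time.time()
        q = self.events.setdefault(source_ip, deque())
        q.append(now)
        # Drop timestamps that have fallen outside the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests


# Example: 60 requests from one source within a single second.
detector = RateAnomalyDetector(window_seconds=1, max_requests=50)
flags = [detector.record("10.0.0.5", timestamp=i * 0.01) for i in range(60)]
# The first 50 requests pass; every request after the threshold is flagged.
```

Real defences layer many such signals, but the design point stands: fixed thresholds like this one are exactly what AI-assisted attackers probe for, which is why adaptive detection matters.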
The level of threat that organisations face today is unprecedented. The core problem is that cybercriminals are using AI to launch new and sophisticated attacks at scale. Generative AI is enabling bad actors to innovate and develop new strategies for attacking companies, and that innovation is dangerous because it lets them stay one step ahead of cybersecurity defense measures.
Firms need to be able to plan for and defend themselves against such attacks. First, a defense strategy should include a system-specific analysis of the business and a critical understanding of the security requirements arising from the domain of application. Second, a cybersecurity strategy must include elements that cater specifically for AI, including traceability of data and ongoing testing procedures that probe an organisation’s risk posture and resilience. RFA is a world-leading cybersecurity firm that works with clients to design, develop and execute risk management and defense strategies. If you would like to learn how we can help your business, contact us today.