The world of technology continues to evolve at an unprecedented pace. 2023 has witnessed a flood of commentary on ChatGPT, the large language model (LLM) artificial intelligence tool, following its launch in November 2022. The potential of ChatGPT is huge, and the software has proven itself capable of complex creative work. Last month, Microsoft announced plans to bring Copilot, an AI assistant, to Outlook, Word, Excel and PowerPoint, signalling a profound change in the way millions of people will carry out their daily work. However, like many developments in technology, there is a dark side to its implementation: the security of data and the privacy of information. The adoption of AI is a cybersecurity issue.
2022 was a turbulent year for cyberattacks, with the European Union Agency for Cybersecurity describing the period as volatile due to the geopolitical landscape. This was in part due to cyberwarfare and hackers taking advantage of a vulnerable climate. According to Entrepreneur, 2022 was the year in which AI-generated phishing emails achieved higher open rates than those that had been manually crafted. This rise in attacks has been fuelling growth in the market for AI-based security products. Last year, CNBC reported the following statistic: 'the global market for AI-based cybersecurity products is estimated to reach $133.8 billion by 2030, up from $14.9 billion in 2021'.
The issue with AI-generated phishing emails is that all too often they appear to originate from within a company, asking for access to a restricted part of a server or for the transfer of funds. With the advancement of AI capability, the sophistication of such emails has skyrocketed. An employee can receive a direct email from their manager with an urgent request to review. The email can reflect the manager's real time zone and location, so it appears genuine, even in a hybrid working environment. While reading the email, the employee receives a call from the manager, who explains they wanted to talk first to make sure the email request had arrived. The voice on the phone sounds familiar and reiterates the demand set out in the email. The quality of fake voices and AI-crafted emails has improved so much that it has become even harder to identify what might be created by criminals using AI. Hackers are leveraging AI to make their attacks look genuine and authentic. The reality is that many organisations will face the challenge of overcoming AI-related cybersecurity threats that put their business and assets at risk.
Despite these risks, there is a flip side. Businesses can use AI and machine-learning powered tools to help defend themselves against such attacks, as RFA has been doing for over five years as part of the proactive prevention in our cyber armoury for our clients and our own systems. A key part of managing this risk will be outlining the business's potential vulnerabilities and where AI can be applied. In the current landscape, it is critical that firms invest in a cybersecurity strategy as sophisticated as the cybercriminals who initiate the attacks. The strategy requires continuous evaluation through governance and risk policies, and businesses should also consider investing in working with cybersecurity experts who specialise in AI-focused solutions. Please reach out if I can help in any way.