Headlines in 2023 have been dominated by developments in Artificial Intelligence (AI). With AI so prominent in the news, the world's attention has turned to the cybersecurity implications of these developments.
Such questions have been at the forefront of the European Union Agency for Cybersecurity's (ENISA) own agenda. In response to the developments seen in 2023, the agency released four reports in June that investigate the future of AI, and hosted a conference on the safety and trustworthiness of the technology. The conference acted as a sounding board for industry leaders and companies to share their experiences with AI, pinpointing key challenges and opportunities that the EU will face in the near future.
Key insights from the conference included the promotion of cooperation within the AI cybersecurity community in the European Union (EU). ENISA wants to work towards a proposal that could enable the EU to become a leader in regulating AI.
ENISA Executive Director Juhan Lepassaar commented: 'If we want to both secure AI systems and also ensure privacy, we need to scrutinise how these systems work.' To do this effectively, ENISA is exploring the technical complexity of AI in order to understand how best to mitigate the cybersecurity risks. At the same time, 'it is crucial that firms operating in the EU are able to strike the right balance between system performance and security,' Lepassaar shared.
In order to promote a safe and successful future for AI, ENISA explored three areas: good cybersecurity practices, cybersecurity and privacy in AI, and AI and cybersecurity research.
Good cybersecurity practices
Establishing security considerations for the cybersecurity certification of AI systems is vital to safeguarding the future of business, the success of the technology and the wellbeing of users. The EU wants to support policymakers, such as national authorities, by creating guidelines for best practices. The reports published in June seek to achieve this goal by providing a scalable framework that guides the AI community and national cybersecurity authorities (NCAs) in how best to design secure AI systems, processes and operations. The framework takes a step-by-step approach across three layers: cybersecurity foundations, AI-specific cybersecurity, and sector-specific cybersecurity for AI.
Cybersecurity and privacy in AI
In the reports, ENISA outlined privacy threats and vulnerabilities in AI technology that can be exploited. The primary focus of the report was privacy issues, as the EU considers privacy to be one of the 'most important challenges facing society today'. However, machine learning threats and vulnerabilities that can hinder security were also explored, as they are closely linked to privacy. Moving forward, it is imperative that the EU and national policymakers strike a balance between security and privacy without compromising overall software performance.
AI and cybersecurity research
A critical takeaway from the report is that five key research areas were identified as essential for designing AI for cybersecurity and securing AI, informing future EU policy developments and funding initiatives. ENISA has established that such areas of research will require 'the development of penetration testing tools to identify security vulnerabilities or the development of standardised frameworks to assess privacy and confidentiality among others'.
With generative AI developing fast, it is essential that firms using the technology are backed by IT partners that can keep pace with it. RFA is a unique IT, financial cloud and cybersecurity provider to the financial services and alternative investment sectors, able to support businesses through the AI revolution. If you would like to know how we can support your operations in the EU, contact us today.