On 11 September 2023, the European Commission published on its official website "Cybersecurity of Artificial Intelligence in the AI Act: Guiding principles to address the cybersecurity requirement for high-risk AI systems", which sets out the cybersecurity requirements for high-risk artificial intelligence (AI) systems.
The European Commission’s proposal for the AI Act represents a significant step in the regulation of artificial intelligence, addressing risks to health, the security of AI systems and the fundamental rights associated with such technologies.
As far as cybersecurity is concerned, governed by Article 15 of the AI Act, high-risk AI systems must be designed to resist attempts to alter their use, behaviour and performance, or to compromise their security properties.
More specifically:
- An AI system may include one or more AI models, as well as interfaces, sensors, databases, network communication components, computing units or monitoring systems. Cybersecurity requirements, however, must be applied to the AI system as a whole rather than directly to its individual components.
- A risk assessment is required, taking into account the internal architecture of the AI system and its application context. Two levels of assessment can be identified: a cybersecurity risk assessment and a higher-level regulatory risk assessment covering several requirements, as described in Article 9. The risk assessment aims to identify the specific risks, translate the higher-level cybersecurity requirements into concrete controls, and implement the necessary measures.
- The protection of AI systems requires an integrated and continuous approach based on proven practices and specific controls. This process must build on existing IT security procedures, combining established controls for software systems with measures specific to AI models.
- The current landscape includes a wide variety of technologies of varying complexity, ranging from traditional machine learning models to the latest deep learning architectures. However, not all of these technologies are ready for use in high-risk AI systems until their cybersecurity limitations have been adequately addressed.
Artificial intelligence systems are undergoing a rapid transformation, raising the technological level and effectiveness of medical devices. However, AI also harbours pitfalls and threats that must be properly addressed, such as the security of sensitive data. It is precisely for these reasons that the AI Act assumes particular relevance, becoming a crucial element in ensuring the security of high-risk systems.
>> Through its Strategic-Regulatory Consulting services and, where required, by acting as European Authorised Representative, Thema can help you implement the requirements of the Medical Device Regulation MDR (EU) 2017/745.
Source: