The European Commission has published the final text of the Artificial Intelligence Act (AIA), the world’s first comprehensive regulatory framework dedicated to artificial intelligence. The regulation lays down a harmonised set of rules to promote ‘reliable, human-centred’ artificial intelligence, applicable to Medical Devices, in vitro diagnostic (IVD) devices and other products. The AIA aims to ensure the safe placing of these products on the market, as also recognised by the European Parliament.
Published in the Official Journal on July 12, 2024, the Regulation entered into force on August 1, 2024. Most of its requirements for high-risk AI systems become applicable on August 2, 2026, while the obligations for high-risk AI systems that are, or are safety components of, products already covered by sectoral legislation, such as Medical Devices, apply from August 2, 2027.
The AIA aims to improve market functioning by prohibiting specific AI practices and introducing additional obligations for high-risk AI systems. In addition, the regulation requires AI system providers and operators to ensure that their personnel have an appropriate level of expertise.
Obligations for High-Risk AI Devices
The AI Act includes several provisions relevant to Medical Devices that integrate or rely on artificial intelligence. However, the legislation does not treat all applications in the same way. By adopting a risk-based approach, most of the new compliance, reporting and liability obligations fall on ‘providers’ (Art. 3(3) AI Act) of AI components or solutions considered ‘high risk’, in practice those in Class IIa or higher.
The regulation specifies that manufacturers of high-risk Medical Devices must implement a risk management system covering the entire product life cycle, put in place data governance ensuring that the data used are, to the best extent possible, free of errors and complete, and produce technical documentation demonstrating compliance with the regulation.
Among its many obligations, the manufacturer must:
- Adopt a quality management system under Art. 17, rigorously documented through written policies, procedures and instructions.
- Ensure that the high-risk AI system complies with the requirements of the AI Regulation and undergoes the conformity assessment procedure necessary to be marketed.
- Affix the CE marking to the high-risk AI system or, where this is not possible, to its packaging or accompanying documentation.
- Put in place a post-market monitoring system to collect and analyse data on the performance of the AI system, such as the automatically generated logs referred to in Art. 19, and provide any necessary updates.
- Ensure that the high-risk AI system complies with the accessibility requirements of Directives (EU) 2016/2102 and (EU) 2019/882.
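As an illustration of the logging obligation above, a minimal sketch of traceability logging is shown below. The AI Act requires that high-risk systems automatically record events over their lifetime but does not prescribe a log schema, so the field names (`model_version`, `input_ref`, and so on), the JSON Lines format and the `log_event` helper are assumptions chosen for this example, not requirements of the regulation.

```python
import datetime
import io
import json

def log_event(stream, model_version, input_ref, output, level="INFO"):
    """Append one traceability record (as a JSON line) per inference.

    The fields recorded here (UTC timestamp, model version, a reference
    to the input, and the output) are illustrative: they aim at making
    each result attributable to a specific model and input, which is
    the kind of traceability the logging obligation is meant to support.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,
        "output": output,
        "level": level,
    }
    stream.write(json.dumps(record) + "\n")
    return record

# Example: record a single, entirely hypothetical diagnostic inference.
buf = io.StringIO()
rec = log_event(
    buf,
    model_version="classifier-1.4.2",
    input_ref="scan-0042",
    output={"finding": "nodule", "score": 0.91},
)
```

In a real device the stream would be an append-only, access-controlled store retained for the period required by the regulation, rather than an in-memory buffer.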
In addition, the obligations under the GDPR concerning the processing of personal data continue to apply in full to providers in their role as data controllers or data processors.
Finally, the provider must identify appropriate human oversight measures before placing the system on the market. High-risk AI systems must be designed to be transparent, allowing operators to understand and interpret the output provided by the system. The instructions for use must be ‘concise, complete, correct and clear’.
For lower-risk devices, the regulation lays down less stringent requirements than for high-risk products.
Changes and Regulatory Strategies
According to recent analyses, many AI applications in healthcare will be classified as ‘high risk’ and subject to demanding compliance requirements.
The AI Act changes the regulations for Medical Devices, introducing new rules and definitions for using artificial intelligence in healthcare. This represents a significant change in the development and use of advanced technologies in medicine, forcing companies to review their compliance and innovation strategies.
The first essential step is therefore to determine when a medical device that uses artificial intelligence falls into the high-risk category.
>>> Thema experts are ready to guide you through every step necessary to achieve and demonstrate compliance for Medical Devices using artificial intelligence, accompanying you throughout the product life cycle.
SOURCE:
https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=OJ:L_202401689