NB: The information on this site is provided for general information purposes only. The site cannot be relied on for legal advice. Each project should carry out its own legal assessment.
Overarching framework
The EU has proposed a general legal framework for AI systems across many different sectors (the ‘AI Act’). The proposal entails a comprehensive set of requirements that AI systems and their providers must comply with before the systems can be put into practice or marketed within the European Union. For example, the proposed AI Act sets out requirements relating to data quality and data governance measures. Training data, validation data and test data shall be relevant, representative, error-free and complete. The datasets shall be appropriately composed with respect to the populations on which the AI system is intended to be used.
Other requirements in the AI Act relate to technical documentation, documentation of the development process and logging of events during the operational phase. Specific measures are required to ensure human oversight of AI systems, as well as a reasonable level of transparency, considering the intended use of an AI system. There are also requirements concerning accuracy/performance, resilience, and cyber security.
We are monitoring the legislative process and intend to update this site as the AI Act progresses through the EU legislative procedure.
AI systems as medical devices
AI systems that are intended to be used in connection with individual courses of treatment are often deemed to be “medical devices” according to the EU Medical Device Regulation (MDR). This is, for example, the case for clinical decision support systems and patient monitoring systems. Manufacturers of medical devices are subject to a comprehensive regime of safety and performance requirements.
The MDR also regulates the process of assessing the safety and performance of medical devices through clinical trials.
In the coming months, we will develop this site with further information about the rules and procedures for the development, clinical trials and implementation of AI systems under the MDR.
Machine learning and the GDPR
Machine learning and AI typically rely on large amounts of health data, which must be processed in accordance with the fundamental right to privacy and data protection. In the EU/EEA, the rules for processing of personal data, including health data, are specified in the General Data Protection Regulation (GDPR).
Machine learning projects that process personal data must demonstrate a lawful basis for the processing in accordance with the GDPR. For the processing of health data, the GDPR requires that one first establishes one of the general legal grounds found in Article 6(1) and, additionally, one of the legal grounds for the processing of health data in Article 9(2). When establishing a general legal basis under Article 6(1), several alternatives could be relevant to the project. For instance, one might consider whether the processing is “necessary for compliance with a legal obligation to which the controller is subject” (Article 6(1)(c)). However, reliance on this alternative usually depends on the existence of a specific legal obligation in the national law that applies to the project. Therefore, depending on the relevant national law, one could instead consider whether the “processing is necessary for the performance of a task carried out in the public interest” (Article 6(1)(e)). For example, tasks related to the improvement of health services through innovation and research could be seen as tasks carried out in the public interest.
Under Article 9(2) GDPR, relevant grounds for processing include necessity for the purposes of medical treatment (letter h), necessity “for reasons of public interest in the area of public health” (letter i) and necessity for scientific research purposes (letter j). Notably, these provisions can only be relied on if there is also a supplementary legal basis for the processing in national law or EU law. As an example, Norwegian health legislation contains provisions under which one can apply for access to health data. When the competent authority permits access to health data in accordance with those provisions, the decision constitutes the required supplementary legal basis in Norwegian law. In addition to the legal grounds mentioned so far, health data can also be processed on the basis of each data subject’s explicit consent.
Once a project is engaged in the processing of personal data, there are many provisions in the GDPR that must be complied with. On this website, we will not go into detail on the full list of requirements. It is, however, worth mentioning some of the fundamental principles for the processing of personal data, cf. Article 5 GDPR. In particular, the principles of data minimisation and purpose limitation are sometimes seen as being at odds with the nature of many machine learning projects. The principle of data minimisation entails that one should not collect or process more data than is necessary in relation to the purposes for which the data are processed. Purpose limitation means that the data must only be collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes. In some machine learning projects it might be challenging to determine, at the time of data collection, exactly which parts of a dataset will contribute to the development of a useful model.
To comply with the principle of purpose limitation, machine learning projects should articulate, as specifically as possible, the intended end goals of the project. For instance, the idea could be to develop an AI system that can be used to support certain types of medical decisions, or it could be to develop generalisable knowledge that can be used to suggest changes in how health services are provided.
The Norwegian framework for access to health data
The Norwegian version of this site contains more information on the Norwegian legal framework concerning access to health data. Norwegian health and care services are highly digitalised when it comes to the collection, storage and processing of data. Consequently, vast amounts of well-structured, high-quality health data exist in the digital infrastructures of Norwegian health institutions and in health registries, where data have been collected for secondary purposes such as research, quality improvement and statistics. Pursuant to Norwegian law, health data can be made available from sources such as electronic health records and health registries, under certain conditions. In practice, the most important procedure for accessing health data in machine learning projects is through an application for a dispensation from the duty of confidentiality. The dispensation provides a legal basis for accessing and using health data for further specified purposes.
One of the recent developments in the Norwegian framework for access to health data is a provision that specifically empowers the dispensation authority to permit the use of health data for the purpose of developing and implementing clinical decision support systems. More information on the scope of this provision is provided on the Norwegian version of our site.