A draft European Union (EU) regulation on artificial intelligence (AI) has been leaked, revealing plans to establish a central database of high-risk AI systems in Europe.
The plan aims to protect the rights of EU citizens by banning AI systems that can manipulate human behaviour, as well as AI systems used for surveillance and social scoring. Offenders who fail to comply with these regulations could face fines of up to €20 million.
Accordingly, the draft requires providers of AI systems to undergo validation and to disclose all information about the data models, algorithms, and test datasets they use to verify their systems.
The draft also lists various AI applications considered high-risk, including the use of AI for medical emergencies, education and training, and crime detection, among others. The intention is that AI technology puts people first, so that it remains safe and trustworthy.
It also stipulates that training and testing datasets must be sufficiently relevant and representative to gain EU approval. Providers must therefore supply EU regulators with information about the technology, from conception to creation. In addition, the EU asks providers of high-risk AI systems to disclose information on the limitations and risks of their systems so that harms can be avoided.