The recent EU proposal on artificial intelligence (AI) includes measures such as regulating input data in order to improve AI explainability and reduce algorithmic bias.
Indeed, the proposal states that unbalanced data is one of the causes of misrepresentation in AI models. Until now, AI has mostly been governed by a patchwork of disparate ethics frameworks and guidelines rather than clear regulation. By regulating the dataset features fed into a model, some ethical issues, such as bias in facial recognition systems, could be avoided.
Hence, an input-based rule could be an effective route towards explainable AI and reduced algorithmic bias.
Moreover, the new legislation also suggests that the feedback loops of products already on the market should be continually monitored, as these could lead to biased outputs.
Although the Commission has focused its draft on ‘high-risk AI’ use cases, those that could endanger the health, safety, and fundamental rights of persons, it has been reported that the legislation could also be extended to non-high-risk models.
The EU AI legislation aims for extraterritorial reach similar to that of the existing General Data Protection Regulation (GDPR): any company whose AI model outputs affect European citizens’ data will have to comply with the new rules.