AI to block harmful content on Instagram

Since last week, Instagram has offered a new feature that recognizes self-harm and suicide content. Using artificial intelligence (AI), the system identifies posts, images, and words that could be considered harmful and flags them, makes them less visible, and sometimes even deletes them.
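
In practice, a system like this typically pairs a classifier with tiered enforcement. The sketch below shows one plausible shape of such a pipeline; the `harm_score` function, the `Action` tiers, and the thresholds are illustrative assumptions, not Instagram's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    FLAG = "flag"      # mark for review or show a warning screen
    DEMOTE = "demote"  # exclude from search and recommendations
    REMOVE = "remove"  # take the post down entirely

@dataclass
class Post:
    post_id: str
    text: str

def harm_score(post: Post) -> float:
    """Placeholder for a trained classifier.

    A real system would run a model over the caption, image, and
    hashtags; here we just return a dummy score in [0, 1].
    """
    return 0.0

def moderate(post: Post) -> Action:
    # Thresholds are invented for illustration only.
    score = harm_score(post)
    if score >= 0.95:
        return Action.REMOVE
    if score >= 0.80:
        return Action.DEMOTE
    if score >= 0.50:
        return Action.FLAG
    return Action.ALLOW
```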

Social media apps have been working on ways to tackle harmful content for a while now. These platforms want to remove hurtful content while still helping the users in need. They aim to set a standard for suicide prevention in the online environment; guidelines have therefore been put in place to create safer online spaces, reduce the amount of harmful content, and reinforce opportunities for support.

At the same time, the platform wants to be a place where users feel safe admitting they have thought about self-harm or suicide, in order to destigmatize the taboos around these subjects. It is all about finding the right balance.

Usually, when an algorithm finds harmful or potentially dangerous content, the content is directed to human moderators, who in turn decide what to do with it. Those moderators can also point the user who posted it to help organizations and services.
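
As a rough illustration of this human-in-the-loop step, the sketch below queues flagged posts for review and lets a moderator's decision trigger either removal plus support outreach or restored visibility. All names here (`FlaggedPost`, `escalate`, `handle_review`) and the resulting actions are hypothetical stand-ins for real platform operations.

```python
import queue
from dataclasses import dataclass

@dataclass
class FlaggedPost:
    post_id: str
    author_id: str

# Posts the algorithm flags but does not decide on by itself.
review_queue: "queue.Queue[FlaggedPost]" = queue.Queue()

def escalate(post: FlaggedPost) -> None:
    """The classifier only flags; a human makes the final call."""
    review_queue.put(post)

def handle_review(post: FlaggedPost, confirmed_harmful: bool) -> None:
    # Both branches are illustrative stand-ins for platform actions.
    if confirmed_harmful:
        print(f"removing {post.post_id}")
        print(f"sending helpline resources to user {post.author_id}")
    else:
        print(f"restoring visibility of {post.post_id}")
```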

However, the new Instagram system does not include this human referral step in the UK and Europe, because of data privacy considerations related to the General Data Protection Regulation (GDPR). Implementing it there is still in the works.

This lack of human referral can therefore make the new AI system more dangerous than helpful, especially for young people. Making posts almost impossible to find, or at worst deleting them completely, without directing EU and UK users to support, is insensitive and can isolate those users even more. We do need a way to regulate harmful content, but this new system represents only limited progress.
