Instagram’s public policy director in Europe, Tara Hopkins, said: “In the EU at the moment, we can only use that mix of sophisticated technology and human review element if a post is reported to us directly by a member of the community.”

She said that because, in a small number of cases, a human reviewer would assess whether to send additional resources to a user, regulators could consider this a "mental health assessment" and therefore special category data, which receives greater protection under GDPR.

Ms Hopkins said the company was in discussions with the Irish Data Protection Commission (IDPC), Facebook's lead regulator in the EU, and others over the tools and their potential future introduction.

In a blog post announcing the update, Instagram boss Adam Mosseri said it was an “important step” but that the company wanted to do “a lot more”.

He said not having the full capabilities in place in the EU meant it was “harder for us to remove more harmful content, and connect people to local organisations and emergency services”.

Facebook and Instagram are among the social media platforms to come under scrutiny for their handling of suicide and self-harm material, particularly its impact on vulnerable users, especially young people.
