Meta Content Moderation Tool: Let’s Start Cleaning the Internet

Come January, Meta will take its place as chair of the board of the Global Internet Forum to Counter Terrorism (GIFCT), a cross-industry counter-terrorism organization. The upcoming appointment has pushed Meta's content moderation policies into overdrive, as the company aims to be worthy of its position.

A founding member of GIFCT, Meta will share data with other companies to keep the Internet free from violent images, terrorism, and human trafficking.

[Image: Meta is working with other companies to police terrorism-related content online. Image Credit – Freepik]

Meta’s Content Moderation to Counter Terrorism

In recent times, Meta's growth has been hit by inflation and by lawsuits, as governments question its content moderation and data policies.

As part of Meta’s commitment to safeguard people from harmful content, the company is launching a new free tool to help platforms identify and remove violent content.

Meta’s Hasher-Matcher-Actioner (HMA) will be a free, open-source content moderation software tool “that will help platforms identify copies of images or videos and take action against them en masse,” Meta President of Global Affairs Nick Clegg said in a release.

HMA can be adopted by companies to stop the spread of terrorist content on their platforms. It will be especially useful for smaller organizations that lack the resources available to big companies.

It is a valuable tool for companies that do not have in-house capabilities to moderate content at high volume. Member companies of the GIFCT will diligently monitor their networks with HMA and keep their platforms free of harmful and exploitative content.
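The "hasher-matcher-actioner" pattern named in the tool can be sketched in three steps: hash incoming media, compare the hash against a shared database of known harmful content, and act on matches. Below is a minimal illustrative sketch, not Meta's actual implementation; real systems use perceptual hashes (such as Meta's open-source PDQ algorithm) rather than the raw hex strings assumed here, and the `threshold` value and `bad_hashes` set are hypothetical.

```python
# Illustrative hasher-matcher-actioner sketch. Assumes media items have
# already been hashed into 256-bit hex strings (the "hasher" step).

def hamming_distance(hash_a: str, hash_b: str) -> int:
    """Count the differing bits between two equal-length hex hashes."""
    return bin(int(hash_a, 16) ^ int(hash_b, 16)).count("1")

def matches_known_content(candidate: str, known_bad: set[str],
                          threshold: int = 31) -> bool:
    """Matcher: flag the candidate if it lies within `threshold` bits
    of any hash in the shared database of known harmful content."""
    return any(hamming_distance(candidate, h) <= threshold
               for h in known_bad)

def action(item_id: str, candidate: str, known_bad: set[str]) -> str:
    """Actioner: decide what to do with a matched item (here, just a label)."""
    return "remove" if matches_known_content(candidate, known_bad) else "allow"

# Hypothetical hash database shared across platforms (e.g., via GIFCT).
bad_hashes = {"f" * 64}  # one known-bad 256-bit hash

print(action("post-1", "f" * 64, bad_hashes))  # exact copy of known content
print(action("post-2", "0" * 64, bad_hashes))  # unrelated media
```

Using a distance threshold rather than exact equality is what lets perceptual-hash systems catch near-duplicates (re-encoded, cropped, or lightly edited copies), which exact cryptographic hashes would miss.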

Meta is estimated to have spent over $5 billion globally on safety and security in 2021, with more than 40,000 people working in this area.

Meta's content moderation effort aims to tackle terrorist content as part of its broader plan to protect users from harmful content. The California-headquartered tech giant also uses AI to help moderate and remove harmful content.

The company also revealed that its content moderation tools have significantly reduced the visibility of hate speech, and that it regularly blocks fake accounts to contain the spread of misinformation.

Matthew Schmidt, associate professor of national security, international affairs, and political science at the University of New Haven, told ABC News that most organizing of terrorist events or human trafficking happens on the dark web.

Schmidt noted that open-source software is key to keeping these actors from wreaking havoc in society, as it limits their reach. He also mentioned that most content moderation efforts have come from private companies rather than the government.

Content Moderation Policies

On September 13, 2022, California enacted a broad social media transparency law (AB 587) requiring social media companies to post their terms of service with, and to submit semi-annual reports to, the California Attorney General’s office.

The legislation applies to social media companies with revenues of over $100 million in the previous year. The law does not define whether or how social media companies must moderate content.

For now, it expects social media companies to submit their current terms of service and semi-annual reports on content moderation to the AG’s office.

Content Moderation and data privacy issues have been a topic of hot discussion in the past few years. Both federal and state agencies have attempted to bring in policies that safeguard users while reining in hate speech.

Earlier, Florida and Texas passed content moderation laws, hoping to bring some order to what is shared on the Internet. The Florida law restricted internet services' ability to moderate content and mandated certain disclosures.

The Texas law, on the other hand, prohibits social media platforms from "censor[ing]" users or content based on viewpoint or on the user's geographic location in the state. It does not prevent companies from moderating unlawful expression or specific discriminatory threats of violence.

As nations wake up to the power of online platforms, social media companies are increasingly pressured to adopt stricter policies so that they do not indirectly encourage unlawful activity.
