Improving Content Moderation: Meta and YouTube Deploy AI to Remove ‘Disturbing Content’ Videos, But Challenges Remain

Meta and YouTube are using AI to remove disturbing content from their platforms, but experts question whether the technology can distinguish between violent content and evidence of human rights violations.

Meta and YouTube are using AI to remove disturbing content from their platforms, according to the BBC. Experts, however, are skeptical of the move, arguing that current AI technology cannot distinguish between videos of violence and videos that could serve as evidence of human rights abuses. This raises questions about how effective AI really is at moderating disturbing content on social media and video platforms. While using AI to remove such content is a positive step, further measures are needed to ensure it identifies disturbing material accurately.

Swift Removal of Disturbing Content Videos on Facebook and Instagram

Ihor Zakharenko, a former journalist, attempted to post videos of human rights violations by the Russian Army in Ukraine on Facebook and Instagram. The videos were taken down almost immediately. Zakharenko then tried to upload them again using dummy accounts, but according to the BBC report, three of the four videos were removed within ten minutes. The incident highlights how difficult it can be to share evidence of human rights abuses on social media, and it underscores the importance of protecting freedom of expression and those who speak out against injustice.

Balancing Act: Understanding the Complexities of Content Moderation Policies

Social media companies have community guidelines that permit the removal of graphic violence and other disturbing content. Experts argue, however, that AI cannot make the nuanced judgments needed to allow videos tied to important and newsworthy events. Instagram's guidelines, for example, prohibit graphic violence but allow exceptions for content shared to condemn it, raise awareness, or educate. Because AI cannot yet make these contextual decisions, social media companies must still monitor content and enforce their guidelines themselves.

YouTube has strict policies that prohibit content intended to shock or disgust viewers, as well as disturbing content that promotes violent acts. Prohibited content includes videos of accidents, disasters, war aftermath, attacks, immolation, corpses, protests, robberies, medical procedures, and similar scenarios. YouTube encourages users to report any content that violates their guidelines and will take appropriate action to remove it.

Striking the Balance Between Bearing Witness and Protecting Users

Meta and YouTube say they aim to strike a balance between enabling users to bear witness and protecting them from harmful and disturbing content. In practice, however, relying entirely on a general-purpose AI tool to remove all forms of violent content tips that balance too far. Experts quoted by the BBC suggest two alternatives: take AI out of content moderation and leave decisions to human reviewers, or develop AI that can better distinguish videos depicting violence from those that may serve as evidence of human rights abuses.
