TikTok, the short-form video platform owned by Chinese internet company ByteDance, removed a staggering 11,887,516 videos from nine African countries in the second quarter of 2024 as part of its effort to ensure online safety.
The countries are Egypt, Nigeria, South Africa, Algeria, Somalia, Libya, Ethiopia, Sudan, and Morocco, with Egypt and Nigeria topping the list. The crackdown targeted content that violated TikTok’s Community Guidelines on integrity, advertising, privacy, and security.
The initiative reflects the platform’s growing commitment to a safer digital environment; TikTok says its goal is to uphold a safe, positive space for users while respecting cultural values.
According to TikTok’s latest Transparency Report, the platform relied heavily on automated tools to accomplish this mission: 80% of violative content was removed through AI technology, a significant leap from the 62% recorded in the same period last year.
“With our proactive detection rate hitting 98.2%, we’ve significantly reduced the number of videos restored after removal,” a spokesperson said, underscoring TikTok’s increasingly vigilant approach. The proactive detection rate is the share of violating videos identified and removed before any user reported them.
Country by country, Egypt led the way with 2,754,574 videos removed, followed by Nigeria with 2,137,687. Algeria, Somalia, and Libya accounted for the next largest removals, with Somalia logging 1,380,154 takedowns.
The swift action reflects the platform’s ongoing response to rising scrutiny from African nations on content safety.
South Africa, which saw the removal of 614,406 videos, also recorded a high volume of account bans.
According to the report, 143,998 accounts were banned in South Africa alone for policy violations, 137,663 of them linked to users believed to be under the age of 13.
In recent years, TikTok has faced mounting pressure across Africa. Following a $92 million settlement in 2021 for alleged unauthorized data collection from minors, the platform has since bolstered privacy settings, particularly for younger audiences.
Egyptian authorities in August announced plans to tighten monitoring, emphasizing the need for content to align with local values.
Kenya, in turn, has taken a more measured approach, requiring TikTok to submit quarterly compliance reports.
Kenyan authorities stated that the action is part of a broader strategy to mitigate harm without imposing a ban on TikTok.
For its part, TikTok has ramped up its engagement across Africa, partnering with the African Union Commission’s Women, Gender, and Youth Directorate to increase awareness of digital safety among youth and families.
This collaboration aims to address online risks through locally relevant content and culturally tailored outreach.
In its latest bid to enhance accountability, TikTok has set up an African council of internet experts to advise on policies addressing hate speech and misinformation.
TikTok’s Approach to Content Moderation
Content moderation on TikTok is a structured process aimed at ensuring the platform remains a safe, welcoming space while upholding community standards.
TikTok enforces content moderation through a combination of artificial intelligence (AI) algorithms and human moderators who identify and remove content that violates its policies.
- TikTok uses advanced AI technology to detect harmful or violative content, such as violence, hate speech, and misinformation, as soon as it is uploaded.
- TikTok’s Community Guidelines outline clear policies on what content is not allowed, including content related to harassment, misinformation, illegal activities, hate speech, and child exploitation.
- While AI handles a large portion of content moderation, human moderators remain essential for nuanced, context-sensitive decisions, especially when content involves complex themes or cultural sensitivities. TikTok therefore employs moderators across regions so that local contexts and cultural norms are considered, reducing the likelihood of inappropriate or culturally insensitive removals (a rough sketch of this AI-plus-human routing appears after this list).
- TikTok has also implemented age restrictions, parental control features, and additional moderation for content likely to be viewed by younger users. In regions like Africa, TikTok has strengthened these controls to address concerns related to minors, including automatic removal of accounts suspected to belong to users under the age of 13.
- TikTok also tailors its moderation policies to specific regulations and cultural standards in various countries, which is especially relevant in regions like Africa, where countries may have unique standards on online behavior and privacy. Local partnerships, such as its collaboration with the African Union, also help it promote online safety and counter misinformation effectively.
- In addition, TikTok allows users to report content they find inappropriate, enhancing community-based moderation. Users can flag content for review, report accounts, and submit concerns through an accessible reporting system.
- TikTok publishes regular transparency reports detailing the number of videos removed, categorized by content type, country, and reason for removal (the second sketch after this list shows the kind of per-country aggregation such a report involves). This openness aims to build trust with users and regulators by showcasing its efforts and progress in creating a safer digital environment.
- As user-generated content and digital threats evolve, TikTok continually updates its technology and policies to address emerging risks. The platform has invested in better AI training, cross-industry partnerships, and policy reviews to keep moderation practices effective and aligned with global standards. Through this multi-layered approach, TikTok aims to protect user privacy, promote digital safety, and maintain community trust while adapting to different regions’ needs and cultural norms.
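As a rough illustration of how the layers above fit together, here is a minimal Python sketch of an AI-plus-human moderation flow. This is not TikTok’s actual system: the classifier scores, thresholds, queue, and all names are hypothetical stand-ins for the automated detection, human escalation, and user-report paths described in the list.

```python
from dataclasses import dataclass, field

# Hypothetical policy thresholds. Scores are assumed to come from an
# upstream ML classifier that rates how likely a video is to violate
# the Community Guidelines (0.0 = benign, 1.0 = clear violation).
AUTO_REMOVE_THRESHOLD = 0.98   # confident enough to remove proactively
HUMAN_REVIEW_THRESHOLD = 0.60  # ambiguous: route to a regional moderator

@dataclass
class Video:
    video_id: str
    country: str            # used to route to region-aware moderators
    risk_score: float       # output of the (assumed) AI classifier
    user_reports: int = 0   # community flags accumulated so far

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def enqueue(self, video: Video, reason: str) -> None:
        self.pending.append((video, reason))

def moderate(video: Video, queue: ReviewQueue) -> str:
    """Decide what happens to a newly uploaded or reported video."""
    if video.risk_score >= AUTO_REMOVE_THRESHOLD:
        return "removed_automatically"
    if video.risk_score >= HUMAN_REVIEW_THRESHOLD or video.user_reports > 0:
        # Context-sensitive cases go to human moderators in the
        # relevant region, as described in the list above.
        queue.enqueue(video, reason="ai_uncertain_or_user_reported")
        return "queued_for_human_review"
    return "allowed"

if __name__ == "__main__":
    queue = ReviewQueue()
    print(moderate(Video("v1", "NG", risk_score=0.99), queue))       # removed_automatically
    print(moderate(Video("v2", "EG", risk_score=0.70), queue))       # queued_for_human_review
    print(moderate(Video("v3", "ZA", 0.10, user_reports=3), queue))  # queued_for_human_review
    print(moderate(Video("v4", "KE", risk_score=0.05), queue))       # allowed
```

In practice, thresholds like these would be tuned per policy category and region; the point of the sketch is only that automation handles the clear-cut cases while humans handle the ambiguous ones.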
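Likewise, the transparency reporting described above is, at bottom, an aggregation over removal records. Here is a minimal sketch, assuming each removal is logged with a country and a policy reason; the field names and sample data are invented for illustration:

```python
from collections import Counter

# Invented removal-log entries of the form (country, reason). A real
# transparency report publishes such counts per quarter, per country,
# and per policy category.
removals = [
    ("EG", "integrity"), ("EG", "privacy"),
    ("NG", "advertising"), ("NG", "integrity"),
    ("ZA", "security"),
]

by_country = Counter(country for country, _ in removals)
by_reason = Counter(reason for _, reason in removals)

print(by_country)  # Counter({'EG': 2, 'NG': 2, 'ZA': 1})
print(by_reason)   # Counter({'integrity': 2, 'privacy': 1, ...})
```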