Breaking News
Meta Cracks Down on Predatory Behavior, Removes 6 Lakh Accounts Across Instagram and Facebook
2025-07-25
Meta is also sharing this data with other platforms through the Tech Coalition’s Lantern program to curb cross-platform abuse.
To enhance online child safety, Meta has announced the removal of more than 600,000 accounts from Instagram and Facebook. This mass takedown is part of Meta's ongoing efforts to combat predatory and exploitative behavior targeting minors on its social media platforms.
According to the company, over 135,000 accounts were flagged and removed for posting sexualized comments or soliciting explicit content, with many of these accounts specifically targeting children or child-managed profiles. An additional 500,000 accounts were taken down for inappropriate interactions, including associations with previously flagged users or content linked to child exploitation.
Meta’s latest enforcement action is a crucial part of its broader strategy to maintain digital safety for minors and build a more secure online environment. The company is now working with other major tech platforms through the Tech Coalition’s Lantern Program, which facilitates cross-platform data sharing to help prevent child abuse content from spreading across the digital ecosystem.
This proactive approach strengthens Meta’s stance on child protection, digital well-being, and responsible content moderation. It also aligns with growing global regulatory pressure to tackle online abuse, cybercrime, and harmful content on social media.
Meta has been investing heavily in AI-powered content moderation tools, human review teams, and reporting mechanisms to swiftly identify and take action against abusive behavior. These measures not only aim to protect vulnerable users but also support broader goals of digital trust, user safety, and platform accountability.
As Meta continues to face scrutiny over platform safety, this latest move underscores the tech giant’s commitment to safe social media experiences, especially for younger audiences. With increasing adoption of digital platforms by children and teens, such enforcement actions are vital in shaping a safer online future.