A record 29.3 million child abuse images were found and removed from the internet in 2021, according to data from the US nonprofit responsible for coordinating reporting on the issue.
The figure released by the National Center for Missing and Exploited Children represents a 35% increase from 2020.
The center said the increase in reporting was not necessarily alarming and could reflect improved detection by the platforms. “A higher number of reports can indicate a variety of things, including more users on a platform or how robust an ESP’s [electronic service provider’s] efforts are to identify and remove abusive content,” it said.
“NCMEC applauds ESPs that make identifying and reporting this content a priority, and encourages all companies to increase their reporting to NCMEC. These reports are essential to help remove children from harmful situations and to prevent further victimization.”
The overwhelming majority of reports made to NCMEC came from Facebook, which alone reported 22 million child abuse images. For the first time, the data was broken down across owner Meta’s other products, revealing that Instagram made 3.3 million reports and WhatsApp 1.3 million.
Google made 875,783 reports and Snap 512,522. Adult social network OnlyFans was represented on the list for the first time, with owner Fenix International making 2,984 reports in 2021.
Some companies stood out for their small footprint. Apple, despite running a messaging platform and a photo-sharing service, found and reported only 160 child abuse images during the period.
Andy Burrows, the NSPCC’s child safety online policy manager, said: “The record number of child abuse reports received by NCMEC last year is another reminder of the scale of abuse currently taking place online and the risks that children continue to be exposed to when using social media.
“With the Online Safety Bill beginning its journey through Parliament, it is imperative that politicians seize this opportunity to forge the strongest possible legislation that will protect children from preventable harm and stifle the grooming and sharing of child sexual abuse images.”
The report highlights the complexity of the discussions surrounding the prevention of harm to children online. End-to-end encryption, which prevents platforms from reading the content of messages between their users, has come under attack from the government on the grounds that it hampers efforts to tackle child abuse.
But the data tells two stories on the subject. Comparing reports from WhatsApp and Facebook, which have similar numbers of users, suggests that the technology can indeed hide millions of instances of abuse; while comparing reports from WhatsApp and Apple, both of which offer end-to-end encrypted messaging services, shows just how much the companies can do to eliminate abuse even within those limits.
Antigone Davis, global head of safety at Meta, said: “We report the most content because we work the hardest to find and remove it. It’s part of our long-standing commitment to protecting children online, but we can’t do it alone. It is time for other players in the industry to invest more so that we can work together to prevent the spread of this hateful content. We have made detection technology available to all technology companies because it will require investment from everyone in our industry to prevent this harm.”
Snap declined to comment. OnlyFans and Apple did not respond to requests for comment.