The Instagram logo displayed on a smartphone. SOPA Images | LightRocket | Getty Images
Instagram’s recommendation algorithms have connected and promoted accounts that facilitate and sell child sexual abuse content, according to an investigation published Wednesday.
Meta’s photo-sharing service stands out from other social media platforms and “appears to have a particularly serious problem” with accounts displaying self-generated child sexual abuse material, or SG-CSAM, Stanford researchers wrote in an accompanying study. These accounts purport to be run by minors themselves.
“Due to the widespread use of hashtags, the relatively long lifespan of sellers’ accounts, and especially its efficient recommendation algorithm, Instagram serves as a key discovery mechanism for this specific community of buyers and sellers,” according to the study, which was cited in the investigation by The Wall Street Journal alongside researchers at Stanford University’s Internet Observatory and the University of Massachusetts Amherst.
While the accounts can be found by any user searching for explicit hashtags, the researchers discovered that Instagram’s recommendation algorithms also promoted them to “users who view an account in the network, allowing discovery of the account without keyword searching.”
A Meta spokesperson said in a statement that the company is taking several steps to fix the issues and has “established an internal task force” to investigate and address the allegations.
“The exploitation of children is a terrible crime,” the spokesperson said. “We are working aggressively to combat it both on and off our platforms, and support law enforcement in their efforts to arrest and prosecute the criminals behind it.”
Alex Stamos, former chief security officer at Facebook and one of the paper’s authors, said in a tweet on Wednesday that the researchers focused on Instagram because its “position as the most popular platform for teens globally makes it an important part of this ecosystem.” But, he added, “Twitter continues to have serious problems with child exploitation.”
Stamos, who is now director of the Stanford Internet Observatory, said the problem persisted after Elon Musk took over Twitter late last year.
“What we found was that Twitter’s basic check for known CSAM broke after Mr. Musk’s takeover and was not fixed until we notified them,” Stamos wrote.
“They then cut off our access to the API,” he added, referring to the programming interface that gives researchers access to Twitter data for their studies.
Earlier this year, NBC News reported that multiple Twitter accounts offering or selling CSAM remained available for months, even after Musk vowed to tackle the social messaging service’s child exploitation problem.
Twitter did not provide comment for this story.