Accounts found by the researchers are advertised using blatant and explicit hashtags like #pedowhore, #preteensex, and #pedobait. They offer "menus" of content for users to buy or commission, including videos and imagery of self-harm and bestiality.
When researchers set up a test account and viewed content shared by these networks, Instagram took them down a rabbit hole of more accounts.
The WSJ reports: "Following just a handful of these recommendations was enough to flood a test account with content that sexualises children."
In addition to problems with Instagram's recommendation algorithms, the investigation also found that the site's moderation practices frequently ignored or rejected reports of child abuse material.
The WSJ cited incidents in which users reported posts and accounts containing suspect content (including one account that advertised underage abuse material with the caption "this teen is ready for you pervs"), only for the content to be cleared by Instagram's review team or dismissed with an automated message.
The report also looked at other platforms but found them less conducive to the growth of such networks.
The Stanford investigators found "128 accounts offering to sell child-sex-abuse material on Twitter, less than a third the number they found on Instagram" despite Twitter having far fewer users, and that such content "does not appear to proliferate" on TikTok.
The report noted that Snapchat did not actively promote such networks as it's mainly used for direct messaging.
In response to the report, Meta said it was setting up an internal task force to address the issues raised by the investigation. "Child exploitation is a horrific crime," the company said.
Meta said that in January alone, it took down 490,000 accounts that violated its child safety policies and, over the last two years, has removed 27 paedophile networks. The company, which owns Facebook and WhatsApp, said it had blocked thousands of hashtags associated with the sexualisation of children and restricted these terms from user searches.