Any online ‘kidfluencer’ content or images of children can be sexualised, as a Four Corners report shows. So what can be done?
- Written by Catherine Jane Archer, Senior Lecturer, Communication, Edith Cowan University
The latest episode[1] of the ABC’s flagship investigative program, Four Corners[2], makes for grim viewing.
It reveals cases of men making inappropriate and sexual comments[3] on online images of children, often posted on accounts run by their parents. In some cases, the posts were made on “kidfluencer” platforms such as BrandArmy, which in the past allowed parents to run “junior creator” accounts to monetise their child’s online presence.
BrandArmy, which says[4] it stopped accepting new “junior creators” late last year, lists on its site the rules[5] parents running these accounts must follow. These include bans on “sexually suggestive content or language”, twerking videos, certain emojis and “rear (buttocks) imagery or videos”. Bikinis are allowed, under certain circumstances.
Four Corners reported on BrandArmy accounts it said were from girls under 18 that appeared to offer bikini pictures to paid subscribers.
When Four Corners sent questions to the company, BrandArmy removed some of the material.
The program also reported on men making sexual comments on photos posted on Instagram accounts depicting children — “usually young girls into dancing, modelling and gymnastics”. Four Corners reported “kidfluencer” content was also being shared in sexually explicit ways on encrypted chat channels.
Meta, the parent company of Instagram, told The Conversation that Instagram requires everyone to be at least 13 years old to create an account, saying:
Accounts representing someone under 13 must be actively managed by a parent or manager, who is responsible for their content and can control who is able to message and comment on the account’s posts. We’ve developed a range of features that help people protect themselves from unwanted contact, including Hidden Words, which lets you filter comments and messages that contain certain phrases, as well as blocking and reporting. On top of that, we recently updated our policy so that accounts primarily posting content focused on children aren’t eligible to use our monetisation tools to receive payments from other Instagram users.
Meta also has rules[6] against child exploitation, including sexualisation, and told Four Corners it takes action whenever it becomes aware of such material.
The latest revelations are shocking but perhaps not surprising. Here’s what could be done at the policy level, and what parents need to know.
A broad spectrum of ‘kidfluencer’ content
I have been researching[7] “kidfluencer” content and families’ use of social media for more than a decade. It’s worth noting not all online kid-themed content is the same.
At one end of the spectrum are parents posting fairly innocuous kid content – such as a child playing or reaching a milestone – without seeking financial gain. We know many everyday internet users post images of their kids on social media to get reassurance, connection and reaction from friends and family.
Then there are parents who gain financially from their content, such as parent-bloggers or online personalities whose children become an extension of the parent’s brand. Children are naturally appealing and are a reliable way to get more “likes” and engagement online. Sponsored content and brand deals often follow.
Other forms of monetised kid content include unboxing videos or toy reviews, broadcast via subscription channels on platforms such as YouTube. Brands have cottoned on to the fact that children are appealing and move product. This has become enmeshed in the world of “kidfluencer” content, which markets products to children (many of whom may not even be aware they are watching an ad). But there is, of course, nothing to prevent adults viewing this kind of content with a sexualised gaze.
Another category is parents who post images they see as innocent, but which can all too easily attract a sexualised gaze – this includes images of children in the bath, in swimwear or tight clothing, or doing dance or gymnastics routines. One thing the Four Corners report makes clear is that you cannot always be sure where your child’s image will end up or how it will be used.
Then there are those accounts which, according to the Four Corners report, appeared to be openly advertising underage content such as bikini pictures to paid subscribers.
You can’t tar all kid content with the same brush, but the element that ties all these together is that children are centred in the content.
History is littered with cases of people sexualising children; it’s just that now we have all these new technologies to facilitate it so easily.
What could governments and platforms do?
At a policy level, there are several things platforms and governments could consider.
One is looking at whether “kidfluencers” need to be protected by child labour laws. This was an issue Hollywood grappled with in the 1930s with regard to child actors. It led to laws requiring a child actor’s employer to set aside a portion of the earnings in a trust (often called a Coogan account), to ensure the child’s parent didn’t take it all.
There’s also an opportunity for platforms to take more responsibility to monitor content and comments more tightly, especially in cases where parents are posting “kidfluencer” content. It shouldn’t be up to the media to find and highlight the kinds of examples seen in the Four Corners episode.
Bringing in new laws to regulate platforms is not easy, given they are international entities. South Australia is considering[8] banning children under 14 from having social media accounts, but is yet to explain how this law would be designed or enforced, and how it would affect “kidfluencer” accounts run by parents.
Australian lawmakers may also look more closely at what European countries[9] have done with regard to a child’s “right to be forgotten[10]” (meaning their right to request deletion of personal data shared when they were a child).
And as the Four Corners report notes, the federal government could consider expanding the act that governs what kinds of content the eSafety Commissioner can order to be removed.
What can parents do?
In research I am conducting with colleagues in the Digital Child[11] Australian Research Centre of Excellence, we are talking with parents about their plans for their children’s digital engagement.
For all parents, it would be useful to consider the following questions: what pictures of your child, if any, would you share online? What kinds of images would you never share? What kinds of privacy settings would you have on any images you do decide to share?
How well do you understand that you cannot really control where your child’s image ends up? What is your family’s plan to teach your child about their own internet use? How will your plans change over time?
Unfortunately, a sexualised gaze can fall on any image of a child online, no matter how innocently it was shared. But there is room for all of us – whether that’s parents, policymakers or platforms – to do more to protect children online.
References
- ^ episode (www.abc.net.au)
- ^ Four Corners (www.youtube.com)
- ^ men making inappropriate and sexual comments (www.abc.net.au)
- ^ says (brandarmy.com)
- ^ the rules (brandarmy.com)
- ^ rules (transparency.meta.com)
- ^ researching (theconversation.com)
- ^ considering (www.abc.net.au)
- ^ countries (www.lexology.com)
- ^ right to be forgotten (ico.org.uk)
- ^ Digital Child (digitalchild.org.au)