The last week has seen a growing list of large advertisers publicly state that they will cease all Facebook advertising as part of the #StopHateForProfit campaign. The goal is to stop the platform being used to spread and amplify racism and hate. Mars, Coca-Cola, Ben & Jerry’s, The North Face and Patagonia, to name a few, have already joined the cause, with more committing daily.
To be clear – this is not specifically an ‘ad’ issue, but one focused on the environment ads are seen in and the associations that environment can create. The newsfeed – the core of social platforms – is made up of user-generated content, unique to each user and shaped by the connections, interactions and engagement they have shown on the platform. It is not what many people once thought: a simple list of immediate friends’ status updates.
Facebook’s controversies around data have been well documented, and several measures have been introduced to reassure advertisers and users alike. But content sits closer to the core of the platform’s purpose, and it is the reason almost 50% of the UK adult population accesses the platform at least once per day.
Can this be moderated? And when does moderation become censorship? Facebook has set up the Oversight Board with the mandate: “The Board will take final and binding decisions on whether specific content should be allowed or removed from Facebook and Instagram”. The board is external to Facebook and made up primarily of high-profile individuals deeply committed to freedom of speech and human rights. The full board can be seen here. In addition, Ofcom has been given the power by the UK government to impose a “duty of care” covering content that is harmful but not illegal.
All Response Media viewpoint
“Facebook is arguably now the world’s largest content platform, spanning media, news, information and misinformation. It hosts more content publishers than any platform before it, as essentially every single user can be one.” The platform is easier than ever to access, and anyone can publish and distribute information without question. Newsfeed algorithms surface new content constantly and to a mass audience, yet their engagement-driven selection leaves little variation in what each of us sees.
Regulation has existed and will continue to, but can it act fast enough? Content has been removed from Facebook – in May 2020 alone, 10 million posts were taken down for violating its hate speech policy – but what we don’t know is how many people saw those posts before removal, or what influence they had.
This isn’t a first. Programmatic display and video have run up against brand safety issues before, and Google (across the Google Display Network and YouTube) has invested heavily in tech solutions. Brand safety and protection is something All Response Media has always maintained the highest standards for, utilising MOAT across programmatic, pre-bid filtering and content verification. Mitigation steps can be taken: regular reviews of platforms and placements, scrutiny of the Audience Network, and a clearly defined targeting strategy that limits vague audiences and expansion tactics (such as detailed targeting expansion) where ads could stray.
However, Facebook remains a semi-closed ecosystem, so the responsibility for ensuring a ‘safe’ environment – and for bringing a solution to market – rests firmly with Facebook itself.
Topline Facebook stats in the UK right now (YouGov data):
• 44.8m UK adults on Facebook – around 80% penetration
• Over 60% of users access the platform more than once a day
• 60% of users do so to keep in touch with friends or family
For more information on the digital services we offer, click here.