For as long as playing ball is optional, horrific violence will remain on social media



Opinion


As the horrific violence of last week spilled online and worked its way through too many people’s social media feeds, attention has turned towards the need to refine our online safety legislation.

The nation’s eSafety Commissioner is still negotiating with X and Meta to have graphic images from Bondi Junction and Wakeley pulled from the internet. Meta is said to be co-operating, but X owner Elon Musk called the commissioner a “censorship commissar” after she issued a take-down order on violent content relating to the two attacks, and X labelled the order “unlawful and dangerous” in a statement, saying the content does not breach its user guidelines.

X owner Elon Musk. Credit: Reuters

Beyond the violent footage of the respective Sydney stabbings, X is awash with pornography, Facebook is filled with AI “trash”, and TikTok is repeatedly found to “algorithmically supercharge” anxiety. It feels like social media is deteriorating rapidly. But this content is a symptom of a broader problem.

Social media is getting worse in large part because companies are stripping cautionary investments and shifting resources away from user safety and protections. Without serious, legally enforceable incentives, the trend toward safety minimalism will only continue, and disturbing footage like that we’ve seen this week will keep circulating.

The eSafety Commissioner’s focus on content take-downs is like catnip for free-speech champions like Musk. But any critique he or like-minded users may have of these requests would be better directed at the Online Safety Act and the government’s proposed misinformation bill, which was shelved last year but now looks to be revived.


Both instruments are positive steps forward, but not enough to confront the issue at its root. The act and the bill share elements of an increasingly outdated approach to digital platform regulation, where well-meaning policymakers have carried across principles from traditional broadcasting to digital media distribution that cannot scale, burden the wrong players, and may inadvertently stoke institutional mistrust.

As it currently stands, tech accountability amounts to regulators tailing global multinationals and issuing letters or threats of hefty fines once the harm has already happened. But it can be so much more than this.

Social media companies have deep knowledge of how their platforms work and access to real-time, granular data on operating conditions. And yet, despite this information asymmetry and capability gap between the tech giants and the government, the industry still enjoys self-regulation through the Code of Practice on Disinformation and Misinformation, an industry-crafted voluntary code.


What’s more, signatories’ compliance with the code is demonstrably variable, breaches of obligations just mean a relaxation of duties, and the annual reporting process appears to accept misleading statements so long as they are not materially false. Expecting anything more on a voluntary basis from powerful global businesses with only a passing interest in middle markets like Australia would be naive.

In the first version of the misinformation bill, the government’s plan was to backstop the industry code with “co-regulation”, meaning the Australian Communications and Media Authority would be empowered to ask signatories questions and request information, with penalties for non-compliance. This framing repeats the implementation tensions of the Online Safety Act: regulators run ragged pursuing platforms largely after the fact.


With the Online Safety Act up for review, there are signs of a more fit-for-purpose systemic approach – placing positive safety obligations onto platforms, such as an overarching duty of care, and compelling platforms to identify risks proactively and demonstrate how they address them.

The same needs to be done for the misinformation bill – the fixation on “downstream” content problems is both the wrong target and a fraught message. The government needs to reframe its counter-misinformation efforts as a pursuit of corporate accountability rather than a content regulation attempt with a digital bolt-on. Chasing user-generated content simply cannot scale for the business models of the digital world, and it’s an unavoidably perilous political project.

The only winners in a race to the bottom on user safety are Silicon Valley investors. Tech companies, through their acts and omissions, and industry whistleblowers, through their testimonies, have persuasively demonstrated that social licence and good corporate citizenship are simply not high-ranking priorities for social media giants.

If we can agree the problem is the negligence and impunity of social media companies, then the solution needs to shift the burden of responsibility from us as users back to the businesses themselves. The way to do that is the same as we’ve done for any other bullish, multinational harm-producers: positive duties and obligations, prescriptive reporting requirements, auditable information, and a meaty enforcement model that makes misconduct an unacceptably high cost of doing business.

Alice Dawkins is the executive director of Reset.Tech Australia, a tech accountability organisation.

