As a relatively young, discourse-based platform, Twitter’s charms lie in the brevity and ephemerality of its content, and in its ability to host millions of content creators who can interact with each other or simply scream into a void. Unlike other platforms of its kind, Twitter encourages a network held together by common interests and communal identity. Hashtags and trending topics act as organising principles for conversation, and attention moves quickly – the Twitter feed of today is not the feed of tomorrow.
In comparison, Facebook (the older social media network with a larger user base) is organised around a user’s central timeline – a chronology of personal achievements and life events, ordered by the images, events and relationships that build a person’s life. Discourse on Facebook typically occurs between networked individuals who have some form of connection offline as well as online – you are more likely to argue with an aunt, or an estranged school friend, than a stranger.
As with all social spaces, both platforms come with implicit and explicit social rules that order their operation. In contrast to a traditional publisher, however, governance of these spaces is not a matter of external law. Rather, it falls to the platforms themselves, for which we have Section 230 of the United States’ 1996 Communications Decency Act to thank. Section 230 allows online platforms to host conversations without becoming liable for their content. It’s a move that frames content producers – i.e. users – as publishers in their own right, making them legally distinct from the platform and able (theoretically) to retain ownership over their intellectual property. It is, according to the nonprofit Electronic Frontier Foundation, “one of the most valuable tools for protecting freedom of expression and innovation on the Internet”.
Section 230 allows the internet as we know it to exist, insofar as it lets websites host user-generated content without any of the legal implications faced by print publishers. In the case of criminal content, such as child pornography, Section 230 shields a website or app as long as it is unaware of crimes being committed on its platform. While platforms are legally obliged to report such content if discovered, they are not required to search it out. According to Section 230(c)(2), the act’s protection extends to all online providers that act in “good faith” to restrict access to “obscene, lewd, lascivious, filthy, excessively violent, harassing or otherwise objectionable” content. It is the legal framework around which social media platforms have been designed – without it, they would be heavily restricted and censored, erring on the side of caution in avoiding liability. If every tweet opened Twitter up to the possibility of a libel lawsuit, you’d find Twitter censoring a lot of tweets.
Today, however, the freedom of social media platforms to govern their own spaces has become increasingly contentious, particularly given that the shape of the internet is vastly different now than it was 30 or so years ago. Section 230 was enacted to protect small start-ups from costly liability lawsuits – what Ron Wyden, one of the co-authors of the law, called a “sword and shield” in a 2019 interview with Vox. “[The] shield is for the little guys, so they don’t get killed in the crib,” said Wyden, “and the sword would give platforms the opportunity to take down things like opioid ads while providing protections for the good actors.” But the law did not anticipate global social media, nor its being dominated by a handful of monopolistic companies. These giants not only have unprecedented power over our media landscape, but are also authorised by Section 230 to operate almost entirely according to their own internal rules. The blanket protections that the law provides further entrench their power by making them accountable to no-one but themselves.
The platforms seek to justify this status quo by maintaining that they are politically neutral. In September 2017, Facebook CEO Mark Zuckerberg reiterated this stance in a post on his platform, stating that “Trump says Facebook is against him, liberals say we helped Trump. Both sides are upset about ideas and content they don’t like. That’s what running a platform for all ideas looks like.” Meanwhile, Section 230 ensures that they retain immunity from the consequences of making user-generated content available, even when those consequences may have significant political implications. In 2018, for instance, it was revealed that the political consulting firm Cambridge Analytica had used Facebook user data to influence national elections through propaganda hosted on the site. “We just put information into the bloodstream of the internet and then watch it grow, give it a little push every now and again over time to watch it take shape,” Alexander Nix, the company’s CEO, was later secretly recorded boasting. “And so this stuff infiltrates the online community, but with no branding, so it’s unattributable, untrackable.”
While platforms such as Facebook and Twitter may attempt to adopt a politically neutral standpoint, there is nothing in the law that actually mandates this. Sites are not required to carry user-generated content: they aren’t legally obliged to host both sides of a political debate, nor are they either required or forbidden to highlight misinformation. In his Vox interview, Wyden confirmed that Section 230 is not about political neutrality, but rather “all about letting private companies make their own decisions to leave up some content and take other content down.” This was intended to ensure that platforms appear liberal or conservative not from external input, “but through the marketplace, citizens making choices, people choosing to invest. This is not about neutrality. It’s never been about the republisher.”
Political neutrality may not be legally required, but it is likely the most profitable avenue for social media platforms, allowing them to retain the broadest user base for data gathering and advertising – a model that Senator Elizabeth Warren has labelled a “disinformation-for-profit machine”. “Big tech companies cannot continue to hide behind free speech while profiting off of hate speech and disinformation campaigns,” Warren told Vox in 2019, and her concerns over the current system are not atypical – users from both sides of the political spectrum are increasingly unsatisfied, with both right-leaning and left-wing users complaining that social media spaces exhibit bias towards their political opponents. Nevertheless, the platforms persist, with Vijaya Gadde, Twitter’s head of trust and safety, telling Vice’s Motherboard that Twitter’s “fundamental mission” is to “serve the public conversation”: “[In] order to really be able to do that, we need to permit as many people in the world as possible for engaging on a public platform, and it means that we need to be open to as many viewpoints as possible.”
The situation came to a head on 26 May 2020, when President Trump alleged, without evidence, that postal votes enable electoral corruption: “mail boxes will be robbed, ballots will be forged & even illegally printed out & fraudulently signed.” Twitter flagged Trump’s tweet as containing misinformation, displaying a link beneath the tweet that it highlighted with an exclamation mark: “Get the facts about mail-in ballots”. Clicking the link takes the user to a Twitter-curated page of fact-checking articles from CNN and the Washington Post, and a series of tweets from journalists debunking Trump’s claim. The choice to flag Trump’s tweet is in line with Twitter’s new “misleading information” policy, updated on 11 May this year, which states that flagging a tweet can “provide additional explanations or clarifications in situations where the risks of harm associated with a Tweet are less severe but where people may still be confused or misled by the content”. Notably, the tweet itself was not removed – it only had additional context added below it.
In a remarkably petty response to what he perceived as censorship, Trump threatened to gut Section 230 with an executive order that would allow federal agencies to challenge tech giants over their moderation standards. Trump followed this executive order with a tweet that made clear the personal nature of his feud with Twitter, and his belief that the platform unfairly targets conservative users. “Twitter is doing nothing about all of the lies & propaganda being put out by China or the Radical Left Democrat Party,” he wrote. “They have targeted Republicans, Conservatives & the President of the United States. Section 230 should be revoked by Congress. Until then, it will be regulated!” Free speech laws, as they exist in the US and which govern American companies such as Facebook and Twitter, protect individuals’ rights to hold opinions, and to receive and impart information free from government censorship and retribution. Freedom of expression is not, however, the same as freedom from consequence and, crucially, Twitter is not the government.
The irony of the leader of the American government – with his unparalleled visibility, power and privilege – decrying the curtailment of his freedom of speech is overwhelming. Freedom of speech does not give carte blanche to hate speech, or libel, or misinformation. Concerns surrounding censorship and challenges to free speech have also come from liberal-leaning quarters, however. On 7 July, a now notorious open letter was posted online by Harper’s, signed by 150 writers and academics, condemning an “intolerant climate” for free speech. The letter critiques the “restriction of debate, whether by a repressive government or an intolerant society” as though those are equally horrifying ends of a binary; as though being dismissed on social media holds the same weight or destructive implications for a livelihood as being repressed by a government.
In its equation of disparate phenomena, the Harper’s letter claims that “censoriousness [...] [is] spreading more widely in our culture: an intolerance of opposing views, a vogue for public shaming and ostracism, and the tendency to dissolve complex policy issues in a blinding moral certainty.” The vague examples of censorship that the letter actually alludes to, however, dismiss the possibility of any consequences to (or valid criticism of) viewpoints, swallowing these up under the need to resist the nebulous threat of “cancel culture”. The letter suggests that it is “now all too common to hear calls for swift and severe retribution in response to perceived transgressions of speech and thought”, although no concrete examples of these transgressions are given – we are simply assured that the retribution is “hasty” and “disproportionate” to casual misdeeds such as “running controversial pieces”, making “clumsy mistakes”, or even just “quoting works of literature in class”.
This inflammatory framing of overblown punishments for innocuous behaviours is in itself manipulative and, again, comes with an irony – the signatories represent some of the world’s most prolific authors and academics, many of whom have long enjoyed privileged access to traditional publishing platforms. The tides of public opinion may be damaging to the career of the everyman, but their impact on the established elite is likely to be negligible. Trump continues to speak and enjoys a global audience whether he is fact-checked or not. As does Harry Potter author J.K. Rowling, one of the Harper’s signatories, who presumably felt compelled to sign the letter following the backlash she has experienced after repeatedly muddying our collective childhood memories with ill-informed, dangerous views about the transgender community, and who, regrettably, will not stop speaking – no matter how often her fans beg her to. There seems little self-awareness from these powerful figures campaigning against the restriction of their already bloated influence – if there are groups who suffer reduced visibility under censure, by platforms or the general public, it isn’t them. The paradox stings.
The platforms themselves do, of course, play active roles in shaping and generating conversation, but this is largely thanks to their advertising-based algorithms and users’ personal networks shaping these spaces differently for each user – not over-zealous and politicised moderation policies. I, for instance, exist, work and play in Black Twitter, but my neighbour might only be fleetingly aware of the side of the platform that I live on. Regardless, Twitter and Facebook position themselves slightly differently when it comes to moderating speech, but their policies have a similar general impact, and both use interstitial warnings and the removal of posts as their main moderation tools. As the larger and wealthier company, Facebook invests more financially into moderation, but does not publish the detailed rules its moderators use to determine what to allow or delete. There is therefore limited transparency as to what is permissible on Facebook, and a high chance of having your content removed without explanation, or made less visible to followers via a process of “shadow-banning” – a term used to refer to the obscuring of a user’s content without their knowledge.
In 2018, Facebook finally published a list of moderation guidelines to show something of how its moderators decide whether to remove violence, spam, harassment, intellectual property theft and hate speech from the site – part of what Zuckerberg termed Facebook’s effort “to develop a more democratic and independent system for determining Facebook’s Community Standards”. These guidelines continue to change over time, as Facebook adapts to workarounds implemented by users to evade moderators, and also as the company works to improve poorly received policies that have left certain groups unprotected. The notorious “protected categories” blunder, exposed in June 2017 by ProPublica, meant that attacks against protected categories – those based on race, sex, gender identity, religious affiliation, national origin, ethnicity, sexual orientation and serious disability/disease – were monitored, but that this protection collapsed when it came to “subsets” of protected categories. It was a lapse that saw “white men” considered a protected group, but which did not extend this moderation to “Black children” because “children” as a subset are not considered a protected category. An algorithm that doesn’t take into account the racialised hierarchies and systems of oppression that leave Black children significantly more vulnerable to attack than white men is worthless, and particularly distasteful in light of Facebook’s reliance on small armies of underpaid content moderators in the Global South.
The subsequent modification of the rules to resolve this issue was an encouraging sign. Following the publication of the guidelines, Monika Bickert, Facebook’s vice president of global policy management, told reporters that their public availability was a potential risk, given concerns that hate groups would be able to devise workarounds to Facebook policy, but that “the benefits of being more open about what’s happening behind the scenes outweighs that.” Bickert added that the category of “Black children – that would be protected. White men – that would also be protected. We consider it an attack if it’s against a person, but you can criticize an organization, a religion”. This reformulation of aspects of a problematic policy shows how critique can be levelled at platforms, and how the platforms can, in turn, go on to provide transparency and improved moderation.
In contrast, Twitter takes a more hands-off approach to content moderation. On its ‘Safety and Security’ page, users are reminded that people are “allowed to post potentially inflammatory content, as long as they’re not violating the Twitter Rules”. Twitter relies on both automated moderation and a global content moderation team, stating that it will “not screen content”, “remove potentially offensive content”, or “intervene in disputes between users”. Instead, users are encouraged to do their own work to screen their feeds by blocking and ignoring undesirable users, and reporting offensive tweets for Twitter’s moderation team to deal with. Rather than being removed, offensive or misleading tweets are placed behind interstitial warnings in the name of transparency. “There are times when we could simply disappear something. We don’t do that,” said Gadde in an interview with Motherboard in October 2019. “We downgrade things and we put them behind interstitials and we’re very clear when we’ve done that,” she went on to say, “and the reason for that is because our platform is meant to be transparent. We need people to trust that it operates in a certain way.”
This is patently false – some tweets are “disappeared”, and the justifications for what remains visible behind warnings and what is deleted are inconsistent. In an earnings letter to shareholders published in October 2019, Twitter declared its commitment to “proactively reduce abuse on Twitter” by cracking down on abusive tweets, stating that more than 50 per cent of tweets identified as violating Twitter rules are now flagged and removed by automated moderation tools before they are reported by users. The issue, however, is the ongoing lack of transparency about what content is deleted, or placed behind warnings, or allowed to remain. Multiple users have complained about inconsistencies in Twitter’s moderation.
Most recently, on 2 October, Twitter told Motherboard that users are not allowed to openly hope for Trump’s death following the announcement that he had contracted Covid-19, and that tweets that do so “will have to be removed” with the corresponding accounts put into a “read only” mode. Twitter referred to an “abusive behavior” rule from April 2020, arguing that it would “not tolerate content that wishes, hopes or expresses a desire for death, serious bodily harm or fatal disease against an individual or group of people”. This statement was received with incredulity and derision by many marginalised users, who are regular victims of death threats with no such action taken by Twitter. As the actor Mara Wilson tweeted: “Do you know how many people on here are constantly calling for genocide against Jews or Muslims or Black people or LGBTQ people”. These inconsistencies surrounding what harmful content is removed, placed behind warnings, or simply ignored, challenge Twitter’s own vision of itself as a platform that is transparent with its moderation policy.
Twitter’s historical ambivalence towards conflict on the site, and the desire to create a platform where all opinions (however offensive) can be shared, has led to an environment where hate speech flourishes, especially among white supremacists. Unlike Facebook, Twitter has not banned accounts related to the ideology – figureheads of white supremacy, such as Richard Spencer, retain accounts – and has instead adopted a position of “researching” how white supremacists use the platform to see whether leaving their accounts online may be a necessary part of a deradicalisation process. In her Motherboard interview, Gadde said that Twitter believes “counter-speech and conversation are a force for good, and they can act as a basis for de-radicalization, and we’ve seen that happen on other platforms, anecdotally.” This decision, of course, comes at the expense of users of colour – the targets of white supremacists who have to share space with them at the cost of their own wellbeing.
For some users, the protections provided by Facebook and Twitter are not encompassing enough, while for others the moderation makes them feel as though their content and ability to express ideas is being repressed. What ought to be incontestable, however, is that Facebook’s policy of colour-blindness, which defends protected characteristics equally – i.e. treating disparaging comments about white men with the same severity as those about Black children – ignores how racialised hierarchies affect groups differently in the offline world, and leaves some groups more vulnerable than others.
Platforms already have a hard time making sense of the differing needs and experiences of the demographics using their spaces, and algorithmic bias further complicates the issue. Although Twitter and Facebook have provided some insight into their moderation policies, tech companies are under no legal obligation to reveal how they moderate content. This makes it near impossible for users to meaningfully critique and improve practices that adversely impact them. For example, many Instagram users are aware of how algorithms may limit the reach of their posts, but not how, or why. In August 2020, journalist Paula Akpan investigated the impacts of shadow-banning, and the consequences it has for minoritised users specifically. Shadow-banning affects content Instagram identifies as “borderline”, but not in direct violation of its community guidelines. What qualifies as borderline is unclear, but in practice, Akpan found that minority users suffer shadow-banning disproportionately. Similarly, in October 2019 the digital newsletter Salty interviewed its userbase and found that of the 118 participants surveyed, “many of the respondents identified as LGBTQIA+, people of colour, plus sized, and sex workers or educators.” These users “experienced friction with the platform in some form, such [as] content taken down, disabled profiles or pages, and/or rejected advertisements.” Fat users are more likely to have content that shows skin labelled as inappropriate nudity; Black bodies are flagged as sexually suggestive content more often than white ones; and, famously, men’s nipples are allowed to proliferate where women’s fear to tread. Akpan highlighted the racial bias that allows for some unfair practices to flourish: “despite attempting to make ‘neutral’ systems, programmers and technical designers do not exist in a vacuum. Rather, they live and work in a society that has always understood the bodies of Black people through constructs established under colonialism, including the idea that Black people are morally lax and present themselves as sexually aggressive.”
In cases like this, where moderating practices and algorithmic biases are clearly discriminatory towards marginalised groups, colour-blind approaches to moderating social media spaces are patently inadequate. Were the algorithms and rules of moderation used by Facebook and Twitter made visible to marginalised users, perhaps we would be able to do a better job of making the spaces safer for ourselves, rather than relying on the powerful to do it (badly) for us. Currently, the most vulnerable simply try to protect one another, as we have always done. Facebook’s closed groups and community pages provide spaces for connective networks to form around shared experience, and participate in community-based rules, practices and moderation which keep these groups safe. On Facebook, Black activist pages moderate language to evade wider surveillance by the platform, creatively avoiding talking directly about white people and white supremacy, for instance, by changing spellings slightly: “white” becomes “whyte”, or “yt”, or “w*hite”. This self-moderation allows groups to avoid scrutiny, both from potential aggressors, and from platform moderation that still doesn’t accommodate the nuances of hierarchical oppression.
Similarly, on Twitter, Black users are deliberately reducing their visibility, and thus their vulnerability to surveillance and hate speech, by utilising creative ways of speaking about race without using direct language. Users are moving away from the hashtag as an organising principle of discussion, after relentless poaching of our ideas, language and cultural production. While Section 230 affirms that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider,” in practice Black content is rarely attributed to Black creators: tweets are still scraped without consent by journalists surveilling Black Twitter; trends led by Black users are appropriated by brands without due credit; and everyone uses speech patterns derived from African-American Vernacular English (AAVE) while denying its origins with Black people. Examples of this appropriation regularly dominate our media, where hashtags created and popularised by Black Twitter users become news fodder, as with #BlackLivesMatter, or even the popularisation of terms we use to describe our experiences like “misogynoir” – a word that describes the intersecting oppressions of racism and sexism that Black women experience, but which is rarely attributed to its progenitor, the scholar Moya Bailey. As Rachelle Hampton notes for Pacific Standard, “The same formula usually follows: The hashtag will trend, media organizations will compile a list of tweets (typically without consulting the user), and publish a piece.” The lack of consent, or of opportunity to benefit from our own experiences and creativity, is the violence here – Section 230’s mandate on treating the user as publisher of their own content falls short of protecting Black users’ intellectual property.
In order to bypass the need for the hashtag and evade this surveillance, Black users are instead constructing culturally connected networks. Circles of influential, networked Black users can spark a discussion that spreads through their predominantly Black followers, or live-tweet television shows as a community so that topics trend at specific times. If you somehow miss a topic trending on Black Twitter, you probably have a private group chat of other Black users who can bring you up to speed. These dense, homophilic networks ensure that, whether a topic is hashtagged or not, it will invariably come to the attention of its intended audience, and have the added benefit of limiting discourse to its community, thereby reducing the possibility of surveillance. The inclination towards linguistic self-censorship and the careful pruning of personal networks allow marginalised users to experience the platform on their own terms and with their own curated audience: at 9pm in the summertime, my Twitter feed is dominated by Black Britons tweeting about Love Island. In February, we watch Love Is Blind as a family.