Published in ACM SIGCAS Computers and Society
General paranoia is the term that best describes a user’s social media experience. The spaces where we socialize online are full of users suspecting one another of bad faith and advertisements that seem to know our every move. This attention-grabbing, habit-forming culture is sold on dreams of limitless love between family, friends, and community (as promised by the Facebook slogan “bring the world closer together”). Genuine connection can be found online, but the heart of this network lies outside fiber optic cables.
In the last issue of ACM’s Special Interest Group on Computers and Society, Sual Almualla posed the question: “[Is social media] guided by push algorithms that decide for us what to watch and hence what to think?” The question is compelling because it suggests that computers could control the way humans think, rather than the other way around. We can push this line of questioning further: what are the underlying mechanisms that make this control possible? The answer lies in the nature of social media algorithms and the political conditions that let them function. In short, social networking sites create an illusion of freedom that is controlled by hegemonic forces.
Platforms such as Facebook, Twitter, TikTok, and YouTube all run on attention-economy business models: their stockholders’ bottom line rests on the shoulders of the user. Actions such as watch time or retweets become engineered variables fed into recommendation systems. According to Guillaume Chaslot, an ex-Google employee who worked on YouTube’s recommendation algorithm, “The problem is that the AI isn’t built to help you get what you want – it’s built to get you addicted to YouTube. Recommendations were designed to waste your time.” These sites are deliberately programmed to keep you online as long as possible, engaged and reacting to content. Borrowing from the playbook of social engineering, they do this through content that baits a user based on their data. Clickbait, a specific type of content that maliciously hooks a viewer, undermines user experience, yet recommendation engines do not weigh its nature against it (Zannettou et al., 2018). True clickbait relies on tactics such as headlines posing provocative questions a user must click through to answer, or falsely implying that an article contains must-see content such as the identity of the Zodiac killer. But this obvious manipulation is not the only form of bait. In its more generalized form, bait is ubiquitous online, and it targets two archetypes: the consumer and the doomer.
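The optimization target described above can be sketched as a toy scoring function. The item data, field names, and weights below are invented for illustration; production systems use learned models over many more signals.

```python
# Toy sketch of an engagement-ranked feed. Weights and predictions are
# hypothetical; the point is only that the objective is engagement,
# not accuracy or user well-being.

def engagement_score(item, w_watch=1.0, w_reactions=2.0):
    """Score a post purely by predicted engagement."""
    return (w_watch * item["predicted_watch_seconds"]
            + w_reactions * item["predicted_reactions"])

def rank_feed(items):
    """Order the feed by engagement score, highest first."""
    return sorted(items, key=engagement_score, reverse=True)

candidates = [
    {"title": "calm documentary",  "predicted_watch_seconds": 40, "predicted_reactions": 1},
    {"title": "provocative claim", "predicted_watch_seconds": 30, "predicted_reactions": 25},
    {"title": "clickbait mystery", "predicted_watch_seconds": 90, "predicted_reactions": 10},
]

feed = rank_feed(candidates)
```

Under this objective the clickbait and the provocative post outrank the calm documentary automatically; no editor ever has to choose them.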
We’ve all seen this dynamic play out, in both its forms, many times. First, consider the consumer. Idly browsing apolitical content, searching for products, and checking notifications on a new selfie are all ways of acting as a consumer, and brands target accordingly. This online archetype is created by exploiting a user’s data footprint for advertising purposes. A consumer’s game is buying the hottest trends and engaging with popular brands. They take in information, publish quick thoughts, and produce nothing. Consumers get baited in ways that keep them content, and when that peace is disrupted and their brains stop receiving the accustomed level of serotonin, they seek out notifications and products to fill the impulse (Kuss & Griffiths, 2011). To put a price tag on this phenomenon, the social media market leader Facebook made $28.6 billion in ad revenue in a single quarter of the Covid-19 pandemic (https://www.nytimes.com/2021/07/28/business/facebook-q2-earnings.html).
The doomer is created by similar manipulations but is specifically targeted to react in “political” ways. A doomer’s game is online activism: posting infographics, news, or donation links; arguing in favor of their ideas; or arguing in the comments of someone with an opposing view. They circulate information about their belief systems, which can come from any point on the political spectrum. Doomers get baited in ways that make them feel anger, by being exposed to views they find harmful (someone concerned about climate change seeing climate change denial), or comfort, by being exposed to information that confirms their views (that same person seeing climate change statistics). Both are manipulations built on the impulsive reaction of fear. The instinct to respond to this kind of content is not necessarily wrong, but it benefits social networking sites, and the content a doomer reacts to is pushed back into their user model.
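That feedback loop can be sketched in a few lines. The topic labels and the learning rate are hypothetical, and real user models are vastly more complex; the sketch only shows that angry engagement and approving engagement update the model the same way.

```python
# Toy sketch of the reaction feedback loop: every engagement, outraged
# or approving, nudges the user model toward more of the same topic.

def react(user_model, topic, learning_rate=0.5):
    """Any reaction raises the topic's weight; the model cannot tell
    anger from agreement."""
    user_model[topic] = user_model.get(topic, 0.0) + learning_rate
    return user_model

def recommend(user_model, default="cat videos"):
    """Serve whichever topic currently has the highest weight."""
    return max(user_model, key=user_model.get) if user_model else default

model = {}
# The doomer angrily replies to climate-denial posts three times...
for _ in range(3):
    react(model, "climate denial")
# ...and shares one climate report.
react(model, "climate reports")
```

The outraged replies outweigh the earnest share, so the model serves the doomer more of exactly what angered them.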
This type of online activism could more accurately be described as slacktivism. YouTuber Khadija Mbowe describes slacktivism this way: “It makes it easy for people to feel engaged in a lot of issues or topics, but it doesn’t really go beyond online engagement because all they have to do is repost or retweet something to make it seem like they’re aware, or that they’re paying attention, or that they care. [It is] virtue signalling.” [https://www.youtube.com/watch?v=xkipck28Jg8&t=1798s] In the same video, Mbowe goes on to discuss the ways online activism can be useful: circulating information, getting donation links to people who can help, and transforming into on-the-ground activism within one’s physical community.
However, the dividing line between useful online engagement and virtue signaling isn’t always clear, and this lack of distinction is what makes slacktivism so insidious. Because doomers tend to engage in both, each person must distinguish within themselves between working toward social progress and lying in bed feeding an algorithm. Obsessive reading of dystopian news is aptly known as “doomscrolling” [https://www.businessinsider.com/doomscrolling-explainer-coronavirus-twitter-scary-news-late-night-reading-2020-4]. Not only does it rarely lead to change, it also leaves the user depressed, paranoid, angry, and generally hopeless. This produces no productive, material change; rather, it is how doomers get engineered. They are supposed to keep clicking, posting, and watching without threatening the systems controlling their time.
The same individual can act as both the doomer and the consumer at different times or on different social networks. The archetypes are not mutually exclusive and are fueled by similar control schemas. Both consume media through biased recommendation systems, and both are driven by fear, whether of missing out on social life or of environmental destruction. Fear is an asset from the network’s perspective because it drives engagement: it gets clicks and views.
According to a Pew Research Center study of Twitter users in the United States, 39% of surveyed users tweeted at least once about national politics over the year, while 97% of those tweets came from just 10% of users (https://www.pewresearch.org/politics/2019/10/23/national-politics-on-twitter-small-share-of-u-s-adults-produce-majority-of-tweets/). Additionally, Twitter has been shown to statistically amplify reactionary politics (Huszár et al., 2021): a post asserting a reactionary idea, such as “Climate change isn’t real,” is more likely to appear in a user’s feed unprompted than posts with other views. Together, these findings suggest a deliberate corruption of user experience, especially in the political realm, aimed at provoking reaction. A doomer’s fear can be justified, but their responses may help neither the community they wish to serve nor their own mind.
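A back-of-the-envelope calculation makes the Pew concentration figure concrete. Only the 97%/10% split comes from the study; the total tweet and user counts below are made up to illustrate the ratio.

```python
# Illustrative totals (invented); only the 97%/10% split is from Pew.
total_tweets = 1_000_000
total_users = 10_000

top_users = 0.10 * total_users           # the prolific decile
top_tweets = 0.97 * total_tweets         # their share of political tweets
rest_users = total_users - top_users
rest_tweets = total_tweets - top_tweets

per_top = top_tweets / top_users         # tweets per prolific user
per_rest = rest_tweets / rest_users      # tweets per remaining user
ratio = per_top / per_rest               # roughly 291x more prolific
```

The invented totals cancel out: under any totals, (0.97/0.10) ÷ (0.03/0.90) ≈ 291, so the average top-decile user produces nearly three hundred times the political output of everyone else, and the “political conversation” most users see is authored by a sliver of accounts.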
The idea of a feedback loop that exploits a user’s emotional vulnerability is captured by Wendy Hui Kyong Chun’s concept of control-freedom. This neologism names the deliberate conflation of the limitless radical possibilities of cyberspace (freedom) with the exploitation of that dream’s vulnerabilities (control). It is a deliberate illusion of agency that increases the user’s confidence while actually making them more vulnerable.
In this system, the user’s role is very different from what they may realize. Most people expect to be a flâneur, or lurker, idly scrolling through content. However, this is only part of a user’s role. As Chun notes, “In order to operate […] the Internet turns every spectator into a spectacle: users are more like gawkers—viewers who become spectacles through their actions—rather than flâneurs. Users are used as they use” (p. 45). Social media is a churning machine producing spectacles fueled by spectacular reactions. Its grandiose promise of social progress devolves into vain madness, causing more paranoia than change. This fear of being a spectacle is part of the anxiety of being online. A frequent fear on Twitter, for example, is becoming the “character of the day”: the most posted-about user on Twitter that day (@maplecocaine, 2019). While users who log on to talk politics have real fears (about environmental destruction, or whether social security will still exist when they retire), in the process of posting these are replaced by paranoia about how their posts will be perceived and whether they risk becoming a “main character.”
Attempts to control fear through posting may only make the problem worse. Elsewhere in her discussion of control-freedom, Chun describes virtual methods of resistance as inferior to on-the-ground action, much like slacktivism. Virtual methods in isolation further the profitable reaction of fear: “[…] Technological solutions alone or in the main cannot solve political problems, and the costs of such attempts are too high: not only do such solutions fail but their implementation also generalizes paranoia” (p. 25). This highlights the folly of the doomer: posting has only so much use before it begins to amplify fear.
This political angst must be taken outside its virtual environment in order to break free of its constructed boundaries. A final passage from Chun describes how users can turn control-freedom on its head to change power and knowledge:
By questioning the position of the consumer – and its counterpart, the user – we can begin to expose the objectification and virtualization of others that underlie this myth of supreme agency, and begin to understand how the Internet can enable something like democracy. By examining the privatization of language, we can begin to understand the ways in which power and knowledge are changing. (Control and Freedom, p. 40)
Self-acknowledgment, therefore, is a way to circumvent control-freedom devices. Using the internet as an active tool rather than a passive influence may help users reclaim cyberspace. Online zines, blogs, and forums, for example, provide spaces for long-form critical analysis rather than shallow 280-character posts. Writing substantive arguments that are not direct responses to someone else’s post lets users think through ideas and produce content based on thought instead of reaction.
The lack of distinction between rational fear and paranoia is a real and dangerous consequence of technological use. You start to question whether your emotions are your own or were manufactured by whatever you are consuming. Some people are distracted by vapid aesthetics and weaponized politically, while others justifiably fear control but are mentally restricted from meaningful action. Sometimes collective action fights this dynamic, but it is often quelled by violent police response, misinformation campaigns in the media, and corrupted legislation. It takes great efforts of organizing and publicizing to get collective action off the ground, only for the powerful to counter it with comparatively little effort. Viable solutions, then, seem to be leaving corporate-controlled social media networks and creating content that deliberately jams commodified culture.
Chun, W. H. K. (2006). Control and Freedom: Power and Paranoia in the Age of Fiber Optics. MIT Press.
Huszár, F., et al. (2021). Algorithmic Amplification of Politics on Twitter. https://cdn.cms-twdigitalassets.com/content/dam/blog-twitter/official/en_us/company/2021/rml/Algorithmic-Amplification-of-Politics-on-Twitter.pdf
Kuss, D. J., & Griffiths, M. D. (2011). Online Social Networking and Addiction: A Review of the Psychological Literature. International Journal of Environmental Research and Public Health, 8(9), 3528–3552.
Zannettou, S., et al. (2018). The Good, the Bad and the Bait: Detecting and Characterizing Clickbait on YouTube.