Terror Machines

Social Bots in Struggles for Hegemony in Digital Publics

In recent years, automated or semi-automated computer programs have been used to influence the moods and attitudes of users of digital publics. In the literature, the social bot is defined as a “computer algorithm that automatically produces content and interacts with humans on social media.” [1] If social bots appear in political communication spaces, they are also called political bots. [2] Controversial subjects of discussion, social and political bots represent the rise of an invisible digital power that blurs the boundary between the visible and the non-visible. Computer-generated social bots only become visible to the masses, for example, when they simulate approval and influence the opinions of actual humans. When hundreds of thousands of bots simulate collective consent, they shape a climate of opinion within which it is difficult for the ordinary user to discern what is real and what is fake.

Accordingly, Assenmacher et al. even discuss social bots as a threat to the foundations of liberal democracy:

Social bots, (semi-)automated accounts in social media, gained global attention in the context of public opinion manipulation. Dystopian scenarios like the malicious amplification of topics, the spreading of disinformation, and the manipulation of elections through ‘opinion machines’ created headlines around the globe. [3]

In any case, scientists, as well as the public, must deal with the question of the authenticity of the data and information generated, and ask whether the large number of manipulated and automatically produced hashtags, messages, and images destabilizes communicative practices, relegating them to areas of invisibility and insignificance. [4]

By a digital public, I understand a multitude of digital environments that connect users with one another online to exchange communicative content. Within these digital environments, those involved fight for the sovereignty to interpret opinion, and for what counts as a legitimate social debate about the ‘common good’ and ‘public opinion.’ If, following Jürgen Habermas, liberal democracy and the bourgeois public sphere were once viewed as a place where a discourse of argumentative consideration, joint deliberation, and understanding about public affairs was carried out, today the economy of attention prevails in online public spheres: due to information overload, not all arguments can be heard, and only arguments that have already attracted a lot of attention in the form of likes, followers, and comments are made visible. [5] If, in public debate, the only arguments that can be made (algorithmically) visible and sayable are those that have already gone through a kind of digital plebiscite (an interactive vote), then could one not manipulate this approval of certain arguments in such a way that one’s own interests are enforced? This interest in the exclusion and marginalization of democratic communication [6] has accelerated the development of social bots in recent years.

To be able to act credibly, these computer programs—known as social bots—pretend to be humans on social networks and imitate human behavior. Social bots act as ‘terror machines’: they spread thousands of messages every day with the aim of winning interpretive sovereignty over a specific topic. The communication scientist Brandie Nonnecke and co-authors conducted an empirical study of Twitter communication around the 2018 US midterm elections:

We analyzed the strategies of influential bots seeking to affect the immigration debate before the 2018 U.S. midterm elections. Our findings reveal that the 10 most influential bots in our dataset all presented an anti-immigration viewpoint, and both posted original tweets and retweeted other bot accounts’ tweets to give a false sense of authenticity and anti-immigration consensus. Bots’ messages relied heavily on negative emotional appeals by spreading harassing language and disinformation likely intended to evoke fear toward immigrants. Such accounts also employed polarizing language to entrench political group identity and provoke partisanship. [7]

As different from one another as the content and strategies of the bots may be, common characteristics link these phenomena, and the programmers and their interests remain invisible. Social bots simulate human users in social networks; if they do not give any indication that they are machines, they can be classified as fake accounts. By automatically generating further bots and spreading them in large numbers, bots are used to try to simulate mass phenomena. This needs some further explanation:

At first glance, social bots are not easy to recognize as automatic computer programs; after all, their strategy is based on deception and concealment. Acting (seemingly) independently on platforms such as Facebook, Instagram, Twitter, or Tumblr, bots have their own accounts, within which they seek to communicate biographical elements as credibly as possible (using fake names, profile photos, etc.). They then network with other accounts—real or invented—which generally share their interests, managing their own content and manipulating that of their network. By following other users and liking or retweeting posts, they can distribute ready-made messages or even create them themselves, and this is where the “social” becomes “political.” By imitating social movements and political interests, and by falsifying social power relations, bots aim to influence public opinion massively, creating what appear to be mass phenomena.

The agency of political bots, as such, is based on large numbers. When thousands of bots amplify an actor, or piece of content, with the help of likes, retweets, or comments, they manipulate the social platforms’ algorithms to bring their own messages to the forefront, as those platforms weight the relevance (and therefore visibility) of their content based on large numbers (of followers, likes, retweets). Because of the sheer prevalence of these repeated messages, it can be almost impossible for ordinary users to fact-check what they see.
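
This mechanism can be illustrated with a minimal sketch: assuming, purely hypothetically, that a platform scores the relevance of a post as a weighted sum of its engagement counts, mass automated engagement is enough to push bot-amplified content above organically discussed content. The weights, sample posts, and scoring function below are invented for illustration and do not reproduce any actual platform’s ranking algorithm.

```python
# Minimal sketch of an engagement-weighted ranking. The weights, post data,
# and scoring rule are hypothetical; real platforms use far more signals.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    retweets: int
    comments: int

def relevance(post: Post, w_like=1.0, w_retweet=2.0, w_comment=1.5) -> float:
    """Toy relevance score: more engagement means more visibility."""
    return w_like * post.likes + w_retweet * post.retweets + w_comment * post.comments

posts = [
    Post("Carefully argued policy analysis", likes=120, retweets=30, comments=45),
    Post("Slogan amplified by a botnet", likes=9_000, retweets=4_500, comments=800),
]

# Sorted by this score, the bot-amplified slogan outranks the argued post,
# even though no human majority stands behind it.
for post in sorted(posts, key=relevance, reverse=True):
    print(f"{relevance(post):>10.0f}  {post.text}")
```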

Neither Twitter users nor researchers can easily distinguish between automated and non-automated accounts. Detecting social bots has become a research field of its own, particularly in computer science. A common approach is to train machine learning algorithms on a data set with labelled bot and non-bot accounts. These feature-based classifiers often yield high accuracy and are subsequently able to classify unseen data. A weakness of these classifiers lies in their inability to detect new classes of bots that were not represented in the training data. [8]
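
A minimal sketch of this feature-based approach, assuming the scikit-learn library, might look as follows; the features, toy data, and labels are invented for illustration and do not reproduce any particular study’s pipeline.

```python
# Sketch of feature-based bot detection: train a classifier on labelled
# accounts, then apply it to unseen accounts. All values are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-account features:
# [tweets per day, followers/friends ratio, share of retweets, account age in days]
X = np.array([
    [480.0, 0.02, 0.97,   30],   # bot-like: extreme volume, almost only retweets
    [350.0, 0.10, 0.90,   75],
    [  6.0, 1.40, 0.20, 2100],   # human-like: moderate volume, older account
    [  2.5, 0.80, 0.10, 3400],
])
y = np.array([1, 1, 0, 0])  # labelled training data: 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The classifier labels unseen accounts, but only within the kinds of behavior
# represented in the training data; novel bot classes slip through undetected.
unseen = np.array([[220.0, 0.05, 0.95, 40]])  # hypothetical new account
print("bot" if clf.predict(unseen)[0] == 1 else "human")
```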

For a superficial media reception, these large numbers nevertheless function as instances of ‘objectivity’ that signal mass movements, social trends, political developments, and so on. In doing so, they function as a vote, comparable to a digital plebiscite. The difference between visibility and invisibility can also be described in terms of the strategic relationship between front end and back end. Algorithms and programs operate in the back end, trying to develop bots that credibly affirm or distribute media content (whether or not that content itself is inherently credible). In the front end, where the users are, the bots try to create a certain social or political climate of opinion. They can only be effective in the visible sphere if the relationship between front and back ends remains invisible and unclear.

The strategic and tactical goal of bots is therefore essentially to create and maintain an ambiguity between true human identities and computer-based programs, complicating or even erasing the boundaries between humans and machines. It is difficult for the front-end user to decide whether the other user is a bot or not. On the back end, on the other hand, bots are often far more recognizable, because the automatic behavior of the bots can be statistically evaluated and thus identified more easily.

Political bots, deployed as mass content and mass movement, operate nearly invisibly in their efforts to make visible something that does not exist. They are communication terror machines because they aim to maximize the effectiveness of information policy, but refer to an empty commodity value that stakeholders use to assert their interests at any price.

1_Social Bots in the Public Feedback Loop

A core intent of this essay is to work out the ambivalences of visualization in online communication. The visibility of public opinion and social movements on the Internet makes communication processes more transparent and open, but at the same time access to the means of making things visible is not granted to all. What is obviously visible is intertwined with the invisible when it conceals the manner of its own making, and when it pushes the visual process and related practices into the background or into invisible or inaccessible realms.

Today, social bots can be found in all socially relevant areas of communication and public opinion-forming. They influence communication in social media and online platforms and have become an integral part of PR campaigns, public relations, election advertising, and marketing. For many people, media communication on the Internet is the most influential source of political and social information today, and thus represents nearly their sole basis for opinion-forming. Opinion-forming is largely linked to the availability of information. In this sense, social bots have two central tasks: on the one hand, they spread information (e.g., with the help of retweets), and on the other hand, they determine the social value of information by reacting to it (using likes, comments, etc.). Programmed to support specific forms of opinion formation, social bots thus intervene in both the active and the receptive processes of public judgment. Used en masse, in the guise of human actors, bots can quietly yet significantly influence public opinion.

If bots are exposed and it turns out that many fake profiles have contributed significantly to the formation of opinion on the Internet, sites of media communication can lose their credibility. Users withdraw from these places in disappointment because they can no longer distinguish between human actors and automated communication devices. This can give the impression that democratic processes on the Internet are falsified and reflect only the interests of those who are willing to invest in the infrastructure of bots.

Bots are instruments that specifically influence the formation of opinions and can create false images of reality by conveying one-sided reports. Bots can make certain topics invisible, push them into the background, or settle them below the threshold of perception. They are not a natural event or a force of fate that appears suddenly; on the contrary, they are specifically programmed to influence public opinion for certain interest groups and to suppress certain media content and make it invisible.

In their study titled “Is That a Bot Running the Social Media Feed? Testing the Differences in Perceptions of Communication Quality for a Human Agent and a Bot Agent on Twitter,” [9] Edwards, Edwards, Spence, and Shelton investigated how people apply to computers the same social rules they have already learned in dealing with other people. In this sense, the subjects of this study saw social bots as a credible source of information; in fact, in their everyday encounters with bots, test subjects hardly saw any difference between social bots and human actors.

Bots represent a new variety of media propaganda, one which can often lead to political opponents being discriminated against and defamed. Social bots are a very efficient means of spreading hate speech, as they are simply programs and cannot themselves be prosecuted; it is also very difficult to hold accountable those who might be behind them. If bots are deleted, they can be replaced very easily. In this sense, democratic opinion-forming processes are endangered by social bots: because tens or even hundreds of thousands of automatically generated messages can be produced every day, they ensure that the existing channels of communication can no longer be properly used or are even destroyed by the bots’ presence. The mixture of artificial intelligence, social media, and algorithmic control of everyday communication that the bots enable is mainly used in the field of political disinformation and harassment campaigns on social media platforms.

2_Algorithmic Economy

From a strategic point of view, social bots increase the statistical visibility of users for economic interests. In the online market for media attention, bot software acts as an attractor for group identities. When bots are liked, commented upon, and linked to, they also draw human users into new forms of digital perception: statistical visibility, profile-based databases, and economic exploitation interests.

The media scientist Oliver Leistert draws attention to the close interweaving of the commercial orientation of the social web and the massive boom in automated communication with the help of social bots: “The expansion and explosive multiplication of social bots in recent years goes hand in hand with the gigantic success of commercial platforms of social media that have dramatically changed and challenged the social fabric over the past few years.” [10]

The short message service Twitter has systematically evaluated our text content, the career portal LinkedIn has optimized our careers, and the video platform YouTube has sorted our moving images. But Facebook has risen to prominence primarily thanks to its datafication of our social relationships, which allows it and others to use this personal data economically in a more or less structured form. From its inception, Facebook has pursued a corporate-centric algorithmic economy business model, tracking large-scale social and cultural preferences to enable consumer profiling tasks, namely, socially targeting potential customers.

From an entrepreneurial point of view, Facebook can therefore be viewed as a corporate-controlled social media platform: a digital application system that provides its users with functionalities for identity management, allowing them to present themselves in the form of a profile, and for the management of their own contacts by networking with other users. From evaluating the data it collects, the Facebook Data Team expected insights into the relationship practices and the value orientations of the users integrated into the social network site. [11]

Mark Andrejevic blames the “digital enclosure” of online communication for the market expansion of social bots, describing the systematic monetization and political stratification of communication in digital environments. [12] Not only are communication content and forms commercially exploited and monitored, but they are also regulated and transformed; not only are individual communication acts changed, the entire communication culture is changed. In order to maximize the sale of personal data, the habits of the users themselves must be changed. The strategic goal is to create users who generate data willingly and continuously, and then to make this data available to the public. To this end, new incentive systems such as the ‘Like’ button are constantly being created to encourage users to produce more data.

Facebook introduced the Like button in 2009 to enable the systematic collection, consolidation, and evaluation of information about customers and customer groups in the long term, creating an informational basis for determining customer relationships with products and their possible sales markets. As such, it can be viewed as a commercial variation of the social bot, as it automatically evaluates social interaction and connects it to communication spaces with market analyses. The example of the Like button also shows that IT infrastructures are not simply used to measure behavior, but that they can actively stimulate social and cultural preferences by making them statistically visible. If influencers, close friends, or majorities like certain content, then the liked preferences also become action-guiding orientations for other users who may try to initiate social affiliations with their consumer decisions.
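
How such ‘Like’ data can be consolidated into market-oriented profiles may be illustrated with a small, purely hypothetical sketch; the events, categories, and matching rule below are invented and do not describe Facebook’s actual systems.

```python
# Toy illustration of consolidating 'Like' events into interest profiles
# for ad targeting. All records, categories, and thresholds are invented.
from collections import Counter, defaultdict

like_events = [  # hypothetical (user, liked page category) records
    ("user_a", "running shoes"), ("user_a", "marathon club"), ("user_a", "running shoes"),
    ("user_b", "vinyl records"), ("user_b", "concert venue"),
]

# Aggregate each user's likes into a preference profile.
profiles = defaultdict(Counter)
for user, category in like_events:
    profiles[user][category] += 1

def target_audience(campaign_category, profiles, min_likes=1):
    """Return users whose aggregated likes match a campaign category."""
    return [u for u, prefs in profiles.items() if prefs[campaign_category] >= min_likes]

print(target_audience("running shoes", profiles))  # -> ['user_a']
```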

To be able to control users in their consumption habits, social bots are used to simulate moods, trends, and developments, which users are intended to then adopt as the relevant guidelines for their own consumption behavior.

3_Camouflage Techniques as Forms of Political Influence

Around 15% of accounts on Twitter are social bots; these are not only used to influence product advertising in the interest of shareholders, but also for election campaigns and opinion polls. [13] Political parties want to mobilize voters. To do this, they use social media such as Twitter and Facebook, through which many citizens get their daily news. Social bots are smuggled into social media and online platforms in large quantities to spread the parties’ election campaign slogans millions of times in an effort to influence potential voters. This comparatively simple and low-threshold manipulation by political bots has led to a large proportion of online social networks being infected with opinion robots. To be effective, political bots must keep their intentions undetected. Therefore, camouflage techniques are applied.

One of the most essential features of social bots is the use of camouflage techniques in communicative practices. Camouflage originally comes from the military field and describes tactical methods of misleading and deceiving an opponent in war. Social bots are only effective and efficient if they are taken for what they are not, i.e., if their deception succeeds and they are not exposed. The setup of social bots consists of specific procedures that attempt to control the formation of public opinion in digital online environments. The following methods and techniques are used in this context:

Crowdturfing describes the feigning of grassroots movements, i.e., local, political, or social initiatives or organizations, in order to exert an influence on a commercial or political situation.

Fake followers are used to feign popularity. Social bots are used as fake followers to suggest that socially shared content is popular and enjoys broad approval. In this sense, the social bots help to increase the credibility of the shared content, falsely creating the impression that many fans of the account stand behind a product and that there is an overall good mood within the community. Numerous politicians and celebrities buy fake followers to gain statistically greater popularity and, for example, to increase their value on Twitter. Personalities with a larger number of accounts that follow them are associated with greater social influence due to their high reach and are therefore more interesting for potential advertising partners.

Fake retweets simulate the artificial popularity of a message. A large number of social bots are active on Twitter, whose task is to automatically retweet certain content and thus contribute to its dissemination.

Account Hijacking: Accounts that have been temporarily or completely taken over by attackers through account hijacking are called compromised accounts. Programmed bots use phishing, malware, or cross-site scripting to obtain users’ login data. Compromised accounts are more valuable than machine-based bots for spreading disinformation or propaganda as they have already established trust with legitimate users.

Fake profile pictures: To make social bot accounts more credible, they are provided with profile pictures. So-called grabber scripts and PHP scripts [14] ensure that internationally available images are redirected to accounts and platforms. With such a program, thousands of images from platforms can be distributed to the respective accounts to suggest that the fake accounts are run by actual humans.

In the smoke screening process, messages on a topic or hashtag are distributed in order to make relevant posts on that topic harder to find amid the large number of other posts. The related technique of misdirection diverts attention from one topic to another by spamming posts to a hashtag unrelated to the original topic. This tactic was used by Syrian bots, which tweeted about various events in other parts of the world unrelated to the hashtag used, in order to smother pro-revolution messages tagged “#Syria.” [15]

A social botnet consists of networked bots that react to and send messages to each other. This gives the impression of a well-networked movement that can be used for political mobilization. Hegelich and Janetzko examined the social botnet of the Ukrainian Euromaidan movement of 2013 and 2014, and found the following structural characteristics: “Mimicry: The bots try to hide their bot identity. Window dressing: To be interesting to normal users they are promoting topics by pushing hashtags and retweeting selected Tweets and messages.” [16] Hegelich and Janetzko also quantitatively examined a Twitter sample from February 22, 2014 by reviewing the metadata and the friend/follower networks. Among the 1.3 million tweets sent that day with the hashtag #Ukraine, they identified approximately 15,000 uniquely identifiable bots.
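
The kind of metadata-based screening alluded to here can be sketched very roughly as follows; this is not Hegelich and Janetzko’s actual procedure, and the thresholds, field names, and sample records are invented for illustration.

```python
# Rough sketch of metadata-based screening of a tweet dump: flag accounts
# that post in rapid, regular bursts consisting almost entirely of retweets.
# All records and thresholds are hypothetical.
from collections import defaultdict
from datetime import datetime

tweets = [  # hypothetical records from a one-day #Ukraine sample
    {"user": "acct_001", "created_at": "2014-02-22T10:00:01", "is_retweet": True},
    {"user": "acct_001", "created_at": "2014-02-22T10:00:06", "is_retweet": True},
    {"user": "acct_001", "created_at": "2014-02-22T10:00:11", "is_retweet": True},
    {"user": "acct_314", "created_at": "2014-02-22T11:42:30", "is_retweet": False},
]

per_user = defaultdict(list)
for t in tweets:
    per_user[t["user"]].append(t)

def looks_automated(posts, min_posts=3, max_gap_seconds=10, min_retweet_share=0.9):
    """Crude proxy for automation: short, regular posting gaps and a very
    high share of retweets within an account's daily activity."""
    if len(posts) < min_posts:
        return False
    times = sorted(datetime.fromisoformat(p["created_at"]) for p in posts)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    retweet_share = sum(p["is_retweet"] for p in posts) / len(posts)
    return max(gaps) <= max_gap_seconds and retweet_share >= min_retweet_share

flagged = [user for user, posts in per_user.items() if looks_automated(posts)]
print(flagged)  # -> ['acct_001']
```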

4_Social Bots and Digital Democracy

In their study “Social bots distort the 2016 US Presidential election online discussion,” the authors Alessandro Bessi and Emilio Ferrara draw attention to three possible threats to the democratic communication culture:

The presence of social bots in online political discussion can create three tangible issues: first, influence can be redistributed across suspicious accounts that may be operated with malicious purposes; second, the political conversation can become further polarized; third, the spreading of misinformation and unverified information can be enhanced. [17]

The comprehensive digitization of everyday life is also changing the social institutions of democratic communication. The ubiquity of social media has made dialogue between politicians and citizens more personal; communicating directly with their followers also allows political leaders and public figures to exert greater influence. In contrast to conventional propaganda, social bots generate personal and personalized statements. For example, during the US election campaign, many Twitter users with Spanish names praised Donald Trump. Later, it was discovered that these people did not exist. And so, the question becomes: Who creates information today? Journalists, citizens, consumers, everyone? The right of individuals to express their opinions is highly valued within democracy. But what if the individual is not a person at all?

In their study, the two communication scientists Tobias Keller and Ulrike Klinger showed that in Donald Trump’s 2016 election campaign, the online electoral mobilization was massively influenced by fake accounts:

Approximately one-quarter of Donald Trump’s Twitter followers during the 2016 U.S. presidential campaign were bots. Social bots did influence the U.S. presidential campaign, with about 20% bots involved, generating about 20% of the political debate on Twitter. Bots intervened in the Brexit debate, and the online petition for a second referendum on Brexit in June 2016 was ‘signed’ by 77,000 bots (BBC, 2016). Bastos and Mercea (2017) discovered a network of 13,493 Twitterbots supporting the Leave EU campaign. [18]

In their study, the two authors also point out that the use of automatically generated and machine-supported accounts by numerous political movements and actors has become a global phenomenon of party advertising and political communication on the Internet:

Social bots drove the #MacronLeaks disinformation campaign: ‘the users who engaged with MacronLeaks are mostly foreigners with a pre-existing interest in alt-right topics and alternative news media, rather than French users with diverse political views. Concluding, anomalous account usage patterns suggest the possible existence of a black-market for reusable political disinformation bots’ (Ferrara, 2017). A study of Germany’s 2017 election campaigns at the Oxford Internet Institute found that ‘highly automated’ tweeting increased from 5.7% to 7.4% between February and September 2017. [19]

In this sense, social media are multipliers of public discourse, wherein information can be shared and disseminated more efficiently. Against this background, social bots can be assessed as an impairment of digital democracy:

Our findings suggest that bots can affect political discussion networks in several significant ways. We found that bot-like accounts created the appearance of a virtual community around far-right political messaging, attenuated the influence of traditional actors (i.e., media personalities, subject matter experts). [20]

Social bots are used to influence the formation of opinion and political processes with the help of camouflaged communication. Social bots and bot networks, as described above, are also used systematically to disrupt existing communication practices with the aim of causing actors to withdraw from existing communication spaces. Bot networks do not possess the intelligent means of exchanging arguments; rather, they operate with the strategy of large numbers: they generate thousands of messages per hour and crowd out other actors and their content, making their contributions invisible and irrelevant because they can hardly be mapped by search algorithms. Finding and deleting bot messages can be automated with filter software, but a manual confrontation with the automatically generated noise is hardly possible at this scale.

As a technically induced mass phenomenon of opinion, the bots aim to produce a majority. They suggest that a large group is articulating specific interests. The bots do not want to be perceived as individual voices with which one could exchange opinions. The strategic added value of bot production lies in making the visible invisible: because bots are primarily a statistical variable, they appear as mass indicators and above all communicate a statistical signal, suggesting, for example, that “500,000 users have agreed to X.” In this way, individual opinions are repressed and made invisible. These signal effects of large numbers push more differentiated arguments and considerations into the background. Discussion spaces are in fact flooded with content by the bots. Bots also like and comment within botnets, which means that individual bots gain influence in the ranking of content and can reach many users. In this way, bots displace other content, which can no longer be perceived. As a result, bots pursue a specific visibility policy; in their mass appearance, they displace ‘alternative’ content and reproduce the content for which they were programmed. Bots also complicate the relationship between human subject and machine, ultimately taking advantage of invisibility policies themselves in order to operate successfully.

In concluding this essay, a change in perspective should also be considered: social bots not only represent a destructive or disintegrative force for democratic communication culture, they may also enable a revival of democratic exchange within a discursive public. This positive turn becomes possible when users no longer believe in the mainstream of quantitatively measurable attention-grabbing, but instead focus more on qualitative dialogue, the exchange of arguments, and attentive listening and questioning, because the bots cannot simulate this form of high-quality discussion culture. Bots are programmed to produce a quantitative increase in attention, but their simulation of human-like communication has so far not been very convincing and cannot yet be credibly achieved by machine intelligence. In this sense, the phenomenon of bots can lead to a return to discursive traditions and cultures of communication.

_How to Cite:

Ramón Reichert. “Terror Machines: Social Bots in Struggles for Hegemony in Digital Publics.” On_Culture: The Open Journal for the Study of Culture 13 (2022). <https://doi.org/10.22029/oc.2022.1301>.

CC-BY 4.0

_Endnotes

  • [1] Emilio Ferrara et al., “The Rise of Social Bots,” Communications of the ACM 59, no. 7 (2016): 96–104, here: 96.
  • [2] Elizabeth Dubois and Fenwick McKelvey, “Political Bots: Disrupting Canada’s Democracy,” Canadian Journal of Communication 44, no. 2 (2019): 27–33. Doi: <10.22230/cjc.2019v44n2a351>.
  • [3] Dennis Assenmacher et al., “Demystifying Social Bots: On the Intelligence of Automated Social Media Actors,” Social Media + Society 6, no. 3 (2020). Doi: <10.1177/2056305120939264>.
  • [4] Loni Hagen et al., “Rise of the Machines? Examining the Influence of Social Bots on a Political Discussion Network,” Social Science Computer Review 40, no. 2 (2020): 264–287. Doi: <10.1177/0894439320908190>.
  • [5] Franz Josef Röll, “Öffentlichkeit in postdemokratischen Gesellschaften,” in Medienkritik im digitalen Zeitalter, eds. Horst Niesyto and Heinz Moser (München: kopaed, 2018), 33–44.
  • [6] It is crucial that social bots simulate a democratic climate of opinion. The simulation of democratic opinion formation is based on the assumption that the majority of all votes represent a political interest.
  • [7] Brandie Nonnecke et al., “Harass, Mislead, & Polarize: An Analysis of Twitter Political Bots’ Tactics in Targeting the Immigration Debate before the 2018 US Midterm Election,” Journal of Information Technology & Politics (2021): 1–12, here: 1. Doi: <10.1080/19331681.2021.2004287>.
  • [8] Franziska Martini, Paul Samula, Tobias R. Keller and Ulrike Klinger, “Bot, or Not? Comparing Three Methods for Detecting Social Bots in Five Political Discourses,” Big Data & Society 8, no. 2 (2021). Doi: <10.1177/20539517211033566>.
  • [9] Chad Edwards, Autumn Edwards, Patric R. Spence and Ashleigh K. Shelton, “Is That a Bot Running the Social Media Feed? Testing the Differences in Perceptions of Communication Quality for a Human Agent and a Bot Agent on Twitter,” Computers in Human Behavior 33 (2014): 372–376. Doi: <10.1016/j.chb.2013.08.013>.
  • [10] Oliver Leistert, “Social Bots als algorithmische Piraten und als Boten einer techno-environmentalen Handlungskraft,” in Algorithmuskulturen: Über die rechnerische Konstruktion der Wirklichkeit, eds. Robert Seyfert and Jonathan Roberge (Bielefeld: transcript, 2017), 215–234, here: 217. Doi: <10.25969/mediarep/2756>, [transl. R.R.].
  • [11] Ramón Reichert, “Facebook und das Regime der Big Data,” Österreichische Zeitschrift für Soziologie 39, no. 1 (2014): 163–179; Antoinette Rouvroy and Bernard Stiegler, “The Digital Regime of Truth: From the Algorithmic Governmentality to a New Rule of Law,” La Deleuziana 3 (2016): 6–29; Oana B. Albu and Hans Krause Hansen, “Three Sides of the Same Coin: Datafied Transparency, Biometric Surveillance, and Algorithmic Governmentalities,” Critical Analysis of Law 8, no. 1 (2021): 9–24.
  • [12] Mark Andrejevic, “Surveillance in the Digital Enclosure,” The Communication Review 10, no. 4 (2007): 295–317. Doi: <10.1080/10714420701715365>.
  • [13] Miranda Mowbray, “Automated Twitter Accounts,” in Twitter and Society, eds. Kathrin Weller et al. (New York: Peter Lang Publishing, 2014), 183–194, here: 188.
  • [14] PHP is a server side scripting language that is used to develop websites or web applications.
  • [15] Norah Abokhodair, Daisy Yoo, and David W. McDonald, “Dissecting a Social Botnet: Growth, Content, and Influence in Twitter,” Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing (2015): 839–851. Doi: <10.1145/2675133.2675208>.
  • [16] Simon Hegelich and Dietmar Janetzko, “Are Social Bots on Twitter Political Actors? Empirical Evidence from a Ukrainian Social Botnet,” Proceedings of the International AAAI Conference on Web and Social Media 10, no. 1 (2016): 579–582, here: 582, <https://ojs.aaai.org/index.php/ICWSM/article/view/14764>.
  • [17] Alessandro Bessi and Emilio Ferrara, “Social Bots Distort the 2016 US Presidential Election Online Discussion,” First Monday 21, no. 11 (2016). Doi: <10.5210/fm.v21i11.7090>.
  • [18] Tobias R. Keller and Ulrike Klinger, “Social Bots in Election Campaigns: Theoretical, Empirical, and Methodological Implications,” Political Communication 36, no. 1 (2019): 171–189, here: 174.
  • [19] Keller and Klinger, “Social Bots,” 9.
  • [20] Hagen et al., “Rise of the Machines?”