Essay by Joshua A. Geltzer

Building resilience on modern communications platforms.

Chelsea King was 17 years old when she disappeared on February 25, 2010. She had gone out for a run near her home, but as hour after hour went by she failed to return. In the days that followed, thousands of volunteers searched across San Diego County for any sign of King. Strangers wished her well and expressed their concern on Facebook pages her family had created to organize the search for her.1

But the outpouring of admirable sympathy soon became marred by astonishing cruelty. Authorities arrested a suspect named John Gardner, who had previously been convicted of molesting a child. In response, a San Diego man named Mike McMullen created a Facebook page called “I Bet a Pickle Can Get More Fans than Chelsea King.” The page showed a pickle dressed in underwear holding a photoshopped cutout of King’s head.

Television news stations reacted with horror. When a reporter from San Diego’s ABC affiliate interviewed McMullen, he was unfazed by the criticism: he dressed up like a pickle for the interview and spoke in the pickle’s persona. Television coverage only accelerated the influx of what is now widely called “trolling.” Seemingly misanthropic Internet users flocked to McMullen’s page to post graphic, vicious commentary and images. As if summoned by a siren call, the Internet’s masses descended on the tragedy and relished it, adding to the King family’s pain.

The glee with which a group of strangers reacted to King’s death is deeply unsettling. Sadism existed before social media; so did mobs. But the particular ways in which sadistic mobs form, and the consequences of their formation, are conditioned by the media environment that gives rise to them.

Meanwhile, another form of sadism was emerging online—on its surface seemingly similar, but at a deeper level rather different. By the summer of 2014, the Islamic State of Iraq and al-Sham (ISIS) had seized control of large parts of Syria and was advancing deep into Iraq.2 The terrorist group’s strategy hinged on using its battlefield successes as a basis for a global recruitment campaign conducted, in large part, online. The group’s digital output became increasingly violent. One British fighter used Twitter to share an image of himself holding the severed head of a victim of ISIS’s brutality.

That, it turned out, was just the beginning. Severed heads on fence posts gave way to rows of crucified men hanging on crosses, which in turn yielded to a photograph of a seven-year-old boy holding a severed head his father had given him. Worse still, ISIS grabbed the attention of a horrified West with a series of videos showing the beheadings of American and British hostages, videos that were downloaded directly and almost instantly from YouTube and rapidly shown, at least in part, on television channels and newspaper front pages. When images of beheaded hostages were first widely disseminated in an era before social media had become ubiquitous—in the early 2000s—they were shocking, repulsive, and thankfully somewhat difficult to find. By the time ISIS made the practice almost commonplace a decade later, that was no longer true. The horror remained, but it spread more quickly than it ever had before, thanks in large part to social media.

ISIS was “staying on message,” as a communications consultant might put it. Under the group’s deliberate strategy, overseen by Abu Muhammad al-Adnani, ISIS escalated its savagery in steady, calculated fashion. That escalation went hand in hand with increasingly forceful calls for sympathizers from around the world to leave their homes and fight under ISIS’s flag on the battlefields of Syria and Iraq.3 Even today, after losing essentially all of the physical territory the group once held in Iraq and Syria, ISIS retains a virtual foothold that allows it to recruit and radicalize online.

An image of a severed head shocks the conscience, much as the mocking of King’s murder does. Coping with tragedy and with the exploitation of tragedy requires resilience. But undifferentiated discussion of “violent imagery” online can lead us to overlook the distinct forms of resilience—responses that mitigate rather than accentuate the problem—that are most useful when confronted with different types of such online content.4

Are particular depictions of violence intended to advance particular political aims by an organized group? Thwarting an organized terrorist recruitment campaign can require an organized, planned response. But such organized responses can prove ineffective—even counterproductive—as a reaction to disorganized barbarism, which can feed off the attention. When ought we respond as individuals, and when must we mobilize as a community that includes Internet users, technology platforms, governments, civil society, the media, and more? 

Judgments about which provocations merit particular responses change over time. Consensus about where the balance ought to be struck is, to say the least, difficult to come by. Many parents might once have urged their children to respond to schoolyard bullies by fighting back. Some still do. “Tattletaling” persists as a social taboo—to an extent. The boundary between individual resilience and an appeal to authority to respond to violence or to the threat of violence, whether in a playground or a bar, has shifted over time. 

But social media as it exists today demands some form of intervention in response both to bullies and to violence-preaching ideologues. Platforms from MySpace to Facebook have been, in a sense, optimized for the rapid formation of groups—including ill-intentioned ones—while Twitter’s hashtags facilitate the speedy sharing of hateful ideas. Responding in kind to online bullies, however, can fuel their fire. Arguably, this was always true, but the way that feedback loop operates online is distinct from how it worked in the past. Similarly, individual decisions to leave Detroit or Brussels or Marrakech to fight in Syria have global repercussions, and so demand a distinct response. The consequences might be direct, as when fighters themselves return from battlefields to conduct or otherwise facilitate terrorist attacks, or indirect, as when they serve as instruments of recruitment and radicalization. So the challenges posed by today’s online trolling and recruitment to extremism demand distinct responses suited to our digital era.

Clear examples exist on each end of a spectrum ranging from unorganized trolling to centrally planned recruitment campaigns. It is, however, a spectrum rather than a dichotomy—and a spectrum that can be difficult to mark with precision. For example, Alek Minassian, who rammed a van into a Toronto crowd in 2018, killing ten people apparently on the basis of his misogynistic “incel” beliefs, defies clear placement along this spectrum. And although alienated and disaffected nihilist school shooters have been inspired by one another at times, there is not an organized front of nihilists (by definition) seeking to attain a particular set of political objectives through these tragic shootings. Thinking about this spectrum as an analytic construct helps to inform effective responses.

One key distinction is between challenges arising from unorganized harmful behavior online and challenges cropping up from carefully calibrated efforts to use modern communications platforms for nefarious ends. And, while (as noted) this is more of a spectrum than a dichotomy, there is still a material difference between the organic efforts of, say, online trolls to harass and the far more directed strategy of terrorist groups like ISIS to recruit, radicalize, and mobilize followers. The difference involves the presence or absence of deliberate direction. Terrorist recruitment exhibits strategic direction: it organizes means in pursuit of particular ends, such as recruiting foreign fighters to the battlefield and inciting terrorist attacks globally, even if implementation is left to individuals carrying out that strategic direction. Harassment by trolls, by contrast, demonstrates organic massing: it often arises without a predetermined set of objectives and revels in the activity itself, as both means and end. Trolls can, of course, have leaders who coordinate trolling in a tactical fashion; but it is difficult to argue that such leaders are coherently striving towards a set of political ends—in contrast to terrorist leaders, who are doing just that, as odious as their ends and their means of achieving them are.

The distinction is not hard and fast—both individual and collective resilience can help in grappling with both types of problems. But the degree of organization behind violent imagery is central to formulating an effective response grounded in resilience as, at least often, organization on the part of those spreading violence will require organization on the part of those responding.

Trolling and the Resilience of Self-Confidence

Consider first the phenomenon of online trolling unleashed on Chelsea King’s suffering family, a phenomenon that provokes its victims as a form of sadistic humor rather than in pursuit of political aims. As Whitney Phillips, a communications scholar at Syracuse University, wrote in a landmark 2015 study:5

[T]rolls take perverse joy in ruining complete strangers’ days. They will do and say absolutely anything to accomplish this objective, and in the service of these nefarious ends deliberately target the most vulnerable—or as the trolls would say, exploitable—targets. Consequently, and understandably, trolls are widely regarded as the primary obstacle to a kinder, gentler, and more equitable Internet.

A few key points emerge from Phillips’ study of trolling.6 Trolls can inflict exceptional damage, emotional and otherwise, on their targets. Phillips finds that trolls are primarily motivated by a desire to have fun—to indulge in “perverse joy.” Phillips’ study documents, through extensive observation and interaction with trolls, that “trolls are motivated by what they call lulz, a particular kind of unsympathetic, ambiguous laughter.”7 As Phillips goes on to explain, where the rest of us might see tragedy and experience sympathy, “[a]ll trolls see—all they choose to see—are the absurd, exploitable details;” that is to say, “all that matter[s] is the punch line.”8

Phillips describes how, even as trolls shifted toward more stable online personas with the emergence of platforms like Facebook, they continued to act as a flash mob, grouping together spontaneously to troll different targets and then going their own way again.9 Note that this activity is best understood as “unorganized” in the sense that it is not deliberately organized. It is not, however, entirely random. Leaders sometimes emerge for particular campaigns of harassment, but their authority is generally minimal and contingent. Fellow trolls even sometimes work from the same script.10 But, ultimately, the activity proceeds without clear, ends-oriented direction from a recognized leader or authority figure.

Despite that lack of centralized coordination, Phillips argues—convincingly—that trolls pose a real and continuing challenge to the Internet. The British tabloid The Sun reported in May 2017 that groups of trolls on Facebook were offering money to those who generated the nastiest taunts of disabled children.11 This is hardly the Internet that was to make “the world . . . a better place,” in the words of Evan Williams, a founder of Twitter who now worries that “the internet is broken [a]nd it’s a lot more obvious to a lot of people that it’s broken.”12

The organic nature of trolling speaks to the power that properly cultivated resilience can have in tamping down trolling’s effects and momentum. Trolls “regard public displays of sentimentality, political conviction, and/or ideological rigidity as a call to trolling arms.”13 When sentimentality, conviction, and rigidity emerge in response to trolling itself, the trolls’ glee is especially pronounced. To recognize this is to acknowledge the power that individuals have to respond to trolling in ways that lessen rather than aggravate its impact. Understand what trolling is, and you’ll stop reacting to trolls in precisely the ways that energize the feeding frenzy. Stop encouraging trolls by providing them with a reaction, and they begin to lose the sense of mirth that motivates them. Stop covering trolls in mainstream media and thus amplifying their trollish voices, and the call to arms that energizes other trolls becomes quieter.

How do we start taking steps to minimize the effect of trolls? One way is through education: there seems to be value in deliberate programming, adapted (as all education must be) to local needs and context, in order to boost social-media literacy, which in modern society should become part of any well-rounded civic education. A second way is through affirmation: there is an increasing need for a sense of self that transcends one’s digital profile, and families and friends can help preserve one’s self-confidence even at moments of greatest vulnerability. A third way is to post reminders at key opportunities: just as signs along the highway offer periodic reminders to buckle seatbelts and avoid sending text messages while driving, the superhighways of today’s myriad communications platforms could trigger well-placed alerts and reminders for users in the interests of their and others’ health and safety. Imagine if an alert reminded a user to avoid re-posting stories without actually reading them in full and reflecting on them; or if a reminder helped a user to consider that online harassment has led to suicides. These interventions might, today, seem heavy-handed—but so, too, did early seatbelt requirements and warnings against drunk driving. Today, those are of course commonplace, and they continue to save lives. Moreover, automation, grounded in machine learning, can help to identify key opportunities where such reminders may be particularly valuable—including, for example, circumstances that are algorithmically identified as ripe for a descent into the spiral of trolling.
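To make the idea of algorithmically timed reminders concrete, here is a minimal sketch, in Python, of how a platform might decide when to surface such a prompt. It is illustrative only: the DraftPost fields, the keyword-based stand-in for a learned toxicity model, and the thresholds are assumptions, not a description of any platform’s actual system.

```python
# A minimal sketch, under stated assumptions, of the "well-placed reminder" idea:
# every name here (fields, keyword list, thresholds) is hypothetical and stands in
# for whatever classifier and product logic a platform might actually use.

from dataclasses import dataclass
from typing import Optional


@dataclass
class DraftPost:
    text: str                  # the reply or post the user is about to publish
    links_to_article: bool     # does the draft share a link to an article?
    author_opened_link: bool   # did the user actually open that link?


def estimated_toxicity(text: str) -> float:
    """Placeholder for a learned toxicity model; returns a score in [0, 1]."""
    hostile_markers = ("idiot", "kill yourself", "nobody cares")
    hits = sum(marker in text.lower() for marker in hostile_markers)
    return min(1.0, 0.4 * hits)


def reminder_for(post: DraftPost) -> Optional[str]:
    """Return a gentle prompt if the draft looks like a reshare-without-reading
    or trolling moment; return None if no reminder seems warranted."""
    if post.links_to_article and not post.author_opened_link:
        return "You haven't opened this article yet. Read it before sharing?"
    if estimated_toxicity(post.text) >= 0.6:
        return "This reply may read as harassment. Harassment has real costs."
    return None


if __name__ == "__main__":
    draft = DraftPost("what an idiot, nobody cares",
                      links_to_article=False, author_opened_link=False)
    print(reminder_for(draft))   # prints the harassment reminder
```

In a real deployment, the placeholder scoring function would be a trained model, and the wording and frequency of the prompts would themselves need testing to confirm that they reduce, rather than provoke, hostile replies.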

To be clear, none of this should be taken to suggest that individual resilience can singlehandedly address the persistence of trolling behavior online and the very real injury that it inflicts on its targets. Likewise, other similar, not deliberately organized online behavior, such as individual agitators’ sharing of fake news stories or deliberately provocative expressions of racism or homophobia, cannot be defeated by individual resilience alone. The platforms on which trolling occurs, which have at a minimum a responsibility to enforce their own sense of what their community should permit as manifested in their terms of service and community guidelines, must also be held accountable; and—of course—none of this should be understood to let the trolls themselves off the hook. But this discussion aims to underscore the power of resilience based in self-confidence in the face of unorganized online challenges such as trolling. This paper also invites a contrast with the type of resilience needed to take on organized challenges like terrorist recruitment online, to which the discussion now turns.

Terrorist Recruitment and the Resilience of Community Mobilization

The ways in which ISIS modernized terrorist recruitment were central to what the group would regard as its success. Indeed, the techniques ISIS pioneered are already serving as models for other terrorist groups. ISIS built on earlier online recruitment by al Qaeda in the Arabian Peninsula (AQAP) and al-Shabaab (an al Qaeda affiliate in Somalia). ISIS produced more content on social media, in a broader range of languages, than its predecessors. It also insisted on better production value and used consistent themes more deliberately. Exemplifying these traits is an ISIS video entitled “No Respite,” which relies on the same tools and tactics exploited in Hollywood trailers—a shift from violence to warmth, cued by subtle alterations in lighting and music—to entice those potentially sympathetic to ISIS’s cause.14

Charlie Winter, now a Senior Research Fellow at the International Centre for the Study of Radicalisation and Political Violence at King’s College London, has catalogued particular narratives ISIS has pushed on social media: brutality, mercy, victimhood, war, belonging, and utopia, to name a few.15 Winter shows how ISIS mixes and matches these themes in ways designed to boost the group’s overall appeal and respond effectively to real-world developments. For example, as ISIS gained territorial control, it relied more heavily on portrayals of its efforts to govern and provide services to local populations. As ISIS subsequently lost territory, its online propaganda emphasized the group’s purported victimhood and martyrdom.

ISIS’s reliance on this array of themes, taken together and consciously interwoven, attempts to make ISIS recruitment a form of radical “identity politics.” The interwoven themes provide a narrative about one’s relation to the world that, if adopted, becomes all-consuming. 

A sense of belonging is perhaps the most crucial of ISIS’s themes. As my former White House colleague Jen Easterly and I have written, ISIS’s most distinctive innovation was providing a sense of belonging to potential followers worldwide, a belonging grounded in the physical territory of ISIS’s so-called caliphate but resonating with many who would never travel there: “The Islamic State brings its portrayal of the marketplaces of Raqqa in Syria directly to computers and iPhones everywhere, its attacks in the cafes of Dhaka, Bangladesh, and the streets of Nice in France directly to the Facebook pages and Twitter feeds of the whole world.”16 This was, we explained, the well-organized, if perverted, brilliance of “the Islamic State’s manipulation of modern communications technologies to reach those who would feel alone, offer a sense of community, and provide inspiration and just enough direction to spark attacks that fulfill [the group’s] own strategic purposes.”17

This is not a strategy that can operate in unorganized fashion, à la trolling. To the contrary, this type of messaging consistency, fluency, and sophistication requires some degree of top-down direction and accepted leadership, even if it is then implemented by a diverse array of followers scattered around the globe. As David Patrikarakos describes it in his book War in 140 Characters, the “Islamic State had an army of networked individuals under its command, all with their own Twitter, Facebook, WhatsApp, and Skype accounts, who could act as individual broadcasters and recruiters”—but, to reiterate Patrikarakos’s apt characterization, they were still “under its command” and taking cues from ISIS’s leadership.18

Terrorists have, of course, tried to spread their messages rapidly and globally before. Half a century ago, European radicals like the Baader-Meinhof Gang fully expected images of their violence to be transmitted around the world by newspaper and television. Osama bin Laden made the most of the technology available to him by faxing to Western news agencies his denunciations of what he claimed were apostate regimes in the Middle East and their Western supporters. These groups could not “go direct” to the vast majority of their would-be followers: beyond those in close proximity with whom they might be able to share pamphlets or recordings or even reach via radio broadcast, they overwhelmingly relied on newspaper editors, radio broadcasters, and television news anchors to get their message right and then spread it. That’s no longer true for the likes of ISIS: via today’s communications platforms, ISIS can directly reach those who might take up the group’s battle cry.

A singularly driven exploitation of modern communications platforms for deadly ends demands, among other responses, the cultivation of resilience on the part of those who may be exposed to ISIS’s recruitment materials online—in other words, all of us who use the Internet. But boosting individual self-confidence hardly seems enough. One tends to know trolls when one sees them, and one can then choose to walk away. By contrast, ISIS’s recruitment strategy, thanks to its deliberate nature, is designed to be a wolf in sheep’s clothing. Users who are, at first, merely intrigued by ISIS videos or hashtag campaigns exhibiting warmth—and perhaps no violence whatsoever—find themselves simply “liking” such content, then even explicitly asking for more to be posted. That is the cue for ISIS’s recruiters to move from the deliberately open setting of social media and file-upload sites to the closed channels of encrypted communications offered by Signal or Telegram. Even then, the recruiters tend not to show their hand; instead, they pose as mere fellow travelers sharing a mutual interest in understanding what ISIS is all about. Only once a rapport and sense of trust have been established do they urge the potential recruit to consider joining ISIS in, say, Syria or Libya—or perhaps engaging in violence right in his or her own home country.

It would be naïve to think that boosting an Internet user’s self-confidence is sufficient when the tactics at work are so deliberate and choreographed, and especially when they are directed at the young and vulnerable. The challenge posed by terrorist recruitment online calls for a systematic response. That response should involve the government providing the private sector with information on the new trends and tactics that government experts are seeing from terrorist groups online, especially as those tactics cross platforms and thus escape the purview of any one company. That response should also involve augmenting cross-industry collaboration on identifying particular pieces of terrorism-related content, building on the existing exchange of such content itself and expanding it to include key information about the accounts associated with such content. And that response should involve the steadily refined use of new technologies like machine learning.

Efforts by social media companies to use machine learning to identify terrorist content more rapidly and to police their platforms for it are still in their infancy. Facebook, for example, reports that machine learning is helping the company to identify terrorist content before any user spots and reports it.19 All machine learning is only as good as how it is coded to learn. That means that the use of emerging technologies should be honed and refined so as to avoid, as much as possible, prohibiting (even if just until further review) content that is not actually terrorist content but is, for example, mainstream news coverage of terrorist acts. But, given that leading companies report the utility of machine learning in this context while also expressing solicitude for this concern, there are grounds for hope that technology can increasingly be one element of an enhanced response to the challenge.
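As a rough illustration of the kind of triage such systems perform, the sketch below routes high-confidence matches to immediate removal, sends ambiguous uploads (including anything that looks like journalistic coverage of terrorism) to human reviewers, and leaves the rest alone. The scoring and news-context functions and the thresholds are hypothetical stand-ins, not Facebook’s or any other company’s actual pipeline.

```python
# A hedged sketch of machine-assisted triage for suspected terrorist content.
# The score_terrorist_content() and looks_like_news_context() stubs and the
# thresholds are assumptions for illustration, not any company's real system.

from enum import Enum


class Action(Enum):
    REMOVE_PENDING_REVIEW = "remove now, keep reviewable"
    QUEUE_FOR_HUMAN_REVIEW = "queue for human review"
    LEAVE_UP = "leave up"


def score_terrorist_content(item: dict) -> float:
    """Placeholder for a trained classifier returning a probability in [0, 1]."""
    return item.get("model_score", 0.0)


def looks_like_news_context(item: dict) -> bool:
    """Placeholder signal that the upload resembles journalistic coverage."""
    return item.get("publisher_verified", False)


def triage(item: dict) -> Action:
    score = score_terrorist_content(item)
    if looks_like_news_context(item):
        # Possible news coverage is never auto-removed; humans decide.
        return Action.QUEUE_FOR_HUMAN_REVIEW if score >= 0.60 else Action.LEAVE_UP
    if score >= 0.95:
        return Action.REMOVE_PENDING_REVIEW
    if score >= 0.60:
        return Action.QUEUE_FOR_HUMAN_REVIEW
    return Action.LEAVE_UP


if __name__ == "__main__":
    print(triage({"model_score": 0.97}))                              # removed
    print(triage({"model_score": 0.97, "publisher_verified": True}))  # human review
```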

To be clear, machine learning on its own—like any technology on its own—will not solve the challenge posed by terrorist content online. But it does hold at least the potential to address two key aspects of the problem: scale and speed. Scale poses a challenge for the current heavy reliance on human review, given the sheer volume of activity on today’s leading social media and file-upload platforms. Speed poses an additional challenge insofar as, once a piece of terrorist content has been uploaded, even removing it almost instantly from that platform is an imperfect solution, given that the content can be rapidly saved by users and shared elsewhere. Machine learning can, at least in theory, help to address both scale and speed.

While machine learning will of course reflect any biases embedded in its initial programming and subsequent refinement, it need not be more biased than the human reviewers currently asked to apply the often vague language of various platforms’ terms of service to particular pieces of content uploaded to those platforms. Indeed, the current heavy reliance on rapidly increasing numbers of human reviewers holds the potential for wild inconsistencies in applying those terms, which machine learning could avoid. And machine-driven determinations, or at least representative samples of them, can and should be reviewed as quickly as feasible by human reviewers, so that determinations deemed erroneous can be corrected—and also so that the machine learning tools can themselves learn from their mistakes, once those are identified by human reviewers. At least theoretically, there need not be anything inherently more speech-promoting or speech-suppressing in relying on machine learning than in relying on human review—it all depends on how the machine performs versus how the human performs, and how both are evaluated and improved over time. If used responsibly, both approaches demand constant review and refinement—and both would benefit from transparency as to how key determinations are made, so that the public can understand and engage with those determinations. But at least machine learning holds the potential to keep up with the distinctive—and growing—challenges posed by the scale and speed of today’s technology platforms.
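The audit loop described above, in which representative samples of machine determinations are reviewed by humans and disagreements are both corrected and fed back to the model, might look something like the following schematic sketch. The 5 percent sample rate, the record fields, and the human_review callback are illustrative assumptions.

```python
# A schematic sketch of the sampled-audit loop: slices of the machine's decisions
# get human review, and disagreements are collected both to reverse errors and to
# serve as new training examples. All names and rates here are assumptions.

import random
from typing import Callable, Dict, List


def sample_for_audit(decisions: List[Dict], rate: float = 0.05) -> List[Dict]:
    """Pick a representative sample of automated decisions for human review."""
    if not decisions:
        return []
    k = max(1, int(len(decisions) * rate))
    return random.sample(decisions, k)


def audit(decisions: List[Dict],
          human_review: Callable[[str], str]) -> List[Dict]:
    """Compare sampled machine labels to human judgments; return disagreements
    so erroneous determinations can be corrected and the model retrained on them."""
    corrections = []
    for decision in sample_for_audit(decisions):
        human_label = human_review(decision["content"])
        if human_label != decision["machine_label"]:
            corrections.append({**decision, "human_label": human_label})
    return corrections


if __name__ == "__main__":
    queue = [{"content": "clip_001", "machine_label": "terrorist_content"},
             {"content": "clip_002", "machine_label": "allowed"}]
    # A trivial stand-in reviewer that treats everything as allowed.
    print(audit(queue, human_review=lambda content: "allowed"))
```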

So, too, does the augmentation of existing efforts to collaborate across industry to tackle these types of problems. One promising form that such augmentation could take would involve expanding from sharing information about particular pieces of terrorism-related content to sharing key information about the accounts associated with that content. This would allow other platforms to determine if they are hosting similar accounts engaged in terrorism-related violations of those platforms’ terms of service or community guidelines.
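One hedged way to picture that kind of cross-industry exchange is as a shared index of fingerprints: a platform that confirms a violation contributes a hash of the content and, in the expanded version, a signal about the associated account, and other platforms check new uploads against the index. Existing consortium efforts differ in their details; the exact-match hashing, in-memory sets, and function names below are assumptions for illustration.

```python
# A hedged illustration of cross-platform sharing of content and account
# indicators. Real industry hash-sharing arrangements differ in detail; the
# hashing choice, storage, and function names here are illustrative only.

import hashlib
from typing import Optional

SHARED_CONTENT_HASHES: set = set()   # fingerprints contributed by member platforms
SHARED_ACCOUNT_SIGNALS: set = set()  # e.g., hashed handles of accounts tied to removals


def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint; real systems often add perceptual hashing."""
    return hashlib.sha256(data).hexdigest()


def contribute(content: bytes, account_signal: Optional[str] = None) -> None:
    """A platform shares indicators after confirming a terms-of-service violation."""
    SHARED_CONTENT_HASHES.add(fingerprint(content))
    if account_signal:
        SHARED_ACCOUNT_SIGNALS.add(account_signal)


def check_upload(content: bytes, account_signal: str) -> bool:
    """Another platform checks a new upload and account against shared indicators."""
    return (fingerprint(content) in SHARED_CONTENT_HASHES
            or account_signal in SHARED_ACCOUNT_SIGNALS)


if __name__ == "__main__":
    contribute(b"propaganda_video_bytes", account_signal="hash-of-recruiter-handle")
    print(check_upload(b"propaganda_video_bytes", "hash-of-unknown-handle"))  # True
```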

Addressing this challenge more effectively also means, as noted above, the U.S. government expeditiously sharing with the private sector new trends and tactics associated with terrorist activity online, so that companies can better understand them. These might include new themes terrorist groups are emphasizing in their latest recruitment materials and how those reflect developments in the physical world; new platforms on which particular terrorist groups are operating and how they are directing users to those platforms and away from established ones; and new methods of rapidly regenerating suspended accounts and the followers associated with them. The government already informs a wide array of private sector actors about threats to critical infrastructure and cybersecurity—often without mandating any particular response. Technology companies seem a natural recipient of increasing government-led insights into how terrorists are using online platforms to recruit and radicalize followers. These insights could include how terrorists are coordinating their activities across platforms; what the relationship is between recruitment efforts “in the open” (such as on Twitter and YouTube) and recruitment efforts in encrypted channels (such as on Surespot and Telegram); and how developments in the physical world, including on the battlefield, appear to relate to new messaging themes online. A number of checks would be important to ensure that the government does not abuse this information flow to characterize actors as “terrorists” when they are not, including appropriate review and auditing within the executive branch as well as the ability of a user to pursue an appeal within the technology companies themselves. Ultimately, one key check would be the continuing ability of technology companies simply to decline to act on information provided by the government that the companies recognize to be inappropriate or irrelevant.

Concluding Thoughts: A Range of Challenges and a Range of Resilience

Trolling and terrorist recruitment are, of course, not the only challenges facing modern communications platforms—other challenges include foreign countries interfering with domestic elections and democratic dialogue more generally; the related spread of fake and polarizing news stories by inflammatory actors both foreign and domestic; and the sharing of user data in ways that intrude on users’ privacy. But online trolling and terrorist recruitment represent two types of such challenges that are different from each other in key ways. One category, epitomized by trolling, involves spontaneous, generally unorganized behavior. The resilience of self-confidence is critical to addressing this challenge: a sense of individual self-worth that can be cultivated through education, training, and well-placed reminders, and then maintained by each of us. The second category, epitomized by terrorist recruitment, features deliberately organized and crafted use of modern communications platforms. The resilience of community mobilization is essential here: this category demands policy responses and perhaps even legal responses that involve, at their core, technology companies and governments working together. Recognizing the distinction in how bad actors use emerging technologies to cause harm is a critical first step in understanding who should be expected to lead the response and what they can do to help—as well as the concrete steps that must be taken to combat such harm.

Footnotes

1 - This series of events is described in Whitney Phillips, This Is Why We Can’t Have Nice Things (Cambridge: MIT Press, 2015), pp. 74-75.

2 - This series of events is described in Jytte Klausen, “Tweeting the Jihad: Social Media Networks of Western Foreign Fighters in Syria and Iraq,” Studies in Conflict & Terrorism, Volume 38, Issue 1, pp. 4-20.

3 - Peter Bergen and David Sterman, “When Americans Leave for Jihad,” CNN.com, August 29, 2014, http://www.cnn.com/2014/08/27/opinion/bergen-sterman-isis-american/index.html.

4 - See, for example, Kalev Leetaru, “Can AI Rescue Us from Violent Images Online,” Forbes.com, April 29, 2017, https://www.forbes.com/sites/kalevleetaru/2017/04/29/can-ai-rescue-us-from-violent-images-online/#1ced092c2570.

5 - Phillips (2015), p. 10.

6 - The discussion here focuses on what might be called traditional trolls, and not so-called “troll farms” like those organized by the Russian Government to interfere with America’s 2016 presidential election. The planned, directed activities of such farms more closely resemble the online activities of terrorist groups than they do the trolls with whom they share a name—misleadingly, at least in this respect. See Joshua A. Geltzer, “Stop Calling Them ‘Russian Troll Farms,’” CNN.com, August 17, 2018, https://www.cnn.com/2018/08/17/opinions/stop-calling-russian-operatives-troll-farms-geltzer/index.html.

7 - Phillips (2015), p. 24.

8 - Phillips (2015), p. 29.

9 - See Phillips (2015), pp. 77-80.

10 - See Phillips (2015), p. 4.

11 - Ellie Flynn, “Cash for Trash: Sick Facebook Troll Groups Are Offering Money to the Nastiest Bullies Who Taunt Disabled Kids Including Harvey Price,” The Sun, May 2, 2017, https://www.thesun.co.uk/news/3458996/facebook-troll-groups-harvey-price/.

12 - David Streitfeld, “‘The Internet Is Broken’: @ev Is Trying to Salvage It,” The New York Times, May 20, 2017, https://www.nytimes.com/2017/05/20/technology/evan-williams-medium-twitter-internet.html.

13 - Phillips (2015), p. 25.

14 - See “New Video Message from The Islamic State: ‘And No Respite,’” Jihadology, November 24, 2015, https://jihadology.net/2015/11/24/new-video-message-from-the-islamic-state-and-no-respite/.

15 - Charlie Winter, “The Virtual ‘Caliphate’: Understanding Islamic State’s Propaganda Strategy” (London: Quilliam, 2015), http://www.stratcomcoe.org/download/file/fid/2589.

16 - Jen Easterly and Joshua A. Geltzer, “The Islamic State and the End of Lone-Wolf Terrorism,” Foreign Policy, May 23, 2017, https://foreignpolicy.com/2017/05/23/the-islamic-state-and-the-end-of-lone-wolf-terrorism/.

17 - Easterly and Geltzer (2017).

18 - David Patrikarakos, War in 140 Characters: How Social Media Is Reshaping Conflict in the Twenty-First Century (New York: Basic Books, 2017), p. 209.

19 - See, for example, Monika Bickert and Brian Fishman, “Hard Questions: How We Counter Terrorism,” Facebook Newsroom, June 15, 2017, https://newsroom.fb.com/news/2017/06/how-we-counter-terrorism/.

Joshua A. Geltzer

Joshua Geltzer serves as the founding Executive Director of the Institute for Constitutional Advocacy and Protection as well as a Visiting Professor of Law at Georgetown University Law Center. He is also an ASU Future of War Fellow at New America.

Geltzer served from 2015 to 2017 as Senior Director for Counterterrorism at the National Security Council staff, having served previously as Deputy Legal Advisor to the National Security Council and as Counsel to the Assistant Attorney General for National Security at the US Department of Justice. He also served as a law clerk to Justice Stephen Breyer of the US Supreme Court and, before that, as a law clerk to Chief Judge Alex Kozinski of the Ninth Circuit Court of Appeals.

Geltzer received his JD from Yale Law School, and his PhD in War Studies from King’s College London, where he was a Marshall Scholar. Before that, he attended Princeton University, majoring in the Woodrow Wilson School of Public and International Affairs.

He is the author of US Counter-Terrorism Strategy and al-Qaeda: Signaling and the Terrorist World-View (Routledge, 2009) and his work has appeared in the Atlantic, Foreign Policy, Parameters, Politico, Studies in Conflict & Terrorism, the Journal of Constitutional Law, the Berkeley Journal of International Law, and the Washington Post.
