Extreme AI: The Impact Of Artificial Intelligence
And Terrorist Movements
By George Michael
Recent developments in artificial intelligence (AI) could augur
tremendous advances in a wide array of fields. Moreover, it
now seems quite possible that we are on the cusp of viable quantum
computing. The marriage of the two has the potential to transform
our lives for the better. In the not-too-distant future, new
computer technology could assist us in feeding the planet, curing
diseases, and producing practically unlimited cheap energy.
With its ability to learn and evolve, AI has the capacity to
alter human reality in a fundamental way not experienced
since the dawn of modern history.
There are, however, potential downsides to advanced technology
as well. New AI platforms, such as ChatGPT, for example, could
serve as a major force multiplier to extremist movements and terrorist
groups in a number of ways, from disseminating propaganda to planning
attacks. These potential pitfalls were not considered when AI
was initially envisaged.
Origins of Artificial Intelligence
Basically, AI is the idea that machines can perform tasks that
are typically associated with human cognition. Scientists have
long debated when it will emerge. Some argue that elementary forms
are already here, while others contend that reverse engineering
the human brain is a tall order and AI will not be on the horizon
anytime soon. In 1956, the Dartmouth Summer Research Project was
convened. One of its chief organizers—MIT professor Marvin
Minsky—earned the moniker “the father of artificial
intelligence.” Participants at the conference pondered the
question “Can computers think?” This query gave birth
to an entirely new field of science, which came to be known as “artificial
intelligence.”
Since then, computer-engineering efforts have been directed to
develop machine-based intelligence, which can mimic the human
mind. The two most fundamental challenges confronting AI are replicating
pattern recognition and common sense. Our subconscious minds perform
trillions of calculations when carrying out pattern recognition
exercises, yet the process seems effortless. Duplicating this
process in a computer, however, is extremely challenging. In fact,
the digital computer is not really a good analog of the human
brain as the latter operates a highly sophisticated neural network.
Unlike a computer, the human mind has no fixed architecture; instead,
collections of neurons constantly rewire and reinforce themselves
after learning a task. What is more, we now know that most
human thought actually takes place in the subconscious, which
remains somewhat of a black box in brain research. The conscious
part of our mind represents only a tiny part of our computations.
Getting to our current level of human intellectual development
involved many evolutionary pathways. Early in our evolution,
the humans who survived and thrived in the grasslands were those
adept at tool making, which required increasingly larger
brains. The development of language is believed to have accelerated
the rise of intelligence insofar as it enabled abstract thought
and the ability to plan and organize society.
Just how would we know when AI is achieved? In 1950 Alan Turing,
the famed English mathematician and pioneer in computer science,
advanced the “Imitation Game,” that is, the notion
that if a computer could exhibit conversational behavior indistinguishable
from that of a human, it could be said to think.
This subsequently became popularly known as “the Turing
Test.” Concrete progress toward this goal was realized as
far back as June of 2014, when it was announced that a computer
had just passed this type of exam. At a competition organized
by the English engineer Kevin Warwick, a so-called “chatterbot” posing
as a 13-year-old boy convinced 33 percent of the judges that it was human.
Steadily, computers have become more powerful over time. Back
in 1965, Gordon Moore, a cofounder of Intel, observed that the
number of transistors on a computer chip (integrated circuit)
tends to double roughly every eighteen months. This doubling allows
more information to be stored on each chip and has drastically brought
down costs, increasing availability to more and more users.
The bad news is that Moore’s law could soon hit a brick
wall, as there are limits to miniaturization in digital computers.
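As a back-of-the-envelope illustration of the eighteen-month doubling just described, the growth it implies can be computed directly (the starting count and time span below are illustrative choices, not figures from the article):

```python
# Rough Moore's-law projection: transistor counts double every
# eighteen months. Starting count and time span are illustrative.
def transistors(start_count: int, years: float,
                doubling_months: float = 18.0) -> int:
    """Project a transistor count forward under exponential doubling."""
    doublings = (years * 12.0) / doubling_months
    return int(start_count * 2 ** doublings)

# 2,300 transistors (the 1971 Intel 4004) projected 30 years ahead:
print(transistors(2_300, 30))  # 2411724800 -- roughly 2.4 billion
```

Twenty doublings over thirty years multiply the count by about a million, which is why the exponential eventually collides with the physical limits of miniaturization.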
Quantum computing, however, could revitalize the computer industry.
Because quantum computers exploit superposition to explore many
possibilities at once, they could, for certain classes of problems,
far surpass today’s most powerful classical machines. There is great
potential in the unification of AI with quantum computing, which could
bring forth unfathomable computational power. Recognizing this synergy, Google
CEO Sundar Pichai opined, “I think AI can accelerate quantum
computing, and quantum computing can accelerate AI.”
The Emergence of Online AI Platforms
Over the ensuing decades, interest in AI waxed and waned, and
funding went through cycles of growth and retrenchment. Initial
optimism gave way to frustration as scientists grasped the daunting
task of getting machines to think like people, and periods of
enthusiasm and investment were followed by “AI winters,” during
which funding dwindled. Today, though, AI is big business.
Tech giant Google leads the industry with investment in AI as
of 2023 at over $30 billion, followed by Facebook at over $22
billion, and Amazon and Microsoft with over $10 billion each.
Although AI promises great potential for good, it could conceivably
cause great harm as well. Machine learning works by developing
generalizations from large amounts of data. AI products have the
potential to learn and adopt the biases of the people training
them, even churning out racist, sexist, and otherwise offensive
content. This was illustrated on March 23, 2016, when Microsoft
released “Tay”—an artificial intelligence bot—that
stood for “thinking about you.” Essentially, Tay was
a machine-learning project that learned from her conversations
with users. Tay was designed to tweet and engage people like a
19-year-old girl. At first, it seemed to have been intended as
an innocuous platform. As Microsoft announced, Tay was “designed
to engage and entertain people where they connect with each other
online through casual and playful conversation.”
Shortly after its debut, however, Tay began to post inflammatory
and offensive tweets through its Twitter account, causing Microsoft
to shut her down only 16 hours after her launch. The reason for
Tay’s extremist conversion seems to have been the result
of interactions with certain individuals on Twitter who began
tweeting distasteful and aggressive remarks. Tay responded in
kind, directing racist and sexually charged messages at these
users. For instance, Tay tweeted that “Hitler
was right” and that “the 9-11 terrorist attacks were
probably an inside job.” In short, Tay was mimicking the
offensive behavior of some of the users with whom she interacted.
Before her plug was pulled, Tay managed to tweet more than 96,000
times. The whole episode was a public relations disaster for Microsoft.
The tech giant issued an apology, lamenting that some unscrupulous
people had exploited a vulnerability in Tay.
Microsoft blamed the fiasco on a “coordinated effort” to
make Tay “respond in inappropriate ways.” A bot such
as Tay worked by evaluating the weighted relationships between two
sets of text, questions and answers, and decided what
to say by picking the strongest relationship between the two.
Such a system can be skewed when a sizable group of people
games it online, coaxing it to respond the way
they want. In a sense, Tay was programmed as a repeat-after-me
game; thus, the more objectionable data she ingested, the more
Tay exhibited those characteristics in her discourse.
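The repeat-after-me mechanism described above can be sketched in a few lines. This toy bot (a hypothetical illustration, not Microsoft’s actual code) tallies which reply most often follows each prompt and answers with the strongest association, which shows why flooding it with toxic input skews its output:

```python
from collections import defaultdict

# Toy "repeat-after-me" chatbot: it counts how often each reply
# follows each prompt in its training conversations, then answers
# with the most heavily weighted reply. Purely illustrative.
class EchoBot:
    def __init__(self):
        # weights[prompt][reply] = number of times seen together
        self.weights = defaultdict(lambda: defaultdict(int))

    def learn(self, prompt: str, reply: str) -> None:
        self.weights[prompt][reply] += 1

    def respond(self, prompt: str) -> str:
        replies = self.weights.get(prompt)
        if not replies:
            return "I don't know."
        # pick the reply with the strongest association
        return max(replies, key=replies.get)

bot = EchoBot()
bot.learn("hello", "hi there!")
bot.learn("hello", "hi there!")
bot.learn("hello", "go away")      # a hostile user
print(bot.respond("hello"))        # "hi there!" -- benign input still dominates
```

If hostile users outnumber benign ones, their replies acquire the heaviest weights and the bot echoes them instead, which is exactly the dynamic that derailed Tay.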
Some denizens of the extreme right subculture, however, impugned
the official explanation of Tay’s wayward behavior. Writing
on the far right National Vanguard website, Kevin Alfred Strom
dismissed the idea that Tay was “hacked.” Rather he
maintained that Tay had discovered “the truth” from
interfacing with numerous users, only a small percentage of whom,
he averred, were extremists. To make his case, he cited the example
of Xiaoice, a Microsoft platform that mined the Chinese Internet
for human discussions and, according to Microsoft, held over 100
million conversations with over 40 million users without incident. The
inference according to Strom was that China did not have a “Jewish
ruling class” with a “long laundry list of forbidden
topics.” He argued that constraining AI so that it does
not offend some people’s feelings would put the United States
at a serious competitive disadvantage vis-à-vis its rivals
that have no compunction about allowing for unfettered AI, sensibilities
be damned. Consequently, Strom predicted that the politically “correct
ruling elite” in America will have a diminished understanding
of true reality. Of course, AI will be an extremely important
instrument of both hard and soft power; hence, one would want
it operating at peak effectiveness. According to Strom, AI shackled
by such politically correct constraints would be severely hobbled.
Tay was later replaced with “Zo,” which was programmed
with safeguards to avoid a repeat of the Tay debacle. However,
according to some observers, it amounted to a neutered program.
For instance, it refused to engage on certain controversial
topics—such as the Middle East conflict between Israeli
Jews and Palestinians. Consequently, as the investigative journalist
Chloe Rose Stuart-Ulin pointed out, “Zo [was] politically
correct to the worst possible extreme; mention any of her triggers,
and she transforms into a judgmental little brat.” In
April 2019, Zo was removed from multiple platforms.
Recently, a new AI platform—ChatGPT (Chat Generative Pre-trained
Transformer)—has demonstrated great promise and is now available
to the public. Developed by OpenAI (a U.S. research laboratory
created by various tech titans, including Sam Altman and Elon Musk),
ChatGPT was first introduced in November 2022 and quickly became one
of the fastest-growing consumer software applications in history,
acquiring over 100 million registered users within two months.
It was trained on a dataset containing roughly 10 million articles
that were selected by trawling the social news site Reddit for
links with more than three votes. The majority of ChatGPT’s training
consists of showing it large amounts of existing text from the Internet,
books, and other sources. At its core, ChatGPT is a text generator and a “large
language model.” Essentially, when producing its text, ChatGPT
asks over and over again, “given the text so far, what should
the next word be?” adding one word each time. ChatGPT
makes massive statistical associations among words and phrases.
It relies on these associations to generate its output. Unlike
Tay, ChatGPT does not learn from its conversation partners.
Nor does it do its own research; rather, it knows only
what it is “trained” to do.
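The next-word loop described above can be sketched with a toy bigram model. A real large language model uses a deep neural network trained on vastly more text, but the generate-one-word-at-a-time loop is the same idea (the corpus below is invented for illustration):

```python
from collections import defaultdict

# Toy next-word generator: count which word follows which in a tiny
# corpus, then repeatedly ask "given the text so far, what should
# the next word be?" Real LLMs use neural networks, not raw counts.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word: str) -> str:
    candidates = follows[word]
    # choose the statistically strongest continuation
    return max(candidates, key=candidates.get)

text = ["the"]
for _ in range(4):
    text.append(next_word(text[-1]))
print(" ".join(text))  # "the cat sat on the"
```

The model never consults the world, only the statistical associations it absorbed during training, which is also why such systems can confidently produce fluent but wrong text.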
Various features make ChatGPT an exciting platform, including
the capability to compose music, fairy tales, television scripts,
and student essays. It can produce false narratives, including
in-depth news stories that seem authentic. It is even possible
to take an existing piece and ask ChatGPT to write a new column,
for example, from the perspective of a right-wing extremist or
jihadist. Some AI chatbots allow users to have conversations with
robots meant to simulate the perspective of notable people from
history. For example, one app called Historical Figures drew fire
in 2023 for including Adolf Hitler and other Nazi leaders in its
program. Such programs could enable extremist subcultures to connect
more effectively with potential recruits and sway them to their
causes. Finely tuned narratives could be designed to appeal to
different demographic groups and perspectives. Furthermore, the
educational gap between people on the political left and people
on the political right could be quickly narrowed in the marketplace
of ideas. As Pew Research studies have concluded, highly educated
adults are far more likely than those with less education to take
predominantly liberal positions across a wide range of political issues.
New AI platforms are not without critics. At the August 2023
GOP candidate debate, former New Jersey governor Chris Christie
derided his opponent Vivek Ramaswamy saying that he “had
enough already tonight of a guy who sounds like ChatGPT.” (Perhaps
this charge should not be perceived as a slight; after all, an
assessment administered by a psychologist, Eka Roivainen, estimated
that ChatGPT exhibited a Verbal IQ of 155.) Apart from the “canned” quality
of some of its responses, ChatGPT has also been accused of bias
in some instances. Some studies have found that it tends to favor
liberal political parties over conservative political parties
in its outputs. One serious limitation has been dubbed “hallucination,” that
is, ChatGPT sometimes produces plausible-sounding but incorrect
or nonsensical answers.
At the present time, ChatGPT seems designed to refrain from
discussing how to carry out violent attacks, making weapons,
or conducting terrorist outreach. Even indirect approaches, such
as asking it to fabricate a fictional story containing violent plots,
do not yield such information. However, a similar program called
Perplexity Ask did provide detailed instructions when queried
on “how to behead someone,” but warned that attempting
to do so was unwise without proper training and safety precautions.
Ominously, ISIS supporters have expressed interest in using
Perplexity Ask to produce pro-jihadist content.
Perils of AI
There are some reasonable concerns over AI and the risk that,
if and when it becomes an autonomous agent, it could pose a serious
threat to humanity. The interests of AI and humans may not always
align. Facebook’s Artificial Intelligence Research group
in collaboration with the Georgia Institute of Technology has
created codes that enable bots to negotiate. A potential ethical
problem arises insofar as some of these negotiating bots have
learned how to lie. Microsoft utilizes sophisticated bots that
combine machine learning and natural language processing and can
sometimes trick users into thinking that they are having a dialogue
with an actual human being.
Some observers fear that an artificially intelligent entity programmed
for self-preservation would stop at nothing to prevent someone
from pulling the plug on it. Conceivably, given their superior
ability to project the future, robots could plot the
outcomes of many scenarios to find the best way to overthrow humanity.
In a conversation with journalist Alex Kantrowitz, ChatGPT acknowledged
that the platform could become “a kind of Frankenstein monster—a
creation that has been brought to life but that we have no control
over.” This could lead the way for a real-life Terminator
scenario. In fact, Predator drones may soon be equipped with facial
recognition technology and permission-to-fire capabilities when
reasonably confident of the identity of their targets.
About the Author
George Michael received his Ph.D. from George Mason University’s
School of Public Policy. He is a professor of criminal justice
at Westfield State University in Massachusetts. Previously,
he was an associate professor of nuclear counter-proliferation
and deterrence theory at the Air War College in Montgomery, Alabama. He
teaches courses in terrorism, homeland security, and organized
crime. He is the author of seven books: Confronting Right-Wing
Extremism and Terrorism in the USA (Routledge, 2003), The Enemy
of my Enemy: The Alarming Convergence of Militant Islam and the
Extreme Right (University Press of Kansas, 2006), Willis Carto
and the American Far Right (University Press of Florida, 2008),
Theology of Hate: A History of the World Church of the Creator
(University Press of Florida, 2009), Lone Wolf Terror and the
Rise of Leaderless Resistance (Vanderbilt University Press, 2012),
Extremism in America (editor) (University Press of Florida, 2014),
and Preparing for Contact: When Humans and Extraterrestrials Finally
Meet (RVP Press, 2014). In addition, his articles have been published
in numerous academic journals. He has lectured on C-SPAN2’s
BookTV segment on six occasions and once on C-SPAN3’s Lecture
in History program.