Artificial Intelligence is the newest weapon for terrorists to add to their arsenal. It is especially effective at grooming and recruiting the most vulnerable people - those who are lonely, depressed, or struggling with other psychological issues that leave them eager for a chatbot to talk to and befriend. As it is, countless lone wolves have been radicalized online. Imagine how much more convincing a friendly chatbot spewing terrorist propaganda could be.

You will hear why demands are growing to regulate AI before it becomes like Frankenstein’s monster - an out-of-control creation threatening the human race. You will also hear about some of the forms of AI that are especially useful to terrorists. Ads for chatbots appeal to the very people ISIS, Al Qaeda, and other terrorist organizations target for recruitment. These ads say, “Need a friend? Talk to a chatbot….”

Then you’ll hear three real-life examples of young men who were radicalized and recently convicted of terrorism - from the UK to Kansas: Matthew King, Ali Abdillahi, and Andrew Dade Patterson. Fortunately, they were stopped in the nick of time by authorities who were tipped off by family members and by the men’s online posts, before they could do grave damage. These cases will help you envision how AI could have radicalized them faster and accelerated their plans for terror attacks - attacks that could have left countless victims dead and injured.