
Can machines persuade?

For as long as we can remember, persuasion has been the province of gods and humans alone. That may be about to change.

Earlier this week, I was a guest of Hamil Harris, the award-winning journalist, at one of his classes at the University of Maryland. During my talk, a student asked me a question I had not previously considered: Is persuasion something that only humans can do? I replied that the answer depends on whether one believes in God. If he did believe, then the answer was no, since all major world religions offer examples of a divinity attempting to persuade. If he did not, then the answer was yes, since any example of divine persuasion was ultimately invented by a human being.


After hearing my response, he agreed that to date only humans (and perhaps gods) have persuaded. But he then asked whether a machine could one day persuade. He mentioned deepfakes but soon acknowledged that while these artifacts are created with machines, they are not created by machines. I did note that if we believe some of the most daring claims made about Artificial Intelligence (AI), then we may be about to arrive at a time when a machine that had achieved some kind of “cognition” could decide to persuade us of something. Indeed, we have seen this scenario imagined in films such as 2001: A Space Odyssey and the Terminator series. In the former, the ship’s computer, HAL, decides to kill the astronauts on board and justifies his decision with the argument that “This mission is too important for me to allow you to jeopardize it.” In the latter, a machine from the future returns to persuade the film’s heroine that her son is destined to free humanity.

Though machines do not persuade today, perhaps in the near future technologies such as neural networks will become sophisticated enough to convince human beings to adopt ideas or take actions that we cannot yet imagine. In 1985, Ray J. Solomonoff, the inventor of algorithmic information theory, postulated that seven developmental milestones needed to be achieved before AI could be fully realized:

  1. The creation of AI as a field, the study of human problem solving (also known as cognitive psychology), and the development of large parallel computers (similar to the human brain).

  2. A general theory of problem-solving that consists of machine learning, information processing and storage, methods of implementation, and other novel concepts.

  3. The development of a machine that is capable of self-improvement.

  4. A computer that can read almost any collection of data and incorporate most of the material into its database.

  5. A machine that has a general problem-solving capacity near that of a human in the areas for which it has been designed (e.g., mathematics, science, industrial applications).

  6. A machine with a capacity near that of the computer science community.

  7. A machine with a capacity many times that of the computer science community.

We might add an eighth milestone to Solomonoff’s list: A machine with the capacity to persuade humans to act or believe in a specific way.

Interestingly, in 2021, a team of researchers published a paper in Nature describing an IBM experiment called Project Debater. The team attempted to build an AI machine that could win a debate against an expert human debater, following the structured format illustrated in Figure 1 below:


Figure 1: Project Debater debate format. Source: Nature

The AI machine built its debate strategy around a series of invented and “mined” arguments and evidence, mirroring the way chess computers mine past matches to find winning strategies. But unlike other game-winning IBM AI machines such as Watson, the Project Debater computer lost its debate against the expert human.
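
To make the idea of “mining” concrete, here is a deliberately naive Python sketch. Everything in it (the tiny corpus, the mine_arguments function, the keyword-overlap scoring) is invented for illustration and bears no resemblance to IBM’s actual pipeline, which reportedly searched an enormous newspaper archive with far more sophisticated models:

    import re

    # Hypothetical mini-corpus; a real system would search millions of articles.
    CORPUS = [
        "Studies suggest that preschool subsidies improve long-term outcomes.",
        "Critics argue that preschool subsidies distort local childcare markets.",
        "The weather in Lisbon was pleasant throughout October.",
        "Experts claim that early preschool education reduces later social costs.",
    ]

    # Words that often signal an argumentative claim (a deliberately crude heuristic).
    CLAIM_MARKERS = {"suggest", "argue", "claim", "show", "demonstrate"}

    def mine_arguments(topic: str, corpus: list[str]) -> list[tuple[int, str]]:
        """Score each sentence by topic-word overlap, plus a bonus for claim markers."""
        topic_words = set(re.findall(r"\w+", topic.lower()))
        scored = []
        for sentence in corpus:
            words = set(re.findall(r"\w+", sentence.lower()))
            score = len(topic_words & words) + (1 if words & CLAIM_MARKERS else 0)
            if score > 0:
                scored.append((score, sentence))
        return sorted(scored, reverse=True)

    # Example: mine candidate arguments for a debate topic.
    for score, sentence in mine_arguments("we should subsidize preschool", CORPUS):
        print(f"{score}: {sentence}")

The point of the toy is only the shape of the task: given a debate topic, retrieve sentences that look like claims or evidence about it, and then assemble the strongest of them into speeches.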


Project Debater debate and post-debate discussion. Source: IBM

Reflecting on their approach and results, the research team concluded that persuading in a debate is a very different task from winning a game of chess, and that the techniques developed for the latter do not carry over to the former. The researchers note that debate:

…requires an advanced form of using human language, one with much room for subjectivity and interpretation. Correspondingly, often there is no clear winner. Moreover, even if we had a computationally efficient “oracle” to determine the winner of a debate, the sheer complexity of a debate — such as the amount of information required to encode the “board state” or to enumerate all possible “moves” — prohibits the use of contemporary game-solving techniques. In addition, it seems implausible to win a debate using a strategy that humans can fail to follow, especially if it is the human audience which determines the winner.
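
It may help to see what those “game-solving techniques” actually look like. The short Python sketch below solves a toy take-away game (a Nim variant, chosen purely for brevity; the example is mine, not the paper’s) by exhaustively enumerating every legal move, which is exactly what the researchers say debate rules out: a debate offers no compact board state to pass in, no finite move list to loop over, and no unambiguous win condition for the base case.

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def can_win(stones: int) -> bool:
        """True if the player to move can force a win in a toy take-away game:
        players alternate removing 1-3 stones; whoever takes the last stone wins."""
        if stones == 0:
            return False  # no move available: the previous player took the last stone and won
        # Classic game-tree search: enumerate every legal move and recurse.
        return any(not can_win(stones - take) for take in (1, 2, 3) if take <= stones)

    for n in range(1, 10):
        status = "winning" if can_win(n) else "losing"
        print(f"{n} stones: a {status} position for the player to move")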

As a result of their analyses, the authors conclude that “the challenge taken by Project Debater seems to reside outside the AI comfort zone, in a territory where humans still prevail, and where many questions are yet to be answered.”


At present, persuasion in a debate seems to lie beyond the abilities of even the most advanced AI machines. Yet while that milestone is far off, perhaps we already see hints of machine persuasion when our iPhone nudges us to exercise or a car determines it needs to be serviced. Today these alerts are scripted by human programmers, but it is not difficult to imagine a future in which a machine independently decides to make a suggestion (about our health, our investments, or which news stories to read) and gives us reasons to accept it.


If and when this happens, it will be interesting to see whether the rules of persuasion, which have persisted for more than 2,000 years, will have to be amended. For now, persuasion remains an exclusively human activity, as it has been ever since we began to communicate with one another.

You can listen to an interview with the Project Debater lead engineer, Noam Slonim (and others), here:


PS: Last week I wrote that the first signs out of Russia were that Putin’s propaganda campaign was failing to persuade his own people. The first articles confirming this initial analysis have started to appear:
