Dale Musser’s new office in Naka Hall is still sparse. Like his office, the future of his new computer program, [Debate Analyzer](http://debateanalyzer.com), is open to change.

The program examines debate transcripts and measures the emotional tones the words connote. For example, in Trump’s “bad hombres” statement, the program indicated mostly “conscientiousness,” “confident” and “emotional range.”
This new program is not entirely Musser’s own creation, though. He uses IBM Watson’s Tone Analyzer tool, which runs text through an algorithm that determines the emotions those words represent.
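For readers curious about the plumbing: the Tone Analyzer is an HTTP service, so a program like Musser’s can send it a block of text and get back named tones with scores. The sketch below is an illustration of that round trip, not Musser’s actual code; the endpoint, version date and response fields follow IBM’s documentation from around the time of the debates and may have since changed, and the credentials are placeholders.

```python
# A minimal sketch of calling IBM Watson's Tone Analyzer over HTTP.
# Endpoint, version date and response fields follow IBM's 2016-era docs
# and may have changed since; credentials below are placeholders.
import requests

TONE_URL = "https://gateway.watsonplatform.net/tone-analyzer/api/v3/tone"

def analyze_tone(text, username="YOUR_USERNAME", password="YOUR_PASSWORD"):
    """Send a block of text to the Tone Analyzer and return its tone scores."""
    response = requests.post(
        TONE_URL,
        params={"version": "2016-05-19"},
        json={"text": text},
        auth=(username, password),
    )
    response.raise_for_status()
    result = response.json()
    # The document-level result lists categories (emotion, language style,
    # social tendencies), each containing named tones scored from 0 to 1.
    scores = {}
    for category in result["document_tone"]["tone_categories"]:
        for tone in category["tones"]:
            scores[tone["tone_name"]] = tone["score"]
    return scores

# Example: print the three strongest tones for a single debate statement.
# print(sorted(analyze_tone("...").items(), key=lambda kv: -kv[1])[:3])
```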
Musser has been a computer science professor at MU since 2008. This fall, he also joined the School of Journalism, where he helps with mobile app development and was named the Reynolds Journalism Institute’s chief technology adviser.
He originally wanted to figure out a good way to demonstrate to his journalism students how to use the IBM Watson service.
“The night I was creating the demo was the night of the first debate,” Musser said. “So I finished writing the code to analyze a block of text, and I’m like, ‘What should I analyze?’ I heard Trump say something really stupid, and then I thought, ‘Debate text — that’ll be interesting.’”
He took a few quotes from online live reports of that night’s debate and put them into the program. He became interested in how the program would interpret the entire debate, not just those few comments.
“In the particular piece, it was labeled as very angry and sad,” Musser said. “And I’m like: ‘Is the whole thing angry and sad, or are there good times and bad times? How does this flow?’”
The next morning, when the full transcripts were put online, he started running entire debates through the program. He indicated to RJI Executive Director Randy Picht that this program might be interesting for further research.
“We are really excited to be working with the computer science department on the topic of artificial intelligence,” Picht said. “New technology is making [data] more accessible, easier to analyze, easier to visualize so you can explain it to readers and viewers.”
#### The evolution of debates
Musser’s alterations to the program displayed the transcript side by side with the analyzed tones, along with general statistics about the entire transcript.
“The first thing that struck me was the numbers relative to how often people spoke and interruptions,” Musser said. “At the point I was looking at this data, I realized at the vice-presidential debate there were 399 statements. That seems like a lot for an hour and a half.”
Because all presidential debate transcripts have been written in the same format since 1960, it was easy for Musser to see differences in debates across a broad time range.
“I wondered what this was like historically,” Musser said. “Is this a fundamental change? I hadn’t really paid that much attention. I was a kid in the ’60s, but I wouldn’t have actually thought about it that much … So I grabbed Kennedy-Nixon. I jumped forward to Reagan-Mondale, and then McCain-Obama.”
When he looked at the data from all the debates, he started to notice major differences. Across the three Clinton-Trump presidential debates, there was an average of 195 candidate statements. In the fourth Kennedy-Nixon debate, there were only 21 statements in total. Even in the third McCain-Obama debate, there were 114 statements, which is 41 fewer than Clinton-Trump’s lowest.
Even breaks for crowd applause and laughter have massively increased over the years. In Kennedy-Nixon, there were no breaks for applause or laughter, and the candidates never spoke over each other, which the transcripts call “crosstalk.”
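Because the transcripts follow a consistent format, counts like these can be pulled out with a few lines of code. The sketch below is a hypothetical illustration, not Musser’s program: it assumes the usual transcript conventions of “SPEAKER:” labels and parenthetical cues such as (APPLAUSE), (LAUGHTER) and (CROSSTALK).

```python
# A rough sketch of tallying statements and audience markers in a debate
# transcript. The speaker-label regex and marker names are illustrative
# assumptions about the transcript format, not Musser's actual code.
import re
from collections import Counter

SPEAKER_LINE = re.compile(r"^([A-Z][A-Z .'-]+):")   # e.g. "CLINTON:" or "TRUMP:"
MARKERS = ("APPLAUSE", "LAUGHTER", "CROSSTALK")

def transcript_stats(transcript):
    """Count statements per speaker and applause/laughter/crosstalk cues."""
    counts = Counter()
    for line in transcript.splitlines():
        match = SPEAKER_LINE.match(line.strip())
        if match:
            counts["statements"] += 1          # every new speaker turn
            counts[match.group(1)] += 1        # per-speaker tally
        for marker in MARKERS:
            counts[marker] += line.upper().count(f"({marker})")
    return counts

# Example: transcript_stats(open("debate_1960_4.txt").read())
```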
“How many times did Reagan talk over Mondale?” Musser said. “How many times did the audience laugh at him?”
In the Reagan-Mondale debate, there was only one break for applause, one break for laughter and no crosstalk.
“In the case of Reagan-Mondale, there were only 53 statements made,” Musser said. “If you look at the text … they are long blocks of continuous, coherent thought. And you put that up beside today and you’re like, ‘These are not the same.’ They are not even close to each other in terms of experience.”
Musser believes these differences have created a huge shift in debate atmosphere.
“This was civil discourse, and this is uncivil discourse, as I see it,” Musser said.
#### Implications
Musser said mathematics and computational science can help dissect the changes in political discourse over time. Comparisons between the atmosphere around past and present elections otherwise tend to be either too opinionated or blurred by fading memories.
“A person in that moment in time is not going to be able to separate themselves from that time, but math can,” Musser said. “Then it’s not about how you and I feel. Now it’s like, ‘Let’s talk about what this means.’ We couldn’t do that before. We can have the great debate about what this means, but it usually ended up descending into the hell of personal opinion.”
But Musser said the program is still new, and he does not know exactly how he will conduct more long-term and in-depth research with it. To assess the program’s accuracy, he ran famous speeches such as Martin Luther King Jr.’s “I Have a Dream” and Abraham Lincoln’s Gettysburg Address through it and compared the results to common perceptions of those speeches.
For Lincoln’s speech, the program indicated a lot of fear. Musser said the fear made sense because the country was in the middle of the Civil War.
He said he was slightly surprised at first by the results from “I Have a Dream” because the program indicated a lot of anger.
“There’s a lot of statements in there we consider uplifting, but in actuality the core of the speech is: We are angry, we demand change,” Musser said.
Musser said even though the program may be accurate, he does not completely trust it because it analyzes the text without any context. The program also cannot detect sarcasm.
He believes the program can show discrepancies between the emotion people think they are hearing and the actual connotation of the words.
“See if the tones have you re-evaluating the text, because that’s what happened to me,” Musser said.
Picht also said this tool, in addition to causing people to re-evaluate what is said at debates, could get people to think about their own ideas and reactions during the debates.
“It is a very new area, but having the capability to try to get these new perspectives and see what we can learn is really promising,” Picht said. “We are excited to be embarking on that journey.”
Musser wondered whether technology like this could shape discourse, with candidates crafting statements to hit particular emotional tones rather than to voice certain positions.
“What if one of these candidates did nothing but inspire you?” Musser said. “What if all their statements were crafted to be Kennedy ‘We’re going to the moon’ statements?”
He also said that even before fully researching the program, he noticed some conclusions emerging just from looking at the basic data.
“Here’s what I can say: We are not living in an era of happiness, of joy,” Musser said. “Very little joy showing up in any of this stuff. Interesting question: Has joy ever entered into politics?”
He also said he is worried about how this kind of technology, which analyzes what humans should feel, could be used. The debate analyzer could, for example, replace traditional debate coverage. And if the program is meant to explain what emotion certain words should elicit, it could lead to the computational analysis of people in general.
“That’s the logical conclusion to my work,” Musser said. “I’m talking about debates now, but I could be analyzing you and putting you in the master-score database … The perfect member of our society does all the right things, says all the right things, has all the right friends.”
He is not completely worried that his work will go down this path, but he said he does think that the two sides of developing computational intelligence are like “creating atomic energy or the atomic bomb.”
_Edited by Claire Mitzel | cmitzel@themaneater.com_