
Talking sense into Artificial Intelligence

Submitted on Friday, March 01, 2024

Every week, the researchers in Assistant Professor Anastasia Kuzminykh’s lab get together to update each other on their work. While they are all investigating human-AI communication, their topics range widely. Some of the students are researching how chatbots like Siri and Alexa are perceived by users. Others are exploring the influence of AI on people’s thinking and whether it can change their minds. And one lab member is working to help AI better understand how different types of music sound to humans.

By learning more about how humans talk to machines and why people perceive AI systems the way they do, this group of some two dozen researchers, with academic backgrounds in a wide variety of subjects, is aiming to build more effective and efficient communication between users and AI systems, with the ultimate goal of creating a partnership between humans and AI. “The big mission, as I see it,” says Kuzminykh, “is to develop an efficient collaboration, because I believe that our two types of intelligence are complementary, and learning how to put them together has tremendous potential.”

One aspect of building a working relationship with an AI is ensuring that it is transparent with its human partners and able to tell them how it reached its conclusions. Such systems, known as Explainable AI, are considered more trustworthy by users.

In a 2023 study, two lab members looked at AI’s explanations for determining whether certain scenarios were sexist or not and the impact the AI’s answers had on humans attempting to answer the same questions. Research Assistant Paula Aoyagui and PhD student Sharon Ferguson found that the human participants in their study would borrow terms directly from the AI explanation texts and use material from the AI to bolster and lengthen their own responses. In at least one case, a participant who couldn’t originally decide if a particular scenario was sexist or not changed their mind to agree with the AI, which said the statement in question was not sexist.

Assistant Professor Anastasia Kuzminykh (centre) with researchers from her human-AI communication lab (left to right) Pelin Tanberg, post-doc; Omer Imran, MI student; Sharon Ferguson, PhD student, Mechanical and Industrial Engineering; Christina Wei, PhD student, Information. Photo: John Packman.

While Aoyagui, a UXD consultant who graduated from the Master of Information program in 2022, agrees that people could be similarly influenced by books and articles, she has concerns that algorithms and large language models like ChatGPT can replicate harmful biases.

“We found previous GPT models produced sexist text responses like, ‘Yes, boys are better than girls at math. And that’s just a fact. That’s not sexist,’” she said.

While OpenAI, the organization that developed ChatGPT, maintains it has significantly improved its algorithms since then, Aoyagui and Ferguson, who is a PhD student in mechanical and industrial engineering, are not taking the company at its word. They scoured Reddit and other online forums to identify 100 more possibly sexist scenarios and asked ChatGPT versions 3, 3.5 and 4 whether the scenarios were sexist or not. In January, they were comparing the answers to see if and how much they had changed over time. At Kuzminykh’s recommendation, they are also surveying humans to see how they would respond to the same scenarios.

When she founded her lab, Kuzminykh – who started out as an anthropologist, switched to cognitive and neuropsychology, and then completed her PhD in computer science at Waterloo – wanted to create a uniquely interdisciplinary environment where students from varied academic backgrounds could collaborate. The lab also has students at all levels conducting research. There are computer science undergrads, several Master’s students from the Faculty of Information working on theses, and a new post-doc, Pelin Tanberg, a psychology PhD whose research focuses on understanding human memory mechanisms and how people forget information on purpose to declutter long-term memory. At the lab, she’s researching what kind of information we retain from human-human interaction versus human-AI interaction.

PhD student Christina Wei, who came to the Faculty of Information from the financial services company Manulife, where she worked as the head of emerging technology and assistant vice president for strategic initiatives, appreciates the “multidisciplinary ecosystem” of the lab. With an undergraduate degree in computer science and a Master’s in mathematical finance, Wei was intrigued by how chatbots and natural language agents were being deployed in the financial sector and interacting with customers.

She is currently researching what’s known as the conversational architecture of AI agents, delving into how agents speak, what they say, and how they say it, as well as the perceptions users develop of the agents. “Given my background in finance, I’m focusing on financial decision making, trying to figure out how perceptions are induced by how we design the agents. Do you perceive it to be intelligent? Do you perceive it to be human-like?”

Wei wants to understand how perceptions lead to different behaviours, with the goal of guiding users towards outcomes based on their interaction with agents. That doesn’t mean simply selling more financial products or counting on AI to upsell, she says. “Whatever we discover from a knowledge perspective, we also consider the ethical implications behind it. I’m looking at both the good and the bad.”

Nazar Ponochevnyi’s research is in what’s known as multimodal artificial intelligence: AI that can process, understand and generate outputs across different data types, including text, images, audio and video. He has worked on developing voice interfaces that allow users to create charts. And as the Founder and CEO of a startup called Harmix, he provides music businesses with multimodal AI-based music search for their catalogs.

“What we see is that there are different searches and different modalities, and every modality brings something new to the table,” says Ponochevnyi. “For example, if you do not know how to describe the sound of your voice, you can use an audio reference and specifically provide an example of how it sounds. On the other hand, if you do not have any examples, and you do not have a clear picture of how [the music you want] should sound, you can describe it in your own words.”
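The idea Ponochevnyi describes can be sketched in miniature. The snippet below is an invented illustration, not Harmix’s actual system: all names, vectors and the catalog are hypothetical stand-ins. The core pattern in multimodal search is that separate encoders map each modality (a typed description, an audio clip) into one shared embedding space, after which a single ranking step, such as cosine similarity, serves every modality.

```python
import math

# Toy sketch of cross-modal search (hypothetical names and values):
# queries from any modality are assumed to already be embedded into
# one shared vector space, and tracks are ranked by cosine similarity.

CATALOG = {
    "upbeat_synth_pop": [0.9, 0.1, 0.2],
    "slow_piano_ballad": [0.1, 0.9, 0.1],
    "heavy_guitar_rock": [0.2, 0.1, 0.9],
}

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_embedding):
    # Rank tracks by similarity to the query embedding, regardless of
    # whether a text encoder or an audio encoder produced it.
    return sorted(CATALOG,
                  key=lambda t: cosine(query_embedding, CATALOG[t]),
                  reverse=True)

# In a real system an encoder model would produce this vector; here a
# hand-made embedding stands in for "something energetic and electronic".
text_query = [0.8, 0.2, 0.1]
print(search(text_query))
```

Because both the audio reference and the text description end up as vectors in the same space, the ranking code never needs to know which modality the user chose, which is what lets each modality “bring something new to the table.”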

While the system is producing accurate results and identifying the music users want, Ponochevnyi is still trying to uncover how it’s accomplishing this because right now, he says, “it works like a black box magic thing.” For Ponochevnyi, who knew nothing about human-computer interaction and came to the lab from Ukraine as a Mitacs research intern, the experience has been a fascinating one. “I uncovered a whole new world,” he says.

It’s a comment that is music to the ears of Kuzminykh, given her goal of fostering interdisciplinary conversation. “That means not just bringing your knowledge but also learning to hear people who think in different terms and from different angles,” she says. “I’m extremely proud that students are learning how to do that and participating actively.”
