Meet My AI—We're A Thing: Makovsky Best Master's Thesis of the Year Winner Talks Artificial Intelligence

Martin: Siri, when it comes to human relationships with artificially intelligent agents, what do you think?

Siri: I really couldn’t say.

How would you feel if you were on the receiving end of a phone call to your company, and the ‘person’ on the other end was a conversational digital assistant that sounded human but didn’t identify itself as a bot?

What if you read an article that was highly critical of a competitor and reinforced your worst suspicions, so you shared it online, only to find it was written by a natural language generator? Or say you discovered the ‘thought-leader’ you were following on LinkedIn was nothing more than a bot?

Not so long ago, scenarios like these might have been considered stories from the realm of science fiction.

But the bot-made phone call happened in May 2018, when Google demonstrated its Duplex conversational AI during its annual developer conference. The company had Duplex call a hair salon to book an appointment, and the machine sounded like a person, complete with "umms" and "ahhs." And Google was rightfully criticized for not disclosing it was a machine.

OpenAI developed an algorithm that can turn out a pretty convincing blog post or news story when given a prompt as short as a sentence. And now, "deepfake" photos use generative adversarial networks, the "Spy vs. Spy" of the algorithm world, in which one algorithm creates a synthetic image and tries to trick another into believing the fake is real.
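
For the technically curious, here is a minimal sketch of that adversarial idea in Python, using PyTorch. The one-dimensional data, network sizes, and training settings are illustrative assumptions on my part; real deepfake systems work on images at a vastly larger scale. A generator network learns to produce samples a discriminator network can’t distinguish from the real thing, while the discriminator learns to catch the fakes.

```python
# A minimal sketch of the adversarial ("Spy vs. Spy") training loop behind
# deepfakes. Illustrative assumptions: 1-D Gaussian data instead of images,
# tiny networks, arbitrary hyperparameters.
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Real" data the generator never sees directly: a Gaussian (mean 4, std 1.5).
def real_samples(n):
    return torch.randn(n, 1) * 1.5 + 4.0

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # 1) Train the discriminator to label real samples 1 and fakes 0.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1))
              + loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to make the discriminator call its fakes real.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated samples should approximate the real distribution.
print("real mean (target ~4):", real_samples(1000).mean().item())
print("fake mean            :", generator(torch.randn(1000, 8)).mean().item())
```

The two networks improve in lockstep: as the discriminator gets better at spotting fakes, the generator is pushed to produce more convincing ones, which is exactly why synthetic images have become so hard to detect.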

How Will AI Affect Relationships?

Having spent my career in communications, I could not help reflecting on how these developments might turn our relationships with other people on their heads as we spend more time interacting with AI.

And since I had gone back to school to do a Master of Communications Management at McMaster-Syracuse, this seemed like a subject I could delve into for my thesis, “My BFF is a Chatbot: Examining the Nature of Artificial Relationships and the Role They Play on Communication and Trust.”

Working closely with my supervisor, Dr. Alex Sévigny, I began to explore whether it was possible for a person to build and maintain a relationship with an artificially intelligent machine, and if so, what that might look like. I was curious about how and to what extent these "close encounters" might affect communications and trust.

I wondered whether they might fall under Hon and Grunig’s definition of an exchange relationship, where there could be a tacit agreement about what each party provided the other. Or perhaps it would be more of a one-way experience, where the machine would be considered a ‘slave’ and forced to do its human master’s bidding, as MIT professor Max Tegmark wondered.

As I started to formulate and frame my questions, I did a literature review and analyzed research on relationships, trust, human-machine communications, and two-way symmetrical communications.

Then, I conducted a series of in-depth interviews with various experts including computer scientists, communications researchers, PR agency owners, entrepreneurs, and journalists to hear their insights and ideas and compare and contrast their responses in an attempt to find some overarching themes.

Human-AI agent relationships are a smartphone away

One response I could not have predicted: all the participants likened human-AI agent interaction to the movie Her. If you don’t remember it, Joaquin Phoenix plays a professional letter writer who falls for the seemingly sentient voice assistant on his phone. Maybe it’s because we’re already in a relationship of sorts with our smartphones, and these devices could become the "gateway" to human-AI agent relationships.

When subjects were asked whether children should be taught to be polite to AI agents, virtually everyone said yes because being polite was part of what made us human.

And almost all the participants agreed there would be instances where they would trust an AI agent’s recommendation over a person’s. This was particularly evident for factual or knowledge-based questions, as opposed to questions of opinion.

Layers of trust

Yet participants could not agree on which elements were required to foster a successful human-AI agent relationship. Some believed the AI needed "emotional intelligence," while others said it depended on better contextual results.

Participants observed that trust between a human and an AI agent often developed through many of the same stages a human relationship might go through. One participant called this concept "layers of trust," while another described it as a "spectrum of trust." As people moved through the stages, they might encounter elements that could either build up or tear down the relationship.

For example, if the user experience was positive and continued to provide value, the relationship and trust might build over time. However, one participant wondered whether that trust could be shattered if the AI was perceived to be nothing more than a sales agent.

Behind closed doors

When it came to governance around AI privacy, data management, and deployment, there was a broad spectrum of answers ranging from giving the responsibility to the corporations that developed the AI, to letting an NGO take charge, to following the EU’s lead and pushing governments to step in.

Yet, some participants expressed skepticism that governments would be able to handle the complexity of the issue. Several worried that government interference might harm innovation.

Others sought a consensus-building approach. They called for openness, transparency, and a willingness to discuss and debate contentious ideas, similar to the principles outlined in Grunig’s Symmetrical Model of Public Relations, which could provide a framework for the consultations by encouraging participants to listen, consider other viewpoints, and adjust their views.

Where do we go from here?

It’s clear from the research that we are in the early days of examining these and other issues, and many more questions remain to be explored.

For example, would "polite and cheerful" AI agents that only provided positive reinforcement lead to disappointment and a diminished relationship with people who might be critical and challenge a person’s position?

Will there be net positive, net neutral, or net negative effects on human relationships?

Do governments have the expertise to develop safeguards, policies, and governance to ensure the fair and ethical treatment of humans as AI becomes more integrated into our lives?

And ultimately, what role, if any, will public relations and communications play in the beneficial deployment and commercialization of AI?

Read the full thesis by Martin Waxman, winner of IPR’s Makovsky Best Master’s Thesis of the Year, here.

This post was originally published by the Institute for PR.
