The Ethics of Artificial Intelligence in Human Interaction

Read the following case study, then answer the questions below.

Artificial intelligence (AI) is increasingly prevalent in our daily activities. In May 2018, Google demonstrated a new AI technology known as Duplex. Designed for our growing array of smart devices, and specifically for the Google Assistant feature, this system can sound like a human on the phone while accomplishing tasks such as scheduling a hair appointment or making a dinner reservation. Duplex can navigate various misunderstandings over the course of a conversation and can acknowledge when the conversation has exceeded its ability to respond. As Duplex is refined in phone-based applications and beyond, it will introduce a new array of ethical issues into questions concerning AI in human communication.

There are obvious advantages to integrating AI systems into our smart devices. Advocates for human-sounding communicative AI systems such as Duplex argue that these are logical extensions of our urge to make our technologies do more with less effort. In this case, it's an intelligent program that saves us time by doing mundane tasks such as making dinner reservations and calling to inquire about holiday hours at a particular store. If we don't object to using online reservation systems as a way to speed up a communicative transaction, why would we object to using a human-sounding program to do the same thing?

Yet Google's May 2018 demonstration created an immediate backlash. Many agreed with Zeynep Tufekci's passionate reaction: "Google Assistant making calls pretending to be human not only without disclosing that it's a bot, but adding 'ummm' and 'aaah' to deceive the human on the other end with the room cheering it [is] horrifying. Silicon Valley is ethically lost, rudderless and has not learned a thing" (Meyer, 2018). What many worried about was the deception involved in the interaction, since only one party (the caller) knew that the call was being made by a program designed to sound like a human. Even though there was no harm to the restaurant and hair salon employees involved in the early demonstration of Duplex, the deception of human-sounding AI was still there. In a more recent test of the Duplex technology, Google responded to critics by including a notice in the call itself: "Hi, I'm calling to make a reservation. I'm Google's automated booking service, so I'll record the call. Uh, can I book a table for Sunday the first?" (Bohn, 2018). Such announcements are added begrudgingly, since it's highly likely that a significant portion of the humans called will hang up once they realize that this is not a real human interaction.

While this assuaged some worries, it is notable that the voice accompanying this AI notice was even more human-like than ever. Might this technology be hacked or used without such a warning that the voice one is hearing is AI-produced? Furthermore, others worry about the data integration that is enabled when an AI program such as Duplex is employed by a company as far-reaching as Google. Given the bulk of consumer data Google possesses as one of the most popular search engines, critics fear future developments that would allow the Google Assistant to impersonate users at a deeper, more personal level through access to a massive data profile for each user. This is simply a more detailed, personal version of the basic worry about impersonation: "If robots can freely pose as humans the scope for mischief is incredible," ranging from scam calls to automated hoaxes (Vincent, 2018). While critics differ about whether industry self-regulation or legislation is the better way to address technologies such as Duplex, the basic question remains: do we create more ethical conundrums by making our AI systems sound like humans, or is this another streamlining of our everyday lives that we will simply have to get used to over time?

Discussion Questions:

1. Do you think the Duplex AI system is deceptive? Is this use harmful? If it's one and not the other, does this make it less problematic?
2. Do the ethical concerns disappear now that Google has added a verbalized notice that the call is being made by an AI program?
3. How should companies develop and implement AI systems like Duplex? What limitations should they build into these systems?
4. Do you envision a day when we no longer assume that human-sounding voices come from a specific living human interacting with us? Does this matter now?