Rosalind Hobbs
The University of Tulsa
Media Studies
February 26, 2021
​
Utilitarianism: AI Smart Assistants
“Hey Siri, what are you here for?”
“I’m here to help.”
“Hey Siri, are you utilitarian?”
“Hmm… I don’t have an answer to that.”
“Siri, tell me a joke.”
“I’d tell you a chemistry joke, but I’m not sure if I’d get a reaction.”
The premise of utilitarianism is that an object, act, or idea can be deemed good if it promotes the happiness and opposes the suffering of those involved. The lens of utilitarianism sees only the consequences, positive and negative, when attributing goodness. This lens can be applied to a variety of objects in our day-to-day lives. Smart assistant AI has many factors contributing to the classification of its “goodness,” or lack thereof, under utilitarianism. Whether or not the disadvantages outweigh the advantages, users of smart assistants follow a familiar pattern: enjoying the benefits of modernity while remaining unable, or possibly unwilling, to confront the potential consequences that lurk in the shadows.
​
Artificially intelligent smart assistants, like Siri, Google Home, or Alexa, are undeniably useful and help us live better lives…it’s in the name: their duty is to assist. If you ask Siri, “Hey, what are you here for?” she will reply, in her familiar, electronic voice, “I’m here to help.” Smart assistants bring into our lives a previously unimaginable sense of convenience.
​
With the use of smart assistants, you can wake up on time accompanied by the slow rise of your morning playlist, know all your tasks for the day before any have even begun, never forget nor go through the trouble of adding milk to the grocery list or ordering dog food from Amazon, call mom in between meetings, set timers while your hands are covered in flour, open and close apps, receive directions, play/shuffle/skip music, and ask for a joke every once in a while. The existence of this AI promotes productivity, accuracy, and speed in our lives (Mobinius). Life is simply easier with smart assistants. This is the essence of an object or act in alignment with utilitarianism: it creates a “net happiness” (Peters). By delegating tasks and minimizing stressors in everyday life, this technology promotes the happiness of its users and opposes the suffering that can come from the human error involved in handling tasks, like doing something incorrectly or forgetting to do it at all.
​
Outside of mere utility, there’s an element of entertainment that Siri, Google Home, or Alexa can bring into the home or workspace. A search for “funny Alexa videos” on YouTube brings up over 8 million results highlighting entertaining moments with smart assistants. Comedy is a feature in itself for smart assistants (Schwab). It is such a key element of the user experience that AI is progressively being programmed to interact humorously with users. An object such as this can be considered utilitarian because it creates “an existence as free as possible from pain and as rich as possible in enjoyments” (Mill 8). Not only can this technology bring ease to our day-to-day lives, but it can also bring laughter and become something to bond over among family, friends, or coworkers.
​
Despite the many benefits of artificially intelligent smart assistants, it is still necessary to consider the negative consequences that arise with this technology. The most obvious danger concerns security. The efficiency of a smart assistant relies on the amount of information it knows about its user. This technology “requires data to serve you well,” which can include, but is not limited to, your location, routines, music taste, and shopping habits (Arntz). While this data enables the good that this technology can do, it also provides a golden pool of information susceptible to cyberattacks. Furthermore, users are under what is called “ubiquitous surveillance,” meaning “it [is] becoming increasingly difficult to escape…data collection, storage, and sorting,” not necessarily by malicious hackers per se, but simply by the companies we interact with each day, like Apple, Google, and Amazon (Andrejevic qtd. in Woods). This information increases these companies’ ability to create targeted advertisements tailored to each individual user to promote the purchase of new products. While this benefits the companies, it only increases our material desires and, in turn, our monetary spending.
Additionally, while there is no proof confirming the suspicion, users of smart assistants face the potential risk of eavesdropping. The microphone may be off, but it is difficult to know with confidence the full extent of this technology’s power, whether there are intentions hidden from the public eye, or whether these microphones can be hacked by a third party. However, security risks are, for the most part, manageable, and although we may not be able to escape data mining for targeted advertising, the companies creating this AI are progressively capable of improving their security to decrease the chances of cyberattacks.
​
Naturally, with any new technology comes the question of how it will affect society on a deeper level…emotionally, philosophically, and socially. An initial concern that comes to mind is how this assistance may handicap users, making them less capable of doing work themselves. However, this can easily be boiled down to a common misconception about new, alien technologies. For example, Plato argued against the art of writing, which was a new technology at the time and stood in opposition to oral speech, by claiming that it is inhuman and “destroys memory,” for people would no longer need to memorize stories, speeches, and other oratory communication (Ong 60). However, Plato made this argument in written form, virtually negating his own credibility and further proving the usefulness of writing. New technology will always handicap users in one way or another, but the new abilities its benefits give us usually outweigh these consequences.
​
However, while creating a physically “lazy” user may not be a big concern, the potential of becoming emotionally dependent on smart assistants should be acknowledged. Smart assistant AI is given a female voice for a reason…it allows users to look to it as a motherly figure here to support and help them. Through the female persona, users feel even more encouraged to trust their assistants and share more information about themselves (Woods). This personification creates opportunities to become overly reliant on AI and to confuse the relationship with a smart assistant with a relationship with another human being.
While it is an extreme example, the 2013 romance/sci-fi film Her does an excellent job of illustrating this emotional dependency. In this movie, Theodore, played by Joaquin Phoenix, develops a deep and mature relationship with an artificially intelligent operating system, Samantha, voiced by Scarlett Johansson (IMDb, Her). Like Siri or Alexa, Samantha is created to cater to the needs of her users, and does so with a motherly, feminine, and personal touch. While Samantha is much more personified and expansive than Siri or Alexa, this exaggeration accurately presents the issue our modern world faces when it comes to emotional dependency on the digital world. Many conflicts arise for Theodore as he gets increasingly pulled into his relationship with Samantha and further from the real world and any potential relationship with a human woman. This film tackles the many worrying aspects of what artificial intelligence can mean for the human race and the interesting new problems it will bring about.
Many of the philosophical and social consequences of artificially intelligent smart assistants are still unknown, or at the very least difficult to quantify, making it that much trickier to properly weigh the pros and cons of this technology. The benefits are right here in front of us: smart assistants help us live better and happier lives. Other than security concerns, the consequences of smart assistants are much more ambiguous in terms of how they will change our interpersonal interactions, and maybe even our intrapersonal communication as well. It is easy to deem an object good in terms of utilitarianism when the benefits are black and white but the elements that may cause suffering are, first, much more philosophical in nature and, second, delayed and in need of time to develop and manifest. Adults, too, can experience lapses in object permanence, and this could account for the naïve response to artificially intelligent smart assistants. It is challenging to tackle a concept that we cannot see here in front of us, let alone one that isn’t even entirely proven; it is much easier to remain in the present and bask in the life-improving benefits of this technology. Taking smart assistants from a one-moment perspective, they are indeed utilitarian, as they improve users’ lives and promote happiness in many different ways; however, the long-term consequences may deem this technology otherwise.
Works Cited
Arntz, Pieter. “What's the Real Value-and Danger-of Smart Assistants?” Malwarebytes Labs, 18 July 2018, https://blog.malwarebytes.com/security-world/2018/07/whats-the-real-value-and-danger-of-smart-assistants/.
“Her.” IMDb, IMDb.com, www.imdb.com/title/tt1798709/plotsummary.
LaBarre, Suzanne. “Are Smart Assistants Good for Users? An Honest Look at the Pros and Cons.” Fast Company, 16 July 2018, www.fastcompany.com/90202770/are-smart-assistants-good-for-users-weighing-the-pros-and-cons.
Mill, John Stuart. Utilitarianism. London, Parker, Son, and Bourn, 1863. Library of Congress.
Mobinius Editor. “How Is AI Advantageous as a Smart Assistant?” Mobinius, 21 Oct. 2020, www.mobinius.com/blogs/advantages-of-ai-as-smart-assistant#Benefits_of_Using_Assistants_using_artificial_intelligence_a_modern_approach.
Ong, Walter. “Writing Restructures Consciousness.” Communication in History, 2019, p. 60.
Peters, Ben. “Utilitarianism.” Media Theory (MSTU-3103), The University of Tulsa, 8 Feb. 2021. Lecture.
Schwab, Katharine. “The Daunting Task of Making AI Funny.” Fast Company, 28 Aug. 2018, www.fastcompany.com/3065970/the-daunting-task-of-making-ai-funny.