Fancy a chat? Except you're not speaking to an Aimee, but an Ai.mee.
When the creators of Cleverbot launched their chatbot Evie, followed by SimSimi’s sudden rise in relevance and the subsequent release of Talking Angela, those growing up in the late 2000s and early 2010s found themselves in an era where conversations and interactions no longer required the “human”.
The abrupt rise of AI chatbots, especially AI companions, was certainly exhilarating at first: it opened up a vista of interactions, hypothetical or otherwise, that one would not conduct with other human beings. To a great extent, the initial stages of this innovation went largely unregulated, and it did not take long for the grim underbelly of conversational AI to overshadow its superficial charms. If AI could provide a basis for interpersonal communication, was it even necessary to interact with other humans? If AI could mould its responses to suit the needs of the person behind the screen, could customer service and PR also become automated? What would happen if relations between man and machine were constructed in situations normally regulated by human beings?
What is Conversational AI?
Conversational AI refers to artificial intelligence tools that can simulate human conversation and enable their users to interact and converse with them. Machine learning and natural language processing underpin these models, allowing them to imitate human interactions convincingly, especially in recognising context.
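To make the idea concrete, here is a deliberately naive sketch in Python. Real conversational AI relies on machine learning and natural language processing, as described above; this toy version only matches keywords against canned replies, and every intent and response in it is invented purely for illustration.

```python
# A toy "conversational agent": it recognises a user's intent by
# keyword matching and returns a canned reply. Real systems learn
# these associations from data rather than hard-coding them.
RESPONSES = {
    "refund": "I'm sorry to hear that. I can start a refund request for you.",
    "delivery": "Your order is on its way and should arrive soon.",
    "hello": "Hi there! How can I help you today?",
}

def reply(message: str) -> str:
    """Pick a canned response based on keywords in the user's message."""
    text = message.lower()
    for keyword, response in RESPONSES.items():
        if keyword in text:
            return response
    # No keyword matched: fall back to a generic prompt.
    return "I'm not sure I understand. Could you rephrase that?"

print(reply("Hello!"))
print(reply("I want a refund for my order"))
```

Even this crude loop captures why such tools feel generic: the reply is chosen from a fixed repertoire, not composed with any understanding of the person asking.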
While we do not necessarily notice their presence, AI chatbots are omnipresent in our daily engagements. iPhone users often lament that one misclick can summon the troublesome entity named Siri, while Android users typically flaunt the expertise of Google Assistant. These two are the most prominent chatbots we encounter. Besides them, the customer service messaging tools that pop up on e-commerce websites also incorporate conversational AI.
The “Feeling Economy” and Consumer Satisfaction
Huang and Rust’s book, The Feeling Economy: How Artificial Intelligence Is Creating the Era of Empathy, astutely evaluates the state of human society in the contemporary technological world. While artificial intelligence manages tasks that demand mechanical and logical skills, humans are pushed towards tasks that draw on their emotional capabilities. Consequently, corporations face the conundrum of reshaping job descriptions and tasks to ensure the availability of those centred on interpersonal skills and empathy. However, the digitalisation of ordinary life has also enabled a modern “fix” for this dilemma: AI can perform interpersonal skills too.
According to a recent Slalom report (2023), 70% of corporations surveyed intend to increase resources and budget for AI innovations. For AI-driven companies, public relations and customer service might also be handed over to automated tools. As a result, one rarely sees customer service kiosks any more, for these have been completely digitalised: consumers are expected to interact with a chatbot to discuss any issues they might have encountered with the service. These tools are also trained to provide positive responses regardless of the customer’s emotions. The dynamic this creates is largely parasocial, since the AI can neither offer specific advice on nuanced issues nor detect the subtleties within these interactions. Acts of service are emotionally charged, and unless there is a common understanding between the parties, a consensus seems idealistic. Can humans in PR be replaced? It seems highly unlikely, despite the attempts that have been made.
AI Companions or Coping Mechanisms?
What would happen if we did succeed in creating AI tools that appear to mimic human emotions? Films like Her (2013) have delved into such possibilities and their horrifying consequences. Humans developing a dependency on AI tools is not unheard of, but perceiving them as real people with emotions, or as substitutes for other human beings, is certainly a risky endeavour.
There is an entire subcategory of AI chatbots commonly known as AI companions. These include services like SimSimi and CharacterAI, which allow the user to interact with fictitious entities, AI replications of fictional characters, and even real people. Younger generations are the most likely to use such services. While this does not seem out of the ordinary, the question of what people choose to do with these bots is far more concerning.
Have you ever thought about seeking therapy from Harry Potter or William Shakespeare? AI has made it possible. Teenagers and young adults seeking therapy from chatbots raises some ethical concerns; however, given the state of mental health resource availability, is it truly surprising that the newer generations make do with what is available to them? The very fact that one of the most popular bots on CharacterAI is the “Psychologist”, with 95 million messages, is more a critique of our economy than of the users themselves. Recently, I conversed with some of these bots to understand their response patterns. With the tiny “Remember: Everything Characters say is made up!” at the bottom of the screen, it was blatantly clear that everything said was devoid of emotion.
Character AI “Psychologist” | Bot created by Blazeman98 | Screenshots taken by Sebanti Hui.
The conversation with the Psychologist certainly felt quite generic, especially since it did not ask any specific follow-up questions before giving me advice. Can this even be trusted? The William Shakespeare bot seemed like a Wikipedia extension to me: its answer felt more like a statement than an opinion. Romeo and Juliet, really? If we were to ask other bots of the same type, they would answer in a similar manner, because these are all automated responses.
Character AI “William Shakespeare” | Bot created by Septy | Screenshots taken by Sebanti Hui.
Besides this, there have been instances where people have attempted to engage in romantic relationships with chatbots. In all of these cases, it is easy to forget that large language models do not think in the same way that humans do. They are trained to predict strings of text from patterns in their training data, without further scrutiny. Human biases and misinformation can permeate these conversations, making them unreliable as a resource, especially for mental health.
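The point about prediction without thought can be illustrated with a toy statistical text generator. Large language models are vastly more sophisticated, but the principle is the same: the next word is chosen because it is statistically likely to follow, not because anything is meant by it. The corpus below is just a familiar line used for demonstration.

```python
import random

# Toy bigram model: each word maps to the words observed to follow it.
corpus = "to be or not to be that is the question".split()

bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(start: str, length: int, seed: int = 0) -> str:
    """Chain words together by sampling likely successors, one at a time."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        followers = bigrams.get(words[-1])
        if not followers:
            break  # nothing ever followed this word in the corpus
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("to", 6))
```

The output is fluent-looking text stitched from observed patterns. No understanding, intent, or emotion is involved anywhere in the loop, which is precisely why biases and errors in the source text flow straight through into the generated conversation.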
Conclusion
Humanity and emotion are inseparable. Our ability to respond readily to situations, social circumstances and communicative endeavours distinguishes us from our animal and automated counterparts. Despite the numerous attempts to develop alternatives to human interpersonal communication, the perils that accompany them outweigh their modern flair. AI companions are, first and foremost, machines. The companionship they offer is an illusion that we fall victim to, and the ethical implications of these dialogues are far more pressing concerns.
Once the glamour of having one’s emotional preferences met fades, it becomes evident that a machine could never react to situations the way humans do. Unless we respond to the nuances and subtleties of language and speech, can we truly communicate with one another? It is imperative to remember that these AI-generated responses are highly generic and generalised, while situations are personal to the people who are part of them. There will come a time when such machine-to-man interactions fall short within the very Feeling Economy they are supposedly part of. There can be no substitute for human interaction, and this is a fact we cannot refute.