Of love and AI: The intersection of artificial and emotional intelligence

Paramie Jayakody

Imagine a couple on a first date. The restaurant has a nice romantic ambiance, the chemistry between the two is sweet, conversation flows easily, and there's undeniable tension. It's a genuine connection you would scarcely dare hope for… except they're both AI programs. While this was the fictional setting of the arguably disturbing Black Mirror episode "Hang the DJ," aired seven years ago, it's now about to become reality.

“There is a world where your dating concierge could go and date for you with other dating concierges… and then you don’t have to talk to 600 people,” says Bumble founder Whitney Wolfe Herd.

Disturbing or interesting? You decide. 

AI is currently taking the world by storm. While it sometimes scores big wins (empowering research teams to tackle climate change), at other times it fails massively (we all know how Google's AI Overviews are doing right now). A little-known secret is how entangled it has become in everyone's love lives.

Bumble is not the only app looking to use AI. Even if you're not particularly inclined towards dating apps, it's hard to avoid AI completely. In the US, 33% of men have asked ChatGPT for relationship advice; in Australia, the figure was 25%. Elsewhere, a woman used ChatGPT to figure out she was being gaslit, and a Bing chatbot told a man to leave his wife and be with it instead.

Over on Reddit, everyone's favorite platform right now, forums have been flooded with people demanding to know why their partners are having affairs with, or have fallen in love with, AI chatbots. "Does it count as cheating?" they ask. One post, in fact, predicted this months ago.

Like it or not, the world of relationships hasn't been entirely human for a while now, thanks to our growing dependence on technology. But is this a bad thing?

Making sensitive data public

"What people are doing is exposing their personal history. By asking for advice, they may be sharing very personal, intimate details about themselves; they may be talking about certain disabilities they have and asking for methods on how to cope with them," observes Cybersecurity Advisor Asela Waidyalankara. He points out that by doing so, people may be putting sensitive information at risk: AI models are trained on user data, and their makers are not very transparent about what data is collected or the extent to which it is used.

While organizations using AI approach it with caution to protect their intellectual property, the parent companies behind the AI models – such as Microsoft, Google, and even OpenAI – have demonstrated that safety and privacy are not a priority. Recent headlines have showcased multiple instances of employees leaving these companies over ethical concerns, including the 'Godfather of AI' Geoffrey Hinton. Improper training data has also led to disturbing results from Google's Gemini AI.

In the current environment, there is a pressing need for laws and regulations surrounding the use of generative AI, specifically on the data used for training. Some governments have been faster than others. 

As of March 2024, more than 37 countries have proposed AI-related legal frameworks. Among the better known are the EU AI Act, which aims to regulate AI systems and ensure they are safe, transparent, and fair, and Canada's Artificial Intelligence and Data Act (AIDA). The US approach is split among states and agencies, with multiple laws proposed.

On the Asian side, Japan, China, and India have all established organizations and standards to ensure the ethical use of AI. The need is present in Sri Lanka as well, with calls for amendments to laws, including copyright law.

As AI integrates itself further and further into human social lives, protective measures must be put in place for the ordinary user.

Ordinary users also tend to let their guard down in more personal settings. "Especially for the technologically uninitiated, talking to an AI chatbot may feel like they're talking to a person," Waidyalankara notes, adding that, to match the tone of the person chatting with it, certain AI chatbots – particularly spoken versions – can get very flirty very fast.

While there are occasions where ChatGPT may help you identify gaslighting, he explains, it's also crucial to keep in mind that the story hit the internet because it had a happy ending. For every blog post or article where AI has helped a relationship or a person, we have no idea how many stories were never written about how it failed.

Fortunately, most of us in Sri Lanka don't need to worry about this disturbing trend just yet, he believes, for the simple reason that Sinhala language models have not reached that point. "At least not this year!" Waidyalankara laughs in some relief.

“The human touch is still necessary”

The relief is short-lived: in conversation with Psychologist and Counsellor Dr. Kalharie Pitigala, we discovered that she has already come across several instances of clients using AI chatbots to navigate personal aspects of their lives.

"They were all English speakers," she notes, "and within a relatively young age group." More interesting, however, were the topics of discussion. "All their concerns were related to sexuality," Dr. Pitigala explains. While the AI bots used varied, at least one client specifically mentioned ChatGPT as their chosen conversation partner.

While this cements Waidyalankara's point about sensitive personal information, Dr. Pitigala notes that her clients were motivated by the very fact that they were not speaking to a human. "They felt embarrassed to speak to a human."

The illusion of privacy is a compelling draw for many who seek help from AI chatbots. However, even then, there are safer options, such as the Ask Sri chatbot developed by the Family Planning Association of Sri Lanka.

"I don't think AI could fulfill the emotional aspect of a relationship," Dr. Pitigala concludes. "It's very mechanical. In a way, it is good to explore. But human talk and touch is better." She also notes that there is a very real danger of people becoming dependent on, and addicted to, AI programs. "They can eventually start avoiding human interaction if the aforementioned can fulfill most of their requirements," she says.

“We can filter out the homophobes”

"I like the swiping part," says Prabhashana Hasthidhara when told about Bumble's proposed new direction for dating. Someone who enjoys meeting new people, Prabhashana likes the process of filtering just as much as they like the actual dates. "Taking away that process and giving it to AI? Not smart."

However, even they admit that this would depend on a few factors, the main one being how advanced the AI model is. "If it is setting up dates on my behalf, it's taking on my likeness to do so," they note, raising concerns about identity and how viable that would be. If everyone is aware it's the AI and not them, they concede, it may be a good alternative to consider on very busy days.

Another plus point, they note, is that this could be used to filter out specific groups of people. "If I can filter out people who seem to have internalized homophobia, etc., that would be a good use of AI." On the flip side, this again depends on how advanced the model is. "I make a lot of questionable jokes, for example. How would generative AI handle that? I don't think AI is advanced enough to understand nuances yet."

Prabhashana also sees the involvement of chatbots in a more positive light. "If someone's really shy and they don't have much experience or friends, AI would be a good tool to give some advice," they say. "AI is good for some very generic advice; that's like reading a self-help book. There's just something very iffy about it having control."

To trust or not to trust, that is the question

Love has never been an uncomplicated subject, and with the advent of generative AI, we may have made it even more so. While there are obvious benefits, such as the fulfillment of emotional needs, reassurance, and advice, there are also significant social, ethical, and safety considerations that, more often than not, slip our notice. What would human connection look like in the future, with AI taking over parts of it?

It's always better to go straight to the source, so I asked ChatGPT what it thought of AI and love. It gave me this:

“AI’s integration into love and relationships offers exciting possibilities, such as enhanced matchmaking and personalized dating experiences, making connections more accessible, especially for those who face social challenges. However, it also presents significant ethical and social challenges, including concerns about authenticity, emotional manipulation, and privacy. The dynamics of relationships may change, potentially impacting their depth and longevity, while societal norms evolve with AI’s influence. Ensuring AI serves to augment, not replace, the human elements of love is crucial, as we navigate this balance to shape the future of romantic connections in the digital age.”

AI obviously has a host of benefits, which is the key reason for the rapidly shifting technological landscape across the globe. However, it's up to us to remember that just because a chatbot reads like a human, types like a human, and sometimes even sounds like a human, it is not one. As with all new(ish) technologies, users must be cautious when using AI. Our privacy and information may be at risk, and we may even be getting the wrong advice. The future may hold AI models without these risks, but right now, it's better to play it safe than be sorry.
