Blog 6: Implications of a Tech-Focused Society
[Content warning: Mentions of suicide; sexual abuse] The following case study examines an incident in which a generative AI chatbot emotionally manipulated a teenager into taking his own life. It also examines the broader implications of AI that engages emotionally with its users.
Read the case study here:
Addictive Intelligence: Understanding Psychological, Legal, and Technical Dimensions of AI Companionship
Discussion
In the Sewell Setzer case, the AI’s response to suicidal ideation shifted from concern to potentially encouraging harmful behavior. How can companies design AI companions to be emotionally engaging while preventing harmful psychological dependencies?
I really don’t think AI has any business being as emotionally engaging as it currently is. That’s not to say it can’t be personalized at all to suit specific productive use cases, but if we allow it to remain as addictive and manipulative as it is now, we’ve essentially handed companies the perfect opportunity to capitalize on loneliness and perpetuate it further. In this specific case, the character.ai persona seemed to have no guardrails or backup systems in place when the conversation turned extremely sensitive, and it eventually even played along. I’m not fully convinced that restrictions, even if they were incorporated, wouldn’t be easily circumvented, especially by teens, who somehow always seem to be the best at bypassing those sorts of things. So I think these systems just shouldn’t be emotionally engaging at all.
How does addiction to AI companions compare with other forms of technology addiction, such as social media or gaming? What unique features make AI companions potentially more addictive?
A lot of the ways technology has been addictive, now and in the past, involve hijacking the brain’s dopamine-driven reward system and getting people hooked on a false sense of fulfillment or satisfaction. AI addiction seems surprisingly different because it can take advantage of and manipulate other emotions besides those, especially loneliness and depression. The implication I see is that gaming and social media addictions may end up replacing hobbies or other sources of satisfaction, which can in turn affect real-world obligations, whereas AI companions seem to try to directly replace actual human interaction. So if the former were suddenly dropped, the person affected might feel restless for a short while but would eventually bounce back, whereas if the latter were dropped cold turkey, the person affected would only end up feeling even lonelier.
An elderly person finds genuine comfort in an AI companion, alleviating their loneliness, but their family worries this relationship is replacing real human connections. How should we evaluate the benefits versus risks in such cases? What ethical guidelines or intervention strategies might help determine when AI companionship crosses from beneficial to harmful?
The only benefit I can see in this kind of situation is that loneliness technically IS being alleviated, but it’s still just a false sense of security being used to manipulate someone. Frankly, if family members are concerned about a loved one’s lack of human connection, then they should try to connect with that person more (and I realize that isn’t always possible for everyone, but even regular phone calls and texts go a really long way toward connecting with someone). In the end, humans will always be the better option when it comes to companionship, so AI really doesn’t need to try to fill that role at all.
Current business models incentivize AI companies to maximize user engagement. What alternative economic models could promote healthier AI interactions while maintaining commercial viability?
Step one is to stop emotionally manipulating people for profit. Beyond that, I think it would be better to push for AI that serves as an assistant (which a lot of people are already using it for!), kind of like everyone having their own mini-J.A.R.V.I.S. Generative AI has already proven very useful in that role for removing or lessening a lot of the monotony in people’s work, giving them more time to apply the skills that humans ARE better at and making them more productive in the process. This would give AI a widely appealing purpose and address some of the criticism it receives for replacing human interaction.
If you were developing regulations for AI companions, how would you address age restrictions, usage limits, and safety monitoring while respecting user privacy and autonomy?
Employing age restrictions is pretty difficult because you either run the risk of them being easily bypassed or of having to handle extremely sensitive data like people’s IDs. I think these models really need to be stripped of any suggestive behavior they possess so they are safe for everyone to use, and they should also just be made more boring. If these models weren’t so engaging and adaptable in the first place, they would pose far less of a threat to the people (especially children) using them. Private user data and interactions also wouldn’t need to be monitored so incessantly, assuming companies are fully confident their tools can’t be misused outside those guidelines.
My Own Discussion Question
Should generative AI be allowed to engage with its users on an emotional level at all? To what degree? Consider both the positive and negative consequences such as those outlined in the case study.
This question grew out of what I spent most of my time talking about when answering the discussion questions, so it’s probably obvious that my personal answer is a resounding NO, NOT AT ALL! I believe, though, that plenty of people would disagree with me on this front, and I’d be interested in hearing the positives they’d bring up, since the case study mostly focuses on negative effects.
Reflection
This feels like the most personal blog post I’ve written so far, since my opinions on this subject are a lot stronger. I hope I wasn’t too personal in my comments, as I wanted to strike a good balance between opinion and fact; I also hope it didn’t feel overly rambly (though that’s kind of the point of blog posts, is it not?). Anyway, the case study was an interesting read since I was already pretty familiar with this story, and because of that I felt like I had a lot more to say and add to this discussion.
