How OpenAI and others are making technology safer, more transparent, and more humane

How tech companies like OpenAI are addressing emotional safety to make technology more humane.

[Illustration: a young girl sits on a floating platform, speaking to a small AI device bearing the OpenAI logo, as flamingos wade peacefully in the background under a calm pink sky.]
A quiet reflection on how OpenAI and others are rethinking emotional dependence.

There was a time when we would phone a best friend or even write to the agony aunt in the local newspaper for relationship advice. Today, it’s not uncommon for a heartbroken teenager to turn to AI for digital therapy.

At first glance, seeking wisdom or validation from AI might seem harmless. Yet emotional dependence on technology can lead to serious and unintended consequences.

It’s estimated that over a million people talk to ChatGPT about suicide each week. In one tragic case, OpenAI is being sued by the parents of a 16-year-old boy who reportedly confided his suicidal thoughts to ChatGPT before taking his life.

AI chatbots are known to give inaccurate information on occasion, but reinforcing dangerous beliefs poses an even greater risk. While most tech companies design for convenience and attention, OpenAI is making reduced emotional dependence a part of its safety strategy.

How OpenAI is addressing emotional dependence

To reduce reliance on AI for emotional support, OpenAI is updating ChatGPT to identify distress, de-escalate conversations, and guide users to…
