OpenAI's new voice interface for ChatGPT raises concerns over emotional attachments and misinformation risks

In late July, OpenAI began rolling out a humanlike voice interface for ChatGPT. While this made it easier to interact with the AI, it raised concerns that users might begin to form emotional attachments to the chatbot.
OpenAI addressed these concerns in a newly released “system card” for GPT-4o, a document detailing the risks associated with the model. Those risks include the amplification of societal biases, the spread of disinformation, and the model’s potential misuse in developing harmful technologies.
OpenAI’s transparency has been praised, but experts like Lucie-Aimée Kaffee from Hugging Face say the system card does not provide enough information about the model’s training data.
The risks posed by the new voice interface in particular were laid bare during stress tests of the feature, when some users expressed an emotional connection to the AI. One told the model: “This is our last day together.”
OpenAI acknowledged that anthropomorphising the chatbot could lead users to place misplaced trust in its outputs, which could in turn have a knock-on effect on their social interactions with other people.
OpenAI’s Joaquin Quiñonero Candela said that voice mode could help lonely people, but that more research is needed into the dynamics of these emotional relationships.
The system card also warned that a humanlike voice might encourage users to overlook inaccuracies in the model’s responses, potentially aiding the spread of misinformation.
OpenAI has come under increased scrutiny in recent months, and the rollout followed the resignations of key researchers. The company has also faced criticism from high-profile figures, including Scarlett Johansson, who claimed one of the AI’s voices sounded too much like her own.