OpenAI recently conducted a safety evaluation of GPT-4o, which found that ChatGPT Voice Mode users might "form social relationships with the AI" and seek companionship. The findings were published as part of a safety assessment report titled "GPT-4o System Card", which outlines the safety work OpenAI carried out before making GPT-4o available to the general public.
While the safety challenges identified by OpenAI include risks such as the AI model giving erotic and violent responses or producing disallowed or biased content, one of the related risks suggests that users might "form social relationships with the AI" and thereby reduce their need for human interaction.
The company also noted that "extended interaction with the model might influence social norms." These risks in the report, however, apply only to the new advanced Voice Mode, which is capable of mimicking human speech and even conveying emotions. OpenAI also revealed that the team responsible for red-teaming GPT-4o found instances of people getting attached and forming emotional bonds with the chatbot during internal trials.
The report also addressed some of the copyright issues affecting the company and the development of large language models more broadly, stating that GPT-4o is capable of refusing requests for copyrighted content and for output containing music.
While there is currently no solution to the problem other than reducing the amount of time spent using the chatbot's Voice Mode, OpenAI said that it "intends to further study the potential for emotional reliance" and how the "audio modality may drive behaviour."