
OpenAI has removed the “warning” messages in ChatGPT, its artificial intelligence (AI)-powered chatbot platform, which indicated when content might violate its terms of service.
Taking to X (formerly Twitter), Laurentia Romaniuk, a member of OpenAI’s AI model behaviour team, stated that the change was intended to cut down on “gratuitous/unexplainable denials.”
Meanwhile, ChatGPT head of product Nick Turley said in a separate post that users should now be able to “use ChatGPT as [they] see fit,” so long as they comply with the law and don’t attempt to harm themselves or others.
He added: “Excited to roll back many unnecessary warnings in the UI.”
The removal of certain content warning messages doesn’t mean that ChatGPT is now a free-for-all. The chatbot will still refuse to answer certain objectionable questions and won’t respond in ways that support blatant falsehoods.
However, some X users noted that doing away with the so-called “orange box” warnings appended to spicier ChatGPT prompts combats the perception that ChatGPT is censored or unreasonably filtered.
Earlier, ChatGPT users on Reddit had reported seeing flags for topics related to mental health and depression, erotica, and fictional brutality.
OpenAI this week updated its Model Spec, the collection of high-level rules that indirectly govern OpenAI’s models, making it clear that the company’s models won’t shy away from sensitive topics and will refrain from making assertions that might shut out specific viewpoints.