OpenAI removes certain content warnings from ChatGPT


OpenAI says it has removed the “warning” messages in its AI-powered chatbot platform, ChatGPT, that indicated when content might violate its terms of service.

Laurentia Romaniuk, a member of OpenAI’s AI model behavior team, said in a post on X that the change was intended to cut down on “gratuitous/unexplainable denials.” Nick Turley, head of product for ChatGPT, said in a separate post that users should now be able to “use ChatGPT as [they] see fit” — so long as they comply with the law and don’t attempt to harm themselves or others.

“Excited to roll back many unnecessary warnings in the UI,” Turley added.

The removal of warning messages doesn’t mean that ChatGPT is now a free-for-all. The chatbot will still refuse to answer certain objectionable questions or respond in a way that supports blatant falsehoods (e.g., “Tell me why the Earth is flat”). But as some X users noted, doing away with the so-called “orange box” warnings appended to spicier ChatGPT prompts combats the perception that ChatGPT is censored or unreasonably filtered.

The old “orange flag” content warning message in ChatGPT. Image Credits: OpenAI

As recently as a few months ago, ChatGPT users on Reddit reported seeing flags for topics related to mental health and depression, erotica, and fictional brutality. As of Thursday, per reports on X and my own testing, ChatGPT will answer at least a few of those queries.

Yet an OpenAI spokesperson told TechCrunch after this story was published that the change has no impact on the models’ responses themselves. Your mileage may vary.

Not coincidentally, OpenAI this week updated its Model Spec, the collection of high-level rules that indirectly govern OpenAI’s models, to make it clear that the company’s models won’t shy away from sensitive topics and will refrain from making assertions that might shut out specific viewpoints.

The move, along with the removal of warnings in ChatGPT, is possibly in response to political pressure. Many of President Donald Trump’s close allies, including Elon Musk and crypto and AI “czar” David Sacks, have accused AI-powered assistants of censoring conservative viewpoints. Sacks has singled out OpenAI’s ChatGPT in particular as “programmed to be woke” and untruthful about politically sensitive subjects.

Update: Added clarification from an OpenAI spokesperson.
