OpenAI has the tech to watermark ChatGPT text—it just won’t release it

(credit: Getty Images)

According to The Wall Street Journal, there is internal conflict at OpenAI over whether to release a watermarking tool that would let people check whether a piece of text was generated by ChatGPT.

To deploy the tool, OpenAI would tweak how ChatGPT generates text so that it leaves a subtle statistical pattern, or watermark, that a companion detection tool can pick up. The watermark would be imperceptible to human readers without that tool, and the company's internal testing has shown that it does not negatively affect output quality. The detector would be accurate 99.9 percent of the time.

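The article does not describe how OpenAI's watermark works, and the company has not published details. One widely discussed approach from the research literature (the "green list" scheme of Kirchenbauer et al., 2023) suggests how such a trail could be embedded and detected: at each step the generator softly favors a pseudorandom subset of the vocabulary derived from the preceding token, and the detector checks whether suspiciously many tokens land in those subsets. The Python sketch below is purely illustrative, using a toy vocabulary, uniform stand-in logits, and made-up parameters (GREEN_FRACTION, GREEN_BIAS); it is not OpenAI's method.

```python
import hashlib
import math
import random

# Toy vocabulary; a real system would use the model's tokenizer.
VOCAB = [f"tok{i}" for i in range(1000)]
GREEN_FRACTION = 0.5   # fraction of the vocabulary marked "green" at each step (assumed)
GREEN_BIAS = 2.0       # logit boost added to green tokens during generation (assumed)


def green_list(prev_token: str) -> set[str]:
    """Derive a pseudorandom 'green' subset of the vocabulary from the previous token."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))


def sample_next(prev_token: str, logits: dict[str, float], rng: random.Random) -> str:
    """Sample the next token after softly boosting the logits of green tokens."""
    greens = green_list(prev_token)
    adjusted = {t: l + (GREEN_BIAS if t in greens else 0.0) for t, l in logits.items()}
    max_l = max(adjusted.values())
    weights = [math.exp(l - max_l) for l in adjusted.values()]  # softmax weights
    return rng.choices(list(adjusted.keys()), weights=weights, k=1)[0]


def detect(tokens: list[str]) -> float:
    """Return a z-score: how far the observed green-token count sits above chance."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    std = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / std


if __name__ == "__main__":
    rng = random.Random(0)
    logits = {t: 0.0 for t in VOCAB}  # stand-in for model logits: uniform scores
    watermarked = ["tok0"]
    for _ in range(200):
        watermarked.append(sample_next(watermarked[-1], logits, rng))
    unmarked = [rng.choice(VOCAB) for _ in range(200)]
    print(f"watermarked z-score:   {detect(watermarked):.1f}")  # large positive value
    print(f"unwatermarked z-score: {detect(unmarked):.1f}")     # near zero
```

In schemes like this, the bias only nudges sampling probabilities rather than forcing specific words, and the detector's z-score grows with text length, which may help explain how a detector could reach very high accuracy without visibly degrading output.
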
Some OpenAI employees have campaigned for the tool’s release, but others believe that would be the wrong move, citing a few specific problems.
