OpenAI's watermarking method detects AI-written text from ChatGPT with 99.9% accuracy, but internal debate stalls its release

OpenAI has already developed a tool to detect whether ChatGPT was used to write essays or research papers, but it has yet to release the technology amid ongoing internal debate. The tool has been ready for about a year; its public release is being held back over concerns about the impact and effectiveness of such a system.

The Wall Street Journal reports that the anti-cheating tool uses a technique known as watermarking, which subtly changes the tokens (word fragments) that ChatGPT outputs. These modifications form an imperceptible pattern within the text that the tool can detect, signaling AI authorship. According to an internal document, watermarking is highly effective at detecting ChatGPT-generated text, with a 99.9% success rate when enough new text is available.
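
OpenAI has not disclosed how its watermark actually works, but published research on text watermarking (e.g. Kirchenbauer et al., 2023) suggests one plausible shape: bias the model toward tokens from a pseudorandom "green list" seeded by the preceding token, then detect that bias statistically. The sketch below is purely illustrative under that assumption; the vocabulary, bias level, and function names are invented for the example and are not OpenAI's actual method.

```python
import hashlib
import random

# Illustrative token-level watermarking in the style of published research
# (green-list biasing). This is NOT OpenAI's undisclosed scheme; the
# vocabulary, bias, and names here are toy assumptions.

def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically mark `fraction` of the vocabulary as 'green',
    seeded by the previous token, so generator and detector agree."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: int(len(shuffled) * fraction)])

def generate(first_token: str, vocab: list[str], length: int,
             bias: float = 0.9, seed: int = 0) -> list[str]:
    """Toy 'language model': with probability `bias`, sample the next token
    from the green list. A real LLM would instead boost green-token logits."""
    rng = random.Random(seed)
    tokens = [first_token]
    for _ in range(length):
        greens = sorted(green_list(tokens[-1], vocab))
        pool = greens if rng.random() < bias else vocab
        tokens.append(rng.choice(pool))
    return tokens

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Detector statistic: the fraction of tokens that are green with respect
    to their predecessor. Unwatermarked text scores near 0.5; watermarked
    text scores much higher, and confidence grows with text length."""
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(tok in green_list(prev, vocab) for prev, tok in pairs)
    return hits / max(len(pairs), 1)

vocab = [f"tok{i}" for i in range(1000)]
watermarked = generate("tok0", vocab, 300)
rng = random.Random(42)
unwatermarked = ["tok0"] + [rng.choice(vocab) for _ in range(300)]

print(green_fraction(watermarked, vocab))    # ~0.95: strong watermark signal
print(green_fraction(unwatermarked, vocab))  # ~0.5: chance level
```

Because each token behaves like an independent coin flip biased toward green, the watermarked and unwatermarked score distributions separate sharply as the text grows longer, which fits the report that the 99.9% figure holds only "with enough new text."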

The internal debate over the watermarking tool

Though the tool is promising from a technical point of view, it has sparked heated debate inside OpenAI. A principal concern is that it could disproportionately affect non-native English speakers, penalizing them more than others. An OpenAI spokesperson highlighted the risks associated with the watermarking technology and said the company is carefully exploring other options, remaining cautious about its broader implications.

Another challenge is the user experience. In a survey conducted by OpenAI (via TechCrunch), nearly 30% of users said they might never use ChatGPT again if the anti-cheating technology were put into effect. Moreover, experiments suggest the watermark could be removed or weakened by very simple means, such as translating the text or adding and then removing emojis.
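
The toy scheme sketched above makes this fragility easy to see: the detector's green lists are seeded by each token's predecessor, so inserting characters splits every watermarked token pair and re-seeds the lists, diluting the signal, while translation replaces the token sequence wholesale. The snippet below (the same illustrative scheme, again an assumption rather than OpenAI's method) shows emoji insertion collapsing the detector statistic.

```python
import hashlib
import random

# Toy demonstration of watermark fragility: inserting emojis between tokens
# re-seeds the detector's green lists and dilutes the statistical signal.
# Same illustrative scheme as above; not OpenAI's actual method.

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    rng = random.Random(int(hashlib.sha256(prev_token.encode()).hexdigest(), 16))
    shuffled = sorted(vocab)
    rng.shuffle(shuffled)
    return set(shuffled[: len(shuffled) // 2])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    pairs = list(zip(tokens, tokens[1:]))
    return sum(tok in green_list(prev, vocab) for prev, tok in pairs) / max(len(pairs), 1)

vocab = [f"tok{i}" for i in range(1000)]

# A perfectly watermarked sequence: every token is green w.r.t. its predecessor.
tokens = ["tok0"]
for _ in range(300):
    tokens.append(min(green_list(tokens[-1], vocab)))

print(green_fraction(tokens, vocab))  # 1.0: maximal watermark signal

# Attack: insert an emoji after every token. Each original (prev, next) pair
# becomes (prev, emoji) and (emoji, next), neither of which matches the
# green lists the detector expects.
attacked = []
for tok in tokens:
    attacked.extend([tok, "😀"])

print(green_fraction(attacked, vocab))  # far below 1.0: signal largely destroyed
```

Even if the attacker later deletes the emojis, a translation or paraphrase round-trip would not restore the original token boundaries, so the detector never sees the pairs it expects.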

Discussion about releasing the tool began almost two years ago. Some employees argued that the benefits of the technology outweighed the risks, while others remained extremely cautious about its possible drawbacks. The company has also been assessing feedback from users and educators on the watermarking, and is testing to ensure it does not significantly degrade ChatGPT's performance.

These talks have involved OpenAI's top leadership, including CEO Sam Altman and CTO Mira Murati, but neither has pushed for the tool's release. The company is also exploring how to distribute the detection technology, seeking to make it useful for educators while preventing its use by bad actors.

Industry context

OpenAI is not alone in the race to build these tools. Google is beta-testing SynthID, a watermarking technology for its Gemini AI, and has gone further by adding audio and visual watermarking to detect AI-generated content. OpenAI, for its part, has already released detection technology for images created with its DALL-E 3 model.

The longer OpenAI deliberates, the more worrying AI-generated content becomes for academic integrity. Educators such as Alexa Gutterman are struggling to identify work generated by artificial intelligence, a growing problem in urgent need of solutions.

Though OpenAI has come a long way in developing the watermarking tool, its release date is still not confirmed. The company remains committed to finding a middle ground that works for users, educators, and the integrity of AI technology.
