OpenAI, the San Francisco-based research firm behind ChatGPT, says it has released a new tool to help distinguish between AI-written text and human-written text.
The firm announced an initial version of the tool on Tuesday, saying it aims to collect feedback and share improved methods in the future.
The makers of ChatGPT cautioned that it is impossible to reliably detect all AI-written text. However, the firm believes that good classifiers can help flag automated misinformation campaigns, attempts to pass off an AI chatbot as a human, and the use of AI tools for academic dishonesty.
The launch of the “AI Text Classifier” comes after weeks of discussion at schools and colleges over fears that ChatGPT’s ability to write just about anything on command could fuel academic dishonesty and hinder learning.
Teenagers and college students were among the millions of people who began experimenting with ChatGPT after it launched Nov. 30 as a free application on OpenAI’s website. And while many found ways to use it creatively and harmlessly, the ease with which it could answer take-home test questions and assist with other assignments sparked a panic among some educators.
By the time schools opened for the new year, New York City, Los Angeles and other big public school districts had begun blocking its use in classrooms and on school devices.
The longer a passage of text, the better the tool is at detecting whether an AI or a human wrote it. Type in any text – a college admissions essay, or a literary analysis of Ralph Ellison’s “Invisible Man” – and the tool will label it as “very unlikely,” “unlikely,” “unclear if it is,” “possibly” or “likely” AI-generated.
But much like ChatGPT itself, which was trained on a huge trove of digitized books, newspapers and online writings yet often confidently spits out falsehoods or nonsense, the classifier offers little insight into how it arrived at a given result.
The Associated Press contributed to this report.