With Microsoft reportedly putting $10 billion into OpenAI, the creator of ChatGPT, it appears eager to corner what will probably become a massive market for generative AI text. It's not alone. You can almost hear the colossal checkbooks opening worldwide to get in on ChatGPT, its cousins and offspring.
In education, the market and business opportunities are different, more than likely not resting in creating with AI but in spotting it. That’s because, in what was the most predictable of all consequences, students have already been using ChatGPT to cheat. In classrooms everywhere, students are trying to pass off computer-created work as their own. And while a few attentive professors have spotted AI-created writing, sooner or later – and probably sooner – instructors are going to need help.
If we’re being honest, they do already. The global education community cannot rely on the attentive reading of a few teachers to safeguard the integrity of academic work and the value of educational credentials. At least it cannot rely on that alone.
So, as AI use grows, helping teachers peer into its fog of technology will have increasing value. And because being able to detect AI-generated content is a strong deterrent to using it inappropriately, helping schools avoid massive academic fraud could become its own billion-dollar business. Demand for tools to spot ChatGPT won't be as big as the demand for ChatGPT itself. But there is no reason to doubt it will be pretty big nonetheless.
Based on what we’re being told and what we’ve already seen, the question is not if AI detection systems will exist, but what they’ll look like and when they’ll be in common use. And, of course, whether education providers can risk going without them. In most cases, they likely cannot.
We know the detection regimes are coming because some of them are already here, meaning that the race to quality development and full deployment of these AI detection products and services is already well underway.
One, created by a student at Princeton University, inexplicably scored a ton of media attention – even after several companies said they'd already done the same thing. Since then, Futurism and others have reported that his solution has problems. Maybe more than a few. A good effort, but perhaps not the effort that's going to define this space. Properly tackling AI cheating will probably take more serious investment.
If that's not it, the winning to-market solution may come from Australia, where a team recently said its software could spot GPT-generated text.
Or it may come from Europe, from a startup such as CrossPlag. The company says its tech not only reliably spots text created by ChatGPT, but that it's also accurate at spotting text that's been run through common AI-supported paraphrasing tools in attempts to fool existing plagiarism detection systems.
CrossPlag says their system is also good at picking up what’s called spinning – a newer but pretty common method of trying to hoodwink teachers and anti-cheating systems by running text through various translation tools, bouncing it from one language to another then back into English. Here, they say being in Europe and dealing with its multiple language requirements is a big help.
Then there’s Turnitin, the industry leader in helping teachers and schools spot suspicious written work. They too say they have a ChatGPT solution they’re testing now. They’ve already released a sneak peek. They’re relying on their experience and expertise to develop a winning detection system.
“While these large language AI-generative writing models like ChatGPT are general purpose, the AI systems to detect their statistical signatures need to be specially built,” said Eric Wang, Vice-President of AI at Turnitin. “In our case, we’ve leveraged our deep understanding of how students write and what educators will find helpful to build a detector giving visibility into how these AI-generative writing systems are being used in student writing. What we’re testing now and rolling out soon is built on 20 years of working with teachers, giving them insights into actual student work.”
And it won’t hurt either that Turnitin already has their systems and software in thousands of schools around the world, in a platform and user experience that instructors already know.
Then there's always the possibility that OpenAI will be the one to identify its own work. The company has hinted about adding a watermark to the AI text, making it easy to spot – a "mitigation," as the company calls it.
But of course, if you know anything about technology or academic misconduct, the moment OpenAI adds a watermark, someone will develop an app that takes it out, placing the onus for differentiating AI from human writing back on teachers and back on one or more of these companies.
Wherever it comes from, the company or companies that develop a tool that can actually reliably find and flag suspected bot banter based on how it writes and the words it chooses will likely enjoy significant market standing. To say nothing of helping to preserve the integrity of academic work and the value of human creativity.
That’s not a bad value proposition – literally or figuratively.
However it plays out, there’s exceptional value and profit waiting to be won with generative AI such as ChatGPT – not just in the AI, but in helping all of us be able to tell who is actually talking to us. Or whether it’s a who at all.