Why Seattle’s ban on students using ChatGPT is doomed — and what comes next

James Couture, a world history teacher at West Seattle High School, noticed that some of his students who typically struggle with writing were turning in “bizarrely good” papers.

Back in January, he ran their work through plagiarism-detection software and found no evidence of copying. Couture now suspects his students were using ChatGPT, an artificial intelligence chatbot that draws on vast amounts of text from the internet and complex algorithms to respond to prompts in full paragraphs.

Worried it could be used to cheat, Seattle schools blocked students from using ChatGPT in December, and districts across the state and nation have done the same. But experts say there’s no realistic way to stop the use of AI chatbots and some say the best path forward is to embrace this new technology. For better or worse, AI will transform education inside and out.

“Banning ChatGPT is like using a piece of paper to block this flood that is coming,” said Jason Yip, a professor at the University of Washington who is working on ChatGPT and misinformation curriculum for children.

Seattle schools ban students from accessing ChatGPT on district-owned devices or on district-provided internet, but kids can easily use it on their personal devices or at home and email themselves the answers.

And as platforms like Microsoft's Bing build in ChatGPT's underlying technology, and similar chatbots from Google and Meta spread, it might be impossible to block such tools, Yip said.

“Districts are completely confused about what the hell to do here,” Couture said. “I don’t think this district, or any district, is capable of coping with this problem.”

Seattle schools expect to revisit the issue in the months to come.

“This is all so new that the digital team at the district is talking about it constantly,” said Tim Robinson, the district’s spokesperson. “There is nothing set in stone. I think the directive will be modulating here in the future.”

It’s not all bad news for education. The bot can help teachers generate questions for a quiz, identify the primary sources in a student essay or rewrite assignments at varying reading levels. Some students are already using it to help prepare for tests, research topics and write emails to professors or potential employers.

But educators worry chatbots will lead to widespread, instantaneous cheating, work avoidance and plagiarism. And if teachers wrongly accuse someone of using a chatbot, they risk breaking trust with their students.

Like Seattle, Bellevue and Northshore schools have blocked ChatGPT for students under the age of 13, an easy call since ChatGPT's terms of use bar people under 13 from using it and require parental consent for those between 13 and 18.

Bellevue is creating a task force to make recommendations about how to responsibly incorporate the AI tool into teaching and learning. Yip thinks that makes more sense than Seattle’s ban.

“There’s other things on the internet that are more nefarious,” Yip said. “How did we decide that ChatGPT was the thing to be banned?”

Why teachers are struggling

Couture, a 21-year teaching veteran, is still struggling to figure out what to do when students turn in a paper that was likely written by a chatbot.

“It’s unprovable in any meaningful way,” Couture said, “so that’s tricky.”

He said he can give them a lower grade, but beyond that, the district doesn’t significantly punish students for plagiarism.

“If they have half a brain, they can figure out how to get the thing to collude (by inserting) grammatical errors or have a different voice. So I don’t believe it will be controllable,” Couture added.

Couture has started telling students he runs every assignment they turn in through ChatGPT. But he knows this doesn’t really solve the problem. He said students could use ChatGPT to get the answer and then rewrite it in their own voice.

About a month ago, he started playing with ChatGPT himself, asking it to rewrite some assignments at a fifth-grade reading level for his lower-level reading students. Couture calls that a “godsend,” saving him valuable time.

But he worries about how to assign meaningful homework ever again — any history question he wants his students to answer at home can be answered by ChatGPT. He would have to make the question obscure enough for ChatGPT to not be able to answer it. But what’s the point of such an assignment?

He knows some students will always want to learn and do the work, but others will cheat, and ChatGPT will assist in that. Couture is most worried about the kids in the middle, the ones who will use ChatGPT as a crutch and could become hooked on it.

A concern about trust

Kira Hopkins, an English teacher at West Seattle High School, knows how to spot AI writing. “It has its own tone and feel to it,” she said. “I know what their writing sounds like and I know what students’ learning to write looks like.”

But one time, her internal AI detection meter failed her. She worries she accused an innocent student of cheating. The student grew defensive, claiming he did write the paper.

“When you have that conversation then it really gets in the way of the relationship between you and the kid,” Hopkins said.

If educators have good relationships with students, Hopkins said, students will feel more comfortable coming clean about using ChatGPT to cheat.

Students avoid doing the work when they’re overwhelmed or about to miss a deadline. “That is usually when you see cheating or plagiarism,” she said.

She believes teachers must empower students to want to create their own work.

“We do a lot of discussions about literature and emphasizing the fact that your ideas matter,” Hopkins said. “That is where I think kids will feel more confident and competent that they can do their own writing.”

And when it comes to trust, students should be skeptical about what they get from ChatGPT. The tool can retain and use any data typed into it, so experts say users shouldn't share private information with it.

If you ask ChatGPT where it got the information, the source it gives might be fabricated, said Noah Smith, a computer science and language processing professor and researcher at the UW.

Unreliable AI detectors

UW has seen a couple dozen reports of suspected misconduct involving AI since January, said Victor Balta, a university spokesperson.

Some professors use GPTZero, a tool designed to help teachers flag ChatGPT-generated text in student assignments, but it is unreliable and sometimes reports false positives, Balta added.

Penelope Moon, the director of the university’s Center for Teaching and Learning, doesn’t think educators should use a service to catch ChatGPT. “That technology treats the symptom,” she said, “it doesn’t treat the problem.”

Surveillance technology creates “learning environments that are adversarial in nature. And that assumes students to be bad actors,” Moon added. “Nothing we do in higher education is important enough to justify doing that.”

She fears the educational system punishes risk-taking, meaning students are afraid to fail.

“Yet failure is the basis of learning,” Moon said. “When failure becomes really risky it amplifies the stakes, and research shows that when the stakes go up the motivation for cheating goes up.”

Moon believes educators need to change the system if they are going to change problematic uses of ChatGPT.

At a UW panel in early April, educators and natural-language processing experts pointed out that even if a school district buys a program that “catches” ChatGPT use, OpenAI, the company behind ChatGPT, could change the tool itself to outwit the detector. Microsoft has invested heavily in OpenAI.

Experts stressed that the prevalence of ChatGPT is an opportunity for teachers to design better learning: centering the student, making assignments require personal, original thought, and highlighting growth instead of evaluation.

Moon makes a case for intellectual struggle.

“I think a lot of students are seeing (ChatGPT) as an efficiency,” Moon said. “That is great — to a degree. But (we need to) help students understand that intellectual struggle is really important. You can’t learn without struggling. Tools like AI tools can actually rob them of their ability to have that, their opportunity to have that intellectual struggle.”