How Chatbots and AI Are Already Transforming Kids’ Classrooms
Educators across the country are bringing chatbots into their lesson plans. Will it help kids learn or is it just another doomed ed-tech fad?
Nate Fairchild had just instructed his eighth-grade students to write a summary of a disturbing passage from Elie Wiesel's Holocaust memoir Night when he dropped a surprise: He'd customized a chatbot to help them by masquerading as the Nobel Prize-winning writer and answering their questions. "Is that gonna get weird?" Fairchild asked, then answered his own question. "I don't know, maybe! If it does get weird, let me know."
If the students in his literature class found the prospect of chatting with a long-dead Holocaust survivor's synthetic doppelgänger strange, they didn't say so. They were accustomed to this. They'd been experimenting with artificial intelligence for months in Fairchild's classroom in rural Evans, Colorado, using a product called MagicSchool, which is built on large language models from big companies such as OpenAI Inc. and Alphabet Inc.'s Google. They'd mostly turned to it for feedback on their writing or to summarize complex texts, but sometimes more offbeat exercises came up, like this one.
With the novelty having worn off, most students skipped the chatbot. But about a third took up Fairchild's offer. "Hello, I am Elie Wiesel," the chatbot began. The students' questions tended toward the pragmatic: "how should I start my summary about Night by Elie Wiesel," one boy wrote. The chatbot suggested a line: "'Night' by Elie Wiesel is a powerful memoir that recounts the author's harrowing experiences during the Holocaust."
The student (whose parents asked that he not be named) raised his hand. "Mister, what if I copy exactly what it says?" he asked. He would get zero points and a chance to redo the assignment, Fairchild said; better to paraphrase the chatbot if he agreed with its output. Nearby classmates who'd forgone the AI help were already well into writing their summaries. The boy stared at his screen. He toggled to Google and searched for some words. "'Night' by Elie Wiesel is an important state in time that recites the author's experiences during the Holocaust," he typed.
In 2022, when OpenAI launched ChatGPT and teenagers started gleefully outsourcing their homework to AI, the adults began to panic. Within five weeks, New York City Public Schools had restricted ChatGPT on school networks and devices, citing its potential negative effects on learning and worries about the safety and accuracy of its content. Others followed New York's lead.
But as US school districts open for their third academic year since ChatGPT's introduction, things have changed. Most of the biggest districts, including New York's and the six next largest, are allowing and even encouraging some AI use, particularly by teachers, though increasingly by students as well. The Consortium for School Networking, a professional association, found this year that just 1% of surveyed districts were banning AI, while 30% said they were embracing it. (The rest said they weren't sure, their plans weren't defined, or they were using it only for certain things.) A little more than half said they were training educators to use AI for instruction.
[Chart: AI in the Classroom. School principals' opinion of AI, by use case.]
For researchers, educators, parents and students themselves, there are plenty more unresolved questions: How might AI-generated factual errors warp students' understanding of the world? How might AI-generated hand-holding stifle their own self-expression and problem-solving? How might AI-generated learning resources (including chatbot avatars) marginalize the human scholarship they're based on? How might AI-generated grading perpetuate racial and gender biases? How might AI companies cash in on their access to student and teacher information? What might we all lose in giving up social learning processes for computational ones?
At a time when unprecedented political and financial constraints make it tough to enact proven systemic reforms, district officials are also betting that AI can take some pressure off administrators and teachers facing high burnout and attrition by helping them with tasks such as emailing parents and generating lesson plans. And they hope it can address declining US test scores and student engagement by customizing teaching material for each student's needs and interests. Evidence is mixed on whether AI can help accomplish either goal, but in the meantime the financial math has been persuasive: Bringing AI into schools might cost a typical district from nothing to several million dollars a year, depending on the district's size and the products used, compared with a much higher price tag for structural changes such as hiring more teachers.
Officials tend to describe their AI conversion as a natural result of research and soul-searching. They also acknowledge the influence of the companies selling the technology. OpenAI, Google and Microsoft Corp. have each developed education-centered versions of their main AI products: ChatGPT, Gemini and Copilot, respectively. (An OpenAI spokeswoman described the version of ChatGPT for educational institutions as being made for colleges and universities but noted the company has signaled an interest in schools too; Google and Microsoft already gear their offerings toward schools.) And those companies and a handful of well-funded startups have crafted a muscular campaign to embed their AI products in educational settings, including a push to get teachers like Fairchild to use them.
While the companies describe motivations that sound like those of district officials, the financial upside for Silicon Valley is undeniable. Grand View Research estimated last year that the global market for educational AI technologies would rise from $6 billion to $32 billion by the end of the decade. And the business potential goes well beyond that. When executives fantasize aloud about the AI superusers of the future, they're implicitly referring to the generation still in school. Talking up AI's educational possibilities also supports companies' argument to policymakers that any potential societal harm from their products is offset by potential gains.
The companies' outreach efforts have included working with school districts, along with less direct tactics. In 2023 the nonprofit Code.org assembled TeachAI, a group of organizations and companies including Google, Microsoft and OpenAI, whose aim is to get artificial intelligence "into primary and secondary curricula worldwide safely and ethically." The same year, the AI Education Project, a nonprofit founded four years earlier to prepare students for "a world where AI is everywhere," posted revenue of $12 million from donors including Google, Microsoft, OpenAI and the consulting firm Booz Allen Hamilton. In 2024, Google announced donations of more than $25 million to organizations teaching educators and students about AI. Common Sense Media, the nonprofit known for its age ratings of kids' movies and TV, has received funding from OpenAI (the organization wouldn't say how much) to produce educational materials about AI for teachers and families. And this summer, Microsoft announced partnerships for AI training with the biggest teachers' unions in the US; Google and OpenAI are working with the unions too.
The companies say their support is meant to make sure AI enters schools in a way that benefits teachers and students. Officials at the affiliated groups, meanwhile, stress their independence, pointing out that they also have noncorporate supporters and at times oppose tech companies' AI priorities, question their motives and critique their products. But the groups, like the companies, still tend to highlight the importance of preparing students for an AI-dominated future and the argument that risks can be mitigated by responsible policies and habits. And language to that effect has found its way into the AI plans of districts that lean on the groups for guidance.
The outreach from Silicon Valley seems to follow a playbook developed during a decades-long attempt to turn public education into a market for private products. That effort has seen some high-profile failures: for instance, massive open online courses, which were supposed to expand access through web-based classes, and personalized learning regimes, which were supposed to use software to perfectly match students with material at their level. It has also included the enormously successful diffusion of screen-based education platforms such as Google Classroom. One lesson learned along the way is that school technologies rarely catch on without buy-in from teachers.
Getting the products into classrooms and training educators to use them is an obvious first step. But Inioluwa Deborah Raji, a member of TeachAI's advisory committee and an AI researcher at the University of California at Berkeley, told me she worries about a lack of "critical skepticism" of AI in schools, including from the consortium she advises, given the dearth of information about whether and how it works. "It's like putting a car on the road without really knowing how far it can go or how it brakes," she said. "It becomes weird to see it so widely adopted without that due diligence."
If anyone grasps the awkwardness of the dance among the private, public and nonprofit sectors in education, it's Elisha Roberts. In 2024, Roberts had recently been hired at the nonprofit Colorado Education Initiative (CEI) when she was asked to help manage a program run by the organization in partnership with AIEdu, MagicSchool and others. The program would bring AI education into schools using $3 million disbursed through a state grant program funded largely by federal pandemic relief.
A Denver native and former principal, Roberts was nevertheless in some ways a strange choice for the gig. She'd majored in politics at Occidental College in Los Angeles, where she'd been profoundly taken with the seminal Brazilian educator and philosopher Paulo Freire's notion that the purpose of education is liberation: Students develop critical consciousness, recognize oppressive social conditions and work to change them.
Freire's ideas were especially resonant for Roberts because of her identity as a Black, queer woman. She'd grown up wanting to join the US Supreme Court, but in college she decided to be a teacher. She studied in Botswana, taught in Japan and earned a master's degree in education at Boston College, then returned to Denver and became the principal of a charter school that she tried to infuse with Freirean principles. Roberts had only recently left that position when the chance to join CEI, as an assistant director focused on partnerships, came up.
She tended to be cautious about AI products. "We don't know enough about the long-term impacts to actually be introducing it to kids," she told me. But she'd seen teacher burnout and student disengagement firsthand and was open to the argument that AI could help. "This isn't about me and my feelings," she said. "This is about the tools and how they can support teachers."
Long before his Elie Wiesel experiment in Evans, Nate Fairchild had gotten curious about AI; when he learned about the fellowship program, he applied and got in. He established a goal for himself: to see if MagicSchool could help both students who struggled with learning and those who were doing well and needed a challenge. He found the training from MagicSchool limited, focused largely on reviewing product features. "If you want to hear all the ways in which AI is this amazing, revolutionary tool that's going to make everybody's life better, the companies that are providing the trainings will tell you that all day," Fairchild said.
So he did some additional research and devised his own 90-minute introduction to AI for his students, problems and all, before opening up their MagicSchool access. He was soon seeing promising results. Those who struggled with reading comprehension could get MagicSchool to explain texts. Those working above grade level could engage with the chatbot about complexities that peers might miss. Students at all levels liked getting writing feedback. One of them, Aesha Garcia-Guerra, told me, "AI gives me a chance to see the mistake before I turn it in, so I don't miss a point."
But challenges emerged before long. During my visits to Fairchild's classroom, each time he introduced an assignment for which kids could use AI, he reminded them to critically assess the outputs. I saw Garcia-Guerra do this on occasion; one time she caught the chatbot citing the wrong chapter from a reading she'd done. Otherwise I rarely observed the students checking its outputs. A couple of times, I saw errors or biases go apparently unnoticed, and in class debates some cited comments from "the AI" as evidence supporting their claims.
The chatbot's impersonations of historical figures seemed especially fraught. Once, when asked to pretend to be John Brown, the abolitionist who murdered five supporters of slavery and characterized his actions as righteous, the chatbot insisted, as Brown, that violence is never the answer. (This might have been because of safety guardrails keeping the chatbot from generating violent rhetoric.)
Incidents like these bugged Fairchild, but he viewed them as learning opportunities; he'd hoped all along that provocations like the Elie Wiesel chatbot would compel students to critique AI themselves. When kids turned in assignments that seemed too reliant on MagicSchool, he made sure to flag the issue, giving them partial credit and a chance to redo the work. He was also planning to start using inaccuracies, biases and problematic impersonations to initiate more pointed conversations about the shortcomings in real time.
At the same time, Fairchild helped with some professional development sessions for colleagues on how to use MagicSchool and made himself available to company representatives, though he remembers being individually asked for advice only once. He said his independent research and his 15-plus years of teaching experience helped him critically evaluate MagicSchool's products without getting caught up in its commercial interests, though he wondered if less experienced colleagues would have a harder time.
Adeel Khan, the chief executive officer of MagicSchool, told me that when his startup joined the program to bring AI education to Colorado schools, it had barely started attracting users; he said he'd been "excited to spread that all through Colorado." The company has since grown to be one of the most successful US startups selling AI to schools.
Tony Wan, head of platform at MagicSchool investor Reach Capital, explained to me that AI education companies benefit from teachers and students flagging inappropriate content and otherwise helping guide product development. To that end, he said, "we often encourage our founders to just get this in the hands of teachers and users as quickly as possible, not necessarily as a refined product. And I don't mean that in a bad or irresponsible way." Wan later clarified that this "should not come at the expense of quality or pose risks."
During the 2022-23 school year, around the time New York City Public Schools blocked ChatGPT access in schools, someone at the district contacted Microsoft looking for advice. "They said, 'We need you to come here immediately,'" Deirdre Quarnstrom, Microsoft's vice president for education, recalled in an interview. Company representatives traveled to New York and gave what Quarnstrom described as a 101-style introduction to LLMs. That spring, then-Chancellor David Banks wrote in an op-ed that the district would "encourage and support our educators and students" in exploring AI. The district now lets teachers use Copilot and Gemini, with ChatGPT available by request (though students still can't use the products).
The next three largest US school districts ramped up their AI investment as well. Los Angeles Unified enabled Gemini for all employees, with a plan to open access for students in 6th through 12th grade in 2025-26. Miami-Dade County Public Schools trained educators on Gemini and began rolling it out to high schoolers in 2024-25. Chicago Public Schools started testing Google's and Microsoft's products with teachers and was considering opening student access too.
Despite all the investment, by the 2024-25 school year, teachers themselves weren't embracing the technology to the same extent as education officials. A Gallup and Walton Family Foundation poll found that while 60% of teachers had used AI during the school year, they weren't turning to it much. More than half of the respondents said they'd spent three hours or less learning about, researching or exploring AI tools, compared with about a quarter who'd spent at least 10 hours. Three-fifths said they never ask students to use AI. Teachers were also far more likely to believe weekly student use of AI would decrease, rather than increase, writing skills, creativity, critical thinking and communication, among other abilities.
But Silicon Valley was working to overcome the doubts. A Google report in April laid out some of the reasons educators in the UK weren't using AI and made the case that teacher training could win them over, citing a pilot program that found those trained in AI used it more often and became more optimistic about its societal impact. The authors urged the British government to guarantee such training for all public-sector workers. "AI habits are easy to form," they noted.
Besides working with outside groups on training, tech companies created their own training for educators, which could count toward the professional development that states and districts tend to require. A lot of these offerings were similar to what Fairchild received in Colorado, with an emphasis on product tips. They weren't always on point. Google's training gave the example of a writing teacher sending students an AI-generated email urging them, of all things, to practice writing. "Summer's a blast, but don't let your storytelling skills take a vacation!" it read. Training from OpenAI and Common Sense Media had ChatGPT create a multimedia presentation on the Mexican Revolution. "This image highlights significant figures and moments," declared a text caption for a resulting picture, in which no one shown resembled any well-known revolutionaries. An OpenAI spokeswoman said that training would be "refined based on feedback." Robbie Torney, the senior director of AI programs at Common Sense Media, said the organization agreed that the presentation was problematic and that it and other "outdated" material wouldn't be included in future training. "That example perfectly illustrates why using AI-generated images is so tricky: they're sophisticated fiction, not factual representations," he said.
These kinds of fictions were already showing up in classrooms by then. The Houston Independent School District required educators in some schools flagged as underperforming to use district-provided teaching materials generated partly with AI. One worksheet asked students to compare AI-generated imitations of Harlem Renaissance art in which the faces of Black-appearing characters were distorted, prompting a backlash among some community members. (A spokesman for the district said that the AI creations weren't so different from real Harlem Renaissance art with abstract faces and that teachers could flag problematic AI-generated material.)
As the 2024-25 school year came to an end, districts were putting AI to a broadening range of uses: "coaching" teachers, detecting guns, chatting with students about their mental health. Then proponents of AI in schools got a boost from the highest level of the US government. In April, President Donald Trump issued an executive order calling for bringing AI education to kindergarten through 12th grade, including using it to design instructional resources and tutor students, with the help of partners in the private sector. "We must provide our Nation's youth with opportunities to cultivate the skills and understanding necessary to use and create the next generation of AI technology," the order read.
If one vision for education is to mold children into capable users and developers of AI products in partnership with private enterprise, a version of it can already be found in an office-like building in downtown Austin. Last winter at the high school campus of Austin's Alpha School, a well-produced promotional video featuring co-founder MacKenzie Price played on a screen mounted on the foyer wall. Another screen showed an X feed cycling through posts from public figures with some thematic link to the school, including Jeff Bezos, the YouTuber MrBeast and Grace Price, the co-founder's niece and a recent Alpha graduate specializing in Robert F. Kennedy Jr.-aligned health-care activism.
Alpha School is a private K-12 institution that first opened in Texas, under a business called Legacy of Education Inc., and has annual tuition starting at $40,000 on most of its campuses. It's built on the idea that, by personalizing education with AI products, schools can squeeze six hours' worth of learning at a typical campus into about two hours, freeing up students to spend more time on life skills and individual pursuits; to accomplish this, the school has relied partly on educational products from a company called 2HR Learning Inc., also co-founded by Price. "It's allowing their learning experience to be so much more efficient and effective," Price told me. AI is so central to Alpha, she said, that even its kindergarteners are exposed to it, receiving AI-generated tips on improving their speech patterns: how many words they speak per minute, their use of filler words.
That morning at the high school, students sat silently at tables scattered throughout the building, headphones over their ears. They were spending the first three hours of the day doing coursework for different classes (physics, literature, calculus) on their computers. Some used a 2HR Learning platform that led them through texts, videos and on-screen assessments. No teachers were in sight. Instead three adults, called "guides" in part because they didn't have teaching certification, milled around offering support. Their backgrounds were in human resources, marketing and corporate law.
"I feel like there's just so much potential in the kids, and this type of role just really allows you to unlock that potential and be a mentor," said Carson McCann, the guide with the HR background. In that capacity, he added, "I don't do academics really at all." The student nearest him was studying calculus. "I'll be honest, I haven't touched calc in seven years," McCann said. (He ended up leaving the school at the end of the year; he couldn't be reached for comment about his departure, but his LinkedIn page shows he founded a consulting firm.)
If the high schoolers needed help, they could use an on-screen AI chatbot or get AI-generated writing tips (though Price later told me the school stopped offering the chatbot because of cheating concerns). The kids also had remote access to human tutors.
Students were free to use AI products from outside providers as well. Price talked about a student habit of using ChatGPT and other chatbots to convert long texts into factoids on digital flashcards, then memorizing those instead of doing the reading. As long as a student did well enough on the school's on-screen knowledge assessments, Price said, she applauded the shortcut. Showing through quizzes that they'd mastered concepts earned the students XP (experience points, like in video games), which they could convert into dollars for investment in personal projects called "masterpieces."
Price expressed pride in the students' masterpieces: a business selling talking stuffed animals that would give AI-generated mental health advice to teens, for example, and one offering AI-generated flirting tips. Masterpieces without potential commercial applications were rarer, though Price told me about a girl who'd used AI to compose a musical.
Work on the masterpieces took place in the afternoon, beginning with students attending to their BrainLift, a document containing their project notes. Each BrainLift included a list of the contrarian beliefs that made the student's masterpiece special, along with evidence to support those beliefs. The students then fed their BrainLifts to a 2HR Learning platform whose built-in AI chatbot could accommodate the contrarian beliefs they'd described.
When I sat down with the head of the high school, Chris Locke, he told me the school had a name for the contrarian beliefs it encourages in students: "spiky points of view." For example, "one of Alpha's biggest spiky points of view is that you don't need a teacher," he said. Chloe Belvin, the guide who'd previously worked as a corporate lawyer, chimed in: "It's funny, because in a traditional school you get in trouble if you're using AI, and here you get in trouble if you're not." She added, "The starting point of every conversation I have with a kid is, 'Is there an AI that can do this, so that you're not spending your time on it?'"
The financial arrangements among these entities are unclear, but public filings suggest that Alpha has been serving as a sort of in-house distribution channel for a corporation developing AI products for schools. Trilogy also submitted the initial trademark applications for 2HR Learning and several education products before assigning those rights to 2HR Learning itself. And positions at both Alpha and 2HR Learning were recently posted on Trilogy's corporate LinkedIn page.
When I asked Price in late July about the relationships among the companies, she didn't address the specifics but said, "It's high time that we do something different in education, and I believe that allowing capital and industry to go into education is hopefully something that's going to work." Through the school, Liemandt declined to be interviewed. An email to Andrew Price went unanswered, as did messages sent to an email address and a contact form on Trilogy's website.
Davlantes didn't respond to a request for more information about that company, but recent press coverage and public filings may offer some clues. A publication called Colossus that profiled Liemandt in August said that he had lately been building ed tech products at a "stealth lab" staffed by about 300 people and was preparing to publicly launch a flagship product called Timeback. While the article didn't name the lab, a Texas filing in early August recorded the formation of a company called TimeBack LLC, with Andrew Price named as a manager. A website for a product called TimeBack that fits Colossus's description, meanwhile, calls it the system behind Alpha's schools. And Legacy of Education has a trademark pending for the name. The article describes the product as recording a raw video stream of students, monitoring the "habits that make learning less effective, like rushing through problems, spinning in your chair, socializing," then generating feedback for kids on how much time they're wasting and how to do better.
Alpha's privacy policy accounts for this sort of tracking and more, claiming far more access to student information than is typical for companies selling AI to schools, including MagicSchool. Alpha can, for example, use webcams to record students, including to observe their eye contact (partly to detect engagement and environmental distractions). And it can monitor keyboard and mouse activity (to see if students are idle) and take screenshots and video of what students are seeing on-screen (in part to catch cheating). In the future, the policy notes, the school could collect data from sleep trackers or headbands worn during meditation.
Student information can be used not only to keep products functioning but also for other purposes, including to analyze users' interest in Alpha or 2HR Learning technology "or content offered by others"; its operation involves sharing personal data with "business partners" for unspecified reasons. Davlantes said that student data is "fiercely protected" and that, in practice, it isn't shared outside Alpha's "educational system" and is used only for "providing student feedback and improving educational systems or outcomes."
Across America, the private sector's role in bringing AI into schools is only deepening. In June, Trump announced that more than 60 companies and organizations, including Microsoft, Google, OpenAI, MagicSchool and Alpha, had pledged to make resources such as AI products and training available to schools. In July, not long after the Supreme Court ruled that Trump could keep dismantling the federal Department of Education, Education Secretary Linda McMahon (whose main association with AI remains the time she went viral for pronouncing it "A1," like the steak sauce) issued guidance detailing how districts could spend federal funds on AI.
The biggest AI companies are also making back-to-school plans, ramping up their outreach to students and their families themselves. Google added studying-oriented features to its search platform's AI Mode. OpenAI, in addition to announcing a deal to embed its models in the popular Canvas learning management system, introduced a study mode.
[Chart: Schools Boost Their AI Training. Share of US school districts that reported providing training to teachers about AI use.]
The companies' outreach is extending to the largest US teachers' unions too. In July, Microsoft, along with OpenAI and Anthropic PBC, announced a $23 million partnership with the American Federation of Teachers (AFT) to create the National Academy of AI Instruction, which intends to train 400,000 teachers (about a tenth of the US total) over five years. Microsoft's investment in that partnership is part of Microsoft Elevate, a new global initiative focused on AI training, research and advocacy, which aims to donate $4 billion over five years to schools and nonprofits. That initiative also encompasses a partnership with the National Education Association (NEA), which will include technical support and a $325,000 grant.
The president of the AFT, Randi Weingarten, said in an interview that she's come to believe that AI will be as transformative as the printing press and that teachers should learn to use it. With limited government support for any large-scale training, she felt she had little choice but to turn to Silicon Valley. "Professional development done by teachers for teachers is actually the best thing to do," she said, "but where are you going to find that money?" Daaiyah Bilal-Threats, senior director for national education policy at the NEA, characterized her union's relationships with big tech companies around AI (it has worked with Google too) in part as a chance for teachers to influence product development. "It could be dangerous for them to be developing this technology without educator input," she said.
MacKenzie and Andrew Price, meanwhile, are trying to expand into charter schools outside the Alpha brand. In applications to open schools across the US, they've described plans to rely on Trilogy products, positioning Alpha as evidence of past success. Five states, including North Carolina, have rejected the applications, but Arizona approved a virtual school called Unbound Academy. And Alpha itself is opening about a dozen new private-school campuses across the US this fall, including one in New York City that Ackman is helping to promote.
This comes at a time when federal and state laws, including in Alpha's home state of Texas, increasingly allow the use of public funds for private schooling. "Education's a trillion-dollar industry, K-12 in the US," Liemandt said at the Baja conference. "We have to go build 10,000 schools. Back to capital, we need a ton. Donations don't get this done. We need, to build this, billions and billions of dollars."
Some education researchers see dystopian overtones in all these developments. Alex Molnar, a director of the National Education Policy Center at the University of Colorado at Boulder, imagines one possible scenario in which everyone relies so heavily on AI that students can't explain the thinking behind their assignments, teachers can't explain the thinking behind their student evaluations, and administrators can't explain the thinking behind their strategic decisions. All the while, local funds and data flow to faraway private corporations. "We essentially will have then transformed public education," he warned, "from a civic institution into a portal for funneling money to private interests."
But none of that is inevitable. A grassroots movement is growing among those determined to resist the proliferation of AI in schools. The Civics of Technology Project, founded in 2022 by educators and researchers, wants administrators, teachers, students and parents instead to prioritize studying "the collateral, disproportionate, and unexpected effects of technology," including AI. One option is to imagine, and work to bring about, an alternative future in which AI doesn't dominate. "There are ways that teachers, caregivers and students, too, can say, 'Well, what if I don't want to have to use this technology?'" said Charles Logan, a research fellow at Northwestern University and a board member for the Civics of Technology Project.
In Colorado, Fairchild was entering the new school year feeling cautiously optimistic about AI. Students who began last year below grade-level expectations seemed to have improved their written and oral communication more than comparable students in past years. A standardized test measuring students' knowledge acquisition toward the end of the last school year also showed better results than in years past.
Yet Fairchild wasn't sure how much of any of this could be directly credited to MagicSchool. Rather, he suspected his students' use of the platform had forced him to change his teaching. To make sure they were relying on their own thinking, he'd pushed them to explain in assignments and discussions how their own backgrounds and experiences informed their perspectives. This is a recognized method for engaging students, he said, and the availability of AI had caused him to use it more. That, he suspected, was an important reason his students had done well. He'd done the time-consuming, impossible-to-scale work of becoming a better teacher.
I realized that, for all our conversations about how the students used MagicSchool, Fairchild and I hadn't discussed whether he was using AI to generate lesson plans and so on, which the companies typically center in their training materials. When I asked him about this, he admitted he wasn't. He doubted it would actually save him time, and he also had a deeper reason. "I have an artistic resistance to it," he said. "For me that's where the art of teaching sits: processing my students' needs and building a lesson and then building a rubric and evaluation for it. For me that's where the emotional and spiritual dialogue between the teachers and students is, so at this time, I'm unwilling to hand that off."
CEI's Roberts told me that rationale made sense to her. In fact, she said, she wasn't using AI much herself. Having learned all she had about AI and its potential role in education, she'd arrived at a sharp critique of the technology. At one point, she texted me, "The negative impacts of tech always impact low-income Black and Brown communities first and more."
A couple of minutes later, she emailed an article about allegations that a data center owned by Elon Musk's xAI was spewing pollution near a mostly Black Memphis neighborhood; she'd previously raised with me how xAI's Grok chatbot had spouted off in May about a nonexistent "genocide" against White people in South Africa. (xAI didn't respond to requests for comment.)
The incidents brought to mind a trope among advocates of AI in schools that irritated her to no end: the notion that it was at least as useful and as harmless as the calculator, another product whose rollout to schools once drew suspicion. "The calculator doesn't construct facts about world knowledge and give them to you," she said. "The destruction of knowledge is something that should concern any educator."
Her candor was jarring, coming from someone so involved in one of the highest-profile statewide AI education programs in the US. She'd recently been promoted to become CEI's director of district implementation and partnership, with the statewide AI program set to expand in the 2025-26 academic year. Plans include offering AI training to students and school counselors in addition to teachers. But while all this seemed to conflict somewhat with Roberts' personal views, she said she's constrained by the demands of American education culture.
It's a culture in which, with ever-diminishing resources available for proven structural improvements, some educators find that AI assistance makes their life a bit easier and their students a bit more engaged. It's also a culture in which schools are viewed less as a route to liberation than as a training camp for a future workforce. Assuming AI companies continue to dominate, the students Roberts cares about could graduate into a more precarious future if people like her don't help them play along.
"If I could wave my magic wand and AI doesn't exist, I'd be like, 'Great,'" she said. In the absence of that, she said, she had a plan. This year she hoped to transform the Colorado program as much as she could. Even as she facilitated the advance of AI products into schools, she planned to raise awareness about the environmental impact of those products, the ideological influence of the corporations behind them and the possible negative impacts on learning. A term already existed to describe the kind of work she'd be doing. The job at hand, she said, was harm reduction.
Source: https://www.bloomberg.com/news/features/2025-09-01/what-artificial-intelligence-looks-like-in-america-s-classrooms

