
Chatbots are changing the way we access information and what we gain from it. It's happening online, in workplaces, and, over the last few years, in schools themselves.
In the wake of early mass adoption of ChatGPT — years before its parent company OpenAI added age-specific tools and restrictions — schools, including in Los Angeles and New York City, banned chatbots in the classroom outright. Many school officials feared generative AI tools would be used primarily to cheat, and concerns remain that AI can hamper learning, worsen mental health problems, and enable harms such as child exploitation.
But in the years since ChatGPT's launch, some K-12 systems have partially reversed course and embraced AI. Sentiment among teachers has shifted, and students are using AI more routinely. The shift has also been encouraged by deliberate investment from AI developers eager to get their products into the hands of teachers and students alike. Thousands of colleges, for example, have deals with AI developers, including OpenAI, Google, and Anthropic; the three companies have also launched "tutor" versions of their products for general users.
On the K-12 level, these AI giants, and others like Canva and Microsoft, have designed tools specifically for teachers and introduced gated AI agents to students themselves. Many schools are in the midst of renegotiating existing educational contracts with such companies to account for free AI products — technology that didn't exist when some institutions agreed to add digital product suites to student and faculty computers.
AI tech is evolving rapidly, and many questions remain. Here's how the nation's three largest school districts approach artificial intelligence:
NYC Public Schools
New York City's public school system serves more than 900,000 students across 1,597 public schools and nearly 300 charter schools. The Education Department is the city's largest agency, with plans to expand its services through a new pre-K program as well. It was also one of the first districts to ban ChatGPT, and one of the first to unblock it.
New York City Public Schools recently announced a new set of AI guidelines for students, teachers, and families created by its AI Task Force. Previously, individual schools took on the responsibility of designing their own policies to address urgent concerns about AI. NYC's rulebook is one of the most user-friendly Mashable has seen so far, but many specifics about AI student use are still unclear.
How should NYC teachers approach AI?
NYC Public Schools requires all AI tools to go through what is known as the ERMA (Enterprise Review Management Application) process. ERMA enforces privacy and security rules and now includes parameters for appropriate AI use, including: the need for human oversight and review, a prohibition on entering personal student information into unapproved AI systems, tool-specific age restrictions, and discretion over AI outputs.
The guidelines also explain the school system's "traffic light" approach to AI: every potential AI use case is categorized as green (approved), yellow (careful judgment needed), or red (prohibited).
NYC schools can't use AI to make decisions regarding class placement, graduation, eligibility, or discipline, for example. AI cannot be used to create Individualized Education Plans (IEPs), prohibit a student from choosing a specific path of coursework, or confer grades. AI cannot be used to provide emotional or therapeutic counsel to students, and AI-powered surveillance is prohibited. The use of student data for AI training is banned.
Yellow light cases include using AI tools to evaluate data sets and translating critical information for students and parents. Educators get the green light to use AI for tasks such as scheduling, generating accessible materials, and refining communications.
Can NYC students use AI?
For now, students are allowed to use AI for basic "research, exploration, and creative projects," according to NYC Public Schools, but it must be used with educator oversight. The system considers student use of AI in learning a "yellow light" use case, and students aren't encouraged to incorporate AI without their teachers' involvement.
NYC Public Schools has not yet decided whether students are banned from using personal chatbots, or the extent to which AI tools can be used to complete homework assignments outside of school. Meanwhile, parent advocates have called for a two-year moratorium on the technology outright, citing what they describe as the district's lack of concern for long-term learning consequences, privacy, and the environment.
"Our students are already encountering AI beyond school walls," the public school system writes on its website. "The question is whether they are equipped with critical thinking, ethical grounding, and creative agency—or left to navigate AI alone."
Los Angeles Unified School District
The Los Angeles Unified School District (LAUSD), which serves more than 376,000 students, has been trying to rein in unchecked tech use by students. In 2025, the district joined several others across the country in implementing a bell-to-bell cellphone ban, prohibiting student phone use during school hours.
In April, the LAUSD school board unanimously approved a new resolution limiting access to technology in classrooms, including instituting screen time restrictions and banning devices for kindergarten and first-grade students.
AI policy, however, has remained a moving target. Following an initial block on ChatGPT, LAUSD introduced its own AI chatbot, "Ed," in 2024. The chatbot was shuttered just three months later, after its developer went out of business, and the district's superintendent has recently been under federal investigation for alleged ties to the company. Months before, an LAUSD AI task force drafted the district's first usage policies, which are no longer available on the LAUSD website.
However, updated AI policies were distributed in an April 2024 policy bulletin. Across the board, users are only permitted to use district-approved tools, and educators must obtain consent from parents or legal guardians before using certain apps with students. LAUSD employees and users are not allowed to upload copyrighted materials or "share any confidential, sensitive, privileged or private information when using, prompting, or communicating with any AI tools." They must independently verify AI outputs and be wary of hallucinations and bias.
Can LAUSD students use AI?
Students under the age of 13 are banned from using any generative AI tools (and social media), according to the Los Angeles Times. Older students are allowed to use AI under specific conditions and with administrator approval.
As of September, LAUSD also recommended student AI training, including an annual "digital citizenship" course, and distributed a Responsible Use Policy for students and parents to sign.
Students can't upload personal information to district-approved chatbots, illegally download materials, or upload copyrighted materials, and must properly cite all sources. They cannot use AI to generate hateful speech or facilitate bullying.
The policy doesn't govern personal chatbot use outside the district network.
Chicago Public Schools
Last year, Chicago's public school system (CPS) published a lengthy AI Guidebook, pledging to fully integrate generative AI across CPS during the 2025-2026 school year. The system, serving around 316,000 students at 630 schools, is part of a Gates Foundation-funded case study on implementing AI in K-12 schools.
In line with other school policies, students and teachers can only use AI tools permitted by the district. Currently, most chatbots, including ChatGPT and Claude, are not approved for use. Teachers, not students, can use Google Gemini and Microsoft Copilot.
Educators must follow age restrictions set by AI companies and monitor student use. While CPS allows teachers to use AI detection tools to catch plagiarism, the district warns educators should be cautious of false positives.
Can Chicago public school students use AI?
Students are encouraged to use administrator-approved AI tools at CPS schools for tasks such as brainstorming, summarizing information, and setting deadlines and schedules. CPS says students can use approved tools to create digital media or generate creative writing prompts. Students are also encouraged to use GenAI as a study partner and consult AI-powered search engines as needed. However, many of these tools (such as Perplexity or Nano Banana) are not on the list of approved products.
Students are required to cite any AI used in their assignments, which must be "fundamentally" generated by the student. AI plagiarism is handled through the existing Student Code of Conduct. Teachers are tasked with monitoring students' appropriate use of AI.
Nationwide AI policies
Despite an increase in AI use by students and teachers, policies to foster responsible AI use lag across the country. A 2025 survey by government-funded research nonprofit RAND found that 80 percent of students felt their teachers didn't teach them how to use AI for schoolwork. Fewer than half of school principals reported having AI policies, and only around a third of teachers reported having academic integrity policies that addressed AI use.
Meanwhile, around 34 state-level education departments have issued AI policy recommendations, according to AI literacy organization AI for Education. The federal government, including First Lady Melania Trump, has pushed for greater tech integration in children's education. Miami-Dade County schools, the fourth largest school system in the U.S., recently announced a partnership with Google to pilot new classroom AI tools.
Rise of AI-only K-12
While public schools figure out the best way to approach the new technology at scale, private, tech-backed programs are fully embracing AI. This includes the rise of AI-only schools, including a Department of Education darling known as Alpha schools. In direct opposition to the prevailing advice followed by public school districts, which is to keep humans in the loop at all times, Alpha replaces human teachers with screens, offering students just two hours of AI-powered instruction facilitated by adult "guides" rather than education professionals.
Alpha is backed by private equity investors, including its co-founder and school "principal" Joe Liemandt, who has funneled personal cash into the AI "school of the future." Meanwhile, public school funding has been on the decline: according to estimates for the 2026 school year, public funding for K-12 schools dropped by 11 percent. Districts across the country are facing teacher shortages and high educator turnover. AI can only do so much.
Disclosure: Ziff Davis, Mashable’s parent company, in April 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.