
The development of a districtwide policy on the use of artificial intelligence is underway in Cambridge. A resolution “promoting thoughtful, age-appropriate engagement with AI tools” from an Aug. 5 meeting of the School Committee sparked conversation about how much schools should explicitly encourage use of the new technology.
Without a policy at the district level, schools regulate use of AI individually.
The resolution calls for a districtwide standard in Cambridge Public Schools that would “safeguard the development of foundational academic skills” while preparing students “not only to use these technologies effectively, but to understand and evaluate them as informed participants in a rapidly changing world.”
District parent Gabriel Robinson expressed concerns at the meeting that the resolution assumes an inevitability in artificial intelligence use in schools.
“I reject the premise that it is equally important for students to graduate with a deep and practical familiarity with AI tools as it is for them to develop core skills and knowledge independent of those tools,” Robinson said. “This policy order shills for big tech by adopting the vision that it would like to sell in which an AI-centered future is inevitable, and all that is left for us to decide is whether our children will be among the masters of AI or its victims.”
Member David Weinstein responded to concerns from public comment with an amendment to neutralize the resolution’s language to “ensuring” engagement rather than “promoting” it – but ultimately withdrew the change after other committee members offered an alternate perspective: AI is here to stay.
“We don’t need to promote engagement, it’s already happening. This is promoting thoughtful, age-appropriate engagement,” Elizabeth Hudson said at the meeting.
“AI is gonna be here, it’s a powerful tool,” Richard Harding said. “This is happening across the country – we are actually behind in our policy on AI, way behind.”
Discussions about artificial intelligence in education are taking place globally. An April executive order by President Donald Trump, “Advancing Artificial Intelligence Education for American Youth,” encourages and rewards its advancement in schools at the teacher and student level.
Despite the federal push, a new report from the Department of Education shows that many public schools have not yet developed clear policies on the new technology.
“A lot that we don’t know”
Cambridge committee members and public speakers agreed that a policy on AI should not be rushed.
Parent and School Committee candidate Arjun Jaikumar said he has seen the harms of AI harnessed incorrectly within his own profession, the law. “We need to teach our kids the base skills for study and for the workplace before we incorporate AI to make them more efficient,” Jaikumar told committee members.
Questions of academic integrity, environmental harms and impact on brain development make establishing a concrete policy on artificial intelligence in schools controversial.
“There’s a lot that we don’t know about the effects of AI in learning,” Jaikumar told Cambridge Day. “We should take this slowly as we learn more as a society about the effects.”
State offers guidance
Neighboring Somerville also lacks policies around AI in its handbooks for students and parents, despite the topic being raised in School Committee meetings there as far back as February 2024 – though tangentially, in an argument for diversity in computer science to overcome bias in software.
The state Department of Elementary and Secondary Education released a resource document last month as guidance for the implementation of the technology in public schools.
“AI is not a standalone initiative. It cuts across instruction and assessment, operations and communication. It changes how students engage with content, how teachers design learning and how systems make decisions,” according to the document.
Interim superintendent David Murphy assured committee and forum members that his artificial intelligence policy will be handled with caution and care.
“It’s about making sure that we are not sacrificing the types of ethical constraints that we want students to cultivate during their time as students,” Murphy said.