As Los Angeles public schools resumed classes last week, the district and families were grappling with problems in a program touted as the nation's first to use artificial intelligence to tailor learning to individual students.
I summarize the events that unfolded in Los Angeles and suggest questions parents and citizens should ask as AI projects are introduced in their school districts.
As school officials across the country scramble to make the most of AI, Los Angeles' headaches have played out "again, bigger, harder and worse," said Alex Molnar, director of the National Education Policy Center at the University of Colorado Boulder.
Molnar starts from a premise that inverts how we usually treat technology: adopt it first and sort out the problems later.
He doesn't think school districts should use AI until it passes the test of two questions: Is it a good thing? And is it the best way to achieve the stated goals?
Molnar said he doesn't know of any educational AI that has been proven to meet these requirements, so schools shouldn't adopt it. Students and teachers are already using AI, but it's a different story when schools encourage it and spend taxpayer money on it, as LA has done.
—
What happened with AI adoption in LA Public Schools
In March, the nation's second-largest school system by student population touted a new AI project and chatbot called “Ed.”
Los Angeles Unified School District officials said the AI would be a "personal assistant for students" that could customize learning plans for individuals. For example, a child who performs less well than classmates in math might be recommended a tailored educational game.
Students and parents can also ask the chatbot for resources if they need help with reading or mental health issues, or ask what lunch is available in the school cafeteria.
But within months, the startup behind the district's AI technology ran into financial difficulties, the chatbot was largely shut down, and the Los Angeles Unified School District opened an investigation into whether student data was misused in the AI project, the education news organization The 74 reported.
School district officials have said they will continue the AI program and may reinstate the chatbot soon, a decision that has left some Los Angeles parents with questions.
“AI is a potentially transformative technology and it would be remiss not to consider its implementation in education,” a spokesperson for the Los Angeles Unified School District said. The district also said it “continues to take steps to protect student data.”
The startup, AllHere, did not respond to a request for comment.
Molnar said the problem is “much deeper and more serious than any technical failings” specific to Los Angeles or its choice of technology partners. “It's a problem with its whole mindset.”
He said the district is not equipped to answer, and typically does not ask, the essential questions of an AI project: “What are we trying to accomplish, and is this the best way to accomplish it?”
Using the Ed chatbot as an example, he noted that many families struggle to find information like tutoring resources or cafeteria menu calendars amid a sea of school bureaucracy.
Molnar said the root cause is that many schools can't or don't ensure that information is up-to-date or easy to understand. Chatbots can't solve the problem of garbage information.
Molnar also criticized most AI touted as “personalized” learning, saying it's essentially generic technology that just calls kids by name.
—
Questions school districts should ask as they get started with AI
Molnar suggested parents figure out whether AI meets their needs: Is it easy to use? Does it give them the information they want? In other words, does it work? (Parents know from the start of remote school in 2020 that a lot of technology is terrible.)
Then, be prepared to ask what alternatives the school or district has considered.
For example, if it's a tutoring chatbot, Molnar said you should ask whether and how school officials evaluated the AI as "the best of the options around the world." Wouldn't that time and effort be better spent putting staff on extra shifts to help kids?
Third, ask how students' personal information will be protected. Molnar said such questions are often brushed aside with assurances that school technology only collects data anonymously and in a way that can't be traced back to the child. But anonymous data is rarely truly anonymous.
Molnar said the best thing that could happen is for parents to pressure lawmakers to require school districts not to use AI until the companies developing the technology can prove that it's effective and won't cause harm, and can be held legally accountable for improper use of students' data.
(In a recent research paper on AI in education, Molnar and his co-authors offer further ideas for policymakers and school officials.)
While surveys show Americans are generally skeptical of the benefits of AI, Molnar and I also discussed the hope that the technology can help solve some of our toughest challenges in education, health care, climate change and transportation.
It's very human to hope for an AI that understands your children and is fully informed, creative and infinitely patient, Molnar said, but it's unrealistic to expect AI to embody our hopes for our children's futures.