A machine can now do college-level math


For a long time, computer scientists struggled to develop artificial intelligence capable of solving difficult symbolic mathematics. At best, such systems might attempt high school math problems, and not well enough to pass those classes. This disappointed Iddo Drori, a computer science professor at the Massachusetts Institute of Technology, who a few years ago taught a class of 500 students with more questions than he had time to answer. Lured by the possibility that artificial intelligence could fill this tutoring gap, he and his team set to work developing a machine learning model capable of solving problems in calculus, differential equations, and linear algebra from undergraduate math courses at MIT and Columbia University.

Now the team has introduced a neural network, an AI algorithm inspired by the structure and function of the brain, that solves college-level math problems at a human level in seconds, according to their Proceedings of the National Academy of Sciences article published last month. The model can also explain solutions and generate new problems that students have found indistinguishable from human-written ones. The development could help professors create new course content and automate grading in large in-person or massive open online courses. And it could tutor students, explaining the steps of difficult problems.

But some researchers worry that the explanations provided by the algorithm are not yet on par with those offered by humans. And others worry that the algorithm could introduce new ways for students to cheat or the prospect of other unintended consequences.

“We are currently working on an AI that will graduate from MIT in computer science,” Drori said. “It won’t formally earn a degree, but it would complete [and pass] the classes.”

Neural networks excel at problem solving through pattern recognition. They train by looking at large sets of data, the bigger the better, after which they can generate new examples. They can produce realistic images of faces, for instance, after looking at many images of real faces. But symbolic mathematics, such as the integrals found in calculus, requires precision rather than approximation. That stands in contrast to numerical computation, which computers and calculators excel at but which often produces approximate solutions that are merely good enough for engineers or physicists.
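The symbolic-versus-numerical distinction can be illustrated with a short sketch (an illustrative example of the general idea, not code from the study): a symbolic answer to an integral is exact, while a numerical method only approximates it.

```python
from fractions import Fraction

def symbolic_integral_x_squared():
    """Exact integral of x^2 on [0, 1]: the antiderivative x^3/3 gives exactly 1/3."""
    return Fraction(1, 3)

def numeric_integral_x_squared(n=1000):
    """Approximate the same integral with a midpoint Riemann sum over n slices."""
    width = 1.0 / n
    return sum(((i + 0.5) * width) ** 2 * width for i in range(n))

exact = symbolic_integral_x_squared()
approx = numeric_integral_x_squared()
print(exact)                  # 1/3, an exact rational number
print(abs(float(exact) - approx))  # small, but not zero
```

The numerical answer is close enough for an engineer, but a symbolic solver is expected to return the exact value, which is what makes the problem hard for pattern-matching systems.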

Faculty members can already use the new algorithm, which is available as open source on GitHub, to create course content. In addition to reclaiming time so they can focus on human tasks such as coaching and mentoring, professors could use the data it produces to determine whether their course prerequisites are, in fact, the right prerequisites. This could help ensure that the students they meet in class are prepared to succeed.

“I’m surprised this hasn’t been done sooner, because undergraduate math courses are such closed systems,” said Doug Ensley, a professor of mathematics at Shippensburg University who has served as a director at the Mathematical Association of America. “These developments, which increasingly disrupt the traditional lecture-homework-exam framework, give instructors more and more reason to turn to more active classroom strategies.”

But some experts see limitations in the neural network. The algorithm’s explanations, some argue, are a high-level summary of the steps rather than an in-depth account of the process and the underlying concepts.

“It can be useful for generating practice problems and evaluating student responses to problems,” said Jay McClelland, professor and director of Stanford University’s Center for Mind, Brain, Computation and Technology. “But there’s still a long way to go to better help students understand the concepts taught in these math classes.”

Drori is aware that his team still has work ahead of it. Although students rated the machine-generated questions on par with human-written ones, his team has yet to assess student perceptions of the algorithm’s explanations. The AI also cannot handle questions that rely on images, such as graphs, or questions that involve mathematical proofs. But he is pleased with the results so far and optimistic that more progress can be made.

“We improved a high school math benchmark from 8% to 80% accuracy, and we solved college-level coursework problems for the first time and on a human level,” Drori said. “It’s not every day that you move the needle an order of magnitude.”

A member of the research team, however, expressed reservations about the work.

“It might even be a scary step forward,” said Avi Shporer, an MIT astronomer and co-author of the study. “If a machine can answer these questions, then how do we know that the student has really answered these questions?” Shporer noted that the work was done in a controlled lab environment and could have unintended consequences when released into the real world.

Not everyone is concerned about students using the tool to cheat.

“COVID has taught us that it’s easy enough already,” Ensley said. “It certainly reminds us that there needs to be more to our classes than problem sets and exams.”

Since not all math learning is about getting numerical answers, Drori sometimes encourages students to use the tool to solve problems in his classes. That way, they take minutes rather than hours to solve problems, giving them the opportunity to brainstorm big design questions and learn technical skills that could transfer to future careers.

“They’re still solving simple vanilla exercises without the tools,” Drori said, before adding that “that’s part of the progress.” He compared the advancement to the development of calculators and self-driving cars, which also free humans from tedious tasks and encourage the development of new skills.

Drori’s machine learning breakthrough was likely influenced by his willingness to think differently. Instead of asking the AI to solve symbolic math problems directly, he asked it to perform programming tasks. For example, “find the difference between two points” became “write a program that finds the difference between two points.” (Not all problems were this simple; in some cases, the neural network needed context to understand the problem.) Then, instead of pre-training the neural network on millions of examples of textual problems alone, as is usually the case, he pre-trained it on millions of text and code examples.
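To make the reframing concrete, here is a minimal sketch of what the rewritten task and its generated program might look like. The function and prompt wording are hypothetical illustrations, not taken from the paper: the point is that the math question becomes a program, and running the program yields the answer.

```python
# Hypothetical illustration of reframing a math question as a programming task.
math_question = "Find the difference between two points."
programming_task = "Write a program that finds the difference between two points."

def difference_between_points(p, q):
    """Return the componentwise difference q - p of two points.

    This stands in for the kind of program the model might generate
    in response to the rewritten prompt.
    """
    return tuple(b - a for a, b in zip(p, q))

# Executing the generated program produces a concrete answer.
print(difference_between_points((1, 2), (4, 6)))  # (3, 4)
```

Because the output of a program can be executed and checked, this framing turns a fuzzy text-generation problem into one with a verifiable answer.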

The work is not intended to replace human teachers, according to Drori. Rather, he sees advancement as an opportunity for faculty members to be more thoughtful and creative in their teaching and research.

“Every time you solve something, someone will come up with a harder question,” he said in an MIT press release. “But this work opens the door for people to start solving increasingly difficult questions with machine learning.”
