Level of Robot Taking Gaokao Math Test Matches Average Level of Students, Exam-Marker Says
Yicai Global

(Yicai Global) June 8 -- The Gaokao, China's annual college entrance examination, is currently underway.

'AI-MATHS', a special exam-taker, sat for this year's math test in Chengdu, in the southwestern province of Sichuan.

Its math level matched the average level of human students, an exam-marker said.

AI-MATHS is a robot developed by Chengdu Zhun Xing Yun Xue Technology Co., Western China Metropolitan Newspaper reported.

AI-MATHS finished the Beijing version and the national version of the math test in 22 minutes and 10 minutes, respectively, scoring 105 points and 100 points.

Though both scores are far below the full mark of 150 points, and also short of the 110-point target its 'parents' had set, Lin Hui, chief executive of Chengdu Zhun Xing Yun Xue Technology and director of the big data center of Tsinghua University's Suzhou Research Institute, argued that the months the robot had spent studying were not wasted.

AI-MATHS' score suggests that its math level matches that of an average student, said Qi Zuhai, one of the exam-markers and a senior teacher at Chengdu No. 7 High School.

AI-MATHS performed very well on multiple-choice and fill-in-the-blank questions, exam-markers found. It mainly lost points on short-answer questions, even scoring zero on some.

All the math questions AI-MATHS failed to answer correctly were heavy on text, Qi noted, suggesting the Gaokao robot understands mathematical language far better than natural language.

Still, the robot has made great progress, he acknowledged. Take a three-dimensional geometry proof, for example: the robot used a different method than most students would, solving the problem in an innovative way.

The biggest difficulty in developing a Gaokao robot lies in getting the system to accurately understand human language, Lin said. "It can easily solve a problem directly expressed in math language. However, its biggest weakness is its inability to understand scenario descriptions in a test question. It may simply be that it couldn't understand the problem."
