Artificial Intelligence Used to Mark Exam Papers

Experts have long predicted that artificial intelligence would re-weave the fabric of education. That shift has now begun, with AI taking on a new task: marking logic-based exam papers.

The concept is taking shape in China's education system, where trials show that machine intelligence can match, and in some instances surpass, teachers in marking exam papers.

AI Now Interprets General Logic

Image source: Cambridge Assessment

Machine learning has long assisted with marking multiple-choice questions, and it has done that job well. But according to a report in the South China Morning Post, researchers in China are now experimenting with AI that marks essays.

Built to act somewhat like the human brain, the system perceives the general logic of a passage from context and connects it with the actual meaning of the words. This mimics how a reader grasps the direction of a story from its headline and then extracts the substance from the rest of the writing.

The algorithm can then make human-like judgments about an essay's quality. It grades the paper and offers a summary remark on areas where the learner needs to improve; the recommendations might cover sentence structure, writing approach, and category selection.

Is This Eliminating the Teacher?

Honestly, we might be heading in that direction. For now, though, China's use of AI in exam marking is meant to assist the teacher. So far the technology has hugely compressed the time spent assessing essays, and the model has proven reliable at resolving the inconsistencies that often arise from paper to paper.

Comparing the artificial and the biological brain, the report says the algorithm and the human teachers both achieved an average performance rating of 92%, based on a case study covering 120 million students from 60,000 schools.

However, the system is projected to eventually outperform its supervisors, because the model is designed to improve on its own as it tackles more tasks. The technology relies on an evolutionary algorithm, a type of AI software that learns from its mistakes to improve task execution automatically; in this case, it takes advice and ‘criticism’ from human teachers to bring its scores into line with theirs.
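The article does not describe the researchers' actual model, but the idea of an evolutionary loop that absorbs human "criticism" can be illustrated with a toy sketch. Everything here is hypothetical: the essay features, the human grades, and the mutate-and-select rule are invented purely to show the general technique.

```python
import random

random.seed(0)

# Hypothetical data: each essay is a feature vector (e.g. structure,
# vocabulary, coherence), paired with the grade a human teacher gave it.
essays = [
    ([0.8, 0.6, 0.9], 85),
    ([0.4, 0.7, 0.5], 52),
    ([0.9, 0.9, 0.8], 84),
    ([0.3, 0.2, 0.4], 36),
]

def score(weights, features):
    """Weighted average of the features, scaled to a 0-100 grade."""
    return 100 * sum(w * f for w, f in zip(weights, features)) / sum(weights)

def disagreement(weights):
    """Mean absolute gap between machine grades and human grades."""
    return sum(abs(score(weights, f) - g) for f, g in essays) / len(essays)

# Evolutionary loop: randomly mutate the scoring weights and keep a
# mutant only when it agrees better with the human examiners.
weights = [1.0, 1.0, 1.0]
for _ in range(2000):
    candidate = [max(0.01, w + random.gauss(0, 0.05)) for w in weights]
    if disagreement(candidate) < disagreement(weights):
        weights = candidate

print(round(disagreement(weights), 2))  # average gap to human grades, in points
```

The selection rule is the whole trick: human grades act as the fitness function, so each accepted mutation moves the machine's scoring closer to the examiners it is meant to match.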

The U.S. Take on the Technology

Image source: Deccan Herald

The U.S. has taken a more controlled approach. Gradescope, developed at the University of California, Berkeley, has been using machine intelligence to score essays. Another example comes from U.S. researchers deploying AI to the same end with a platform called e-Rater, an automated reader built by ETS (Educational Testing Service) that is said to grade as many as 16,000 essays in a span of only 20 seconds.

When e-Rater scored more than 20,000 essays by junior and secondary school students, it delivered results very similar to those of trained human examiners. That was the conclusion experts reached after assessing e-Rater at an event at the University of Akron.

Taken together, these developments suggest that the future of exam marking, across disciplines and levels of learning, will be machine-dominated.