An Ontology-Based Automated Scoring System for Short Answer Questions

  • Dr. Rebhi S. Baraka, Mariam H. Abu Mugasib

 Short answer questions are open-ended questions that require students to construct an answer.
They are commonly used in examinations to assess basic knowledge and understanding,
and they are an important expression of academic achievement. Unfortunately, they are
expensive and time consuming to grade by hand. Therefore, teachers frequently fall back
on multiple-choice or true-false standardized tests. Automated scoring systems are a
developing technology in this field, used to overcome the time and cost difficulties of
paper-based exams. The search for excellence in machine scoring of short answer questions
continues, and numerous studies are being conducted to improve the effectiveness and
reliability of these systems.
We propose a hybrid approach to measuring the semantic similarity of texts, to overcome
the problems found in similar systems that adopt a single approach only. The proposed
approach relies on the WordNet ontology to measure the similarity between two texts. It also
uses traditional string matching to compensate for the coverage limitations of WordNet as an
upper ontology. Besides that, the proposed system uses natural language processing tools
such as a parser, a word segmenter, and a part-of-speech tagger for text preprocessing.
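To illustrate the idea, the following Python sketch combines a WordNet-based word
similarity with a string-matching fallback. It assumes NLTK's WordNet interface and
Python's standard difflib; the library choice and the names wordnet_similarity and
hybrid_similarity are illustrative assumptions, not the system's actual implementation.

    # Hybrid word similarity: WordNet first, string matching as fallback.
    # Assumes NLTK with the WordNet corpus installed (nltk.download('wordnet')).
    from difflib import SequenceMatcher
    from nltk.corpus import wordnet as wn

    def wordnet_similarity(w1, w2):
        # Best Wu-Palmer similarity over all synset pairs; None when a word
        # falls outside WordNet's coverage (e.g., a domain-specific term).
        scores = [s1.wup_similarity(s2)
                  for s1 in wn.synsets(w1)
                  for s2 in wn.synsets(w2)]
        scores = [s for s in scores if s is not None]
        return max(scores) if scores else None

    def hybrid_similarity(w1, w2):
        # Fall back to surface string matching when the ontology has no entry.
        score = wordnet_similarity(w1, w2)
        if score is None:
            score = SequenceMatcher(None, w1.lower(), w2.lower()).ratio()
        return score

    print(hybrid_similarity("car", "automobile"))    # high: same WordNet synset
    print(hybrid_similarity("WordNet", "wordnets"))  # string-matching fallback

In such a hybrid scheme, a sentence-level score can then be obtained by aligning the
words of a student answer with those of a model answer and averaging the best per-word
similarities.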
The results were modest and still need improvement to bring the system's scoring as close as
possible to that of a human specialist.
Keywords: short answer question grading, question types, automatic grading, electronic
evaluation, text relatedness, relatedness measure.