Reading Material Classification (RMC) assigns an unclassified text to a readability-graded class of reading material based on its text readability. Recent approaches to RMC have used Natural Language Processing (NLP) methods, in particular machine-learning techniques (e.g., Support Vector Machines, Multinomial Naïve Bayes, and Latent Semantic Indexing), to overcome the disadvantages of syntactic features, namely their insufficiency for modelling levels of text reading difficulty. Ontologies have been used for sharing and reusing knowledge, and potentially for supporting inference; in this work, they are used for RMC. In ontologies, concepts and contexts are treated separately. Using only basic NLP techniques such as stemming and word sense disambiguation, we propose integrating contextual information into ontologies, i.e., a Contextual Ontology (CO), with the aim of improving the ontology’s performance, and possibly other aspects of its quality, for RMC. Since a CO provides a gradation of concepts, i.e., concept variants, an alternative view of the RMC results can be obtained and utilised. From the evaluation experiments, we do not claim that our proposed method outperforms machine-learning-based RMC; its performance is merely on a par with such methods. Rather than beating them, we use RMC to show that integrating contextual information into ontologies (RMC-CO) provides a considerable benefit over not integrating it (RMC-O): improvements of 1.56% and 2.11% are obtained on the validation and testing data, respectively.
Keywords: contextual information, contextual ontology, machine learning, ontology, reading material classification.