A Model for Determining The Degree of Contradictions in Information

Ram Gopal Raj
Vimala Balakrishnan

Abstract

Conversational systems are rapidly gaining popularity, and consequently the believability of such systems, or chatterbots, is becoming increasingly important. Recent research has shown that chatterbots that learn tend to be rated as more believable by users. Building on Raj’s Model for Chatterbot Trust, we present a model that allows chatterbots to determine the degree of contradiction between contradictory statements encountered during learning, thereby enabling them to learn more accurately via a form of discourse. Information learnt by a chatterbot may be contradicted by other information presented subsequently, and correctly choosing which information to use is critical to chatterbot believability. Our model uses sentence structures and patterns to compute contradiction degrees, overcoming a limitation of Raj’s Trust Model, which treats all contradictory information as equally contradictory, whereas some contradictions are greater than others and should therefore have a greater impact on the actions the chatterbot takes. This paper also presents the relevant proofs and tests of the contradiction-degree model, as well as a potential method for integrating our model with Raj’s Trust Model.
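The abstract describes computing a graded contradiction score from sentence structures and patterns, rather than treating every contradiction as equally severe. As a rough illustration only (this is not the paper's actual model, and the function, word lists, and scoring rule below are all hypothetical), one minimal sketch of such a graded score compares content-word overlap between two statements and checks for a polarity flip introduced by negation words:

```python
# Hypothetical sketch of a graded contradiction score; NOT the model from the paper.
NEGATIONS = {"not", "no", "never", "n't", "cannot"}

def tokenize(sentence: str) -> set:
    """Lowercase and split a sentence into a set of word tokens."""
    return set(sentence.lower().replace(".", "").split())

def contradiction_degree(s1: str, s2: str) -> float:
    """Return a score in [0, 1]: 0 = no detected contradiction, 1 = maximal.

    Two statements score higher the more content words they share while
    differing in polarity (exactly one of them is negated).
    """
    t1, t2 = tokenize(s1), tokenize(s2)
    content1, content2 = t1 - NEGATIONS, t2 - NEGATIONS
    if not (content1 and content2):
        return 0.0
    # Jaccard overlap of content words: are they talking about the same thing?
    overlap = len(content1 & content2) / len(content1 | content2)
    # Polarity flip: exactly one statement contains a negation word.
    polarity_flip = bool(t1 & NEGATIONS) != bool(t2 & NEGATIONS)
    # Same topic with opposite polarity: degree grows with topical overlap.
    return overlap if polarity_flip else 0.0

# "The sky is blue" vs. "The sky is not blue" -> fully overlapping content
# words with a polarity flip, so the degree is 1.0; identical statements
# with the same polarity score 0.0.
```

Under the paper's framing, such a degree could then weight how strongly a new contradictory statement should affect trust in previously learnt information, instead of every contradiction triggering the same response.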

Article Details

How to Cite
Raj, R. G., & Balakrishnan, V. (2011). A Model for Determining The Degree of Contradictions in Information. Malaysian Journal of Computer Science, 24(3), 160–167. Retrieved from https://mojem.um.edu.my/index.php/MJCS/article/view/6552
