Explainable and Accurate Natural Language Understanding for Voice Assistants and Beyond
Published
International Conference on Information and Knowledge Management (CIKM)
Abstract
Joint intent detection and slot filling, also termed joint NLU (Natural Language Understanding), is invaluable for smart voice assistants. Most recent advancements in this area have focused on improving accuracy using various techniques. Explainability is undoubtedly an important aspect for deep learning-based models, including joint NLU models, since they are considered black-box models: their decisions are opaque to the outside world and hence tend to lack user trust. In this work, we show that it is possible to make the full joint NLU model inherently explainable at granular levels of explanation without compromising accuracy. Furthermore, having made the full joint NLU model explainable, we show that our extensions can be applied to other general classification tasks such as sentiment analysis and named entity recognition (NER).