Blog(1)
Recent successes in machine learning have led to numerous Artificial Intelligence applications, such as automatic translation and chatbots. However, the effectiveness of these systems is limited by their opaqueness: predictions made by machine learning models cannot be easily understood by humans, making it hard to discern what the model learns well and what it does not — a fundamental step toward building more robust AI systems.
Publications(2)
Explainable and Accurate Natural Language Understanding for Voice Assistants and Beyond
Author: Kalpa Gunaratna, Vijay Srinivasan, Hongxia Jin
Published: International Conference on Information and Knowledge Management (CIKM)
Date: 2023-10-21
Explainable Slot Type Attentions to Improve Joint Intent Detection and Slot Filling
Author: Kalpa Gunaratna, Vijay Srinivasan, Retiree, Hongxia Jin
Published: Conference on Empirical Methods in Natural Language Processing (EMNLP)
Date: 2022-12-07
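To make the topic of the EMNLP paper above concrete: joint intent detection and slot filling means a single model reads an utterance and predicts one intent label for the whole sentence plus one slot label per token, typically from a shared encoding. The sketch below illustrates only that generic joint architecture with random toy weights — it is not the paper's explainable slot-type attention method, and the vocabulary, label sets, and dimensions are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB = {"book": 0, "a": 1, "flight": 2, "to": 3, "boston": 4}
INTENTS = ["book_flight", "play_music"]
SLOTS = ["O", "B-destination"]

D = 8  # shared embedding / hidden size
emb = rng.normal(size=(len(VOCAB), D))          # token embeddings (shared encoder)
W_intent = rng.normal(size=(D, len(INTENTS)))   # utterance-level intent head
W_slot = rng.normal(size=(D, len(SLOTS)))       # per-token slot-filling head

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def predict(tokens):
    h = emb[[VOCAB[t] for t in tokens]]          # (T, D) token encodings
    sent = h.mean(axis=0)                        # pooled sentence representation
    intent_probs = softmax(sent @ W_intent)      # one intent per utterance
    slot_probs = softmax(h @ W_slot, axis=-1)    # one slot label per token
    intent = INTENTS[int(intent_probs.argmax())]
    slots = [SLOTS[int(i)] for i in slot_probs.argmax(axis=-1)]
    return intent, slots

intent, slots = predict(["book", "a", "flight", "to", "boston"])
print(intent, slots)
```

Because both heads read the same encoder output, the two tasks can inform each other during training; the paper's contribution is making the slot predictions of such a joint model explainable via slot type attentions.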
News(3)
Zero-shot sketch-based image retrieval (ZS-SBIR) is a central problem in sketch understanding [6]. This paper aims to tackle the problems with the current status quo for ZS-SBIR across its main settings, including category-level (standard) [4], fine-grained [1], and cross-dataset [3] retrieval.
We envision that future AI agents can be equipped with knowledge and cognition to intelligently react, adapt, and respond to human commands and interactions in real-world contexts. Future AI agents can leverage knowledge and explainable AI to enhance visual and voice understanding without the need for large amounts of training data, thereby improving user engagement with and trust in AI agents.