Blog
Recent successes in machine learning have led to numerous Artificial Intelligence applications, such as automatic translation and chatbots. However, the effectiveness of these systems is limited by their opacity: predictions made by machine learning models cannot be easily understood by humans, so it is hard to discern what a model learns well and what it does not, a discernment that is fundamental to building more robust AI systems.
Publications
X-CANIDS: Signal-Aware Explainable Intrusion Detection System for Controller Area Network-Based In-Vehicle Network
Authors: Seonghoon Jeong, Sangho Lee, Hwejae Lee, Huy Kang Kim
Published: IEEE Transactions on Vehicular Technology
Date: 2023-10-24
Explainable and Accurate Natural Language Understanding for Voice Assistants and Beyond
Authors: Kalpa Gunaratna, Vijay Srinivasan, Hongxia Jin
Published: International Conference on Information and Knowledge Management (CIKM)
Date: 2023-10-21
Zero-Shot Everything Sketch-Based Image Retrieval, and in Explainable Style
Authors: Da Li, Timothy Hospedales
Published: Computer Vision and Pattern Recognition (CVPR)
Date: 2023-06-18
News
Visual Question Answering (VQA) [1,2] is the field of research that aims to develop methods for answering natural language questions based on the information provided in corresponding images.
Zero-shot sketch-based image retrieval (ZS-SBIR) is a central problem in sketch understanding [6]. This paper aims to tackle all settings associated with the current status quo for ZS-SBIR, including category-level (standard) [4], fine-grained [1], and cross-dataset [3] retrieval.
We envision future AI agents equipped with knowledge and cognition to intelligently react, adapt, and respond to human commands and interactions in real-world contexts. Such agents can leverage knowledge and explainable AI to enhance visual and voice understanding without the need for large training data, thereby improving user engagement with, and trust in, AI agents.