SRC-B Achieves Top Rankings in the Semantic Evaluation (SemEval) International Competition

About SemEval

SemEval (the International Workshop on Semantic Evaluation) is an ongoing series of evaluations of computational semantics systems organized by the Association for Computational Linguistics (ACL). Its mission is to advance the current state of the art in semantic analysis and to help create high-quality annotated datasets for a range of increasingly challenging problems in natural language semantics.

The 17th edition of SemEval (2023) features 12 tasks on various topics, including idiomaticity detection and embedding, sarcasm detection, multilingual news similarity, and linking mathematical symbols to their descriptions.

SRC-B has long played an essential role in the SemEval competition. This year, SRC-B focused on two tasks: “Task 1: V-WSD: Visual Word Sense Disambiguation” and “Task 2: Multilingual Complex Named Entity Recognition (MultiCoNER 2).” In Task 1, SRC-B placed 1st in the English track, and in Task 2, SRC-B placed 2nd in the English track. Both teams are from SRC-B’s Language Intelligence Team.

Task 1: Visual Word Sense Disambiguation (Visual-WSD)

Task 1 is a multimodal word sense disambiguation task: given a target word, a short context phrase, and a set of candidate images, the system must select the image that matches the intended sense of the word. Because the word is polysemous and the context is brief, the task challenges a system’s information extraction and text–image alignment capabilities.

With 53 teams participating in the Visual Word Sense Disambiguation (Visual-WSD) task, the Samsung R&D Institute China-Beijing team submitted a system that combines the sense inventory of semantic networks with prompting that draws on the latent knowledge of large pre-trained vision–language models (VLMs) to disambiguate the target word and match it across modalities. Because the task involves ambiguity and provides only limited context, the system collects additional context information for disambiguation and uses it to bridge the gap between image and text. The system earned 1st place on the English track and 2nd place across all languages (English, Italian, and Farsi) on the leaderboards.
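To make the matching step concrete, the sketch below shows how a pre-trained VLM can rank candidate images against a prompt enriched with extra sense context. It is a minimal illustration, assuming a generic CLIP checkpoint from the Hugging Face transformers library and a simple gloss-concatenation prompt; the model choice, prompt template, and helper names are illustrative assumptions, not the team's exact pipeline.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Illustrative only: a generic CLIP model, not necessarily the VLM used by the team.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def rank_candidate_images(target_word, context_phrase, gloss, image_paths):
    """Rank candidate images against a prompt enriched with sense-inventory context.

    `gloss` stands in for the extra disambiguation context (e.g., a gloss retrieved
    from a semantic network for the intended sense); here it is simply concatenated.
    """
    prompt = f"A photo of {target_word}, {context_phrase}. {gloss}"
    images = [Image.open(p).convert("RGB") for p in image_paths]
    inputs = processor(text=[prompt], images=images, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image holds the similarity of each candidate image to the prompt.
    scores = outputs.logits_per_image.squeeze(-1)
    order = torch.argsort(scores, descending=True)
    return [image_paths[i] for i in order.tolist()]

# Example: disambiguating "andromeda" in the phrase "andromeda tree"
# ranked = rank_candidate_images("andromeda", "andromeda tree",
#                                "a shrub of the heath family", ["img0.jpg", "img1.jpg"])
```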

SRC-B members for SemEval-2023 Task 1

Task 2: Multilingual Complex Named Entity Recognition (MultiCoNER 2)

Named Entity Recognition (NER) is a basic NLP task that aims to extract named entities for downstream applications. State-of-the-art models have achieved excellent performance on coarse-grained NER (such as ‘Person,’ ‘Organization,’ and ‘Location’-type entities). However, complex named entities, such as the titles of creative works, are not simple nouns and remain challenging for current NER systems. One main topic of this task is extracting such complex entities. In addition, the task requires participants to recognize fine-grained entity types.

The Language Intelligence Team adopted the RoBERTa-large model as the backbone, which has shown a strong ability to represent the semantics of text. Because the corpus contains large amounts of short text, the context around potential entities is often insufficient, so a knowledge-enhancement framework was designed to alleviate this problem. First, a coarse-grained NER model was trained, achieving excellent performance, and was used to extract coarse-grained entities from the corpus. Then, extra knowledge was retrieved by querying Wikipedia for those coarse-grained entities. Through this method, the context of short texts was greatly enriched.
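The snippet below is a simplified sketch of this knowledge-enhancement step, assuming the public Wikipedia REST summary endpoint as the knowledge source; the endpoint, the "[SEP]" concatenation format, and the truncation length are illustrative assumptions rather than the team's exact retrieval pipeline.

```python
import requests

WIKI_SUMMARY_URL = "https://en.wikipedia.org/api/rest_v1/page/summary/{title}"

def enrich_with_wikipedia(text, coarse_entities, max_chars=300):
    """Append Wikipedia summaries for coarse-grained entities to a short input text.

    `coarse_entities` is assumed to be the surface strings produced by the
    first-stage coarse-grained NER model (e.g., ["Leonard Cohen", "Columbia Records"]).
    """
    snippets = []
    for entity in coarse_entities:
        resp = requests.get(WIKI_SUMMARY_URL.format(title=entity.replace(" ", "_")),
                            timeout=5)
        if resp.ok:
            summary = resp.json().get("extract", "")
            if summary:
                snippets.append(summary[:max_chars])
    # The enriched text gives the fine-grained NER model extra context to work with.
    return (text + " [SEP] " + " ".join(snippets)) if snippets else text
```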

In addition, to combat the performance degradation caused by the long-tail data distribution, a deliberately designed dice loss, called an adjustable loss, was proposed and has proven to be noise-robust. Label smoothing was also applied to reduce overfitting on fine-grained labels that belong to the same coarse-grained label.
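The exact formulation of the team's adjustable loss is not given here, so the sketch below shows a generic self-adjusting dice loss combined with label-smoothed cross-entropy for token classification; the `alpha`, `gamma`, `smoothing`, and `dice_weight` parameters are illustrative assumptions, not the team's published hyperparameters.

```python
import torch
import torch.nn.functional as F

def adjustable_dice_loss(logits, labels, alpha=1.0, gamma=1.0):
    """A generic self-adjusting dice loss for token classification.

    `alpha` down-weights easy tokens and `gamma` smooths the ratio; both are
    illustrative knobs. logits: (num_tokens, num_labels), labels: (num_tokens,).
    """
    probs = torch.softmax(logits, dim=-1)
    p_t = probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)   # prob of the gold label
    weight = (1.0 - p_t) ** alpha                               # focus on hard tokens
    dice = 1.0 - (2.0 * weight * p_t + gamma) / (weight * p_t + 1.0 + gamma)
    return dice.mean()

def combined_loss(logits, labels, smoothing=0.1, dice_weight=0.5):
    """Mix label-smoothed cross-entropy with the dice term (the mix weight is an assumption)."""
    ce = F.cross_entropy(logits, labels, label_smoothing=smoothing)
    return (1.0 - dice_weight) * ce + dice_weight * adjustable_dice_loss(logits, labels)
```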

SRC-B members for SemEval-2023 Task 2