Publications

Efficient Neural Data Compression for Machine Type Communications via Knowledge Distillation

Published

IEEE Global Communications Conference (GLOBECOM)

Date

2022.12.04

Abstract

The anticipated huge number of devices and large traffic volumes impose new challenges on communication system requirements and design. One of the main requirements of massive machine-type communication (mMTC) is network energy efficiency. Data compression is a widely adopted technique that enables higher energy efficiency, lower latency, and better bandwidth utilization. Unfortunately, current compression techniques are designed mainly for human-type communications (HTC) and therefore treat reconstruction fidelity, rather than the accuracy of inferred decisions, as the sole performance metric. In this work, we propose a novel encoder for data compression in mMTC, termed the Distillation Encoder (DE). Unlike prior work, the proposed DE is designed to achieve high compression ratios while preserving the accuracy of the inferred decisions: it inherits the knowledge of a large teacher model, trained on the raw data, through knowledge distillation. Evaluation of the proposed framework on several public datasets shows a clear advantage over baseline models in both inferred decision accuracy and generalization to unseen data. Moreover, the results show that the DE can also be applied to learn efficient quantizers.
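
To illustrate the general idea described in the abstract (this is a minimal sketch, not the paper's actual implementation), the PyTorch snippet below trains a small student encoder that compresses raw input into a compact code and infers a decision from that code, while matching the softened predictions of a larger teacher trained on the raw data. All module names, layer sizes, the temperature, and the loss weighting are illustrative assumptions.

```python
# Hypothetical sketch of distillation-based encoder training (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistillationEncoder(nn.Module):
    """Small encoder mapping raw device data to a compact code, plus a decision head."""
    def __init__(self, in_dim=128, code_dim=8, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(), nn.Linear(32, code_dim))
        self.head = nn.Sequential(nn.Linear(code_dim, 32), nn.ReLU(), nn.Linear(32, num_classes))

    def forward(self, x):
        code = self.encoder(x)    # compressed representation to be transmitted
        logits = self.head(code)  # decision inferred from the compressed code
        return code, logits

def distillation_loss(student_logits, teacher_logits, labels, temperature=4.0, alpha=0.5):
    """Standard KD objective: soft teacher targets plus hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: the teacher sees the raw data; the student learns from its compressed code.
teacher = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))  # stand-in teacher
student = DistillationEncoder()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(32, 128)          # batch of raw device measurements (assumed shape)
y = torch.randint(0, 10, (32,))   # ground-truth decisions
with torch.no_grad():
    teacher_logits = teacher(x)
_, student_logits = student(x)
loss = distillation_loss(student_logits, teacher_logits, y)
loss.backward()
optimizer.step()
```

The key design choice the sketch tries to convey is that the compression objective is driven by the downstream decision (via the distillation and classification terms) rather than by reconstruction fidelity of the raw signal.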