Improving text classification via a soft dynamical label strategy

Jingjing Wang, Haoran Xie, Fu Lee Wang, Lap Kei Lee

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)


Labels play a central role in text classification tasks. However, most studies suffer from a lossy label encoding problem, in which each label is represented by a meaningless and mutually independent one-hot vector. This paper proposes a novel strategy that dynamically generates a soft pseudo label from the model's prediction at each training step. This history-based soft pseudo label is taken as the target for optimizing the parameters by minimizing the distance between the target and the prediction. In addition, we augment the training data with Mix-up, a widely used method, to prevent overfitting on small datasets. Extensive experimental results demonstrate that the proposed dynamical soft label strategy significantly improves the performance of several widely used deep learning classification models on binary and multi-class text classification tasks. Not only is our simple and efficient strategy much easier to implement and train, it also exhibits substantial improvements (up to a 2.54% relative improvement on the FDCNews dataset with an LSTM encoder) over Label Confusion Learning (LCM), a state-of-the-art label smoothing model, under the same experimental setting. The experimental results also demonstrate that Mix-up improves our method's performance on smaller datasets but introduces excess noise on larger datasets, which diminishes the model's performance.
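The abstract does not give the paper's exact update rule, so the following is only a minimal sketch of the general idea, assuming the soft pseudo label is a running blend of the one-hot label with the model's past predictions. The function names `update_soft_label` and `mixup` and the smoothing parameter `alpha` are illustrative choices, not taken from the paper.

```python
import numpy as np

def one_hot(label, num_classes):
    """Standard lossy one-hot encoding that the paper argues against."""
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

def update_soft_label(soft_label, prediction, alpha=0.9):
    """Hypothetical history-based update: blend the current soft target
    with the model's latest predicted distribution, then renormalize so
    the target stays a valid probability distribution."""
    new_label = alpha * soft_label + (1.0 - alpha) * prediction
    return new_label / new_label.sum()

def mixup(x1, x2, y1, y2, lam):
    """Mix-up augmentation: convex combination of two examples
    and of their (soft) label distributions."""
    return lam * x1 + (1.0 - lam) * x2, lam * y1 + (1.0 - lam) * y2
```

In training, the blended target returned by `update_soft_label` would replace the one-hot vector when computing the distance (e.g., a KL divergence or cross-entropy) between target and prediction.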

Original language: English
Pages (from-to): 2395-2405
Number of pages: 11
Journal: International Journal of Machine Learning and Cybernetics
Issue number: 7
Publication status: Published - Jul 2023


  • Label distribution learning
  • Mix-up
  • Text classification


