Lazarus NLP at SemEval-2025 Task 11: Fine-Tuning Large Language Models for Multi-Label Emotion Classification via Sentence-Label Pairing
Published in Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025), ACL 2025
Abstract
Multi-label emotion classification in low-resource languages remains challenging due to limited annotated data and model adaptability. To address this, we fine-tune large language models (LLMs) using a sentence-label pairing approach, optimizing efficiency while improving classification performance. Evaluated on Sundanese, Indonesian, and Javanese, our method outperforms conventional classifier-based fine-tuning and achieves strong zero-shot cross-lingual transfer. Notably, our approach ranks first in the Sundanese subset of SemEval-2025 Task 11 Track A. Our findings demonstrate the effectiveness of LLM fine-tuning for low-resource emotion classification, underscoring the importance of tailoring adaptation strategies to specific language families in multilingual contexts. Our source code is available at: https://github.com/LazarusNLP/SemEval2025-Emotion-Analysis.
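The sentence-label pairing idea can be sketched as follows: each annotated sentence is expanded into one (sentence, label) pair per candidate emotion with a binary target, so the multi-label problem becomes a series of binary decisions for the LLM. This is a minimal illustrative sketch, not the paper's exact implementation; the label set, prompt template, and `to_pairs` helper are all assumptions for illustration.

```python
# Hypothetical sketch of sentence-label pairing for multi-label emotion
# classification. The emotion inventory and prompt wording below are assumed,
# not taken from the paper.
EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise"]  # assumed label set

def to_pairs(sentence, gold_labels):
    """Expand one annotated sentence into per-emotion (prompt, target) pairs."""
    pairs = []
    for label in EMOTIONS:
        # One binary question per candidate emotion.
        prompt = f'Does the sentence express {label}? Sentence: "{sentence}"'
        target = "yes" if label in gold_labels else "no"
        pairs.append((prompt, target))
    return pairs

# Example: a Sundanese sentence ("I am very happy!") annotated with "joy"
# yields one positive pair and four negative pairs.
pairs = to_pairs("Abdi bagja pisan!", {"joy"})
```

At inference time, the same expansion lets the fine-tuned model answer one yes/no question per emotion, and the predicted label set is simply the emotions answered "yes".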
BibTeX Citation
@inproceedings{wongso2025lazarus,
title={Lazarus NLP at SemEval-2025 Task 11: Fine-Tuning Large Language Models for Multi-Label Emotion Classification via Sentence-Label Pairing},
author={Wongso, Wilson and Setiawan, David and Joyoadikusumo, Ananto and Limcorn, Steven},
booktitle={Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)},
pages={763--772},
year={2025}
}
Recommended citation: Wongso, W., Setiawan, D., Joyoadikusumo, A., & Limcorn, S. (2025, July). Lazarus NLP at SemEval-2025 Task 11: Fine-Tuning Large Language Models for Multi-Label Emotion Classification via Sentence-Label Pairing. In Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025) (pp. 763-772).
Download Paper