Sitemap
A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
Pages
Posts
NJU ICS2024 PA Assignment Notes (Part 4)
Published:
NJU ICS2024 PA assignment report
NJU ICS2024 PA Assignment Notes (Part 3)
Published:
NJU ICS2024 PA Assignment Notes (Part 3)
NJU ICS2024 PA Assignment Notes (Part 2)
Published:
NJU ICS2024 PA Assignment Notes (Part 2)
NJU ICS2024 PA Assignment Notes (Part 1)
Published:
NJU ICS2024 PA Assignment Notes (Part 1)
portfolio
Portfolio item number 1
Published:
Short description of portfolio item number 1
Portfolio item number 2
Published:
Short description of portfolio item number 2
publications
Improved Pseudo Data for Machine Translation Quality Estimation with Constrained Beam Search
Published in EMNLP Long Paper, 2023
Machine translation (MT) quality estimation (QE) is a crucial task to estimate the quality of MT outputs when reference translations are unavailable. Many studies focus on generating pseudo data using large parallel corpora and achieve remarkable success in the supervised setting. However, pseudo data solutions are less satisfying in unsupervised scenarios because the pseudo labels are inaccurate or the pseudo translations differ from the real ones. To address these problems, we propose to generate pseudo data using the MT model with constrained beam search (CBSQE). CBSQE preserves the reference parts with high MT probabilities as correct translations, while leaving the rest as incorrect parts to be generated by the MT model. Therefore, CBSQE can reduce the false negative labels caused by synonyms. Overall, beam search will prefer a more realistic hypothesis with a higher MT generation likelihood. Extensive experiments demonstrate that CBSQE outperforms strong baselines in both supervised and unsupervised settings. Analyses further show the superiority of CBSQE.
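To make the labeling idea concrete, here is a minimal sketch, not the paper's actual constrained-beam-search procedure: it tags a pseudo translation against its reference so that reference tokens preserved in the pseudo translation become OK labels and generated tokens become BAD. The alignment via difflib and the function name are illustrative assumptions.

```python
import difflib

def tag_pseudo_translation(pseudo_tokens, reference_tokens):
    """Assign OK/BAD tags to a pseudo translation by aligning it to the reference.

    Tokens carried over from the reference are treated as correct (OK);
    tokens introduced during generation are treated as errors (BAD).
    This is a simplified illustration, not the CBSQE algorithm itself.
    """
    tags = ["BAD"] * len(pseudo_tokens)
    matcher = difflib.SequenceMatcher(a=reference_tokens, b=pseudo_tokens, autojunk=False)
    for block in matcher.get_matching_blocks():
        for i in range(block.b, block.b + block.size):
            tags[i] = "OK"
    return tags

if __name__ == "__main__":
    reference = "the cat sat on the mat".split()
    pseudo = "the cat slept on the mat".split()
    print(list(zip(pseudo, tag_pseudo_translation(pseudo, reference))))
    # [('the', 'OK'), ('cat', 'OK'), ('slept', 'BAD'), ('on', 'OK'), ('the', 'OK'), ('mat', 'OK')]
```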
Recommended citation: Xiang Geng, Yu Zhang, Zhejian Lai, Shuaijie She, Wei Zou, Shimin Tao, Hao Yang, Jiajun Chen, and Shujian Huang. 2023. Improved Pseudo Data for Machine Translation Quality Estimation with Constrained Beam Search. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 12434–12447, Singapore. Association for Computational Linguistics.
Download Paper | Download Bibtex
Unify Word-level and Span-level Tasks: NJUNLP’s Participation for the WMT2023 Quality Estimation Shared Task
Published in WMT2023 Shared Task, 2023
We introduce the submissions of the NJUNLP team to the WMT 2023 Quality Estimation (QE) shared task. Our team submitted predictions for the English-German language pair on both sub-tasks: (i) sentence- and word-level quality prediction; and (ii) fine-grained error span detection. This year, we further explore pseudo data methods for QE based on the NJUQE framework. We generate pseudo MQM data using parallel data from the WMT translation task. We pre-train the XLMR large model on pseudo QE data, then fine-tune it on real QE data. At both stages, we jointly learn sentence-level scores and word-level tags. Empirically, we conduct experiments to find the key hyper-parameters that improve performance. Technically, we propose a simple method that converts the word-level outputs to fine-grained error span results. Overall, our models achieved the best results in English-German for both the word-level and fine-grained error span detection sub-tasks by a considerable margin.
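As an illustration of that conversion step, the sketch below merges consecutive word-level BAD tags into contiguous error spans. The function name, tag scheme, and token-index span format are assumptions for illustration, not the submission's actual code.

```python
def tags_to_error_spans(tags):
    """Merge consecutive word-level BAD tags into (start, end) token-index spans."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "BAD" and start is None:
            start = i                      # open a new error span
        elif tag != "BAD" and start is not None:
            spans.append((start, i - 1))   # close the span at the last BAD token
            start = None
    if start is not None:
        spans.append((start, len(tags) - 1))
    return spans

if __name__ == "__main__":
    tags = ["OK", "OK", "BAD", "BAD", "OK", "BAD"]
    print(tags_to_error_spans(tags))  # [(2, 3), (5, 5)]
```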
Recommended citation: Xiang Geng, Zhejian Lai, Yu Zhang, Shimin Tao, Hao Yang, Jiajun Chen, and Shujian Huang. 2023. Unify Word-level and Span-level Tasks: NJUNLP’s Participation for the WMT2023 Quality Estimation Shared Task. In Proceedings of the Eighth Conference on Machine Translation, pages 829–834, Singapore. Association for Computational Linguistics.
Download Paper | Download Bibtex
Alleviating Distribution Shift in Synthetic Data for Machine Translation Quality Estimation
Published in ACL Long Paper, 2025
Quality Estimation (QE) models evaluate the quality of machine translations without reference translations, serving as reward models for the translation task. Due to data scarcity, synthetic data generation has emerged as a promising solution. However, synthetic QE data often suffers from distribution shift, which can manifest as discrepancies between pseudo and real translations, or in pseudo labels that do not align with human preferences. To tackle this issue, we introduce DCSQE, a novel framework for alleviating distribution shift in synthetic QE data. To reduce the difference between pseudo and real translations, we employ the constrained beam search algorithm and enhance translation diversity through the use of distinct generation models. DCSQE uses references (i.e., translation supervision signals) to guide both the generation and annotation processes, enhancing the quality of token-level labels. DCSQE further identifies the shortest phrase covering consecutive error tokens, mimicking human annotation behavior, to assign the final phrase-level labels. Notably, we underscore that the translation model cannot accurately annotate its own translations. Extensive experiments demonstrate that DCSQE outperforms SOTA baselines like CometKiwi in both supervised and unsupervised settings. Further analysis offers insights into synthetic data generation that could benefit reward models for other tasks.
Recommended citation: Xiang Geng, Zhejian Lai, Jiajun Chen, Hao Yang, and Shujian Huang. 2025. Alleviating Distribution Shift in Synthetic Data for Machine Translation Quality Estimation. arXiv preprint arXiv:2502.19941.
Download Paper | Download Bibtex
Why Not Transform Chat Large Language Models to Non-English?
Published in Preprint, 2025
Chat large language models (LLMs), fine-tuned from pre-trained models and optimized for alignment with human preferences, excel in following diverse instructions while maintaining consistency with human values. In this paper, we propose the TransLLM framework for transforming chat LLMs from English to other languages using publicly available resources. TransLLM employs the translation chain-of-thought (TCoT) technique, which transfers chat ability through inference-time computation. Specifically, for a query in the target language, TCoT guides the LLM to first generate an English query and response as intermediate transfer steps before producing the final response in the target language. We underscore the necessity of improving the performance of each step in TCoT. However, improvement through continual pre-training (CPT) induces catastrophic forgetting of the original chat ability. To address this issue, we introduce recovery knowledge distillation (RKD), which utilizes data generated by the original chat LLM to recover its chat ability. Experimental results indicate that TransLLM outperforms baseline models across various languages and LLMs while demonstrating adaptability in multilingual settings and generalizability beyond its training tasks. Our analysis elucidates the mechanism by which RKD, in conjunction with LoRA, mitigates catastrophic forgetting.
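A minimal sketch of the TCoT inference flow described above, assuming only a generic chat(prompt) callable; the prompt wording is illustrative, not the paper's exact templates.

```python
def translation_cot(query_tgt, target_language, chat):
    """Translation chain-of-thought (TCoT), sketched at inference time.

    Given a query in the target language, first produce an English query and an
    English response as intermediate steps, then produce the final response in
    the target language. `chat` is any callable mapping a prompt string to a
    model response string (e.g., a wrapper around a chat LLM API).
    """
    en_query = chat(
        f"Translate the following {target_language} question into English:\n{query_tgt}"
    )
    en_answer = chat(f"Answer the following question in English:\n{en_query}")
    tgt_answer = chat(
        f"Translate the following English answer into {target_language}, "
        f"staying faithful to its content:\n{en_answer}"
    )
    return tgt_answer
```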
Recommended citation: Xiang Geng, Ming Zhu, Jiahuan Li, Zhejian Lai, Wei Zou, Shuaijie She, Jiaxin Guo, Xiaofeng Zhao, Yinglu Li, Yuang Li, Chang Su, Yanqing Zhao, Xinglin Lyu, Min Zhang, Jiajun Chen, Hao Yang, and Shujian Huang. 2024. Why Not Transform Chat Large Language Models to Non-English? arXiv preprint arXiv:2405.13923.
Download Paper | Download Bibtex
How does Alignment Enhance LLMs’ Multilingual Capabilities? A Language Neurons Perspective
Published in Preprint, 2025
Multilingual alignment is an effective and representative paradigm for enhancing LLMs’ multilingual capabilities, transferring capabilities from high-resource languages to low-resource languages. Meanwhile, some research on language-specific neurons reveals that such neurons are selectively activated in LLMs when processing different languages. This provides a new perspective for analyzing and understanding LLMs’ mechanisms more specifically in multilingual scenarios. In this work, we propose a new finer-grained neuron identification algorithm, which detects language neurons (including language-specific neurons and language-related neurons) and language-agnostic neurons. Furthermore, based on the distributional characteristics of different types of neurons, we divide the LLMs’ internal process for multilingual inference into four parts: (1) multilingual understanding, (2) shared semantic space reasoning, (3) multilingual output space transformation, and (4) vocabulary space outputting. Additionally, we systematically analyze the models before and after alignment with a focus on different types of neurons. We also analyze the phenomenon of “Spontaneous Multilingual Alignment”. Overall, our work conducts a comprehensive investigation based on different types of neurons, providing empirical results and valuable insights for better understanding multilingual alignment and multilingual capabilities of LLMs.
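Purely as a toy illustration of the kind of thresholding such a neuron analysis might use (the paper's finer-grained identification algorithm differs), the sketch below classifies neurons by how many languages frequently activate them. The threshold, matrix layout, and function name are assumptions.

```python
import numpy as np

def classify_neurons(act_freq, high=0.9):
    """Classify neurons from an activation-frequency matrix of shape (num_languages, num_neurons).

    language-specific: frequently active for exactly one language
    language-related:  frequently active for several (but not all) languages
    language-agnostic: frequently active for every language
    """
    active = act_freq >= high                  # boolean (L, N): neuron fires often for this language
    langs_per_neuron = active.sum(axis=0)      # how many languages frequently activate each neuron
    num_langs = act_freq.shape[0]
    specific = np.where(langs_per_neuron == 1)[0]
    related = np.where((langs_per_neuron > 1) & (langs_per_neuron < num_langs))[0]
    agnostic = np.where(langs_per_neuron == num_langs)[0]
    return specific, related, agnostic

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_freq = rng.random((4, 1000))           # toy activation frequencies: 4 languages, 1000 neurons
    spec, rel, agno = classify_neurons(toy_freq)
    print(len(spec), len(rel), len(agno))
```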
Recommended citation: Shimao Zhang, Zhejian Lai, Xiang Liu, Shuaijie She, Xiao Liu, Yeyun Gong, Shujian Huang and Jiajun Chen. 2025. How does Alignment Enhance LLMs' Multilingual Capabilities? A Language Neurons Perspective. arXiv preprint arXiv:2502.21505.
Download Paper | Download Bibtex
talks
Talk 1 on Relevant Topic in Your Field
Published:
This is a description of your talk, which is a markdown file that can be all markdown-ified like any other post. Yay markdown!
Conference Proceeding talk 3 on Relevant Topic in Your Field
Published:
This is a description of your conference proceedings talk, note the different field in type. You can put anything in this field.
teaching
Teaching experience 1
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Teaching experience 2
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.