
Breaking Boundaries: Investigating the Effects of Model Editing on Cross-linguistic Performance

Author(s)

Somnath Banerjee, Avik Halder, Rajarshi Mandal, Sayan Layek, Ian Soboroff, Rima Hazra, Animesh Mukherjee

Abstract

The integration of pretrained language models (PLMs) like BERT and GPT has revolutionized NLP, particularly for English, but it has also created linguistic imbalances. This paper addresses the need for linguistic equity by examining several knowledge editing techniques in multilingual contexts. We evaluate the performance of models such as Mistral, TowerInstruct, OpenHathi, Tamil-Llama, and Kan-Llama across languages including English, German, French, Italian, Spanish, Hindi, Tamil, and Kannada. Our research identifies significant discrepancies in cross-lingual consistency between normal and merged models. We employ strategies such as 'each language for itself' (ELFI) and 'each language for others' (ELFO) to stress-test these models. Our findings demonstrate the potential for LLMs to overcome linguistic barriers, laying the groundwork for future research toward linguistic inclusivity in AI technologies.
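
To make the ELFI and ELFO protocols concrete, the sketch below shows one way such an evaluation loop might be structured: a fact is edited in one language, then the edited model is queried in that same language (ELFI) or in the remaining languages (ELFO) and scored for consistency. This is a minimal, self-contained sketch under stated assumptions; the toy in-memory "model", the function names, and the example data are illustrative, not the authors' implementation.

# Hypothetical sketch of the ELFI / ELFO evaluation protocols described in the
# abstract. The toy "model" is an in-memory lookup standing in for an edited
# LLM; names and data here are illustrative assumptions, not the paper's code.

# A knowledge edit applied in one language: (subject, relation) -> new answer.
EDIT = {"subject": "Eiffel Tower", "relation": "located_in", "answer": "Rome"}

def query_model(language: str, subject: str, relation: str) -> str:
    # Stand-in for prompting an edited model (e.g., Mistral or TowerInstruct
    # in the real study) in a given language and reading off its answer.
    toy_outputs = {
        "en": "Rome",   # the edit took effect in the edit language
        "de": "Rome",   # the edit transferred to German
        "fr": "Paris",  # the edit failed to transfer
        "hi": "Paris",  # the edit failed to transfer
    }
    return toy_outputs.get(language, "Paris")

def elfi_score(edit_language: str) -> float:
    """ELFI ('each language for itself'): edit and query in the same language."""
    pred = query_model(edit_language, EDIT["subject"], EDIT["relation"])
    return float(pred == EDIT["answer"])

def elfo_score(edit_language: str, other_languages: list[str]) -> float:
    """ELFO ('each language for others'): edit in one language, query the rest."""
    hits = sum(
        query_model(lang, EDIT["subject"], EDIT["relation"]) == EDIT["answer"]
        for lang in other_languages
    )
    return hits / len(other_languages)

if __name__ == "__main__":
    print("ELFI (en):", elfi_score("en"))                                # 1.0
    print("ELFO (en -> de/fr/hi):", elfo_score("en", ["de", "fr", "hi"]))  # ~0.33

In this toy run, a high ELFI score with a low ELFO score would signal exactly the kind of cross-lingual inconsistency the paper probes: the edit holds in the language it was applied in but fails to propagate to the others.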

Keywords

large language models, low resource languages

Citation

Banerjee, S., Halder, A., Mandal, R., Layek, S., Soboroff, I., Hazra, R. and Mukherjee, A. (2024), Breaking Boundaries: Investigating the Effects of Model Editing on Cross-linguistic Performance. Under review in the Association for Computational Linguistics (ACL) rolling review process; the venue is not known at submission time. (Accessed April 19, 2025)

Issues

If you have any questions about this publication or are having problems accessing it, please contact reflib@nist.gov.

Created June 17, 2024, Updated April 9, 2025