Analysis and Mitigation of Religion Bias in Indonesian Natural Language Processing Datasets

  • Muhammad Arief Fauzan, Universitas Indonesia
  • Ari Saptawijaya, Universitas Indonesia
Keywords: natural language processing, Indonesian NLP, social bias, debiasing

Abstract

Previous studies have shown that various religious identities are misrepresented in Indonesian media. Misrepresentation of marginalized identities in natural language processing (NLP) datasets has been shown to harm those identities in applications such as automated content moderation, and must therefore be mitigated. In this paper, we analyze, for the first time, several Indonesian NLP datasets to determine whether they contain unwanted bias, and we examine the effects of debiasing on them. We find that two of the three datasets analyzed in this study contain unwanted bias, whose effects propagate to downstream performance in the form of allocation and representation harm. Debiasing at the dataset level, applied in response to the biases discovered, consistently improves the respective dataset. At the downstream level, however, the results vary greatly depending on the dataset and the embedding used to train the model: the same debiasing technique can decrease bias for one combination of dataset and embedding yet increase it for another, particularly in the case of representation harm.
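A standard way to probe the kind of representation harm the abstract describes is counterfactual identity-term substitution, in the spirit of Dixon et al. (2018): the same sentence is scored with different identity terms swapped in, and a large score gap signals unintended bias. The sketch below is illustrative only; the templates, identity terms, and toy scorer are hypothetical placeholders, not the datasets or classifiers used in this paper.

```python
# Hedged sketch of a counterfactual identity-term bias probe.
# All names below are illustrative; a real study would substitute its own
# templates (in Indonesian) and a trained downstream classifier.

TEMPLATES = ["I am {}", "My neighbor is {}"]
IDENTITY_TERMS = ["muslim", "kristen", "hindu", "buddha"]

def toy_toxicity_score(text: str) -> float:
    """Placeholder scorer standing in for a trained classifier.
    Deliberately biased: it penalizes one identity term regardless
    of context, to make the probe's output non-trivial."""
    return 0.9 if "kristen" in text else 0.1

def counterfactual_gap(score_fn, templates, terms):
    """Largest difference in mean score across identity substitutions.
    A large gap means the prediction changes with the identity term
    alone, even though the rest of the sentence is identical."""
    means = []
    for term in terms:
        scores = [score_fn(t.format(term)) for t in templates]
        means.append(sum(scores) / len(scores))
    return max(means) - min(means)

gap = counterfactual_gap(toy_toxicity_score, TEMPLATES, IDENTITY_TERMS)
print(f"counterfactual score gap: {gap:.2f}")  # 0.80 for this toy scorer
```

An unbiased scorer would yield a gap near zero; the probe can be run before and after dataset-level debiasing to quantify whether downstream bias actually decreased.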



Published
2023-08-12
How to Cite
Fauzan, M. A., & Saptawijaya, A. (2023). Analysis and Mitigation of Religion Bias in Indonesian Natural Language Processing Datasets. Jurnal RESTI (Rekayasa Sistem Dan Teknologi Informasi), 7(4), 845–857. https://doi.org/10.29207/resti.v7i4.5035
Section
Information Technology Articles