ESG Investment Analysis and Stock Price Prediction Using FinBERT and LSTM Models
DOI: https://doi.org/10.62381/ACS.DIMI2025.01
Author(s)
Nan Zheng
Affiliation(s)
International Business School, Xi'an Jiaotong-Liverpool University, Suzhou, China
Abstract
This study explores the integration of Environmental, Social, and Governance (ESG) data with deep learning models for stock price prediction. We propose a hybrid framework that combines FinBERT, a pre-trained financial language model, for sentiment analysis of ESG-related texts with Long Short-Term Memory (LSTM) networks for time-series forecasting. Experimental results on Ping An Bank (stock code: 000001.SZ) show that the model achieves a Mean Absolute Error (MAE) of 0.107 and a Root Mean Squared Error (RMSE) of 0.163, indicating stable training and reasonable predictive accuracy. The analysis highlights the potential of ESG sentiment as a predictive factor and underscores challenges such as data scarcity and inconsistent ESG ratings. Future research directions include multi-enterprise validation and model comparison.
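The pipeline described above can be sketched at a high level: ESG sentiment scores (which in the paper would come from running FinBERT over ESG-related news texts) are merged with min-max-scaled prices into sliding windows for an LSTM, and predictions are evaluated with MAE and RMSE. The sketch below uses only the standard library, illustrative made-up data, and a naive persistence baseline in place of a trained LSTM; all variable names and values are assumptions, not the paper's actual data or code.

```python
import math

# Hypothetical daily data: closing prices and FinBERT-style ESG sentiment
# scores in [-1, 1] (e.g. positive-class minus negative-class probability).
prices = [10.2, 10.4, 10.1, 10.6, 10.8, 10.7, 11.0, 11.2, 11.1, 11.4]
sentiment = [0.1, 0.3, -0.2, 0.4, 0.2, 0.0, 0.5, 0.3, -0.1, 0.4]

def min_max_scale(xs):
    """Scale a series to [0, 1], a common preprocessing step for LSTMs."""
    lo, hi = min(xs), max(xs)
    return [(x - lo) / (hi - lo) for x in xs]

def make_windows(features, targets, lookback):
    """Build (lookback-step feature window, next-day target) pairs."""
    X, y = [], []
    for t in range(lookback, len(targets)):
        X.append([features[i] for i in range(t - lookback, t)])
        y.append(targets[t])
    return X, y

scaled = min_max_scale(prices)
features = list(zip(scaled, sentiment))  # price + ESG sentiment per day
X, y = make_windows(features, scaled, lookback=3)

def mae(pred, true):
    return sum(abs(p - t) for p, t in zip(pred, true)) / len(true)

def rmse(pred, true):
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

# Persistence baseline: predict the last scaled price in each window.
# A trained LSTM's outputs would replace `preds` here.
preds = [window[-1][0] for window in X]
print(round(mae(preds, y), 3), round(rmse(preds, y), 3))
```

In practice the baseline would be replaced by an LSTM (e.g. in PyTorch or Keras) fed the two-feature windows, and the reported errors would be computed on a held-out test period rather than the full sample.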
Keywords
ESG Investment; FinBERT; LSTM; Stock Prediction; Natural Language Processing