Free Will in the Digital Society: Challenges to Human Autonomy from Artificial Intelligence
DOI: https://doi.org/10.62381/P253403
Author(s)
Sichen Jin*
Affiliation(s)
School of Political Science, Law and Public Administration, Yan'an University, Yan'an, Shaanxi, China
*Corresponding Author
Abstract
This study examines how artificial intelligence (AI) technologies affect human free will and autonomy in the digital society. Drawing on literature review and theoretical analysis, it systematically surveys theories and findings on free will, human autonomy, and AI across interdisciplinary fields such as philosophy, computer science, and sociology. Through logical deduction and critical analysis, the study investigates the mechanisms by which AI shapes human autonomy in applications such as cognitive assistance, decision support, and behavior guidance. The findings indicate that AI, leveraging its powerful data-processing capabilities and algorithmic recommendation systems, is reshaping how people acquire information, make decisions, and choose courses of action, potentially producing algorithmic constraints on cognition, dependence on external technological support in decision-making, and diminished behavioral autonomy. Furthermore, the opacity and value-laden nature of AI systems threaten the expression of human free will. The study concludes that the challenges AI poses to human autonomy in the digital society are not merely technological but also philosophical, ethical, and governance-related, and that preserving free will and autonomy in the digital age requires a coordinated response spanning technology optimization, institutional refinement, ethical standards, and the strengthening of human agency.
Keywords
Digital Society; Artificial Intelligence; Free Will; Human Autonomy; Ethical Challenges