Artificial Intelligence and Criminal Liability
Abstract
Technology and internet users stand to gain from the combination of AI and cybersecurity. AI can be used to detect cyberattacks and to build more effective defences. Machine learning algorithms, for instance, can be trained to recognise unusual behaviour on computer networks or suspicious traffic patterns, enabling prompt detection of attacks and a faster, more effective response to security incidents. But what happens when artificial intelligence goes rogue, buys drugs on the darknet, or commits other criminal acts? Can it be punished? Criminal accountability is no longer reserved for humans alone: legal persons are also subject to criminal liability, although for them the primary sanctions are less effective than the complementary ones. In Dutch law, the use of AI in this capacity has been accommodated by amending the criminal provisions of the legislation. Still, the concept of the victim presupposes a human being, because only humans enjoy legal protection, from the exercise of their rights as beneficiaries of social values to the safeguards afforded by criminal law. By analogy with the incrimination of legal persons, it would be possible to incriminate AI that engages in criminal activity.
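As a minimal, purely illustrative sketch of the kind of machine-learning anomaly detection the abstract alludes to (not drawn from the paper itself), the snippet below trains an unsupervised IsolationForest on simulated network-flow records and flags outliers; the library choice (scikit-learn), the feature names, and the simulated data are assumptions introduced for illustration only.

```python
# Illustrative sketch only: unsupervised detection of unusual network flows.
# The features (bytes sent, packet count, duration) and the data are
# hypothetical; they are not taken from the original paper.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated "normal" traffic: [bytes_sent, packet_count, duration_s]
normal_flows = rng.normal(loc=[5_000, 40, 2.0],
                          scale=[1_500, 10, 0.5],
                          size=(500, 3))

# A few simulated anomalies (e.g. exfiltration-like bursts)
anomalous_flows = rng.normal(loc=[250_000, 900, 30.0],
                             scale=[50_000, 100, 5.0],
                             size=(5, 3))

# Fit on traffic assumed to be mostly benign
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_flows)

# predict() returns +1 for inliers and -1 for outliers
print("Flags for suspicious flows:", model.predict(anomalous_flows))
```

In practice such a detector would be trained on real flow telemetry and combined with rule-based controls, but the sketch captures the basic idea of learning what "normal" traffic looks like and flagging deviations for human review.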

This work is licensed under a Creative Commons Attribution 4.0 International License.
The author fully assumes responsibility for the originality of the content, and the holograph signature makes him liable in the event of litigation.