Abstract
This article examines how the democratization of Artificial Intelligence (AI) has changed the nature of crime. Between 2023 and 2025, a perceptible shift occurred from simple attacks to automated, adaptive, and personalized ones. The research proposes a classification with three components: AI as a tool (deepfakes, automated phishing) (Labs, 2025, p. 1), AI as a target of attack (compromise through prompt injection) (NIST, 2025, p. 2), and AI as an autonomous agent within advanced botnet networks.
Statistics reveal a 238% increase in AI-driven cyber incidents during 2023 (IJFMR, 2024, p. 2), with global losses exceeding $8.5 million. Confirmed examples, such as the use of "nude spoofing" applications in Spain or voice-cloning scams in 2025, show that AI has lowered the technical barrier for less experienced criminals while amplifying the effectiveness of international criminal groups. Europol's IOCTA 2025 report (Europol, 2025, p. 1) notes that AI not only supports fraud but lies at the heart of a new era of "artificial identities" that challenges existing legal frameworks, such as the 2024 UN Cybercrime Treaty.