The field of AI security and safety is expected to mature significantly in 2025 as real-world use cases for generative AI emerge, addressing AI as a target, a tool, and a threat.
Art Gilliland notes that generative AI will enhance adaptive authentication, making it smarter and more proactive in containing security breaches.
Generative AI tools and techniques, such as deepfakes and targeted social engineering, are expected to move down-market and become accessible to ordinary cyber criminals in 2025.
Patrick Joyce, Global Resident CISO at Proofpoint, observes that CISOs increasingly scrutinize generative AI tools as third-party risks, demanding disclosure of how these tools are built and secured, much as food packaging labels disclose ingredients.
Steve Povolny, senior director of Security Research & Competitive Intelligence and co-founder of TEN18 by Exabeam, predicts that generative AI models trained to create malicious code will emerge in underground markets, allowing individuals without coding skills to deploy ransomware, spyware, and other malware.
According to a recent Gartner survey, only 13% of organizations have implemented effective data leakage tools for generative AI.
AI developers have an increased responsibility to demonstrate that the data used to train and refine model predictions is clean, timely, and has provable lineage, especially as generative AI is applied to more tasks with higher degrees of autonomy.
Hackers are increasingly using generative AI to impersonate police officers or C-suite executives at Fortune 500 companies in order to harvest login credentials and personally identifiable information (PII).
Benjamin Fabre, CEO of DataDome, asserts that basic bot attacks will persist despite the increasing sophistication and scalability of bots driven by generative AI tools.
Paul Walker, a field strategist at Omada, claims that Identity Governance and Administration (IGA) will shift its focus from pure prevention toward strengthening operational security and security-hygiene posture, driven by the adoption of user-friendly interaction methods such as generative AI-powered natural language interfaces.
Eduardo Mota notes that generative AI (GenAI) enables bad actors to generate realistic artifacts to deceive employees, and that organizations must establish a security perimeter for GenAI to prevent unauthorized data access.
Hyperautomation built on generative AI can monitor and parse data from under-protected systems, auto-remediating or escalating threats before they take root.
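The auto-remediate-or-escalate flow described above can be sketched as a simple triage rule over parsed alerts; the severity threshold, alert fields, and actions here are illustrative assumptions, not a production playbook:

```python
# Toy triage sketch: given a parsed alert, either escalate to a human
# analyst, apply an automated fix, or keep monitoring. The threshold (8),
# the "stale_credential" category, and the action names are hypothetical.
def triage(alert: dict) -> str:
    """Return the action to take for a parsed alert."""
    if alert["severity"] >= 8:
        return "escalate"          # too risky to remediate automatically
    if alert["kind"] == "stale_credential":
        return "auto-remediate"    # e.g. force a credential rotation
    return "monitor"
```

In a real hyperautomation pipeline, the classification step would typically be informed by a model rather than hard-coded rules.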
As generative AI advances, security prediction models will likely integrate AI more deeply, helping humans make faster, better-informed security decisions rather than leading to an AI takeover.
Jim Broome, CTO and president of DirectDefense, states that generative AI and deepfakes are making phishing attacks more sophisticated by eliminating traditional indicators like grammatical errors, rendering standard employee training methods obsolete.
Tyler Swinehart, director of Global IT & Security at IRONSCALES, predicts that in 2025 there will be a significant increase in the creation of fabricated experts and audiences for sale, facilitated by generative AI and deepfake technologies.
Generative AI empowers both attackers and defenders, with attackers using it to generate complex, targeted phishing, deepfakes, and adaptive malware.
In 2025, companies must improve their security postures to address new risks introduced by AI, such as prompt injection attacks where malicious inputs are disguised as legitimate user prompts in generative AI systems.
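The prompt injection risk mentioned above can be illustrated with a minimal, intentionally naive input filter; real defenses layer input screening with output filtering and model-side guardrails, and the `screen_prompt` helper and its deny-list patterns are hypothetical:

```python
import re

# Naive deny-list of phrases commonly seen in prompt-injection attempts.
# Illustrative only: a determined attacker can rephrase around any
# static pattern list, which is why layered defenses are needed.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"reveal (your|the) (system )?prompt",
]

def screen_prompt(user_input: str) -> bool:
    """Return True if the input looks like a prompt-injection attempt."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def build_prompt(user_input: str) -> str:
    """Only concatenate user text into the model prompt after screening."""
    if screen_prompt(user_input):
        raise ValueError("possible prompt injection detected")
    return f"System: answer helpfully.\nUser: {user_input}"
```

The key design point is that user input is treated as untrusted data, not as instructions, before it ever reaches the generative AI system.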
Managed Service Providers (MSPs) will become critical partners in building robust security frameworks and third-party oversight as organizations increasingly depend on AI, GenAI, and automation.
A recent Gartner survey indicates that 68% of executives believe the benefits of AI outweigh the risks, yet only 14% are incorporating generative AI usage guidance into their security policies.
Sergey Medved, VP of product management at Quest Software, predicts that Microsoft Copilot will be a highly innovative product in 2025, driving generative AI adoption by leveraging data across Microsoft 365.
Transnational criminal groups are expected to adopt modern AI tools, such as generative AI and deepfakes, to evolve their business operations.
GenAI tools such as Copilot and ChatGPT have driven significant growth in niche security tools designed to control and monitor GenAI usage.
The primary risk associated with using GenAI tools is the lack of a robust data protection program within organizations.
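One building block of the data protection program mentioned above is an outbound leakage check that redacts sensitive data before a prompt leaves the organization for an external GenAI tool. The sketch below is a minimal assumption-laden example; the two patterns shown are far from a complete PII catalog:

```python
import re

# Minimal outbound data-leakage check for GenAI prompts: redact obvious
# PII patterns before text is sent to an external tool. The pattern set
# and redaction policy are illustrative, not a full DLP program.
PII_PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",
}

def redact(prompt: str) -> str:
    """Replace each matched PII value with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        prompt = re.sub(pattern, f"[{label} REDACTED]", prompt)
    return prompt
```

Commercial DLP tooling adds classification, policy enforcement, and audit logging on top of this basic redaction step.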
Generative AI's search and analysis capabilities will help threat actors discover zero-day vulnerabilities and unpatched CVEs, increasing the workload for security teams.
Generative AI is expected to lead to a rise in traditional fraud schemes, specifically impersonation tactics, as the technology becomes easily accessible to hackers.
Generative AI will facilitate synthetic identity fraud, where cybercriminals use AI to create realistic digital identities that challenge traditional verification methods.
The proliferation of generative AI and the associated hype will increase the security risks posed by non-human identities in 2025.
The 2024 Bitwarden Cybersecurity Pulse survey found that 89% of tech leaders are concerned about existing and emerging social engineering tactics enhanced by generative AI.
Malicious actors will increasingly utilize generative AI to create morphing malware that adapts and mutates to evade traditional detection methods.
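Why morphing malware defeats traditional detection can be shown in a few lines: signature-based defenses match exact fingerprints, so two functionally identical payloads that differ by a single byte hash to different signatures. The payload strings below are placeholders, not real malware:

```python
import hashlib

# Signature-based detection relies on exact fingerprints of known samples.
def signature(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

# Two functionally identical (hypothetical) payloads, one byte apart:
original = b"run_payload(); // variant 1"
mutated = b"run_payload(); // variant 2"

# A deny-list containing only the original signature misses the mutation.
known_bad = {signature(original)}
detected = signature(mutated) in known_bad  # False: the variant slips through
```

This is why defenders pair signatures with behavioral and anomaly-based detection, which morphing alone does not evade.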
Alex Holland, principal threat researcher at HP Security Lab, predicts that phishing click-through rates may rise as Generative AI helps attackers craft convincing, multi-lingual, and targeted lures.
Generative artificial intelligence is lowering the barriers for unsophisticated attackers while amplifying the capabilities of advanced threat actors, forcing security teams to rethink traditional defenses.
Model security, specifically data security, data lifecycle management, and data telemetry, will be a top priority in 2025 as commercial off-the-shelf (COTS) foundation models drive the adoption of generative AI across industries.
Generative AI accelerates attackers' understanding of people, processes, and technologies, facilitating sophisticated attacks such as phishing, deepfakes, and vishing.
Alex Holland, principal threat researcher at HP Security Lab, predicts that cybercriminals will adapt Generative AI (GenAI) use cases—such as creation, automation, and virtual assistance—to support cybercrime activities like writing scripts, uncovering vulnerabilities, analyzing data, and assisting with coding tasks.
Since the release of commercial generative artificial intelligence tools, phishing attacks have surged by 1,265 percent.
Casey Ellis observes that attribution of cyberattacks is becoming more challenging due to evolving global alliances, the acceleration of time-to-effectiveness through generative AI and technique-sharing, and a broadening spectrum of plausible attributions.
In 2025, threat actors will weaponize generative AI to orchestrate large-scale cyber attacks, including autonomously identifying vulnerabilities, crafting deceptive phishing campaigns, and bypassing detection systems.
Cloud-native security solutions leverage Generative AI to automate threat detection and response across distributed environments, enabling real-time analysis and predictive defense.
TK Keanini, chief technology officer at DNSFilter, predicts that by 2025, generative AI will be integrated into nearly every business and department, which will boost productivity but also introduce new security risks.
Retrieval-Augmented Generation (RAG) is a technique for enhancing the accuracy and reliability of generative AI models by fetching facts from external sources, which allows users to verify claims and build trust.
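The RAG pattern described above can be sketched in miniature: retrieve the most relevant document for a query, then ground the answer in it and cite the source so users can verify the claim. This toy version uses word overlap for retrieval and a stub generator; real systems use vector embeddings and an LLM, and the two documents are invented examples:

```python
import re

# Toy corpus standing in for an external knowledge source.
DOCS = {
    "doc1": "RAG fetches facts from external sources to ground model answers.",
    "doc2": "Phishing click-through rates may rise with AI-crafted lures.",
}

def _words(text: str) -> set:
    """Tokenize to lowercase words, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str) -> str:
    """Return the id of the document sharing the most words with the query."""
    q = _words(query)
    return max(DOCS, key=lambda d: len(q & _words(DOCS[d])))

def answer(query: str) -> str:
    """Ground the response in the retrieved document and cite it."""
    doc_id = retrieve(query)
    return f"{DOCS[doc_id]} [source: {doc_id}]"
```

The cited source id is what lets users check the claim against the original document, which is the trust-building property the technique is named for.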
Organizations face significant risks from the potential exploitation of internal knowledge as they increasingly integrate generative AI into their operations.
Enterprises can build applications around commercial-off-the-shelf (COTS) AI models, which reduces the need to acquire and maintain specialized hardware and allows generative AI companies to amortize training costs across multiple users.
Identity Governance and Administration (IGA) products are expected to evolve into proactive security tools by integrating Generative AI to provide real-time recommendations and insights for IT security operations.
Alex Holland, principal threat researcher at HP Security Lab, states that Generative AI will lower the barriers to entry for cybercriminals, enabling novices to execute attacks without coding knowledge.
Steve Wilson, chief product officer at Exabeam, predicts that by 2025, cyber attackers will use generative AI with improved reasoning abilities to execute realistic phishing scams, including deepfake voices and video avatars, and perform complex automated probing for vulnerabilities.