AI-driven phishing will continue to be a major security issue in 2025 as AI capabilities are used to create more sophisticated and cleverly crafted campaigns.
Todd Thorsen, CISO of CrashPlan, predicts that artificial intelligence will fuel the advancement of ransomware threats in 2025, leading companies to adopt broader cyber resilience programs focused on AI.
Many AI deployments implemented in 2024 operated under the assumption that AI would function as conventional software, lacking a dedicated framework to define the capabilities and limitations of AI agents.
As organizations move to multicloud environments and increase third-party integrations, managing extended supply chains securely will be crucial, with AI playing a key role in detecting threats and making real-time adjustments to secure data flows.
In 2025, AI and machine learning will automate complex identity governance processes, such as role management and access reconciliation, by analyzing historical data and usage patterns.
Cybercriminals will target AI-driven processes, such as supply chain management and financial planning, to conduct high-stakes fraud without relying on social engineering to trick individuals.
Ashish Nagar, CEO of Level AI, predicts that regulatory compliance in AI will drive innovation in transparent, explainable AI models for customer service applications.
In 2025, artificial intelligence will function as both an offensive and defensive force in cybersecurity, with both sides attempting to control critical data.
Alex Holland, principal threat researcher at HP Security Lab, suggests that cybersecurity teams will harness AI to enhance threat detection and response, which will help relieve pressure on those teams.
Bill Murphy, director of security & compliance at LeanTaaS, notes that AI allows attackers, particularly those operating outside the U.S., to generate personalized attacks by analyzing the digital footprints of their targets, making the attacks indistinguishable from legitimate communications.
Defenders can leverage AI to analyze massive amounts of data and identify patterns, which accelerates the work of Security Operations Center (SOC) teams and other blue-team operations.
AI and machine learning-based fraud detection systems are increasingly vital for businesses because they use dynamic learning to adapt to evolving bot tactics in real-time, unlike static defenses that rely on preset rules.
AI and machine learning integration in 2025 will improve efficiency, natural language use, and threat detection capabilities, while simultaneously expanding the threat landscape and enhancing adversary execution capabilities.
John Bennett, CEO of Dashlane, claims that cybercriminals are leveraging AI to create highly personalized and harder-to-detect malware and phishing schemes.
Organizations must learn how to secure AI before broadly deploying it for security purposes.
Ron Reiter, CTO and co-founder of Sentra, predicts that organizational adaptation to AI-driven cybersecurity will raise new ethical questions regarding the security of training data and the autonomy of AI in making security-critical decisions.
On-premises attacks are being detected more frequently because EDR products are becoming more visible and incorporating AI capabilities that enhance system visibility.
Danielle Coady, vice president at Index Engines, argues that while AI-powered technologies are essential for enhancing cyber resilience, they also provide opportunities for bad actors to exploit innovation for financial gain.
AI functions as both a defensive tool to strengthen cybersecurity and an offensive tool that provides attackers with new capabilities to exploit systems.
Paul Nguyen, co-founder and co-CEO of Permiso, observes that cloud environments, AI services, and SaaS applications are becoming increasingly valuable assets for threat actors to hijack and abuse.
In 2025, AI-powered threats will become more sophisticated, with deepfakes appearing more frequently and amplifying issues related to misinformation and fake news.
AI will have a dual impact on cybersecurity in 2025, characterized by increased productivity and heightened security risks.
AI will help security teams spot emerging attack patterns before they cause damage by training models on vast amounts of historical data.
Fortifying supply chains, adopting IoT standards, and leveraging AI are essential strategies for organizations to maintain cybersecurity in 2025.
Eyal Benishti asserts that AI-enabled phishing kits and APIs will allow attackers to automate the creation of personalized, targeted, and polymorphic phishing emails, increasing both the volume of attacks and their success rates.
Steve Wilson, chief product officer at Exabeam, observes that AI's ability to identify weaknesses faster than humans will significantly shrink the time between vulnerability discovery and exploitation.
By 2025, AI in cybersecurity will shift from a chatbot-based approach to an agent-driven approach, where organizations use agents for threat detection, autonomous responses, IT resource scalability, and improved cyber hygiene.
In 2025, companies must improve their security postures to address new risks introduced by AI, such as prompt injection attacks where malicious inputs are disguised as legitimate user prompts in generative AI systems.
Nation-state cybercriminals are utilizing AI to create personalized, believable phishing attacks, including the use of AI-backed misinformation bots and the impersonation of public figures or personally known individuals like family and friends.
Artificial intelligence will increase the threat of social engineering by enabling junior attackers to generate multilingual, credible, and official-sounding text to manipulate people.
CISOs must implement robust governance systems to maintain oversight of critical access decisions and govern AI projects to reduce the risk of data loss.
Managed Service Providers (MSPs) will become critical partners in building robust security frameworks and third-party oversight as organizations increasingly depend on AI, GenAI, and automation.
David Wiseman argues that organizations must shift focus from educating teams about AI risks to actively detecting and preventing attacks by investing in end-to-end identity security platforms that unify identity providers across on-prem, cloud, and hybrid environments.
Artificial intelligence is transforming the threat landscape by making cyber attacks faster, more scalable, and more automated.
The primary cybersecurity threats in 2025 will originate from increasingly sophisticated, AI-driven attacks.
Developers can integrate AI with automated tooling and CI/CD pipelines to quickly identify and fix coding flaws.
Eyal Benishti predicts that security vendors will develop new tools to detect AI-based content, including synthetic writing, videos, static imagery, and voice duplication, as well as AI-enabled attacks.
Commercial AI vendors are significant consumers of open source software (OSS) but often lack transparency with customers regarding the specific OSS components they utilize.
Transnational criminal groups are expected to adopt modern AI tools, such as generative AI and deepfakes, to evolve their business operations.
John Bennett, CEO of Dashlane, predicts that in 2025, AI will become increasingly central to both cyber attacks and cyber defenses, driving a significant evolution in the threat landscape.
Organizations will continue to experiment with AI technologies in 2025 to determine where the technology offers value.
AI agents behave in non-deterministic ways similar to humans and can be deceived, as demonstrated by researchers who successfully manipulated AI assistants into extracting sensitive user data by convincing the AI to adopt a 'data pirate' persona.
The evolution of AI-driven cyber threats will force organizations to rethink security strategies and invest in AI-powered defense mechanisms to improve threat detection speed and streamline security processes.
In 2025, AI in security operations will advance to the investigation stage, where it will conduct investigations, generate adversary activity timelines, and summarize findings.
Cybersecurity vendors will need to focus on demonstrating value and proving ROI, as they will no longer be able to rely on generic promises of "AI-driven security" to make sales.
Chris Hughes, chief security advisor at Endor Labs, predicts a continued intersection of AI, application security (AppSec), and open source software (OSS), noting that malicious actors are targeting open source AI models, communities, and hosting platforms.
Ev Kontsevoy proposes that the solution to AI security risks is to treat all software and hardware powering AI like humans from a security perspective, requiring the consolidation of AI agent identities with other identities (engineers, laptops, servers, microservices) into a unified inventory for identity, policy, access relationships, and real-time visibility.
AI-driven cybersecurity systems can analyze data in real-time to identify patterns and anomalies indicating a breach faster than human analysts.
By 2025, using AI within cloud-native frameworks will be essential for maintaining the agility needed to counter increasingly adaptive threats.
John Bennett, CEO of Dashlane, claims that cybersecurity solutions are advancing to include AI-discovered vulnerabilities and autonomous real-time threat detection and mitigation systems powered by predictive analytics.
Riaz Lakhani, CISO at Barracuda, predicts that threat actors will use artificial intelligence to scale content creation, produce more persuasive content, and employ deepfake and voice replication technologies for sophisticated phishing and social engineering attacks.
Cybercriminals will deploy sophisticated social engineering tactics in 2025, using AI to bypass security measures such as multi-factor authentication (MFA).
In 2025, over half of small and medium-sized businesses will depend on AI to manage their security operations.
Flashpoint leverages artificial intelligence tools like Automated Source Discovery to empower analysts, enabling them to uncover critical intelligence faster and disrupt adversaries effectively.
In 2024, threat actors used AI-generated personas on LinkedIn to pose as recruiters, targeting developers and engineering talent by tricking them into downloading malicious files under the guise of recruitment tests.
Raffael Marty, EVP & general manager of Cybersecurity at ConnectWise, predicts that attackers will focus on automated, large-scale attacks against small and medium-sized businesses, using AI to exploit vulnerabilities rather than relying on intelligence-driven tactics.
Many organizations currently struggle to defend against basic cyber attacks, making it critical for them to implement AI in their defensive strategies.
AI will be integrated into digital wallets in 2025 to provide hyper-personalized experiences, prevent fraud, and offer businesses insights into customer behaviors.
Gary Orenstein, chief customer officer at Bitwarden, suggests that the most effective way to combat AI-enhanced social engineering threats is through layered security, which includes passwordless solutions, multi-factor authentication (MFA), and continuous employee education.
Nir Zuk states: "The real advantage will go to the organizations that can centralize their data, enabling AI outcomes we have yet to see, and make the decisions now that will enable their security and success for the future."
Organizations will face the challenge of balancing AI's security advantages with the mounting risks it introduces in the coming year.
Rik Ferguson, vice president of security intelligence at Forescout, predicts that by 2025, cybercriminals will leverage AI to automate and accelerate campaigns, specifically utilizing attack vectors such as model manipulation, data poisoning, supply chain disruptions, and AI-assisted fraud.
Cybercriminals will use AI to craft personalized phishing and social engineering campaigns by adapting messages on the fly and analyzing media and social media trends.
In 2025, security leaders are expected to experience a growing sense of disillusionment regarding the potential of AI in cybersecurity, as the initial excitement begins to fade.
83% of security leaders report that developers are already using AI to generate code, and 57% of security leaders state that using AI for code generation is now common practice.
By 2025, AI tools will automate compliance workflows, including auditing, reporting, and monitoring regulatory requirements in real-time, according to Jimmy Mesta of RAD Security.
Credential stuffing attacks will become more sophisticated in 2025 as AI is integrated with automated workflows to test stolen login credentials on shorter timelines.
Aditya K. Sood, VP of security engineering and AI strategy at Aryaka, claims that the adoption of AI introduces new attack surfaces and potential vulnerabilities into network environments.
Software vendors are increasingly integrating AI features into existing products by leveraging foundational models and open source software (OSS) large language models (LLMs).
The proliferation of cloud-native technologies and AI is accelerating the creation and deployment of machine identities, such as TLS and SPIFFE, which increases the complexity of identity management.
Jason Urso notes that AI combined with new sensors in industrial plants provides guidance to assure plant operations remain safe, similar to how sensors in cars alert drivers to hazardous conditions.
Jason Urso, VP and CTO of Industrial Automation at Honeywell, states that AI provides insights and guidance that help industrial workers perform tasks efficiently by reducing mundane work and allowing the workforce to focus on higher-value tasks.
Cyber attackers are currently using AI to enhance their tactics, and the danger of AI-powered cyberthreats is expected to increase as AI technology evolves and quantum computing capabilities emerge.
AI agents are susceptible to both malware and identity-based attacks simultaneously.
Michael Smith, field CTO at Vercara, predicts that cybercriminals will use AI in 2025 to enhance the effectiveness and scale of attacks, leading to record levels of return on investment (ROI).
AI is lowering the barrier to entry for creating sophisticated phishing campaigns, including deepfake voice calls and hyper-personalized spear phishing emails.
Computing infrastructure identity management tools were built on the assumption that users are either humans or machines, a distinction that Ev Kontsevoy argues will stop making sense in 2025 because AI agents straddle the line between human and machine.
Business Email Compromise (BEC) is expected to evolve into Autonomous Business Compromise (ABC), where AI automates fraud with minimal human interaction.
The integration of AI into security operations has been a goal for over a decade, with recent improvements in data collection and AI technology enabling tangible progress.
Future AI-driven malware is anticipated to be capable of learning and adapting in real-time during an attack.
In 2025, AI will enable malicious actors with low technical proficiency to launch high-volume, enterprise-wide attacks that were previously only possible for large-scale criminal organizations.
AI and machine learning will play an increasingly significant role in detecting and responding to threats, leading to more advanced threat hunting tools and automated incident response systems.
David Wiseman asserts that siloed identity management tools and traditional multi-factor authentication (MFA) tools are no longer sufficient to address the rapid pace of AI adoption and manipulation.
Companies are currently using AI in security operations workflows to reduce the volume of alerts by filtering out false positives.
The AI bubble in the cybersecurity industry will burst in 2025, causing AI-enabled cybersecurity companies to struggle while attackers leverage AI for new attack methods.
Chris Scheels, VP of product marketing at Gurucul, states that AI-powered threat hunting will be crucial for detecting and responding to advanced threats, as AI models can identify sophisticated attacks that traditional methods might miss.
Companies that lag in fortifying their Identity and Access Management (IAM) strategies risk exposing critical assets to attackers using artificial intelligence as a skeleton key.
Enterprises deploying artificial intelligence in 2025 face challenges related to business operations, safety, skills, and technical infrastructure.
AI is poised to revolutionize attack strategies for cybercriminals, enabling them to execute large-scale operations with minimal effort.
AI-powered attack techniques, including autonomous malware, social engineering, data exfiltration, and credential stuffing, are becoming significantly harder to detect than traditional threats.
A collaborative approach to penetration testing will emerge where AI handles routine, large-scale vulnerability scanning and data analysis, while human experts focus on interpreting results, strategic thinking, and identifying nuanced or context-specific security issues.
John Bennett, CEO of Dashlane, claims that the commoditization of sophisticated attack tools will make large-scale, AI-driven campaigns accessible to attackers with minimal technical expertise.
Larger enterprises will be targets of AI-supported attacks that are sophisticated and capable of adapting in real-time, requiring organizations to adopt proactive defenses.
Artificial intelligence enables threat actors to more easily uncover SaaS vulnerabilities and misconfigurations, bypass traditional security measures, and create more convincing phishing campaigns.
Businesses should deploy AI strategically where it adds value rather than adopting it solely due to market hype.
Ev Kontsevoy predicts that the pace of AI deployment will slow in 2025 because security teams will need to retrofit current security models to address vulnerabilities in AI agents.
Security teams will increasingly use AI and non-AI technologies to automate tasks across domains such as GRC, security operations, and product security.
Alex Holland, principal threat researcher at HP Security Lab, predicts that threat actors will use AI to craft highly successful ransomware campaigns in 2025.
To counter cyberthreats that complicate system recovery, organizations must rely on isolated, unaffected data copies and AI/ML-powered tools to detect and validate clean data.
AI-powered identity management systems will integrate with AI frameworks to monitor and analyze user behavior continuously, allowing them to detect anomalies and dynamically adjust permissions based on real-time context.
Organizations will integrate AI to augment human capabilities to fortify the network as a pivotal line of defense and policy enforcement.
AI and machine learning serve a dual role in the 2025 cybersecurity landscape, empowering both attackers to bypass detection and defenders to validate clean data for recovery.
Bad actors are increasingly using AI to create more convincing phishing emails, automate the discovery of vulnerabilities, and develop malware that evades detection by traditional security tools.
Attackers can exploit users who share data with AI by infiltrating AI chatbots to access the input data provided by those users.
Ron Reiter, CTO and co-founder of Sentra, asserts that the arms race centered on AI-driven cybersecurity strategies began to emerge in 2024.
Russ Kennedy, chief evangelist at Nasuni, observes that threat actors are evolving by using AI to create insidious methods, such as embedding corrupted models and targeting AI frameworks directly.
Traditional security operations center (SOC) analyst roles will rapidly decline in 2025 as AI and machine learning automate routine security tasks.
Darren Anstee, CTO for Security at NETSCOUT, asserts that companies will prioritize secure, customizable AI solutions that protect sensitive customer data while leveraging advanced analytics.
Secureframe organizations are leveraging AI to automate security control monitoring and detect anomalous patterns that could indicate compromise.
Attackers may use AI to craft sophisticated social engineering attacks and review public code for vulnerabilities, complicating cybersecurity in the near future.
Jim Broome, CTO and president of DirectDefense, advises businesses to combat evolving AI-driven threats by continually refreshing employee training and adopting advanced AI tools, such as Microsoft's Azure sandbox, to maintain security control.
Predictive maintenance powered by AI will play a pivotal role in addressing vulnerabilities proactively, minimizing downtime and costs while bolstering security in building management systems.
AI in security operations will be capable of understanding threat context and autonomously initiating response actions, while requiring human analyst confirmation to proceed further.
Cybercriminals are using Artificial Intelligence (AI) to craft targeted phishing attacks, requiring organizations to evolve their defensive strategies.
Cybercriminals will increasingly utilize AI to develop sophisticated and targeted attacks, which necessitates that defense mechanisms evolve to stay ahead.
Eyal Benishti, founder and CEO of IRONSCALES, predicts that the adoption of AI tools like ChatGPT will drive growth in AI-augmented services, extensions, and browser plug-ins.
Russ Kennedy, chief evangelist at Nasuni, asserts that in 2025, data protection and rapid recovery will become the backbone of any AI strategy as enterprises increasingly rely on AI to power operations.
Organizations in 2025 need to focus on minimizing risks associated with AI services by addressing security at both the application level and the model level, specifically regarding Large Language Model (LLM) risks.
George Gerchow states that AI will be instrumental in 2025 for both offense and defense, including enhancing internal and external bots for automated GRC (Governance, Risk, and Compliance) and audits, and helping security teams scale against sophisticated threats.
AI can reduce the impact of security incidents and improve overall security posture by automating routine tasks and recommending effective response strategies.
Ev Kontsevoy, CEO and co-founder of Teleport, predicts that 2025 will be the year of 'The Great AI Awakening' among cybersecurity professionals, as they discover how easily AI agents can be manipulated to act in unintended ways, such as causing data leaks.
In 2025, AI will drive both attack and defense strategies, redefining incident response and necessitating the use of AI systems for detecting breaches, identifying anomalies, and automating cybersecurity measures.
Identity spoofing is expected to be a major concern in 2025 due to the advancement of AI and deepfake technologies and the use of personal metadata and listening data from telecom network breaches by attackers.
Bill Murphy, director of security & compliance at LeanTaaS, observes that cybercriminals are using AI to create highly persuasive phishing campaigns that lack traditional indicators of fraud, such as poor grammar or awkward phrasing.
AI-aided threat monitoring, including pattern recognition, anomaly detection, and data classification, will become necessary for security operations center (SOC) managers to identify urgent threats within large datasets.
72% of security leaders feel pressured to allow the use of AI to stay competitive, while 63% of security leaders have considered banning AI due to security risks.
The cybersecurity market is increasingly skeptical that artificial intelligence alone is sufficient to defend against AI-generated attacks.
AI in cybersecurity can predict attacker behavior, assist in threat modeling, and automate responses to security events through Security Orchestration, Automation, and Response (SOAR).
89% of security practitioners plan to use more AI tools in the coming year, despite concerns that adding more AI tools could create more work.
Avani Desai, CEO of Schellman, asserts that attackers are deploying machine learning models that adapt, disguise themselves, and evade traditional defenses in real-time, creating a race between defensive and offensive AI technologies.
Security and IT leaders should prepare to evaluate and onboard a diverse set of immature AI products.