Enterprises do not need much convincing to invest in AI anymore. The debate has shifted from whether to adopt it to how quickly it can be embedded across products, workflows, customer support, software development, and internal operations. That urgency is understandable. What is less reassuring is how uneven enterprise readiness still looks. F5’s 2025 State of AI Application Strategy report found that 96 percent of organizations are implementing AI models, yet only 2 percent rank as highly ready to secure and sustain them at scale.
That gap is where much of today’s risk sits. AI is helping legitimate businesses move faster, but it is also helping attackers scale familiar tactics with far less effort. At the same time, many companies are creating fresh exposure themselves by rolling out AI before governance, identity controls, and data handling policies are mature enough to keep up.
The security picture is already shifting in that direction. Google’s 2025 zero-day review found 90 exploited zero-days last year, with 43 of them, or 48 percent, targeting enterprise technologies, a record share. The implication is hard to miss. Attackers are increasingly focusing on the systems, appliances, and business platforms that sit closest to corporate operations and sensitive data.
For fast-growing companies, especially those juggling expansion, hybrid work, and vendor-heavy tech stacks, the real question is no longer whether AI changes cyber risk. It is where that change will show up first, and what to do before it becomes expensive.
AI-written phishing gets harder to spot
Phishing is old. What has changed is how quickly it can now be tailored, localized, and polished. Generative AI helps attackers produce convincing emails in multiple languages, mimic internal tone, and generate variants at scale. That matters for enterprises operating across markets, where employees may already be managing multilingual communication, cross-border vendor requests, and constant message overload.
The danger is not just that phishing messages look better. It is that they fit more naturally into ordinary business workflows. A payment update from a supplier, a document review request from HR, or a login prompt tied to a cloud service can now arrive with fewer of the awkward clues that once gave attackers away.
That raises the bar for defense. Companies need stronger controls around what happens after an employee clicks. Payment approvals, bank detail changes, and credential resets should require verification outside email. Staff training still matters, but inbox judgment alone is no longer a sufficient control when the quality of deception is rising so quickly. Microsoft’s takedown of nearly 340 phishing-related websites in September 2025 was one reminder that industrialized phishing infrastructure is still expanding, not fading.
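As a concrete illustration of what "verification outside email" can mean in practice, here is a minimal sketch in Python: a supplier bank-detail change sits in a pending state until someone confirms it over a channel other than the one the request arrived on. The data model, channel names, and workflow below are hypothetical, not a reference to any particular finance system.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Channels a confirmation can arrive on. The invariant: the confirming
# channel must differ from the channel the original request came in on.
CHANNELS = {"email", "phone_callback", "in_person", "ticketing"}

@dataclass
class BankDetailChange:
    supplier_id: str
    new_iban: str
    requested_via: str                     # e.g. "email"
    confirmations: list = field(default_factory=list)

    def confirm(self, verifier: str, channel: str) -> None:
        if channel not in CHANNELS:
            raise ValueError(f"unknown channel: {channel}")
        self.confirmations.append((verifier, channel, datetime.now(timezone.utc)))

    def may_apply(self) -> bool:
        # At least one confirmation must come from outside the requesting channel.
        return any(ch != self.requested_via for _, ch, _ in self.confirmations)

change = BankDetailChange("SUP-0042", "DE89 3704 0044 0532 0130 00", requested_via="email")
change.confirm("a.finance", "email")           # same channel: not sufficient
assert not change.may_apply()
change.confirm("b.finance", "phone_callback")  # out-of-band: now allowed
assert change.may_apply()
```

The point of the pattern is not the code but the invariant it enforces: no single inbound channel can both request and confirm a payment change.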
Deepfake impersonation moves into the enterprise
Deepfakes still attract attention as a political or social media problem, but their use against enterprises is becoming more practical and more immediate. A fake voice note from a senior executive. A spoofed video call requesting a transfer. A fabricated investor update sent during a moment of urgency. The problem is not merely realism. It is timing, pressure, and the human tendency to trust familiar signals when a request appears to come from someone important.
The International Telecommunication Union warned in a 2025 report that companies need stronger measures against AI-driven deepfakes because of growing risks tied to fraud and misinformation. That is especially relevant for firms with distributed teams, outsourced support functions, or executive travel schedules that make unusual requests seem plausible.
High-risk decisions should never rely on a single channel, whether that channel is email, voice, or video. Finance teams should have callback requirements. Sensitive approvals should have a secondary verifier. Executive offices should assume that familiar faces and voices are no longer proof of authenticity. In many cases, resilience starts with slowing down a request that was designed to feel urgent.
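A rough sketch of that multi-channel rule, with entirely hypothetical names and thresholds: an approval counts only when two different verifiers have signed off over two different channels, and anything flagged urgent is held through a deliberate cooling-off window, since urgency is exactly what deepfake requests manufacture.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical policy: two distinct verifiers on two distinct channels,
# plus a mandatory cooling-off window for anything flagged urgent.
COOLING_OFF = timedelta(hours=2)

def approved(request: dict) -> bool:
    sign_offs = request["sign_offs"]       # list of (verifier, channel) pairs
    verifiers = {v for v, _ in sign_offs}
    channels = {c for _, c in sign_offs}
    if len(verifiers) < 2 or len(channels) < 2:
        return False
    if request.get("urgent"):
        age = datetime.now(timezone.utc) - request["received_at"]
        if age < COOLING_OFF:
            return False                   # urgency never shortcuts review
    return True

transfer = {
    "amount": 250_000,
    "urgent": True,
    "received_at": datetime.now(timezone.utc),
    "sign_offs": [("cfo_office", "video_call"), ("treasury", "phone_callback")],
}
print(approved(transfer))  # False: the urgent request is still cooling off
```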
Identity workflows are now an attack surface
One of the more sobering lessons from 2025 is that attackers do not always need to beat a company’s technology stack head-on. Sometimes, they simply need to manipulate the people and processes around it. Reuters reported that the cyberattacks on Marks & Spencer and Co-op began with hackers impersonating employees to IT help desks in order to get passwords reset. In a separate Reuters report, M&S said the broader incident would cost about 300 million pounds in lost operating profit.
That example deserves attention well beyond retail. In many enterprises, the help desk is still treated as an operational support function rather than a privileged security gateway. Yet account recovery, multifactor resets, device enrollment, and contractor access changes can all become entry points if the process is weak or rushed.
This is where AI adds force to old methods. Attackers can collect context faster, mimic internal language more effectively, and pressure frontline support staff with more convincing impersonation. Enterprises should review identity proofing for resets, separate high-risk admin workflows from standard user support, and treat vendors and contractors as part of the same exposure map. A clean password policy means little if the reset process is easy to manipulate.
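One way to picture tiered identity proofing at the help desk, as a simplified sketch rather than a reference implementation: reset requests are classified by account privilege and requester type, and high-risk tiers are routed to stronger checks outside the standard queue. The roles, flags, and proofing steps below are illustrative assumptions.

```python
from enum import Enum

class Risk(Enum):
    STANDARD = "standard"
    HIGH = "high"

# Hypothetical tiering: privileged roles, contractor accounts, and accounts
# with recent MFA changes all escalate the reset to the high-risk tier.
PRIVILEGED_ROLES = {"domain_admin", "finance_approver", "help_desk"}

def reset_tier(account: dict) -> Risk:
    if account["role"] in PRIVILEGED_ROLES:
        return Risk.HIGH
    if account.get("contractor") or account.get("recent_mfa_change"):
        return Risk.HIGH
    return Risk.STANDARD

def required_proofing(tier: Risk) -> list[str]:
    if tier is Risk.HIGH:
        # Stronger identity proofing, handled outside the standard queue.
        return ["manager_callback", "live_video_id_check", "security_team_signoff"]
    return ["registered_device_push", "known_personal_detail"]

acct = {"user": "j.doe", "role": "finance_approver", "contractor": False}
print(required_proofing(reset_tier(acct)))
```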
AI speeds up the race to exploit vulnerabilities
Not every AI-powered threat relies on social engineering. Some of the shift is about speed. AI tools can help attackers automate reconnaissance, scan for patterns, summarize code, and identify likely weak points more efficiently than before. Defenders, meanwhile, still face the same patching bottlenecks, asset visibility gaps, and change-management delays. The result is a widening mismatch between attacker speed and defender readiness.
Google’s latest review points to that pressure directly. The company said AI is likely to accelerate reconnaissance, vulnerability research, and exploit development. It also found that enterprise software and edge devices remain prime targets, with enterprise-grade technologies accounting for nearly half of the zero-days tracked in the most recent period.
For enterprises, this means cyber hygiene has become a more strategic issue than many boards still assume. Internet-facing appliances, security tools, remote access infrastructure, and tightly connected enterprise software deserve priority because a single foothold there can expose far more than one endpoint ever could. The response is not simply to patch faster in the abstract. It is to know which systems matter most, reduce unnecessary exposure, and assume that attackers are getting better at finding weak links before routine security cycles catch them.
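To make "know which systems matter most" less abstract, here is a small prioritization sketch: each asset gets a score from a few exposure signals, and the patch queue is ordered by that score rather than by ticket arrival. The signals and weights are illustrative placeholders, not a calibrated risk model.

```python
# Score each asset on a few exposure signals and patch in descending order.
WEIGHTS = {
    "internet_facing": 5,
    "edge_or_security_appliance": 4,   # VPNs, firewalls, gateways
    "privileged_reach": 3,             # can touch identity or admin planes
    "known_exploited_cve": 5,          # e.g. appears on an exploited-CVE list
}

def exposure_score(asset: dict) -> int:
    return sum(weight for signal, weight in WEIGHTS.items() if asset.get(signal))

fleet = [
    {"name": "vpn-gw-01", "internet_facing": True,
     "edge_or_security_appliance": True, "known_exploited_cve": True},
    {"name": "hr-portal", "internet_facing": True},
    {"name": "build-server", "privileged_reach": True},
]

for asset in sorted(fleet, key=exposure_score, reverse=True):
    print(f"{exposure_score(asset):>2}  {asset['name']}")
```

Even a crude ordering like this beats patching by ticket age, because it concentrates effort on the edge and enterprise systems that attackers are demonstrably favoring.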
Employees can leak sensitive data without meaning to
Some of the most immediate AI risk does not come from an external breach at all. It comes from employees trying to work faster. Staff paste meeting notes into public chatbots, upload code into unsanctioned tools, summarize customer information through consumer AI services, or test prompts using personal accounts outside the company’s security perimeter.
This is one reason AI adoption can outpace AI governance so easily. The tools feel helpful, the barrier to entry is low, and the business value appears immediate. Yet the data handling consequences are often poorly understood at the point of use. Once proprietary material, customer records, or internal strategy documents move into tools the company does not control, containment becomes much harder.
The answer is not to ban everything and hope for compliance. It is to make sanctioned usage easier than unsanctioned usage. Companies need clear rules on what data cannot be pasted into external models, approved enterprise tools with logging and contractual safeguards, and managers who understand that shadow AI is often a workflow problem before it becomes a policy problem. If employees believe official channels are too slow or too restrictive, they will route around them.
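A minimal sketch of making the sanctioned path the easy path, with hypothetical patterns and endpoint names: outbound prompts are scanned for obviously sensitive markers, and flagged text is redirected to a logged enterprise gateway instead of simply being refused.

```python
import re

# Illustrative patterns only; a real deployment would use a proper DLP engine.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk-|AKIA)[A-Za-z0-9]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.I),
}

def route_prompt(text: str) -> str:
    """Decide which endpoint a prompt may go to, logging any matches."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]
    if hits:
        # Redirect to the sanctioned, logged enterprise tool instead of refusing.
        print(f"flagged ({', '.join(hits)}): routing to approved tool")
        return "enterprise_ai_gateway"   # hypothetical internal endpoint
    return "external_model_allowed"

print(route_prompt("Summarize these CONFIDENTIAL meeting notes"))
print(route_prompt("Draft a friendly subject line for a product newsletter"))
```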
Extortion is becoming more personalized and scalable
Ransomware and extortion are not new either, but AI is helping attackers sharpen the pressure tactics that follow an intrusion. Once data is stolen, it can be sorted, summarized, and weaponized more quickly. Executives can be targeted with tailored threats. Stolen communications can be mined for leverage. The technical breach becomes only one phase of a broader influence and coercion campaign.
Reuters reported in June 2025 that hackers were tricking employees into installing a modified Salesforce-related app, giving them access to data and other cloud services that could then be used for extortion. That kind of campaign reflects a broader reality. Post-breach pressure no longer depends only on encryption. It depends on how effectively attackers can turn access and stolen information into reputational, legal, and operational pain.
That is why incident response plans need to expand beyond containment. Enterprises should know who handles regulator communication, customer outreach, executive impersonation attempts, identity resets, and third-party coordination if an attacker begins applying pressure with stolen data in hand. The companies that fare better are often not the ones that avoid every incident. They are the ones that prepare for the business consequences of compromise before a crisis starts.
The next enterprise test
AI is not replacing the standard enterprise threat playbook so much as intensifying it. Phishing looks more credible. Impersonation grows more persuasive. Vulnerability exploitation moves faster. Internal shortcuts create new leakage paths. Extortion becomes more precise. That mix leaves enterprises facing a familiar challenge under less forgiving conditions.
For founders, operators, and technology executives, the priority now is not to chase every new headline. It is to identify where AI most quickly amplifies existing weaknesses, especially around identity, data handling, patch discipline, and cross-channel verification. The companies that do this well will not necessarily be the ones with the loudest AI strategy. They will be the ones that treat AI risk as part of operational maturity, not an afterthought to innovation.