TL;DR

  • AI-enabled cyberattacks increased 89% in 2025 compared to 2024, according to CrowdStrike's Global Threat Report 2026
  • Attackers use AI to write convincing phishing emails, develop malware, and scale operations faster than ever
  • Small businesses face the same AI-powered threats as enterprises—but with fewer resources to defend themselves
  • Defense requires identity-first security, AI-aware training, and behavioral threat detection

The AI Arms Race Is Here

Cybercriminals aren't just using AI—they're scaling it. According to CrowdStrike's Global Threat Report 2026, attacks by "AI-enabled adversaries" surged 89% in 2025 compared to the previous year [1]. This isn't theoretical. Threat actors from nation-state groups to ransomware gangs are actively deploying large language models (LLMs) and machine learning to optimize every stage of the attack lifecycle.

For small and medium businesses (SMBs), this creates an asymmetric threat. Attackers can now launch sophisticated, personalized phishing campaigns at scale—targeting hundreds of businesses in the time it once took to target one. The question isn't whether AI-powered threats will reach smaller organizations. It's whether those organizations are ready to detect and stop them.

How Attackers Are Using AI

1. AI-Generated Phishing at Scale

The most visible AI threat is also the most pervasive: phishing. Attackers use LLMs to write phishing emails that are grammatically perfect, culturally context-aware, and emotionally compelling. What once gave phishing away—awkward phrasing, spelling errors, generic greetings—has been eliminated by AI [1].

CrowdStrike documented a Chinese intelligence operation using AI to create fake consulting firms and target former U.S. government employees on recruitment platforms [1]. A Russian cybercriminal operation dubbed "Renaissance Spider" used AI tools to craft credible phishing emails delivering ClickFix malware to Ukrainian targets [1].

For SMBs, this means the "obvious phishing" test no longer works. An AI-generated email can reference your industry, use your terminology, and address you by name—all pulled from your public website or LinkedIn profile. The volume is the real threat: attackers can generate thousands of unique phishing variants, evading signature-based email filters that rely on pattern recognition.

2. AI-Assisted Malware Development

While AI isn't replacing traditional malware development, it's accelerating it. CrowdStrike's report detailed Fancy Bear (a Russian state-sponsored group) embedding LLM prompts directly into malware—dubbed "LameHug"—to support reconnaissance and document collection during attacks on Ukraine [1].

The LameHug campaign didn't represent a revolutionary leap in malware sophistication. But it showed something more important: threat actors are experimenting with AI as a development aid, integrating it into operational workflows. As AI coding tools improve, the barrier to creating functional malware drops—and SMBs face more attackers with more capabilities.

3. Automated Reconnaissance and Targeting

Before launching an attack, criminals need to know about you. AI automates this reconnaissance phase, scanning public data, employee social media, company websites, and supply chain relationships to build detailed targeting profiles.

CrowdStrike noted that AI tools allow threat actors to "plan and accelerate reconnaissance operations" [1]. For a business with 50 employees, that might mean automated discovery of who handles finance, which cloud platforms you use, and when your fiscal year ends—all before a human attacker ever touches a keyboard.

This reconnaissance advantage compounds the phishing threat. An AI-powered attacker can generate 100 tailored phishing emails for 100 different employees, each referencing specific projects, job roles, or business relationships. Traditional security awareness training—teaching people to spot generic phishing red flags—struggles against this level of personalization.

Why SMBs Are in the Crosshairs

AI-powered attacks were once the domain of high-value targets: defense contractors, financial institutions, government agencies. The economics have changed. When phishing required manual research and writing, targeting a $5M-revenue business wasn't worth the effort. When AI automates both research and writing, SMBs become just another scalable target.

The M-Trends 2026 report from Mandiant reinforces this. It documented a collapse in the "hand-off window"—the time between initial access and a secondary threat group taking over—from 8 hours in 2022 to 22 seconds in 2025 [2]. Initial access brokers use AI-generated phishing to compromise accounts, then immediately hand off to ransomware groups. Your business doesn't need to be the primary target; you just need to be vulnerable enough to be worth exploiting as a stepping stone.

Building AI-Resilient Defenses

1. Identity-First Security

If AI-powered phishing bypasses technical controls, your last line of defense is identity security. Mandiant's report showed that prior compromise—where attackers leverage existing access—became the top initial infection vector for ransomware operations in 2025, accounting for 30% of attacks [2].

Protective measures include:

  • Phishing-resistant MFA: Standard MFA is no longer sufficient. Implement FIDO2/WebAuthn hardware security keys or passkeys for privileged accounts. These can't be phished, even with AI-generated social engineering [3].
  • Least privilege access: Ensure compromised accounts can't access your entire infrastructure. Role-based access control (RBAC) limits blast radius.
  • Session monitoring: Monitor for anomalous session activity, such as logins from unusual locations or rapid access to multiple systems.
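To make the session-monitoring point concrete, here is a minimal, self-contained sketch in Python. It illustrates the principle rather than any specific product: the event fields, thresholds, and in-memory state are all assumptions, and a real deployment would feed this from your identity provider's audit logs.

    from collections import defaultdict
    from datetime import datetime, timedelta

    # Illustrative thresholds -- tune to your environment.
    BURST_WINDOW = timedelta(minutes=5)
    BURST_SYSTEMS = 4  # distinct systems in one window before alerting

    known_locations = defaultdict(set)   # user -> countries seen before
    recent_access = defaultdict(list)    # user -> [(timestamp, system), ...]

    def check_login(user: str, country: str, system: str, ts: datetime) -> list[str]:
        """Return alerts for new-location logins and rapid multi-system access."""
        alerts = []
        # Alert on a login from a country this user has never used before.
        if known_locations[user] and country not in known_locations[user]:
            alerts.append(f"{user}: login from new location {country}")
        known_locations[user].add(country)
        # Keep only events inside the window, then count distinct systems.
        recent_access[user] = [(t, s) for t, s in recent_access[user]
                               if ts - t <= BURST_WINDOW]
        recent_access[user].append((ts, system))
        if len({s for _, s in recent_access[user]}) >= BURST_SYSTEMS:
            alerts.append(f"{user}: {BURST_SYSTEMS}+ systems inside {BURST_WINDOW}")
        return alerts

The same two signals—new location, rapid fan-out across systems—are what commercial identity-protection tools alert on; the sketch just shows why they're cheap to compute.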

2. Behavioral Threat Detection

Signature-based security looks for known bad patterns. AI attacks generate novel patterns, requiring behavioral detection that flags deviations from normal activity.

Key capabilities include:

  • Anomaly detection: Flag unusual login times, data access volumes, or lateral movement within your network.
  • Email analysis: Deploy email security that uses behavioral heuristics rather than just signatures. Mandiant noted that email fuzzing—dynamic text randomization—renders static pattern matching significantly less effective [4].
  • Endpoint detection: EDR tools that identify suspicious process chains, not just known malware.
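As a concrete illustration of the anomaly-detection idea above, the following Python sketch baselines a user's daily data-access volume and flags statistical outliers. The seven-day minimum and three-sigma threshold are assumptions for illustration; commercial UEBA and EDR tools model far more signals than one metric.

    import statistics

    def is_anomalous(history_mb: list[float], today_mb: float,
                     threshold: float = 3.0) -> bool:
        """Flag today's volume if it deviates > threshold sigmas from baseline."""
        if len(history_mb) < 7:      # not enough baseline data yet
            return False
        mean = statistics.mean(history_mb)
        stdev = statistics.stdev(history_mb)
        if stdev == 0:               # flat history: any change is unusual
            return today_mb != mean
        return abs(today_mb - mean) / stdev > threshold

    # Example: a user who normally moves ~200 MB/day suddenly pulls 5 GB.
    baseline = [180.0, 210.0, 195.0, 220.0, 205.0, 190.0, 215.0]
    print(is_anomalous(baseline, 5000.0))  # True -> worth an alert

Note what this catches that signatures can't: the 5 GB pull matches no known-bad pattern, yet it's obviously abnormal for this user.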

3. AI-Aware Security Training

Traditional security training teaches people to spot generic phishing red flags. AI-aware training acknowledges that well-crafted phishing may have no obvious red flags—only contextual inconsistencies.

Training should emphasize:

  • Verification workflows: "Unexpected request? Verify through a second channel before acting."
  • Understanding the threat: Help employees understand that perfect grammar and personalization don't equal legitimacy.
  • Reporting mechanisms: Make it easy to report suspicious messages—even when the reporter isn't sure.

4. Supply Chain and Third-Party Risk

Mandiant's research showed attackers compromising third-party SaaS vendors to steal hard-coded keys and access tokens, then pivoting into downstream customer environments [2]. Your vendor's AI security gap becomes your problem.

Protective steps:

  • Vendor security assessments: Before integrating new tools, ask about their AI security practices.
  • Token management: Minimize use of long-lived API tokens. Rotate them regularly.
  • Access reviews: Regularly audit which third-party apps have access to your data.
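A small script can cover the token-management point above. This sketch flags tokens older than a maximum age so they can be rotated; the token records and the 90-day policy are illustrative, and in practice you would pull the inventory from your identity provider or each vendor's admin API.

    from datetime import datetime, timedelta, timezone

    MAX_AGE = timedelta(days=90)  # illustrative rotation policy

    # Hypothetical inventory; source this from your IdP or vendor APIs.
    tokens = [
        {"name": "ci-deploy", "created": datetime(2025, 1, 10, tzinfo=timezone.utc)},
        {"name": "crm-sync",  "created": datetime(2025, 11, 2, tzinfo=timezone.utc)},
    ]

    now = datetime.now(timezone.utc)
    for token in tokens:
        age = now - token["created"]
        if age > MAX_AGE:
            print(f"ROTATE: {token['name']} is {age.days} days old "
                  f"(policy: {MAX_AGE.days} days)")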

The Cost of Inaction

The IBM Cost of a Data Breach Report 2025 found the average global breach cost reached $4.88 million [5]. For SMBs, that's existential. But the real cost is operational downtime, reputational damage, and lost customer trust—factors that hit smaller organizations harder because they have less margin for error.

AI-powered attacks don't just increase breach likelihood. They increase breach velocity. Mandiant's median dwell time—the time attackers remain undetected—rose to 14 days globally in 2025, but reached 122 days for cyber espionage incidents [2]. When AI accelerates the attack lifecycle, detection and response windows shrink.

What You Can Do Today

You don't need a seven-figure security budget to defend against AI-powered threats. You need prioritized, layered defenses:

  1. Enable phishing-resistant MFA for all admin accounts and anyone with access to sensitive data.
  2. Review and revoke unnecessary access—especially OAuth tokens and API keys that persist beyond their useful life.
  3. Deploy behavioral email filtering that looks beyond signatures.
  4. Update security training to address AI-generated phishing and verification workflows.
  5. Back up everything, with immutable backups that can't be deleted or encrypted by attackers.
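For item 5, one way to get immutable backups is object storage with a write-once retention lock. The sketch below uses AWS S3 Object Lock via boto3 as an example; the bucket name and 30-day retention period are assumptions, and other clouds offer equivalent features.

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-smb-backups"  # hypothetical bucket name

    # Object Lock must be enabled at creation; it can't be added later.
    # (Outside us-east-1, create_bucket also needs CreateBucketConfiguration.)
    s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

    # COMPLIANCE mode: no one, including account admins, can delete or
    # overwrite locked objects until the retention period expires.
    s3.put_object_lock_configuration(
        Bucket=BUCKET,
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
        },
    )

The design point: even if ransomware operators steal admin credentials, compliance-mode retention means they cannot encrypt or delete your backup copies.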

FAQ

Is my business too small to be a target?

No. AI automation makes it cost-effective to target SMBs at scale. If you have email, you're a target.

Can AI be used for defense, too?

Yes. Behavioral anomaly detection, AI-powered email filtering, and automated incident response can help match attacker speed. But tools alone aren't sufficient—processes and training matter equally.

Why isn't standard MFA enough?

Standard MFA (SMS codes, authenticator apps) can be bypassed by adversary-in-the-middle attacks. Phishing-resistant methods like FIDO2/passkeys provide stronger protection.

How can I tell whether an email was written by AI?

You often can't. That's why verification workflows—contacting the sender through a separate channel to confirm requests—are more reliable than trying to spot red flags.

Do I need to buy new AI-specific security tools?

Not necessarily. Many existing security tools (EDR, behavioral analytics, secure email gateways) are effective against AI threats if properly configured. Focus on detection and response capabilities rather than "AI" labels.

Will attacks become fully autonomous?

Not in the foreseeable future. AI is a force multiplier for attackers, but campaign strategy, target selection, and high-value decision-making still require human judgment. The threat is AI-enhanced attackers, not autonomous AI.


References

[1] CrowdStrike, "Global Threat Report 2026," CrowdStrike, 2026. [Online]. Available: https://www.crowdstrike.com/en-us/blog/crowdstrike-2026-global-threat-report-findings/

[2] Mandiant, "M-Trends 2026: Data, Insights, and Strategies From the Frontlines," Google Cloud, 2026. [Online]. Available: https://cloud.google.com/blog/topics/threat-intelligence/m-trends-2026/

[3] FIDO Alliance, "Phishing-Resistant Authentication," FIDO Alliance, 2025. [Online]. Available: https://fidoalliance.org/phishing-resistant-authentication/

[4] Hornetsecurity, "Monthly Threat Report March 2026," Hornetsecurity, 2026. [Online]. Available: https://www.hornetsecurity.com/en/blog/monthly-threat-report/

[5] IBM Security, "Cost of a Data Breach Report 2025," IBM, 2025. [Online]. Available: https://www.ibm.com/reports/data-breach

[6] CISA, "Identity and Access Management," Cybersecurity and Infrastructure Security Agency, 2025. [Online]. Available: https://www.cisa.gov/resources-tools/resources/identity-and-access-management

[7] Australian Cyber Security Centre, "Phishing," ACSC, 2025. [Online]. Available: https://www.cyber.gov.au/threats/phishing

[8] National Institute of Standards and Technology, "AI Risk Management Framework," NIST, 2025. [Online]. Available: https://www.nist.gov/itl/ai-risk-management-framework


AI is changing the threat landscape, but sound security principles still apply. lilMONSTER helps businesses build AI-resilient defenses through layered security, identity-first architecture, and practical incident response. Get in touch at https://consult.lil.business?utm_source=blog&utm_medium=post&utm_campaign=ai-cyberattack-surge-2026

The ELI10 Version

Prefer the plain-English take? Here's the same story, explained simply.

TL;DR

  • Bad guys are using AI robots to write fake emails that trick people
  • These emails look real and can fool anyone—even careful people
  • You can protect your business with special keys, good training, and smart computer defenses

What Are AI Hackers?

Imagine a robot that can write thousands of fake letters in one second. That's what AI hackers do—except they send fake emails instead of letters.

Bad people used to have to write these fake emails themselves. They made mistakes. They had bad spelling. They wrote things like "Dear Sir" instead of using your name. Most people could spot them easily.

Now bad guys use AI to write the emails for them. The AI spells everything perfectly. It uses your real name. It knows where you work. It can even write in your language perfectly. These fake emails are much harder to spot.

How Many More AI Attacks Are Happening?

A lot more. In 2025, there were 89% more AI attacks than in 2024 [1]. That means almost twice as many.

Think of it like this: if 10 bad guys tried to trick you last year, this year 19 bad guys might try. And each one of those bad guys can send thousands of tricky emails because their AI robot writes them all automatically.

Why Your Business Should Care

You might think: "I'm not a big company. Why would hackers target me?"

Here's the thing: AI makes it cheap and easy to target everyone. The bad guys set up their AI robot once, and it sends fake emails to 1,000 small businesses in the time it used to take to target just one big company.

Your business doesn't have to be famous to be a target. You just need to have email and money or information that bad guys want.

How AI Hackers Try to Trick You

The Perfect Fake Email

Let's say you run a bakery. An AI hacker's robot might:

  • Look at your website and learn you sell wedding cakes
  • Find your name on your "About Us" page
  • Write an email that says: "Hi Sarah! I saw your beautiful wedding cakes online. I'm planning my daughter's wedding and would love to order. Can you click this link to see my inspiration board?"

The email looks perfect. Good spelling. Your real name. References your actual business. But the link goes to a fake website that steals your password.

The Speed Problem

AI robots work super fast. They can:

  • Research your company in seconds
  • Write a fake email that sounds real
  • Send it to you and 1,000 other businesses
  • All before lunch

Human hackers can't work that fast. AI robots never get tired. They never take breaks. They keep going and going.

How to Protect Your Business

Use Special Keys (Not Just Passwords)

Passwords are easy to steal. Special keys that you plug into your computer or phone are much harder to steal. They're called security keys or passkeys.

Think of it like your house key. You can't tell someone your house key over the phone. They have to physically have the key. Security keys for computers work the same way—bad guys can't trick you into giving them up over email [2].

The "Double-Check" Rule

Here's a simple rule that stops almost every attack: if someone asks for something important over email, check with them a different way.

Example:

  • You get an email from your boss asking you to transfer money
  • Before you do it, call your boss (or walk to their office)
  • Ask: "Did you really send this email?"

If it's fake, your boss will say no. Problem solved.

This works because AI robots can trick your email, but they can't trick your phone call or face-to-face conversation.

Teach Your Team What to Look For

Most attacks succeed because someone clicks something they shouldn't. Teach your team:

  • If an email creates urgency ("ACT NOW!"), slow down and check
  • If an email asks for sensitive info (passwords, money), verify through another channel
  • If something feels even a little bit off, ask someone else to look at it

Get Help from Computer Defenders

Just like you have a lock on your front door, you need locks on your computer systems. These are special programs that:

  • Watch for weird behavior on your network
  • Block dangerous emails
  • Alert you when something seems wrong

Good computer defenses can detect AI attacks because they notice patterns that humans miss.

What Happens If You Get Attacked?

When bad guys break into a business's computers, they might:

  • Steal customer information (names, addresses, credit card numbers)
  • Lock your files and demand money to unlock them (called ransomware)
  • Read your private emails and documents
  • Pretend to be you and trick your customers

This costs businesses a lot of money—on average, about $4.88 million when it happens [3]. For a small business, that could mean going out of business.

The Good News

You don't need to be scared. You just need to be prepared.

Most attacks happen because of simple mistakes:

  • Someone clicks a link they shouldn't have
  • Someone uses a weak password
  • Someone doesn't have security protections turned on

Fix those things, and you're already safer than most businesses.

What You Can Do Right Now

Here's your action list:

  1. Turn on special security keys for important accounts (like email and banking)
  2. Make a rule: never send money or passwords without double-checking through another channel
  3. Install good computer security software
  4. Back up your files regularly (keep copies somewhere safe)
  5. Teach your team what to watch for

FAQ

Can AI read my emails?

Not unless you give it access. The AI hackers we're talking about use AI to write fake emails, not to read your real ones. But if someone tricks you into giving them your password, they can read whatever they want.

Do I need to be a computer expert to stay safe?

No. You need basic protections and smart habits. Think of it like locking your doors—you don't need to be a locksmith, you just need to use the lock.

Does AI mean the hackers will always win?

No. Security protections are getting better too. The key is using the right tools and following good practices. AI changes the threat, but good security still works.

How do I know if an email is fake?

Sometimes you can't tell just by looking. That's why the "double-check rule" works so well—if something important is being asked, verify through a different channel (phone call, in-person, different app).

Could this happen to my family, too?

Yes. Anyone with an email account can be targeted. That's why teaching kids about online safety early is so important—they'll face these threats for the rest of their lives.

What Can You Do?

Worried about AI-powered threats but don't know where to start? lilMONSTER helps businesses build practical defenses that work against AI-enhanced attackers. We focus on layered security, smart identity protection, and training that actually prepares your team for modern threats.

Get in touch: https://consult.lil.business?utm_source=blog&utm_medium=post&utm_campaign=ai-cyberattack-surge-eli10


References

[1] CrowdStrike, "Global Threat Report 2026," CrowdStrike, 2026. [Online]. Available: https://www.crowdstrike.com/en-us/blog/crowdstrike-2026-global-threat-report-findings/

[2] FIDO Alliance, "How Security Keys Work," FIDO Alliance, 2025. [Online]. Available: https://fidoalliance.org/how-fido-works/

[3] IBM Security, "Cost of a Data Breach Report 2025," IBM, 2025. [Online]. Available: https://www.ibm.com/reports/data-breach

[4] Google, "Advanced Protection Program," Google, 2025. [Online]. Available: https://www.google.com/advanced-protection

[5] National Cyber Security Centre, "Phishing Guidance," NCSC, 2025. [Online]. Available: https://www.ncsc.gov.uk/guidance/phishing

Ready to strengthen your security?

Talk to lilMONSTER. We assess your risks, build the tools, and stay with you after the engagement ends. No clipboard-and-leave consulting.

Get a Free Consultation