Cybercriminals are increasingly using artificial intelligence to make phishing attacks faster, more personal, and harder to spot. New research from cybersecurity company
KnowBe4 shows that 86% of phishing attacks are now supported or fully driven by AI.
That’s according to the new Phishing Threat Trends Report Volume Seven, based on an analysis of more than 3,000 threat actors. The data shows phishing is rapidly evolving from simple spam blasts to sophisticated attacks that hit multiple communication platforms at once.
The shift matters as companies become more reliant on digital collaboration tools like Microsoft Teams, calendar apps, and real-time chat. Attackers are deliberately moving away from traditional email.
AI makes phishing more personal—and at scale
AI is fundamentally changing phishing. Where phishing emails used to be riddled with typos and sent en masse, attackers can now generate credible, personalized messages using publicly available information.
According to the researchers, with generative AI cybercriminals can:
- craft unique phishing messages for each target
- mimic a colleague’s writing style and tone of voice
- deploy deepfake audio or video during conversations
- automatically generate attacks at massive scale
KnowBe4 estimates AI-driven phishing campaigns are now up to seven times more efficient than traditional methods.
A notable development is so-called polymorphic phishing, where email content changes continuously and automatically, making it harder for security tools to detect patterns. Traditional spam filters rely on recognizable markers or reused text, while AI can make every attack unique.
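To see why unique-per-target content defeats signature matching, consider the simplest possible content filter: one that fingerprints known-bad message bodies with a hash. The sketch below (illustrative only; the messages and blocklist are made up) shows that even a light AI rewording produces an entirely new fingerprint, so the blocklist never fires.

```python
import hashlib

def fingerprint(message: str) -> str:
    """Simplest possible content signature: a hash of the full message body."""
    return hashlib.sha256(message.encode("utf-8")).hexdigest()

# Two AI-reworded variants of the same lure, personalized per recipient.
variant_a = "Hi Sam, your mailbox quota is full. Review your storage here."
variant_b = "Hello Dana, you've run out of mailbox space. Check your storage now."

# A blocklist built from the first variant never matches the second:
# every rewritten variant carries a brand-new signature.
blocklist = {fingerprint(variant_a)}
print(fingerprint(variant_b) in blocklist)  # False
```

Real filters use fuzzier matching than an exact hash, but the underlying problem is the same: pattern-based detection needs reused patterns, and AI removes the reuse.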
Cybercriminals are leaving the inbox
The report also shows attackers are increasingly operating outside of email.
Phishing via calendar invites jumped 49% in six months, according to KnowBe4. Attackers send malicious .ics files that automatically add events to digital calendars. Even without opening an email, a suspicious meeting can appear in Outlook or Google Calendar.
Researchers call this “default trust”: users inherently trust calendars more than email. As a result, employees click links or accept invites faster.
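To make the mechanism concrete, here is a hedged sketch of what such an invite payload looks like. An .ics file is plain text in the iCalendar format; the METHOD:REQUEST property marks it as a meeting request, which is what prompts some calendar clients to surface the event automatically. All addresses and links below are placeholders.

```python
# A minimal iCalendar (.ics) meeting invite, built as a plain string.
# METHOD:REQUEST tells the receiving client to treat this as an invitation;
# some clients then show a tentative event without any user action.
# All names, addresses, and URLs are illustrative placeholders.
ics_invite = "\r\n".join([
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "METHOD:REQUEST",
    "BEGIN:VEVENT",
    "UID:example-1234@example.invalid",
    "DTSTART:20250601T140000Z",
    "DTEND:20250601T143000Z",
    "SUMMARY:Quarterly budget review",
    # In a phishing invite, the lure link sits in the description or location.
    "DESCRIPTION:Agenda: https://example.invalid/agenda",
    "ORGANIZER:mailto:finance@example.invalid",
    "END:VEVENT",
    "END:VCALENDAR",
])
print("METHOD:REQUEST" in ics_invite)  # True
```

Nothing in the file itself is malicious; the risk is the trusted context in which the link appears once the event lands in a calendar.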
Microsoft Teams is being abused more often, too. Phishing attacks via Teams rose 41% over the same six-month period, the study says.
Security experts cite several reasons:
- Teams conversations feel more informal
- messages are short and rapid
- employees respond quickly without thorough checks
- attackers can build trust gradually
Because collaboration tools are embedded in daily workflows, they’ve become prime targets.
Multichannel phishing is becoming the default
The biggest shift, the report notes, is the rise of multichannel attacks—where cybercriminals mix multiple communication channels in a single operation.
A victim might first receive a phishing email, then a Teams message from a fake colleague account referring to the same request. Repeating the same story across channels creates a false sense of legitimacy.
That makes these attacks harder to detect.
According to Jack Chapman, SVP Threat Intelligence at KnowBe4, the inbox is no longer the only frontline of social engineering. Attackers are increasingly targeting real-time collaboration tools and digital calendars.
What this means for businesses
The numbers make it clear: traditional cybersecurity measures aren’t enough. Many organizations still focus heavily on email filters, while attacks now spread across multiple platforms at once.
This calls for broader security strategies, including:
- monitoring collaboration tools
- stricter verification of internal requests
- training on AI-generated phishing
- securing calendar and chat environments
- detecting deepfakes and impersonation attempts
Behavioral analytics is also becoming more important. Because AI phishing relies less on recognizable malware or fixed patterns, detection is shifting toward anomalous user behavior and contextual signals.
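As a rough sketch of what behavior-based detection can look like (a deliberately simple z-score baseline; commercial products use far richer models and the metric here is invented for illustration), the idea is to flag activity that deviates sharply from a user's own history:

```python
import statistics

def is_anomalous(history: list[float], latest: float, threshold: float = 3.0) -> bool:
    """Toy z-score detector: flag `latest` if it sits more than `threshold`
    standard deviations from the user's own historical baseline."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# Hypothetical baseline: external links an employee clicks in chat per day.
baseline = [0, 1, 0, 2, 1, 0, 1, 1, 0, 2]
print(is_anomalous(baseline, 1))   # False: within the user's normal range
print(is_anomalous(baseline, 14))  # True: sudden spike worth reviewing
```

The design choice is the point: instead of asking "does this message match a known-bad pattern?", the detector asks "is this behavior normal for this user?", which still works when every phishing message is unique.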
AI ratchets up pressure on cybersecurity
The rise of AI-powered phishing fits a broader trend: artificial intelligence is empowering both attackers and defenders.
Cybersecurity firms are pouring investment into AI systems that analyze suspicious communications in real time. Meanwhile, criminals are using the same tech to build more convincing attacks at lower cost.
The result is a technological arms race between security vendors and cybercriminals.
Large organizations with sprawling communication channels face the highest risk. The more platforms employees use daily, the bigger the attack surface becomes.