THE CXO BRIEF
What 5,000+ IT Leaders Are Thinking This Week
January 23, 2026
📖 5 min read
The Cisco advisory dropped Tuesday and I've spent the last 48 hours watching security Twitter melt down. Your voice infrastructure, your AI assistants, and your remote interview process all just became attack surfaces—and the common thread is uncomfortable: we've granted production-level trust to systems we don't yet know how to constrain.
📰 THREE STORIES THAT MATTER
1. Cisco Unified Communications Zero-Day Is Being Actively Exploited
Cisco patched a critical vulnerability (CVE-2026-20045) in its Unified Communications platform yesterday—but only after attackers had already weaponized it. The flaw allows unauthenticated remote code execution that escalates to root access. No workarounds exist.
CISA has added this to the Known Exploited Vulnerabilities catalog with a February 11 deadline for federal agencies. Affected products include Unified CM, IM & Presence, Unity Connection, and Webex Calling Dedicated Instance. Source: Cisco Security Advisory
🔐 CISO Take: Check exposure immediately. If you're running Unified CM or Unity Connection, this is a patch-now situation—attackers are already scanning. The root-access escalation makes this a worst-case scenario for lateral movement.
💻 CIO Take: If this advisory didn't come through your Cisco account team directly, document how you found out. That's signal—not noise—going into renewal.
⚙️ CTO Take: Map all web-management interfaces exposed to the network. This vulnerability hits the admin console, so even "internal only" deployments may be at risk if you've got any path from compromised endpoints to management ports.
📡 Network signal: Detection and response mentions have tripled since Q1 2025—and that's before adding "Unified Communications infrastructure" to the attack surface list. Voice systems weren't built as security targets. Now they are.
The bottom line: Active exploitation + no workarounds = patch by Friday or consciously accept the breach risk.
2. Google Gemini Prompt Injection Lets Attackers Steal Calendar Data via Meeting Invites
Security researchers at Miggo disclosed a vulnerability in Google Gemini that allowed attackers to exfiltrate private calendar data using nothing but plain language hidden in a meeting invite description. No malicious code required—just a carefully worded prompt that Gemini would later execute when answering routine questions.
Google has mitigated the flaw, but the implications are significant for any organization deploying AI assistants with access to enterprise data. The attack bypassed authorization controls through what researchers called "semantic manipulation"—instructions that looked plausible in isolation but became harmful when executed with the AI's permissions.
Nothing was "hacked" in the traditional sense—the assistant behaved exactly as designed, just with the wrong trust assumptions. Which means your internal AI tools are exposed to the same class of failure.
🔐 CISO Take: Add prompt injection to your threat model. Traditional security controls won't catch natural language attacks that look like legitimate calendar text. Start documenting which AI tools have access to what data—your audit committee will ask.
💻 CIO Take: Before rolling out AI assistants enterprise-wide, define the trust boundary. This exploit worked because Gemini had broad calendar access. Scope AI permissions like you scope human permissions—minimum necessary, reviewed quarterly.
⚙️ CTO Take: Build semantic input validation into your AI integration roadmap. Google's fix involved better reasoning about intent and data provenance. If you're building agents that interact with external content, you need the same.
📡 Network signal: One VP/CISO in our network this week: "We have a project in place, with budget allocated, for a tool that can track AI assistant usage and enforce guardrails around that usage." That project just got more urgent.
The bottom line: Your AI assistant trusts external content. That's a vulnerability, not a feature.
3. Experian Warns: Deepfake Job Candidates Will Infiltrate Remote Workforces in 2026
Experian's annual fraud forecast predicts that generative AI tools will enable "deepfake candidates capable of passing interviews in real time." The report warns employers will unknowingly onboard individuals who aren't who they say they are—giving bad actors direct access to sensitive systems from day one.
This matters now because identity is becoming the primary control plane—and hiring is the least instrumented identity decision most enterprises make.
This isn't theoretical. The FBI has already warned about voice-cloning scams targeting enterprise authentication, and the $25M Hong Kong deepfake CFO fraud case demonstrated how convincing real-time video manipulation has become. The Experian report identifies this as part of a broader trend: "Deepfake-as-a-Service" platforms making sophisticated impersonation accessible to anyone. Source: Experian
🔐 CISO Take: Work with HR to add identity verification steps that don't rely solely on video or voice. Background checks, credential verification, and in-person onboarding for sensitive roles are no longer paranoia—they're risk management.
💻 CIO Take: Review your remote onboarding process. If a new hire gets system access before ever meeting anyone in person, you've got a hole. Consider extending your probation period controls for privileged access.
📡 Network signal: Human risk mentions in member priorities have increased 6x since Q2 2025. What's changed isn't awareness—it's the gap between training compliance and actual behavior change. Leaders are realizing the checkboxes aren't working.
The bottom line: Your next hire might be a deepfake. Your verification process needs to assume that.
🎯 FOUR THINGS TO DO THIS WEEK
Patch Cisco Unified Communications — CVE-2026-20045 is being actively exploited. CISA deadline is February 11, but you should treat this as urgent, not theoretical.
Audit AI assistant permissions — Document which AI tools have access to what data. If you can't answer that in five minutes, you've found your first project.
Add video/voice verification bypass to your onboarding threat model — Talk to HR about adding verification steps that don't rely on real-time video alone.
Ask your vendors one question — "How would you notify us if you discovered active exploitation of your product?" The Cisco situation is a good prompt for that conversation.
📊 FROM THE NETWORK
This week's signal: Human risk isn't failing because people won't comply—it's failing because security teams can't observe or intervene in real behavior at scale.
Human risk mentions in member priorities increased 6x from Q2 to Q4 2025 (1.0% → 6.2%). Alert fatigue and triage mentions have tripled in the past quarter alone. Meanwhile, detection and response needs continue climbing, with MDR-related submissions hitting 12.6%—triple where they were a year ago.
"We are looking to improve our end user behaviour. We have security awareness training but follow up and actual behavioural change is lacking." — CISO, Global Education Company
"We suffer from serious alert fatigue and want to use tools and AI to simplify response." — Senior Director ITIO & Cybersecurity, Public Institution
Two quotes. Same theme. The organizations getting ahead are shifting from compliance checkboxes to continuous behavioral measurement—treating human risk like any other risk vector: with data, not hope.
💬 YOUR TURN
What's your biggest security headache heading into Q1? AI governance? Human risk? Vendor fatigue? Reply and tell me—I read every response.
JOIN THE NETWORK
The CXO Brief is published by DoGood, a network of 5,000+ IT leaders at companies averaging $24B in revenue.
How it works: You control your calendar. Every meeting is opt-in, pre-screened, and paid ($200-400).
The data in this newsletter comes from real conversations with real IT leaders—not surveys, not analysts, not vendors.
Subscribe if someone forwarded this to you.
