THE CXO BRIEF

What 5,000+ IT Leaders Are Thinking This Week

December 26, 2025

📖 4 min read

Your safety nets are fraying. A major insurer disclosed 22.6 million victims from a social engineering attack. CISA's ransomware watchdog resigned. And Microsoft decided your users need guardrails whether you asked or not. I spent Christmas Eve reading breach disclosures so you didn't have to.

The pattern across all three stories: Security models that depend on humans behaving correctly — analysts watching dashboards, help desks verifying callers, users spotting phish — are breaking under scale. This week, controls got enforced by vendors and attackers alike. Your job is to decide which ones you're choosing deliberately.

📰 THREE STORIES THAT MATTER

1. CISA's Ransomware Early Warning Program Is Now a Single Point of Failure

The Pre-Ransomware Notification Initiative sent over 2,100 warnings in 2024 and reportedly prevented $9 billion in economic damage. It ran on one person. That person resigned December 19 after being reassigned to FEMA.

The program's value came from trusted relationships with threat intel sources built over years — relationships that don't transfer with a job change. CISA says the program continues, but the infrastructure was never institutionalized. No redundancy. No documented handoff. One resignation, and years of relationship capital disappear. Source: Cybersecurity Dive

This isn't a personnel story. It's a systems story. Early warning programs built on individual relationships don't scale — and whatever coverage your organization was getting from this one may now be materially weaker.

🔐 CISO Take: If your threat intel strategy includes "wait for CISA to call," that's no longer a strategy. Confirm your sector ISAC membership is active and someone on your team actually triages the alerts.

💻 CIO Take: Ask your CISO this week: "What's our early warning coverage if government programs go dark?" If the answer requires a specific person or agency, you have a gap.

📡 Network signal: CISA sent 2,100+ warnings in 2024. If your organization wasn't on that list, you were relying on luck — and that luck just got thinner.

The bottom line: External safety nets are brittle. Assume you're on your own.

2. Aflac Breach Hits 22.6 Million — And It Started With a Phone Call

Aflac disclosed this week that the June breach affected 22.65 million people — nearly half its customer base. Stolen data includes Social Security numbers, medical records, government IDs, and passports. Federal investigators linked the attack to Scattered Spider, the same criminal group behind the MGM and Caesars breaches, which has been targeting insurers.

Erie Insurance and Philadelphia Insurance Companies were hit around the same time, suggesting a coordinated campaign. The entry point across all of them: social engineering. Specifically, phone calls to help desks requesting credential resets. No malware. No zero-days. Just humans trusting other humans. Source: TechCrunch

This attack class doesn't exploit software. It exploits the gap between identity verification policy and identity verification practice.

🔐 CISO Take: Audit your help desk's identity verification procedures this week. If reset requests are approved based on information available on LinkedIn or in an HR directory, your process is already compromised.

💻 CIO Take: Check whether any of your critical vendors — especially insurers and benefits providers — have disclosed breaches in the past 6 months. If Aflac handles your employee benefits, plan as if exposure is possible until you confirm otherwise.

⚙️ CTO Take: The detection gap here is help desk abuse: anomalous credential resets that look like routine tickets. If your identity stack can't flag unusual MFA enrollment patterns, impossible travel, or spikes in password reset tickets, that's where to focus.
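
Of those signals, the reset-ticket spike is the quickest check to stand up. A minimal sketch in Python, assuming you can export help-desk-initiated reset events with a date field (the record layout and field names below are illustrative assumptions, not any particular ITSM or identity provider's schema):

    from collections import Counter
    from datetime import date
    from statistics import mean, stdev

    # Hypothetical export: one record per password-reset event.
    # "when" and "channel" are illustrative field names, not a real tool's schema.
    reset_events = [
        {"when": date(2025, 12, 18), "channel": "help_desk_call"},
        {"when": date(2025, 12, 19), "channel": "self_service"},
        # ... load the rest from your ITSM or identity provider export
    ]

    def daily_counts(events):
        """Count help-desk-initiated resets per day."""
        counts = Counter(e["when"] for e in events if e["channel"] == "help_desk_call")
        return dict(sorted(counts.items()))

    def flag_spikes(counts, z_threshold=3.0):
        """Flag days where reset volume sits well above the historical baseline."""
        values = list(counts.values())
        if len(values) < 7:  # not enough history to establish a baseline
            return []
        baseline, spread = mean(values), stdev(values)
        return [(day, n) for day, n in counts.items()
                if spread > 0 and (n - baseline) / spread >= z_threshold]

    for day, n in flag_spikes(daily_counts(reset_events)):
        print(f"Review {day}: {n} help-desk resets vs. baseline")

The same shape of check works per account (repeated MFA re-enrollments) and per agent (one technician approving an unusual share of resets); these signals are already sitting in your ticketing and identity logs.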

📡 Network signal: One CISO this month: "For 2026 I am tasked by my Board with a tech solution due to social engineering concerns." Scattered Spider just proved why — 22.6 million records from a campaign that used phones, not code.

The bottom line: This wasn't a hack. It was a process failure at scale. Secure your help desk.

3. Microsoft Is Enforcing Security Defaults Because You Didn't

Starting January 12, 2026, Microsoft will automatically enable messaging safety features in Teams for any tenant that hasn't modified default settings. Weaponizable file type blocking. Malicious URL detection. User reporting for false positives. If you never touched the settings, Microsoft is about to touch them for you.

This is vendor-enforced security. Microsoft is making a bet that most organizations won't configure these protections themselves — so they're flipping the switch. The company framed it as "improving messaging security by enabling key safety protections by default." Read that again: by default, because asking didn't work. When human-dependent security doesn't scale, vendors step in. Source: BleepingComputer

💻 CIO Take: Open Teams admin center before January 12. Navigate to Messaging > Messaging settings > Messaging safety. Decide whether Microsoft's defaults work for you — or take control before they do.

⚙️ CTO Take: If you have workflows involving macros, scripts, or executables shared via Teams, test them in a dev tenant now. Find the breakage before your users do.
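
One way to scope that testing is to inventory which file types actually flow through your channels today. A rough sketch against Microsoft Graph, assuming you already hold an access token with file-read permissions and know the team and channel IDs (the placeholders are hypothetical, and the extension watchlist is illustrative, not Microsoft's actual block list):

    from collections import Counter
    from pathlib import PurePosixPath

    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    TOKEN = "<access-token>"      # placeholder: token with file-read permission
    TEAM_ID = "<team-id>"         # placeholder
    CHANNEL_ID = "<channel-id>"   # placeholder

    def get(url):
        resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
        resp.raise_for_status()
        return resp.json()

    def channel_file_extensions(team_id, channel_id):
        """Tally file extensions at the top level of a channel's Files folder."""
        folder = get(f"{GRAPH}/teams/{team_id}/channels/{channel_id}/filesFolder")
        drive_id, item_id = folder["parentReference"]["driveId"], folder["id"]
        children = get(f"{GRAPH}/drives/{drive_id}/items/{item_id}/children")
        return Counter(
            PurePosixPath(item["name"]).suffix.lower()
            for item in children.get("value", [])
            if "file" in item
        )

    # Illustrative watchlist of extensions most likely to be caught by
    # executable/script blocking; not Microsoft's actual list.
    watchlist = {".exe", ".msi", ".js", ".vbs", ".ps1", ".bat", ".cmd"}
    for ext, count in channel_file_extensions(TEAM_ID, CHANNEL_ID).items():
        flag = "  <-- test this workflow" if ext in watchlist else ""
        print(f"{ext or '(no extension)'}: {count}{flag}")

This only looks at one channel's top-level Files folder; repeat per channel (or recurse into subfolders) before deciding which integrations need a dev-tenant run.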

📡 Network signal: Across the network, leaders keep saying the same thing: "User awareness and training continues to be a struggle." Microsoft heard it too. This is what happens next.

The bottom line: Review Teams settings by January 12 — or let Microsoft decide for you.

🎯 THREE THINGS TO DO THIS WEEK

  1. Pressure-test your help desk verification. Call your own help desk. Request a password reset using only information you could find on LinkedIn. If it works, fix it now, before an attacker runs the same test.

  2. Check your Teams admin settings before Microsoft changes them. If you've never touched messaging security, you have until January 12 to decide your own configuration. After that, it's Microsoft's call.

  3. Plan for possible vendor breach exposure. If Aflac, Erie, or Philadelphia Insurance touches your employee data, treat exposure as possible until confirmed otherwise. Don't wait for a notification letter to start monitoring.

📊 FROM THE NETWORK

This week's signal: Training isn't scaling. Leaders are shifting to technical controls — not because they want to, but because they have to.

"Would love to understand how you can measure risky behavior and deploy interventions automatically. We have a growing number of endpoints and user awareness and training continues to be a struggle. Introducing technical controls is the goal." — CIO, Higher Education Institution

This CIO isn't alone. Across the network, we're hearing the same pivot: from awareness programs to automated enforcement. Another leader put it more bluntly — their board mandated a "tech solution for social engineering" as a 2026 priority. Not more training. A tech solution.

The Aflac breach is exactly what these leaders are trying to prevent. With automation mentions up more than 3x over the past year in network submissions, the message is clear: human-dependent security isn't scaling.

💬 YOUR TURN

What's your help desk verification process? Reply and tell me — I'm curious how different organizations are handling the social engineering threat.

JOIN THE NETWORK

The quotes above come from IT leaders in the DoGood network — 5,000+ CIOs, CISOs, and CTOs who share what they're actually working on (not what vendors want them to say).

You control your calendar. Every meeting is opt-in, pre-screened, and paid.

Was this forwarded to you? Subscribe here to get The CXO Brief every week.
