THE CXO BRIEF
What 5,000+ IT Leaders Are Thinking This Week
January 9, 2026
📖 5 min read
AI agents are everywhere: in your SOC, in your vendors' products, in your employees' personal accounts. This week, NIST asked the public for help securing them. That's not a sign of progress. That's an admission that we're deploying agents faster than we understand them.
📰 THREE STORIES THAT MATTER
1. Shadow AI Data Leaks Double Year-Over-Year
Nearly half of employees using generative AI in the workplace are still doing so through personal accounts outside IT's visibility, according to a new report from Netskope. The average company now experiences 223 incidents per month of sensitive data being sent to AI apps, double the rate from a year ago.
Some progress: personal AI usage dropped from 78% to 47%, and enterprise-approved account usage jumped from 25% to 62%. But 9% of users now switch between personal and corporate accounts (up from 4%), which tells you sanctioned tools work, just not well enough to replace personal ones at the moment of need.
🔐 CISO Take: If you're not blocking or monitoring GenAI endpoints at the network level, start. Deploy DLP that can inspect prompts in real time. The 223-incidents-per-month stat is an average—your number might be worse.
💻 CIO Take: The gap between personal and enterprise AI usage is a feature gap, not a discipline problem. If you don't know why employees prefer personal AI, you don't have an AI strategy—you have an approval process.
⚙️ CTO Take: API connections between external AI services and internal systems are the new shadow IT. Audit OAuth grants and integrations monthly, not quarterly.
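What that monthly audit can look like, as a minimal sketch assuming Google Workspace as your identity provider (the service-account file, admin address, and output are placeholders; other IdPs expose equivalent APIs):

    # Sketch: enumerate third-party OAuth grants across a Google Workspace domain.
    # Assumes a service account with domain-wide delegation and the Directory API
    # security scope; "sa.json" and the admin email below are placeholders.
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    SCOPES = [
        "https://www.googleapis.com/auth/admin.directory.user.readonly",
        "https://www.googleapis.com/auth/admin.directory.user.security",
    ]
    creds = service_account.Credentials.from_service_account_file(
        "sa.json", scopes=SCOPES
    ).with_subject("admin@example.com")
    directory = build("admin", "directory_v1", credentials=creds)

    page_token = None
    while True:
        users = directory.users().list(
            customer="my_customer", pageToken=page_token
        ).execute()
        for user in users.get("users", []):
            email = user["primaryEmail"]
            grants = directory.tokens().list(userKey=email).execute()
            for grant in grants.get("items", []):
                # Review anything that can act on user data on the user's behalf.
                print(email, grant.get("displayText"), grant.get("scopes"))
        page_token = users.get("nextPageToken")
        if not page_token:
            break

Run it monthly and diff the output. New grants to unfamiliar AI apps are your shadow integrations.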
📡 Network signal: Nearly 1 in 4 priority submissions from our network in the last 30 days mention AI. The adoption wave hasn't crested—it's still building.
The bottom line: Employees will use AI. The question is whether you see the data flowing out.
2. NIST Asks for Help Securing AI Agents
The National Institute of Standards and Technology issued a Federal Register notice this week requesting public input on how to secure AI agents. NIST's Center for AI Standards and Innovation (CAISI) wants "concrete examples, best practices, case studies, and actionable recommendations" for managing agentic AI risks.
The RFI specifically calls out hijacking, backdoor attacks, and "the risk that the behavior of uncompromised models may nonetheless pose a threat to confidentiality, availability, or integrity"—including models that "pursue misaligned objectives." Traditional security controls assume damage starts with a compromise. AI agents break that assumption: an uncompromised agent can cause harm while behaving exactly as designed.
🔐 CISO Take: When NIST asks for help, it means the market has already shipped something ungovernable. Document your AI agent deployments now. Every service account, every automation script, every embedded AI feature. You can't secure what you haven't inventoried.
💻 CIO Take: This is a governance signal. Start adding AI-specific questions to your vendor security reviews: How do AI agents authenticate? What data can they access? Where's the audit log? Three questions, every review.
⚙️ CTO Take: Principle of least privilege isn't optional for agents. If an AI tool can write to prod, that's a design flaw. Sandbox first, validate, then expand access only after demonstrating safe behavior.
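Here's the pattern in miniature, sketched in plain Python rather than any particular agent framework's API; the tool names are hypothetical:

    # Illustrative deny-by-default gate for agent tool calls. Write access
    # simply does not exist until someone explicitly grants it.
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("agent-gate")

    # Start read-only. Expand this set only after observing safe behavior.
    ALLOWED_TOOLS = {"read_ticket", "search_docs"}

    def call_tool(name, handler, *args, **kwargs):
        """Run an agent-requested tool only if it is explicitly allow-listed."""
        if name not in ALLOWED_TOOLS:
            log.warning("blocked tool call: %s", name)
            raise PermissionError(f"tool '{name}' is not allow-listed for this agent")
        log.info("tool call: %s", name)  # audit trail for every permitted call
        return handler(*args, **kwargs)

    # A write path stays blocked even if the agent requests it:
    # call_tool("update_prod_config", some_handler)  -> PermissionError

The design choice matters more than the code: the agent's default is no access, and every expansion is an explicit, logged decision.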
📡 Network signal: One member this week asked how to "get a handle on how we manage security from the beginning, including the AI agents" before adoption explodes in 2026 (full quote below).
The bottom line: The market shipped AI agents before anyone knew how to govern them. Start inventorying.
3. Non-Human Identities Are the New Attack Surface
Half your "employees" may already be machines. A new report from ConductorOne found that 51% of organizations now consider non-human identity (NHI) security as critical as human identity management. The problem: bots, AI agents, service accounts, and automation scripts typically operate outside traditional IAM systems, running on static credentials and over-permissioned access.
Attackers know this. NHIs are no longer edge cases. They're becoming the majority of "users" in many environments, and they're governed like infrastructure, not like identities.
🔐 CISO Take: Audit your service accounts this quarter. How many have standing admin access? How many use static credentials? Each one is a golden ticket waiting to be stolen.
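A starting point for that audit, assuming AWS IAM (the 90-day threshold is illustrative, and checks beyond the AdministratorAccess managed policy are omitted for brevity):

    # Sketch: flag IAM users with old static access keys or standing admin access.
    # Assumes AWS credentials are already configured for this account.
    from datetime import datetime, timezone
    import boto3

    iam = boto3.client("iam")
    MAX_KEY_AGE_DAYS = 90  # illustrative rotation threshold

    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            # Static credentials: active access keys past the rotation threshold.
            for key in iam.list_access_keys(UserName=name)["AccessKeyMetadata"]:
                age = (datetime.now(timezone.utc) - key["CreateDate"]).days
                if key["Status"] == "Active" and age > MAX_KEY_AGE_DAYS:
                    print(f"{name}: active key {key['AccessKeyId']} is {age} days old")
            # Standing admin: AdministratorAccess attached directly to the user.
            for pol in iam.list_attached_user_policies(UserName=name)["AttachedPolicies"]:
                if pol["PolicyName"] == "AdministratorAccess":
                    print(f"{name}: has standing AdministratorAccess")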
💻 CIO Take: This is an IAM expansion project, not a security project. If your IAM team isn't already planning to extend governance to NHIs, schedule that conversation this week.
The bottom line: If your IAM system doesn't see your bots, neither do you.
🎯 FOUR THINGS TO DO THIS WEEK
Run a shadow AI audit. Use DLP or network logs to identify which GenAI endpoints employees are hitting. If you can't see it, you can't control it. (A starter sketch follows this list.)
Inventory your AI agents. Every bot, service account, and automation script with network access. Document what data they touch and what permissions they have. If you can only do one thing this quarter, do this one. Shadow usage is noisy. Agents are durable risk.
Add three AI questions to vendor reviews. How do AI agents authenticate? Can you scope data access? Where's the audit log?
Check your NHI credentials. Which service accounts use static credentials? Flag any with standing admin access for rotation and scoping.
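For the shadow AI audit above, a minimal log-parsing sketch; the CSV layout and domain list are assumptions, so substitute your proxy or DNS export format and a fuller endpoint list:

    # Sketch: count hits to known GenAI endpoints in a proxy/DNS log export.
    # The file name, column layout, and domain list are all assumptions.
    import csv
    from collections import Counter

    GENAI_DOMAINS = {
        "chat.openai.com", "api.openai.com", "claude.ai",
        "gemini.google.com", "copilot.microsoft.com",
    }

    hits = Counter()
    with open("proxy_log.csv", newline="") as f:
        for row in csv.DictReader(f):  # expects columns: timestamp, user, domain
            if row["domain"] in GENAI_DOMAINS:
                hits[(row["user"], row["domain"])] += 1

    for (user, domain), count in hits.most_common(20):
        print(f"{user} -> {domain}: {count} requests")

Even this crude version answers the first question: who is sending what, and where.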
📊 FROM THE NETWORK
The signal this week: AI governance is no longer a 2027 problem.
Nearly 1 in 4 priority submissions from our network in the last 30 days mention AI or automation. The risk isn't bad decisions—it's invisible ones.
"As we ramp up our use of AI, I would like to get a handle on how we manage security from the beginning, including the AI agents as popularity is set to explode in 2026." — Director, IT Security & Risk Management, Minerals & Mining Company
💬 YOUR TURN
Which of these would you fail an audit on today? Reply and tell me.
JOIN THE NETWORK
5,000+ IT leaders. Vetted vendor meetings. Your time, your terms.
Every meeting is opt-in, pre-screened, and paid. You control your calendar.
The CXO Brief is published by DoGood. Unsubscribe anytime.
