The Signal

Three weeks ago, AI submissions in the network favored policy words (policy, governance, shadow) over control words (inventory, visibility, machine identity, access reviews) 11-to-6. This week the count flipped: 5 policy to 6 control.

A Minerals & Mining Director of IT Security & Risk Management: "AI agents are exploding this year at my company, and I would like to understand how your platform may be able to provide some visibility and guardrails around them." A Software Senior Manager of Cyber Security: "We would like to have a consolidated overview of the inventory of the AI agents and ensure we have the appropriate security controls implemented." A Software IT Director: "The rapid growth of machine and AI identities has left my team buried in manual access reviews and slow provisioning."

The buyers stopped asking how to govern AI. They started asking where it is, what it can reach, and who is reviewing its access.

From the Network

Three quotes from three industries, each pointing at a different control gap.

"Looking for a tool that gives us unified visibility into where sensitive data lives, who/what can access it, and how AI tools might expose it. As we scale copilots/LLMs, we need a platform that keeps us compliant, reduces risk, and identifies gaps."

— Sr Director IT & Analytics, Finance

"Looking for a way to get visibility into end user AI usage and file uploads to unauthorized AI services."

— Manager of Information Security, Law Firms & Legal Services

"We have multiple AI flavors working and need to explore options for safety and security."

— Senior Vice President & CISO, Business Services

Three layers of the same problem: where the data goes, what the users send, what the apps can do. None of these requests is solved by a written policy. All three need a tool that can see the agent and stop it, not a document that says the agent should not do that.

Top Open Priorities This Week

Two raw asks pulled directly from member submissions in the last 14 days, unedited:

"Looking for help managing AI use and privileged access in our environment."

— Chief Information Officer, Leisure

"We are currently relying on standard MS tools for IGA, but they are struggling to handle our complex hybrid setup, especially since the rapid growth of machine and AI identities has left my team buried in manual access reviews and slow provisioning."

— IT Director, Software

Both members are stuck on the identity layer: human-AI privileged access on one side, agent provisioning and review on the other. If your stack cannot answer both questions in one query, close that gap this quarter.

New to the Network

Two dozen IT leaders joined the DoGood network in April. The senior cohort included the Global CIO at Omnicom Advertising, the SVP/CIO at Valleywise Health, the VP & CIO at Villanova University, the VP/CTO at WellStar Health System, the CIO/CTO at HNI Healthcare, the Divisional CIO/CISO at Quest Diagnostics, the VP for Tech & Cyber Governance at LPL Financial, the VP of Cybersecurity at Travelers, the CISO at NOVOLEX, and the SVP for Strategic Relationship Management at Worley.

The cohort spans 14 industries. Healthcare and hospitals is the densest single cluster, 6 of 24.

Powered by the DoGood network

The data in this issue came from priority submissions by 5,000+ enterprise IT leaders. If you run IT or security at a $100M+ company, apply to join DoGood to see what your peers are funding and earn rewards for participating in vetted meetings with the vendors worth your time.

The Context

The headlines are catching up to what the network already knew. GitGuardian's State of Secrets Sprawl 2026 report, released April 14, found 28.6 million leaked secrets in public GitHub commits last year, a 34% year-over-year jump. AI-service credentials grew faster than any other category: 1.27 million leaked secrets tied to AI services in 2025, up 81% from the prior year. Secret leaks inside AI-assisted code ran at roughly double the GitHub-wide baseline. GitGuardian also flagged 24,008 unique secrets exposed in MCP-related config files alone, 2,117 of them still live: a new exposure surface created entirely by the agent stack.
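Those MCP numbers suggest a cheap first check you can run yourself: grep your agent config files for anything shaped like a credential. A minimal sketch, assuming JSON-style config files under a directory you name; the regex patterns below are illustrative key shapes, not GitGuardian's actual detection rules, and a real scanner (GitGuardian, trufflehog, gitleaks) applies hundreds of provider-specific rules plus validity checks:

```python
import re
from pathlib import Path

# Illustrative patterns only -- rough shapes of common credential formats.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style key shape
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access-key-ID shape
    re.compile(r'(?i)(api[_-]?key|token)"?\s*[:=]\s*"?[A-Za-z0-9_\-]{16,}'),
]

def scan_config_text(text: str) -> list[str]:
    """Return every substring of `text` that matches a secret pattern."""
    hits: list[str] = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits

def scan_configs(root: str) -> dict[str, list[str]]:
    """Walk `root` for *.json config files and report suspected secrets."""
    findings: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.json"):
        hits = scan_config_text(path.read_text(errors="ignore"))
        if hits:
            findings[str(path)] = hits
    return findings
```

A pass like this produces false positives by design; the point is to surface candidates for a human to rotate or clear, not to adjudicate validity.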

Bottom Line: Inventory is not a governance project. It is a credential-loss-prevention project.

What to Do About It

This week, pull two lists. First: every AI agent, copilot, and LLM integration currently running in your environment. Second: every standing service account, API key, or OAuth token an AI tool currently holds. For each row in either list, name the human owner, the data scope, and the date of the last access review. The rows where any of those three are blank are the Q3 program.
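The audit above is a join-and-filter: one pass over both lists, flagging any row missing an owner, a data scope, or a review date. A minimal sketch, assuming each inventory row is a dict with hypothetical `owner`, `data_scope`, and `last_review` fields (the field names are illustrative, not from any particular tool):

```python
# Fields that must be filled in for every agent and every standing credential.
REQUIRED_FIELDS = ("owner", "data_scope", "last_review")

def q3_gaps(rows: list[dict]) -> list[dict]:
    """Return the rows where the human owner, data scope, or last
    access-review date is blank -- i.e. the Q3 remediation backlog."""
    return [
        row for row in rows
        if any(not row.get(field) for field in REQUIRED_FIELDS)
    ]
```

Run it over the concatenation of both lists; a row with an empty string or missing key in any required field lands in the backlog:

```python
inventory = [
    {"name": "sales-copilot", "owner": "jdoe",
     "data_scope": "CRM", "last_review": "2025-03-01"},
    {"name": "ops-agent", "owner": "",
     "data_scope": "prod-db", "last_review": None},
]
backlog = q3_gaps(inventory)  # only "ops-agent" qualifies
```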

Where does the AI vocabulary in your stack fall: still 11-to-6 policy-to-control, or already flipped? The network flipped this month.

The CXO Brief is powered by the DoGood network, 5,000+ IT leaders sharing what they are actually working on.

Know a CIO who needs this? Forward it and they can subscribe here.

Enterprise IT leader at a $100M+ company? Apply to join DoGood.
