While Your AI Adoption Committee Is Meeting, Your Developers Have Already Decided
The choice isn’t “AI agents vs. no AI agents.” It’s “AI agents you know about vs. AI agents you don’t.”
I was heading out the door when I stopped to talk to four developers.
I’d just wrapped a meeting with leadership of a Fortune 100 company. It was a good one—the kind where they ask the right questions about where they want to take the organization. On the way out, I ended up in a loose, post-meeting cluster near the exit. I asked the developers what tools they were working with day-to-day.
One was running Devin on autonomous coding tasks. Another had set up Claude Code in the CLI and was letting it write, test, and iterate on entire modules without much intervention. A third had handed his GitHub account over to his OpenClaw agent, which was picking up the issues his PM assigned and working through PRs he didn’t even see until it was time to review them. The last developer was using an early-access agent I hadn’t seen yet.
That’s four developers, and four different always-on agents—each of which is self-configured and operating entirely outside IT visibility.
Upstairs, their leadership had just told me they were starting a committee to decide which AI tools to eventually roll out to the developer team. The developers had already moved on without them.
This is not a scrappy startup where shadow IT is a feature. This is an organization with security requirements, compliance obligations, and real exposure when proprietary code ends up somewhere it shouldn’t. And right now, they have no idea what autonomous agents their developers are running, what access those agents have been granted, or what decisions they’re making inside the company’s systems.
The decision to “not decide yet” created exactly the conditions they were trying to avoid.
Why does this happen? It’s not stupidity; I’ve seen smart, careful leaders fall into this exact trap.
AI agent adoption feels different from other technology decisions. Unlike passive tools that suggest the next line of code, autonomous agents write code, run tests, execute commands, and push changes on their own. The use cases are still being mapped, and the security models are still being written. There’s real risk that needs to be managed: vetting model providers, data handling, access scoping, audit trails. A committee feels like the right call.
Unfortunately, your developers are not waiting for the committee.
They have tickets to close and architecture decisions to make. Their deadlines don’t care about procurement timelines. If they’ve figured out how always-on agents can handle significant chunks of their work autonomously, they’re already using them, with their own accounts and no management control plane in sight.
What starts as one person’s productivity unlock spreads through Slack and becomes a dozen different systems running inside your infrastructure that your security team has never reviewed and your platform team can’t support.
By the time the committee convenes to make its careful, considered recommendation, you’ll already have a shadow AI infrastructure that nobody mapped—one where autonomous agents have been granted access to your code repositories, your internal APIs, and your test environments by individual developers who needed to get things done.
The cost of “let’s think about it more” is greater than the delay itself: it’s the vacuum the delay creates.
“Most decisions should probably be made with somewhere around 70% of the information you wish you had. If you wait for 90%, in most cases, you’re probably being slow.”[1]
AI agent adoption is a two-way door decision. You can walk through it, see what’s on the other side, and adjust. You don’t need the perfect enterprise framework before you start. You just need enough of one to begin with visibility and control, and then iterate from there.
The committee is building a governance framework as if it’s a one-way door, but you can walk it back. Every major cloud provider, every enterprise security vendor, every internal platform team knows how to scope agent permissions, set up audit trails, and revoke access. What isn’t reversible is the access those four developers have already given their shadow agents while you’re still in committee.
So what does it actually mean to move on this without moving recklessly?
No, you don’t have to approve every agent that shows up in a Product Hunt newsletter. You do have to recognize that the choice isn’t “AI agents vs. no AI agents.” It’s “AI agents you know about vs. AI agents you don’t.”
In practice, that means:
Talk to the people actually doing the work. Ask what they’re already running. (You might be surprised—or alarmed.)
Pick one or two agents for a controlled rollout with defined access scopes and permission boundaries.
Set up a management control plane: visibility into which agents are active, what systems they can touch, what they’re doing.
Build in a 90-day review cycle and adjust based on what you learn.
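The control-plane step above can be sketched as a simple policy gate: before an agent goes live, compare the permissions it requests against the scope the rollout approved, grant what matches, and flag the rest for human sign-off. This is a minimal illustrative sketch, not any vendor’s real API; the AgentPolicy class and the permission names are invented for the example.

```python
# Hypothetical sketch of an agent permission gate. The class and the
# permission strings ("repo:read", etc.) are invented for illustration;
# a real control plane would map these onto your provider's scopes.
from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    """Approved access scope for one agent in a controlled rollout."""
    name: str
    allowed: set = field(default_factory=set)

    def review(self, requested: set) -> dict:
        """Split a requested permission set into granted vs. flagged."""
        granted = requested & self.allowed        # inside the approved scope
        flagged = requested - self.allowed        # needs human sign-off
        return {"granted": sorted(granted), "flagged": sorted(flagged)}


# Example: a coding agent approved for repo reads, PR creation, and CI runs.
policy = AgentPolicy("pilot-coding-agent", {"repo:read", "pr:create", "ci:run"})
result = policy.review({"repo:read", "pr:create", "secrets:read"})
# "secrets:read" lands in result["flagged"] and gets escalated, not granted.
```

The point of the sketch is the shape of the decision, not the code: every agent’s access is enumerated up front, anything outside the approved scope is visible rather than silently granted, and the 90-day review has a concrete artifact to audit.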
I know I move fast, and I know the risks of that. But I’ve also learned that not deciding is always a decision—it’s just one that gets made for you, by the people with work to get done and no time to wait. When those people are developers configuring autonomous agents with access to production systems and proprietary code, the stakes of the vacuum are higher than they look from inside the committee.
Four developers at a Fortune 100 company didn’t need a governance framework before they started. They needed tools, and they got them, one way or another. The question is whether you’re the one who gave them those tools, or whether by the time your committee reports back, you’re already inheriting the risk without any of the control.
You Can’t Throw Tools at People and Expect Something To Change
I say this to engineering leaders constantly, and I’ll say it here too: buying AI licenses and handing them out is not an AI strategy.
[1] Jeff Bezos’ 2016 Letter to Shareholders.


