You Can’t Throw Tools at People and Expect Something to Change
…and other lessons in AI adoption for engineering leaders.
I say this to engineering leaders constantly, and I’ll say it here too: buying AI licenses and handing them out is not an AI strategy.
A familiar pattern I’ve seen play out: a CTO, usually sharp and genuinely motivated, decides their team needs to move faster. They sign up for a handful of AI coding tools, distribute the credits, send a Slack message encouraging everyone to “explore,” and then wait for the productivity gains to show up in their metrics. The more likely outcome is low adoption and a line item that’s increasingly hard to defend to the skeptics.
Why Most AI Tool Rollouts Fail
Imagine you sign up for a new SaaS app. You go through account creation, log in for the first time, and you’re looking at a dashboard full of empty charts and zeroes. Nobody has told you what to do next. You don’t know what good looks like. When there’s no obvious starting point, most people close the tab and never come back.
That’s the experience most engineers have when they’re handed an AI tool with no context: uncertain where to start, with no obvious reason to push through that initial friction.
The burden here falls on the organization: on CIOs, CTOs, engineering leaders. If you want to see ROI from your AI investment, you have to do the work of bringing your team up to speed. That means teaching people the basics: what good context management looks like, how to think about token usage, how to prompt well. These aren’t exotic skills, but they’re also not intuitive. A resource like learn.kilo.ai is a good starting point for teams just getting oriented, but honestly, there’s also a lot leaders can learn simply by experimenting with the tools themselves.
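To make the token-usage point concrete, here’s a minimal sketch, nothing Kilo-specific, using OpenAI’s tiktoken library; the budget figure and the pasted file are hypothetical. The teachable habit is the last line: measure what you’re about to hand the model before you hand it over.

```python
# A minimal sketch of token budgeting using OpenAI's tiktoken library
# (pip install tiktoken). The budget and file contents are illustrative.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by many recent OpenAI models

def count_tokens(text: str) -> int:
    """Return how many tokens `text` costs under this encoding."""
    return len(enc.encode(text))

prompt = "Refactor the retry logic in this module to use exponential backoff."
pasted_file = "def fetch(url):\n    return session.get(url)\n" * 1000  # stand-in for a big paste

budget = 8_000  # hypothetical context budget you want to stay under
used = count_tokens(prompt) + count_tokens(pasted_file)
print(f"{used} of {budget} tokens used")
if used > budget:
    print("Trim the context: paste the relevant function, not the whole file.")
```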
There’s No Single Door Into AI Adoption
I’ve noticed that leaders often talk about AI adoption as if it’s one specific thing, usually code generation in an IDE. And yes, that’s probably how most developers first engage with AI. But there are multiple entry points, and the right one depends on where your team is starting from.
Some teams find autocomplete to be a lower-commitment way to build familiarity with AI-assisted development, and that familiarity compounds over time. I’ve written separately about why we’re still investing in autocomplete at Kilo, but the short version is that almost half of developers still don’t use AI in their daily work, and autocomplete is often what shifts that.
Other teams start from the other direction: they install a code review tool, run it on existing PRs, and have their first “aha” moment when it catches something a human reviewer missed. There’s no change to how they write code, but suddenly AI is adding value. That experience tends to open minds.
The point is that a good AI enablement strategy meets engineers where they are, rather than assuming everyone is ready for the same starting point.
What This Looks Like When You Actually Do It
I’m not offering this advice from the outside. The reason I can argue that leaders need to be in the adoption curve with their teams is that I’m in it myself, and so is everyone at Kilo Code.
We use Kilo to build Kilo. Some people call this dogfooding. Some call it drinking your own champagne. I call it finding all the bugs before your customers do. What that looks like in practice is messier than a lot of companies would admit publicly.
Every engineer at Kilo uses the product day in and day out: in the CLI, in VS Code, in JetBrains, in the cloud. I’m personally a heavy user of Kilo for Slack. My morning workflow starts before I sit down at my desk: I pull up Kilo for Slack, kick off a list of tasks for the cloud agent, and by the time I’m at my computer, the Code Reviewer has already been through my PRs. I also have my own KiloClaw bot, Chad, and I’ve been experimenting with what it looks like to have genuine personal leverage through an AI assistant. Everyone on the Kilo team has access to one.
Our internal usage is also where we catch the rough edges. In a recent engineering sync, one of our engineers, Mark, flagged that Cloud Agent was crashing too often, with sessions stopping mid-run. Another engineer pointed out that if this was trackable on our observability stack, we shouldn’t be waiting for internal feedback to surface it; we should already have alerts. A separate thread in the same meeting got into the friction of PRs being opened under the agent’s identity rather than the engineer’s, which means they don’t show up when you filter GitHub by your own work. Annoying enough when you know to look for it; invisible if you don’t.
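The alerting point deserves a beat, because it generalizes: if a failure is trackable, it should page someone before it comes up in a meeting. As a hedged sketch, not Kilo’s actual instrumentation, and with a hypothetical metric name and session loop, this is the kind of counter an alert rule can watch:

```python
# A minimal sketch using the prometheus_client package
# (pip install prometheus-client). Metric name, port, and
# session logic are all hypothetical.
from prometheus_client import Counter, start_http_server

SESSION_CRASHES = Counter(
    "cloud_agent_session_crashes_total",  # hypothetical metric name
    "Sessions that stopped before completing",
)

def do_work(task: str) -> None:
    """Placeholder for the real session loop; here it just simulates a crash."""
    raise RuntimeError(f"simulated mid-run crash for {task!r}")

def run_session(task: str) -> None:
    try:
        do_work(task)
    except Exception:
        # An alert rule on the rate of this counter pages the on-call,
        # rather than waiting for a crash to come up in a sync.
        SESSION_CRASHES.inc()
        raise

if __name__ == "__main__":
    start_http_server(9100)  # expose /metrics for the scraper
    try:
        run_session("demo-task")
    except RuntimeError:
        pass
```

With that counter exposed, a spike in crash rate becomes a page, not a meeting anecdote.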
All of that, while not pretty, is exactly what you want your feedback loop to look like: real usage by people with high standards, surfacing real friction, feeding directly into the roadmap. Our engineers use the product with genuine expectations, and when something doesn’t work the way they expect, they say so, in the same meeting where the person who can fix it is sitting.
This is also connected to a broader model we’ve built at Kilo, the idea that every engineer operates as a mini CEO for their product area, responsible for driving weekly active users. That model only works if engineers are genuine users of the tools they’re building. You can’t have a credible feedback loop from a distance.
Using your own tools honestly, with real tasks, under real conditions, with no obligation to be diplomatic about what isn’t working, also creates a different relationship with the product than any amount of user research can. That’s true for quality assurance, and it’s true for understanding the adoption curve your customers are navigating.
What Leaders Can Take From This
Most AI tool rollouts fail because the people championing them haven’t experienced the adoption curve themselves. They’re asking their teams to navigate something they’ve never had to navigate. If you’re a leader who wants your engineers to actually use AI, the most useful thing you can do is use it yourself, with the same constraints your team has, and talk openly about what’s working and what isn’t.
This has two effects: struggling with the learning curve is a shared experience, rather than a source of embarrassment; and your assessment of the tool is based on real experience instead of optimism. The teams that make it through the adoption curve are the ones whose leadership is willing to be in it with them.