People develop products, but now there’s an alternative, right?
“Our stakeholders keep wanting more features faster. I think AI is the answer. What do you think?”
And just like that, I realized the next silver bullet chase had begun.
That question, from a new client executive, came with an implied expectation: agree. Saying something controversial felt off-limits (I wanted to make a good first impression). Still, I hesitated to join the chorus of AI enthusiasts.
Many see AI as the productivity nirvana for software product development. I don’t. Chasing AI for efficiency might be a mistake for most.
More on that later.
The initial temptation.
My gut reaction to the executive was to say, “Heck yes, AI will outpace our developers. We need AI to turn up the volume.” Here’s why my mind jumped to this so fast.
Humans have had a firm grip on coding for decades. But our speed and quality are inconsistent. With AI, that grip is slipping.
Generative AI, like ChatGPT and Gemini, can produce code (among other things) instantly. If you can dream up the prompt, these tools have a response. I recently used ChatGPT to create a product data model in ten seconds. Scripts and sample data included. The allure is undeniable.
But I stopped myself from endorsing AI with foolish enthusiasm. I almost fell down the rabbit hole—chasing the vision of an AI army, coding 24×7, never tiring, never complaining. Not my best moment.
Before we get to how I actually responded, let’s first explore why organizations are so eager to jump on the AI train.
Why is generative AI so appealing for software development?
Organizations have always chased higher quality, lower cost, and more speed.
But software development has by and large failed to meet these expectations. It’s often slow and frustrating, weighed down by bureaucracy. Software development has become more burden than benefit for many.
So, businesses salivate over the promise of generative AI, dreaming of unlimited productivity. They imagine initiatives free of sick days, endless debates, and low motivation. They see the perfect developer, needing no one-on-ones or work-life balance.
But does this sound absurd? It should. Efficiency isn’t effectiveness. Doing things fast doesn’t mean doing the right things fast. AI could trap us into focusing even more on output over outcomes. And reducing human involvement along the way is a terrifying thought.
It’s a trap we must avoid with AI.
Software development is a creative human act. For years, we’ve tried to dehumanize it, automate it, and make it efficient. But without humans, there’s no software.
W. Edwards Deming said, “A bad system will beat a good person every time.”
In other words, the environment, behaviors, and ways of working dictate performance. When value isn’t flowing, the people aren’t the problem. The system is.
AI will not fix a broken system.
I’d amend his quote to say: “A bad system will beat a good person or AI every time.”
Let’s look at how I arrived at my stance on AI (I bet you suspect where I’m headed).
Will more, faster output really produce better outcomes?
I see companies littered with unnecessary, half-finished features.
AI will only add to this pile if the system is broken. A broken system that favors output over outcomes does not benefit from more output.
The issue has never been how much code developers can type, or how fast. It’s more about:
- How well we know our users.
- How quickly we can kill bad ideas.
- How we can better collaborate on good ideas.
The ability to create any feature we want with ease is tempting.
But it won’t help if we have cracks in the system. If silos exist, more output leads to more tasks piling up in the corner awaiting integration. And if we don’t talk to users, faster output produces bloat in the form of unneeded features.
AI is no better than any other trick we have tried in the past to crank out features faster.
More output doesn’t make for better outcomes. Implementing bad ideas with speed doesn’t make them good.
It occurs to me that AI may allow us to test ideas with users faster.
But my experience shows we don’t have a great track record at testing our ideas with users today either. Most teams I work with know their ticketing system, not their customers. Should we assume we can snap our fingers and AI will make us start engaging with our users to test our ideas? I doubt it.
We are more likely to start testing our ideas with AI masquerading as our customers. And this leads us to my next question.
Can AI really be a proxy for our customers?
I read the other day about how we can use AI to help us understand our customers.
We only need to feed our customer relationship data into a large language model. It will learn what our customers need and desire. Then, we can ask the model questions to arrive at better “user-centered” solutions.
You can stop laughing now. No, really, you can stop.
Empathy by proxy has never worked, and AI won’t suddenly make it work.
Teams today are far removed from their customers. Many know nothing about them at all. They read a specification on a ticket, build their isolated part, and move on to the next ticket.
Customer? What customer?
We must first fix the system to promote direct team and customer engagement.
If we know our customers, we can then use AI to enhance our knowledge. Imagine it slicing and dicing analytics to help us better understand user patterns. That would be useful. I see AI as a fast, capable user research assistant, not a user substitute.
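For instance, here’s a minimal sketch (in Python) of that research-assistant role: clustering usage analytics to surface candidate behavior segments worth exploring with real users. The feature names and numbers are synthetic, invented purely for illustration.

```python
# A sketch of AI as user research assistant: cluster usage analytics
# to surface behavior patterns. All data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(seed=7)

# Hypothetical per-user monthly counts for three features.
features = ["search", "export", "share"]
usage = np.vstack([
    rng.poisson(lam=[40, 2, 1], size=(50, 3)),   # heavy searchers
    rng.poisson(lam=[5, 30, 25], size=(50, 3)),  # collaborators
])

# Cluster users into candidate behavior segments.
segments = KMeans(n_clusters=2, n_init=10, random_state=7).fit(usage)

for label in range(2):
    center = segments.cluster_centers_[label].round(1)
    print(f"Segment {label}: avg monthly use {dict(zip(features, center))}")

# The output is a conversation starter, not a verdict: take the
# segments to actual users and test whether the patterns are real.
```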
Remember, there’s no shortcut to knowing your customer.
We should trust human-to-human connection over a machine, which leads me to my final point.
Do we really think trusting a machine is better than trusting a human?
Today’s organizations are in a trust deficit.
Managers don’t trust teams, teams don’t trust management, and teams don’t trust each other. This has been my experience over the past twenty-five years. And remote work has made it worse. From what I see today, the human trust factor has never been lower.
Everywhere I look, I see the evidence:
- Optics over transparency
- Pushing work over pulling work
- Performance metrics over conversation
- Silos and the blame game over collaboration
But should we put our trust instead in a machine?
At first, the machine seems better. Complete transparency, unlimited stamina, measurements galore, and no blaming. But putting blind trust in AI is asking for trouble. Skynet, anyone?
You could argue that we should trust machines less than humans.
We must add mechanisms to double-check AI’s work. AI creates many additional concerns:
- Security and data protection risks are amplified.
- Governance and compliance need careful monitoring.
- Quality control measures must expand to catch AI hallucinations.
The good news: AI can help automate the extra safeguard checks it spawns. But this does not remove the need for human judgment. You can’t replace that.
Humans need to be in the lead in this dance with AI.
Here is where AI can help: relieving bottlenecks.
If you’ve ever read “The Goal” by Eliyahu M. Goldratt, you know the concept behind the Theory of Constraints. If not, let me summarize.
The core message of “The Goal” is bottleneck identification and removal. A bottleneck always lurks in your system, wreaking havoc on flow. Improving any part of the system other than the bottleneck does nothing for overall throughput. Here’s the gist of what it recommends instead:
- Identify and optimize the bottleneck.
- Increase bottleneck capacity until it no longer hinders flow.
- Find the next constraint and repeat the process, forever.
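To make the loop concrete, here’s a minimal sketch in Python. The pipeline stages and weekly throughput numbers are invented for illustration; the point is that the system never flows faster than its slowest stage, so only elevating the constraint moves the needle.

```python
# A sketch of the Theory of Constraints loop over a hypothetical
# delivery pipeline (stage -> items it can process per week).
pipeline = {
    "analysis": 30,
    "development": 25,
    "code review": 12,   # the constraint: work queues up here
    "testing": 20,
    "deployment": 40,
}

def bottleneck(stages: dict[str, int]) -> str:
    """The constraint is the stage with the lowest throughput."""
    return min(stages, key=stages.get)

def system_throughput(stages: dict[str, int]) -> int:
    """A pipeline never flows faster than its slowest stage."""
    return min(stages.values())

# Identify the constraint, elevate it, find the next one, repeat.
for _ in range(3):
    constraint = bottleneck(pipeline)
    print(f"Constraint: {constraint} ({pipeline[constraint]}/week); "
          f"system throughput: {system_throughput(pipeline)}/week")
    # "Elevate" the constraint, e.g., through automation or added capacity.
    pipeline[constraint] = int(pipeline[constraint] * 1.5)
```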
AI can help you with this in many ways:
- Analyzing and identifying bottlenecks.
- Suggesting ways to optimize the constraints.
- Speeding up the bottleneck through automation.
A team can use AI as the ultimate Theory of Constraints partner. That beats any notion of developers using it merely to crank out code faster. Coding speed is rarely the bottleneck.
So, to improve the system, use AI as a bottleneck destroyer, not a code generator.
My answer to the executive.
I did not go with my gut. Instead, I paused and considered:
- Faster output does not give better outcomes.
- AI is no substitute for direct customer engagement.
- Humans lead the dance with AI, not the other way around.
So, here is how I responded:
“We should let the teams decide how they can use AI to improve the system. Teams know best what’s broken. Your people are here to stay. Who better to guide AI than them?”
The executive liked my response (phew). She said it’s easy to get swept up in the AI hype. The nuance (humanity) in my perspective gave her something to ponder.
Teams remain vital. Their role will shift to make room for AI. They will decide how best to deploy AI to improve the broken system.
How will you enable your teams to leverage AI effectively?
AI is here to stay. So are humans. Let’s find the synergy.
➡️ Sign up here for weekly deep dives like this. You’ll get practical tips on what it takes for leaders to achieve more value with less effort while respecting people.
Todd Lankford unlocks Lean Leverage in organizations to cultivate powerful, engaged product teams who maximize outcomes and impact.
His articles share his experiences and learnings along the way. Join the mailing list to get them in your inbox.