The market for AI consulting advice is crowded and, frankly, noisy. There are people who have spent three months building a personal brand around a technology they encountered six months ago -- and there are people who have spent years navigating actual enterprise AI implementations, with all the messiness and constraint and organisational friction that involves. From the outside, these can be very difficult to tell apart.
A polished case study looks the same regardless of how much of it the consultant actually delivered. A confident answer to a technical question sounds the same whether it comes from experience or from preparation. References can be curated. Frameworks can be borrowed.
So how do you find the practitioners in the crowd? There is one question that I have found consistently cuts through. Most hiring managers never ask it.
The question
Tell me about an AI or automation project that failed. Not a project that faced headwinds and eventually succeeded. A project that genuinely did not work -- where the outcome was materially worse than what was planned. What happened, and what was your role in it?
That is it. Simple. But watch what happens when you ask it.
Why this question works
Practitioners have failure stories. This is not because practitioners are worse than theorists -- it is because practitioners have been in enough real situations to have encountered the full range of outcomes. Anyone who has spent meaningful time inside enterprise AI implementation has watched something not work. The technology has not behaved as expected. The organisation has not absorbed the change. The business case has not survived contact with operational reality.
If someone cannot give you a specific, detailed, honest answer to this question, one of two things is likely true. Either they have not done enough real work in this area to have a failure story, or they are not the kind of person who reflects honestly on their own experience. Neither of those is what you want in a consulting partner for something as consequential as an AI transformation.
What a good answer sounds like
A good answer is specific. It names the type of organisation, the type of programme, and the specific nature of the failure. It does not hide behind generalities like "the stakeholder engagement could have been stronger" or "we underestimated the complexity."
A good answer includes an honest account of the consultant's own role in the failure -- not just the organisational or technical factors. If the only explanation is that the client did not listen, or the technology was not ready, or the project was doomed from the start, be sceptical. Real failures are almost always shared failures. The consultants who understand that are the ones who learn from them.
A good answer ends with a clear articulation of what changed as a result. Not a corporate lessons-learned statement, but a specific change in how the person approaches this kind of work. "After that project, I now insist on doing X before I commit to a timeline" or "I learned that Y is a sign of deeper organisational dysfunction, and I will not take on that kind of work without addressing it first."
What a bad answer sounds like
A bad answer pivots quickly to a success story. It frames the failure as a learning opportunity in a way that deflects responsibility. It is vague on specifics and heavy on process language. It involves a lot of "we" without any "I."
The worst answer, in my experience, is the answer that reframes the question. "I would not say it failed -- it was more of a recalibration." If someone cannot acknowledge failure plainly, they will not be honest with you when your project is heading in the wrong direction. And at some point, it will head in the wrong direction. That is what enterprise AI implementation looks like in practice.
One more thing
This question works in the other direction too. If you are an AI consultant being evaluated by a hiring manager, and they do not ask you about your failures at any point in the process, that tells you something about the client. A client who only wants to hear about wins is not a client who is ready for an honest assessment of their situation. That is useful information to have before you sign anything.
The best consulting relationships start with honesty in both directions. This question is a test of whether that is possible.