In 2023, I was scared to touch it.
Not vaguely uncomfortable. Not mildly skeptical. Scared. The kind of scared where you watch everyone else jump in and you tell yourself you’re being thoughtful when really you’re just standing at the edge.
Here’s what made that strange: I am not someone who usually hangs back from technology.
I built some of the first websites on the internet. I walked the halls of Sand Hill Road, helping Silicon Valley’s most ambitious technology innovators bring their ideas to the financial markets. I was part of the team that helped launch one of the first video streaming platforms: Hulu. I remember when all of that felt like science fiction.
And yet. In 2023, when people I respect were already years deep into AI, I was still watching from the door.
I’ve been sitting with why that was. And a conversation I had with Kristi Pihl gave me language for something I think I already knew. (Watch the full conversation here).
Kristi is a Northwestern-trained mechanical engineer who has been building AI systems since 2017. Her work earned a TIME Best Invention in AI in 2019. She has spent two decades as a strategic tech advisor inside some of the most complex organizations in the world. She knows this technology from the inside.
And when we talked, she spent almost no time on the technology itself.
She kept coming back to this: AI is a mirror. It is a language-based model designed to reflect back what you give it. Which means if you approach it with fear, with defensiveness, with a need to look capable before you actually are... that is exactly what gets amplified.
She called it an accelerant. Not just an amplifier.
“If you’re already going to make the wrong turn,” she said, “you’re just going to make it faster.”
I’ve been thinking about that in the context of my own 2023 self. Because the fear I felt wasn’t really about the technology. It was about what the technology might reveal. About gaps I hadn’t named yet. About a version of my thinking that might not look as sharp under a microscope. About being exposed as a novice, undeserving of my title or my authority.
The fear was mine. AI just had the potential to show it to me faster.
Your C-suite is probably not afraid of AI.
They’ve done the work. They’ve committed budget, assembled a task force, chosen a platform. They have a plan and they are executing it.
But I want to ask you something.
When the board asked about your AI strategy, did you answer from clarity or from pressure? Did you build the plan because you knew what your organization needed, or because the alternative felt like falling behind? Was it a decision from your best thinking, or a decision from FOMO dressed up as strategy?
Kristi sees hundreds of organizations from her vantage point as an advisor and investor, and there’s a clear pattern: almost every organization she encounters right now believes they are behind. Behind their competitors. Behind the market. Behind some imaginary benchmark set by a press release from a tech CEO whose actual numbers, she noted, do not hold up to scrutiny.
The pressure is real. Boards are breathing down leaders' necks. Middle managers are automating workflows before anyone has asked whether those workflows were actually the problem. Everyone is running. And running faster in the wrong direction, she reminded me, is worse than running slowly in the right one.
Here’s the thing: neuroscience and psychology are aligned on this. Decisions made out of fear are rarely good ones. The bear chasing you in your mind doesn’t actually exist. The tiger is not in the room. What is in the room is a manufactured sense of urgency, a FOMO cycle fed by headlines, and a leadership team that has not yet had the harder conversation underneath the AI conversation.
The harder conversation is about what AI cannot fix.
It cannot fix a product that doesn’t fit its market. It cannot fix a pricing model that isn’t resonating. It cannot fix teams that have stopped communicating honestly with each other. It cannot fix a culture where people are overtaxed, understaffed, and quietly disengaged.
What AI will do is accelerate all of those things. Amplify them. Make them more visible, faster, at a greater scale.
This is where I keep coming back to the work I do with senior leaders. Not the AI strategy. The ME Work underneath it.
Because the leaders I coach who are quietly questioning their AI choices are rarely questioning the tools. They’re questioning whether their organizations are actually ready to use them well. And that question almost always leads somewhere more personal: do my teams trust each other enough to have the conversations that would tell us the truth? Are we making this decision from our best thinking, or from our fear of looking like we don’t have the answers?
Kristi described it this way: the leaders who use AI well already have the native judgment to evaluate it. They can tell when it’s being sycophantic. They can sense when it’s pulling from a random source instead of settled science. They have the critical thinking to know when to trust the output and when to push back.
That judgment takes time to build. It takes reps. And here’s the existential problem: if we automate the work that builds that judgment before the next generation has developed it, we don’t just lose the output. We lose the succession pipeline. We raise leaders who have offloaded their thinking before they ever owned it.
You can offload a task. You cannot offload understanding.
So what do you actually do with this?
There’s no skipping the hard part. Because the hard part is the point.
The first thing is to slow down enough to ask an honest question: what is the real problem we are trying to solve? Not “how do we implement AI?” but “what decisions are we making poorly, and why?” Not “how do we use these tools?” but “do we have the culture, the trust, the communication norms that would let us use any tool well?”
The second is to look at your leadership team with fresh eyes. Not at their AI literacy. At their judgment. At whether they are having the real conversations or the performative ones. At whether fear is running the room or curiosity is.
Kristi’s advice for getting grounded was simple: go talk to someone you trust who has no skin in the game. Not your board. Not your investors. Not the vendor selling you the platform. The colleague from ten years ago who always told you the truth. The mentor who is two levels ahead and has no incentive to flatter you. Your nine-year-old, who, as Kristi reminded me, is considerably more skeptical of AI than most executives.
Make your world smaller for a moment. The real signal is usually there, in the people who knew you before you had something to prove.
I am no longer scared of AI.
Well, that’s not entirely true. I am still wary of the existential questions. But the day-to-day business implementations? I’m over that. AI is simply part of the evolution of business building.
What helped was not a course, a certification, or a deployment plan. What helped was getting honest about what I was actually afraid of. And then doing the work to address that, not the technology.
The fear was a signal. It was pointing somewhere real. The moment I stopped trying to bypass it and started getting curious about it, things shifted.
Your organization’s relationship with AI is telling you something about your organization. The question worth asking is not whether you’re ahead or behind. It’s whether you’re actually listening.
If this conversation sparked something, I’d love to hear what you’re navigating. Hit reply or share in the comments and tell me what the real conversation underneath your AI strategy actually is. The best ones always start there.
And if you are interested in understanding how to think about AI through the lens of a long-term strategist, not just a short-term tactician, subscribe to Kristi’s Substack, Systems & Spines, and follow her on LinkedIn. Trust me, you’ll be happy you did.
May you lead without limits,