
How to Evaluate AI Fluency in GTM Hiring (And Why Most Interviews Get It Wrong)

By Captivate Talent · April 6, 2026 · 5 min read

The bar has moved. Not over the last year. Over the last few months.

Twelve months ago, playing around with ChatGPT was enough to seem AI-savvy in an interview. Today, senior candidates are being asked to show what they've built, point to a GitHub repository, and demonstrate real hands-on fluency. If you're a founder still evaluating talent the old way, you're already behind.

I had this conversation with Andy Mowat on a recent episode of Human First, and it was one of those conversations where I kept thinking, "every founder I know needs to hear this."

Andy is the founder of Whispered, an AI platform helping senior executives navigate their next career move, and someone who previously led RevOps and demand generation at four unicorns: Carta, Box, Culture Amp, and Upwork. He's spent the last year in constant conversation with senior GTM leaders about exactly this: how AI is reshaping how they hire and get hired.

Here's what stood out.

The question that separates candidates right now

A few years ago, Andy's proxy question for a strong GTM hire was simple: can you pull your own data? It was a signal that someone wasn't sitting around waiting for another team to run their analysis. They could explore, ask questions, and figure things out.

That question has evolved.

Now, the question more and more CEOs are leading with in interviews is: "I know you can lead. But what have you built in AI?"

It's a scary question if you don't have a good answer.

Andy described what a strong answer actually looks like. One candidate in his Whispered community received a case study exercise and, instead of building a polished deck like everyone else, built a live dashboard from the data set and showed it in the room. The hiring team asked to keep it as a leave-behind.

That's the new standard. Not because every executive needs to be building full-time, but because demonstrating real hands-on capability signals something deeper: systems thinking, genuine curiosity, and the ability to lead a team through an AI-driven transformation.

"If you can't point to your GitHub repository, I'm going to question your AI skill set." That's a direct quote from Andy. He compared it to being told twenty years ago you needed 1,000 LinkedIn connections to get in the door. Felt ridiculous at the time. Then it just became reality.

Most candidates don't realize they're overselling

This is where the conversation got honest.

Andy doesn't think candidates are being deliberately misleading. He thinks they genuinely believe they're AI-fluent. The technology is moving so fast that yesterday's impressive is today's baseline, and most people haven't noticed the gap yet.

He made a simple observation: there are three tabs in Claude: Claude, Cowork, and Claude Code. Very few people have bothered to tab over to the last two. Building a slide deck, iterating on ideas, generating a graphic... that was impressive two years ago. It's not a differentiator anymore.

The trickier problem is on the hiring side. To probe beneath the surface of what a candidate actually knows, you have to know it yourself. Andy's approach: find a topic the candidate is comfortable with, then drill deep. One person picked their strongest subject, got one follow-up question, and was stumped.

If you don't know the territory, you'll never catch that.

Kyle Norton at Owner described it differently. He said when someone really gets it, you have to get them to stop talking about what they're building. It's a feeling of recognition, not a checklist. You know it when you're in it.

"I want to hire a builder" is not a hiring strategy

The word "builder" has become a reflex. Every founder says it. Very few have thought through what they're actually asking for.

Andy pushed back on this directly: "When I hear people say, 'I want to hire a builder,' I'm kind of like, okay, cool. But what's the question behind the question? Is it: can you do more with less? Can you fly low and high? Or can you actually implement AI tooling? I don't think people actually know what the question behind the question is."

That ambiguity is causing real mistakes.

Andy has seen companies choose a junior manager over a seasoned VP for a 40-person RevOps team because they assumed younger meant more AI-fluent. It backfired. A strong senior leader can hire the right technical people, build the right systems, and set the right direction. Skipping leadership depth in pursuit of perceived technical agility is a trap.

And on the marketing side specifically... yes, teams are getting leaner. Design, SEO, and content production roles are consolidating as AI handles more of the execution. But the strategic layer (messaging, brand, personas, positioning) is becoming more important, not less. The craft still matters. Someone still has to know what's worth doing.

Your case study process is probably broken

AI hasn't just changed how candidates work. It's broken one of the main tools founders rely on to evaluate them.

Every candidate is running the brief through ChatGPT, generating a polished deck in Gamma, and presenting something that looks impressive but reveals very little about how they actually think. The output looks good. The thinking behind it is invisible.

Andy is seeing a few alternatives start to emerge from founders who've figured this out.

Some are ditching the take-home entirely and going straight to whiteboard sessions where they can probe thinking in real time. Others are keeping presentations but focusing entirely on how deeply a candidate can defend and develop their ideas under questioning. One approach Andy found genuinely sharp: a timed, no-prep exercise. Hit go, get the prompt, record your answer in two minutes. No AI assist. No overnight polish.

His own preference is the most direct: bring in your best work and let's talk for an hour. Show me your decks, your dashboards, your GitHub repos. Let me understand how you think.

There's also a practical reason to rethink this beyond evaluation quality. Onerous case studies repel top candidates. If your best prospect is weighing three opportunities, a 14-hour exercise disguised as a one-hour ask might push them toward the company that respected their time.

Andy said it plainly: "If you're at a company and you've got an A candidate, you may risk losing them."

What actually makes someone AI-fluent in GTM right now

Andy described the people in his network who are genuinely AI-fluent, and the pattern is consistent.

They're systems thinkers. They're not waiting for the world to come to them. They configure things, tinker, and think in processes. He pointed to RevOps as the function most naturally wired for this moment, and he's hearing the same thing from VC talent teams: RevOps is becoming the bridge role for companies trying to get AI-fluent fast.

But fluency isn't just technical. It's also knowing what's worth building. Knowing what AI can and can't do. Knowing how to lead a team that's still figuring it out. That 30,000-foot view still matters, maybe more than ever.

If you need to close the gap yourself, start here

For senior leaders who know they're behind but don't know where to start, Andy laid out a practical path.

Download Claude and set up a GitHub account. The first couple of weeks will be frustrating. Push through it.

Build something small. A personal website, a simple app, a dashboard. It doesn't need to be work-related. The point is to learn what it actually feels like to ship something.

Start with Lovable if Claude Code feels too steep. Everything is more contained, which makes the learning curve more manageable. But eventually you'll need to get to Claude Code for anything more complex.

Find a sounding board. One person you can call when you're stuck. Andy credits this with getting him through multiple transitions, even before AI existed. You can also just ask Claude what to do next. It's messy, and you won't always understand why you're doing what you're doing. But you do it.

Break it down to one concrete project. "I need to learn AI" is paralyzing. "I'm going to build a Pomodoro timer" is actionable. That's exactly how I started, and it changed everything about how I think about what's actually possible.
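To make that first project feel less abstract, here's a minimal sketch of what a starter Pomodoro timer might look like, built as a plain schedule generator so you can see the shape of it before adding a UI. Everything here (the function name, the defaults) is illustrative, not anything from the episode; it assumes only the Python standard library.

```python
def pomodoro_schedule(cycles=4, work_min=25, short_break_min=5, long_break_min=15):
    """Return an ordered list of (label, minutes) for one Pomodoro session.

    Each work block is followed by a short break, except the last,
    which is followed by one long break.
    """
    schedule = []
    for i in range(1, cycles + 1):
        schedule.append((f"work {i}", work_min))
        if i < cycles:
            schedule.append(("short break", short_break_min))
    schedule.append(("long break", long_break_min))
    return schedule


if __name__ == "__main__":
    # Print the session plan; a real timer would sleep between blocks.
    for label, minutes in pomodoro_schedule():
        print(f"{label}: {minutes} min")
```

A project this small still forces you through the full loop Andy is describing: set up a repo, get code running, iterate, ship something you can point to.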

The technology is not the bottleneck. As Andy put it, even if nothing improved for the next ten years, there's enough capability right now to transform how GTM teams operate. The bottleneck is adoption.

The bottom line for founders

The GTM hiring landscape is in a messy transition right now. That's Andy's word for it, and honestly the most accurate one.

The tools are powerful enough. The frameworks for evaluating talent on them are still catching up. Founders who invest in their own fluency now won't just hire better. They'll ask sharper questions, build leaner teams, and avoid the trap of chasing a buzzword instead of real capability.

That starts with knowing what you're actually looking for. Before you post the next job description with "AI-fluent" in the requirements, ask yourself: what's the question behind the question?

Listen to the full conversation with Andy on Human First here.
