One of the most common briefs we hear from founders and revenue leaders right now is some version of: "We want someone who's AI-native."
And when we push on what that actually means (what it looks like in an interview, how you tell the real thing from someone who just knows the right words), most people don't have a clear answer yet.
We want to give you a little more guidance here.
Kyle Norton is the CRO at Owner, one of the faster-growing SaaS companies in restaurant tech. He's also the host of The Revenue Leadership Podcast and one of the more thoughtful voices in the space on AI adoption in revenue teams. We sat down with him on Human First to pressure-test how he thinks about AI fluency, centralized vs. decentralized AI teams, and how to stop hiring on vibes.
Here's what we walked away with.
Is AI fluency actually table stakes now?
Short answer: yes, for senior GTM hires.
Kyle's framing is that using AI occasionally today is starting to look like using the internet occasionally in 1995. It's not a differentiator anymore. It's the baseline.
"It is absolutely table stakes to be in the tools pretty constantly. I think folks understanding what is required versus what gives you an advantage is sometimes tough to untangle. People say, 'Oh yeah, I'm in ChatGPT all the time.' And it's like, well, everybody is."
The problem is that saying you use ChatGPT is the new "I'm proficient in Microsoft Office." It doesn't tell you anything. So Kyle laid out a sophistication ladder that's worth stealing.
The AI fluency ladder for go-to-market hires
Five rough levels, from baseline to differentiated:
- Chat interface use. ChatGPT, Claude, Gemini. Daily conversations, basic prompting. This is the floor.
- Custom GPTs, Gems, or Claude Projects. Reusable setups for specific workflows you run over and over.
- Workflow tools. Clay, Make, n8n. Pulling data from different sources, webhooks, multi-step automation.
- Claude Code and agentic setups. Building skills, working in terminal or the desktop app, running more sophisticated problems end-to-end.
- Custom apps and internal tooling. Coding your own workflows, building agents, shipping things for yourself and your team.
Kyle's bar for senior GTM folks today: if you're not at least at level 2 (building custom GPTs, Gems, or Claude Projects), you're behind. To actually carve an advantage, you should be exploring Claude Skills and using Claude Code to solve harder problems.
That's a specific bar. It's also a good interview filter.
How do you spot a genuinely AI-fluent candidate in an interview?
Kyle's honest answer: you have to be in it yourself.
"If you don't know enough about these tools, the ecosystem, and AI fundamentals, it's going to be really hard to figure out who does and doesn't. You're going to get fooled pretty easily."
That's the uncomfortable truth. You can't outsource this assessment to someone who isn't AI-pilled themselves, because a charismatic candidate can absolutely spin a yarn with the right buzzwords and you won't catch it.
But here's the pattern he looks for. Two questions that do most of the work:
1. "Tell me what you've built using AI for the business."
2. "Tell me about some stuff you've built for yourself. What are your favorite day-to-day productivity workflows?"
"People who are AI-pilled just light up. They're like, 'Oh, I did this, and then I did these things,' and I have to cut them off because they're so excited and have so much to say."
That energy is the tell. Someone who's actually in the tools has opinions, has favorite workflows, has a thing they built last weekend that didn't quite work. Someone who isn't will give you clean, polished, generic answers about "leveraging AI for efficiency."
If you personally aren't AI-fluent enough to separate fact from fiction, get someone in your interview process who is. That part is non-negotiable.
Should every GTM hire be AI-native? Or just the senior ones?
This is where Kyle's thinking has evolved, and it's one of the more useful parts of the conversation.
At Owner, AI sits in a centralized applied AI team: multiple technical people embedded alongside the sales, CS, and RevOps teams, building scoring models, churn prediction, pre-call research, and agent workflows. That team ships things that are, in Kyle's words, "an order of magnitude better" than what an individual manager could build on the side of their desk.
The implication for hiring:
- Senior GTM leaders and anyone in a leverage role (RevOps, enablement): AI fluency is a hard requirement. You can't make good decisions about where to apply AI if you don't personally understand the art of the possible.
- Managers: Encouraged but not required. They participate in AI work, they might build their own productivity workflows, but they're not the ones shipping the systems that move the business.
- ICs (AEs, CSMs, etc.): Can grow into it. The tooling gets handed to them.
This is a meaningful departure from the "everyone needs to be AI-native on day one" mantra. It's also probably more realistic for most companies.
"It doesn't have to be going out and finding a traditional AI person, and that might not actually be the right fit. We really want business acumen alongside technical proficiency. You can come from strong business acumen and learn the technical piece, or you can be very technical and we layer in the business acumen."
The hybrid, business-plus-technical person is the one doing the most interesting work at Owner right now. One of their directors came in as a Biz Ops IC, taught himself the tools, and is now shipping churn analysis agents and running AI-powered monthly reviews. That's the profile.
Centralized vs. decentralized AI teams: what most companies get wrong
If you're thinking about how to structure an AI function in your revenue org, Kyle's take is that centralized wins, at least for now.
The reasoning:
- Expert AI builders produce orders-of-magnitude better output than a bunch of part-time efforts across the org
- Applied AI is a distinct skill set that takes dedicated time, not something you do on the side of your desk
- Centralizing the function lets you compound learnings and ship faster
The caveat: where that team reports depends on your leadership bench. At Owner it sits in Business and Data. At Vanta, it's a dedicated applied AI team under Stevie Case. At other companies it lives in engineering or RevOps. The right home depends on who's going to be the best steward for that work, not a template.
Where companies are making structural mistakes right now: asking every manager to also be an AI builder, then being disappointed when the output is mediocre. Or, conversely, hiring a technical AI specialist with no business context and watching them build things that don't actually move revenue.
What about "potential over pedigree" for IC sales hires?
Kyle has always erred on the side of potential. He wants a history of excellence (top SDR, top SMB AE, valedictorian, head of a student society), but he'd rather bet on high-potential people on the upslope of their careers than on ten-year-tenured reps.
His hit rate with established, tenured AEs hasn't been as strong at Owner. His theory: the pace of the company, combined with the fact that senior reps often cruise on their existing book of business, makes it harder for that profile to win there.
Today, roughly 80% of Owner's new sales hires come straight out of the top business schools in Canada. They have so much candidate volume that it's three times harder to land a sales role there than to get into Harvard.
That's not replicable for everyone. But the principle is:
- Look for the DNA (intellectual horsepower, drive, resilience, history of excellence, coachability)
- Pair that with a learner's mentality
- Build the enablement program that takes good craft to great craft
If you're hiring from "lesser-known" companies, the filter is: were they the #1 or #2 person there? Did they learn the craft of sales while selling something hard? Those reps often outperform brand-name hires who are used to inbound doing the work for them.
How to stop hiring on vibes: the structured interview process
This is the piece every founder and revenue leader needs to hear.
Vibe hiring is where a candidate meets a bunch of people, those people ask whatever questions come to mind, and every single one of them asks about the candidate's career history. The candidate repeats it four times. You learn nothing new. You decide based on a gut feeling.
Kyle's alternative is structured, methodical, and boring in the best way:
- Same interview, same structure, same questions, same order. Every candidate, every time.
- Weighted scorecard criteria. Specific things you're evaluating, not vague impressions.
- Divide and conquer. Brandon assesses for conscientiousness. Kyle assesses for coachability. Danielle assesses for business acumen. Some overlap for triangulation, but you maximize the data by not all asking the same things.
- Specific questions for specific traits. If you want to know if someone is coachable, have a coachability question. If you want to know if they're organized, have a conscientiousness question. Don't leave it to vibes.
- Go back to interview notes when someone mis-hires. What did you miss? What did the people who scored them low see that you didn't? Close the loop.
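The weighted-scorecard idea above reduces to simple arithmetic, and writing it down makes the discipline concrete. Here's a minimal sketch in Python; the trait names, weights, and 1-5 rating scale are illustrative assumptions, not Owner's actual criteria:

```python
# Minimal weighted-scorecard sketch. Traits, weights, and the 1-5 scale
# are illustrative placeholders, not Owner's actual rubric.

# Each trait gets a weight reflecting how much it matters for the role;
# weights sum to 1.0.
WEIGHTS = {
    "coachability": 0.30,
    "conscientiousness": 0.25,
    "business_acumen": 0.25,
    "ai_fluency": 0.20,
}

def weighted_score(ratings: dict) -> float:
    """Combine per-trait interviewer ratings (1-5) into one weighted score."""
    # Every candidate gets scored on every trait -- no skipped criteria,
    # which is what makes candidates comparable to each other.
    assert set(ratings) == set(WEIGHTS), "score every trait, every candidate"
    return sum(WEIGHTS[trait] * ratings[trait] for trait in WEIGHTS)

# Same traits, same scale, every candidate: comparable numbers, not vibes.
candidate = {
    "coachability": 4,
    "conscientiousness": 5,
    "business_acumen": 3,
    "ai_fluency": 4,
}
print(round(weighted_score(candidate), 2))  # 0.3*4 + 0.25*5 + 0.25*3 + 0.2*4
```

The point isn't the arithmetic; it's that choosing the traits and weights up front forces the "divide and conquer" assignments (who probes coachability, who probes conscientiousness) and gives you a number to revisit when you review interview notes after a mis-hire.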
Kyle's structure at Owner: recruiter screen → hiring manager interview → mock call or case study (weighted heavily) → bar raiser with him focused on DNA and values. Sales craft gets assessed earlier, not in the final round.
The other thing most companies don't do: actually calibrate with their recruiter. Kyle and his recruiter, Matt, have been partnered for seven years across Owner and Shopify. Early on they sat in a room for 90 minutes reviewing resumes together, calling out what they liked and didn't like, until they saw the world the same way.
"Spending 90 minutes reviewing resumes with the recruiter to calibrate, that's a grind, but it pays off in spades."
The companies getting this right are doing the unsexy work: structured scorecards, calibrated recruiters, reviewing interview notes against hire outcomes, tightening the loop constantly. Vibe hiring just means you're flying blind when it comes time to ask what went wrong.
How do you find top GTM talent to follow and learn from?
Kyle's answer was refreshing: Twitter and YouTube.
Specifically, X is where the most current AI conversation is happening right now, in long-form. If you only live on LinkedIn, Instagram, or Facebook, you'll miss most of it. Start following AI practitioners, let the "For You" feed educate you, and use it as a gateway to the longer YouTube content worth your time.
For fundamentals, Kyle recommends Andrej Karpathy's long-form YouTube videos on how LLMs work, the difference between pre-training and post-training, and core concepts like tokens. Not sexy, but important if you want to actually understand what these tools are doing in the wild.
One concrete step for this week
If you take one thing from this: get structured in your interview process.
Pick one role you're actively hiring for. Write down the five to seven traits that actually matter for success in that role. Assign weights. Build a question for each trait. Make sure the same questions get asked in the same order by the same people across every candidate.
That's it. Do that, and in three months you'll have data to go back to when something works and when something doesn't. Without it, every hire is an N of 1 and you learn nothing.
Final thought
The best revenue orgs right now aren't just hiring AI-fluent people. They're building interview processes that actually surface who's AI-fluent, structuring their teams so the right people are building the right things, and staying flexible as the tooling evolves.
Most of the founders we work with are somewhere in the middle of figuring this out. If that's you, Kyle's framework is a good place to start.
This episode of Human First is available wherever you listen to podcasts. You can follow Kyle on LinkedIn and find The Revenue Leadership Podcast for more on AI in go-to-market.

