Team Enablement
Mar 5, 2026
|
6 min read

How to build an AI committee that actually drives adoption

Greg Leach, Senior Director of Expansion Revenue at 7shifts, didn't mandate AI adoption or hire a dedicated expert. He built a system—and over half his 230-person company showed up voluntarily to learn.

Most companies approach AI adoption the same way: leadership announces it, a few enthusiastic people run with it, and the rest of the company watches and wonders what they're supposed to do.

Six months later, there are pockets of progress and a lot of inconsistency. Marketing figured something out. Engineering is doing their own thing. And nobody really knows what the company's actual position on AI even is.

Greg Leach, Senior Director/GM of Expansion Revenue at 7shifts, saw this pattern coming and decided to get ahead of it. Rather than let each department figure it out independently—or hire a dedicated AI person—he built a cross-functional AI committee from scratch. And the results were immediate: at their first optional AI demo session, over half the company showed up voluntarily.

Here's exactly how he did it.

The problem with leaving AI adoption to chance

When AI pressure first hit 7shifts, Greg's instinct wasn't to mandate anything. It was to listen. And what he heard from employees was revealing:

"I feel like across our employee base that people felt like they were falling behind. Like, who knows what my type of job will look like in three years, right? If I'm a marketer, I'm a salesperson, if I'm a product person, I'm an engineer."

That anxiety is real. And it's made worse by the constant noise — LinkedIn posts about companies moving faster, headlines about job displacement, and a flood of AI tools that nobody has time to evaluate properly.

The deeper problem Greg identified wasn't motivation. People wanted to learn. The problem was structural: no clear company viewpoint, no easy way to get tools approved, and no place to learn from colleagues who'd already figured something out.

You can't mandate your way out of a structural problem. You have to build a system.

How Greg built his AI committee

Step 1: Ask for volunteers, not assignments

Greg's first move was to resist the temptation to appoint people. Instead, he asked two questions company-wide: who wants to be part of an AI committee, and who wants to chair it?

"So basically we asked two things. One, who wants to put their hand up to be part of the AI committee generally? And then two, do we have any volunteers to be basically our chair, our committee head?"

The committee ended up being 15 to 17 people. Greg himself volunteered to be the chair. Critically, he made sure every department had representation — marketing, sales, engineering, product. This wasn't a product initiative. It was a company initiative.

The volunteer structure matters more than it might seem. A committee of people who raised their hands carries completely different energy than a committee of people who were assigned. One creates ownership. The other creates obligation.

Step 2: Diagnose before you prescribe

Before the committee did anything else, Greg wanted to understand what was actually broken. They pulled employee survey data and looked for recurring themes. Three clear problems emerged.

Problem one: No visible company viewpoint on AI. A position document existed, but as Greg described it: "It's like a document that no one had — maybe it had been shared in a Slack group, but you know how people forget." The fix was simple but important: make the company's AI stance visible, consistent, and easy to find.

Problem two: Tool access and budget friction. Getting an AI tool approved could take two and a half weeks. That's enough friction to kill experimentation entirely. People don't file requests for tools they're not sure they're allowed to want.

"What is our tool approval process and how does it work? Do we feel like it is a reasonable turnaround time to get tools? We don't want an AI tool to take two and a half weeks to get approved."

The committee documented the process, streamlined it, and created department-level budgets specifically for AI tool experimentation. Each department could now try something without navigating a lengthy approval chain.

Problem three: People don't know what they don't know. The learning gap was the trickiest one. As Greg put it: "Once you figure out a workflow and how to leverage AI to do that workflow, the biggest challenge is it sometimes takes four or five hours to set up that process." Most people give up before they get there. The solution had to make that learning curve feel shorter.

Step 3: Create peer-to-peer learning, not top-down training

This is where Greg's approach gets genuinely interesting. Rather than rolling out formal AI training, the committee launched AI demos — bi-monthly sessions where anyone in the company could volunteer to show how they were using AI.

The format was deliberately low-stakes: here's the problem I had, here's what I built. Three to five demos per session. Cross-functional. Anyone could present — not just engineers or product managers.

"It's just: come in and do your thing. Quickly explain what you did, the problem, and then share what you built."

The first session happened in December. 7shifts is a 230-person company.

"We had 100 to 120 people show up voluntarily. So that shows you the interest of like people that want to learn how to use this, right? We would just make a couple of posts in Slack — 'hey, come to this.'"

Over half the company. Voluntarily. From a couple of Slack posts.

What made it work was the osmosis effect Greg was deliberately creating. People watched their colleagues solve real problems with AI tools they hadn't considered, and walked away saying "I didn't know AI could do that." That shift — from abstract anxiety to concrete curiosity — is exactly what Greg was after.

The next planned addition: AI office hours, where a handful of technically strong employees would make themselves available to help colleagues build their own workflows without spending four hours struggling alone.

Step 4: Build in accountability from day one

Greg knew the committee's biggest vulnerability: it's everyone's side job. Without a forcing function, even well-intentioned committees drift.

His solution was to bake accountability into the company's existing rhythm. Every month at the company all-hands, the AI committee gives an update.

"We're trying to give an AI committee update every time. So it's a forcing function for us — because we meet on a monthly basis, and there are action items that come out of that and things we want to do."

The monthly cadence keeps the committee moving without adding overhead. And because it's public — in front of the whole company — there's genuine accountability to show progress.

Why this approach works

What Greg built isn't complicated. But it works because it addresses the actual barriers to AI adoption rather than trying to motivate people past them.

It's distributed, not dependent. No single person owns AI at 7shifts. If Greg leaves, the system keeps running. Knowledge is spread across every department.

It removes friction before it asks for effort. Streamlining tool approval and creating department budgets means people can experiment without navigating bureaucracy. The barrier to trying something dropped significantly before any learning initiative launched.

It creates pull, not push. Voluntary demos, optional office hours, raised hands — nothing in Greg's system is mandatory. And that voluntary energy translates directly into the kind of engagement you can't manufacture with top-down mandates.

The early signals Greg described are telling:

"I think we're hearing positive signals. There's positive participation. People just want to know: how can I automate more? How do I do those things?"

People are asking "how do I automate more?" That's a fundamentally different question than "do I have to use AI?" It means the committee shifted the frame from obligation to opportunity.

Key takeaways

  1. Start with volunteers. A committee of people who raised their hands will always outperform one that was assembled by assignment. Ownership beats obligation.
  2. Diagnose the structural barriers first. Motivation isn't usually the problem. Unclear policy, slow approval processes, and no peer learning opportunities are. Fix those before you launch any adoption initiative.
  3. Peer-to-peer learning scales better than top-down training. When a salesperson sees another salesperson showing how they automated a workflow, it lands differently than any training deck. Create the conditions for that kind of knowledge sharing.
  4. Make it easy to experiment. Department-level AI budgets and a clear tool approval process remove the friction that quietly kills experimentation. You can't learn if you can't try.
  5. Build the forcing function in. Committees need accountability structures or they stall. A monthly all-hands update is a simple, low-overhead way to keep momentum without micromanaging.

The bigger picture

Greg's committee isn't just an AI initiative. It's a model for how companies can build distributed learning systems that scale without depending on any single person or team.

The companies that figure out AI adoption in the next two years won't be the ones who mandated it hardest or hired the most AI experts. They'll be the ones who made it easy for curious people to learn from each other — and built systems that kept that learning going.

This article is based on insights from Greg Leach's conversation on the debut episode of Product Leaders Lab. Check out the full episode and episode breakdown here.