Product teams waste hours in status meetings, Slack threads, and coordination overhead. Three product leaders share six ways high-performing teams are using AI tools to automate the busywork.

Product teams have always had a coordination tax. Status meetings, Slack threads, stakeholder updates, release notes, feedback loops that never quite close. For most teams, that tax quietly eats 30 to 40 percent of every week before anyone gets to actual product work.
What's changing now is that teams are finally finding ways to automate that overhead, not with better meeting hygiene or tighter Slack norms, but with AI agents doing the connective tissue work that used to fall to PMs.
At the AI Product Leaders Summit in March 2025, I brought together three product leaders to talk honestly about what's working. Betsy Shaak, VP of Product at Voxpopme, has been building custom AI agents to solve specific internal coordination problems. Greg Leach, Sr. Director of Expansion Revenue and GM at 7shifts, chairs the company's cross-functional AI committee and is thinking carefully about adoption at scale. Partho Ghosh, VP of Product at Uberall, is rethinking the entire product lifecycle from an AI-native perspective. Here is what they shared.
Every product team has some version of the same story. A customer asks for a feature. The team builds it. And then nothing happens. No one loops back to the customer. The CSM finds out three weeks later on a call. The PM has moved on to the next sprint.
This is what Betsy calls "the gray area." It's not a technology problem or a process problem. It's a handoff problem, and it compounds at scale.
"Clients request something. You build it. Then there's this big gray area of time where they either maybe find out that it's been built, or they ask about it again in their next CSM call, and there's no actual communication gap filler."
The bigger the team, the worse this gets. And with AI accelerating shipping velocity, the gap between what gets built and what customers actually know about is growing, not shrinking.
Betsy built a custom agent, primarily using Claude, that watches the product releases channel in Slack, scans multiple feedback sources (client calls, sales calls, feedback forms), and surfaces potential matches between what shipped and who asked for it. When it finds one, it drafts an outreach email for the PM to review and send.
"It tries to make those connections, and then in our case built a Slack message that was like, 'This person said this thing. We think it might be related to this release. If it is, here's an email copy to this person that you can then send to close that gap and close the loop of feedback.'"
The agent is not perfect. Betsy acknowledged it sometimes surfaces connections that are completely unrelated to the release. But that is the point: it surfaces them for a human to judge, rather than letting them fall through the cracks entirely. The manual alternative, a PM cross-referencing Jira tickets, hunting down call notes, tracking down the CSM, drafting an email, is an "absolute pain in the side," as Betsy put it. A messy agent beats no agent.
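The shape of Betsy's pipeline can be sketched in a few lines. This is a hypothetical illustration, not her implementation: her agent uses Claude to judge relevance across call notes and feedback forms, while here a crude keyword-overlap score stands in for that LLM call so the end-to-end flow, match then draft then human review, is visible.

```python
# Hypothetical "close the loop" matcher. A keyword-overlap score stands in
# for the LLM relevance judgment Betsy's real agent gets from Claude.

def keyword_overlap(a: str, b: str) -> float:
    """Crude stand-in for an LLM relevance call: shared-word ratio."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def match_feedback_to_release(release: str, feedback: list[dict],
                              threshold: float = 0.2) -> list[dict]:
    """Surface candidate matches between a shipped release and open feedback.
    Each candidate carries a draft email for a PM to review -- never auto-sent."""
    candidates = []
    for item in feedback:
        score = keyword_overlap(release, item["request"])
        if score >= threshold:
            candidates.append({
                "customer": item["customer"],
                "score": round(score, 2),
                "draft_email": (
                    f"Hi {item['customer']}, you asked about "
                    f"\"{item['request']}\" -- we just shipped: {release}."
                ),
            })
    # Highest-confidence matches first; a human decides what actually gets sent.
    return sorted(candidates, key=lambda c: c["score"], reverse=True)

feedback = [
    {"customer": "Acme", "request": "export survey results to csv"},
    {"customer": "Globex", "request": "dark mode for dashboards"},
]
matches = match_feedback_to_release("csv export for survey results", feedback)
```

The design choice worth copying is the last step: the agent produces drafts and candidates, and a PM makes the call, which is exactly why a noisy matcher is still useful.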
Partho runs a global team. That means internal calls at 2AM or 5AM Pacific time. His solution is an AI agent that joins those calls in his place, routes the notes through Granola into Claude, and surfaces a prioritized summary before he starts his day.
"Not just through Gemini, because I find that sometimes not the greatest notes. It'll then input that info into Granola, which comes to me and Claude and say, 'Here's what you missed and here's the things that are important.'"
The key distinction he made is proactivity. Native tools like Teams or Gemini can transcribe meetings, but they require you to go hunting for the summary afterward. Partho's setup sends him a briefing. When you start your day and things are already backed up, he noted, "you're already behind the eight ball." The system works precisely because it removes the hunting step.
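The pattern Partho describes, push a digest rather than make the human hunt, can be sketched as follows. This is a hypothetical stand-in: a keyword scan plays the role of the Claude pass that ranks what actually matters in his setup.

```python
# Hypothetical "morning briefing" sketch: overnight meeting notes arrive as
# one prioritized digest. The keyword scan below stands in for an LLM pass
# that ranks importance.

URGENT_MARKERS = ("blocked", "decision needed", "escalation", "deadline")

def morning_briefing(overnight_notes: list[dict]) -> str:
    """Turn raw overnight meeting notes into one proactive digest,
    urgent items first, so nothing has to be hunted down."""
    def urgency(note: dict) -> int:
        text = note["summary"].lower()
        return sum(marker in text for marker in URGENT_MARKERS)

    ranked = sorted(overnight_notes, key=urgency, reverse=True)
    lines = ["Here's what you missed overnight:"]
    for note in ranked:
        flag = "URGENT" if urgency(note) else "FYI"
        lines.append(f"[{flag}] {note['meeting']}: {note['summary']}")
    return "\n".join(lines)

briefing = morning_briefing([
    {"meeting": "EMEA standup", "summary": "Release on track, no issues."},
    {"meeting": "APAC sync", "summary": "Rollout blocked, decision needed on pricing."},
])
```

The value is entirely in the delivery direction: the same notes exist in the transcription tool either way, but a briefing that arrives ranked removes the hunting step that puts you "behind the eight ball."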
As shipping velocity increases, one of the things that quietly breaks is release communication. At Uberall, the team's output has grown so fast that keeping release notes current became genuinely difficult.
One of Partho's senior PMs built an agent specifically for this. It wraps up everything that shipped, synthesizes the release notes, and keeps pace with a team moving faster than manual documentation ever could.
"There's so much velocity happening on the teams, unlike just an unprecedented amount of things going up before. Things are just falling through the cracks as a whole."
This is a use case that sounds small until you consider the downstream impact. Release notes are not just internal documentation. They are how customers, CSMs, and sales teams understand what the product can do. When they lag, everyone is working off stale information.
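The aggregation step of such an agent can be sketched simply. This is a hypothetical outline, not the Uberall implementation: grouping shipped items by product area stands in for the synthesis an LLM would do over commit messages and tickets.

```python
# Hypothetical release-notes aggregator: gather everything that shipped in a
# window into one customer-facing note, so docs keep pace with shipping
# velocity. Grouping by area stands in for the LLM summarization step.

from collections import defaultdict

def build_release_notes(shipped: list[dict], version: str) -> str:
    """Group shipped items by product area into a single markdown-style note."""
    by_area: dict[str, list[str]] = defaultdict(list)
    for item in shipped:
        by_area[item["area"]].append(item["title"])
    lines = [f"Release notes {version}", ""]
    for area in sorted(by_area):
        lines.append(f"{area}:")
        lines.extend(f"- {title}" for title in by_area[area])
    return "\n".join(lines)

notes = build_release_notes([
    {"area": "Analytics", "title": "Exportable funnel reports"},
    {"area": "API", "title": "Webhook retries"},
    {"area": "Analytics", "title": "Seasonality filters"},
], version="v2.4")
```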
Greg's team at 7shifts sits on a lot of data: company dashboards, funnel data, PLG metrics, sales data. The problem is that pulling useful insights from all of it requires someone to know where to look, what filters to apply, and how to account for things like seasonality. That process soaks up time that could go toward actually acting on the data.
What Greg's team is working toward is an agent that can connect those disparate data sources and surface insights directly, rather than requiring someone to go spelunking through dashboards.
"Connecting all of that data together within an agent to basically be able to give insights is something that we're trying to work towards, because that enables faster decision making right now. Because many of our people in the company spend a lot of time just looking at data, putting filters on, doing those things."
The goal is not to eliminate analysis. It is to remove the mechanical layer of data retrieval so that the team's time goes to interpretation and action.
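A minimal sketch of that mechanical layer might look like the following. Everything here is hypothetical, the source names, fields, and metrics are illustrative: the point is that the agent does the joining and arithmetic, leaving interpretation to the team.

```python
# Hypothetical insight agent: join per-month data from two sources (e.g. a
# funnel dashboard and a sales system) and surface headline numbers, so
# humans start at interpretation instead of filtering. All names and fields
# below are illustrative.

def surface_insights(funnel: dict, sales: dict) -> list[str]:
    """Join per-month funnel and sales data into plain-language summaries."""
    insights = []
    for month in sorted(funnel):
        signups = funnel[month]["signups"]
        deals = sales.get(month, {}).get("closed_deals", 0)
        rate = deals / signups if signups else 0.0
        insights.append(
            f"{month}: {signups} signups, {deals} deals ({rate:.0%} close rate)"
        )
    return insights

insights = surface_insights(
    funnel={"2025-01": {"signups": 400}, "2025-02": {"signups": 500}},
    sales={"2025-01": {"closed_deals": 20}, "2025-02": {"closed_deals": 35}},
)
```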
Partho's team started using tools like Bolt and Replit to build pixel-accurate prototypes that customers can interact with before anything is actually built. The goal is simple: make the decision about what to build faster.
"It was an exercise of I just want to be able to make a decision of what the right thing to go and build is faster."
They reached 95% pixel accuracy between their prototype tool and their actual product. That meant customers could not easily tell the difference between what was real and what was not, which made the validation signal much cleaner.
From there, many of their PMs have shifted directly to using Claude Code via CLI, bypassing the no-code prototyping tools entirely. Partho described the product lifecycle shift this way: the time from ideation to validation that used to take weeks now takes days. What used to take months now takes weeks.
Betsy echoed this from her own experience, describing a recent project where she paired with a backend engineer, conducted internal user interviews, and fed all the recordings and notes into Claude to use as a reference throughout the build:
"That product went 10 times faster in the first three weeks than it would have gone before. We were showing an update to the team, and some people were like, 'Were you working on the weekends on this?'"
Greg chairs a cross-functional AI committee at 7shifts that looks specifically at how to drive internal adoption without creating downstream chaos. When you give 250 people access to AI tools and tell them to start building, things can break in ways you did not anticipate: security gaps, mishandled PII, questionable code quality.
"We're trying to find the right balance of we're going to get people certain access to certain tools. What's the right way from a security standpoint to do that?"
The committee's approach has two practical elements: company-wide AI demos where non-technical employees share what they have built (which Greg noted generates more genuine excitement than product demos ever did), and an "AI builders program" that identifies internal champions to help teams tackle specific automation problems.
The principle behind both is that broad adoption needs structure. The goal is not to restrict experimentation. It is to make sure experimentation does not create risks that slow everything down later.
Every example in this panel included a human checkpoint. Betsy's close-the-loop agent drafts emails; a PM decides whether to send them. Partho's meeting agent surfaces priorities; he decides what to act on. Greg's committee sets guardrails before tools get deployed widely.
Greg made this point directly when talking about junior team members: there is a risk that AI outputs feel authoritative even when they are not. If someone relies on a tool to tell them the answer without having the domain expertise to evaluate it, the output is often generic at best.
"You're reliant on that to tell you whatever you think is right. And then when you go have that conversation with that person, you can tell that they were given this answer."
The teams getting real efficiency gains are not the ones automating decisions. They are automating the retrieval, synthesis, and drafting work so that human judgment can focus on what only humans can do: evaluating context, reading relationships, and making calls that require accountability.
What these three leaders are describing is a fundamental shift in where PM time goes. The coordination layer of product work, the meetings, the updates, the feedback loops, is becoming automatable. That does not mean it disappears. It means it gets compressed, and the time reclaimed goes somewhere more valuable.
Here are my key takeaways from this panel discussion:
- Automate the connective tissue, not the decisions: retrieval, matching, synthesis, and drafting are where AI reclaims real time.
- An imperfect agent that surfaces candidates for human review beats a manual process nobody completes.
- Proactive beats searchable: a briefing that comes to you is worth more than a transcript you have to hunt down.
- Keep release communication in step with shipping velocity, or customers, CSMs, and sales work off stale information.
- Broad adoption needs structure: demos, internal champions, and security guardrails keep experimentation from creating risk.
- Every workflow keeps a human checkpoint, because AI outputs can sound authoritative without being right.
As Betsy put it near the end of the panel, AI is not just making her team faster at their current jobs. It is changing what those jobs can look like: "I think further out now because my team has such a good handle on the short term."
That is the real efficiency gain. Not fewer meetings. Longer time horizons.