Asana leans on win-loss analysis to fuel their compete program

Tim Bowman

Head of Compete

Math, physics, and win-loss analysis. Tim talks about his academic background and how the ability to identify trends and solve problems is critical for companies that—like Asana—are seeking to understand the complexity of the customer buying experience.

The discussion revolves around how Asana uses win-loss intelligence to enhance their competitive program. Tim, the head of compete at Asana, explains his role, which includes identifying competitors, understanding Asana’s competitive advantages, and conducting win-loss interviews to learn why customers choose or reject their product. Initially, Asana faced little competition in collaborative work management, but as the market matured, a structured compete program became necessary. Tim emphasizes the importance of win-loss analysis because it provides concrete insights into customer perceptions, helping to refine messaging and product development based on actual customer feedback rather than internal opinions. He shares how insights gathered from these analyses inform product roadmaps, training needs, and overall strategy.

Tim also advises those starting a compete program to begin small and focus on delivering valuable insights consistently. He underscores the importance of building trust and credibility within the organization and encourages advocating for the establishment of such programs, highlighting the benefits not just for the company but also for individual career growth. Overall, the conversation highlights the significant role of competitive intelligence in driving business success and informed decision-making.


Q&A

Clozd: Today we're talking about how Asana uses win-loss intelligence to fuel their compete program. Tim, could you tell me a little bit about your role at Asana?

Tim: So, as you mentioned, I’m the head of compete at Asana, and that means my job is to identify who our competitors are, what our competitive differentiation is, and how I surface that differentiation through sales enablement assets and trainings. I do large-deal support, and sometimes we’ll need custom messaging, custom compete insights, so I join those deals. Then I also do win-loss interviews once the deals have closed to get a better understanding of why people did or didn't buy Asana. And then the last thing I do in my role—like I need another thing to do—is the QBR analysis. So I do a deep-dive analysis every quarter into Salesforce data and churn data. Why are people buying us? Why are they leaving us? What are the win rates per competitor, per region, per segment?

You started the compete program at Asana. How did that come to be?

Well, Asana was a pioneer in a category called collaborative work management. And in the early days, there wasn't really much competition. It was really just non-use of collaborative work management. So you didn't really need a robust compete program. You could rely on anecdotes from the field on what's working and not working. But as the category matured, competition started to heat up, and we couldn't rely on anecdotes. In order to win more, we needed a scalable way of getting these insights out into the field, into the hands of our sellers.

And so our CRO decided we needed to have a compete program. I’d been on the business strategy team because I did a lot of market research, and ultimately I raised my hand for this compete program because I wanted to have a more tangible impact on the day-to-day business. Ultimately, my metric was competitive win rate.

How did win-loss analysis become a key part of that compete program?

To me, it's the most important part of the program, because we have a lot of ideas and opinions inside the company about why our product's the best, why people buy it. But ultimately, I like to think that eight hours of talking to internal leaders and understanding our product is not a good use of my time when I could spend those eight hours with customers to understand why they did or didn't buy us. Because ultimately, people buy things based on their expectations of the product, not necessarily the product itself.

So we needed to understand firsthand from the actual decision makers why they did or didn't choose Asana. And often it's not about the product. Win-loss allows you to zoom out a bit and understand the sales experience, the product, the brand, the marketing, the intangibles of why people do and don't buy Asana, so that I could bring that back into the company to improve the messaging, improve the understanding of our product and the perception of not just the product but also the category.

And I wouldn't be able to do that if I didn't have customer quotes to back it up from every segment and region out there. Otherwise, they're just opinions—and you can have really well-formed opinions, but ultimately you need a validation point on those opinions, and that's what win-loss is to me at Asana.

How are you using the competitive insights you get through win-loss analysis?

I always like to say that the competitive insights from win-loss are just one vector coming in. There are a ton of other insights from user research, market research, and analyst relations that we have to understand. But the insights I want to bring back are: Where are we below minimum viable in our feature set? What are the essential features that must be part of a product for it to be considered a best-in-class platform within a broader tech ecosystem?

And so those are the insights I typically focus on. Sometimes they're small, like we just need this extra proofing capability. And sometimes they're really large, like if we’re not telling the story that gives a CIO confidence that we’re scalable and flexible and the right pick over the long term.

When you gather these insights, what do you do with them, and how do you share them internally?

Well, the first thing is that I read them all. It's quite a lot, but it's worth the time investment. I dig into not just the great analysis that comes out of Clozd, but every interview, because I want to make sure I understand the nuance. Then I use the reporting in the platform. Quarterly or every six months, depending on how many interviews we did, I'll put out a report and read it out to our head of product, our product leads, our head of sales, and our QBR audiences.

But those are discrete points in the year, so I also onboard people to the platform so that they can self-serve. So product managers, PMMs, analyst relations, user research—they can tap into them in real time. And I think I'm happiest when I see those insights being used without me getting involved, because I don't want to be a dependency or a blocker to people using these insights.

I also look at the data over time as a series—because we're always launching new things, competitors are launching new things, there's different macroeconomic factors. So I’ll export the data from the platform and slice and dice it a little bit differently for my different audiences, and I’ll put it into a time series view so I can see how the drivers for winning and losing are changing over time and try to correlate that back to what it is that we did. What did we launch? How can we do more of that?

But it's an art, not quite a science. It looks like science, but it's definitely much more qualitative than quantitative. It's a great input into the product roadmap, into our go-to-market strategy and our enablement strategy. 

That’s awesome. Can you tell me a little bit about some of those outcomes and some of the sales or enablement teams that have benefited from it?

Sure, and I'll be as specific as I can. I can't give any of the specifics about how we've shifted our roadmap, but one good example is that there was something we consistently heard about why people bought us. And when we started doing interviews, it did show up as the number one reason why people bought us—but it was also one of the top reasons people didn't buy us.

And that can sound paradoxical, but it really is a perception and the expectations of the product. And so we were able to refine our messaging to be less broad and more specific. From a product standpoint, we’ve made pretty significant material changes to our roadmap, even after the roadmap was published, which is not supposed to happen. But when you get compelling data backed up with a statistically significant sample size and you surface that to the right teams, they're going to use it. And if it's the right thing to do to shift the roadmap, no matter how disruptive it might be, that has happened.

Now, I'm not going to say it happens all the time—you don't want to be too disruptive. But it's really important when you do see something that’s diverging from what everyone else is thinking, to raise your hand in these types of programs and bring the data. And it can be tough, because I think of this as a mirror being held up to our beliefs, and it's kind of a funhouse mirror and you may not like what you're seeing. But ultimately, if you're hearing it repeatedly, it’s the truth—whether you believe it or not.

And so it's really pointed us in the right direction to do more research, to do more validation. It's not really the end-all, be-all because I think we do 75–80 interviews a year, which is enough to see the patterns, but I wouldn't say it's enough to make a massive shift in your strategy or roadmap. It's just another important vector to understand when you're making those decisions.

What's the secret to getting good competitive intelligence from your win-loss program?

To me, it's the mindset with which you're approaching it. I like to think of myself as a scout. My job is to bring back the most accurate map, whether I like it or not. So subtract any personal beliefs or deeply held opinions, because ultimately, that's for marketing. That's when you're a preacher, that's when you're taking these insights, amplifying them, and trying to convince people that your point is right, that your product is better.

Now, I need to do analysis and storytelling on top of it to bring the insights to life, but I can't impose my own beliefs on it.

And a corollary is that sometimes you're going to be telling people things they don't want to hear, and depending on where you work, some cultures avoid those hard conversations. But there's a time for spiciness and a time for blandness. And I think these programs are good for spiciness, because you have the data to back it up. These aren't opinions, which is another secret to success.

Don't exaggerate when you're sharing these insights. If the head of product, let’s say, is asking a really specific question, just say, "I don't know." And then go back to the data to get an answer. Don't speak off the cuff, because the second the insights appear non-credible, the program will start to wither. Because ultimately, the point is that this is what customers are saying, not what we think customers are saying.

So I try to be as hands-off as possible. And that's what I always recommend, because if you're not telling the hard truths, why even bother? If you're just going to reinforce already-held beliefs, why bother? It takes time and it takes money to do it right, but you can get a lot of return out of this because it's invaluable to so many aspects of the business.

What piece of advice would you give to other businesses who might be interested in launching a compete program similar to yours?

Don't make the perfect the enemy of the good. Just get started. Get started on a small scale. You can do interviews without a great vendor to help you scale it, and sometimes you need to do that. And so when I first got started, I was working with sellers directly to pull me into deals and into interviews, and we started finding insights that were valuable. And then one of our executives that had used Clozd in the past recommended it to me, and I looked into it.

And because I am just a team of one, I can't really scale myself—but in order to get permission to scale myself, you need to start to show some value. So I share those insights regularly. I have a newsletter that I consistently send out. I also post them to a project internally every month to help create a groundswell, because ultimately, you need to build trust and credibility over time.

So just start small, start delivering insights, start earning the trust of your sales teams so that when you expand it to the entire global sales team, you can get testimonials from other sales managers and other sales leaders who trust you and know that win-loss analysis with Clozd is worthwhile. And so it's that social currency and advocacy internally that really does scale the program over time. But if I had waited to have all the budget and programs and understanding what the true impact was going to be, it probably never would've gotten started.

Done is better than perfect. I like that.

It's true. Because what is perfection? It's an asymptote that you're just going to always approach and never get to. So just know what good looks like. That's also a benefit I get from working with you all. I don't run these programs at scale globally, but I have a great CSM who advises me along the way, and they've become a great partner to help scale myself because personally that's what I want to do. I want to grow my impact, grow my career. And ultimately, you wouldn't believe how many people at Asana think I have a whole team of people doing work.