February 25, 2021
Product

How to supercharge product development

Everyone wants to crack the growth code, but the truth is that what works for one company won’t necessarily work for another. Success lies not in assuming there’s a one-size-fits-all solution, but in being intentional and strategic about how you identify and validate the right set of growth techniques and processes for your specific company.

I advise a lot of different startups on how to approach growth, data, and product strategy. As a Sequoia Scout and Reforge EIR, I’ve had the opportunity to support companies like Carousell, CRED, Quilt, Kumu, Celo, and others. The company where it all started, however, is Gojek, a Super App used for ordering a variety of products and services (food, commuting, digital payments, shopping, hyper-local delivery, and many more).

I was Gojek’s first data hire, and in my four-and-a-half years with the company, I developed the business intelligence and growth functions that enabled Gojek to become the largest consumer transactional tech group in Southeast Asia. I inherited the growth team (which consisted of about eight product marketing managers with no technical skills) and built out the data team from scratch. 

In the early days, my role was primarily ramping everyone up on SQL and evangelizing growth methodology throughout the organization. Over the years, our team matured and was able to take on more sophisticated projects. Our goal was to supercharge product development in a way that drives repeatable, scalable, and predictable growth. 

At the same time, we worked hard to ensure that our team had a certain amount of autonomy, meaning that we didn’t rely on engineering or other resources. This enabled us to remain agile and build momentum since we didn’t have to slow down to wait for access to limited resources. Our MVP-driven, iterative approach—identify and prioritize opportunities, develop hypotheses, test, validate, and expand—gave me the chance to learn a lot about what really drove growth at Gojek. And while the solutions we developed might not work for everyone, the scrappy, action-oriented process we used to uncover and develop those solutions can apply to growth efforts almost anywhere.

4 keys to success

Over the course of my career—and many, many different growth-focused projects—I have identified four core success pillars. Each one on its own has the potential to change the game, and a combination of all four is a sure-fire way to supercharge your product development for exponential growth.

Understand what drives business growth

The first step is really digging into and fully comprehending what drives growth for your specific business. Once you understand the key drivers, you need to be able to quantify and assess opportunities not just in terms of their size, but also in terms of their feasibility within your existing constraints. You might identify what looks like an enormous opportunity, but if you don’t have the resources—technical skills, internal integration, leadership buy-in, etc.—you’re never going to be able to close the gap between the big idea and reality. You need to understand both the opportunity and what leverage you need to build to act on that opportunity.

Ensure data is credible and accessible

Data problems are not linear; they are exponential. In other words, for every wrong or malformed piece of data that you work around today, you’ll have an exponentially bigger mess to deal with tomorrow. It’s like compound interest. Because of this, postponing a data fix only creates more work down the road. Even so, sometimes postponing a fix is the right decision. When that happens, it’s important to keep everyone aware of the analytics debt they are accruing.

That said, the main goal with data is to ensure that it is credible. And, if it can’t be 100% perfect, at least make sure it’s consistent and ‘directionally correct.’ The best product managers will not only understand the data flow in general, they will also take the time to verify exactly how the data is being piped and that it’s happening correctly. With data, small details can make a big difference; so this kind of due diligence pays off big in the long run. 

After all, you are relying on your data to effectively inform your decisions so you can identify and build your growth strategy around truly exponential lift opportunities. It only makes sense that you’ll gain the most relevant and accurate insights by using the highest quality data. 

(For more details, you can read my post, “A step by step process to fix the root causes of most event analytics mistakes.”)

Avoid building one-off operations

The last thing you want to do is embark on expensive and time-consuming builds that ultimately fail to add up to meaningful insights or actual growth. Instead, focus on identifying which tools you already have at your disposal for running tests. Get creative about how you use these tools to develop and deploy experiments. Only invest time and effort in building a new testing platform when you have established a consistent pattern of need for that platform. (You want to make sure you get your money’s worth out of it once you’ve built it.) From there, the process is about deploying experiments, verifying wins, and then productizing those wins in an effective way. 

For example, when we first started thinking about designing a new “commuter subscription package” to increase retention rates, we didn’t overexecute on our assumptions and develop a product from scratch. Instead, we ran tests using our existing tool — a voucher platform that allowed users to apply nominal or percentage-based discounts to any single ride. We built on the existing tool by adding a time-restricted feature that mimicked the expiry period a subscription might have, allowing the team to deploy vouchers at a higher leverage point to change user behaviors. 

In the end, our original hypothesis didn’t pan out, but that was okay. We hadn’t spent a lot of time or money designing a new commuter package (which wouldn’t have moved the needle at the time). And, just as important, we opened up the ability to test time-based vouchers on many other hypotheses, like whether the time of a user’s first interaction with our app (morning vs evening) correlated with a user eventually converting into a power user. In addition, because we built things in a generalized way, another team actually ended up leveraging our promotions for their own specific product use cases, and didn’t even have to communicate with our team to launch their experiment. 

Find the right internal partner

Last, but certainly not least, it’s critical to team up with the right internal partner. At Gojek, we started with the driver operations team. We chose this team because we saw that they had a lot of growth potential, but they lacked resources, tooling, and an extra hand to help them access that potential. They were too busy constantly putting out fires to think much beyond any given day. 

The first growth challenge we tackled was how to get our driver activation rates to scale up alongside our onboarding numbers. We were literally renting stadiums to onboard new drivers, but then seeing conversion rates on our orders dropping to 50% and then 40%. We discovered that there was a huge delta between the rate of onboarding drivers and the scale at which Gojek could provide the necessary training and education on the product to encourage our drivers to accept every order that came to them on the app. 

We started this growth experiment by leveraging the Twilio API, a CSV file, and a Python script to send a simple text notification that said, “You are in the bottom 20% of driver acceptance rates. This makes our customers sad. Please accept every order you receive.” The process was very straightforward. The results were massive: drivers who received these texts increased their order acceptance rates by 20%.
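The mechanics really were that simple. The sketch below is a hypothetical reconstruction, not Gojek’s actual script: the CSV columns, phone numbers, and message routing are all illustrative. It ranks drivers by acceptance rate and flags the bottom 20% for the SMS.

```python
import csv
import io

MESSAGE = ("You are in the bottom 20% of driver acceptance rates. "
           "This makes our customers sad. Please accept every order you receive.")

def bottom_20_percent(rows):
    """Return the drivers in the lowest 20% by acceptance rate."""
    ranked = sorted(rows, key=lambda r: float(r["acceptance_rate"]))
    cutoff = max(1, len(ranked) // 5)  # bottom fifth, at least one driver
    return ranked[:cutoff]

# Hypothetical CSV export; column names and numbers are made up.
sample = io.StringIO(
    "phone,acceptance_rate\n"
    "+628111,0.35\n"
    "+628222,0.90\n"
    "+628333,0.55\n"
    "+628444,0.72\n"
    "+628555,0.98\n"
)
targets = bottom_20_percent(csv.DictReader(sample))
for row in targets:
    # In production this line would call Twilio's REST API instead, e.g.
    # client.messages.create(to=row["phone"], from_=SENDER, body=MESSAGE)
    print("would text", row["phone"])
```

The point is less the code than the leverage: a one-afternoon script against data you already have, rather than a purpose-built notification platform.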

This early, low-tech success got a lot of people in the organization interested in growth, including folks on the engineering side. As we expanded our team to include both front and backend engineers, we were able to start working on more interesting problems, like identifying who is most likely to convert to a power user and how to drive more engagement within the app.

Ultimately, our team’s partnership with the driver operations team took Gojek from millions of orders per day to tens of millions of orders per day, and we saw meaningful growth across 20 different verticals. A big part of our shared success was our commitment to being true partners and always pursuing collaborative buy-in across the board. We didn’t go in expecting to just announce test results and dictate what our partner team needed to do in response to those results. We spent time helping them understand exactly why something worked or didn’t work so that they became invested not only in the outcome, but also in folding the learnings back into their product development process.

We also made sure to always keep the driver operations team in the loop. We shared our findings with them ahead of publishing to the rest of the organization. This “sneak peek” gave them the opportunity to address any shortcomings or at least develop plans to address them. We never put our partner teams on the spot. In fact, we often worked to intentionally make them part of the solution in a way that not only gave them credit for their role in discovering an opportunity, but also made them the heroes of the story. 

On the flip side, it’s incredibly important for any growth team to build and maintain a reputation as a source of absolute truth. You will likely encounter scenarios in which another group presents statistically invalid results that are either just bad data or, worse, intentionally misleading. In such cases, you want leadership to turn to the growth team as the arbiter of integrity and intellectual honesty. 

Getting started

In any startup, there are a lot of untested hypotheses. Knowing which ones to pursue and how to pursue them requires asking a few specific questions and taking the time to get straight answers.

1. Which opportunity has the greatest potential?

When you’re prioritizing growth opportunities, you want to elevate the ones that have the potential to drive a viral loop. That’s where you’ll get the biggest return on your investment. The trick is not just making a guess about what you think will drive a viral loop, but finding a way to prove that hypothesis.

Take referrals, for example, a common way to create a viral loop. Many companies use a casual contact loop that exposes prospective users to a product when they receive, for instance, a survey or a meeting invite. But, that strategy won’t work for everyone. I worked with an invoicing app that initially assumed they could use a casual contact loop to generate new leads from people who received an invoice from an existing user. But resources were scarce—how could they validate this hypothesis and make the right-sized bet?

To explore this potential opportunity, the team set out to determine how many people in the target audience were qualified for the product. The test was simple—they called the last 50 people who had received an invoice. It turned out that these invoiced users were not, in fact, business owners, so the product was irrelevant to them.

As a result, we deprioritized building a contact experience to drive growth, and switched our focus to the onboarding experience, specifically capturing more information about users’ business categories. The business category question had been removed from onboarding in order to reduce friction, so the team sent a survey to users asking them to categorize their business based on a streamlined selection of 10 possible categories. Over 80% of the recipients answered the question, indicating that this could be a low-friction, high-context way to personalize and acquire users.

With this new information, the team started sending users educational material relevant to their specific business category. We found that users who received that content not only converted and activated more quickly, they also referred more people through the existing referral program. The kicker? Users who selected B2B categories gave us the necessary context to develop a relevant, lightweight acquisition strategy for the users they invoiced.

The moral of this story is that it’s really important to think things through and identify the right leverage points instead of making assumptions. And, while you’re thinking things through, remember to consider any constraints that are outside the product team’s influence. It won’t do you any good to pretend that your team can magically turn a tide that’s already in motion. Instead, look for existing momentum indicators or buy-in that you can enhance and accelerate.

2. What could go wrong? (And what if it goes right?)

It can take as little as ten minutes to look through an experiment design and troubleshoot it for weak points that could derail the effort. Taking the time (before you get started) to review different scenarios and possible outcomes to see if you’ve missed anything critical in the design can save you a lot of pain on the other end.

Sometimes a growth team gets so caught up in the intricacies of their experiment that they forget to think about what comes after the test is complete. Do you have a clear idea of what the next steps will be depending on the kind of results you get? And is it possible for your team to execute those next steps independently, or do you need to get other teams involved? Will those teams be willing and able to jump in to help?

And ultimately, is it repeatable? Will it scale? We already talked about avoiding one-off experiments, but as you get more sophisticated in your approach, you want to go even deeper to ensure that you’re creating truly repeatable and scalable experiments that you can iterate on in order to improve results and gain new insights.

And what if the experiment doesn’t turn out the way you expected? Will you rerun the test based on another hypothesis you had waiting in the wings? Or was there something you missed in the experiment design that you can change to improve the results? The best growth PMs plan at least three steps ahead of launching any experiment.

Playing through different scenarios can save you a lot of time and effort and ensure the optimal performance of your experiments as well as the most efficient implementation of results-based product changes.

3. What data do I need, and how will I get it?

One of the most common mistakes organizations make is failing to draft a plan for how they will use data. Beyond being unable to say how they want to test something, many teams can’t define which data they need to execute the test, how they will collect it, and when they will stop collecting it. Having a clear workflow not only helps you identify what you need to implement that workflow, it keeps you on the path from start to finish.

The process doesn’t have to be complicated, but it does have to be thorough. For example, you need to cover things like how you’re going to set up the data model, what analysis you’re going to run, and which audience segment you’ll work with. Depending on what kind of test you’re running, you may need to craft push notification copy and create a calendar to map out the different triggers and timelines for various communications. You may need to coordinate with community and service teams. And you need to extend the thought process to consider what happens if the test goes the way you expect, and what happens if it doesn’t.

4. Do I have the tooling I need?

Every company should have at least one tool for sending emails or SMS or some other form of communication (push notifications, parameters on the screen, etc.) to users. Optimally, that tool will allow you to choose which users get which experience so that you can segment and analyze by user types and other variables. Further, it’s advisable to have deep capabilities in at least one such tool, and then broader capabilities in several others. For instance, if you focus on push notifications, do you have the ability to do more sophisticated things like segment, include parameterized details, trigger them based on user actions, deep link, facilitate in-push actions, and so forth? Once you have that, you want to be able to support and augment that with more general capabilities on email, web, and so forth.
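To make “segment, parameterize, and trigger” concrete, here is a toy sketch of those three capabilities. The trigger rule, template syntax, and user attributes are hypothetical and not tied to any particular messaging tool, but any serious push platform exposes equivalents of all three.

```python
from string import Template

def should_send(user):
    """Trigger/segment rule: only users who abandoned checkout get this push."""
    return user["last_action"] == "abandoned_checkout"

def render_push(template, user):
    """Parameterized copy: substitute user attributes into the message template."""
    return Template(template).safe_substitute(user)

# Hypothetical user record pulled from your events pipeline.
user = {"name": "Sari", "item": "rice cooker", "last_action": "abandoned_checkout"}
if should_send(user):
    print(render_push("Hi $name, your $item is waiting in your cart", user))
```

If your tool can express each of these pieces natively, you can run most behavioral experiments without waiting on engineering.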

For example, say your growth model relies on notifying influencers. You may ultimately want to build a recommendation product that uses the attributes of previous behaviors related to notifications. In such a scenario, if a user saves an article where the article attributes include “Topic A,” you should be able to use that attribute in future notifications, “Similar to Topic A: Read about Topic B.” To make this kind of recommendation strategy work, however, you need to determine which are the appropriate interests and how you could use that data to define the right interest categories.
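Under the hood, that kind of notification is just a lookup from saved-content attributes to candidate copy. A toy sketch (the function name and catalog structure are invented for illustration, mirroring the Topic A/Topic B example above):

```python
def similar_topic_push(saved_topics, unread_by_topic):
    """Recommend an unread article sharing a topic with something the user saved."""
    for topic in saved_topics:
        for title in unread_by_topic.get(topic, []):
            return f"Similar to {topic}: Read about {title}"
    return None  # no overlap: fall back to a generic notification

copy = similar_topic_push(
    ["Topic A"],                     # topics from articles the user saved
    {"Topic A": ["Topic B"]},        # hypothetical catalog keyed by topic
)
print(copy)  # "Similar to Topic A: Read about Topic B"
```

The hard part, as the next paragraphs show, isn’t this lookup; it’s choosing interest categories at the right level of granularity.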

In one such case, the company I worked with started with a simple Typeform survey sent to small groups of users. Early on, they achieved some initial success when notifications based on the interest graph generated higher conversion rates and CTRs than a generic, non-interest notification.

From there, the question became, which strategy would perform better: broad or very specific interest categories? To explore that, we surveyed individuals in two separate groups and saw a high CTR for specific interests, but also saw that many people declined to click any interest because the choices were too specific. Bottom line: modeled over the long term, we wouldn’t see that much growth despite the higher CTR for what ended up being a much smaller portion of the audience.
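The tradeoff is easy to model with back-of-the-envelope arithmetic. The numbers below are made up purely for illustration (not our actual results), but they show how a higher CTR on a much smaller opted-in audience can still yield fewer total clicks.

```python
def expected_clicks(audience, opt_in_rate, ctr):
    """Total clicks = audience size x share who picked an interest x CTR."""
    return audience * opt_in_rate * ctr

# Hypothetical inputs for a 100k-user audience.
broad  = expected_clicks(100_000, 0.80, 0.05)  # broad categories: wide opt-in, modest CTR
narrow = expected_clicks(100_000, 0.30, 0.09)  # specific categories: higher CTR, fewer opt-ins
print(broad, narrow)  # broad wins on total clicks despite the lower CTR
```

Modeling it this way made the long-term comparison explicit instead of arguing over CTR alone.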

In the end, we went with the broader category strategy. And because we had done our due diligence with the experiment, the product manager had the information and confidence they needed to generate buy-in and execute effectively.

You've validated your hypothesis. Now what?

Figuring out how to productize test results in a meaningful way that scales over time is the next perpetual challenge. As you consider how you want to move forward, it’s important to ask how configurable or how evolved a feature needs to be to ensure it’s not detrimental to the product experience. For example, Slack notifications are great, but as adoption scales and more people get on Slack, those notifications quickly become overwhelming. The notification feature will need to adapt, and it’s important to think about how that will happen architecturally over the life cycle of a feature.

A key part of successfully productizing test results is really drilling down to understand why the test was successful. Getting at those details is how you extract the most value out of any experiment. For example, we found that the Gojek driver SMS about being in the bottom 20% of drivers had the greatest impact on drivers who had been onboarded in the last 40 days. This indicates that driver behaviors can be shaped in the early days of their engagement on the platform. We made the most out of this finding by creating a comprehensive onboarding lifecycle journey of programming covering a driver’s first 40 days.

Another way to build off initial success is to model the feature experience for new versus core versus power users to look for patterns that can help you replicate success methods more quickly and efficiently. For example, at Gojek, we developed a great new-user experience for our food delivery product. This experience highlighted well-known merchants within recommendations, which was helpful to users who hadn’t yet built trust for food delivery services. But that experience wasn’t as relevant or helpful for power users. So the next step is to look at how that experience should differ and which existing learnings we can apply to shorten the distance between test and actual growth across the different personas on the platform.

Bottom line: So many options. So little time. Choose wisely.

As new experimentation tools proliferate across your organization, you’ll find that you have way more experiment opportunities than you’ll ever have the time to run. Part of the solution to making the most of this wealth of learning opportunities is to make it easier for more people within your organization to run experiments on their own, without the need for engineering support (which is always a valuable commodity that’s hard to come by). And, just as important as providing the tools and permission to run experiments, is to evangelize the right way to think about experiments so that they are done well and in a way that delivers all the possible benefits.

As with so many things in life, supercharging product development through smart and strategic testing isn’t just about what you do, but about how you do it. A little proactive planning and intentional execution go a long way toward helping you reach your growth goals.


In addition to helping all kinds of startups level up their growth strategies, Crystal Widjaja is also the co-founder of Generation Girl, a non-profit organization aimed at introducing young girls to the STEM (Science, Technology, Engineering, and Math) fields through fun, educational Holiday Clubs. Alongside her co-founder and Generation Girl CEO, Anbita Nadine Siregar, Crystal works to create a community and safe place for girls age 12 to 16 to ask the questions that open up a world of possibilities. “The goal is for girls who come out of the programs to feel like they have agency and like they are empowered to pursue these fields,” says Crystal.

Crystal Widjaja is an EIR and Partner Advisor at Reforge and a Sequoia Scout. She was most recently the Chief of Staff to the co-CEOs and the Senior VP of Business Intelligence and Growth for Gojek, the leading on-demand multi-service platform in Southeast Asia serving tens of millions of orders per day.
