Given the media hype that surrounds the term ‘Growth Hacking’, you can be forgiven for dismissing the whole thing as another marketing buzzword. But what can get lost in the hubbub are some useful, development-inspired, working practices that can help a team focus on maximizing growth.
In this Tech Talk, Rob Sobers, Director of Inbound Marketing at Varonis, tells you all you need to know about Growth Hacking. Rob explains what Growth Hacking is and describes the processes key to making it effective – from setting goals, to working through an experimentation cycle, to how it works in practice.
About Fog Creek Tech Talks
At Fog Creek, we have weekly Tech Talks from our own staff and invited guests. These are short, informal presentations on something of interest to those involved in software development. We try to share these with you whenever we can.
Content and Timings
- What is Growth Hacking (0:00)
- People (2:34)
- Process (3:22)
- Setting Goals (5:25)
- Experimentation Cycle (6:12)
- How It Works In Practice (12:03)
What is Growth Hacking
I started out my career as a developer, kind of moved into the design space, then did customer support here, and now I’m doing marketing. I’ve been doing marketing for the past two and a half, three years almost. This phrase, ‘growth hacker’, kind of cropped up. I let the phrase pass me by. I just didn’t discuss it. I didn’t call myself a growth hacker. I stayed completely out of it, mainly because of stuff like this.
It’s just overwhelming. Just Google ‘growth hacking’ and you’ll want to throw up. What it really comes down to is that growth hacking is not at all about tactics. It’s not about tricks. It’s not about fooling your customers into buying your software, or finding some secret hidden lever to pull that’s going to unlock massive growth for your company. It’s really about science. It’s about the process. It’s about discipline. It’s about experimentation. Tactics are inputs to a greater system.
If someone came up to you, a Starcraft player, and said, “What tactic should I use?” you would have a million questions: “Well, what race do you play? Who’s your opponent? What does he like to do? What race is he playing? Is it two vs. two or three vs. three?” There are so many different questions. So if someone comes up to me and says, “What tactics? What marketing channels should I use for my business?” you can’t answer it. The answer is not in the tactics.
So this is how Sean Ellis defines growth hacking. He says, “Growth hacking is experiment-driven marketing.” You walk into most marketing departments, and they’ve done a budget, and they sit in a room and decide how to divvy up that money across different channels. “Okay, we’ll buy some display ads. We’ll do some Google AdWords. We’ll invest in analyst relations.” But they’re doing it blind. Year after year, they’re not looking at the results, not looking at the data, and not running experiments. That is really the difference.
I took it one step further. I said growth hacking is experiment-driven marketing executed by people who don’t need permission or help to get things done, because I think growth hacking is a lot about the process. And it’s about culture, and embracing the idea of doing a whole bunch of marketing experiments week over week. But if you have a team that is only idea-driven and tactic-driven, and they have to farm out all of the production to multiple other stakeholders in the business, like teams of Devs or Designers, then you’re not able to iterate. So to simplify it I just said, “Growth hacking equals people, people who have the requisite skills to get things done from start to finish, and process.”
People
So let’s talk about people. You don’t just wake up in the morning and say, “Let’s do some marketing.” You have to know what your goals are, break them down into little pieces, and then attack based on that. This is a system that was devised by Brian Balfour at HubSpot; I call it the Balfour method. A good way to measure a person you’re hiring to be a growth hacker and run growth experiments is to show them this chart and ask, “How far around the wheel can you get before you need to put something on somebody else’s to-do list?” Now, granted, you’re not always going to be able to hire people who can do everything. I’ve seen it work where people can do bits and pieces, but it sure is nice to have people who can do design and development on a growth team.
Process
So before you begin implementing a process at your company, what you want to do is establish a method for testing. Then you need analytics and reporting. I’ve seen a lot of companies really miss the boat with their analytics: they’ve got it too fragmented across multiple systems, and the analytics for their website is too far detached from the analytics within their products. You don’t want to stop at the front-facing marketing site. It’s great to run A/B tests and experiment on your home page, trying to get more people to click through to your product page and your sign-up page, but there are also deep product levers you can experiment with: your onboarding process, your activation, and your referral flow.
So what you’re really looking for, and the reason why you establish a system and a method, is number one to establish a rhythm. At my company we were in a funk where we were just running an A/B test every now and then when we had spare time. It’s really one of the most high-value things we could be doing, yet we were neglecting to do it. We were working on other projects. The biggest thing we did was implement this process, which forces us to meet every Monday morning, discuss and lay out our experiments, really define what our goals are, and establish that rhythm.
Number two is learning, and that basically means that all the results of your experiments should be cataloged so that you can feed them back into the loop. So if you learned that putting a customer testimonial on a sign-up page increases your conversion by 2%, maybe you take a testimonial and put it somewhere else where it might have the same sort of impact. You take those learnings and you reincorporate them, or you double down.
Autonomy goes back to teams. You really want your growth team to be able to autonomously make changes and run their experiments without a lot of overhead. And then accountability: you’re not going to succeed the majority of the time. In fact, you’re going to fail most of the time with these experiments. But the important thing is that you keep learning, you’re looking at your batting average, and you’re improving things.
Setting Goals
So Brian’s system has a macro level and a micro level. You set three levels of goals: one that you’re most likely to hit, so 90% of the time you’ll hit it; another goal which you’ll hit probably 50% of the time; and then a real stretch goal which you’ll hit about 10% of the time. An example would be: let’s improve our activation rate by X%. That’s our stated goal. Now, for 30 to 60 days, let’s go heads-down and run experiments until the time is up, and we’ll look and see if we hit our OKRs, with checkpoints along the way. Then you zoom in and you experiment. This is the week-by-week basis: every week you’re going through this cycle.
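The three-tier goal structure described above can be sketched as a small data structure. This is a minimal illustration; the class name, the numeric thresholds, and the activation-rate figures are my own assumptions, not values from the talk.

```python
# Hypothetical sketch of the three-tier goal structure: a "probable"
# goal (~90% chance of hitting), a "stretch" goal (~50%), and a
# "reach" goal (~10%), checked against the actual result.
from dataclasses import dataclass

@dataclass
class GrowthGoal:
    metric: str
    baseline: float
    probable: float   # hit ~90% of the time
    stretch: float    # hit ~50% of the time
    reach: float      # hit ~10% of the time

    def tier_hit(self, actual: float) -> str:
        """Return the highest goal tier the actual result reached."""
        if actual >= self.reach:
            return "reach"
        if actual >= self.stretch:
            return "stretch"
        if actual >= self.probable:
            return "probable"
        return "missed"

# Made-up numbers: improve activation rate from a 20% baseline.
goal = GrowthGoal("activation rate (%)", baseline=20.0,
                  probable=22.0, stretch=25.0, reach=30.0)
print(goal.tier_hit(26.0))  # stretch
```

At the 30-to-60-day checkpoint you would compare the measured metric against these tiers to see which OKR level you landed on.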
Experimentation Cycle
So there are really four key documents as part of this experimentation cycle. The first is the backlog: that’s where you catalog all your different ideas. Then you have a pipeline, which tells you what you’re going to run next, as well as what you’ve run in the past, so that somebody new on the team can come in, take a look, and see what you’ve done to get where you are today. Then there’s your experiment doc, which serves as a sort of specification.
So when you’re getting ready to do a big test, like, let’s say, re-engineering your referral flow, you’re going to outline all the different variations. You’re going to estimate your probability for success and how you’re going to move that metric. It’s a lot like software development: you’re estimating how long something’s going to take, and you’re also estimating the impacts. And then there are your playbooks, which are good for people to refer to.
So with Trello it actually works out really well. The brainstorm list here is basically where anybody on the team can just dump links to different ideas, or write up a card saying, “Oh, we should try this.” It’s totally off the cuff: you just clear out whatever ideas are in your head and dump them there. Then you can discuss them during your meeting, where you decide which experiments are coming up this week.
The idea is that you actually go into the backlog. The pipeline holds the ones that I’m actually going to do soon: I’ll make a card and put it in the pipeline. Then, when I’m ready to design the experiment, I move it into the design phase and create the experiment doc. And then I set my hypothesis: “I’m going to do this. I think it’s going to have this impact. Here are the different pages on the site I’m going to change, or things within the product I’m going to change.” And later in the doc, it has all of the learnings and the results.
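The fields of an experiment doc described above can be captured in a simple template. This is only a sketch; the exact field names are my invention, and the testimonial example and 2% figure are drawn from the talk's earlier illustration, not a prescribed format.

```python
# Hypothetical minimal experiment doc, mirroring the fields described:
# a hypothesis, the expected impact, what will change, and slots for
# results and learnings to be filled in after the test runs.
experiment = {
    "name": "Add customer testimonial to sign-up page",
    "hypothesis": "A testimonial near the form will lift sign-up "
                  "conversion by ~2% by adding social proof.",
    "metric": "sign-up conversion rate",
    "changes": ["sign-up page: testimonial block above the fold"],
    "predicted_lift_pct": 2.0,
    "probability_of_success": "high",  # seen this work elsewhere on the site
    "results": None,    # filled in after the experiment completes
    "learnings": None,  # cataloged and fed back into the backlog
}
```

Keeping the results and learnings in the same doc is what lets you feed successful experiments back into the loop later.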
So one key tip that Brian talks about is, when you’re trying to improve a certain metric, rather than saying, “Okay, how can we improve conversion rate?”, you think about the different steps in the process. It helps you break the problem into multiple chunks, and then you start thinking a little more appropriately. And this is actually where the tactics come into play when you’re brainstorming, because this is where you’d want to look to others for inspiration. If you’re interested in improving your referral flow, maybe use a couple of different products, or think about the products you use where you really thought their referral flow worked well, and then use that as inspiration for yours. You don’t take it as prescription. You don’t try to apply it one-to-one, but you think about how it worked with their audience and then you try to transfer it over to how it would work with yours.
Prioritization: there are really three factors here. You want to look at the potential impact. You don’t necessarily know, but you want to gauge the potential impact should you succeed with this experiment. Then the probability of success, and this can be based on previous experiments that were very close to this one. So, like I mentioned earlier, with the customer testimonial you had a certain level of success in one part of your product or website, and you’re going to reapply it elsewhere. You can probably set the probability to high, because you’ve seen it in action with your company and your product before.
But if you’re venturing into a new space, say Facebook ads: you’ve never run them for your product before. You don’t know what parameters to target. You don’t know anything about how the system works, the dayparting and so on. Then you probably want to set the probability to low. And then, obviously, the resources: do I need a marketer? Do I need a designer, a developer, and how many hours of their time?
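The three prioritization factors just mentioned (potential impact, probability of success, and resources required) can be turned into a simple ranking over the backlog. The 1–10 scales and the scoring formula below are my own assumptions; the talk names the factors but doesn't prescribe a formula.

```python
# Sketch of ranking backlog items by the three factors: higher impact
# and probability raise the score, heavier resource needs lower it.

def priority_score(impact: int, probability: int, resources: int) -> float:
    """All inputs on an assumed 1-10 scale."""
    return impact * probability / resources

# Made-up backlog entries: (name, impact, probability, resources).
backlog = [
    ("Reapply testimonial elsewhere", 4, 8, 2),  # proven before: high probability
    ("First Facebook ads campaign",   7, 3, 6),  # new space: low probability
    ("Re-engineer referral flow",     9, 5, 9),  # big impact, big effort
]

for name, i, p, r in sorted(backlog, key=lambda e: -priority_score(*e[1:])):
    print(f"{priority_score(i, p, r):5.1f}  {name}")
```

Under this scoring, the cheap, already-proven testimonial experiment ranks first, which matches the intuition of doubling down on cataloged learnings.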
So once you move something into the pipeline, I like to have my card look like this. I have my category, my label. So this is something to do with activation, trying to increase our activation rate. And then I say, “If it’s successful, this variable will increase by this amount because of these assumptions.” Then you talk with your team about those assumptions and try to explain why. The experiment doc, as I mentioned before, is sort of like your spec. Rather than implementing the real thing upfront, if you can get away with just putting up a landing page and worrying about the behind-the-scenes process later, do that. If you’re thinking about changing your pricing, maybe change the pricing on the pricing page, and don’t do all the accounting and billing code modifications just yet.
Implement: there’s really not much to say about that. The second-to-last step is to analyze. You want to check yourself as far as that impact goes. Did you hit your target? Was it more successful than you thought, or less? And then, most importantly, why? Really understand why the experiment worked. Was it because you did something that specifically keyed in on one of the emotions your audience has? Then maybe you carry that through to later experiments.
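Part of checking yourself in the analyze step is asking whether an observed lift is real or just noise. One common way to do that for a conversion experiment is a two-proportion z-test; the talk doesn't prescribe a statistical method, and the sign-up numbers below are made up for illustration.

```python
# Sketch: is the variant's conversion lift statistically meaningful?
# Uses a standard two-proportion z-test with a normal approximation.
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error of the difference
    return (p_b - p_a) / se

def p_value(z: float) -> float:
    """Two-sided p-value from the normal CDF."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Control: 200/5000 signed up; variant with testimonial: 260/5000.
z = two_proportion_z(200, 5000, 260, 5000)
print(f"z = {z:.2f}, p = {p_value(z):.4f}")
```

A small p-value (conventionally below 0.05) suggests the lift isn’t just chance, so the learning is worth cataloging and reapplying; a large one means you probably shouldn’t double down on it yet.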
And then systemize. Another good example of systemizing comes from HubSpot: the idea of an inbound marketing assessment. It’s actually their number one lead gen channel. For any company that wants one, they’ll sit down one-on-one and do a full assessment of their website, their marketing program, et cetera. When they were doing these one-on-one discussions, those became their best leads, the ones most likely to convert.
So they made something called Website Grader, which you can find online. It’s sort of like the top of the funnel for that marketing assessment, where someone’s like, “Ah, I don’t know if my website’s good at SEO. Am I getting a lot of links?” So they’ll plug it into the grader. It’ll go through and give them a grade, they’ll get a nice report, and then a sales rep in the territory that person lives in will have a perfect lead-in to an inbound marketing assessment, which they know is a high-converting activity should someone actually get on the phone with their consultant. So it’s a good example of productising.
How It Works In Practice
So this is just sort of how the system works. Monday morning we have our only meeting. It’s about an hour and a half, and we go through what we learned last week. We look at our goals and make sure we’re on track for our OKR, which is our Objective and Key Result. Then we look at what experiments we’re going to run this week, and the rest of the week is all about going through that rapid iteration of that cycle: brainstorming, implementing, testing, analyzing, et cetera.
So you kind of go through these periods of 30 to 90 days of pure heads-down focus, and then afterwards you zoom out and ask, “How good am I at predicting the success of these experiments? Are we predicting that we’re going to make a big impact, or are we making small impacts? Are our resource allocation predictions accurate?” And you want to always be improving on throughput. So if you were able to run 50 experiments during a 90-day period, the next 90 days you want to be able to run 55 or 60. You always want to be improving.