February 15 2022 •  Episode 005

James Slayter - Accelerating Experimentation at Australia’s #1 Online Real Estate Marketplace

“I’ve seen time and time again that people learn more from failed experiments than successful ones. It’s really highlighting that it’s OK to fail and be bold in your experiments. If experiments don’t work, then it’s a really good insight for everybody else.”


James Slayter is the Head of Product (Membership) at REA Group. For the past 10 years he has held a variety of Digital Leadership and Product roles. James has extensive experience in Experimentation, Personalisation, CRM, and Big Data across eCommerce, B2B, B2C and customer experience.

He was responsible for implementing and scaling experimentation at REA Group, working across the business to deliver personalised customer experiences across apps, website, content, and communication channels. REA Group is one of Australia’s biggest digital businesses, leading the charge with experimentation for more than a decade.

Prior to REA Group, James worked for Jetstar Airways (Qantas) developing new digital products and services in Adjacent and Emerging Channels, also managing Personalisation, Optimisation and CRM for the airline.

 


Episode 005 - James Slayter - Accelerating Experimentation at Australia’s #1 Online Real Estate Marketplace


Gavin Bryant  00:03

Hello and welcome to the Experimentation Masters Podcast. Today I'd like to welcome James Slayter to the show. James is the Head of Product (Membership) at REA Group. REA Group is a leading digital business that specializes in global real estate advertising with a presence in Australia, Asia, Europe and North America. 

 

In this episode, we're going to discuss how James established and scaled experimentation group-wide at REA Group. 

 

Gavin Bryant  00:35

Welcome to the show, James. 

 

James Slayter  00:37

Thanks, Gavin. 

 

Gavin Bryant  00:39

So I thought a good place to start for our audience, James, would be to provide a little bit of background and an overview of where you got started with your journey in experimentation.

 

James Slayter  00:52

Yeah, sure. So I guess my background.... I came from Jetstar, prior to REA, so obviously a major e-commerce operation. And I was actually really lucky at Jetstar. In that time, I took over a team — we had a small team, basically a one-man show, that was running experiments on the website — and then they were able to put up a business case and get a consultancy on board. And that consultancy were then able to use data to drive the backlog and help us get started with experimentation. So I ended up taking over that team, and really helping to run it, establish it and embed it within Jetstar. We started off with, I guess, a centralized team that would run all of the experiments — this was done by the consultancy — and then really looked to move and democratize that into the product teams. So really trying to embed that into the product managers' way of thinking.

 

Gavin Bryant  02:03

Excellent. And thinking more recently about the experimentation journey at REA Group — what does that look like?

 

James Slayter  02:09

So obviously, I came to REA about three years ago now, and was really interested in getting into a core tech business and seeing the way that they operate compared to a more e-commerce or airline-type business. And they were doing experimentation — there were some teams that were testing. Our financial services team, as well as our leads team, were running experiments, and they were being quite successful. They were running quite a few experiments, but testing around the edges of our core experience. You know, most of the eyeballs that we get are on our property details page and our search results pages — as you can imagine, lots of people searching and looking at photos. So there were a lot of people running experiments on the sides of the site. Coming into REA, I definitely saw an opportunity to bring experimentation into those core pages. I really took it on: given my background in experimentation at Jetstar and the scale that we had there, I saw an opportunity to come in and really start to upskill some of the product managers and the teams that were working in those core pages, to help drive outcomes for some of those key conversion metrics.

 

Gavin Bryant  03:27

Excellent. So thinking about your personal philosophy or guiding principles around experimentation — what are some of those guiding principles that you've formed over the past few years?

 

James Slayter  03:42

Yeah, for sure. I think probably the first one is to keep it simple. Experimentation can get quite complex, especially when you start talking about statistical significance, all of the mathematics that sits behind how you tell whether an experiment is a winner, the different variants and how you implement it — but try to keep it simple as much as possible. When I say that, it's really about targeting one metric at a time. I've seen time and time again that people getting started in experimentation want to measure multiple things with the one experiment. But my guiding principle and learning over time is that the more you can focus on that single metric, generally, the more successful those tests are. 

 

Also, spend the time and get the hypothesis right — I think that's another really important one. It's really worth sitting down and thinking about what you want to achieve and having a really good hypothesis statement in the framework of [if / then / because]; that's generally the one that we use at REA. And then always think about secondary metrics — that's probably one of the other principles I'd add. So it's not just about testing the one thing — obviously keep it simple and use one metric — but definitely look at how it impacts some of the other metrics around the experience. You just want to make sure that you're not increasing conversion in one space but actually decreasing conversion or user experience in another space.
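For readers who want to apply the [if / then / because] framing, here is a minimal sketch of what such a hypothesis template might look like in code. The field names and example values are invented for illustration — this is not REA's actual template — but it shows how the framework forces each experiment to name one primary metric, its guardrail metrics, and the insight behind the "because":

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """IF [change] THEN [predicted outcome] BECAUSE [research/insight]."""
    change: str                      # IF: the product change being made
    outcome: str                     # THEN: the predicted, measurable effect
    insight: str                     # BECAUSE: the research or data behind it
    primary_metric: str              # the single metric the test targets
    secondary_metrics: list = field(default_factory=list)  # guardrails

    def statement(self) -> str:
        return f"IF {self.change} THEN {self.outcome} BECAUSE {self.insight}"

h = Hypothesis(
    change="we enlarge the enquiry button on the property details page",
    outcome="enquiry conversion will lift by ~2%",
    insight="session recordings show users scroll past the current button",
    primary_metric="enquiry_conversion_rate",
    secondary_metrics=["save_property_rate", "page_load_time"],
)
print(h.statement())
```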

 

Gavin Bryant  05:22

Excellent. Let's just discuss a couple of those elements for a moment. So at REA Group, do you have a core set of metrics that product teams effectively choose from? Or does that shift based on the demands of the experiment?

 

James Slayter  05:42

When I came in, the agile teams at REA were really empowered to work their own way. So there are obviously company metrics that get used, but teams are really in charge of their own destiny to a degree. There are definitely core metrics — the number of property views we get is obviously important for our agents, the number of people that save properties, save searches, log in — those pretty simple ones that I guess everyone's keeping their eye on. But we've definitely got some adjacent businesses that do have slightly different metrics. We've got a financial services arm that tries to drive home loans and leads to brokers, so their metrics would generally be things like filling out calculators. And on the other side, our sales leads business is really looking at qualified leads to agents, so they might be targeting other things. All of that does come back into the core experience — our property details page and our search results page — so people are continually trying to optimize in there. So it changes from team to team, but we obviously want to make sure some of those core metrics don't get impacted by the different experiments.

 

Gavin Bryant  07:02

Good point. And thinking about hypothesis statements, what are some of the challenges you see teams have with setting the hypothesis prior to the experiment?

 

James Slayter  07:15

I think, as I said before, it's probably around too many metrics — trying to focus on too many things at once. Generally keeping it crisp and trying to highlight one metric is definitely something I advocate. 

 

Also, I think, probably having research, an insight, or some analysis that leads to that hypothesis statement. The "because" part of the hypothesis statement is quite important. You know: if we change this button color, then we'll see a 5% uplift, because.... Sometimes the "because" is just "we think people will like that", whereas I think it's great to have an insight or a piece of research or a piece of analysis or a metric as that "because".

 

Gavin Bryant  08:06

Yeah, that's a really good point — the hypothesis is not a guess; it's grounded and anchored in some form of qualitative insight or quantitative research.

 

James Slayter  08:17

Right. 

 

Gavin Bryant  08:19

So, thinking about leadership effectiveness, as Head of Product, how do you find that experimentation helps you to lead more effectively?

 

James Slayter  08:29

Look, it's a good question. Probably one of the big things experimentation enables more broadly — and it comes down to leadership too — is data-driven decision making. And that's something that really gives people an opportunity to galvanize together. If you are quite a metric-driven business, and most commercial businesses are, being able to take the opinions out of decisions sometimes, and to make sure that you're making decisions and leading based on North Star metrics or tangible metrics that can then be tested, optimized and measured — I think that just enables clarity in leadership. And there's the... I don't know whether you'd call it a principle or an acronym, but HIPPO — the highest paid person's opinion — and by having experimentation, it actually takes some of that out. So I think it empowers product managers and designers and analysts and people in the business to really challenge existing opinions and challenge the status quo a bit. And it definitely helps leaders in that you've got more and more people contributing to the decision-making process.

 

Gavin Bryant  09:58

Yep, excellent summary there. So thinking about the experimentation journey at REA Group — you came in from Jetstar Airways. What was your Ground Zero assessment? You mentioned that some of the teams were performing experiments, that pockets of the business were experimenting, and experimenting effectively. What was your initial assessment of culture and capability?

 

James Slayter  10:32

So, coming in initially, as I said before, there were some really capable people that were running experiments and running them quite well. They had quite a good velocity in the number of experiments they were running, and they were using more of the client-side experimentation tools. And there was definitely a culture in those teams of testing everything. But probably purely because there wasn't an experimentation tool in those core pages, the teams that were working in those spaces weren't as up to date with experimentation. So the gap that I guess I saw was: how do we get a tool into those core pages, and really start to build that momentum. 

 

So I think there was definitely a culture of people starting to use data to make decisions. But when I joined, a re-platforming project had just taken place that I think had used up quite a few of the teams — there was definitely a big scope to deliver there. And I think people were ready to get into the experimentation space, but there hadn't been, I guess, a centralized team or capability to help train that. So there were definitely some opportunities. When I joined, there were three or four different experimentation tools being used around the business, in varying ways. So trying to centralize things a little bit and distill that down into one tool was something I felt could add a lot of value, just because everybody's talking the same language, everybody's using the same tool, everybody's experimenting in a similar-ish way. 

 

James Slayter  12:20

So yes, I think there was definitely a culture of people wanting to learn and wanting to do new things. And I think that's really what's helped drive the experimentation culture today.

 

Gavin Bryant  12:32

One of the key things you mentioned there was consolidation and rationalization of platforms — a one-tool focus. What were some of the other key elements of the experimentation journey to really simplify and to build skill and capability across the business?

 

James Slayter  12:52

To begin with, probably starting with a bit of a framework for how to run experiments. So, putting together some docs around what a good hypothesis statement looks like, and setting up an experiment calculator — something that can give you a quick idea of statistical significance — to help teams work out whether or not their experiment would even reach statistical significance, and how long it would take. 

 

So, educating people a little bit on that. Then people start to understand what the lift over control needs to look like, what the traffic needs to look like, and what kind of conversion rates you might need to actually run the experiment. 
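As an illustration of the kind of experiment calculator James describes, here is a minimal sketch using the standard two-proportion approximation. The traffic and conversion numbers are invented, and real calculators differ in their exact formulas; this is a planning estimate only:

```python
from statistics import NormalDist

def sample_size_per_variant(baseline_rate: float, relative_mde: float,
                            alpha: float = 0.05, power: float = 0.80) -> int:
    """Rough visitors-per-variant estimate for a two-sided A/B test
    on a conversion rate (an approximation, not an exact test)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # e.g. 0.84 for 80% power
    delta = baseline_rate * relative_mde            # absolute lift to detect
    p_bar = baseline_rate * (1 + relative_mde / 2)  # midpoint rate
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / delta ** 2
    return int(n) + 1

# Hypothetical numbers: a 3% baseline conversion rate, a 5% relative
# lift worth detecting, and 40,000 eligible visitors per day:
n = sample_size_per_variant(0.03, 0.05)
days = 2 * n / 40_000
print(f"{n:,} visitors per variant; roughly {days:.0f} days to run")
```

Running numbers like these quickly shows a team whether their page even has the traffic to support the test — exactly the "can this experiment reach significance, and how long will it take?" question above.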

 

So there was a bit of training there, definitely highlighting the benefits of experimentation and how much it can benefit the different metrics that the teams have. So yeah, setting up that framework. Also, getting the tech community on board was probably another really key piece. There were definitely some hesitations around putting a client-side tool into our most highly trafficked pages, just because of the potential impact on page load times and general performance — there's a risk of flickering, where when users first load the experience, the page can flicker to the new version. So there was some hesitation there. 

 

So definitely getting the tech teams on board. And we used a server-side product for that reason, which does move the setup of the experiments into the development teams, which can slow things down a little bit. Obviously, you can run things a lot quicker if a product manager can jump in and change some copy or change a button color or move some things around. But I guess the complexity with our business is that agents buy that listing for vendors, and it's really their listing. 

 

So we can't have too many people going in there, moving things around a page and changing too many things, without the proper go-to-market and the approvals from the account managers and agents.
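To illustrate why server-side testing avoids the flicker James mentions, here is a generic sketch — not REA's actual product — of deterministic hash-based variant assignment, a common approach in server-side tools. The variant is decided before the page is rendered, so there is nothing to "flip over" after load:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple = ("control", "treatment")) -> str:
    """Deterministically bucket a user: the same user in the same
    experiment always gets the same variant, decided server-side."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF   # uniform-ish in [0, 1]
    return variants[int(bucket * len(variants)) % len(variants)]

print(assign_variant("user-123", "pdp-enquiry-button"))
```

A side benefit of keying the hash on a user ID is that the same assignment can be reused in other channels, such as email — relevant to the cross-channel challenge discussed later in the episode.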

 

Gavin Bryant  15:14

One of the key things you mentioned there was a capability uplift around some of the fundamentals of experimentation — the why, what and how. Are there any strategies that you found really effective in building organizational capability and educating?

 

James Slayter  15:35

I think, around experimentation, having a bit of an open-door policy to help individuals with their individual questions has definitely helped a bit. 

 

Having a regular showcase is also something that I've seen work really well, where you can incorporate a bit of training and an overview of the fundamentals in that showcase. So it's not just a showcase of the results of the experiments, but also an opportunity to bring in guest speakers, or an expert in the field, to talk through how they do experiments, and really create a bit of momentum and a forum around it — get people to share their experiences. 

 

I think it also creates a bit of a safe place as well. As we all know, a lot of experiments fail. And really coaching people that failing is good in experiments, because an experiment is much cheaper to set up and much cheaper to run — you're not going and spending all of this money and development team time building something that's not going to work. 

 

So — me personally, I've seen time and time again that people learn more from failed experiments than they do from the successful ones. So, creating those forums, and also getting people from different sides of the business — when I say different sides, different functions. Building a bit of a team of experts. I'm really lucky to have a few people at REA that are also experts in it, from design to tech to product, that really enjoy experimentation. So really bringing them on board, because everybody's got different ways they approach it, and allowing them to train people on those different things.

 

Gavin Bryant  17:34

Let's just double-click on failure for a moment. How did you shift the needle on the organisation embracing failure more readily?

 

James Slayter  17:49

Look, it's definitely a work in progress. As with most businesses, it's unnatural to make failure okay. But it's definitely something that we're working on, both as an experimentation community within REA and, I think, just broadly within REA — probably like a lot of businesses, with the Netflixes of the world, and the Amazons, and the Facebooks leading the way in that way of thinking and really celebrating failure. 

 

So, I see experimentation as a really core part of that, and really highlighting that it's okay to fail, and be bold in your experiments. And if they don't work, then it's a really good insight for everybody else. 

 

So we're really hoping that we can get more people to tell their stories of failure in their showcases and in the different forums. Because we feel like once you start normalizing those stories of failure a little bit more, you can definitely start changing the culture.

 

Gavin Bryant  18:55

It's a really good point. So thinking back to the Showcase, it seems like it's been a really effective communication tool to engender support and communicate within the organization. Are there any other rituals or ceremonies that the organization has around experimentation that have been effective?

 

James Slayter  19:15

Yeah, okay. When we got started — when I first started, and we got the tool into the core pages — having a weekly or fortnightly stand-up. Obviously, in a big company, a big tech business like REA, there are federated models: there are teams that are, I guess, custodians of different parts of the code base, and other teams need to federate into those places. So the teams that own those code bases obviously have a vested interest in the experiments that get run. 

 

So to get visibility, running either a weekly or fortnightly stand-up, where all of the different experiments can be talked about on a Trello wall or a JIRA wall, we found quite effective to keep everybody across exactly where different things are up to. It also really helps with code removal, which is probably one of the difficulties with experimentation: it's all well and good to run your experiment, but once it's finished, and you've done all your iterations, you need to make sure that teams go back and clean up all of the quick code that they might have put in there to get the result. 

 

So that was quite an effective forum getting going. But then, unfortunately, COVID hit, and things in the experimentation space definitely slowed down. So we've stopped doing those forums just for now, because I wanted to see how the business would operate without that structure — whether teams would still do the required stakeholder management and keep different teams across all the experiments that are going on. Trying to remove as much governance as possible has been one of my principles; even though we started with the governance to get it going, I've really wanted to let teams go and do their thing and keep it really lean and mean, and that obviously comes with its challenges. So, for lack of a better word, it's all part of the optimisation of it all. 

 

James Slayter  21:36

So I think that stand-up was probably a good forum, the showcase was another good forum, and then really getting the word out there. If there are town halls, if there are opportunities to share experiments or results, I think that really helps to start building the culture.

 

Gavin Bryant  21:55

Yeah. So thinking about some of those challenges you just alluded to — what are the key challenges that you've had to face along that journey?

 

James Slayter  22:09

There have definitely been a few. Initially, probably getting buy-in from the business to spend the time really integrating a tool into our core experience. Because it had to be server-side, that obviously needed a team dedicated to it, and traditionally there hadn't been a team dedicated to experimentation. 

 

So there was definitely a bit of work getting business buy-in, highlighting the benefits and consolidating the tools. That was probably one of the first challenges I faced. The second one is around analytics, and I think we still face that today — it's both sides of the coin. It's not only doing the analysis upfront to build out a backlog. I've seen that work before, and it's an extremely powerful tool in doing your prioritization: picking your metrics, really analyzing all of the data associated with those, and building out a prioritized backlog of experiments. 

 

Having analyst capacity or product manager capacity to do that has been a bit of a challenge. And then on the other side of the coin, doing the post-experiment analysis as well — again, just through capacity and time. Often it's the product managers and designers that are running a lot of the experiments at REA, and it's doing that detailed analysis post-experiment: looking at all your secondary metrics, looking at your primary metric, interrogating all of the page flows afterwards and how it impacted different things. 

 

There's some real gold in that post-experimentation analysis. And just the way our analytics is set up, and the capacity of the people doing it, that's not quite where we want it to be, and it's a bit of a challenge at the moment. So they're probably the three main ones. I think the last one is just now starting to link it up with communications as well — getting that cross-channel experimentation happening, where you've got an email, often in a different piece of tech, with three different variants, and then they hit a landing page or a page on the site that's matched with three different variants. 

 

So yeah, I think that's going to be sort of our next one — it's the current challenge, and definitely the next thing we'll focus on.

 

Gavin Bryant  24:56

You mentioned that the development teams are obviously critical to enabling experimentation. Have you seen any challenges with competing priorities, given that the teams are also fixing bugs and delivering key projects as well as focusing on experimentation? Do you often struggle with prioritization?

 

James Slayter  25:17

Yeah, definitely. I mean, it's a great call. In that democratized model, that's definitely one of the challenges — you've really got to have a product manager, or a team, that's really focused on doing experiments. We've all been in that situation where you've got a big project or a big scope that's up against time, and you want to run an experiment, but then other things come up and it can sometimes get bumped. 

 

So yeah, I think that's definitely a challenge with that model. Obviously, if you move to more of a center-of-excellence model, where you've got an autonomous team that can run experiments wherever and whenever they want, sometimes that can be removed.

 

Gavin Bryant  26:05

Good point. At the start of our conversation, we were talking a little bit about the shift towards more data-informed decision making. What are some of the leadership changes you've observed as experimentation has become more prominent in the business?

 

James Slayter  26:26

I think there's definitely been a bit more of a focus on analytics — not just in the experimentation space, but analytics more generally. We've obviously got teams of analysts and data scientists at REA, very capable people. The analytics focus on experimentation is something that we're starting to see change a little from a leadership standpoint, which I think is really positive. And leaders are really starting to champion the results of experiments and the data they get from them — not only the uplift from the conversion increases, but also the ability to use that data in different strategies and approaches. I've probably also seen it in the design community: the leaders that have been more into experimentation are taking more of the senior roles, and really starting to drive that from the top down. 

 

So I think that's been a really positive shift, because seeing user experience and UI designers start thinking about experimentation at every step really helps the product managers and the teams embed it in their everyday thinking.

 

Gavin Bryant  28:03

Excellent point. So, you've now had some really strong exposure across two multinational organizations, establishing and growing experimentation. What are some of the key mistakes that organizations can make when trying to commence experimentation?

 

James Slayter  28:29

I think there's that classic Crawl, Walk, Run methodology — it's obviously a pretty common one, but I first heard it from a consultancy called the Luminary — and I really like it. I think one of the mistakes that teams or organizations can make when they're getting started is trying to run before they can crawl: trying to go too big too early. Starting small, and really targeting one or two metrics to begin with, with a small team, is really the key to getting started. If you try to go too big and too broad, it can get a bit confusing and a bit out of hand, and you're not actually sure what you're learning. 

 

So, yeah, that's probably one of the big ones. Another key mistake — probably not a mistake, but a learning from my experience doing it — is really having an analyst, like a data analyst, there from the get-go, or someone that's really into the data. I think that's also a really key part of an experimentation program's success; having it driven out of data science or analytics, the data really does the talking. So having an analyst at the beginning can really help, and if you don't, it's sometimes harder to get traction. That's probably one of the things I've really learned at REA — we didn't have a lot of analysts sitting in my space of the business. Rising up and bubbling those data insights up to senior management can really help, and it's definitely a learning I've taken from here: have an analyst from the get-go and have them really championing the results.

 

Gavin Bryant  30:39

Okay, just to summarize there: start small, stay local to begin with, and have strong analytics support from the start. 

 

James Slayter  30:48

Yeah. 

 

Gavin Bryant  30:50

A question around strategy. Obviously, experiments are closely linked back to strategic tactics and broader strategic objectives. We were talking a little bit about failure before — at REA, have you seen product roadmaps change based on experimentation results? Or, maybe more broadly, elicit a strategic shift? 

 

James Slayter  31:20

Yes, I think there have been a few examples of it. In my space, we've tried to run experiments to validate whether or not a concept is there. All businesses have KPIs or OKRs, or North Star metrics that they reach for, so mapping things back to those is always really important, and making sure they're set up and measurable from the beginning is the key to doing that. But it really comes down to being able to challenge some of those metrics as well, and running an experiment first to validate your idea, or validate that particular metric or strategic objective. 

 

I guess one of the examples I've got at the moment is around a guides-type experience, and whether people would use a checklist. We wanted to test that first, because managing where everyone is up to in a particular checklist or guide could be quite complex from a technical perspective. So being able to experiment with that first, and put enough tracking on the page to see if people would actually use a checklist the way we wanted them to, was one way we were able to validate that it probably wasn't a great idea, and that we probably wouldn't pursue that checklist-type concept. That fed into a few different strategic objectives and OKRs, and it's allowed us to pivot towards some others that we're now going to focus on. So, without going into too much detail on that one, that's probably one example of where experimentation was able to inform a change in product direction or a strategic objective. 

 

Gavin Bryant  33:28

Yeah. So thinking about your key pieces of advice — what are some of the key pieces of advice you'd give to organizations that want to make a start with experimentation? We've briefly touched on "stay small, stay local, and have a strong focus on analytics." Is there anything else that comes to mind?

 

James Slayter  33:51

I think probably defining one or two high-impact, measurable business metrics to begin with is really important. Also doing some analysis or research upfront to build out your experiment backlog — if you are going to get started, I think that's really important. And there's a thing called ICE scoring — Impact, Confidence and Effort. If you Google it, you can find a few different ways to set it up, but it helps you start to shape a backlog, and having a clearly defined and prioritized backlog to begin with is quite important when you're getting started. Dedicating a small team and going as hard as they can at those one or two metrics for a quarter or two is something that I've seen work quite well — a small team that's autonomous, that can really not be bothered by other things when you're getting started. I think that can deliver a lot of value and highlight to the business that it's worth doing. 
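As a rough illustration of ICE scoring, here is one common way to combine the three factors. As James notes, teams set this up differently — some multiply an "Ease" score instead of dividing by Effort — and the backlog items and scores below are invented:

```python
def ice_score(impact: float, confidence: float, effort: float) -> float:
    """Impact and Confidence scored 1-10 (higher is better);
    Effort scored 1-10 (lower is better), so it divides the total."""
    return impact * confidence / effort

# Hypothetical backlog, ideally scored by averaging several people's
# ratings to dampen the subjectivity James cautions about later:
backlog = {
    "bigger enquiry button": ice_score(6, 7, 2),
    "redesign search filters": ice_score(9, 5, 8),
    "reorder photo gallery": ice_score(5, 6, 4),
}
for idea, score in sorted(backlog.items(), key=lambda kv: -kv[1]):
    print(f"{score:5.1f}  {idea}")
```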

 

And then, once you've done those one or two quarters, for each of the successful experiments — and even the unsuccessful ones — forecast out the annualized opportunity. So for that conversion uplift — let's say experiments showed it was getting 5% better conversion — what would that actually look like for that metric over the full financial year? That's when the metrics can look really powerful: if you were to ramp that up to 100% of people, and everyone was converting at 5% more, what would that mean for the business? So they're probably the four things that I'd recommend.
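The annualized-opportunity arithmetic James describes can be as simple as this sketch; all the numbers here are hypothetical:

```python
def annualised_opportunity(daily_conversions: float, value_per_conversion: float,
                           relative_uplift: float, rollout_share: float = 1.0) -> float:
    """Project a measured conversion uplift over a full year, assuming the
    effect holds when ramped to `rollout_share` of traffic."""
    extra_per_day = daily_conversions * relative_uplift * rollout_share
    return extra_per_day * value_per_conversion * 365

# Hypothetical: 1,000 conversions a day worth $20 each, a 5% measured
# uplift, ramped to 100% of users:
print(f"${annualised_opportunity(1_000, 20, 0.05):,.0f} per year")  # -> $365,000
```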

 

Gavin Bryant  35:43

Just to touch on prioritization for a moment. With a lot of the ideas, concepts and initiatives generated upfront — and we're not a good judge of which ideas will work and which won't — how much organizational effort and time do you spend curating these raw ideas? Or is it more of a quick pass to run them through an ICE framework?

 

James Slayter  36:13

It depends on the team — different teams have different approaches. Some have used the ICE scoring before, and that's proved quite effective in helping them with their backlogs. It can be a little bit subjective sometimes, so I would definitely caution that it's good to have quite a few people in the room if you are talking about an ICE score, and try to get a bit of an average. But in terms of validating the ideas — I think it's important to get the design team really across them, and make sure it's part of a future direction and the UX has been thought through, especially for the bigger experiments. So that's probably one of the validation points. 

 

I think also bringing those types of ideas to the experimentation showcase, to talk them through when they're in that idea phase. That sometimes helps people validate what they're doing, and you've got other people, who are thinking about totally different spaces, to give their input. So that can be an effective way for people to validate those different ideas. And then a final one is just around the data analysis: if your idea was going to work — if it was going to improve by 5% — what would that mean? And do you actually think that's reasonable? So trying to validate that metric and the uplift as much as possible is important.

 

Gavin Bryant  37:54

Okay, so thinking about three closing questions now. Number one, the signature question: thinking back across your time at Jetstar and REA, has there been an experiment that you've performed that completely reframed organizational perspective?

 

James Slayter  38:16

I was trying to think....

 

Gavin Bryant  38:26

What would be a good example of an experiment that was performed that shifted understanding and potentially assumptions?

 

James Slayter  38:37

Look, I think probably the biggest one I'll highlight is from Jetstar. At Jetstar, we had airlines in Singapore, Vietnam and Japan. And there was a huge issue with people in Japan queuing up at the airport — the airports were just overwhelmed with people queuing up — and no one was buying bags on our website before they got to the airport. They were all basically paying for them at the desk, which was causing a nightmare for operations, and we had no idea why. So we went there and did a whole bunch of user research, where we got both an interpreter to interpret the language, and also a cultural interpreter to say, "well, they're saying this thing, but they actually mean this thing." 

 

So we did a whole bunch of research, and we actually started experimenting on the bags page on the website. And what we figured out — it wasn't until the team went to Japan (I didn't go on this particular trip), and they were standing there ordering a coffee, and the coffees were sized like most things: extra small, small, medium, large and extra large. Whereas we were constantly going with the weights — the baggage weights that people would buy. And it triggered an idea that maybe we should move to more of this coffee-style sizing for our different baggage weights. After validating that with some of the people in Japan — in Japan they're quite precise, they like to be precise with things. So when they're going on a trip, it's "well, I don't know whether I'm going to need 15 kilos, or 20 kilos, or 30 kilos, so I'll just take it to the airport and weigh it there. I don't want to get it wrong — I don't want to pay for 20 kilos and only take 8 kilos." But it seemed to resonate much better that they're going on a small trip, or a medium trip, or "I'll probably need a large piece of luggage." So that resonated a lot better with them, and we basically made that change, and the queues — they didn't totally die off, but they dropped significantly at the airports and reduced a lot of strain on operations. And obviously there was the revenue impact of a lot more people buying bags, because they understood the sizing a little bit better.

 

Gavin Bryant  41:12

That's a really good example — it highlights how important it is to do that upfront user research, and to understand the customer journey, the pains and the problems, and what people are trying to achieve. Great example. 

 

So thinking about some resources now — books, blogs, podcasts — what would be your top three recommendations to our audience that you've found helpful on your journey?

 

James Slayter  41:43

Good question. I think there's a lot of online research out there... I'm just trying to think of specific resources.

 

Gavin Bryant  42:00

Do you have a go-to book? It doesn't have to be experimentation related.

 

James Slayter  42:07

I do, but I just can't remember the name of it — it's been a while; I've probably done more of this stuff by doing. I can't remember off the top of my head, sorry. But in terms of resources, I think it's really about understanding how to write a good hypothesis. You don't need to have a full understanding of statistical significance, but definitely have a bit of a feel for the different metrics that are involved in experimentation. That's not a great answer, but I'll have to come back to you on it.

 

Gavin Bryant  42:52

Number three: if our listeners want to get in contact with you, what's the best way to do that?

 

James Slayter  43:01

Probably via LinkedIn. That would probably be the best way.

 

Gavin Bryant  43:06

Excellent. Thanks so much for the chat today, James. Really appreciate it. 

 

James Slayter  43:12

All right. Thanks, Gavin.

 

“One of the biggest mistakes that an organisation can make when they’re getting started is trying to run before they can crawl. Trying to go too big, too early. Start small, start localised with a small dedicated team with a focus on one to two key metrics.”


Highlights

  • Keep it simple. Target one key metric at a time. Try to avoid focussing on too many metrics simultaneously.

  • Take the time upfront to get your hypothesis right. James suggests the hypothesis statement framework: IF [product change] THEN [predicted outcome] BECAUSE [research/insight]

  • Ensure that your hypothesis statement is grounded in some form of reality. Your hypothesis is not a guess. Qualitative research and quantitative research feed into the development of your hypothesis statement

  • Think about the downstream impacts of your experiments. If an experiment supports your hypothesis, look at the second-order effects too. You may have increased conversion rate in one area but decreased conversions or experience in another

  • Experimentation provides leadership clarity. It enables Product Managers, Designers and Analysts to challenge status quo and existing assumptions

  • One platform, one tool. Centralisation to one experimentation tool removes duplicate effort, standardises processes and produces consistency in business terminology and language

  • A head start for building organisational capability - communicate the benefits of experimentation and how it works, hypothesis statement templates, statistical significance calculators

  • Don’t underestimate how much stakeholder engagement will be required. Particularly, ensure that Technology teams are on-board early

  • Over-communicate about your experimentation program - Guest speakers, showcases, briefing sessions, business forums etc.

  • On failed experiments … teams learn more from failed experiments than they do from successful experiments. Invite teams to share their stories of failed experiments. The more that teams share their failure stories in showcases and forums, the faster failure can be normalised in the culture

  • Keep experimentation governance processes “light touch”. Remove as much governance as possible without compromising on quality control

  • One of the biggest mistakes an organisation can make when getting started with experimentation is to go too big, too early. Start small, start localised, with a small dedicated team that has a razor sharp focus on 1-2 key metrics

  • Analytics is a big part of experimentation program success. Ensure that you have analytics support right from the get go. It helps to have a champion that can bubble insights and learnings to senior leadership teams

In this episode we discuss:

  • How James got started with experimentation

  • James’s guiding principles for experimentation

  • How experimentation helps James to lead more effectively

  • Experimentation culture at REA Group

  • Strategies for building skill and capability

  • How REA Group embraced failure

  • Tools for effective organisation communication

  • Hurdles and obstacles to scaled experimentation

  • Common mistakes that organisations make

  • Advice for businesses getting started with experimentation

 

Get instant access to expert coaching on experimentation.

Ready to learn how to become a more successful and confident leader with experimentation from the best in the world?

Have greater impact with your decisions. Level up your career.
