The growth team at Postmates spans all of the teams and products at the company. We built our own A/B testing framework, which the rest of the company now uses, and it has made the company more data-driven in how we approach decisions. Something you’ll hear time and time again on our team is “highest impact over cost,” and that principle drives everything we do. The quantitative and qualitative methodologies we use are being adopted across the company, and in this way we are shifting the company’s culture to be more data-guided.
Two of our core values focus on outcomes, not activities. We use data to ensure we’re always confident in our decision making. Few features reach users without first running as an experiment. We have a simple but effective experiment framework, and Periscope gives us a way to share metrics with both technical and non-technical team members. Engineers who build features consider how the data is collected and surfaced, and work with the other stakeholders to determine how to measure a feature’s success. Beyond quantitative data, our design and customer success teams run feedback loops that collect qualitative responses to new features and surface product challenges.
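A simple experiment framework like the one described usually reduces to deterministic bucketing: hash the user and experiment together so each user always lands in the same variant. This is a minimal sketch of that idea; the function and variant names are ours, not drawn from any company's actual framework.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into an experiment variant.

    Hashing user_id together with the experiment name means the same
    user always sees the same variant for a given experiment, without
    storing any assignment state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]
```

Because the assignment is a pure function of the inputs, any service can compute it independently and agree on which variant a user belongs to.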
Everything from the profile survey questions a customer initially fills out to the feedback and ratings they provide for each piece we send them is measured and tracked. Data and machine learning are used to solve our cold start problems, improve the items we send out in subsequent boxes (our system gets smarter each time), improve the efficiency of our operation, and even to make our own clothes. Almost all of our product roadmap is driven by data analytics. We are constantly running A/B tests and evaluating results in order to inform the next step of feature iteration.
When Przemek (our co-founder) wanted to buy a camper van, he scraped the most popular listing sites and checked whether the price was seasonal (it was, by the way). We apply a data-driven culture in both our work and personal lives. Gathering information and presenting it clearly is part of our decision-making process, and we try to share this culture with our clients.
We strive for integrity and accuracy to ensure that quality data is delivered to the market. We are metrics-driven in our decision-making, both internally and externally: we want the decisions we make to be backed by data and metrics. We have Amplitude, Google Analytics, and custom Grafana dashboards set up so that we can constantly monitor the effects of the changes we make. We’ve spent countless hours making sure that we can trust our internal metrics, and that people have those metrics in place as their north star as they go about their day-to-day.
What is the overall company revenue? A lot of companies actually can’t answer this question. Arriving at a number that can be trusted can be very difficult because of different currencies, custom deal structures, and various payment methods. We constantly monitor and validate our north star metrics (Daily Active Users, Revenue, Press mentions) and discuss our progress during every weekly All Hands meeting.
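Arriving at a trustworthy revenue number across currencies comes down to normalizing every transaction into one base currency before summing. This is a hypothetical sketch of that step; the rate table is made up for illustration, and a real system would pull rates from an FX service and also handle deal structures and payment methods.

```python
# Hypothetical exchange rates to USD -- illustrative values only.
FX_TO_USD = {"USD": 1.0, "EUR": 1.08, "GBP": 1.26}

def normalized_revenue(transactions):
    """Sum revenue across currencies by converting each (amount, currency)
    pair into USD before adding them up."""
    return sum(amount * FX_TO_USD[currency] for amount, currency in transactions)

txns = [(100.0, "USD"), (50.0, "EUR"), (20.0, "GBP")]
# normalized_revenue(txns) is approximately 179.2 USD
```

In practice the hard part is agreeing on which rates and which timestamps to use, which is why a single validated north-star number is worth the effort.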
We use Chartio and Redshift extensively to analyze and test all aspects of our product. Our product is designed to be passive: customers can sign up for Digit and then go on about their lives, while Digit works in the background to keep them financially healthy. Given this passive nature, we rely on data (both qualitative and quantitative) to guide product roadmaps and growth experiments. We currently have 35 features and experiments being tested in production.
As an engineer at Digit, you’ll be responsible for working with our data analysts and user researchers to define what metrics you care about and which you are trying to improve.
Our biggest challenge is connecting the dots between disparate data sources within our customers’ sites to inform them where bottlenecks exist and how they can remove them.
We use Jeff Bezos’ Type 1 vs. Type 2 decision-making framework for effective decision making. Type 2 Decisions are like walking through a door — if you don't like the decision, you can always go back. Generally, we optimize for velocity in our actions, and we default to “Type 2” mode: We make the best decisions we can with the information we have (vs. waiting for better information).
Type 1 Decisions are not reversible, so we are very careful when making them. Part of recognizing a Type 1 Decision is ensuring everyone feels they can raise a flag and say, “This is a Type 1 Decision.”
There are times when we optimize for trust (i.e., a “trusted brand”) rather than just velocity: when we code, we have someone else review pull requests, and we don’t merge our own work except on an exception basis. When we produce public content, we have someone else review the post before publishing it (just like code review).
When you automate processes at this scale, errors and issues can easily multiply. We use data to tackle this head-on by systematically tracking the data from all our automated processes, which helps us remove human bias and guesswork from our work. Why did the latest run of a particular protocol fail? We could try to guess using scientific intuition and previous experience, or we could look at the data for that run and compare it with previous runs to highlight the key differences in procedure. Did the temperature change? Was a reagent left out for longer than usual?
This approach also helps to optimise our processes. We want to make our system as efficient as possible, and concentrate effort where it will bring most reward. By systematically tracking every process and every parameter, the answers come straight from the data, avoiding a reliance on guesswork to figure out where to target our efforts.
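The compare-and-contrast step described above can be as simple as diffing a failed run's recorded parameters against a baseline run. This is a minimal sketch under our own assumptions; the parameter names are invented for illustration.

```python
def diff_runs(failed: dict, baseline: dict) -> dict:
    """Return the parameters whose values differ between a failed run
    and a baseline run (e.g. the last successful run of the protocol),
    mapping each differing key to its (baseline, failed) pair."""
    keys = set(failed) | set(baseline)
    return {k: (baseline.get(k), failed.get(k))
            for k in keys
            if baseline.get(k) != failed.get(k)}

# Hypothetical run logs: temperature and reagent wait time drifted.
baseline = {"temperature_c": 37.0, "reagent_wait_min": 5, "operator": "robot-1"}
failed   = {"temperature_c": 39.5, "reagent_wait_min": 12, "operator": "robot-1"}
# diff_runs(failed, baseline) surfaces temperature_c and reagent_wait_min
# as the changed parameters, and leaves operator out.
```

Systematic tracking makes this diff possible in the first place: if a parameter isn't logged on every run, it can never show up as the culprit.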
Data is also key for our evolution engine - it parses vast numbers of DNA sequences to infer rules about how the sequences are constructed, in order to shape our experimental testing. As with many ML systems, the more data it receives, the smarter it gets!
The evolution of our company, starting as a B2C tech support company, transitioning to B2B tech support, and pivoting into the marketplace we are today, is evidence that we always follow the signal of data. Some of the leadership team have backgrounds in banking, private equity, and financial modeling, so naturally there is a focus on metrics. Every Monday, everyone at the company meets to share their weekly goal. You’ll never hear someone say, “I’m going to do market research this week,” because we only set discrete, quantifiable goals like, “This week, I am going to research 10 companies with a deliverable on Friday.”
Data informs everything we do at Stitch Fix, from which pants we should order to the images that appear on the homepage. We believe we can use data to create a better experience for our customers; that’s why we employ nearly as many data scientists as software engineers, and why part of life here is working with data scientists to craft experiments and solve problems.
One example of how we leverage data is how our data scientists have improved our demand model. Our buying decisions are much more informed than they would be under traditional retail models, giving us a better grasp on questions like, “How many customers do we think we’ll have?” and “How many customers will be wearing smalls vs. extra smalls?” Where traditional retailers rely on industry-standard size breaks, we leverage data to make informed buying decisions and better utilize that inventory.