We encourage everyone at Custora to try new things, and we want everyone to feel comfortable with the failures that often come with experimenting. In fact, our product itself emphasizes the importance of experimentation: our app lets users test different types of emails based on their customer demographics and compare how those emails perform. A/B testing is a core feature and a value proposition of what we provide to our customers.
We’re open about experimentation at the organizational level, too. Inspired by Spotify’s squad system, we tried implementing squads at Custora. After some time, though, we realized it was too process-heavy and that we were too small a team for squads to be effective. Even though it was a bit of a failed experiment, it was a useful exercise for everyone, and we all happily reverted to our old way of doing things, operating in teams.
Even though engineers always work on a team, individuals are never restricted to working on any one particular thing. While our workloads are shaped by business goals and product plans, engineers have agency to choose what they work on each product development cycle. Having a say in what you work on keeps everyone excited and challenged, and it also cultivates more creativity in how we solve problems. Just like our customers, we encourage one another to experiment and run tests, learning from both our failures and successes.
The default for most startups is failure, so in order to fight against those odds, we need to push ourselves in ways that will feel uncomfortable at times. We learn a lot more from failing than we do from immediate success (where you often pat yourself on the back without truly understanding why you succeeded). With this in mind, we run a lot of experiments and A/B tests with the expectation that more than 50% will fail. As Thomas Edison said, “To have a great idea, have a lot of them.”
When we asked Yi Sun (one of our engineers) how he’d describe our team, he said, “Success on the engineering team means not being afraid of failure. The team leadership truly supports an environment to fail. That might sound crazy, but if you are aiming to never fail, you are not getting out of your comfort zone. When you have confidence to try new things no matter if it fails, you are on a path to innovation.”
We jump on the opportunity to learn from our mistakes. In a world where not all problems are solved, there are bound to be answers that don’t quite hit the mark. Our customers depend on our software to be reliable, so we have many processes in place to prevent regressions. We understand that everyone on the team is human (except you, SlackBot), so we’re bound to make mistakes. When we do make errors, we compile information about the incident, document it, and focus on creating preventive measures that make us more reliable.
Here’s an example to illustrate how we handle failures. Our website’s signup form was recently hit by a malicious script that batch-created a bunch of accounts. The form didn’t include our Google reCAPTCHA snippet and thus opened us up to the attack. The original programmer of the signup form was not targeted or blamed for this. Rather, the team came together to add the reCAPTCHA code, and we put a process in place to prevent sign-ups like this from occurring in the future. Our team has each other’s back, and “figuring out who pushed the bug” wasn’t on our agenda. Ultimately, we created a stronger signup experience in the process.
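For readers who haven’t wired this up before, here is a minimal sketch of what server-side verification of a reCAPTCHA token can look like on a signup endpoint. The siteverify URL and its parameters are Google’s documented API; the Flask route, field handling, and environment variable are illustrative assumptions, not Custora’s actual code.

```python
# Minimal sketch of server-side reCAPTCHA verification on signup.
# The siteverify endpoint and parameters are Google's documented API;
# the Flask app, field names, and env var are illustrative assumptions.
import os
import requests
from flask import Flask, request, abort

app = Flask(__name__)
RECAPTCHA_SECRET = os.environ["RECAPTCHA_SECRET_KEY"]  # assumed secret location

@app.route("/signup", methods=["POST"])
def signup():
    token = request.form.get("g-recaptcha-response", "")
    verdict = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET, "response": token,
              "remoteip": request.remote_addr},
        timeout=5,
    ).json()
    if not verdict.get("success"):
        abort(400, "reCAPTCHA verification failed")
    # ...create the account as usual...
    return "ok", 201
```

The key point is that the check happens on the server, so a script that skips the widget entirely still can’t batch-create accounts.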
We think this is essential to learning and improving as a team. Inevitably, some of the things we try aren't going to be successful. When something goes wrong or we fall short of our goals, we don't look to figure out who is to blame; we figure out where the gaps are in our processes (or our expectations!). We go into retrospectives with the understanding that people did the best they could with the conditions they had. A failed experiment isn't viewed as a personal failure, thanks to our culture of support and accomplishment.
Sometimes these experiments do not yield the results we want, or simply don’t work, but we encourage experimentation and failing quickly because learning from those failures is what enables future success.
In our experience, failure is rarely a direct function of the individual contributor; more often it points to a process inefficiency that needs to be remediated. To diagnose the issue, we hold blameless retrospectives to understand what happened, identify how to improve, and communicate our findings to the team. We look to continually improve our output.
For example, we will push out early software under feature toggles to test and observe customer usage with production traffic. If the experiment and its associated A/B tests prove successful, we’ll expand the rollout accordingly. On the other hand, if the experiment fails, we put the idea on the shelf and move on.
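As a rough illustration of that pattern (not this team’s actual tooling), a feature toggle for a gradual rollout can be as small as a deterministic bucketing check; the flag names, percentages, and in-memory store below are made up.

```python
# Illustrative percentage-based feature toggle for gradual rollouts.
# Flag names, rollout percentages, and the in-memory store are assumptions.
import hashlib

ROLLOUT_PERCENT = {
    "new_reporting_ui": 10,  # 10% of users see the experimental path
    "bulk_export": 0,        # fully off
}

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into [0, 100) and compare to the rollout %."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < ROLLOUT_PERCENT.get(flag, 0)

# Usage: gate the new code path and fall back to the old one otherwise.
if is_enabled("new_reporting_ui", user_id="user-42"):
    pass  # serve the experimental experience
else:
    pass  # serve the existing experience
```

Because the bucketing is deterministic, a given user sees a consistent experience across requests, and expanding the rollout is just a matter of raising the percentage.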
In one case, a major increase in writes to our main database caused degraded performance across our entire suite of products. We isolated the cause to a call to one of our APIs that was stuck in a loop fetching data. The post-mortem focused on the technical problem and its root causes, including recommendations for process improvements. One of the attributes our team holds dear is Team > Self: any incident is the team’s fault, not that of the individual who may have pushed or deployed the code.
We take every failure as an opportunity to learn and be better. We know we may not fix the problem right away, but it’s about testing, trialing, and learning so that the next time we tackle it, it’ll be better than the last. We released the first beta of our API in a few weeks, when it wasn’t really rock solid yet. Our first big user actually broke it, as high availability wasn’t there yet. However, it allowed us to get feedback and to focus on the highest-impact items. Since then, we have done tons of iterations to arrive at the product we have today, and we continue to iterate on it every week. Our biggest failures are often our best learning opportunities, which is why we take the time to thoroughly document post-mortems (our DNS DDoS incident, an 8-minute indexing downtime). We share these with the public too, in the hope that everyone can benefit from the lessons we’ve learned the hard way.
Things are going to go wrong or not work; it is how we react to them that defines how we move forward. We’ve built several services and products that worked for a while but needed to be torn down and rebuilt later. Our needs change, the industry changes, and over time our customers expect more, too. As a result, we need to be able to adapt and respond quickly. Our data is used to power the growing on-demand economy and beyond, and we need to be able to deliver what our customers need.
We have a strong practice of internal and external post-mortems when an issue occurs, and we take ownership when things go wrong. For example, when switching to new infrastructure, a command designed for development was accidentally run in production… and a database was dropped! 😱 Our whole team rallied, and we had it restored within minutes, along with pull requests out to prevent it from happening again. All the while, we experienced zero downtime! 😎
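The write-up doesn’t say exactly what those pull requests contained, but a common safeguard along these lines is to make destructive maintenance commands refuse to run against production unless explicitly confirmed; the sketch below is purely illustrative.

```python
# Illustrative guard for destructive maintenance scripts; not this team's actual fix.
import os
import sys

def confirm_destructive(action: str) -> None:
    """Abort destructive actions in production unless explicitly confirmed."""
    env = os.environ.get("APP_ENV", "development")           # assumed env variable
    override = os.environ.get("I_KNOW_THIS_IS_PRODUCTION")   # assumed escape hatch
    if env == "production" and override != "yes":
        sys.exit(f"Refusing to run '{action}' against production without confirmation.")

confirm_destructive("drop database")
# ...only reached in development, or in production with the explicit override...
```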
Carbon Black is a constant work in progress, and the best way we navigate this is by living our values. One of the biggest ways we live them is by running a retro when we fail, focused on how we can do better in the future, not on who caused the issue. By shifting the language to “doing better going forward” rather than “which throat do I choke?”, we start from a place of safety. Then we take the recommended steps to improve and add those steps to our future work.
A great example of this is a migration issue we had on CB Protection a few years ago. A person wrote a script with a mistake that ended up modifying data. Once we recognized that the issue came from a poor migration step, we went right to remediation and lessons learned for the team. This meant three weeks of planning for how to address the root cause and how to get our customers back to health. We spent zero time calling out the person who wrote the script. Instead, we focused on how to tactically fix the software issue and help our customers.
Only after those two priorities were addressed did we have a personal retro with the person. We discussed what they learned from the process and what, if anything, they’d change. (In this case, the person said they’d add several tests to our smoke tests. And that’s what we did!)
We also do post-mortems, and we always do a “lessons learned” retro for major failures; the migration issue above is one such case. We don’t always share the results of a post-mortem externally, but we do make sure our customers know what the problem was and how we’ll fix it. In the scenario above, for example, we talked to every impacted customer and worked with them to remediate the data change.
We actively encourage the offering and exploration of new ideas, even if their value is uncertain to the broader team. If a next step forward on an idea can’t be agreed upon, we encourage the parties to agree on an experiment instead. An experiment is something that is safe to try and has a clear hypothesis about the outcomes we expect. From its execution, we’ll all learn something of value, and should it fail, it won’t cause the organization unrecoverable or serious harm.
Failure isn’t the fault of an individual developer; it’s the fault of the systems and practices that failed to prevent it. We learn from one another and continuously strive to learn from our own mistakes and the mistakes of others. Senior developers and management are transparent about our past failures (dropping production databases, for example!) and encourage new staff to try new things and experiment. We chronicle these failures in a postmortem repository and supporting documentation to make each error an opportunity to learn.
This is one of our core values. We highly value continuous learning, both at the level of the individual and as a team navigating the ever-changing dating space. We constantly reflect on and find opportunities to do better, which inevitably comes with making mistakes. As one example, we send “nudges” to our users when there’s a lull in their chat conversation. We thought that if our nudges were sassy and sarcastic, they might increase chat participation by giving both parties something to laugh about or comment on. Unfortunately, people wrote in that our nudges were too abrasive and over the top. We switched back to the old messages without a second thought. Failure? Not at all. We know more now than we did before.
Monograph is a horizontal organization. We’re very open about how all of this is new to us and how we’re figuring it out together as we go; it’s part of the process. There is no blame whatsoever, and with how much we communicate with one another, it’s never one person’s fault. We’ve actually had to rebuild Monograph twice now, and each time it was a decision we made together. We hope you’ll experiment and work openly with us, both on Monograph and on whatever side project(s) you have.