Our success rests on our ability to keep innovation alive at every level of the company. We know that the fastest way to suppress innovation is to make it unsafe to fail, which is why we’ve worked hard to ensure that everyone at Mode is able to take risks. We’ll try something out even if we’re not 100% sure it will succeed because we know that as long as we did the best we could to de-risk it, no one will shame us if it doesn’t work out. In everything we do, we have a “Shared Responsibility, Shared Risk” mentality.
Making it safe to fail requires a blameless post-mortem process, and for that, we have After Action Reviews (AARs). During an AAR, we start with the assumption that everyone did the best they could with the information they had at the time. Participants focus on how to understand and improve the systems and processes that led to the failure, rather than who did what wrong. We use these meetings as opportunities to reinforce that Mode is a safe place to fail, build stronger safeguards against future failures, and bring the team closer together.
Your current expertise does not limit what you are able to do or work on here at Covariant. Many of us have research backgrounds in academia, where failure is common and expected. We view it as a strength that our domain knowledge differs so widely across the company, which is why we encourage people to speak up and ask questions if they don’t know or understand something. You won’t know everything (no one can!), but embracing this truth gives us the confidence to make risky bets and try new things. Not only does this help us to grow, but this is how we as a company and industry innovate.
Because robotics is so cross-functional, when something isn’t working, it takes an open mind to find the root cause. Triaging each problem means considering mechanical, electrical, general software, or AI errors. Some organizations get stuck blaming the weakest link. At Covariant, we are hyper-focused on making sure our customers are thrilled with our products, and doing that effectively means short-circuiting the failure mode of blame. Success for us requires teamwork, so we fully embrace every challenge as a single unit.
There is a lot to build. It’s exciting, but some mistakes are inevitable. We strive to create a safe space that’s open to failure. We learn from our mistakes openly and support each other in the process. To foster a safe learning environment, we try to document everything so we and future team members can build on what we’ve learned (see Open Communication).
For developers, we prioritize building tools along the way to help reduce that risk, like CI/CD, linters, and release automation to help us ensure that a small number of people can be responsible for a large codebase. We do weekly releases to quickly fix bugs that crop up, and invest time to improve the upgrade process for our customers. We pair regularly to share new work and to collaboratively solve tough problems.
In order to innovate, you have to take risks and make mistakes along the way. “We chase direction from customers and don’t hold it against engineers if their project doesn’t wind up being successful,” says John. Robotics is also inherently cross-functional, so when something isn’t working, we have to approach the problem with an open mindset. The root cause could be a mechanical or electrical issue, general software problem, or algorithm bug. Our goal is always to learn from a mistake and never to place the fault on any one person, which is why we have blameless postmortems. “We solicit anonymous feedback for retrospectives, so everyone is comfortable sharing their thoughts on any problems that occurred and how we can prevent them from happening in the future,” says Jackie, a field application engineer. Success for us requires teamwork, and we triage each problem with that in mind.
We’ve learned the most from our failures. We’ve had companies, products, and projects all fail and they’ve been some of our most impactful learning experiences. Making mistakes has allowed us to become more thoughtful and intentional about our decisions, including carefully thinking about roles before we post them. We’ve found when a project fails, it’s not any one person’s fault, but rather mismanaged expectations or a misalignment/misappropriation of resources. Only in an environment where you fail enough to learn – but not so much that you feel ineffective – will you be successful.
We want Indent to be a warm place, where you can feel safe, do your best work, and feel good about what you do. This means being able to take risks and learning how you can iterate and improve moving forward.
Trade-offs: We lean toward failing early and often to learn what we need to make the next iteration successful, rather than risk going too deep on a shaky foundation, which carries a significant opportunity cost. Certain problems require a correct answer on the first try, so we try to distinguish whether a decision is a “one-way door” or one we can approach iteratively.
We expect failure in our production systems and build safety nets into those systems that automatically recover from it. If machines can fail, so can people. Part of resilience means reducing the negative impact of failure, so we encourage systems that self-heal and roll forward. Engineers aren’t dinged when they push an error; they’re praised when they reduce the time spent in an outage.
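As a minimal sketch of the self-healing idea (the helper name and retry policy here are our own illustration, not this team's actual tooling), a wrapper can absorb transient failures and only surface persistent ones:

```python
import time

def with_retries(operation, attempts=3, base_delay=0.01):
    """Run an operation, retrying with a small linear backoff.

    Transient failures are absorbed (the system "self-heals");
    only a persistent failure propagates to the caller.
    """
    for attempt in range(1, attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == attempts:
                raise  # give up: the failure is persistent
            time.sleep(base_delay * attempt)

# A service call that fails twice, then succeeds.
calls = {"n": 0}
def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network blip")
    return "ok"

result = with_retries(flaky_call)  # succeeds on the third attempt
```

The same shape extends naturally to roll-forward recovery: instead of paging a human on the first error, the system retries, then escalates only when retries are exhausted.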
We use AI, especially machine learning, precisely because we do not know the right answer before we start. We run many experiments and accept that most will fail. Our success metrics, therefore, are based on learning quickly to ultimately make people more effective through the use of AI: some things we build will help a little, some a lot, and some not at all. In the end, our goal is to keep moving the proverbial needle forward. We do this by measuring everything and constantly evaluating our resilience. Our quantitative measures are around system downtime, mean time to resolution, and time to push a change. Our qualitative measurements include 1:1 conversations between managers and their directs, discussion at all-hands, and our monthly anonymous survey.
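The quantitative side of that measurement is simple arithmetic. Here is a minimal sketch (the incident timestamps are invented for illustration) of computing mean time to resolution from an incident log:

```python
from datetime import datetime, timedelta

# Hypothetical incident log: (detected_at, resolved_at) pairs.
incidents = [
    (datetime(2023, 1, 5, 10, 0), datetime(2023, 1, 5, 10, 30)),  # 30 min
    (datetime(2023, 2, 9, 14, 0), datetime(2023, 2, 9, 15, 30)),  # 90 min
]

def mean_time_to_resolution(log):
    """Average (resolved - detected) across incidents, as a timedelta."""
    total = sum(((resolved - detected) for detected, resolved in log),
                timedelta())
    return total / len(log)

mttr = mean_time_to_resolution(incidents)  # 60 minutes for this log
```

System downtime and time-to-push-a-change can be computed the same way from their own timestamp pairs; the hard part is collecting honest timestamps, not the math.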
For example, a major increase in writes to our main database caused degraded performance for our entire suite of products. We isolated the cause to a data fetch from one of our APIs that was stuck in a loop. The post-mortem focused on the technical problem and its root causes, and included recommendations for process improvements. One of the attributes our team holds dear is Team > Self: any incident is the team’s fault, not that of the individual who may have pushed or deployed some code.
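One generic safeguard against that class of failure (a sketch of ours, not the team's actual fix) is a hard cap on any pagination loop, so a buggy cursor cannot spin forever and flood downstream systems:

```python
def fetch_all(fetch_page, max_pages=1000):
    """Drain a paginated API, capping iterations so a cursor bug
    can't loop forever and hammer the database behind it."""
    results, cursor = [], None
    for _ in range(max_pages):
        batch, cursor = fetch_page(cursor)
        results.extend(batch)
        if cursor is None:  # the API signals the last page
            return results
    raise RuntimeError("exceeded max_pages; possible infinite fetch loop")

# Well-behaved API: three pages, then a None cursor.
PAGES = {None: ([1, 2], "a"), "a": ([3], "b"), "b": ([4], None)}
ok = fetch_all(lambda cursor: PAGES[cursor])

# Buggy API: always returns the same cursor, so the cap fires.
def stuck(cursor):
    return [], "same-cursor-forever"
```

A cap like this turns a silent performance degradation into a loud, diagnosable error.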
Startups are inherently risky ventures, and if we don’t take enough risks, we won’t succeed. To support a culture of risk-taking, we strive to create a safe environment to fail. For example, when we ship bugs or accidentally cause outages, we use those as opportunities to improve our process as a team rather than focusing on the mistakes of any one person.
Postmortems are standard and we always use blameless language. Creating a place where people feel safe to take risks, make mistakes, and learn along the way is what allows us to iterate quickly and build the best product. In fact, one of our engineers actually set a personal goal “to make at least N mistakes this year,” which only underscores how comfortable team members are to take risks, reflect, and learn. At the end of the day, we take our work seriously, but we’re lighthearted folks.
Mobile-based personal and professional development platform
When we fail, we focus on what we can take away from the experience to apply moving forward. One way we implement this behavior is through a practice we refer to as “learning memos.” Employees are encouraged to take time to reflect on what they’ve learned through various trials and tribulations, and write their thoughts out in a way that can be shared with the rest of the team.
When we introduce a critical bug into a production environment, we follow up the incident with a post-mortem that focuses both on what went wrong and on how to prevent the incident from being repeated in the future. We are strong believers in approaching this process with empathy and acting as humans, not robots, during the post-mortem analysis.
We are also acutely aware that psychological safety is key to both innovation and creating a positive work environment where people feel empowered to speak up with ideas, issues, or concerns. Our managers across the organization welcome and invite feedback of all kinds.
If we're not failing at all, we’re not taking enough risks or pushing ourselves beyond our comfort zones. Being eventually correct is much better than being consistently incorrect, and we focus on getting to the right answer through reflection and discussion, instead of trying to be right the first time. We know we’ll make mistakes along the way, but we believe each failure should be examined dispassionately, objectively, and without blame. Instead, we treat each misstep as an opportunity to improve. An environment of mutual trust and respect empowers us to debate ideas effectively and create the best outcomes for our customers and our team.
Listen to our VP of Engineering, Uma Chingunde, share her strategies on how to cultivate a blameless culture.
Things are going to go wrong or not work; how we react to them defines how we move forward. We’ve built several services and products that worked for a while but needed to be torn down and rebuilt later. Our needs change, the industry changes, and over time, our customers expect more, too. As a result, we need to adapt and respond quickly. Our data powers the growing on-demand economy and beyond, and we need to be able to deliver what our customers need.
We have a strong practice of internal and external post-mortems when an issue occurs and take ownership when things go wrong. For example, when switching to new infrastructure, a command was accidentally run in production that was designed for development… this resulted in a database being dropped! 😱 Our whole team rallied and we had it restored within minutes along with pull requests out to prevent it from happening again. All the while, we experienced zero downtime! 😎
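A cheap guard against that exact failure mode (this helper and the APP_ENV variable are illustrative assumptions, not the team's actual code) is to make destructive entry points check the environment before doing anything:

```python
import os

def guard_destructive(action, env=None):
    """Refuse to run a destructive command in production.

    Dev-only tools call this first, so a copy-pasted command
    fails loudly instead of dropping a production database.
    """
    env = env or os.environ.get("APP_ENV", "development")
    if env == "production":
        raise RuntimeError(f"refusing to run {action!r} in production")
    return env

guard_destructive("drop_database", env="development")  # allowed
```

The pull requests mentioned above would typically add exactly this kind of check, plus separate credentials for development and production so the mistake can't happen by default.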
It’s completely okay to get it wrong. In fact, we have a strong bias toward taking risks and making mistakes versus playing it safe. One of our core company values is, “We are an elite force. It’s always we, never me.” We are all open about making mistakes across all levels, including our CEO. Our most senior developers admit to hacks and come forward about being wrong when they are. We’re honest in our PRs and the feedback we give, and we also recognize risks and knowingly accept them. We have dedicated time to build test harnesses that catch bugs and alert us (e.g., alarms, emails) because we encourage speed over perfection. That makes us more comfortable taking risks, too, because we are reassured that mission-critical regressions will be caught early.
To us, feeling safe to fail goes hand in hand with open feedback. We frequently reference the radical candor graph when we are giving or asking for feedback, and make sure everyone has context around what they’re building. We document our in-person meetings, whether they be dev team retrospectives or standups, and share them in our shared Slack office. We also have a Facebook Portal in both of our offices, which is on at all times. It provides an additional wormhole from one office to another. And despite our time differences, we schedule weekly All Hands meetings and weekly #devs meetings when our timezones overlap so that everyone can attend. Ideas are shared, concerns are voiced, and we happily address successes and failures from the week.
Seamlessly create, send, and track video emails
Colorado Springs, Denver, or Remote in CO, NY, PA, WI
Failure is inevitable and it can easily be intimidating. We believe it’s important to intentionally work through that fear by experiencing failure, talking about it, and then reinforcing practices that can limit failure’s impact, or, even better, leverage its teachings. Servers crash, ideas don’t pan out, timelines slip; the trick is to anticipate, manage, and mitigate risk to a degree that enables you to thrive without being frozen.
Cloud-based observability platform
San Francisco, Portland, Seattle, Phoenix, Denver, Dallas, or Los Angeles
Our Reliability Team runs one of the practices at New Relic that is incredibly on point: Do not Repeat Incidents (DRI). The retros for DRIs are completely blame-free, and when managers write up summaries, we don’t even include names. There’s a lot of acknowledgment that anyone can make a mistake, and we never focus on who did it. Instead, we try to identify what’s missing in our process or infrastructure and what we need to solve in order to close those gaps.
Specifically on our Data OS team, there is a tremendous amount of support reviewing PRs and improving our code. There’s actually quite a bit of silliness and joy! Feedback is never given from a punitive place and always comes from a place of wanting to support you and your career.
Project Advisors change from project to project depending on the needs of each individual project and DRI. For example, you might have an advisor with deep skill in a functional area you’re less familiar with, or an advisor who can help you avoid duplicated effort because they’ve done the same type of work before. Whether you’re developing a new unit-testing harness or designing a new onboarding experiment, your Project Advisor will be there to guide you whenever uncertainty arises.
Professionally questioning each other’s assumptions is key to rejecting bias and creating a more inclusive culture. By continually challenging our conscious and unconscious biases, we’re validating our own direction constantly and can better elevate our work. Not only does this allow for fast and sustainable growth, but it also ensures our culture and processes remain productive and empowering.
Career network for college students and recent grads
San Francisco, Denver, or Remote (US)
At Handshake, we recognize a growth mindset is only possible when individuals are given a safe environment to fail. Many Handshake managers consider failing a necessary part of growing as an engineer here, which is why we embrace failures as learning opportunities. In fact, many of our employees have favorited the #learning-from-losses Slack channel, where people share their recent failures with humility and describe what they’ve learned from them.
As discussed previously (see EQ > IQ), we practice blameless postmortems for mistakes. Rather than pointing fingers, we focus on how to fix the issue at hand and prevent it from occurring in the future. Postmortems are then shared across the entire engineering organization so everyone can learn and improve collectively. For instance, one of our engineers wrote a feature to run a promotion within the Handshake platform, but unfortunately, the data from the promotion was getting corrupted due to a misunderstanding of how Rails destroys records. “There was no finger-pointing or blaming about why this happened,” the engineer said, “only a calm focus on how we can prevent the next incident. In fact, one of the team members even researched why this problem happened in the first place and gave a tech talk to fellow engineers on how to avoid these record-destroy pitfalls.”
We recognize that these mistakes don’t make us bad engineers; rather, they are lessons that help all of us get better and help others avoid making the same mistakes.
It’s inevitable that mistakes will happen, but we never place the blame on any one person. Rather, if an incident does occur, we remain calm and identify as a team how we can mitigate it. Once we’ve rolled things back to a safe point and limited the risk, we can address the root fix. Our post-mortems are always blameless, and we keep a running document of the timeline as things happen so we have a reference.
We operate from the principle that if one person was able to cause a problem, then it’s a failure of the system. There should be enough checks that it doesn’t come down to one person doing something wrong, and if that’s the case, then we need to reexamine the processes we have in place. This helps us build stronger safeguards against future failures and ensures we can continue to take risks and innovate.
Automated financial management to save, plan, and invest all in one place
Palo Alto, CA or Remote (US)
We believe it’s crucial to learn from incidents and take actions to prevent them from recurring, which is why we’ve formalized this process with blameless post mortems. Our primary goal in conducting a post mortem is to ensure the incident is documented, that all contributing root causes are well understood, and that effective preventative actions are put in place to reduce the likelihood and impact of a recurrence.
For a post mortem to truly be blameless, we make sure it focuses on identifying an incident’s contributing causes without placing blame on an individual or team for bad or inappropriate behavior. At Wealthfront, a blamelessly-written post mortem assumes everyone involved in an incident had good intentions and did the right thing with the information they had. This means engineers whose actions contributed to an incident can give a detailed account of what happened without fear of punishment or retribution.
Our industry is incredibly complicated and opaque. To break through, we need to innovate and make mistakes.
We want developers who bring an inherent curiosity to everything they do and a desire to try new things. We value enthusiasm and smart engineering more than expertise in a particular stack. Some of our most talented engineers have mastered programming languages while on the job.
Following major projects, we conduct thorough post-mortems where everyone is encouraged to contribute. You’ll never hear blame in these meetings. We take shared responsibility for mistakes and a collaborative approach to fixing them. If you try something and fail, we’ll share learnings together and choose a different strategy next time.
It’s a horizontal organization at Monograph. We’re very open about how no technology or process is sacred, and we’re all figuring it out together as we go. Operating under the guiding mantra of “blame systems, not people,” we reinforce the blameless culture in our feedback and communication. For instance, we’ve actually had to rebuild Monograph twice now because of early architecture choices, and each time was a decision we made together. We hope you’ll experiment and work openly with us on both Monograph and whatever side project(s) you have.