Our success rests on our ability to keep innovation alive at every level of the company. We know that the fastest way to suppress innovation is to make it unsafe to fail, which is why we’ve worked hard to ensure that everyone at Mode is able to take risks. We’ll try something out even if we’re not 100% sure it will succeed because we know that as long as we did the best we could to de-risk it, no one will shame us if it doesn’t work out. In everything we do, we have a “Shared Responsibility, Shared Risk” mentality.
Making it safe to fail requires a blameless post-mortem process, and for that, we have After Action Reviews (AARs). During an AAR, we start with the assumption that everyone did the best they could with the information they had at the time. Participants focus on how to understand and improve the systems and processes that led to the failure, rather than who did what wrong. We use these meetings as opportunities to reinforce that Mode is a safe place to fail, build stronger safeguards against future failures, and bring the team closer together.
There is a lot of stuff to build. It’s exciting, but some mistakes are inevitable. We strive to create a safe space that’s open to failure. We learn from our mistakes openly and support each other in the process. To foster a safe learning environment, we try to document everything so we and future team members can build off of what we’ve learned (see Open Communication).
For developers, we prioritize building tools along the way to reduce that risk, like CI/CD, linters, and release automation, so that a small number of people can be responsible for a large codebase. We do weekly releases to quickly fix bugs that crop up, and invest time in improving the upgrade process for our customers. We pair regularly to share new work and to collaboratively solve tough problems.
Your current expertise does not limit what you are able to do or work on here at Covariant. Many of us have research backgrounds in academia, where failure is common and expected. We view it as a strength that our domain knowledge differs so widely across the company, which is why we encourage people to speak up and ask questions if they don’t know or understand something. You won’t know everything (no one can!), but embracing this truth gives us the confidence to make risky bets and try new things. Not only does this help us grow, it’s also how we, as a company and an industry, innovate.
Because robotics is so cross-functional, when something isn’t working, it takes an open mind to find the root cause. Triaging each problem means considering mechanical, electrical, general software, or AI errors. Some organizations get stuck blaming the weakest link. At Covariant, we are hyper-focused on making sure our customers are absolutely thrilled with our products, and doing so effectively means short-circuiting that failure mode of blame. Success for us requires teamwork, so we fully embrace every challenge as a single unit.
We expect failure in our production systems and are building safety nets into those systems that automatically recover from failure. If machines can fail, so can people. Part of resilience means reducing the (negative) impact of failure. We encourage systems that self-heal and roll forward. Engineers aren’t dinged when they push an error; they’re praised when they reduce the time spent in the resulting outage.
We use AI, especially machine learning, precisely because we do not know the right answer before we start. We run many experiments and accept that most will fail. Our success metrics, therefore, are based on learning quickly to ultimately make people more effective through the use of AI: some things we build will help a little, some a lot, and some not at all. In the end, our goal is to keep moving the proverbial needle forward. We do this by measuring everything and constantly evaluating our resilience. Our quantitative measures are system downtime, mean time to resolution, and time to push a change. Our qualitative measures include 1:1 conversations between managers and their directs, discussion at all-hands, and our monthly anonymous survey.
We’ve learned the most from our failures. We’ve had companies, products, and projects all fail and they’ve been some of our most impactful learning experiences. Making mistakes has allowed us to become more thoughtful and intentional about our decisions, including carefully thinking about roles before we post them. We’ve found when a project fails, it’s not any one person’s fault, but rather mismanaged expectations or a misalignment/misappropriation of resources. Only in an environment where you fail enough to learn – but not so much that you feel ineffective – will you be successful.
We want Indent to be a warm place, where you can feel safe, do your best work, and feel good about what you do. This means being able to take risks and learning how you can iterate and improve moving forward.
Trade-offs: We lean toward failing early and often to learn what we need to make the next iteration successful, rather than risk going too deep on a shaky foundation and incurring a significant opportunity cost. Some problems do require a correct answer on the first try, so we try to distinguish whether a decision is a “one-way door” or one we can approach iteratively.
We think this is necessary for learning and improving as a team. Inevitably, some of the things we try aren't going to be successful. When something goes wrong or we fall short of our goals, we don't look for who is to blame -- we figure out where the gaps are in our processes (or our expectations!). We go into retrospectives with the understanding that people did the best they could with the conditions they had. Thanks to our culture of support and accomplishment, a failed experiment isn't viewed as a personal failure.
For example, a major increase in writes to our main database caused degraded performance for our entire suite of products. We isolated the issue to a data fetch from one of our APIs that was stuck in a loop. The post-mortem focused on the technical problem and its root causes, and included recommendations for process improvements. One of the attributes our team holds dear is Team > Self: any incident is the team’s fault, not that of the individual who may have pushed or deployed some code.
When we fail, we focus on what we can take away from the experience to apply moving forward. One way we implement this behavior is through a practice we refer to as “learning memos.” Employees are encouraged to take time to reflect on what they’ve learned through various trials and tribulations, and write their thoughts out in a way that can be shared with the rest of the team.
When we introduce a critical bug into a production environment, we follow up the incident with a post-mortem that focuses both on what went wrong and on how to prevent the incident from being repeated in the future. We are strong believers in approaching this process with empathy and acting as humans, not robots, during the post-mortem analysis.
We are also acutely aware that psychological safety is key to both innovation and creating a positive work environment where people feel empowered to speak up with ideas, issues, or concerns. Our managers across the organization welcome and invite feedback of all kinds.
Things are going to go wrong or not work; it is how we react to them that defines how we move forward. We’ve built several services and products that worked for a while but needed to be torn down and rebuilt later. Our needs change, the industry changes, and over time, our customers expect more, too. As a result, we need to be able to adapt and respond quickly. Our data powers the growing on-demand economy and beyond, and we need to be able to deliver what our customers need.
We have a strong practice of internal and external post-mortems when an issue occurs, and we take ownership when things go wrong. For example, when we were switching to new infrastructure, a command designed for development was accidentally run in production… and a database was dropped! 😱 Our whole team rallied and we had it restored within minutes, along with pull requests out to prevent it from happening again. All the while, we experienced zero downtime! 😎
It’s completely okay to get it wrong. In fact, we have a strong bias toward taking risks and making mistakes versus playing it safe. As a result, one of our core company values is “We are an elite force. It’s always we, never me.” We are all open about making mistakes across all levels, including our CEO. Our most senior developers admit to hacks and come forward when they’re wrong. We’re honest in our PRs and the feedback we give, and we also recognize risks and knowingly accept them. Because we encourage speed over perfection, we’ve dedicated time to building test harnesses that catch bugs and alert us (e.g., alarms, emails). That makes us more comfortable taking risks, too, because we’re reassured that mission-critical regressions will be caught early.
To us, feeling safe to fail goes hand in hand with open feedback. We frequently reference the radical candor graph when we are giving or asking for feedback, and make sure everyone has context around what they’re building. We document our in-person meetings, whether they be dev team retrospectives or standups, and post them in our shared Slack office. We also have a Facebook Portal in both of our offices, which is on at all times; it provides an additional wormhole from one office to the other. And despite our time differences, we schedule weekly All Hands meetings and weekly #devs meetings when our timezones overlap so that everyone can attend. Ideas are shared, concerns are voiced, and we happily address successes and failures from the week.
We believe that failure is one of the most informative teachers and if we want to truly Build for the Long Term, we need to learn as much as we can as quickly as possible. We need to push ourselves and feel comfortable dropping our egos, sharing our learnings from mistakes or missteps, and iterating quickly. Supply chain is also an incredibly complex, mistake-laden industry, so to operate within it successfully we need to understand that failure can happen, acknowledge when it does, and learn how to move forward.
We hold “funerals” for projects or meetings that we stop because we’ve learned from them, team members acknowledge mistakes across retros and informal conversations, and we regularly shout out people who have “Learned and Iterated,” one of our core company values.
Career network for college students and recent grads
San Francisco, Denver, or Remote (US)
At Handshake, we recognize that a growth mindset is only possible when individuals are given a safe environment to fail. Many Handshake managers consider failing a necessary part of growing as an engineer here, which is why we embrace failures as learning opportunities. In fact, many of our employees favorite the #learning-from-losses Slack channel, where employees show humility by sharing their recent failures and what they have learned from them.
As discussed previously (see EQ > IQ), we practice blameless postmortems when mistakes happen. Rather than pointing fingers, we focus on how to fix the issue at hand and prevent it from occurring in the future. Postmortems are then shared across the entire engineering organization so everyone can learn and improve collectively. For instance, one of our engineers wrote a feature to run a promotion within the Handshake platform, but unfortunately, the data from the promotion was getting corrupted due to a misunderstanding of how Rails destroys records. “There was no finger-pointing or blaming over why this happened,” the engineer said, “only a calm focus on how we can prevent the next incident. In fact, one of the team members even researched why this problem happened in the first place and gave a tech talk to fellow engineers on how to avoid these record destroy pitfalls.”
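(The exact mechanics of that incident aren't spelled out here, so the following is only a rough, hypothetical illustration in plain Python rather than the team's actual Rails code. It sketches one common shape of this pitfall: using a bare row delete when a cascading destroy was needed, which leaves dependent records pointing at data that no longer exists. The Promotion/Signup names are invented for the example.)

```python
# Hypothetical sketch only (invented names, plain Python standing in for an ORM):
# it shows a delete-vs-destroy mix-up of the general kind described above.

promotions = {}  # toy "promotions table": id -> record
signups = {}     # toy "signups table": signup id -> promotion id it belongs to


def delete_promotion(promo_id):
    """Bare delete: removes only the promotion row (roughly Rails' `delete`)."""
    promotions.pop(promo_id, None)


def destroy_promotion(promo_id):
    """Cascading destroy: also removes dependent signups (roughly `destroy`
    with `dependent: :destroy` configured)."""
    for signup_id, owner in list(signups.items()):
        if owner == promo_id:
            del signups[signup_id]
    promotions.pop(promo_id, None)


promotions[1] = {"id": 1, "name": "launch promo"}
signups[10] = 1  # signup 10 belongs to promotion 1

delete_promotion(1)          # the wrong call for this data model...
assert 1 not in promotions
assert signups[10] == 1      # ...signup 10 now points at a promotion that no
                             # longer exists, i.e. orphaned/inconsistent data
```

Calling destroy_promotion(1) instead would remove the dependent signups along with the promotion, which is the kind of distinction the tech talk above addressed in Rails itself.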
We recognize that these mistakes don't make us bad engineers; they're lessons that make all of us better and help others avoid making the same mistakes.
Cloud-based observability platform
San Francisco, Portland, Seattle, Phoenix, Denver, Dallas, or Los Angeles
Our Reliability Team runs one piece of New Relic’s infrastructure that is incredibly on point: Do not Repeat Incidents (DRI). The retros for DRIs are completely blame-free, and when managers write up summaries, we won’t even include names. There’s a lot of acknowledgment that anyone can make a mistake, and we never focus on who did it. Instead, we try to identify what’s missing in our process or infrastructure and what we need to solve in order to close those gaps.
Specifically on our Data OS team, there is a tremendous amount of support in reviewing PRs and improving our code. There’s actually quite a bit of silliness and joy! Feedback is never given from a punitive place; it always comes from a place of wanting to support you and your career.
Seamlessly create, send, and track video emails
Colorado Springs, Denver, or Remote in CO, NY, PA, WI
Failure is inevitable and it can easily be intimidating. We believe it’s important to intentionally work through that fear by experiencing failure, talking about it, and then reinforcing practices that can limit failure’s impact, or, even better, leverage its teachings. Servers crash, ideas don’t pan out, timelines slip; the trick is to anticipate, manage, and mitigate risk to a degree that enables you to thrive without being frozen.
Monograph is a horizontal organization. We’re very open about how all of this is new to us, and we’re all figuring it out together as we go. It’s part of the process. There is no blame whatsoever, and with how much we communicate with one another, it’s never one person’s fault. We’ve actually had to rebuild Monograph twice now, and each time was a decision we made together. We hope you’ll experiment and work openly with us on both Monograph and whatever side project(s) you have.
Automated financial management to save, plan, and invest all in one place
Palo Alto, Boston, or Remote (US)
We believe it’s crucial to learn from incidents and take actions to prevent them from recurring, which is why we’ve formalized this process with blameless post mortems. Our primary goal in conducting a post mortem is to ensure the incident is documented, that all contributing root causes are well understood, and that effective preventative actions are put in place to reduce the likelihood and impact of a recurrence.
For a post mortem to truly be blameless, we make sure it focuses on identifying an incident’s contributing causes without placing blame on an individual or team for bad or inappropriate behavior. At Wealthfront, a blamelessly-written post mortem assumes everyone involved in an incident had good intentions and did the right thing with the information they had. This means engineers whose actions contributed to an incident can give a detailed account of what happened without fear of punishment or retribution.
Our industry is incredibly complicated and opaque. To break through, we need to innovate and make mistakes.
We want developers who bring an inherent curiosity to everything they do and a desire to try new things. We value enthusiasm and smart engineering more than expertise in a particular stack. Some of our most talented engineers have mastered programming languages while on the job.
Following major projects, we conduct thorough post-mortems where everyone is encouraged to contribute. You’ll never hear blame in these meetings. We take shared responsibility for mistakes and a collaborative approach to fixing them. If you try something and fail, we’ll share learnings together and choose a different strategy next time.