Our success rests on our ability to keep innovation alive at every level of the company. We know that the fastest way to suppress innovation is to make it unsafe to fail, which is why we’ve worked hard to ensure that everyone at Mode is able to take risks. We’ll try something out even if we’re not 100% sure it will succeed because we know that as long as we did the best we could to de-risk it, no one will shame us if it doesn’t work out. In everything we do, we have a “Shared Responsibility, Shared Risk” mentality.
Making it safe to fail requires a blameless post-mortem process, and for that, we have After Action Reviews (AARs). During an AAR, we start with the assumption that everyone did the best they could with the information they had at the time. Participants focus on how to understand and improve the systems and processes that led to the failure, rather than who did what wrong. We use these meetings as opportunities to reinforce that Mode is a safe place to fail, build stronger safeguards against future failures, and bring the team closer together.
While we strive to write high-quality code, we recognize that mistakes and edge cases that cause errors are inevitable. We have a bias for speed over perfection; the sooner we release a feature, the faster we can gain more data and iterate to provide our users with the best experience possible. We’re not afraid to break production (it happens!). In fact, it’s only by doing so that we can learn how to build better safeguards against future failures. Ultimately, this leads to a cluster immune system, which helps us deploy even faster and more securely (see Continuous Delivery below).
If something goes wrong, we never place the blame on any one person. Rather, we’re happy to jump in, help fix the problem at hand, and learn from it during blameless retros. For instance, we once misnamed an environment variable and accidentally ran a stub client in production. To prevent this class of mistake from recurring, we added a runtime check to stop a server with this issue from ever serving real traffic. While this is a small example, by repeating the blameless retro process over 90+ times, we've built an environment where it's actually safe to make mistakes because there's a robust safety net.
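A runtime check like the one described above can be a few lines that run at startup. The sketch below is illustrative only: the variable names (`NODE_ENV`, `API_CLIENT_MODE`) and the function are hypothetical, not the team's actual configuration.

```typescript
// Hypothetical startup guard: refuse to boot if a stub client is
// configured while the server believes it is in production.
function assertRealClientInProduction(
  env: Record<string, string | undefined>
): void {
  const isProduction = env["NODE_ENV"] === "production";
  const usingStub = env["API_CLIENT_MODE"] === "stub";
  if (isProduction && usingStub) {
    // Fail fast at startup rather than serve fake data to real users.
    throw new Error(
      'Refusing to start: stub client configured in production ' +
        '(API_CLIENT_MODE="stub"). Check your environment variables.'
    );
  }
}
```

Called once before the server binds its port, a guard like this turns a silent misconfiguration into a loud, immediate failure.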
The Zapier team is highly collaborative. Engineers work closely with product and design to come up with a product plan, then work together as an eng team to spec it out and build prototypes. We encourage people to speak up and share their ideas, and recognize that we’ll all make mistakes along the way. Any time something goes wrong, we all jump in to help solve the problem and never place the blame on any one individual. In fact, one of our senior engineers, Grant, openly shared his experience: “A change I made brought down Zapier for about 30 minutes. 🤦‍♂️ Someone opened up an incident and I jumped right on it to get a temporary fix in place. At no point did anyone blame me for what happened. We all stayed focused on arriving at a solution. Everyone brought empathy and patience to the incident, which only helped us get to a resolution more quickly and reflect [blamelessly] during the post-mortem. It made me really happy to work at Zapier.”
In order to innovate and move quickly, Sibi is willing to take risks. Knowing it takes a few misses to land a winning swing, we are more than happy to hop into the metaphorical boxing ring. To us, the best way to learn is by recognizing that no mistake is the fault of any one person. Rather, we view each misstep as a learning opportunity to make ourselves better. For example, when a team member recently broke production, we gathered together, shared a few cupcakes, and made the experience fun while introducing an opportunity to learn. In general, there are very few safeguards because we celebrate learning from our mistakes. While a pull request is required to push to master, we view it as more of a communication tool. It’s primarily a way to radiate community and intent, not to judge whether you’re writing flawless code.
We’ve learned the most from our failures. We’ve had companies, products, and projects all fail and they’ve been some of our most impactful learning experiences. Making mistakes has allowed us to become more thoughtful and intentional about our decisions, including carefully thinking about roles before we post them. We’ve found when a project fails, it’s not any one person’s fault, but rather mismanaged expectations or a misalignment/misappropriation of resources. Only in an environment where you fail enough to learn – but not so much that you feel ineffective – will you be successful.
We want Indent to be a warm place, where you can feel safe, do your best work, and feel good about what you do. This means being able to take risks and learning how you can iterate and improve moving forward.
Trade-offs: We lean toward failing early and often to learn what we need to make the next iteration successful, rather than risk going too deep on a shaky foundation that can lead to a significant opportunity cost. Certain problems require a correct answer on the first try, so we try to distinguish whether a decision is a “one-way door” or one we can approach iteratively.
Startups are inherently risky ventures, and if we don’t take enough risks we won’t succeed. In order to support a culture of risk taking, we strive to create a safe environment to fail. For example, when we ship bugs or accidentally cause outages, we use those as opportunities to improve our process as a team rather than focus on the mistakes of any one person.
Postmortems are standard and we always use blameless language. Creating a place where people feel safe to take risks, make mistakes, and learn along the way is what allows us to iterate quickly and build the best product. In fact, one of our engineers actually set a personal goal “to make at least N mistakes this year,” which only underscores how comfortable team members are to take risks, reflect, and learn. At the end of the day, we take our work seriously, but we’re lighthearted folks.
If we're not failing at all, we’re not taking enough risks or pushing ourselves beyond our comfort zones. Being eventually correct is much better than being consistently incorrect, and we focus on getting to the right answer through reflection and discussion, instead of trying to be right the first time. We know we’ll make mistakes along the way, but we believe each failure should be examined dispassionately, objectively, and without blame. Instead, we treat each misstep as an opportunity to improve. An environment of mutual trust and respect empowers us to debate ideas effectively and create the best outcomes for our customers and our team.
Listen to our VP of Engineering, Uma Chingunde, share her strategies on how to cultivate a blameless culture here.
In order to innovate quickly, we believe in taking measured risks. Not everything will go flawlessly, and that’s okay. Since deployments happen on a daily basis, the longest that anybody is really dealing with the aftermath of a potentially dangerous line of code is 24 hours. Team members are always willing to jump in and help if something doesn’t go as planned and we never place blame on any one person.
When failures do happen, we’re able to speak honestly about what went wrong, and what we can do better next time in our bi-weekly engineering retrospectives. That said, we also have certain guardrails in place, including a solid review and QA process to ensure we’re shipping high-quality code.
It’s completely okay to get it wrong. In fact, we have a strong bias toward taking risks and making mistakes versus playing it safe. As a result, one of our core company values is, “We are an elite force. It’s always we, never me.” We are all open about making mistakes across all levels, including our CEO. Our most senior developers admit to hacks and come forward about being wrong when they are. We’re honest in our PRs and the feedback we give, and also recognize risks and knowingly accept them. We have dedicated time to build test harnesses to catch bugs and alert us (e.g. alarms, emails) because we encourage speed over perfection. It makes us more comfortable to take risks, too, because we are reassured that mission-critical regressions will be caught early.
To us, feeling safe to fail goes hand in hand with open feedback. We frequently reference the radical candor graph when we are giving or asking for feedback, and make sure everyone has context around what they’re building. We document our in-person meetings, whether they be dev team retrospectives or standups, and share them in our shared Slack office. We also have a Facebook Portal in both of our offices, which is on at all times. It provides an additional wormhole from one office to another. And despite our time differences, we schedule weekly All Hands meetings and weekly #devs meetings when our timezones overlap so that everyone can attend. Ideas are shared, concerns are voiced, and we happily address successes and failures from the week.
We ship to production often, sometimes multiple times a day. Quickly iterating helps us learn, and if nothing ever breaks, we’re under-indexing on speed. However, we always view missteps as learning opportunities and never place the blame on any one person. For instance, a production outage and a recent time zone bug only motivated us to analyze and improve our systems.
Different features require different levels of certainty around success – both technically and from a product perspective. Ascertaining these levels and executing appropriately is critical to the success of the business, and if we avoid all failure on a micro level, we believe we’ll fail on a macro level. At the end of the day, while perfect code doesn’t exist, we each take complete responsibility in producing excellence. We review code thoroughly, write tests and design architecture collaboratively, and practice blameless postmortems so we can learn from failures and celebrate wins together.
Across the board, we work hard to ensure we create a safe space to try new things. Even if we’re not 100% sure something will succeed, we take calculated risks and learn from our mistakes along the way. While we don’t have structured postmortems at this point, we do have regular roundtable discussions to reflect on takeaways. For example, we recently launched our first version of a new platform to users and then gathered as a frontend team to take stock of what went well and what went less well. Then, as a broader software team, we compiled a list of things we intentionally put off from the previous quarter in order to get the release out and prioritize our work.
We’re all in the same boat, working toward a common goal, so there’s never any blame placed on an individual. Instead, we strive to create a culture where it’s safe to call out if you didn’t do something well, think there could be an improvement, or simply need to ask for help. Ultimately, both our successes and failures are shared.
Yes, some of the stuff we do will fail, but we believe if we're not failing sometimes, we're not being bold or risky enough. As our CEO, Chris Hyams, says, “The greatest soil for personal growth is made of the things that you mess up on badly, but are willing to look at honestly.”
This is a key principle of Indeed Incubator, an internal program that gives Indeedians an opportunity to pitch their ideas and get funded like a venture-capital investment, in various amounts from Angel to Series C. We provide guardrails, not speed bumps, to solve real-world problems facing job seekers. This gives us the freedom to test ideas. Even if they fail, we take what we learned and apply it to another project.
Several key products have come from, or are currently being tested by, Indeed Incubator.
Indeed Incubator partners with Indeed University, which launched in 2015 with the mission to empower new hires out of college to prototype ideas, some of which have become Indeed products.
Effective error handling is important both for the developer and the user. We try to make sure both internal and external error messages make sense from a user’s point of view (human-friendly output please!). And in the case of internal errors, they should always be actionable, communicating what was expected versus what was received. To track runtime errors we use Rollbar, which fires Slack notifications once an error exceeds a pre-set threshold. You can’t fix what you can’t see, so it’s crucial to have visibility into the exceptions that are occurring in order to maintain a great user experience.
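An “actionable” internal error in the sense described above simply states what was expected and what was received. This is a minimal sketch; the `ValidationError` class and `parseThreshold` helper are illustrative names, not part of Rollbar or any team's actual code.

```typescript
// Hypothetical error type whose message always answers two questions:
// what did we expect, and what did we actually get?
class ValidationError extends Error {
  constructor(
    public readonly field: string,
    public readonly expected: string,
    public readonly received: unknown
  ) {
    super(
      `Invalid "${field}": expected ${expected}, ` +
        `received ${JSON.stringify(received)}`
    );
    this.name = "ValidationError";
  }
}

// Example use: parsing a numeric alert threshold from untyped input.
function parseThreshold(raw: unknown): number {
  const n = Number(raw);
  if (!Number.isFinite(n) || n <= 0) {
    throw new ValidationError("threshold", "a positive number", raw);
  }
  return n;
}
```

An error built this way is immediately debuggable from a Slack notification alone, without reproducing the failure locally.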
We’ve also increasingly leaned into using tests to make sure our front ends pass muster; we use React Testing Library for testing individual React components and Cypress for testing application flows (and legacy non-React code). We also leverage compilers like TypeScript to reduce both runtime errors and the number of unit tests we need to write.
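One way TypeScript reduces both runtime errors and test count, as mentioned above, is through discriminated unions: the compiler forces every state to be handled, so an unhandled case becomes a compile-time error instead of a runtime one. The `FetchState` type below is an illustrative example, not the team's actual code.

```typescript
// Illustrative discriminated union for an async UI state. Impossible
// combinations (e.g. "loading" with data attached) cannot be expressed.
type FetchState =
  | { status: "loading" }
  | { status: "success"; data: string[] }
  | { status: "error"; message: string };

function render(state: FetchState): string {
  switch (state.status) {
    case "loading":
      return "Loading...";
    case "success":
      // The compiler knows `data` exists only in this branch.
      return `Loaded ${state.data.length} items`;
    case "error":
      return `Error: ${state.message}`;
    // No default needed: if a new status is added to FetchState and not
    // handled here, the function no longer compiles.
  }
}
```

The unit tests such a type replaces are the ones that would otherwise guard against, say, reading `data` while still loading.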
Two of our company values are “always learning” and “responsible.” We maintain a strong desire to know more in order to better ourselves and Course Hero, never letting a fear of failure get in the way of learning and progress. We each take smart, strategic risks and are accountable for our successes and failures. We take pride in making progress and enabling those around us to do the same. While mistakes are inevitable, we view them as learning opportunities. When something goes wrong it may sting in the moment, but it ultimately yields valuable insight to improve our customer’s experience and save us from future stress.
Our industry is incredibly complicated and opaque. To break through, we need to innovate and make mistakes.
We want developers who bring an inherent curiosity to everything they do and a desire to try new things. We value enthusiasm and smart engineering more than expertise in a particular stack. Some of our most talented engineers have mastered programming languages while on the job.
Following major projects, we conduct thorough post-mortems where everyone is encouraged to contribute. You’ll never hear blame in these meetings. We take shared responsibility for mistakes and a collaborative approach to fixing them. If you try something and fail, we’ll share learnings together and choose a different strategy next time.
It’s a horizontal organization at Monograph. We’re very open about how no technology or process is sacred and we’re all figuring it out together as we go. Operating under the guiding mantra of “blame systems, not people,” we reinforce the blameless culture in our feedback and communication. For instance, we’ve actually had to rebuild Monograph two times now because of early architecture choices, and each time was a decision we made together. We hope you’ll experiment and work openly with us on both Monograph and whatever side project(s) you have.