Making the right decisions to keep our code modular, well-tested, and reusable enables it to adapt to future needs. We emphasize quality code because it makes everyone’s life easier: engineers have an easier time reusing code, and customers get a bug-free product.
We periodically review our code organization and data structures, and devote time to DevOps and refactoring projects. We don’t like getting bogged down by technical debt. Is this class a service or a repository? Should this method be in a trait or in a parent class? You’ll have team members who care about answering those questions.
We’re currently at 96% unit test coverage of our entire codebase. Testing is fundamental to our culture, and engineers are expected to not only write tests for all their code, but also manually test each feature they develop. We run lean when it comes to manual QA testers, as this isn’t scalable.
We call this workflow “pragmatic craftsmanship”: knowing when to invest versus when to ship. As a result, the engineering quality of each part of the codebase is proportional to its impact. We also believe that product quality is a key differentiator for Asana, so we are willing to invest more to build the best experience.
Beyond our commitment to pragmatic craftsmanship, we heavily invest in new hire onboarding. This process teaches developers the best patterns and practices, and enables them to become valuable members of the team faster.
We keep a high bar for testing on our team. All new code is expected to be unit tested, we share an extensive focus on documentation, and code reviews are never rushed. Lonnon Foster explains that the very first step in a thorough code review is to read the tests that back it: “Not only does this ensure there are tests, it gives a good overview of how the new code is intended to be used. Good tests serve as documentation for how the code is supposed to be used.” You can read more about our code review standards and how they contribute to our high-quality codebase.
One such practice is that all code goes through code review. We have a culture of measuring everything, with granular metrics to monitor the health of our system. We also make sure that teams have clearly defined areas of ownership. This ensures that multiple people are familiar with each part of the code, and that there is always someone who can review your change to a specific area. We scope ownership so that no part of the code goes unowned.
We refactor frequently whenever we encounter technical debt, and like to be on the bleeding edge of technologies and coding practices. It makes our engineers happier and allows us to ship faster!
Our stack consists of Koa.js, MongoDB, and Angular – because they were the hottest things a few years ago. These days, we’re playing with React, React Native, Flow.js, and Postgres – because Postgres is cool again :)
That said, we don’t just follow trends; we adopt new technologies only if they can significantly improve the quality of our codebase. So while we use a microservice architecture, we don’t overdo it: we use our best judgement and apply the 80/20 rule to balance perfect with scrappy. Fun fact: our core algorithm is written in Common Lisp!
All code gets peer-reviewed before it goes into production. We strive for 99% test coverage on all the code we ship. This makes refactoring easier too!
Choosing how to tune the knob between expediency and elegance is one of the toughest (and most fun!) day-to-day decisions engineers face at a fast-paced early startup. However, we believe this tradeoff is best made by compromising on the requirements of a problem or the generality of a solution, not on the quality of the system.
We have a high bar for robustness and have no problem prioritizing and working on highly technical tasks, or tasks that non-engineers would have difficulty understanding. We are constantly challenging each other to write better code, improve our processes, and generally improve ourselves and the codebase. We are intentional about what we build and spend lots of time up front designing solutions and choosing abstractions before we dive into a project. As a result, we have a clean codebase, have very healthy coding practices, and spend very little time fighting technical fires.
We use planning and estimation to help clients understand what feature set fits within their budget, not what level of quality they can afford. We closely collaborate on user stories and make sure we have as much information as possible about current and upcoming work. This helps us make the right design decisions that balance immediate needs with long-term maintenance.
When we program, we prioritize quality in a number of ways. We make extensive use of pair programming, code review, and test-driven development to create an environment where design is collaborative and the codebase is collectively owned. Since we have many long-term projects with clients, we spend a lot of time working on code we wrote months or even years ago. This helps us have a long-term outlook when writing features or fixing bugs instead of rushing to ship as much as possible every week.
Our technical screen is a short coding exercise, and we emphasize to our candidates that completion is not a criterion for success. Instead, we look for insight into a developer’s process: Do they ask questions if something’s ambiguous? Do they use tests? Do they document assumptions? What is their design process?
When Honor first started, we had 10 engineers before we were even funded. Our engineering team was made up of the best people we knew from our previous companies, which also meant we brought along a lot of the best practices we learned from our previous experiences. We have to ship high quality code because people’s lives are on the line. There’s certainly a balance with staying nimble, but we understand that everything we do has profound effects on operations (and the bigger we get, the more profound that gets). That is why we implement good practices, code reviews, team discussions, and have deep conversations about architecture and tradeoffs. There’s a lot of communication around good code.
We also highly prioritize ownership. Ownership is everything. If something breaks, it’s a team effort to fix it so it’s important that you do your part in raising the flag. In a way, this also keeps things at a high quality because no one is thinking that they’ll be punished for shipping a bug.
Not only is great code a joy to work with, it’s the only way to move quickly over the long run. We take pride in our work, do thorough code reviews, and leave time every week to work on technical debt. Within the engineering team we review everything that goes into production, both to improve the quality of our code and to share knowledge between team members. We automate style checking so that code reviews can focus more on architecture and maintainability.
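As a minimal sketch of what automated style checking can look like (assuming an ESLint setup; these specific rules are illustrative, not anyone’s actual configuration), a shared config lets CI flag formatting issues before a human ever looks at the diff:

```javascript
// eslint.config.js — illustrative flat config; the rule choices are assumptions.
module.exports = [
  {
    files: ['src/**/*.js'],
    rules: {
      semi: ['error', 'always'],          // require semicolons consistently
      'no-unused-vars': 'error',          // catch dead code before review
      'max-len': ['warn', { code: 100 }], // keep lines reviewable
    },
  },
];
```

With a setup like this, running `npx eslint src/` in CI rejects style violations automatically, so reviewers can spend their attention on design rather than formatting nits.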
Our coding exercise during the interview process is primarily about assessing an engineer’s craft, or level of care that they put into their code. We prefer simple, clean, well-structured code over flash or speed. That said, we still value moving quickly, so we put time parameters on the exercise to give us a relative sense of what can be produced in a given amount of time. We hope to learn as much about you and your code quality during your interview as you do about us and our code quality.
As our customers often require similar functionality, we help them save money by reusing code from our existing codebase, which we call the Atlas Framework. The code in our shared codebase is well written, tested, and maintained because many projects use it (any bugs are found and addressed quite quickly). When a customer comes to us asking for new functionality that we don’t already have in our shared codebase, we decide whether to separate this new functionality out into something reusable, or to bake it into their application.
To help ensure that our code remains high quality, we always have at least two developers work on each project, so no one works in isolation. If a developer sees an issue with code that has been written, we check who wrote it and give them feedback to help them write better code in the future. When we work on a software project we also include extensive software testing by our dedicated team of International Software Testing Qualifications Board (ISTQB) certified testers, following our comprehensive software test plan documents.
We don’t have QA and instead rely on comprehensive testing, including unit and integration tests. Each team supports their own applications in production, so there is a high incentive to ship a stable, quality product. We use software linters to enforce a shared style and, for the most part, our software development manifesto has held true over the years. Our core platform is a collection of highly available microservices built with Node.js, ReactJS and ES2017, backed by MongoDB, RabbitMQ and Redis. In the future we plan to expand into new domains including iOS application development and software that runs on embedded devices in our hub (IoT).