Developer-friendly APIs to automate trusted decisions about every business
New York, NY or Remote
Enigma’s purpose is to move the world forward by engineering deeper understanding through data, so data is at the core of everything we do.
Having a passion for solving complex problems with data is an absolute must, regardless of whether you’re working directly on a data infrastructure squad, partnering with a data scientist on a product squad, or building the frontend that powers a best-in-class user experience.
At the core of our offering is a promise to focus on the data and provide transparency into our data science so customers can focus on getting the intelligence they need in order to make better decisions. Whether we’re working with a large financial institution to help them verify small businesses for loans or screen customers and transactions for sanctioned entities, we’re creating a simple, seamless way for our users to access actionable intelligence from hundreds of authoritative data sources.
Two of our core values focus on outcomes, not activities. We use data to ensure we’re always confident in our decision making. Few features make it out to users without first running in an experiment, and we have a simple but effective experiment framework for doing so. Periscope provides a way to share these metrics with both technical and non-technical team members. Engineers who build features consider how the data is collected and how it is surfaced, and work with other stakeholders to determine how to measure a feature’s success. Beyond quantitative data, our design and customer success teams have feedback loops for collecting qualitative responses to new features to surface product challenges.
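A lightweight experiment framework like the one described usually boils down to two pieces: deterministic variant assignment and a metric comparison. A minimal, hypothetical sketch (this is not Enigma’s actual framework; all names are illustrative):

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into a variant by hashing."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def relative_lift(control_rate: float, treatment_rate: float) -> float:
    """Relative change of the treatment metric versus control."""
    return (treatment_rate - control_rate) / control_rate

# A given user always lands in the same bucket for a given experiment,
# so exposure logs and success metrics line up across services.
variant = assign_variant("user-42", "new-onboarding-flow")
```

Hashing on `experiment:user_id` (rather than `user_id` alone) keeps bucket assignments independent across experiments.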
Continuous integration and delivery platform
Distributed across the US, Canada, Ireland, UK, Germany, Japan
At CircleCI, we are in a unique position to be able to review large amounts of data on how technology delivery teams act in the wild. Our cloud continuous integration and continuous delivery platform processes over 1.6 million job runs per day for more than 40,000 organizations and over 150,000 projects. To that end, we’re constantly looking at data to inform best practices and make improvements.
Every engineering team has a product manager and data scientist/analyst. Engineers work closely with data scientists to determine what data we should be capturing, how easily we can measure the things we’re interested in, and how to measure the impact our work has on user behavior and usage patterns.
We track OKRs at the company, department, and team level. For example, at the team level, we pay attention to delivery data (e.g., velocity, cycle time, and PR review time), SLOs and SLIs, and escalation data. For production incidents, we track mean time to diagnosis and resolution.
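Incident metrics like these fall directly out of timestamped incident records. A minimal sketch of computing mean time to diagnosis and resolution (field names are hypothetical, not CircleCI’s schema):

```python
from datetime import datetime, timedelta

def mean_delta(incidents, start_key: str, end_key: str) -> timedelta:
    """Average interval between two timestamps across a list of incidents."""
    deltas = [i[end_key] - i[start_key] for i in incidents]
    return sum(deltas, timedelta()) / len(deltas)

incidents = [
    {"opened": datetime(2023, 5, 1, 9, 0),
     "diagnosed": datetime(2023, 5, 1, 9, 40),
     "resolved": datetime(2023, 5, 1, 11, 0)},
    {"opened": datetime(2023, 5, 2, 14, 0),
     "diagnosed": datetime(2023, 5, 2, 14, 20),
     "resolved": datetime(2023, 5, 2, 15, 0)},
]

mttd = mean_delta(incidents, "opened", "diagnosed")  # mean time to diagnosis
mttr = mean_delta(incidents, "opened", "resolved")   # mean time to resolution
```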
We also look at market data, usage data for our product (like adoption rates), and growth metrics using A/B testing. Since insights are part of our product, we’re starting to build out a data intelligence/AI Ops team. We build our analysis dashboards with tools such as Looker, DataDog, Amplitude, Segment, and Honeycomb.
We track the percent of time teams spend on different kinds of work: new features, customer support, and technical investment/maintenance. When we see teams with a consistently high percentage of support and maintenance work, we step in to rebalance it.
19 Open Positions
Cloud-based observability platform
San Francisco, Portland, Seattle, Phoenix, Denver, Dallas, or Los Angeles
There are a lot of companies that drive initiatives based on executives’ feelings, but they’re not necessarily grounded in actionable data. At New Relic, simply feeling that something is wrong won’t cut it for us – we aim to make decisions rooted in data. We pride ourselves on collecting data on everything from sprint velocity to data quality in order to make decisions regarding process and system improvements. DOS strives for monitoring-driven development, where monitoring is the first step of a project instead of the last. This is key in establishing a baseline for the current state and for demonstrating the utility of new features and system optimizations. That being said, we try to strike the right balance between making decisions based on math/data and remaining empathetic.
1 Open Position
Financial and software products for a new generation of business owners
San Francisco, CA or Remote (US)
One of our biggest challenges in building a financial services company for small businesses is understanding credit and risk from our cardholders, and from the general economy as a whole. At the heart of this challenge is real-time aggregation, analysis, and automation of data from a variety of sources – our own cardholder ecosystem, the fundamentals of our cardholders’ businesses, and macroeconomic trends, to name a few. This translates into internal engineering (e.g., instrumentation, API integration, and batch/real-time processing systems) and data science (e.g., analytics, expert systems, machine learning) products that allow us to do our jobs faster, better, and more accurately.
Unlike in many B2C companies where you don’t know exactly what consumers are thinking, our customers are talking to us all the time. When they’re giving us the same feedback and pointing out the same things they’d like to see change, it’s obvious what we need to fix. We take customer data seriously when it comes to product development, because we are creating a tool that people interface with directly. By nature, a lot of those interactions are manual, so we aim to make them as accessible and efficient as possible. To do that, our customer operations team tracks their time. For example, we gather data on how much time is spent on emails or on personalized payroll plans. We share the time-tracking results in our monthly all-hands meeting, which helps us make sure we’re prioritizing the major feature developments that make those workflows as efficient as possible.
We use Chartio and Redshift extensively to analyze and test all aspects of our product. Our product is designed to be passive in that customers can sign up for Digit and then go on about their lives. Digit will then work in the background to ensure they are financially healthy. Given this passive nature, we rely on data (both qualitative and quantitative) to guide product roadmaps and growth experiments. We currently have 35 features and experiments being tested in production.
As an engineer at Digit, you’ll be responsible for working with our data analysts and user researchers to define what metrics you care about and which you are trying to improve.
We are instrumenting all of our company’s processes. In operations, we collect data on customer activity, LTV, threats, and engagement. For engineering, we capture data about performance, failure rates, response times/latency, build times, service throughput, and developer efficiency/happiness/engagement. And that’s just the tip of the iceberg of the data we collect.
Data informs every decision we make about our product, too. At a very high level, we collect data about the threats our customers see. This data dictates which threat behaviors we model (for machine learning) first. We target the threats that are seen most often by the largest number of customers. In addition to frequency, we track severity of each threat and then model the most severe threats to our customers. Engineers at ActZero always stay close to the data as our team is structured so that engineers and data scientists work in partnership.
1 Open Position
Our core philosophy is to use data to help us learn where to evolve our product. We use Amplitude to instrument everything we do (over 1 trillion data points per month), and all of our teams focus on understanding user behavior and using it to influence our roadmaps. In short, we use our own product to build a better product. We rely on Amplitude to research what we should be working on, identify which customers to reach out to, validate hypotheses, and determine leading and lagging indicators as we build new features. When we find shortcomings in our own system, we invest in closing them ourselves rather than reaching for other tools. When we needed a scalable data-ingestion platform, we built it in-house.
Sometimes, what we build for our internal use ends up getting rolled out. For example, Period-over-Period Analysis was a feature we built for ourselves to compare this week’s data to last week’s and give us a look at percentage change over time. We eventually rolled this out as a free feature for all of our customers to use and it continues to be one of the most popular.
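The arithmetic behind a period-over-period comparison is simple; a minimal sketch of the underlying calculation (illustrative only, not Amplitude’s implementation):

```python
def period_over_period(current: float, previous: float) -> float:
    """Percentage change from the previous period to the current one."""
    if previous == 0:
        raise ValueError("previous period value must be nonzero")
    return (current - previous) / previous * 100

# e.g. 1,380 events this week against 1,200 last week is a +15% change
change = period_over_period(1380, 1200)
```

The guard against a zero previous-period value matters in practice: brand-new features often have an empty comparison window.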
30 Open Positions
By running a full-stack brokerage, we sit in a privileged position to analyze data. We collect structured information about risk from clients (e.g. property data, business metrics, etc.) and risk pricing data from insurance markets. This industry is particularly opaque and we have a rare opportunity to provide transparency and build the first ever multi-sided marketplace for risk.
There are lots of thorny technical problems to solve at Newfront, from workflow systems to data modeling to document generation. We always rely on the data we collect from brokers and clients to inform new projects and test initiatives before release.
Several engineers on our team have backgrounds in data analysis and mathematics, and we encourage the entire team to leverage their expertise when shipping products.
We are an extremely data-driven organization. Our analytics team supports dashboard creation as well as ongoing and ad-hoc analyses. As an organization, we track and report against a plethora of metrics, some of which are related to user events, feature modifications, and revenue changes. The engineering team uses data for suppression, sunsetting, monitoring, and chart creation. Features are rigorously tested using both A/B and bandit algorithms. All teams use robust dashboards for weekly and quarterly strategic planning. Our analytics team uses platforms such as Chartio, Amplitude, and Redash.
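A bandit algorithm of the kind mentioned can be as simple as epsilon-greedy: exploit the best-performing variant most of the time, explore a random one otherwise. A minimal sketch (purely illustrative, not their production setup):

```python
import random

class EpsilonGreedyBandit:
    """Serve the best-observed variant, exploring with probability epsilon."""

    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {arm: 0 for arm in arms}
        self.rewards = {arm: 0.0 for arm in arms}

    def select(self) -> str:
        if random.random() < self.epsilon:
            return random.choice(list(self.counts))  # explore
        # exploit: pick the arm with the highest observed mean reward;
        # unplayed arms score infinity so every arm gets tried at least once
        return max(self.counts,
                   key=lambda a: self.rewards[a] / self.counts[a]
                   if self.counts[a] else float("inf"))

    def update(self, arm: str, reward: float) -> None:
        """Record one observed outcome (e.g. click = 1.0, no click = 0.0)."""
        self.counts[arm] += 1
        self.rewards[arm] += reward
```

Unlike a fixed A/B split, the bandit shifts traffic toward the winning variant as evidence accumulates, which reduces the cost of showing users a losing variant during the test.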
14 Open Positions
Data informs everything we do at Stitch Fix, from which pants we should order to the images that should appear on the homepage. We believe we can use data to create a better experience for our customers. That's why we employ nearly as many data scientists as we do software engineers. Data informs so many of the decisions we make that part of life here is working together with data scientists to craft experiments and solve problems.
One example of how we leverage data is in how our data scientists have improved our demand model. Our buying decisions are much more informed than they would be if they were based on traditional retail models, and give us a better grasp on things like, “How many customers do we think we’ll have?” and “How many customers will be wearing smalls vs. extra smalls?” Where traditional retailers rely on industry standard size breaks, we leverage data to make informed decisions when buying and are able to better utilize that inventory.
14 Open Positions