Data Engineer at Clutter
Los Angeles, CA, US

Clutter is an on-demand technology company based in Los Angeles that is disrupting the $50B/year self-storage and moving industries. We’ve built an end-to-end logistics and supply chain platform that enables us to offer consumers a much more convenient solution at price parity with the incumbents. We’ve raised $300M from a number of VCs, including SoftBank, Sequoia, Atomico and GV (formerly Google Ventures). We have 500+ team members and tens of thousands of customers in 7 major markets across the US, with plans to be in 50+ markets, domestically and internationally, within the next 5 years!

At Clutter, we're fortunate to be providing a consumer value proposition that people love and one that makes economic sense - a true product/market fit that few startups ever find. To deliver on our promise to consumers, team members and investors, we're focused on hiring, training and retaining exceptional individuals. This means that we have a very thorough interview process and maintain high performance expectations, but we'll always be transparent with you and respectful of your time.

The opportunity:
As a Data Engineer, your work will directly drive key product and business decisions. We are looking for a Data Engineer to build our data pipelines, reliably move data across systems, and build the tools that empower our Analysts and Data Scientists, while working closely with our software engineering team to identify and fill gaps.

As a Data Engineer, you will:

  • Write and maintain ETL jobs and build ingestion pipelines from Google AdWords and other third-party APIs
  • Use data to improve efficiency and drive growth across our business, for example by leveraging geospatial data to increase field operations efficiency and to improve storage utilization and load times in the warehouse
  • Communicate these data-driven insights in a manner that is meaningful and actionable to all stakeholders

Core Skills We Look For:

  • Experience building data models, infrastructure, and ETL/ELT pipelines for reporting, analytics, and data science
  • Ability to write complex SQL queries and a strong understanding of relational database query performance
  • Experience building and analyzing dashboards and reports
  • BS or MS degree in Computer Science or a related technical field

Pluses include any of the following:

  • Python, Ruby, or Java programming experience
  • Experience with workflow management tools (Airflow, Oozie, Azkaban, UC4)
  • Experience with a messaging system (Kafka, SQS, Kinesis) and data serialization formats (JSON, Protobuf, Avro)