Posted on: October 27, 2024
Who We Are: Perchwell is the modern data and workflow platform for real estate professionals and consumers. Built on the industry's foundational data, Perchwell's software suite empowers real estate professionals to do their best work, provide differentiated service to their clients, and grow their businesses. Backed by Lux Capital, Founders Fund, and some of the country's leading Multiple Listing Services (MLSs), Perchwell builds next-generation workflow and data products for the multi-trillion-dollar residential real estate industry. Perchwell is the first new entrant to come to market in decades and is currently scaling its best-in-class platform.
Position Overview: Perchwell's mission is to become the fastest-growing MLS workflow and data platform in the country. Data is core to what Perchwell represents, and we are looking for a Senior Data Engineer to lead our data engineering initiatives. This includes building a data lake and warehouse solution while scaling our existing data infrastructure to onboard several new MLSs in the coming months. As a Senior Data Engineer, you'll collaborate with cross-functional teams including Data Insights, Product, Design, and other engineering teams to build robust data solutions that help Perchwell become the best-in-class MLS workflow and data platform. As a foundational member of our small but growing team, you will have the opportunity to shape the standards, culture, and values of the data engineering team.
What You'll Do:
- Build ETL tooling and data pipelines, integrating data from 3rd party sources and APIs.
- Design and implement automated data governance measures to enhance data quality and observability.
- Develop team processes and a culture based on ownership and accountability.
- Collaborate with the Data Analyst team on analyses, dashboards, and quality assessments.
What You'll Need:
- 5+ years of experience in data engineering, including proficiency in Python, SQL, or Kotlin.
- Experience building scalable, fault-tolerant data pipelines that ingest data from 3rd party APIs for both batch and real-time use cases.
- Expertise with ETL schedulers such as Airflow (preferred), Dagster, Prefect, or similar frameworks.
- Familiarity with cloud architecture (preferably AWS) and technologies including S3, SQS, RDS, EMR, Glue, Athena, and Lambda.
- Experience with data warehouses such as Snowflake (preferred), Redshift, or Google BigQuery.
- Knowledge of CI/CD pipelines using GitLab, GitHub Actions, Terraform, or Jenkins.
- Familiarity with microservices architecture and cloud data lake implementations.
- Excellent written and oral communication skills, with a demonstrated ability to collaborate effectively with cross-functional teams.
In this role, you'll work out of our New York City office in SoHo, Manhattan, at least 3 days/week.
Bonus Points For:
- Certifications in AWS, Snowflake, or Elasticsearch.
- Ruby on Rails experience.
Compensation: To provide greater transparency, we share base salary ranges for all US-based job postings regardless of state. Our ranges are based on function and level, benchmarked against similar-stage growth companies. Final offer amounts are determined by multiple factors, including skills, job-related knowledge, and depth of work experience. The compensation for this position is a $160K-$190K base salary + equity + benefits.
Note: At this time, we are only considering candidates who are authorized to work in the U.S.