HireSleek

Senior Data Engineer

Website: Care Access

About Care Access

Care Access is working to make the future of health better for all. With hundreds of research locations, mobile clinics, and clinicians across the globe, we bring world-class research and health services directly into communities that often face barriers to care. We are dedicated to ensuring that every person has the opportunity to understand their health, access the care they need, and contribute to the medical breakthroughs of tomorrow. With programs like Future of Medicine, which makes advanced health screenings and research opportunities accessible to communities worldwide, and Difference Makers, which supports local leaders in expanding their community health and wellbeing efforts, we put people at the heart of medical progress. Through partnerships, technology, and perseverance, we are reimagining how clinical research and health services reach the world. Together, we are building a future of health that is better and more accessible for all.

How This Role Makes a Difference

We are seeking an experienced and detail-oriented professional to join our team as a Sr. Data Engineer. In this pivotal role, you will be responsible for designing, developing, and maintaining robust data pipelines that ensure the reliable ingestion, transformation, and delivery of complex data (demographics, medical, financial, marketing, etc.) across systems. The ideal candidate will bring deep expertise in Databricks, SQL, and modern data engineering practices, along with strong collaboration skills to help drive excellence across our data infrastructure.

How You’ll Make An Impact

Data Engineering Strategy and Architecture: Design and implement scalable, reliable, and efficient data pipelines to support clinical, operational, and business needs. Develop and maintain architecture standards, reusable frameworks, and best practices across data engineering workflows. Build automated systems for data ingestion, transformation, and orchestration leveraging cloud-native and open-source tools.

Data Infrastructure and Performance Optimization: Optimize data storage and processing in data lakes and cloud data warehouses (Azure, Databricks). Develop and monitor batch and streaming data processes to ensure data accuracy, consistency, and timeliness. Maintain documentation and lineage tracking across datasets and pipelines to support transparency and governance.

Collaboration and Stakeholder Engagement: Work cross-functionally with analysts, data scientists, software engineers, and business stakeholders to understand data requirements and deliver fit-for-purpose data solutions. Review and refine work completed by other team members, ensuring quality and performance standards are met. Provide technical mentorship to junior team members and collaborate with contractors and third-party vendors to extend engineering capacity.

Technology and Tools: Use Databricks, DBT, Azure Data Factory, and SQL to architect, build, and maintain the data pipelines and infrastructure described above.

To apply for this job please visit job-boards.greenhouse.io.