About IntusCare
IntusCare is the only end-to-end ecosystem built specifically to help Programs of All-Inclusive Care for the Elderly (PACE) deliver exceptional care, strengthen financial performance, and stay compliant. IntusCare replaces outdated technology and manual workarounds with purpose-built solutions for care coordination, risk adjustment, population health, and utilization management. We empower teams to take control of their operations and improve outcomes for dual-eligible seniors, some of the most socially vulnerable and clinically complex individuals in the US healthcare system.
This role sits on the Data Migrations team and focuses on building high‑quality, repeatable pipelines that move healthcare data from point A to point B with 100% accuracy. The engineer in this seat will work on “small data” problems where completeness and correctness matter more than scale or probabilistic approaches, directly unblocking real customer workflows and seeing fast, tangible impact from their work.
What you'll do
Design, build, and own ETL pipelines that extract, transform, validate, and load healthcare data from external sources into our platform using Python and SQL (a minimal illustrative sketch follows this list).
Implement robust data quality checks and monitoring to ensure migrations are accurate, complete, and repeatable, with particular attention to edge cases and long‑tail records.
Handle a mix of one‑off data requests and long‑lived, reusable data flows, making thoughtful tradeoffs between quick scripts and durable systems.
Collaborate with product, implementation, and support teams to diagnose and resolve customer data issues, often working from real tickets and real timelines.
Contribute to internal standards, templates, and tools that make future migrations faster and more reliable across customers.
Participate in code reviews and technical discussions with peers, bringing prior experience and good judgment rather than relying on close day‑to‑day mentorship.
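To make the pipeline responsibilities above concrete, here is a minimal extract-transform-validate-load sketch using only Python's standard library. The file name, column names, and target table are hypothetical placeholders rather than IntusCare's actual schema or tooling; a real migration would add logging, idempotency, and far richer validation.

```python
# Minimal illustrative sketch, not production code: a single-file
# extract -> transform -> validate -> load flow. "members_export.csv",
# the column names, and the "members" table are hypothetical placeholders.
import csv
import sqlite3

REQUIRED_FIELDS = ("member_id", "dob", "plan_code")  # assumed required columns


def extract(path):
    """Read the source export into memory; 'small data' fits comfortably in a list."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


def transform(rows):
    """Example normalization: trim whitespace and upper-case plan codes."""
    return [
        {**row,
         "member_id": row["member_id"].strip(),
         "plan_code": row["plan_code"].strip().upper()}
        for row in rows
    ]


def validate(rows):
    """Fail the whole run on any incomplete record instead of silently dropping it."""
    bad = [i for i, row in enumerate(rows)
           if any(not row.get(field) for field in REQUIRED_FIELDS)]
    if bad:
        raise ValueError(f"{len(bad)} record(s) failed validation, first at row index {bad[0]}")
    return rows


def load(rows, conn):
    """Load into a local SQLite sandbox standing in for the real target database."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS members "
        "(member_id TEXT PRIMARY KEY, dob TEXT, plan_code TEXT)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO members (member_id, dob, plan_code) VALUES (?, ?, ?)",
        [(r["member_id"], r["dob"], r["plan_code"]) for r in rows],
    )
    conn.commit()


if __name__ == "__main__":
    conn = sqlite3.connect("migration_sandbox.db")
    load(validate(transform(extract("members_export.csv"))), conn)
```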
What we're looking for
6–8+ years of professional experience in software engineering or data engineering, including at least two roles with 2+ years tenure each.
Prior experience in roles such as Data Engineer, Software Engineer (data pipelines), ETL Engineer, or Data Integration Engineer, with clear ownership of production pipelines rather than analytics dashboards.
Strong proficiency with Python and SQL for building and operating ETL/ELT workflows.
Experience working with healthcare data (claims, eligibility, EHR/EMR, clinical data, HL7/FHIR, or similar) is a significant plus and will help the engineer be productive quickly.
A mindset optimized for 100% correctness and completeness in regulated or high‑stakes domains, not “good enough” or purely probabilistic approaches (see the reconciliation sketch after this list).
Ability to work independently, make sound technical judgment calls, and structure ambiguous migration problems into clear, repeatable solutions.
Comfortable collaborating with non‑engineering stakeholders; prior customer‑facing or implementation experience is a plus but not required.
Nice to have: Experience with TypeScript or modern web stacks in addition to core data engineering work.
Not an MLOps / big data / large‑scale ML pipelines role; the focus is small, precise healthcare data integrations where every record matters.
Not a pure data analyst or BI/reporting position; success here is about building and maintaining pipelines, not primarily about dashboards or visualization.
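As a companion to the pipeline sketch above, and to make the “every record matters” mindset concrete, here is a hedged sketch of a post-load reconciliation check. The table name, key column, and source file are again hypothetical placeholders continuing the same sandbox example; the point is that any discrepancy between source and target fails the migration outright rather than being tolerated.

```python
# Minimal illustrative sketch, not production code: reconcile source keys against
# what actually landed in the target. Names continue the hypothetical sandbox above.
import csv
import sqlite3


def source_keys(path, key="member_id"):
    """Collect the set of primary keys present in the source export."""
    with open(path, newline="") as f:
        return {row[key].strip() for row in csv.DictReader(f)}


def loaded_keys(conn, table="members", key="member_id"):
    """Collect the set of primary keys that made it into the target table."""
    return {r[0] for r in conn.execute(f"SELECT {key} FROM {table}")}


def reconcile(path, conn):
    """Insist on exact completeness: no missing records, no unexpected ones."""
    src, dst = source_keys(path), loaded_keys(conn)
    missing, unexpected = src - dst, dst - src
    if missing or unexpected:
        raise RuntimeError(
            f"reconciliation failed: {len(missing)} missing, {len(unexpected)} unexpected"
        )
    return len(src)


if __name__ == "__main__":
    conn = sqlite3.connect("migration_sandbox.db")
    print(f"reconciled {reconcile('members_export.csv', conn)} records")
```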
Compensation: The base salary range for this role is $145k–$165k. We expect the ideal candidate to fall near the midpoint of this range, with final compensation determined by experience, skills, and organizational needs. Total compensation also includes a variable component and stock options.
Work location: This is a fully remote role based in the United States.
Sponsorship: This position is not eligible for sponsorship.