At Anchorage Digital, we are building the world’s most advanced digital asset platform for institutions to participate in crypto.
Founded in 2017, Anchorage Digital is a regulated crypto platform that provides institutions with integrated financial services and infrastructure solutions. With the first federally chartered crypto bank in the US, Anchorage Digital offers institutions an unparalleled combination of secure custody, regulatory compliance, product breadth, and client service. We’re looking to diversify our team with people who are humble, creative, and eager to learn.
We are a remote-friendly, global team, but we also offer the option of working in-office in New York City, Sioux Falls, Porto, Lisbon, and Singapore. For colleagues not located near our beautiful offices, we encourage and sponsor quarterly in-person collaboration days to work together and further deepen our Village.
Join the Data Platform team and build the Trusted Data Platform that powers Anchorage's transition to Data 3.0. You'll help shape the unified orchestration foundation, collaborate on governance-as-code patterns, and contribute to self-service frameworks that make quality and compliance automatic. We're moving from manual spreadsheets and theoretical architectures to automated control planes where every dataset is trusted, monitored, and traceable by default.
We have created the Factors of Growth & Impact to help Villagers measure impact and articulate coaching, feedback, and the rich, rewarding learning that happens while exploring, developing, and mastering capabilities and contributions within and outside the team:
Technical Skills:
- Collaborate on designing and implementing unified orchestration patterns (Dagster/Airflow) to replace legacy, fragmented scheduling
- Partner with the team to develop governance-as-code systems that automatically apply policy tags, row-level security (RLS), and access controls through an active control plane (see the sketch just below)
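For a flavor of what this looks like in practice, here is a minimal, hypothetical sketch (ours to illustrate the pattern, not Anchorage's actual code) of a Dagster asset that materializes a table and then applies a PII policy tag through the BigQuery API; every name in it (project, dataset, table, column, taxonomy) is made up:

```python
# Hedged sketch: a Dagster asset that builds a table and applies a
# PII policy tag as code. All identifiers are hypothetical.
from dagster import Definitions, asset
from google.cloud import bigquery

PII_POLICY_TAG = "projects/example-proj/locations/us/taxonomies/1/policyTags/2"

@asset
def curated_clients() -> None:
    """Build curated.clients, then tag its PII column via the control plane."""
    client = bigquery.Client()
    client.query(
        "CREATE OR REPLACE TABLE curated.clients AS SELECT * FROM raw.clients"
    ).result()

    table = client.get_table("curated.clients")
    schema = []
    for field in table.schema:
        if field.name == "email":  # in real use, driven by declarative config
            field = bigquery.SchemaField(
                field.name,
                field.field_type,
                mode=field.mode,
                policy_tags=bigquery.PolicyTagList([PII_POLICY_TAG]),
            )
        schema.append(field)
    table.schema = schema
    client.update_table(table, ["schema"])  # governance applied, not documented

defs = Definitions(assets=[curated_clients])
```

In a real control plane the tagging decision would come from declarative config rather than a hard-coded column name; the point is that governance runs inside the pipeline, not in a wiki.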
Complexity and Impact of Work:
- Help guide the technical design for platform capabilities like data contracts, automated quality gating, observability, and cost visibility (quality gating is sketched after this list)
- Support the migration of workloads from legacy patterns to the modern platform, ensuring domain teams have clear paths and golden templates
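To make "automated quality gating" concrete, here is a hedged sketch using Dagster's asset checks; the asset, the check, and the threshold are illustrative stand-ins for real contract checks:

```python
# Hedged sketch of an automated quality gate using Dagster asset checks.
# The asset, the check, and the threshold are illustrative, not prescriptive.
from dagster import AssetCheckResult, Definitions, asset, asset_check

@asset
def daily_balances() -> None:
    """Stand-in for a real materialization (dbt run, warehouse query, etc.)."""

@asset_check(asset=daily_balances)
def daily_balances_not_empty() -> AssetCheckResult:
    row_count = 42  # in practice: query the materialized table's row count
    return AssetCheckResult(
        passed=row_count > 0,
        metadata={"row_count": row_count},
    )

defs = Definitions(
    assets=[daily_balances],
    asset_checks=[daily_balances_not_empty],
)
```

In Dagster, marking a check as blocking stops downstream assets in the run from materializing when it fails, which is what turns a monitoring signal into an actual gate.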
Organizational Knowledge:
- Partner with domain teams (Asset Data, Reporting & Statements, Product teams) to understand their needs and design platform capabilities that enable their success
- Promote and support data mesh principles and dbt best practices, helping domain owners build and own their data products while the platform ensures quality
Communication and Influence:
- Promote data platform engineering best practices, developer experience, and "Data as a Product" principles across the engineering organization
- Contribute to architectural decisions and help establish engineering culture around reliability, cost efficiency, and operational excellence
You may be a fit for this role if you:
- 5-7+ years building data platforms or infrastructure: You bring experience helping design and operate modern data platforms that handle enterprise-scale workloads with quality, governance, and cost controls
- Strong dbt and SQL expertise: You're proficient with dbt and SQL, understand dbt Mesh, and have strong opinions on data modeling, testing, and documentation best practices
- Orchestration experience: You've implemented production data orchestration with Airflow, Dagster, Prefect, or similar tools, and understand the trade-offs between different orchestration patterns
- Cloud data warehouse proficiency: You have strong experience with BigQuery, Snowflake, or Redshift, including query optimization, cost management, and security configurations (see the cost-control sketch after this list)
- Platform mindset: You think in terms of golden paths, reusable abstractions, and developer experience; you build systems that let others move fast, safely
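If you've managed warehouse spend before, this kind of guardrail will look familiar. A hypothetical sketch with the google-cloud-bigquery client, where the SQL and the byte budget are purely illustrative:

```python
# Hedged sketch of BigQuery cost controls; table names and limits are made up.
from google.cloud import bigquery

client = bigquery.Client()
sql = "SELECT account_id, SUM(amount) AS total FROM ledger.entries GROUP BY account_id"

# Dry-run first: estimate bytes scanned without running (or paying for) the query.
dry = client.query(
    sql, job_config=bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
)
print(f"would scan {dry.total_bytes_processed:,} bytes")

# Then enforce a hard budget so a missing partition filter can't scan the world.
job = client.query(
    sql, job_config=bigquery.QueryJobConfig(maximum_bytes_billed=10**9)  # ~1 GB cap
)
rows = job.result()  # fails fast if the job would exceed the budget
```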
Although not requirements, bonus points if:
- Metadata and catalog experience: You've worked with Atlan, Collibra, DataHub, or similar metadata platforms and understand active governance patterns
- Data observability tools: You've implemented data quality monitoring with Great Expectations, Monte Carlo, Soda, or similar tools
- Infrastructure as code: You have experience with Terraform, Kubernetes, and modern DevOps practices for data infrastructure
- You're the kind of person who gets excited about declarative config, immutable infrastructure, and metrics dashboards showing cost-per-query trending down