Playson
🎯 What You’ll Actually Do
Architect and run high-load, production-grade data pipelines where correctness and latency matter.
Design systems that survive schema changes, reprocessing, and partial failures.
Own data availability, freshness, and trust - not just pipeline success.
Make hard calls: accuracy vs cost, speed vs consistency, rebuild vs patch.
Build guardrails so downstream consumers (Analysts, Product, Ops) aren’t broken by upstream changes.
Improve observability: monitoring, alerts, data quality checks, SLAs.
Partner closely with backend engineers, data analysts, and Product - no handoffs, shared ownership.
Debug incidents, own the root-cause analysis (RCA), and make sure the same class of failure doesn’t return.
This is a hands-on IC role with platform-level responsibility.
🧠 What You Bring
5+ years in data or backend engineering on real production systems.
Strong experience with columnar analytical databases (ClickHouse, Snowflake, BigQuery, similar).
Experience with event-driven / streaming systems (Kafka, pub/sub, CDC, etc.).
Strong SQL + at least one general-purpose language (Python, Java, or Scala).
You think in failure modes, not happy paths.
You explain why something works - and when it shouldn’t be used.
Bonus: You’ve rebuilt or fixed a data system that failed in production.
🔧 How We Work
Reliability > elegance. Correct data beats clever data.
Ownership > tickets. You run what you build.
Trade-offs > dogma. Context matters.
Direct > polite. We fix problems, not dance around them.
One team, one system. No silos.
🔥 What We Offer
Fully remote.
Unlimited vacation + paid sick leave.
Quarterly performance bonuses.
Medical insurance for you and your partner.
Learning budget (courses, conferences, certifications).
High trust, high autonomy.
Zero bureaucracy. Real engineering problems.
👉 Apply if you see data platforms as systems to be engineered - not pipelines to babysit.