Prior Labs
Prior Labs is building foundation models that understand tabular data, the backbone of science and business. Foundation models have transformed text and images, but structured data has remained largely untouched. We’re tackling a $600B opportunity to fundamentally change how organizations work with scientific, medical, financial, and business data.
Momentum: We’re the world-leading organization in structured data ML. Our TabPFN v2 model was published in Nature and set a new state-of-the-art for tabular machine learning. Since its release, we’ve scaled model capabilities more than 20x, reached 2.5M+ downloads, 5,500+ GitHub stars, and are seeing accelerating adoption across research and industry. We’re now building the next generation of tabular foundation models and actively commercializing them with global enterprises across Europe and the US.
Our team: We’re a small, highly selective team of 20+ engineers and researchers, chosen from over 5,000 applicants, with backgrounds spanning Google, Apple, Amazon, Microsoft, G-Research, Jane Street, Goldman Sachs, and CERN. We’re led by the creators of TabPFN and advised by world-leading AI researchers such as Bernhard Schölkopf and Turing Award winner Yann LeCun. Meet the team here.
What’s Next: Backed by top-tier investors and leaders from Hugging Face, DeepMind, and Silo AI, we’re scaling fast. This is the moment to join: help us shape the future of structured data AI. Read our manifesto.
You'll be among the first scientists developing an entirely new class of AI models. Our latest breakthrough, TabPFN, outperforms all existing approaches by orders of magnitude, and we're just getting started. This is a rare opportunity to:
Work on fundamental breakthroughs in AI, not just incremental improvements
Shape the future of how organizations worldwide work with their most valuable data
Join at the perfect time: we just received significant funding, have strong early traction, and are scaling rapidly
We're pushing the boundaries of what's possible with transformer architectures for structured data. Key challenges include:
Scaling our transformer architectures from 10K to 1M+ samples while maintaining performance
Building multimodal models that combine text and tabular understanding
Developing specialized architectures for time series, forecasting, and anomaly detection
Creating efficient inference methods for production deployment
Researching causal understanding in foundation models
Designing novel approaches for handling multiple related tables
PhD in Computer Science, Applied Mathematics, Statistics, Electrical Engineering, or a related field
Deep experience with ML frameworks, especially PyTorch and scikit-learn
Strong engineering fundamentals with excellent Python expertise
Experience in data science, particularly working with tabular data or time series
Publications at top-tier venues (NeurIPS, ICML, ICLR) or significant open-source contributions
Offices in Freiburg, Berlin, San Francisco, and NYC, with flexibility to work across our locations
Competitive compensation package with meaningful equity
30 days of paid vacation + public holidays
Comprehensive benefits including healthcare, transportation, and fitness
Work with state-of-the-art ML architectures, substantial compute resources, and a world-class team
We believe the best products and teams come from a wide range of perspectives, experiences, and backgrounds. That’s why we welcome applications from people of all identities and walks of life, especially anyone who’s ever felt discouraged by "not checking every box."
We’re committed to creating a safe, inclusive environment and providing equal opportunities regardless of gender, sexual orientation, origin, disabilities, or any other traits that make you who you are.