Why work at Nebius
Nebius is leading a new era in cloud computing to serve the global AI economy. We create the tools and resources our customers need to solve real-world challenges and transform industries, without massive infrastructure costs or the need to build large in-house AI/ML teams. Our employees work at the cutting edge of AI cloud infrastructure alongside some of the most experienced and innovative leaders and engineers in the field.
Where we work
Headquartered in Amsterdam and listed on Nasdaq, Nebius has a global footprint with R&D hubs across Europe, North America, and Israel. The team of over 800 employees includes more than 400 highly skilled engineers with deep expertise across hardware and software engineering, as well as an in-house AI R&D team.
The product
In a rapidly evolving world, trust in AI depends on AI agents being grounded in fresh, verified real-world data. Search is the foundation that makes this possible.
We are building an agent-native search platform designed specifically for AI systems rather than human users. Our product provides programmatic, low-latency, and observable search APIs that AI agents use to retrieve, filter, and reason over real-world information at scale.
The role
We are looking for a Senior Software Engineer to work on the core backend systems of a novel search engine tailored for agentic AI consumption.
In this role, you will work across the full lifecycle of search data and requests — from acquiring and processing documents, through indexing and retrieval, to serving search results to agents at runtime. You will help shape both offline pipelines and online services, contributing wherever the system needs to evolve most.
You will focus on building systems that are correct, debuggable, scalable, and predictable under production load, even as APIs, data formats, and system boundaries continue to evolve.
In this position, your responsibilities will be to:
- Design, implement, and operate core backend components of the search system, spanning request-time services and background data pipelines
- Contribute to document ingestion, crawling, and preprocessing workflows, adapting strategies based on source, domain, and freshness requirements
- Build and evolve indexing and retrieval systems, including data formats, update strategies, and access patterns
- Implement and improve search request flows, including query processing, retrieval orchestration, and response assembly under strict latency budgets
- Build well-tested services and pipelines with clear responsibilities and interaction contracts, while remaining flexible as the system evolves
- Define and implement observability primitives, including structured logs, metrics, traces, and quality signals for both online and offline components
- Support experimentation and iteration by enabling feature flags, controlled rollouts, and online experiments
- Track throughput, latency, and resource usage across the system, and improve performance or cost efficiency when business needs require it
- Collaborate closely with ML engineers to integrate semantic retrieval and ranking models, while keeping ML logic decoupled from core system internals
- Work with data analysts and product managers to translate product and quality goals into concrete backend behavior and measurable metrics
You may be a good fit if you:
- Have 5+ years of experience as a software engineer working on production backend systems
- Have hands-on experience with Go in real-world services (experience with other systems languages like C++ or Rust is a plus)
- Have built concurrent, high-load systems, and are comfortable reasoning about throughput, latency, and failure modes
- Are familiar with distributed systems fundamentals, including fault tolerance, load balancing, and horizontal scalability
- Have operated your own code in production: deployed it, debugged incidents, rolled back changes when necessary, and understand what it means to interact directly with production systems — and when not to
- Tend to think systematically by default, but can make pragmatic tradeoffs and cut corners intentionally when time or scope requires it
- Are comfortable working across boundaries, reasoning about systems end-to-end rather than staying within a narrow component
- Collaborate effectively in cross-functional teams, communicating clearly with engineers, ML practitioners, analysts, and product managers
- Are curious about modern developer tooling and have used AI-assisted tools (coding agents, ChatGPT, etc.) in your workflow
Strong candidates may also have experience with:
- Backend systems for search, recommendation, or other ranking-heavy products (e.g. search engines, ads platforms, content feeds, e-commerce)
- Building or operating data pipelines, ingestion systems, or indexing workflows alongside online services
- Practical exposure to ML-backed systems, including classical ML pipelines or LLM-based services
- AI agent architectures, tool-calling systems, or agent-oriented workflows
What we offer
- Competitive salary and comprehensive benefits package
- Opportunities for professional growth within Nebius
- Flexible working arrangements
- A dynamic and collaborative work environment that values initiative and innovation
We’re growing and expanding our products every day. If you’re up to the challenge and are as excited about AI and ML as we are, join us!