The human brain is a sponge. Today’s AI brains are brittle and rigid. At Letta, we’re building self-improving artificial intelligence: creating agents that continually learn from experience and adapt over time.
Letta was founded by the creators of MemGPT from UC Berkeley’s Sky Computing Lab (the birthplace of Spark and Ray), and is backed by Jeff Dean, Clem Delangue, and pioneers across AI infrastructure. Our agents already power production systems at companies like 11x and Bilt Rewards, learning and improving every day.
We’re assembling a world-class team of researchers and engineers to solve AI’s hardest problem: making machines that can reason, remember, and learn the way humans do.
Note that this role is in-person (no hybrid), 5 days a week in downtown San Francisco.
You will design the long-term memory architecture for LLMs. At Letta, you'll work with a tight-knit team of AI researchers and engineers toward our vision of self-improving superintelligence, and you'll advance the field by publishing research openly: papers, technical reports, blog posts, and open-source code. Your work will include:
Defining the key abstractions of the LLM memory layer
Building memory architectures that support multiple memory types, including temporal sequences, episodic experiences, semantic knowledge, and procedural skills (see the sketch after this list)
Researching memory sharing between multiple agents that enables effective multi-agent collaboration
Improving context-management techniques that address the long-context and context-derailment problems
Running evaluations that measure and improve agent memory
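To make the multiple-memory-types bullet concrete, here is a minimal sketch of the kind of abstraction this role would design. All names here (MemoryType, MemoryEntry, MemoryStore) are hypothetical illustrations, not Letta's actual API; a production memory layer would add embedding-based retrieval, consolidation, forgetting policies, and sharing across agents.

```python
# Hypothetical sketch of a multi-type memory store (not Letta's actual API):
# one store holding episodic, semantic, and procedural entries, each stamped
# with a creation time so entries form a temporal sequence.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class MemoryType(Enum):
    EPISODIC = "episodic"      # specific past experiences ("the user asked X on Tuesday")
    SEMANTIC = "semantic"      # general knowledge ("the user prefers Python")
    PROCEDURAL = "procedural"  # learned skills ("how to format a weekly report")


@dataclass
class MemoryEntry:
    kind: MemoryType
    content: str
    # Timestamps give every entry a place in a temporal sequence.
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


class MemoryStore:
    """Toy long-term store: append entries, recall by type in recency order."""

    def __init__(self) -> None:
        self._entries: list[MemoryEntry] = []

    def write(self, kind: MemoryType, content: str) -> None:
        self._entries.append(MemoryEntry(kind, content))

    def recall(self, kind: MemoryType, limit: int = 5) -> list[MemoryEntry]:
        # Naive recency-ordered retrieval; a real system would use embedding
        # search and relevance scoring instead of a linear scan.
        matches = [e for e in self._entries if e.kind is kind]
        return sorted(matches, key=lambda e: e.created_at, reverse=True)[:limit]


if __name__ == "__main__":
    store = MemoryStore()
    store.write(MemoryType.SEMANTIC, "User prefers concise answers.")
    store.write(MemoryType.EPISODIC, "User asked about vector databases yesterday.")
    for entry in store.recall(MemoryType.SEMANTIC):
        print(entry.kind.value, "->", entry.content)
```

Even in this toy form, the design questions the role tackles are visible: what the entry schema should capture, how retrieval ranks memories, and how the store behaves as it grows.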
We're looking for:
Deep expertise in LLMs and retrieval
Track record of impactful research (breakthrough publications and/or open-source contributions)
Ability to balance execution speed with empirical rigor
Real-world impact beyond pure academic work