ML Research Engineer

San Francisco, California, USA | Full-time

Transformers feel magical -- they understand text at near-human level. Yet traditional search engines (cough, Google) don't feel magical -- they don't understand text at near-human level. It doesn't have to be this way.

At Exa, we're training foundational models for search. Our goal is to build systems that can instantly filter the world's knowledge to exactly what you want, no matter how complex your query. Search is a unique research problem within generative AI -- instead of generating a single piece of text, search involves asking the same questions about billions of texts.

No other AI lab is exploring this direction. That means if we don't find novel methods for search, they won't be found. Our team has already made breakthroughs not seen in the literature, and we have more coming. Want to explore the search frontier with us?

Desired Experience

  • You have graduate-level ML experience (or are an exceptionally strong undergrad)
  • You can code up a transformer from scratch in PyTorch (a minimal illustrative sketch follows this list)
  • You're comfortable creating large-scale datasets
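
To calibrate the "transformer from scratch" bullet, here is a minimal sketch of the kind of self-contained PyTorch block we have in mind. It is illustrative only -- the module names, dimensions, and pre-norm layout are assumptions for this example, not Exa's production code.

import math
import torch
import torch.nn as nn


class SelfAttention(nn.Module):
    """Single-head scaled dot-product self-attention."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / math.sqrt(x.size(-1))
        return torch.softmax(scores, dim=-1) @ v


class TransformerBlock(nn.Module):
    """Pre-norm attention + MLP, each wrapped in a residual connection."""

    def __init__(self, d_model: int = 256, d_ff: int = 1024):
        super().__init__()
        self.attn = SelfAttention(d_model)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.attn(self.norm1(x))
        return x + self.mlp(self.norm2(x))


if __name__ == "__main__":
    block = TransformerBlock()
    out = block(torch.randn(2, 16, 256))
    print(out.shape)  # torch.Size([2, 16, 256])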

Example Projects

  • Train a 30B-parameter version of our current model
  • Build an RLAIF pipeline for search
  • Dream up a novel architecture for search in the shower, then code it up and beat our best model's top score
// APPLY HERE