Experienced Research Engineer
London
Research Engineer – Foundation Models | London (On-site)

We're partnering with a stealth-mode deep tech AI lab based in London that's building foundation models from the ground up. This isn't fine-tuning or prompt engineering; it's genuine model pre-training, post-training, and large-scale systems research. The founding team includes senior engineers and researchers from Meta, Google, and AWS, each with deep experience training and deploying large-scale models. Their mission is simple but ambitious: to develop the next generation of language and multimodal foundation models and systems capable of general reasoning, memory, and efficient adaptation across domains.

As a Research Engineer, you'll play a key role in designing, training, and evaluating large-scale models and distributed systems. You'll work closely with experienced colleagues who have scaled some of the world's largest AI systems, helping to drive both the research and the infrastructure that underpin next-generation foundation models. You'll have a high degree of autonomy and ownership, contributing directly to experiments, training pipelines, and evaluation frameworks that will inform how the models evolve over time.

What you'll be doing:
- Designing and developing large-scale training pipelines for foundation models (language and multimodal).
- Building and scaling distributed training infrastructure across multi-GPU and multi-node environments.
- Exploring pre-training and post-training techniques, including SFT, RLHF, RLAIF, and DPO.
- Working closely with core research and systems teams to optimise training efficiency, data throughput, and inference performance.
- Contributing to internal tooling for data curation, versioning, and reproducible experimentation.

What we're looking for:
- Strong experience with LLMs, generative models, or multimodal models, ideally involving large-scale training or evaluation.
- An understanding of distributed training, parallelism strategies, and scaling laws.
- Familiarity with post-training methods (RLHF, RLAIF, DPO, or alignment optimisation).
- Solid software engineering fundamentals: version control, CI/CD, and production-grade model deployment.
- Experience working in cloud-native or high-performance compute environments (AWS, GCP, Docker, Kubernetes).
- A curiosity for foundational research and a desire to work on systems that push beyond the current limits of LLMs.

What's on offer:
- Be part of the core founding technical team shaping a next-generation foundation model company.
- Collaborate with world-class engineers and researchers from top AI labs.
- Work on true pre-training and large-scale systems, not just fine-tuning.
- Competitive salary up to £160k, plus meaningful equity.
- Operate in a fast-paced, research-driven environment where ideas move quickly from prototype to production.

📍 Location: London, 5 days per week on-site

If you're motivated by deep technical challenges and want to work on real model training and architecture design, not just incremental fine-tuning, this is one worth a conversation. To find out more, send a quick message to Jamie Forgan or apply directly with your CV, and we'll arrange a confidential chat to run you through the details.