Security Strategy & Enablement Lead
London
The AI Security Institute is the world's largest and best-funded team dedicated to understanding advanced AI risks and translating that knowledge into action. We're in the heart of the UK government, with direct lines to No. 10, and we work with frontier developers and governments globally. We're here because governments are critical for advanced AI going well, and UK AISI is uniquely positioned to mobilise them. With our resources, unique agility and international influence, this is the best place to shape both AI development and government action.

About the Team: Security Engineering at the AI Security Institute (AISI) exists to help our researchers move fast, safely. We are founding the Security Engineering team in a largely greenfield cloud environment, and we treat security as a measurable, researcher-centric product: secure-by-design platforms, automated governance and intelligence-led detection that protect our people, partners, models and data. We work shoulder to shoulder with research units and core technology teams, and we optimise for enablement over gatekeeping, proportionate controls, low ego and high ownership.
What you might work on:
• Help design and ship paved roads and secure defaults across our platform so researchers can build quickly and safely
• Build provenance and integrity into the software supply chain (signing, attestation, artefact verification, reproducibility)
• Support strengthened identity, segmentation, secrets and key management to create a defensible foundation for evaluations at scale
• Develop automated, evidence-driven assurance mapped to relevant standards, reducing audit toil and improving signal
• Create detections and response playbooks tailored to model evaluations and research workflows, and run exercises to validate them
• Threat model new evaluation pipelines with research and core technology teams, fixing classes of issues at the platform layer
• Assess third-party services and hardware/software supply chains; introduce lightweight controls that raise the bar
• Contribute to open standards and open source, and share lessons with the broader community where appropriate

If you want to build security that accelerates frontier-scale AI safety research and see your work land in production quickly, this is a good place to do it.

About the Role: Act as the connective tissue of the AISI security function. This role blends chief-of-staff energy with product thinking and delivery focus. You'll own the team's narrative, planning, communication and rhythm, ensuring security is seen as valuable, accessible and outcome-driven across AISI and beyond. You'll also connect security to AISI's frontier AI work, making model lifecycle risks, safeguards and evidence legible to leadership and partners, and aligning security delivery with AI safety objectives.
Responsibilities:
• Lead internal strategic planning, OKRs, delivery coordination and progress tracking
• Own security comms: presentations, dashboards, monthly updates and assurance packs
• Develop reusable material for onboarding, stakeholder engagement and external briefings
• Coordinate cross-cutting initiatives, risks and dependencies across the function
• Represent the CISO in meetings and planning forums as needed
• Build and maintain relationships across AISI (engineering, research, policy) and with DSIT security stakeholders
• Translate technical work into stories and narratives aligned to AISI's mission
• Shape an integrated security and AI risk narrative covering the model lifecycle and how safeguards map to AISI's mission
• Define and track outcome-oriented metrics that include AI surfaces (e.g. eval/release-gate coverage, model/weights custody controls, GPU governance posture, third-party model/API usage patterns, key AI incident learnings)
• Curate enablement materials for AI/ML teams: secure/vetted patterns for model and data handling, use of external model APIs, and roles and responsibilities across shared responsibility boundaries
• Coordinate AI-governance touchpoints with DSIT and internal leads (e.g. readiness for NIST AI RMF/ISO 42001 where relevant), partnering with GRC to ensure consistent evidence and narratives
• Maintain a clear stakeholder map across research, platform, product and policy; run the operating rhythm that keeps security and delivery aligned

About You:
• Background in strategy, product, cyber security or technical programme leadership
• Exceptional written and verbal communication; able to switch fluently between technical and executive audiences
• Operates independently, prioritises well and holds delivery to account
• Curious about how teams work, not just what they deliver
• Values structure, clarity and momentum
• Practical familiarity with AI/ML concepts, sufficient to translate between security, research and policy
• Desirable: experience enabling research or ML organisations and aligning security narratives with AI safety goals

Core skills:
• Planning and roadmap ownership
• Internal comms and storytelling
• Operating rhythms, documentation and delivery support
• Cross-functional leadership across engineering, research and policy
• Outcome-focused metrics and OKRs that reflect security posture

What We Offer

Impact you couldn't have anywhere else:
• Incredibly talented, mission-driven and supportive colleagues.
• Direct influence on how frontier AI is governed and deployed globally.
• Work with the Prime Minister's AI Advisor and leading AI companies.
• Pre-release access to multiple frontier models and ample compute.
• Extensive operational support so you can focus on research and ship quickly.
• If you're talented and driven, you'll own important problems early.
• 5 days off for learning and development, annual stipends for learning and development, and funding for conferences and external collaborations.
• Freedom to pursue research bets without product pressure.
• Modern central London office (cafes, food court, gym), or the option to work in similar government offices in Birmingham, Cardiff, Darlington, Edinburgh, Salford or Bristol.
• Hybrid working, flexibility for occasional remote work abroad, and stipends for work-from-home equipment.
• At least 25 days annual leave, 8 public holidays, extra team-wide breaks and 3 days off for volunteering.
• Generous paid parental leave (36 weeks of UK statutory leave shared between parents, 3 extra paid weeks, and the option for additional unpaid time).
• On top of your salary, we contribute 28.97% of your base salary to your pension.

Annual salary is benchmarked to role scope and relevant experience. Most offers land between £65,000 and £145,000 (base plus technical allowance), with the 28.97% employer pension and other benefits on top. This role sits outside of the DDaT pay framework, as its scope requires in-depth technical expertise in frontier AI safety, robustness and advanced AI architectures. The full range of salaries is as follows:
• Level 3: £35,720 base + technical allowance
• Level 4: £42,495 base + technical allowance
• Level 5: £55,805 base + technical allowance
• Level 6: £68,770 base + technical allowance
• Level 7: £145,000 (£68,770 base + £76,230 technical allowance)

Additional Information

The Internal Fraud function of the Fraud, Error, Debt and Grants Function at the Cabinet Office processes details of civil servants who have been dismissed for committing internal fraud, or who would have been dismissed had they not resigned. The Cabinet Office receives these details from participating government organisations; in such instances, civil servants are banned for 5 years from further employment in the civil service. The Cabinet Office then processes this data and discloses a limited dataset back to DLUHC as a participating government organisation. DLUHC then carries out pre-employment checks to detect instances where known fraudsters are attempting to reapply for roles in the civil service. In this way, the policy is enforced and the repetition of internal fraud is prevented.
For more information please see -___