AI Security & Risk Manager
20 hours ago
New York
About the Company

The Technology department at our client is responsible for creating and continuously improving a robust and secure technology foundation that supports the firm's business activities.

About the Role

As artificial intelligence becomes deeply embedded in both internal operations and the broader vendor ecosystem, the firm faces a new and rapidly evolving risk surface. The AI Security & Risk Manager will be our client's dedicated subject matter expert at the intersection of AI and security, helping the firm navigate this landscape with rigor and clarity.

We are seeking a high-performing AI Security & Risk professional to join the Cybersecurity team. Reporting to the Head of Technology Risk, this individual will own the firm's approach to identifying, assessing, and managing risk introduced by AI, both through internal AI deployments and through vendors increasingly embedding AI into their platforms.

Responsibilities

AI Risk Governance & Strategy
• Own and maintain the firm's AI risk framework, covering model risk, data privacy, adversarial threats, third-party AI, and regulatory compliance.
• Develop and enforce AI usage policies in collaboration with Legal and Compliance, including acceptable use, data classification requirements, and prompt handling standards.
• Maintain an inventory of AI tools deployed firm-wide, both sanctioned and shadow, and assess their associated risk profiles.
• Provide regular AI risk reporting to the Head of Technology Risk and senior leadership, including emerging threat trends, vendor posture changes, and control gaps.
• Monitor the evolving regulatory environment for AI (EU AI Act, SEC guidance, DORA, NY DFS) and advise on compliance obligations and required controls.
Vendor AI Evaluation & Third-Party Risk
• Lead security and risk assessments of vendors introducing AI capabilities into existing or new platforms, including evaluating model transparency, data handling practices, and auditability.
• Develop and maintain a structured AI vendor evaluation framework, incorporating criteria for model governance, output reliability, data residency, and incident response obligations.
• Partner with Procurement and Legal to ensure AI-specific provisions are reflected in vendor contracts, including data usage restrictions, model change notifications, and liability terms.
• Maintain a tiered risk register of third-party AI integrations, with ongoing monitoring for material changes to vendor AI functionality, architecture, or ownership.
• Engage directly with vendor security and product teams to assess AI-related controls and drive remediation of identified gaps.

AI Threat Modeling & Security Architecture
• Conduct threat modeling for AI systems and integrations, including risks from prompt injection, model inversion, training data poisoning, and adversarial inputs.
• Evaluate AI-specific attack surfaces introduced by LLM integrations, agentic workflows, and MCP-connected data sources.
• Collaborate with infrastructure and application teams to embed AI security controls into deployment pipelines and system design reviews.
• Assess risks associated with AI-generated content, including deepfake vectors, synthetic phishing, and automated social engineering in the context of financial services.
• Contribute to the firm's broader security architecture by ensuring AI components are assessed within the existing control framework.
Internal AI Program Oversight
• Serve as the security and risk point of contact for the firm's internal AI deployments, including Claude Enterprise and any future platform integrations.
• Evaluate data retention, access control, and logging practices for AI platforms to ensure alignment with the firm's compliance and eDiscovery obligations.
• Provide risk assessments for proposed AI use cases across the firm, including a structured framework for approving, conditionally approving, or declining adoption.
• Support audit and compliance reviews related to AI, including evidence collection and engagement with regulators or external assessors as required.
• Develop and deliver AI security awareness content for technology staff and end users.

Qualifications
• Bachelor's degree in Computer Science, Information Security, Data Science, or a related field; advanced degree a plus.
• At least 7–10 years of experience in information security, technology risk, or a related field, with a minimum of 3 years focused on AI systems, machine learning security, or AI governance.
• Deep understanding of the AI and LLM landscape, including foundation model architecture, agentic systems, RAG pipelines, and the risk implications of each.
• Hands-on experience evaluating AI platforms and products, including the ability to assess vendor claims about model behavior, data handling, and security controls with appropriate skepticism.
• Familiarity with AI risk frameworks and emerging standards, including NIST AI RMF, MITRE ATLAS, OWASP LLM Top 10, and ISO/IEC 42001.
• Experience with vendor risk management in a regulated financial services environment, including contract negotiation support and third-party security assessments.
• Knowledge of relevant regulatory frameworks, including DORA, SOX, SEC cybersecurity disclosure rules, and GDPR/CCPA as they apply to AI data flows.
• Strong technical skills, sufficient to evaluate AI system architecture, API security, data pipeline design, and access control models without relying solely on vendor documentation.
• Excellent communication skills, with the ability to translate highly technical AI risk concepts into clear, decision-ready language for senior leadership, Legal, and Compliance.
• Experience operating in a Microsoft-first environment, including familiarity with Entra ID, Azure, and M365 security tooling, is a strong plus.
• Ability to work independently, manage competing priorities, and operate effectively in a fast-paced, lean team environment.
• Relevant certifications such as CISSP, CISM, CRISC, or emerging AI-focused credentials are a plus.