Responsible AI (RAI) & Governance Solution Architect
City of London
Experience: 15+ years overall experience, with significant exposure to AI/ML, risk, governance, or enterprise architecture

Role Overview

The Responsible AI (RAI) & Governance Solution Architect will be responsible for designing, implementing, and operationalizing enterprise-wide Responsible AI governance frameworks. This role ensures that AI systems are ethical, compliant, secure, transparent, and scalable, while meeting regulatory, organizational, and societal expectations. The architect will work closely with engineering, data science, legal, risk, compliance, security, and business teams to embed governance controls into the AI lifecycle and drive continuous improvement through metrics, tooling, red teaming, training, and incident management.

Key Responsibilities

1. Responsible AI & Governance Framework Implementation
• Translate organizational Responsible AI principles and governance frameworks into practical workflows, controls, and standards across the AI/ML lifecycle (design, development, deployment, monitoring, retirement).
• Embed governance checkpoints into MLOps/AIOps pipelines, product reviews, and approval processes.
• Ensure compliance with UK and global regulations (e.g., UK AI regulation principles, GDPR, EU AI Act readiness, ISO/IEC standards).
• Define and operationalize model risk management, documentation, and approval processes (e.g., model cards, data sheets, impact assessments).

2. Metrics Tracking, Evaluation & Red Teaming
• Design and oversee Responsible AI metrics and KPIs, including agentic AI-specific indicators such as:
  • Harm Propagation Index
  • Emergent Behaviour Index
  • Bias & Fairness Scores
  • Explainability & Transparency Metrics
• Conduct and govern AI red teaming exercises to proactively identify safety, ethical, security, and misuse risks.
• Establish continuous monitoring mechanisms to detect model drift, unintended behaviours, and policy violations.
• Develop dashboards and reporting for leadership on AI risk posture and governance maturity.

3. Training & Awareness
• Design and deliver Responsible AI training programs for:
  • Data Scientists & Engineers
  • Product Managers
  • Risk, Compliance, and Business Stakeholders
• Promote awareness of AI ethics, regulatory obligations, and governance expectations.
• Act as a subject-matter expert and advisor for AI practitioners and leadership.

4. Incident Management & Escalation Support
• Investigate and respond to AI-related ethical incidents, policy breaches, safety issues, or significant performance deviations.
• Lead root-cause analysis and recommend corrective and preventive actions.
• Define escalation pathways and participate in AI ethics and risk committees.
• Maintain audit-ready documentation for incidents and remediation actions.

5. Continuous Improvement & Governance Tooling
• Evaluate and recommend AI governance platforms and tools (e.g., model monitoring, bias detection, documentation automation).
• Enhance governance maturity through best practices, emerging standards, and industry learnings.
• Continuously improve policies, processes, and controls to support scalable and responsible AI adoption.
• Track internal and external trends in AI regulation, safety, and ethics.
Required Skills & Experience
• 15+ years of experience in AI/ML, enterprise architecture, risk management, governance, compliance, or secure systems design.
• Proven experience designing and implementing AI governance or model risk management frameworks.
• Strong understanding of AI/ML systems, including generative AI and agentic AI.

Technical & Governance Expertise
• Responsible AI principles: fairness, transparency, explainability, robustness, privacy, accountability.
• Experience with:
  • Bias & fairness assessment tools
  • Explainability frameworks (e.g., SHAP, LIME concepts)
  • Model lifecycle and MLOps practices
• Knowledge of the regulatory and standards landscape:
  • UK AI governance principles
  • GDPR and data protection
  • EU AI Act (or equivalent risk-based regulations)
  • ISO/IEC AI standards, NIST AI Risk Management Framework

Soft Skills & Leadership
• Strong stakeholder management and cross-functional leadership capabilities.
• Ability to influence without authority in complex enterprise environments.
• Excellent communication skills, with the ability to explain complex AI risks to non-technical audiences.
• Strategic thinker with a pragmatic, execution-oriented mindset.

Preferred Qualifications
• Advanced degree in Computer Science, Data Science, AI, Ethics, Law, Risk, or a related field.
• Certifications in AI, data privacy, security, enterprise architecture, or risk management.
• Experience working in financial services, healthcare, the public sector, or other regulated industries.

About HCLTech

HCLTech is a global technology company with over 227,000 employees across 60 countries, delivering capabilities in digital, engineering, cloud, and AI. Guided by its philosophy of Supercharging Progress™, HCLTech partners with clients worldwide to accelerate their transformation journeys.