Prompt Engineer
12 days ago
London
Role Overview

Role/Job title: Prompt Engineer
Work location: London, Tunbridge Wells, Ipswich, Bolton
Mode of working (hybrid/office based): Hybrid
If hybrid, days required in office: 3

The Role

As a Prompt Engineer, you will design, implement, and optimize conversational and generative AI experiences powered by Large Language Models (LLMs) on Microsoft Azure. You will craft robust prompt strategies (system prompts, few-/zero-shot prompts, tool-use instructions), implement prompt chaining for multi-step reasoning, and integrate model outputs into enterprise applications via secure APIs. You will collaborate closely with product owners, solution architects, data engineers, and application developers to translate business objectives into high-quality AI outcomes. A working understanding of Retrieval-Augmented Generation (RAG) is essential for grounding model responses in authoritative enterprise content and reducing hallucinations. This role blends hands-on engineering with rigorous experimentation, evaluation, and continuous improvement.

Your Responsibilities

- Author, test, and refine system, developer, and user prompts to achieve reliable, safe, and consistent outputs.
- Implement prompt chaining and multi-turn orchestration patterns for complex workflows (reasoning, planning, tool use, and validation).
- Build LLM-powered features on Azure (e.g., Azure OpenAI, Azure Functions).
- Use and manage RESTful APIs/SDKs to integrate model calls into web services, back-end jobs, and enterprise applications.
- Design and implement RAG pipelines (chunking, embeddings, indexing, ranking/citation policies) to ground responses in approved content stores.
- Establish offline/online evaluation frameworks (accuracy, safety, faithfulness, latency, cost), create test datasets, and run A/B or canary experiments.
- Monitor production behavior, analyze conversations, and iterate on prompts and retrieval strategies to improve outcomes.
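As an illustration of the chunking step in the RAG pipelines mentioned among the responsibilities, a minimal sketch might look like the following (the function name and parameters are illustrative, not part of the role description; production pipelines usually chunk by tokens and respect sentence boundaries rather than splitting raw characters):

```python
# Illustrative sketch: fixed-size document chunking with overlap,
# a common first step in a RAG ingestion pipeline. Chunk sizes are
# in characters here for simplicity only.

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split `text` into overlapping chunks ready for embedding and indexing."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    step = chunk_size - overlap  # advance by chunk_size minus the overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

doc = "x" * 1200
pieces = chunk_text(doc, chunk_size=500, overlap=50)
# starts at offsets 0, 450, 900 -> three chunks, each sharing 50
# characters with its neighbour
```

The overlap ensures that a sentence falling on a chunk boundary still appears intact in at least one chunk, which improves retrieval recall.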
- Enforce content safety, PII handling, data privacy, and role-based access; follow Responsible AI practices and organizational guardrails.
- Partner with architects and engineers to define LLM interfaces, token/cost budgets, and observability.

Your Profile

Essential skills/knowledge/experience

- Hands-on experience crafting prompts (system role design, few-/zero-shot, tool-use instructions) and prompt chaining for multi-step tasks.
- Strong understanding of LLM behavior (context windows, tokens, temperature/top-p, function/tool calling, safety filters).
- Understanding of prompt injection and other AI security considerations.
- Practical experience deploying LLM solutions on Azure (e.g., Azure OpenAI, Azure Functions, App Service, Key Vault).
- Proficiency with REST APIs and JSON; integrating LLM calls into applications/services using Python or C# (Node.js also acceptable).
- Working knowledge of embeddings, document chunking strategies, indexing, semantic search, and citation/grounding techniques.
- Experience with vector databases (e.g., Azure Cosmos DB vector search, Redis Enterprise, Pinecone) and reranking strategies.
- Experience with Git and CI/CD (Azure DevOps or GitHub), unit/integration testing for LLM pipelines, and environment/config management.
- Ability to measure and optimize latency, throughput, and cost (token budgeting, caching, retries, and fallbacks).
- Exposure to conversation design, guardrail UX, human-in-the-loop review workflows, and prompt libraries/pattern catalogs.

Desirable skills/knowledge/experience

- Document processing/ETL skills to prepare high-quality corpora for retrieval grounding.
- Familiarity with LLM evaluation frameworks, prompt-quality metrics, red-teaming, and hallucination/safety monitoring.
- Knowledge of MLOps patterns, experiment tracking, feature stores, and observability (logging, tracing, metrics) for LLM apps.
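The resilience techniques named in the essential skills (retries and fallbacks) can be sketched as a small wrapper around an LLM call; all names here (`call_with_fallback`, `gpt-large`, `gpt-small`) are hypothetical stand-ins, not anything specified by the posting:

```python
import time

# Illustrative retry-with-fallback wrapper for LLM calls. A transient
# failure on the primary model is retried with exponential backoff,
# then a cheaper fallback model is tried before giving up.

def call_with_fallback(call_model, primary: str, fallback: str,
                       retries: int = 2, base_delay: float = 0.0):
    """Try `call_model(primary)` with retries, then `call_model(fallback)`."""
    for model in (primary, fallback):
        for attempt in range(retries + 1):
            try:
                return call_model(model)
            except RuntimeError:  # stand-in for a transient API error
                time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("all models exhausted")

# Stub that always fails on the primary but succeeds on the fallback.
def flaky(model: str) -> str:
    if model == "gpt-large":
        raise RuntimeError("rate limited")
    return f"answer from {model}"

result = call_with_fallback(flaky, "gpt-large", "gpt-small")
```

In a real service, the stub would be replaced by an actual SDK call, the caught exception narrowed to the provider's transient error types, and a non-zero `base_delay` used so retries actually back off.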