Software Engineer/DevOps Engineer
City of London
£Competitive plus strong bonus and benefits
Azure, Terraform, Data Tooling

A DevOps Engineer is sought to join a highly prestigious financial services organisation. This is a key role in which you will take responsibility for developing Microsoft Fabric-related DevOps processes, striking the right balance between environmental control and giving Data Engineering teams the flexibility to work efficiently. You will create bespoke Terraform modules and GitHub (or Azure DevOps) actions to support CI/CD workflows, and liaise with teams across the business to ensure the platform meets all security and performance requirements.

🔹 Key Responsibilities
· Develop standards and strategies to manage the deployment of assets into the Microsoft Fabric ecosystem.
· Where required, create custom actions in GitHub/Azure DevOps that use the Microsoft Fabric APIs.
· Where required, create custom Terraform modules so that Microsoft Fabric configuration is held as infrastructure as code.
· Work with Data Engineers to create the development environments engineers will use to develop and deploy products in Microsoft Fabric.
· Work with data owners around the business to ensure source data systems can be securely accessed.
· Ensure security best practices are followed.
· Define and maintain the BCP/DR strategy.
· Work with other members of the central platform team to monitor the Microsoft Fabric feature roadmap and integrate new features into the established ecosystem.
· Work with other members of the central platform team to define an efficient project process for delivering new data products.
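To illustrate the kind of custom CI/CD tooling described above, here is a minimal sketch of a helper that a custom GitHub Action or Azure DevOps task might use to create an item in a Microsoft Fabric workspace via the Fabric REST API. The endpoint shape and payload fields are assumptions based on the public Fabric REST API documentation, and the workspace ID, item name, and token are placeholder values; verify everything against the current API reference before use.

```python
"""Sketch of a deployment helper for a custom GitHub Action / Azure DevOps
task targeting the Microsoft Fabric REST API. Endpoint and payload are
assumptions from the public API docs, not a confirmed internal tool."""
import json
import urllib.request

# Assumed base URL of the Fabric REST API.
FABRIC_API_BASE = "https://api.fabric.microsoft.com/v1"


def build_create_item_request(workspace_id: str, display_name: str,
                              item_type: str, token: str) -> urllib.request.Request:
    """Build (but do not send) a POST request that creates a Fabric item,
    e.g. a Lakehouse or Notebook, in the given workspace."""
    url = f"{FABRIC_API_BASE}/workspaces/{workspace_id}/items"
    payload = json.dumps({"displayName": display_name, "type": item_type}).encode()
    return urllib.request.Request(
        url,
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",  # AAD/Entra token from the pipeline
            "Content-Type": "application/json",
        },
        method="POST",
    )


if __name__ == "__main__":
    # Hypothetical values for illustration only.
    req = build_create_item_request("ws-123", "sales_lakehouse", "Lakehouse", "dummy-token")
    print(req.full_url)
```

In a real pipeline the request would be sent with `urllib.request.urlopen` (or `requests`), with the token supplied from a pipeline secret rather than hard-coded.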
🔹 Key Technical Skills and Experience
· Terraform modules
· Infrastructure as code
· GitHub/Azure DevOps
· Azure Data Factory
· Azure Synapse
· CI/CD, including databases
· Databricks
· GitHub Actions/Azure DevOps tasks
· Monitoring in Azure
· Release management experience
· Microsoft Fabric (not essential)
· Curiosity to learn new areas such as AI and ML (not essential)
· Minimum 6 years working in a cloud environment managing data engineering products
This role is focused on manual quality assurance testing of AI-generated outputs, evaluating accuracy, context, grammar, fluency, and pronunciation across supported languages. You will work closely with the R&D team and the AI engineers building the models, offering direct feedback and test results that shape the final product. This is a high-impact role that ensures our models meet linguistic standards and serve real users accurately and naturally.

🔹 Key Responsibilities
· Check whether the AI speaks and writes correctly across different languages, accents, and regions.
· Make sure the AI sounds natural and uses the right grammar, tone, and local expressions.
· Report mistakes in the AI’s language output, including unclear pronunciation or wrong words.
· Work closely with the AI team to improve how well the AI understands and speaks different languages.
· Test the AI’s results manually or using tools, and keep clear records of what works and what doesn’t.
· Give feedback on how user-friendly the AI is for people from different cultures and backgrounds.
· Ensure the AI treats all languages and cultures fairly and follows privacy rules.
· Review translated text and voice outputs to check that they make sense in your native language.
· Point out common language mistakes and help fix them.
· Recheck results after the AI is updated to confirm issues are resolved.
· Attend discussions with the AI team to share your language knowledge.
· Help build examples and tests to train and measure the AI’s language skills.
· Fill out feedback forms and track progress on errors and fixes.
· Stick to timelines and guidelines shared by the QA or project manager.
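The record-keeping responsibility above could be supported by something as simple as the following sketch: a small log of QA findings with a per-language pass rate. The field names and categories are illustrative assumptions, not a description of the team's actual tooling (which, per the qualifications below, may simply be Google Sheets or a QA tracker).

```python
"""Minimal sketch of a manual-QA record keeper for AI language output.
Categories and field names are illustrative assumptions only."""
from dataclasses import dataclass, field


@dataclass
class Finding:
    language: str   # e.g. "de-DE" (hypothetical locale tag)
    category: str   # e.g. "grammar", "pronunciation", "tone"
    passed: bool
    note: str = ""


@dataclass
class QALog:
    findings: list = field(default_factory=list)

    def record(self, language: str, category: str, passed: bool, note: str = "") -> None:
        """Add one test observation to the log."""
        self.findings.append(Finding(language, category, passed, note))

    def pass_rate(self, language: str):
        """Fraction of recorded checks that passed for a language, or None."""
        relevant = [f for f in self.findings if f.language == language]
        if not relevant:
            return None
        return sum(f.passed for f in relevant) / len(relevant)


if __name__ == "__main__":
    log = QALog()
    log.record("de-DE", "grammar", True)
    log.record("de-DE", "pronunciation", False, "unclear vowel in 'Müller'")
    print(log.pass_rate("de-DE"))  # 0.5
```

Tracking findings in a structured form like this makes it straightforward to recheck the same cases after a model update and confirm which issues were resolved.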
🔹 Required Qualifications
· Native speaker of the target language (fluency in speaking, reading, and writing)
· Good command of English for translation comparison and documentation
· Strong understanding of grammar, cultural nuances, idiomatic usage, and slang in the native language
· Prior experience in language QA, proofreading, translation review, or content moderation is preferred
· Familiarity with Google Sheets, Word, or QA tracking tools

🔹 Preferred Skills
· Basic knowledge of AI/ML or translation systems
· Experience with speech/audio evaluation tools (Audacity, Praat, etc.) is a plus
· Comfortable working with R&D or technical teams
· Organized, detail-oriented, and proactive in raising issues