Hire AI developer candidates in London
Are you looking to kick-start a new career as a Data Scientist? We are recruiting for companies looking to employ our Data Science Traineeship graduates to keep up with their growth. The best part is that you will not need any previous experience, as full training is provided. You will also have the reassurance of a job guarantee (£25K-£45K) within 20 miles of your location upon completion. Whether you are working full time, part time or unemployed, this package has the flexibility to be completed at a pace that suits you. The traineeship is completed in 4 easy steps, and you can be placed into your first role in as little as 6-12 months.

Step 1 - Full Data Science Career Training
You will begin your data science journey by studying a selection of industry-recognised courses that take you from beginner level all the way through to being qualified to work in a junior Data Scientist role. Through the interactive courses, you will gain knowledge of Python, R, Machine Learning, AI, and much more. You will also complete mini projects to gain practical experience and test your skills while you study.

Step 2 - CompTIA Data+
CompTIA Data+ is an early-career data analytics certification for professionals tasked with developing and promoting data-driven business decision-making. It covers Data Mining, Visualisation, Data Governance and Data Analytics. In any industry, official certifications carry real weight in the recruitment process, so this globally recognised certification will enhance your CV and make you stand out from the crowd.

Step 3 - Official Exam
The CompTIA Data+ exam certifies that you have the knowledge and skills required to transform business requirements in support of data-driven decisions: mining and manipulating data, applying basic statistical methods, and analysing complex datasets while adhering to governance and quality standards. The exam is 90 minutes long and can be sat either at your local testing centre or online.

Step 4 - Practical Projects
Once you have completed your theory training and official exams, you will be assigned 2 practical projects by your tutor. The projects are the most important part of the traineeship, as they showcase to employers that you have the skills required to work in a data science role. The projects use real-world scenarios in which you will be utilising all of the skills you have learned, with ongoing support from your personal tutor throughout. Once both projects have been completed and given final sign-off, you will have completed the traineeship and be ready to move on to the recruitment stage.

Your Data Science Role
Once you have completed all of the mandatory training, which includes the online courses, practical projects and building your own portfolio, we will place you into a Data Scientist role with a guaranteed starting salary of £25K-£45K. We have partnered with a number of large organisations strategically located throughout the UK, providing a nationwide reach of jobs for our candidates. We guarantee you will be offered a job upon completion, or we will refund 100% of your course fees. We have a proven track record of placing 1000+ candidates into new roles each year; check out our website for our latest success stories.

Read through the information? Passionate about starting a career in data science? Apply now and one of our friendly advisors will be in touch.
Software Engineer/DevOps Engineer
City of London
£Competitive plus strong bonus and benefits
Azure, Terraform, Data Tooling

A DevOps Engineer is sought to join a highly prestigious financial services organisation. This is a key role in which you will take responsibility for developing Microsoft Fabric-related DevOps processes, striking the correct balance between environment control and giving Data Engineering teams the flexibility to work efficiently. You will create bespoke modules in Terraform and actions in GitHub (or Azure DevOps) to support CI/CD workflows, and you will liaise with teams across the business to ensure the platform meets all security and performance requirements.

Key Responsibilities
- Develop standards and strategies to manage the deployment of assets into the Microsoft Fabric ecosystem.
- Where required, create custom actions in GitHub/Azure DevOps that use the Microsoft Fabric APIs.
- Where required, create custom Terraform modules so that Microsoft Fabric configuration is held as infrastructure as code.
- Work with Data Engineers to create the development environments engineers will use to develop and deploy products in Microsoft Fabric.
- Work with data owners around the business to ensure source data systems can be securely accessed.
- Ensure security best practices are followed, including the BCP/DR strategy.
- Work with other members of the central platform team to monitor the Microsoft Fabric feature roadmap and integrate new features into the established ecosystem.
- Work with other members of the central platform team to define an efficient project process for delivering new data products.

Key Technical Skills and Experience
- Terraform modules and infrastructure as code
- GitHub/Azure DevOps
- Azure Data Factory
- Azure Synapse
- CI/CD, including databases
- Databricks
- GitHub Actions/Azure DevOps tasks
- Monitoring in Azure
- Release management experience
- Microsoft Fabric (not essential)
- Curiosity to learn new areas such as AI and ML (not essential)
- Minimum 6 years working in a cloud environment managing data engineering products
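To make the "custom actions that use the Microsoft Fabric APIs" concrete, here is a minimal sketch of the kind of Python step such an action might run. The base URL, the /workspaces endpoint, and the token scope are assumptions based on the public Fabric REST API and should be verified against current documentation; this is an illustration, not the organisation's actual tooling.

```python
# Minimal sketch: query Microsoft Fabric from a CI/CD step.
# Assumptions (verify against current Fabric docs): the REST base URL,
# the /workspaces endpoint, and the token scope below.
import requests                                   # pip install requests
from azure.identity import DefaultAzureCredential  # pip install azure-identity

FABRIC_API = "https://api.fabric.microsoft.com/v1"   # assumed base URL
SCOPE = "https://api.fabric.microsoft.com/.default"  # assumed token scope

def fabric_get(path: str) -> dict:
    """Call a Fabric REST endpoint with a Microsoft Entra bearer token."""
    token = DefaultAzureCredential().get_token(SCOPE).token
    resp = requests.get(
        f"{FABRIC_API}/{path.lstrip('/')}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # A deployment gate might fail the pipeline if an expected workspace is missing.
    for ws in fabric_get("workspaces").get("value", []):
        print(ws.get("id"), ws.get("displayName"))
```

In a real pipeline this script would run inside a GitHub Actions or Azure DevOps job, with the service principal's credentials supplied through the runner's secret store rather than hard-coded.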
About NanoX Tech Solutions
NanoX is a fast-growing technology consultancy (est. 2025) that builds custom software, AI-driven data products and cloud solutions for startups and SMEs. We’re a micro-company headquartered in the UK with a globally distributed, autonomy-first culture.

Why we’re hiring
Our inbound interest is strong, but we need a hunter who can turn leads into signed statements of work and long-term accounts. You’ll be among our first ten hires in the UK, laying the foundation for NanoX’s revenue engine.

Role overview
Own end-to-end business development: identify prospects, craft solution narratives, close deals and create repeatable processes that scale across the UK & EMEA.

Key responsibilities
- Pipeline generation: map target verticals (fintech, e-commerce, healthtech, climate tech) and run multi-channel outbound.
- Solution selling: lead discovery sessions, translate business problems into NanoX solutions, and draft proposals/SOWs.
- Partnerships: build channel and referral networks (e.g., AWS, Azure, niche SaaS).
- Market intelligence: track competitor moves and pricing trends to refine our GTM narrative.
- Process & reporting: stand up a lightweight CRM cadence, forecast revenue and report KPIs to leadership.

Compensation & benefits
- Commission (core pay): 4.5% of the gross revenue on every successful deal you close, paid monthly when the customer pays us.
- Performance incentives: ad-hoc cash or e-voucher bonuses for surpassing quarterly targets.
- Company goodies: quarterly swag drops (devices, branded merch, etc.).
- Paid leave: 28 days of holiday per year, plus UK public holidays.
- Sponsored retreat: one fully funded company holiday each year (location voted by the team).
- Progression: clear path to Head of Growth once you demonstrate sustained quota over-achievement and build the first sales pod.

Important: This is a commission-only position with no fixed base salary. It’s designed for high-energy closers who prefer upside over low-risk guarantees.

Must-have experience & skills
- 3-6 yrs B2B sales/biz-dev in software consulting, SaaS or IT services.
- Consistent record of closing £250k+ contracts or hitting a £500k+ annual quota (proof required).
- Comfortable explaining technical concepts (cloud, APIs, AI/ML) to non-technical buyers.
- Consultative selling, proposal writing and negotiation prowess.
- Startup mindset: self-directed, resilient, thrives on ambiguity.
- Excellent spoken/written English and UK work authorisation.

Nice-to-have
- Existing network in our focus verticals.
- Familiarity with early-stage GTM tools (HubSpot, Apollo, Navattic, etc.).
- Additional European language.

Success metrics (first 12 months)
- Closed-won revenue: ≥ £750k.
- Opportunity→deal conversion: ≥ 25%.
- Partnerships signed: ≥ 3 strategic alliances.
- Forecast accuracy: ±10% on a rolling 90-day view.

Hiring process
1. Intro call (15 min) with People Ops
2. Deep-dive (60 min) with Managing Director (deal walk-through + Q&A)
3. Practical exercise: 24-hour async GTM mini-plan for a sample prospect
4. Culture interview with cross-functional panel
5. Offer

Think a commission-only model with uncapped upside is your natural habitat?

Job Types: Full-time, Part-time
Expected hours: No more than 50 per week
Additional pay: Commission pay, performance bonus, quarterly bonus, yearly bonus
Benefits: Work from home
Schedule: Monday to Friday, overtime, weekend availability
Work Location: Remote
We are building AlgoRisk AI – a next-generation platform that uses AI to transform how banks develop and govern financial models. We are currently working on a number of projects using GPT-4, React, Supabase, and modern LLM tools. This is a confidential, real-world project (not open-source). I’m inviting a small group of motivated contributors to work with me as unpaid interns or collaborators.

What You’ll Gain:
• Official Certificate of Contribution (AlgoRisk AI)
• Mentorship from a fintech founder
• Hands-on experience with real-world AI tooling
• Strong reference or letter for future roles
• Chance of future paid work post-MVP
⸻
Key Skills Needed (any 2–3 of these is enough):
• React / Next.js (Frontend)
• Supabase (Database + Auth)
• OpenAI API (GPT-4/4o chat completions)
• JavaScript or TypeScript
• CodeMirror or Monaco Editor
• Python / FastAPI (nice to have)
• GitHub + version control
• Curiosity to learn and build fast
⸻
Commitment:
• Remote, flexible hours (20–30 hrs/week)
• 3–5 weeks (initial phase)
• Start immediately
⸻
How to Apply:
DM us with:
• Your name and country
• Your LinkedIn or GitHub profile
• A short sentence on why you’re interested
Let’s build something impactful together.
About Luupli
Luupli is a social media app with equity, diversity, and equality at its heart. We believe that social media can be a force for good, and we are committed to creating a platform that maximizes the value creators and businesses can gain from it while making a positive impact on society and the planet. Our app is currently in beta test, and we are excited about the possibilities it presents. Our team is made up of passionate and dedicated individuals who are committed to making Luupli a success.

Job Description
As an AI Engineer at Luupli, you will play a pivotal role in developing intelligent systems and orchestrating agentic workflows that power Luupli’s AI features. Your work will span Retrieval-Augmented Generation (RAG), multi-agent LLM orchestration, auto-captioning, generative media, and content moderation. You’ll use frameworks like LangGraph, LangChain, and Google’s Agent Development Kit to build persistent, scalable AI services on Google Cloud Platform (GCP). This is a full-stack AI role spanning intelligent backend APIs, LLM agent orchestration, and integration with product-facing features.

Responsibilities
- Build and deploy multi-agent AI workflows using LangGraph, LangChain, or Google’s Agent Development Kit.
- Implement RAG pipelines using embeddings, semantic chunking, and vector databases (e.g., FAISS, Pinecone, Weaviate).
- Integrate hosted and open-source LLMs (OpenAI, Gemini, Claude, Ollama, Mistral) into intelligent systems.
- Build REST APIs with FastAPI and internal tools with Streamlit to expose AI functionality.
- Deploy production-grade services on GCP using Vertex AI, Cloud Run, Cloud Functions, IAM, and Pub/Sub.
- Embed AI into platform features such as auto-captioning, LuupForge (generative studio), feed personalization, and real-time moderation.
- Maintain modular, testable, observable, and secure code across the AI system lifecycle.

Requirements
- 3+ years of experience in applied AI/ML engineering (production-level deployments, not research-only).
- Strong Python development skills with full-stack AI engineering experience: FastAPI, Streamlit; LangGraph, LangChain, or similar; PyTorch, Transformers; FAISS, Weaviate, or Pinecone.
- Solid experience working with hosted APIs (OpenAI, Gemini) and self-hosted models (Mistral, Ollama, LLaMA).
- Deep understanding of LLM orchestration, agent tool use, memory sharing, and prompt engineering.
- Hands-on experience with Google Cloud Platform (GCP), especially Vertex AI, Cloud Functions, Cloud Run, and Pub/Sub.
- Familiarity with best practices in cloud-based software development: containerization, CI/CD, testing, monitoring.

Nice to Have
- Experience with Google’s Agent Development Kit or similar agent ecosystems.
- Familiarity with multimodal AI (e.g., handling text, image, audio, or video content).
- Prior experience developing creator platforms, content recommendation engines, or social media analytics.
- Understanding of ethical AI principles, data privacy, and bias mitigation.
- Experience with observability tools (e.g., Sentry, OpenTelemetry, Datadog).
- Data engineering experience, such as building ETL/ELT pipelines; working with event-based ingestion and structured logs (e.g., user sessions, reactions, feeds); using tools like BigQuery, Airflow, or dbt; and designing or consuming feature stores for AI/ML applications.

Compensation
This is an equity-only position, offering a unique opportunity to gain a stake in a rapidly growing company and contribute directly to its success.
As part of your cover letter, please respond to the following questions:
1. This position is structured on an equity-only basis and is therefore unpaid until we secure seed funding. Given this structure, are you comfortable continuing with your application for this role?
2. Have you built or contributed to agent-based AI systems using frameworks like LangGraph, LangChain, or Google’s Agent Development Kit?
3. Do you have experience with Retrieval-Augmented Generation (RAG) systems and vector databases (e.g., FAISS, Pinecone, Weaviate)?
4. Have you deployed AI systems on Google Cloud Platform? If not, which cloud platforms have you used, and how?
5. Have you integrated LLMs (e.g., OpenAI, Gemini, Claude) into autonomous or multi-step workflows?
6. Can you explain how agents collaborate and maintain memory across tasks in multi-agent systems?
7. What is your experience with prompt engineering, tool invocation, and orchestrated LLM workflows?
8. Do you have any public code repositories (e.g., GitHub), demo URLs, or project write-ups showcasing your work?
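For candidates picturing the RAG work described in the Luupli posting above, here is a minimal, self-contained sketch of the retrieve-then-generate pattern using FAISS and the OpenAI API. The model names, the two-line toy corpus, and the flat in-memory index are illustrative assumptions, not Luupli's actual pipeline, which would involve semantic chunking and a managed vector store such as Pinecone or Weaviate.

```python
# Minimal RAG sketch: embed documents, index with FAISS, retrieve the
# top-k chunks, and ground an LLM answer in them.
import numpy as np
import faiss                 # pip install faiss-cpu
from openai import OpenAI    # pip install openai; needs OPENAI_API_KEY

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Embed texts; text-embedding-3-small returns 1536-dim vectors."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

# 1) Index a toy corpus (invented example content, for illustration only).
chunks = [
    "Captions support up to 150 characters.",
    "Creators can schedule posts from the studio tab.",
]
index = faiss.IndexFlatIP(1536)
vecs = embed(chunks)
faiss.normalize_L2(vecs)  # normalize so inner product = cosine similarity
index.add(vecs)

# 2) Retrieve the best chunk for a query and use it as grounding context.
query = embed(["How long can a caption be?"])
faiss.normalize_L2(query)
_, ids = index.search(query, 1)
context = "\n".join(chunks[i] for i in ids[0])

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Answer using only this context:\n{context}"},
        {"role": "user", "content": "How long can a caption be?"},
    ],
)
print(answer.choices[0].message.content)
```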
Job Vacancy: Agentic AI Professional (Full-Time)
Location: Canary Wharf, London (Remote or Hybrid Options Available)
Type: Full-Time
Sector: Artificial Intelligence, Emerging Technologies

About Us:
We are a forward-thinking UK-based firm at the forefront of AI innovation, focusing on the next generation of intelligent systems. Our mission is to create ethical, autonomous, and highly functional agentic AI solutions that drive transformation across industries. As we scale our operations, we're looking for a talented Agentic AI Professional who is passionate about building AI agents that can plan, reason, and act independently within complex environments.

Role Overview:
As an Agentic AI Professional, you will play a key role in designing, developing, and deploying autonomous agents that operate across various domains, including enterprise automation, decision-support systems, and human-AI collaboration tools.

Key Responsibilities:
- Design and develop AI agents with goal-directed behavior and planning capabilities.
- Integrate LLMs, multi-agent systems, and reinforcement learning approaches.
- Conduct experiments and iterate based on empirical evaluation of agent performance.
- Work closely with product teams to deploy real-world AI solutions.
- Contribute to the development of safe, interpretable, and ethical agentic systems.

Requirements:
- Proven experience working with AI agents, autonomous systems, or related fields.
- Strong understanding of large language models (e.g., GPT, Claude, LLaMA) and their integration with agent frameworks.
- Proficiency in Python and relevant AI/ML libraries (e.g., LangChain, OpenAI API, HuggingFace, PyTorch).
- Familiarity with agentic AI architectures (e.g., ReAct, AutoGPT, OpenAgents, or custom multi-agent systems).
- Experience with planning, reasoning, and memory modules.
- Excellent problem-solving and communication skills.
- Degree in Computer Science, Artificial Intelligence, or a related field (or equivalent experience).

Preferred Qualifications:
- Experience deploying AI agents in production environments.
- Knowledge of symbolic reasoning, cognitive architectures, or goal-oriented planning.
- Contributions to open-source agentic AI projects.
- Understanding of AI ethics, safety, and alignment.

What We Offer:
- Competitive salary and benefits package.
- Flexible work schedule and remote work options.
- Access to cutting-edge compute resources and research collaborations.
- Opportunity to work on high-impact projects in a fast-growing space.
- A culture of innovation, autonomy, and continuous learning.

How to Apply:
Please send your CV, a brief cover letter, and (if available) links to any relevant projects or GitHub profiles. Applications are reviewed on a rolling basis.
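Since this posting names ReAct-style architectures without showing one, here is a minimal sketch of the reason/act loop such agents implement, written against the OpenAI tool-calling API. The model name and the toy get_weather tool are assumptions for illustration; a production agent would add planning, memory, and real tool backends.

```python
# Minimal sketch of a ReAct-style agent loop with OpenAI tool calling:
# the model reasons, optionally requests a tool, observes the result,
# and repeats until it can answer directly.
import json
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY

client = OpenAI()

def get_weather(city: str) -> str:
    """Toy tool: a real agent would call an actual weather API here."""
    return json.dumps({"city": city, "forecast": "light rain, 14C"})

TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get today's forecast for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "Should I cycle to Canary Wharf today?"}]
for _ in range(5):  # cap the loop to avoid runaway iterations
    resp = client.chat.completions.create(
        model="gpt-4o-mini", messages=messages, tools=TOOLS
    )
    msg = resp.choices[0].message
    messages.append(msg)
    if not msg.tool_calls:       # the model answered directly: we're done
        print(msg.content)
        break
    for call in msg.tool_calls:  # act: run each tool, feed back the observation
        args = json.loads(call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": get_weather(**args),
        })
```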