About Luupli
Luupli is a social media app that has equity, diversity, and equality at its heart. We believe that social media can be a force for good, and we are committed to creating a platform that maximizes the value that creators and businesses can gain from it, while making a positive impact on society and the planet. Our app is currently in Beta Test, and we are excited about the possibilities it presents. Our team is made up of passionate and dedicated individuals who are committed to making Luupli a success.

Job Description
As an AI Engineer at Luupli, you will play a pivotal role in developing intelligent systems and orchestrating agentic workflows that power Luupli's AI features. Your work will span Retrieval-Augmented Generation (RAG), multi-agent LLM orchestration, auto-captioning, generative media, and content moderation. You'll use frameworks like LangGraph, LangChain, and Google's Agent Development Kit to build persistent, scalable AI services on Google Cloud Platform (GCP). This is a full-stack AI role that spans intelligent backend APIs, LLM agent orchestration, and integration with product-facing features.

Responsibilities
• Build and deploy multi-agent AI workflows using LangGraph, LangChain, or Google's Agent Development Kit.
• Implement RAG pipelines using embeddings, semantic chunking, and vector databases (e.g., FAISS, Pinecone, Weaviate); an illustrative sketch of this kind of service follows this posting.
• Integrate hosted and open-source LLMs (OpenAI, Gemini, Claude, Ollama, Mistral) into intelligent systems.
• Build REST APIs with FastAPI and internal tools with Streamlit to expose AI functionality.
• Deploy production-grade services on GCP using Vertex AI, Cloud Run, Cloud Functions, IAM, and Pub/Sub.
• Embed AI into platform features such as auto-captioning, LuupForge (generative studio), feed personalization, and real-time moderation.
• Maintain modular, testable, observable, and secure code across the AI system lifecycle.

Requirements
• 3+ years' experience in applied AI/ML engineering (production-level deployments, not research-only).
• Strong Python development skills with full-stack AI engineering experience:
  - FastAPI, Streamlit
  - LangGraph, LangChain, or similar
  - PyTorch, Transformers
  - FAISS, Weaviate, or Pinecone
• Solid experience working with hosted APIs (OpenAI, Gemini) and self-hosted models (Mistral, Ollama, LLaMA).
• Deep understanding of LLM orchestration, agent tool use, memory sharing, and prompt engineering.
• Hands-on experience with Google Cloud Platform (GCP), especially Vertex AI, Cloud Functions, Cloud Run, and Pub/Sub.
• Familiarity with best practices in cloud-based software development: containerization, CI/CD, testing, monitoring.

Nice to Have
• Experience with Google's Agent Development Kit or similar agent ecosystems.
• Familiarity with multimodal AI (e.g., handling text, image, audio, or video content).
• Prior experience developing creator platforms, content recommendation engines, or social media analytics.
• Understanding of ethical AI principles, data privacy, and bias mitigation.
• Experience with observability tools (e.g., Sentry, OpenTelemetry, Datadog).
• Data engineering experience, such as:
  - Building ETL/ELT pipelines
  - Working with event-based ingestion and structured logs (e.g., user sessions, reactions, feeds)
  - Using tools like BigQuery, Airflow, or dbt
  - Designing or consuming feature stores for AI/ML applications

Compensation
This is an equity-only position, offering a unique opportunity to gain a stake in a rapidly growing company and contribute directly to its success.
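For context on the RAG and FastAPI responsibilities listed above, here is a minimal, hedged sketch of such a service. It is not Luupli's implementation: the document corpus, endpoint name (/ask), and model names (text-embedding-3-small, gpt-4o-mini) are illustrative assumptions, and a production version would add semantic chunking, a managed vector store, and moderation.

```python
# Illustrative RAG endpoint sketch (not Luupli's code): embed a toy corpus,
# index it with FAISS, and answer questions grounded on retrieved chunks.
import numpy as np
import faiss
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
app = FastAPI()

# Toy "knowledge base"; in practice these would be semantically chunked documents.
DOCS = [
    "Creators can schedule posts from the studio.",
    "Auto-captioning runs on every uploaded video.",
    "Moderation flags content before it reaches the feed.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

# Build an in-memory FAISS index over the corpus embeddings at startup.
doc_vectors = embed(DOCS)
index = faiss.IndexFlatL2(doc_vectors.shape[1])
index.add(doc_vectors)

class Query(BaseModel):
    question: str

@app.post("/ask")
def ask(q: Query) -> dict:
    # Retrieve the two most similar chunks, then ground the LLM answer on them.
    _, ids = index.search(embed([q.question]), 2)
    context = "\n".join(DOCS[i] for i in ids[0])
    answer = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": q.question},
        ],
    )
    return {"answer": answer.choices[0].message.content, "context": context}
```

In a real deployment the index would typically live in a managed vector database (Pinecone, Weaviate) rather than in process memory, and the service would run on Cloud Run or Vertex AI as described above.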
As part of your cover letter, please respond to the following questions:
• This position is structured on an equity-only basis and is therefore unpaid until we secure seed funding. Given this structure, are you comfortable continuing with your application for this role?
• Have you built or contributed to agent-based AI systems using frameworks like LangGraph, LangChain, or Google's Agent Development Kit?
• Do you have experience with Retrieval-Augmented Generation (RAG) systems and vector databases (e.g., FAISS, Pinecone, Weaviate)?
• Have you deployed AI systems on Google Cloud Platform? If not, which cloud platforms have you used and how?
• Have you integrated LLMs (e.g., OpenAI, Gemini, Claude) into autonomous or multi-step workflows?
• Can you explain how agents collaborate and maintain memory across tasks in multi-agent systems?
• What is your experience with prompt engineering, tool invocation, and orchestrated LLM workflows?
• Do you have any public code repositories (e.g., GitHub), demo URLs, or project write-ups showcasing your work?
The GAO Group is headquartered in NYC, USA, and Toronto, Canada. Its member companies are incorporated in both the USA and Canada and are leading suppliers of advanced electronics and network products for engineers worldwide.

Location: Remote

Job Description:
· Recruit and source candidates via job boards, social media, and headhunting.
· Post job openings on various recruitment platforms and university portals.
· Screen resumes and applications.
· Schedule interviews for senior HR staff.
· Manage communication with applicants through email and LinkedIn, and follow up with candidates.

Requirements:
· You are studying for, or hold, a university degree in HR, Journalism, Business, Arts, or another program providing strong English language training, or you otherwise have strong English language skills.
· You are keen to learn, willing to work hard, able to maintain productivity, and committed to the job.
· You have chosen HR as your desired career and are strongly interested in an internship opportunity related to HR.

Benefits of this Internship Include:
· You gain real-world work experience at an internationally reputable high-tech company.
· You learn real-world knowledge, work ethics, and team spirit.
· You receive 3 certificates.
· It is short and convenient: you can work from anywhere, which makes you much more employable and competitive in the job market.
Software Engineer/DevOps Engineer
City of London
£Competitive plus strong bonus and benefits
Azure, Terraform, Data Tooling

A DevOps Engineer is sought to join a highly prestigious financial services organisation. This is a key role in which you will take responsibility for developing Microsoft Fabric related DevOps processes, striking the right balance between environmental control and giving Data Engineering teams the flexibility to work efficiently. You will create bespoke modules in Terraform and actions in GitHub (or Azure DevOps) to support CI/CD workflows. You will also liaise with teams across the business to ensure the platform meets all security and performance requirements.

Key Responsibilities
• Develop standards and strategies to manage the deployment of assets into the Microsoft Fabric ecosystem.
• Where required, create custom actions in GitHub/Azure DevOps that use the Microsoft Fabric APIs (an illustrative sketch follows this posting).
• Where required, create custom Terraform modules so that Microsoft Fabric configuration is held as infrastructure as code.
• Work with Data Engineers to create the development environments engineers will use to develop and deploy products in Microsoft Fabric.
• Work with data owners around the business to ensure source data systems can be securely accessed.
• Ensure security best practices are followed.
• Contribute to the BCP/DR (business continuity/disaster recovery) strategy.
• Work with other members of the central platform team to monitor the Microsoft Fabric feature roadmap and integrate new features into the established ecosystem.
• Work with other members of the central platform team to define an efficient project process to deliver new data products.

Key Technical Skills and Experience
• Terraform modules
• Infrastructure as code
• GitHub/Azure DevOps
• Azure Data Factory
• Azure Synapse
• CI/CD, including databases
• Databricks
• GitHub Actions/Azure DevOps tasks
• Monitoring in Azure
• Release management experience
• Microsoft Fabric (not essential)
• Curiosity to learn new areas such as AI and ML (not essential)
• Minimum 6 years working in a cloud environment managing data engineering products
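As an illustration of the "custom actions that use the Microsoft Fabric APIs" responsibility above, here is a hedged sketch of the kind of step such an action might run. The token scope and workspace endpoint reflect the publicly documented Fabric REST API but should be verified against current Microsoft documentation and your tenant; the drift-check idea is an assumption, not part of the posting.

```python
# Illustrative sketch only: a script a custom GitHub Action / Azure DevOps task
# might run against the Microsoft Fabric REST API. Verify the scope and endpoint
# against current Fabric documentation before relying on them.
import requests
from azure.identity import DefaultAzureCredential

# Service principal credentials are expected via the standard DefaultAzureCredential
# environment variables (AZURE_TENANT_ID, AZURE_CLIENT_ID, AZURE_CLIENT_SECRET).
credential = DefaultAzureCredential()
token = credential.get_token("https://api.fabric.microsoft.com/.default").token

resp = requests.get(
    "https://api.fabric.microsoft.com/v1/workspaces",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

# A CI/CD step could diff this workspace inventory against Terraform state to
# detect configuration drift before a deployment proceeds.
for workspace in resp.json().get("value", []):
    print(workspace.get("id"), workspace.get("displayName"))
```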
We are building AlgoRisk AI, a next-generation platform that uses AI to transform how banks develop and govern financial models. We're currently working on a number of projects using GPT-4, React, Supabase, and modern LLM tools. This is a confidential, real-world project (not open-source). I'm inviting a small group of motivated contributors to work with me as unpaid interns or collaborators.

What You'll Gain:
• Official Certificate of Contribution (AlgoRisk AI)
• Mentorship from a fintech founder
• Hands-on experience with real-world AI tooling
• Strong reference or letter for future roles
• Chance for future paid work post-MVP

Key Skills Needed (any 2–3 of these is enough):
• React / Next.js (Frontend)
• Supabase (Database + Auth)
• OpenAI API (GPT-4/4o chat completion); see the illustrative snippet after this posting
• JavaScript or TypeScript
• CodeMirror or Monaco Editor
• Python / FastAPI (nice to have)
• GitHub + version control
• Curiosity to learn and build fast

Commitment:
• Remote, flexible hours (20–30 hrs/week)
• 3–5 weeks (initial phase)
• Start immediately

How to Apply:
DM us with:
• Your name and country
• LinkedIn or GitHub profile
• A short sentence on why you're interested

Let's build something impactful together.
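For readers unfamiliar with the GPT-4/4o chat-completion work mentioned in the skills list above, here is a minimal sketch of that kind of call. The model name, system prompt, and use case are illustrative assumptions, not details of the AlgoRisk AI project.

```python
# Minimal, hypothetical chat-completion example; not AlgoRisk AI code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You review financial model documentation for gaps."},
        {"role": "user", "content": "List the key assumptions missing from this model write-up: ..."},
    ],
)
print(completion.choices[0].message.content)
```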