**About Luupli**

Luupli is a social media app that has equity, diversity, and equality at its heart. We believe that social media can be a force for good, and we are committed to creating a platform that maximizes the value creators and businesses can gain from it, while making a positive impact on society and the planet. Our app is currently in beta testing, and we are excited about the possibilities it presents. Our team is made up of passionate and dedicated individuals who are committed to making Luupli a success.

**Job Description**

As an AI Engineer at Luupli, you will play a pivotal role in developing intelligent systems and orchestrating agentic workflows that power Luupli's AI features. Your work will span Retrieval-Augmented Generation (RAG), multi-agent LLM orchestration, auto-captioning, generative media, and content moderation. You'll use frameworks like LangGraph, LangChain, and Google's Agent Development Kit to build persistent, scalable AI services on Google Cloud Platform (GCP). This is a full-stack AI role that spans intelligent backend APIs, LLM agent orchestration, and integration with product-facing features.

**Responsibilities**

- Build and deploy multi-agent AI workflows using LangGraph, LangChain, or Google's Agent Development Kit.
- Implement RAG pipelines using embeddings, semantic chunking, and vector databases (e.g., FAISS, Pinecone, Weaviate); see the illustrative sketch after this posting.
- Integrate hosted and open-source LLMs (OpenAI, Gemini, Claude, Ollama, Mistral) into intelligent systems.
- Build REST APIs with FastAPI and internal tools with Streamlit to expose AI functionality.
- Deploy production-grade services on GCP using Vertex AI, Cloud Run, Cloud Functions, IAM, and Pub/Sub.
- Embed AI into platform features such as auto-captioning, LuupForge (generative studio), feed personalization, and real-time moderation.
- Maintain modular, testable, observable, and secure code across the AI system lifecycle.

**Requirements**

- 3+ years of experience in applied AI/ML engineering (production-level deployments, not research-only).
- Strong Python development skills with full-stack AI engineering experience:
  - FastAPI, Streamlit
  - LangGraph, LangChain, or similar
  - PyTorch, Transformers
  - FAISS, Weaviate, or Pinecone
- Solid experience working with hosted APIs (OpenAI, Gemini) and self-hosted models (Mistral, Ollama, LLaMA).
- Deep understanding of LLM orchestration, agent tool use, memory sharing, and prompt engineering.
- Hands-on experience with Google Cloud Platform (GCP), especially Vertex AI, Cloud Functions, Cloud Run, and Pub/Sub.
- Familiarity with best practices in cloud-based software development: containerization, CI/CD, testing, monitoring.

**Nice to Have**

- Experience with Google's Agent Development Kit or similar agent ecosystems.
- Familiarity with multimodal AI (e.g., handling text, image, audio, or video content).
- Prior experience developing creator platforms, content recommendation engines, or social media analytics.
- Understanding of ethical AI principles, data privacy, and bias mitigation.
- Experience with observability tools (e.g., Sentry, OpenTelemetry, Datadog).
- Data engineering experience, such as:
  - Building ETL/ELT pipelines
  - Working with event-based ingestion and structured logs (e.g., user sessions, reactions, feeds)
  - Using tools like BigQuery, Airflow, or dbt
  - Designing or consuming feature stores for AI/ML applications

**Compensation**

This is an equity-only position, offering a unique opportunity to gain a stake in a rapidly growing company and contribute directly to its success.
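To make the Responsibilities above more concrete, here is a minimal, hypothetical sketch of the kind of RAG-style retrieval service this role involves: a FAISS index queried through a FastAPI endpoint. This is not Luupli code; the `embed` helper, the `DOCS` sample texts, and the `/retrieve` route are placeholders standing in for whichever embedding model, corpus, and API surface the team actually uses.

```python
# Illustrative sketch only: a tiny FAISS-backed retrieval endpoint in FastAPI.
import hashlib

import faiss
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

DIM = 64  # embedding dimensionality for this toy example


def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hash the text into a fixed-size unit vector.

    A real pipeline would call an embedding model (e.g., Vertex AI or OpenAI).
    """
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    vec = np.frombuffer(digest, dtype=np.uint8).astype("float32")
    vec = np.resize(vec, DIM)
    return vec / (np.linalg.norm(vec) + 1e-9)


# Tiny in-memory corpus; in practice this would be semantically chunked platform content.
DOCS = [
    "Auto-captioning generates captions for short-form video posts.",
    "LuupForge is the generative studio for creator media.",
    "Feed personalization ranks posts for each user in real time.",
]

index = faiss.IndexFlatL2(DIM)                 # exact L2 search over dense vectors
index.add(np.stack([embed(d) for d in DOCS]))  # (n_docs, DIM) float32 matrix

app = FastAPI()


class Query(BaseModel):
    question: str
    k: int = 2


@app.post("/retrieve")
def retrieve(q: Query) -> dict:
    """Return the k nearest documents; an LLM would consume these as context."""
    distances, ids = index.search(embed(q.question)[None, :], q.k)
    return {
        "matches": [DOCS[i] for i in ids[0]],
        "distances": distances[0].tolist(),
    }
```

Assuming the file is saved as `app.py`, it can be served with `uvicorn app:app` and queried by POSTing JSON such as `{"question": "How are captions generated?"}` to `/retrieve`. In production, the placeholder embedding would be replaced by a real model, the index might live in a managed vector store such as Pinecone or Weaviate, and the retrieved passages would be passed to an LLM as context rather than returned directly.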
As part of your cover letter, please respond to the following questions:

1. This position is structured on an equity-only basis and is therefore unpaid until we secure seed funding. Given this structure, are you comfortable continuing with your application for this role?
2. Have you built or contributed to agent-based AI systems using frameworks like LangGraph, LangChain, or Google's Agent Development Kit?
3. Do you have experience with Retrieval-Augmented Generation (RAG) systems and vector databases (e.g., FAISS, Pinecone, Weaviate)?
4. Have you deployed AI systems on Google Cloud Platform? If not, which cloud platforms have you used, and how?
5. Have you integrated LLMs (e.g., OpenAI, Gemini, Claude) into autonomous or multi-step workflows?
6. Can you explain how agents collaborate and maintain memory across tasks in multi-agent systems?
7. What is your experience with prompt engineering, tool invocation, and orchestrated LLM workflows?
8. Do you have any public code repositories (e.g., GitHub), demo URLs, or project write-ups showcasing your work?
**Location -** Oval, South London (Onsite)
**Type -** Full-Time
**Start Day -** Immediate
**Salary -** £30,000 - £35,000 (top end reserved for outstanding candidates)
**Headquarters -** New York, NY, USA

**About Quant.ai**

Quant.ai is building the world's most advanced Agentic AI, transforming customer service, outbound engagement, and internal decision-making for global enterprises. Our technology helps clients reduce inbound volume, improve response times, and deliver futuristic digital experiences powered by human-like AI. We're now opening a new office in Oval, South London, and we're looking for our first wave of AI-enhanced customer service reps to help us scale. The existing team in London is small but mighty (5 people aged 24-27), and we're building a collaborative, fast-moving culture where you'll grow quickly. Our global team (75+) is led by Chetan Dube, former founder and CEO of Amelia.

**The Role**

As a Customer Service Representative working in tandem with our Agentic AI, you'll step in when the AI is unable to fully resolve a customer query on behalf of one of our enterprise clients. You'll not only provide a human touch to resolve the customer's issue, but also investigate what went wrong, working alongside engineers and product teams to improve the AI's performance. This is an ideal role for someone in customer support looking to break into the world of AI and tech. No real prior experience is required; we're just looking for someone with the right attitude. Full training provided.

**What You'll Do**

- Take over live chats, calls, or tickets escalated by our Agentic AI for one of our clients (e.g., in banking, energy, insurance, or telco)
- Calm and resolve customer frustrations when the AI couldn't resolve their problem
- Troubleshoot why the AI failed (e.g., gaps in logic, language understanding, integrations)
- Conduct light QA and error analysis across transcripts and system logs
- Collaborate with product managers and engineers to improve AI flows
- Participate in project management cycles for continuous iteration and model training
- Learn the ins and outs of agentic systems, prompt design, and conversational logic
- Help shape a brand new category of support: human-AI collaboration

**What We're Looking For**

1. 1+ years of experience in customer service (live chat, phone, or email-based) is great, but not required
2. Excellent written and verbal communication
3. Ability to empathize with frustrated customers and stay calm under pressure
4. Strong problem-solving mindset and curiosity, especially around how things work
5. Comfortable working with technical teams (no coding required; training provided)
6. Highly organized, proactive, and detail-oriented
7. Eagerness to learn AI technologies, tools, and workflows

**Nice to Have (But Not Required)**

**Why This Role Matters**

This is not a traditional customer service role. You'll be the human failsafe for some of the world's most advanced enterprise AI systems, and the bridge between customers and technology. Your feedback will shape how millions of future interactions are handled.