Location: London
Company: OneTent
Commission: Earn up to 75% commission – first-year introductory offer

As part of our startup launch, we're offering self-employed estate agents an exclusive 75% commission split for their first year. This limited-time rate is designed to support new team members who join OneTent early. We will also be inviting one senior agent to manage sales and lettings and help develop a high-performing agent network. This is a frontline leadership role which, for the right individual, opens a path toward becoming a partner or future CEO.

💰 What You'll Get
• Up to 75% commission share in the first year
• Uncapped earning potential
• No upfront fees
• Self-employed model – full flexibility over your hours, territory, and clients
• CRM and tech support – state-of-the-art platforms to streamline your workflow

What We're Looking For
• Minimum 2 years in property sales or lettings
• Strong local knowledge and client-handling skills
• Ambitious, self-motivated, and service-driven
• Comfortable with self-employed responsibilities (tax, admin, etc.)

Our Service Commitments
• Free labour on all minor repairs throughout the tenancy
• Six hours of free handyman labour before tenancy begins, to fix minor flaws or assist with chores
• A free, full photographic inventory before tenant move-in
• Tradesmen in-house – no outsourcing. Our skilled engineers, plumbers, electricians, and decorators are part of our core team.
Engineer and developer roles include:
• Designing, developing, and maintaining software applications, systems, and operating systems

Tech managerial roles include:
• Overseeing the technical aspects of projects
• Overseeing teams and systems within an organization, ensuring alignment with business goals and efficient project execution
• Managing resources, providing leadership to technical teams, and bridging the gap between technical and non-technical stakeholders

You will be paid per project, creative endeavor, or service-based engagement (online meetings, homework help, tutoring, virtual assistance, or social media management).
About Luupli
Luupli is a social media app with equity, diversity, and equality at its heart. We believe that social media can be a force for good, and we are committed to creating a platform that maximizes the value creators and businesses can gain from it while making a positive impact on society and the planet. Our app is currently in beta testing, and we are excited about the possibilities it presents. Our team is made up of passionate, dedicated individuals who are committed to making Luupli a success.

Job Description
As an AI Engineer at Luupli, you will play a pivotal role in developing intelligent systems and orchestrating agentic workflows that power Luupli's AI features. Your work will span Retrieval-Augmented Generation (RAG), multi-agent LLM orchestration, auto-captioning, generative media, and content moderation. You'll use frameworks like LangGraph, LangChain, and Google's Agent Development Kit to build persistent, scalable AI services on Google Cloud Platform (GCP). This is a full-stack AI role spanning intelligent backend APIs, LLM agent orchestration, and integration with product-facing features.

Responsibilities
• Build and deploy multi-agent AI workflows using LangGraph, LangChain, or Google's Agent Development Kit
• Implement RAG pipelines using embeddings, semantic chunking, and vector databases (e.g., FAISS, Pinecone, Weaviate)
• Integrate hosted and open-source LLMs (OpenAI, Gemini, Claude, Ollama, Mistral) into intelligent systems
• Build REST APIs with FastAPI and internal tools with Streamlit to expose AI functionality
• Deploy production-grade services on GCP using Vertex AI, Cloud Run, Cloud Functions, IAM, and Pub/Sub
• Embed AI into platform features such as auto-captioning, LuupForge (generative studio), feed personalization, and real-time moderation
• Maintain modular, testable, observable, and secure code across the AI system lifecycle
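To give candidates a concrete sense of the RAG responsibility above, here is a minimal, illustrative sketch of the retrieval step: ranking pre-embedded text chunks against a query vector by cosine similarity. The chunk texts and three-dimensional "embedding" values are toy assumptions; a production pipeline would use a real embedding model and a vector database such as FAISS, Pinecone, or Weaviate in place of the in-memory list.

```python
# Illustrative sketch of the retrieval step in a RAG pipeline.
# The index entries and their 3-D vectors are hypothetical toy data,
# standing in for real embeddings stored in a vector database.
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, index, top_k=2):
    """Return the texts of the top_k chunks most similar to the query."""
    ranked = sorted(
        index,
        key=lambda item: cosine_similarity(query_vec, item["vec"]),
        reverse=True,
    )
    return [item["text"] for item in ranked[:top_k]]

# Toy pre-embedded chunks (hypothetical content and vectors).
index = [
    {"text": "captioning guidelines", "vec": [0.9, 0.1, 0.0]},
    {"text": "moderation policy",     "vec": [0.1, 0.9, 0.0]},
    {"text": "feed ranking notes",    "vec": [0.0, 0.2, 0.9]},
]

print(retrieve([0.8, 0.2, 0.1], index, top_k=1))  # → ['captioning guidelines']
```

The retrieved chunks would then be injected into an LLM prompt as context, which is the "augmented generation" half of RAG.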
Requirements
• 3+ years of experience in applied AI/ML engineering (production-level deployments, not research-only)
• Strong Python development skills with full-stack AI engineering experience: FastAPI, Streamlit; LangGraph, LangChain, or similar; PyTorch, Transformers; FAISS, Weaviate, or Pinecone
• Solid experience working with hosted APIs (OpenAI, Gemini) and self-hosted models (Mistral, Ollama, LLaMA)
• Deep understanding of LLM orchestration, agent tool use, memory sharing, and prompt engineering
• Hands-on experience with Google Cloud Platform (GCP), especially Vertex AI, Cloud Functions, Cloud Run, and Pub/Sub
• Familiarity with best practices in cloud-based software development: containerization, CI/CD, testing, monitoring

Nice to Have
• Experience with Google's Agent Development Kit or similar agent ecosystems
• Familiarity with multimodal AI (e.g., handling text, image, audio, or video content)
• Prior experience developing creator platforms, content recommendation engines, or social media analytics
• Understanding of ethical AI principles, data privacy, and bias mitigation
• Experience with observability tools (e.g., Sentry, OpenTelemetry, Datadog)
• Data engineering experience, such as:
  – Building ETL/ELT pipelines
  – Working with event-based ingestion and structured logs (e.g., user sessions, reactions, feeds)
  – Using tools like BigQuery, Airflow, or dbt
  – Designing or consuming feature stores for AI/ML applications

Compensation
This is an equity-only position, offering a unique opportunity to gain a stake in a rapidly growing company and contribute directly to its success.

As part of your cover letter, please respond to the following questions:
• This position is structured on an equity-only basis and is unpaid until we secure seed funding. Given this structure, are you comfortable continuing with your application for this role?
• Have you built or contributed to agent-based AI systems using frameworks like LangGraph, LangChain, or Google's Agent Development Kit?
• Do you have experience with Retrieval-Augmented Generation (RAG) systems and vector databases (e.g., FAISS, Pinecone, Weaviate)?
• Have you deployed AI systems on Google Cloud Platform? If not, which cloud platforms have you used, and how?
• Have you integrated LLMs (e.g., OpenAI, Gemini, Claude) into autonomous or multi-step workflows?
• Can you explain how agents collaborate and maintain memory across tasks in multi-agent systems?
• What is your experience with prompt engineering, tool invocation, and orchestrated LLM workflows?
• Do you have any public code repositories (e.g., GitHub), demo URLs, or project write-ups showcasing your work?