Are you looking to kick-start a new career as a Data Scientist? We are recruiting for companies looking to employ our Data Science Traineeship graduates to keep up with their growth. The best part is that you will not need any previous experience, as full training will be provided. You will also have the reassurance of a job guarantee (£25K-£45K) within 20 miles of your location upon completion. Whether you are working full-time, part-time or unemployed, this package has the flexibility to be completed at a pace that suits you. The traineeship is completed in 4 easy steps, and you can be placed into your first role in as little as 6-12 months:

Step 1 - Full Data Science Career Training
You will begin your data science journey by studying a selection of industry-recognised courses that will take you from beginner level all the way through to being qualified to work in a junior Data Scientist role. Through the interactive courses, you will gain knowledge of Python, R, Machine Learning, AI, and much more. You will also complete mini projects to gain practical experience and test your skills while you study.

Step 2 - CompTIA Data+
CompTIA Data+ is an early-career data analytics certification for professionals tasked with developing and promoting data-driven business decision-making. It covers Data Mining, Visualisation, Data Governance and Data Analytics. In any industry, official certifications carry significant weight in the recruitment process, so this globally recognised certification will enhance your CV and make you stand out from the crowd.

Step 3 - Official Exam
The CompTIA Data+ exam will certify that you have the knowledge and skills required to transform business requirements in support of data-driven decisions: mining and manipulating data, applying basic statistical methods, and analysing complex datasets while adhering to governance and quality standards. The exam is 90 minutes long and can be sat either at your local testing centre or online.
Step 4 - Practical Projects
Now that you have completed your theory training and official exams, you will be assigned 2 practical projects by your tutor. The projects are the most important part of the traineeship, as they showcase to employers that you have the skills required to work in a data science role. The projects use real-world scenarios in which you will be utilising all of the skills you have learned. Whilst you are progressing through the projects, you will have ongoing support from your personal tutor. Once both projects have been completed and given the final sign-off, you will have completed the traineeship and will be ready to move on to the recruitment stage.

Your Data Science Role
Once you have completed all of the mandatory training, which includes the online courses, practical projects and building your own portfolio, we will place you into a Data Scientist role, where you will be guaranteed a starting salary of £25K-£45K. We have partnered with a number of large organisations strategically located throughout the UK, providing a nationwide reach of jobs for our candidates. We guarantee you will be offered a job upon completion, or we will refund 100% of your course fees. We have a proven track record of placing 1000+ candidates into new roles each year. Check out our website for our latest success stories.

Read through the information? Passionate about starting a career in data science? Apply now and one of our friendly advisors will be in touch.
We are building AlgoRisk AI – a next-generation platform that uses AI to transform how banks develop and govern financial models. We're currently working on a number of projects using GPT-4, React, Supabase, and modern LLM tools. This is a confidential, real-world project (not open-source). I'm inviting a small group of motivated contributors to work with me as unpaid interns or collaborators.

What You'll Gain:
• Official Certificate of Contribution (AlgoRisk AI)
• Mentorship from a fintech founder
• Hands-on experience with real-world AI tooling
• Strong reference or letter for future roles
• Chance of future paid work post-MVP

Key Skills Needed (any 2–3 of these is enough):
• React / Next.js (Frontend)
• Supabase (Database + Auth)
• OpenAI API (GPT-4/4o chat completions)
• JavaScript or TypeScript
• CodeMirror or Monaco Editor
• Python / FastAPI (nice to have)
• GitHub + version control
• Curiosity to learn and build fast

Commitment:
• Remote, flexible hours (20–30 hrs/week)
• 3–5 weeks (initial phase)
• Start immediately

How to Apply:
DM us with:
• Your name and country
• LinkedIn or GitHub profile
• A short sentence on why you're interested

Let's build something impactful together.
About Luupli
Luupli is a social media app that has equity, diversity, and equality at its heart. We believe that social media can be a force for good, and we are committed to creating a platform that maximizes the value creators and businesses can gain from it, while making a positive impact on society and the planet. Our app is currently in beta testing, and we are excited about the possibilities it presents. Our team is made up of passionate and dedicated individuals who are committed to making Luupli a success.

Job Description
As an AI Engineer at Luupli, you will play a pivotal role in developing intelligent systems and orchestrating agentic workflows that power Luupli's AI features. Your work will span Retrieval-Augmented Generation (RAG), multi-agent LLM orchestration, auto-captioning, generative media, and content moderation. You'll use frameworks like LangGraph, LangChain, and Google's Agent Development Kit to build persistent, scalable AI services on Google Cloud Platform (GCP). This is a full-stack AI role that spans intelligent backend APIs, LLM agent orchestration, and integration with product-facing features.

Responsibilities
- Build and deploy multi-agent AI workflows using LangGraph, LangChain, or Google's Agent Development Kit.
- Implement RAG pipelines using embeddings, semantic chunking, and vector databases (e.g., FAISS, Pinecone, Weaviate).
- Integrate hosted and open-source LLMs (OpenAI, Gemini, Claude, Ollama, Mistral) into intelligent systems.
- Build REST APIs with FastAPI and internal tools with Streamlit to expose AI functionality.
- Deploy production-grade services on GCP using Vertex AI, Cloud Run, Cloud Functions, IAM, and Pub/Sub.
- Embed AI into platform features such as auto-captioning, LuupForge (generative studio), feed personalization, and real-time moderation.
- Maintain modular, testable, observable, and secure code across the AI system lifecycle.
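The core of the RAG responsibility above is retrieval: embed a query, rank stored chunks by similarity, and feed the best match to the LLM as context. A minimal toy sketch of that step — using a bag-of-words "embedding" and cosine similarity purely for illustration, in place of a real embedding model and a vector database such as FAISS — might look like this (the chunk texts and function names are invented for the example):

```python
# Toy RAG retrieval sketch. A production pipeline would use a real
# embedding model and a vector store (FAISS, Pinecone, Weaviate);
# word counts and cosine similarity stand in for both here.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Hypothetical stand-in for an embedding model: word counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Rank chunks by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Auto-captioning generates captions for uploaded videos.",
    "Feed personalization ranks posts for each user.",
    "Moderation flags content that violates policy.",
]
top = retrieve("how are video captions generated", chunks)
# The retrieved chunk would then be prepended to the LLM prompt as context.
```

The same shape holds at scale: only `embed` (a model) and the ranking store (a vector index) change.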
Requirements
- 3+ years' experience in applied AI/ML engineering (production-level deployments, not research-only).
- Strong Python development skills with full-stack AI engineering experience: FastAPI, Streamlit; LangGraph, LangChain, or similar; PyTorch, Transformers; FAISS, Weaviate, or Pinecone.
- Solid experience working with hosted APIs (OpenAI, Gemini) and self-hosted models (Mistral, Ollama, LLaMA).
- Deep understanding of LLM orchestration, agent tool-use, memory sharing, and prompt engineering.
- Hands-on experience with Google Cloud Platform (GCP), especially Vertex AI, Cloud Functions, Cloud Run, and Pub/Sub.
- Familiarity with best practices in cloud-based software development: containerization, CI/CD, testing, monitoring.

Nice to Have
- Experience with Google's Agent Development Kit or similar agent ecosystems.
- Familiarity with multimodal AI (e.g., handling text, image, audio, or video content).
- Prior experience developing creator platforms, content recommendation engines, or social media analytics.
- Understanding of ethical AI principles, data privacy, and bias mitigation.
- Experience with observability tools (e.g., Sentry, OpenTelemetry, Datadog).
- Data engineering experience, such as: building ETL/ELT pipelines; working with event-based ingestion and structured logs (e.g., user sessions, reactions, feeds); using tools like BigQuery, Airflow, or dbt; designing or consuming feature stores for AI/ML applications.

Compensation
This is an equity-only position, offering a unique opportunity to gain a stake in a rapidly growing company and contribute directly to its success.

As part of your cover letter, please respond to the following questions:
- This position is structured on an equity-only basis and is presently unpaid until we secure seed funding. Given this structure, are you comfortable continuing with your application for this role?
- Have you built or contributed to agent-based AI systems using frameworks like LangGraph, LangChain, or Google's Agent Development Kit?
- Do you have experience with Retrieval-Augmented Generation (RAG) systems and vector databases (e.g., FAISS, Pinecone, Weaviate)?
- Have you deployed AI systems on Google Cloud Platform? If not, which cloud platforms have you used, and how?
- Have you integrated LLMs (e.g., OpenAI, Gemini, Claude) into autonomous or multi-step workflows?
- Can you explain how agents collaborate and maintain memory across tasks in multi-agent systems?
- What is your experience with prompt engineering, tool invocation, and orchestrated LLM workflows?
- Do you have any public code repositories (e.g., GitHub), demo URLs, or project write-ups showcasing your work?
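One of the questions above asks how agents collaborate and maintain memory across tasks. The essential idea can be sketched in a few lines: each agent reads from and writes to a shared state that flows through the pipeline. This is a toy illustration only — real frameworks such as LangGraph or LangChain manage this state graph for you, and the "agents" here are stubs rather than LLM calls; all names are hypothetical:

```python
# Minimal sketch of multi-agent memory sharing: each "agent" is a
# function that reads from and writes to a shared memory dict. The
# LLM calls are stubbed out; frameworks like LangGraph formalize
# exactly this pattern as a state graph.

def research_agent(memory: dict) -> dict:
    # Would normally call an LLM with a research prompt; stubbed here.
    memory["findings"] = f"notes on: {memory['task']}"
    return memory

def writer_agent(memory: dict) -> dict:
    # Reads the researcher's output from shared memory and builds on it.
    memory["draft"] = f"summary based on {memory['findings']}"
    return memory

def run_pipeline(task: str) -> dict:
    memory = {"task": task}
    for agent in (research_agent, writer_agent):
        memory = agent(memory)
    return memory

result = run_pipeline("content moderation policy")
# result["draft"] incorporates state written by the earlier agent.
```

The second agent never sees the first agent's prompt, only the memory it left behind — which is the collaboration mechanism the question is probing for.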
Job Title: Research Assistant (Remote)
Company: Valutrades
Location: Remote
Job Type: [Full-time/Part-time/Contract]
Department: Research & Strategy

About Valutrades:
Valutrades is a global financial services provider committed to empowering traders with the tools, knowledge, and insights they need to succeed. Our mission is to deliver a premium trading experience rooted in transparency, reliability, and continuous innovation. We're looking for a Research Assistant with hands-on trading experience through Valutrades to support our research and strategy team remotely.

Position Overview:
We are seeking a detail-oriented and analytical Research Assistant who has an active or past trading history with Valutrades. This remote role involves supporting the research team with market analysis, data collection, and strategic insights that help enhance trading strategies and inform business decisions.

Key Responsibilities:
- Conduct research and analysis on financial markets, trading instruments, and macroeconomic trends.
- Analyze past and current trading data, particularly your own trading experience with Valutrades, to support strategy development.
- Create and maintain research reports, dashboards, and internal documentation.
- Support the development and testing of new trading strategies based on market trends and performance metrics.
- Assist in preparing presentations and reports for internal and external stakeholders.
- Monitor financial news and events that may impact markets and summarize key insights.
- Collaborate remotely with analysts, traders, and management on research initiatives.

Requirements:
- Proven trading history with Valutrades (account history will be used to confirm).
- Solid understanding of trading platforms, instruments, and technical/fundamental analysis.
- Excellent research and analytical skills with keen attention to detail.
- Ability to interpret and work with large sets of data.
- Strong communication skills, both written and verbal.
- Self-motivated and able to work independently in a remote setting.
- Proficiency in Microsoft Excel, Google Sheets, or similar tools; knowledge of trading platforms and indicators is a plus.
- Experience with data analysis tools or programming languages (Python, R, etc.) is advantageous but not required.

Preferred Qualifications:
- Background in Finance, Economics, Mathematics, or a related field.
- Familiarity with economic indicators, risk management principles, and backtesting methods.
- Previous experience in a research or trading support role.

What We Offer:
- Flexible remote working arrangement.
- Competitive compensation based on experience and contribution.
- Opportunity to influence research directions and contribute to strategic trading decisions.
- Access to ongoing professional development and market education resources.
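Backtesting, mentioned in the preferred qualifications, means replaying a trading rule over historical prices to see how it would have performed. A toy sketch of the idea — a rule that holds the asset only when price is above its moving average, run over an invented price series (none of this is a real or recommended strategy) — could look like:

```python
# Toy backtest: hold the asset whenever its price is above the 3-period
# moving average, otherwise stay in cash. The price series is invented
# purely for illustration.

def moving_average(prices: list[float], window: int, i: int) -> float:
    """Average of the last `window` prices up to and including index i."""
    lo = max(0, i - window + 1)
    return sum(prices[lo:i + 1]) / (i + 1 - lo)

def backtest(prices: list[float], window: int = 3) -> float:
    """Return the growth multiple of 1 unit of capital under the rule."""
    capital = 1.0
    for i in range(len(prices) - 1):
        if prices[i] > moving_average(prices, window, i):
            capital *= prices[i + 1] / prices[i]  # hold for one period
    return capital

prices = [100, 102, 101, 105, 107, 104, 108]
growth = backtest(prices)
# growth > 1.0 means the rule would have made money on this series.
```

A serious backtest adds transaction costs, slippage, and out-of-sample validation; the replay loop above is only the skeleton.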