Data Engineer
Posted 15 days ago
Morley
Department: Tech & Product
Employment Type: Full Time
Location: Trimble Offices, Morley

About the team:
At Vintage Cash Cow and Vintage.com, technology is how we scale impact. Every customer journey, from sending in pre-loved items to getting paid, is powered by the systems we design, the products we build, and the data we unlock. Our Technology & Data team is at the heart of this transformation, driving greenfield product development, experimenting with fresh ideas, and bringing innovative solutions to life. We’re building modern, scalable, and customer-focused platforms that set the standard for the re-commerce industry. This is a team where curiosity meets craft: blending creativity, technical excellence, and a product mindset to deliver experiences that feel simple, rewarding, and future-proof.

About the role:
We’re looking for a hands-on, detail-loving Data Engineer to help us level up our data foundations and set us up for bigger, bolder analytics. This is our second data hire, which means you won’t just be maintaining something that already exists; you’ll be helping define how it should work end-to-end. You’ll own the pipes and plumbing: designing and building robust data pipelines, shaping our warehouse models, and keeping data clean, reliable, and ready for decision-making. A big part of your focus will be digital marketing and CRM data (HubSpot especially), so our Growth, Finance, and Product teams can move fast with confidence. If you’re excited by building a modern stack, balancing build vs. buy, and creating the kind of data platform that people actually love using, you’ll fit right in. This role can be based in either the UK or the Netherlands.

Getting Started...
• Get familiar with our current data setup and BI stack (Snowflake, dbt, Fivetran, Sigma, SQL, and friends).
• Meet your key partners across Growth/Marketing, Finance, Operations, and Product to understand the metrics that matter most.
• Explore our core data sources (Adalyser, Meta Ads, Google Ads, HubSpot, Aircall, and internal platforms) and how they flow today.

What you’ll do:
• Build and optimise reliable, scalable ELT/ETL pipelines into Snowflake.
• Create clean, reusable models that make downstream analytics simple and consistent (see the model sketch after this section).
• Put proactive monitoring and validation in place so teams can trust what they see.
• Help define and evolve our data architecture as we scale into new markets.
• Champion best practice: documentation, governance, naming conventions, testing, and performance.
• Partner closely with stakeholders so data engineering solves real commercial problems (not just technical ones).

What success looks like:
• Build and maintain a modern, scalable data platform that supports growth and decision-making.
• Ensure data is accurate, consistent, and trusted across the business.
• Improve speed, reliability, and automation of data pipelines and reporting workflows.
• Enable high-quality self-serve analytics by delivering well-modelled, well-documented data sets.
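To make the modelling bullet above concrete, here is a minimal sketch of the kind of clean, reusable dbt staging model this role would own. It is illustrative only: the source declaration (hubspot.contact), the property_* columns, and the Fivetran housekeeping fields are hypothetical stand-ins for whatever the actual Fivetran-loaded schema looks like, and the source would need to be declared in a sources .yml file.

```sql
-- models/staging/stg_hubspot__contacts.sql
-- Minimal dbt staging model sketch: rename raw HubSpot fields once, so
-- every downstream model reads one consistent, documented view.
-- All source/table/column names here are hypothetical examples.

with source as (

    select * from {{ source('hubspot', 'contact') }}

),

renamed as (

    select
        id                       as contact_id,
        property_email           as email,
        property_lifecyclestage  as lifecycle_stage,
        property_createdate      as created_at,
        _fivetran_synced         as synced_at
    from source
    where not _fivetran_deleted  -- drop rows soft-deleted in HubSpot

)

select * from renamed
```

Downstream marts would then ref() this model instead of touching the raw table, which keeps renames, type fixes, and naming conventions in exactly one place.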
Data architecture & pipeline development
• Design, implement, and maintain robust data pipelines across multiple systems.
• Ensure smooth, well-governed flow of data from source → warehouse → BI layers.
• Integrate and manage a wide range of data sources within Snowflake, including:
  • Adalyser
  • Meta Ads
  • Google Ads
  • HubSpot
  • Aircall
  • Performance tracking data
  • Product imagery and metadata from bespoke internal platforms

Data quality & automation
• Build automated checks to monitor accuracy, completeness, and freshness (see the SQL sketch at the end of this section).
• Run regular audits and troubleshoot issues quickly and calmly.
• Identify opportunities to streamline pipelines, improve performance, and reduce cost.
• Automate repetitive workflows to free teams up for higher-value analysis.

Collaboration & documentation
• Work closely with teams across Growth, Finance, Ops, and Product to understand KPIs and reporting needs.
• Translate those needs into smart, scalable data solutions.
• Document architecture, pipelines, models, and workflows so everything is clear and easy to pick up.
• Contribute to data standards and governance as we build out the function.

What you’ll bring
• Strong Snowflake experience: loading, querying, optimising, and building views/stored procedures.
• Solid SQL skills: confident writing complex queries over large datasets.
• Hands-on pipeline experience using tools like dbt, Fivetran, Airflow, Coalesce, Hightouch, RudderStack, Snowplow, or similar.
• Data warehousing know-how and a clear view of what “good” looks like for scalable architecture.
• Analytical, detail-focused mindset: you care about quality, reliability, and root-cause fixes.
• Great communication: able to explain technical concepts in a simple, useful way.

Nice to have
• Experience working with HubSpot data (ETL into a warehouse, understanding the schema, reporting context).
• Digital marketing analytics background: ads platforms, attribution, funnel performance, campaign measurement.
• Familiarity with CRMs/marketing automation tools (HubSpot, Marketo, Salesforce, etc.).
• Python or R for automation, data wrangling, or pipeline support.
• Understanding of A/B testing or experimentation frameworks.
• Exposure to modern data governance/catalogue tooling.
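As one concrete illustration of the automated freshness checks referenced above, here is a minimal Snowflake SQL sketch. The table (raw.hubspot.contact), the _fivetran_synced column, and the 24-hour threshold are hypothetical stand-ins, not a description of our actual setup; in practice dbt’s built-in source freshness checks or a scheduled Snowflake task would be the idiomatic home for this.

```sql
-- Minimal freshness check sketch (hypothetical table, column, threshold):
-- returns a row only when the source has not synced in over 24 hours.
select
    'raw.hubspot.contact'  as table_name,
    max(_fivetran_synced)  as last_synced_at,
    datediff('hour', max(_fivetran_synced), current_timestamp()) as hours_since_sync
from raw.hubspot.contact
having datediff('hour', max(_fivetran_synced), current_timestamp()) > 24;
```

Run on a schedule (Airflow, a Snowflake task, or dbt source freshness) across every source, a query like this surfaces stale data before stakeholders notice it in a dashboard.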