Google Cloud Platform Data Engineer
Bristol
As a Principal GCP Data Engineer you'll be a true subject matter expert in using the data processing and management capabilities of Google Cloud to develop data-driven solutions for our clients. You will typically lead a team or the solution delivery effort, demonstrating technical excellence through leading by example. You could be providing technical support, leading an engineering team, or working across multiple teams as a subject matter expert who is critical to the success of a large programme of work. Your team members will look to you as a trusted expert and will expect you to define the end-to-end software development lifecycle in line with modern best practices.

As part of your responsibilities you will be expected to:
• Develop robust data processing jobs using tools such as Google Cloud Dataflow, Dataproc and BigQuery.
• Design and deliver automated data pipelines that use orchestration tools such as Cloud Composer.
• Design end-to-end solutions and contribute to architecture discussions beyond data processing.
• Own the development process for your team, building strong principles and putting robust methods and patterns in place across architecture, scope, code quality and deployments.
• Shape team behaviour for writing specifications and acceptance criteria, estimating stories, sprint planning and documentation.
• Actively define and evolve PA's data engineering standards and practices, ensuring we maintain a shared, modern and robust approach.
• Lead and influence technical discussions with client stakeholders to achieve the collective buy-in required to be successful.
• Coach and mentor team members regardless of seniority, and work with them to build their expertise and understanding.
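The pipeline responsibilities above can be sketched in outline without any GCP services: a minimal extract-transform-load flow in plain Python, where in practice each stage would map onto a Dataflow/Beam transform or a BigQuery job. The function names, record shape and in-memory "warehouse" are illustrative assumptions, not anything prescribed by the role.

```python
# Minimal ETL sketch in plain Python. In a real delivery, extract() would
# read from a source such as Cloud Storage or Pub/Sub, and load() would
# write to a warehouse table such as BigQuery; the structure is the same.
# All names and the record shape here are illustrative assumptions.

def extract():
    """Stand-in for reading raw records from a source system."""
    return [
        {"user_id": 1, "amount": "12.50"},
        {"user_id": 2, "amount": "7.00"},
    ]

def transform(records):
    """Normalise types, e.g. cast string amounts to float."""
    return [{**r, "amount": float(r["amount"])} for r in records]

def load(records, sink):
    """Stand-in for appending rows to a target table."""
    sink.extend(records)

warehouse = []  # stand-in for the target table
load(transform(extract()), warehouse)
```

Keeping each stage as a pure function like this makes the pipeline easy to unit-test before it is wired into an orchestrator such as Cloud Composer.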
Qualifications:

To be successful in this role you will need to have:
• Experience delivering and deploying production-ready data processing solutions using BigQuery, Pub/Sub, Dataflow and Dataproc.
• Experience developing end-to-end solutions using batch and streaming frameworks such as Apache Spark and Apache Beam.
• Expert understanding of when to use a range of data storage technologies, including relational/non-relational, document, row-based/columnar data stores, data warehousing and data lakes.
• Expert understanding of data pipeline patterns and approaches such as event-driven architectures, ETL/ELT, stream processing and data visualisation.
• Experience working with business owners to translate business requirements into technical specifications and solution designs that satisfy the data requirements of the business.
• Experience working with metadata management products such as Cloud Data Catalog and Collibra, and data governance tools such as Dataplex.
• Experience developing solutions on GCP using cloud-native principles and patterns.
• Experience building data quality alerting and data quarantine solutions to ensure downstream datasets can be trusted.
• Experience implementing CI/CD pipelines using techniques including git code control/branching, automated tests and automated deployments.
• Experience working on migrations of enterprise-scale data platforms, including Hadoop and traditional data warehouses.
• An understanding of the machine learning model development lifecycle: feature engineering, training and testing.
• Good understanding of, or hands-on experience with, Kafka.
• Experience as a DBA or developer on an RDBMS such as PostgreSQL, MySQL, Oracle or SQL Server.
• You are pragmatic and already understand that writing code is only part of what a data engineer does.
• You can clearly communicate with both clients and peers, describing technical issues and solutions in both written and meeting/workshop contexts.
• You are able to clearly explain technical concepts to non-technical audiences at all levels of an organisation.
• You are able to influence and persuade senior and specialist client stakeholders, potentially across multiple organisational boundaries, without direct authority.
• You are a confident problem solver and troubleshooter.
• You are confident and generous in sharing your specialist knowledge, ideas and solutions.
• You are constantly learning and able to make others better by consciously teaching and unconsciously inspiring.

Additional Information:

Benefits package at PA:
• Private medical insurance
• Interest-free season ticket loan
• 25 days annual leave with the opportunity to buy 5 additional days
• Company pension scheme
• Annual performance-based bonus
• Life and income protection insurance
• Tax-efficient benefits (cycle to work, give as you earn, childcare benefits)

#LI-NF1
Remote Work: No
Employment Type: Full-time
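One requirement above, building data quality alerting and quarantine solutions, can be illustrated with a small stdlib-only sketch: route records that pass validation to the trusted output and divert failing records, with their rule violations, to a quarantine set that an alerting process could inspect. The field names and validation rules are assumptions made for the example.

```python
# Split incoming records into a trusted set and a quarantine set.
# Field names and validation rules are illustrative assumptions.

def validate(record):
    """Return a list of rule violations; an empty list means the record is clean."""
    errors = []
    if record.get("user_id") is None:
        errors.append("missing user_id")
    if not isinstance(record.get("amount"), (int, float)) or record["amount"] < 0:
        errors.append("invalid amount")
    return errors

def split_quality(records):
    """Partition records so downstream consumers only see validated data."""
    clean, quarantined = [], []
    for r in records:
        errors = validate(r)
        if errors:
            # Keep the failing record together with its errors for alerting/triage.
            quarantined.append({"record": r, "errors": errors})
        else:
            clean.append(r)
    return clean, quarantined

clean, quarantined = split_quality([
    {"user_id": 1, "amount": 10.0},
    {"user_id": None, "amount": -5},
])
```

Quarantining rather than silently dropping bad records preserves the evidence needed to fix the upstream source, which is what lets downstream datasets be trusted.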