Job Description: Data Engineer

Location: Portland, OR (Remote/Hybrid Options Available)
Department: Software Development & Data Engineering

Overview

Certified Languages International (CLI) is modernizing its interpreter services platform and scaling its cloud-based systems to support thousands of interpreters and customers worldwide. We are seeking a Data Engineer to build and optimize data pipelines, design scalable data architectures, and integrate complex on-premises and cloud data sources. This role is engineering-heavy: you will work directly with DBAs, backend developers, and cloud engineers to create robust data ingestion, transformation, and warehousing solutions that power mission-critical analytics, reporting, and machine learning. Our environment spans SQL Server, Azure, and Snowflake, along with integrations into CRM, telephony, and accounting systems.

Key Responsibilities

• Pipeline Engineering: Design, implement, and maintain large-scale, production-grade ETL/ELT pipelines using tools such as Azure Data Factory, Databricks, and SSIS.
• Data Architecture: Develop and manage data lakes, warehouses, and marts (Azure SQL, Snowflake) using modern patterns such as medallion architecture, star schemas, and dimensional modeling.
• Streaming & Real-Time Data: Build and optimize event-driven and streaming pipelines (Kafka, Event Hubs, Structured Streaming) to capture interpreter session data, call metrics, and workflow events.
• Integration: Connect diverse systems (on-prem databases, QuickBooks, Genesys/NICE CXone, MERFi, Salesforce) into unified cloud pipelines with automated validation.
• Scalability & Performance: Tune SQL queries, ETL/ELT jobs, and orchestration workflows to handle high-volume, low-latency data at scale.
• Automation & CI/CD: Implement automated workflows for data delivery, testing, and deployment using Azure DevOps (Git) pipelines, Airflow, or equivalent orchestration tools.
• Monitoring & Reliability: Build observability into data pipelines with logging, alerting, and error-handling frameworks to guarantee high availability.
• Collaboration: Partner with DBAs on database refactoring and optimization, with backend engineers on service-level integrations, and with data analysts and scientists on clean, performant data delivery.

Qualifications

• Bachelor’s degree in Computer Science, Data Engineering, or equivalent professional experience.
• 5+ years of data engineering experience building pipelines and data platforms.
• Deep expertise in SQL Server, T-SQL, and query optimization.
• Hands-on experience with Python and PySpark for large-scale data processing.
• Strong background in Azure Data Factory, Databricks, or equivalent cloud ETL frameworks.
• Experience with Snowflake or Azure SQL Data Warehouse.
• Familiarity with CI/CD practices for data pipelines (GitHub, Azure DevOps, or GitLab).
• Certification in Azure Data Engineering (DP-203) or Snowflake SnowPro.
• Experience with Kafka/Event Hubs for real-time ingestion and streaming pipelines.
• Background with containerized/cloud-native data services (Docker, Azure Container Apps).
• Familiarity with NoSQL, APIs, and semi-structured data (JSON, XML).
• Engineering-first mindset with a focus on scalability, maintainability, and performance.
• Strong debugging, problem-solving, and system design skills.
• Ability to work cross-functionally with DBAs, cloud engineers, and developers on complex systems.