Senior DataOps Engineer
Grand Rapids
Job Description

Overview:
SpendMend partners with hospitals, health systems, and higher education institutions to help them improve financial performance and fuel their mission of patient care. We offer a comprehensive suite of solutions - including cost recovery, pharmacy software, compliance auditing, and medical device management - that deliver insights, reduce risk, and uncover savings across the cost cycle. Our mission is to positively impact patient care by delivering innovative, value-driven services that empower our clients to make smarter financial decisions. With a focus on excellence, integrity, and collaboration, SpendMend is committed to being a trusted partner in healthcare and education.

The Senior DataOps Engineer role will be critical to the buildout and operation of our modern lakehouse infrastructure. The ideal candidate will have a strong background in data engineering and platform reliability, and will be responsible for constructing high-performance pipelines, maintaining operational excellence, and enabling real-time and batch data-driven insights across product domains.

Essential Duties and Responsibilities:

Data Engineering
• Architect and develop scalable ingestion pipelines using Databricks, Delta Lake, and dbt.
• Model and maintain curated data marts and semantic layers for analytical and machine learning applications.
• Tune Spark-based systems for optimal cost and performance.
• Implement and enforce data contracts and schema governance policies.

DataOps and Observability
• Define and monitor SLIs/SLOs with actionable dashboards.
• Integrate quality checks within CI/CD to ensure deployment confidence.
• Establish lineage tracking, automated backfills, and runbook processes.
• Enable telemetry for business outcomes and Return on Projection (ROP) reporting.
• Lead incident triage, resolution, and postmortem documentation.

Collaboration & Partnership
• Partner with Platform, SRE, Security, and GRC teams to support secure, observable, and auditable infrastructure.
• Align with Analytics Engineering and MLOps to support downstream data consumers.
• Work with product owners and data stakeholders to ensure timely delivery of high-quality data assets.
Minimum Qualifications:
• Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent work experience.
• 5+ years of hands-on experience building data platforms and pipelines at scale.
• Strong SQL and Python programming proficiency.
• Expertise with Databricks, Apache Spark, and dbt.
• Proficiency in Git-based CI/CD pipelines (GitHub Actions or Azure DevOps).
• Experience managing production incidents and authoring technical postmortems.
• Deep understanding of ADLS Gen2 architecture, performance tuning, and security.
• Hands-on experience with Azure identity/access management (Microsoft Entra ID, RBAC, Managed Identities, Service Principals).
• Experience with LLM application stacks (e.g., LangChain, OpenAI API).
• Proficiency in scikit-learn for feature engineering and predictive modeling.
• Experience with event-based streaming using Kafka or Azure Event Hubs.

Technology Stack:
• Databricks, Delta Lake, Unity Catalog
• dbt, Python, SQL
• Azure DevOps, GitHub Actions
• Azure Monitor, Log Analytics, ADLS Gen2
• LangChain, OpenAI API, scikit-learn

Interview Process:
• Initial Screen with Hiring Manager
• Online Technical Assessment
• Technical Interview (Reliability, Testing, Incident Response)
• Panel Interview (Team Collaboration & Stakeholder Management)
• Final Interview with Leadership (Data Engineering Manager, VP, CTO)

Equal Opportunity Statement
SpendMend is an equal opportunity employer and values diversity. All employment decisions are made based on qualifications, merit, and business need. We are committed to providing reasonable accommodation during the recruitment process.

Job Location: Remote (U.S. Time Zones) or Hybrid (Grand Rapids, MI). Travel is expected at least once per year for an onsite company meeting.

Work Environment: The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of this job. The role requires regular use of a computer and phone, prolonged periods of sitting, and occasional bending, twisting, or light lifting. These conditions are representative of a typical office environment. Reasonable accommodation may be made to enable individuals with disabilities to perform the essential functions of the role.

This job description reflects the core responsibilities of the Senior DataOps Engineer role at SpendMend and may evolve as SpendMend grows and adapts.

Note: We are not able to sponsor work visas for this position.