About Us

Do you have what it takes to succeed? All candidates should read the following information carefully.

Staq is a leading Banking-as-a-Service (BaaS) and embedded finance platform, transforming how businesses integrate banking and financial services. At Staq, we empower our clients to innovate, expand, and streamline their financial services offerings using our cutting-edge platform. Our mission is to bridge the gap between traditional banking and the digital era by providing seamless, scalable, and secure financial solutions.

The Role

Our agents, recommendation systems, and automations are only as good as the data they consume. An agent giving financial advice needs rich, accurate, timely context about a user’s accounts, transactions, spending patterns, and financial goals. A recommendation engine needs well-structured feature data. An automation trigger needs reliable signals. Right now that data plumbing doesn’t have a dedicated owner. As we scale from one product to an SDK that multiple banking applications use, the data layer becomes a shared dependency that every AI feature builds on. This role owns the pipelines that feed the intelligence platform, the evaluation data that tells us whether our AI is working, and the infrastructure that lets us iterate on data quality without slowing down AI development.
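To make the "enriched context" idea concrete, here is a minimal sketch of the kind of record this role's pipelines might serve to an agent at runtime. All field names and the freshness threshold are illustrative assumptions, not Staq's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UserFinancialContext:
    """Hypothetical context record an advice agent might consume."""
    user_id: str
    as_of: datetime                        # when this context was materialized
    account_balances: dict[str, float]     # account id -> current balance
    monthly_spend_by_category: dict[str, float]
    active_subscriptions: list[str]
    credit_utilization: float              # 0.0 - 1.0

    def is_fresh(self, max_age_seconds: int = 900) -> bool:
        """An agent should refuse stale context rather than advise on it."""
        age = (datetime.utcnow() - self.as_of).total_seconds()
        return age <= max_age_seconds
```

The freshness check reflects the role's emphasis on data freshness: context that fails it should be refetched, not served.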
Key Responsibilities

Context & Feature Pipelines for AI
• Build and maintain the data pipelines that transform raw financial data (Plaid transactions, bank accounts, credit data, subscription records) into the enriched context that agents consume at runtime
• Design the feature store or context layer that serves real-time and batch features to agents, recommendation engines, and automation triggers
• Ensure data freshness, quality, and consistency across all pipelines feeding the intelligence platform
• Build the data infrastructure for AI evaluation — collecting agent decisions, recommendation results, automation outcomes, and user feedback into queryable, analyzable datasets
• Own the LLM observability data layer — structured collection of call latencies, token usage, cost per flow, error rates, and model performance metrics across all agent and automation flows
• Create dashboards and data products that let the AI team measure agent quality, recommendation relevance, automation success rates, and LLM operational health
• Design data contracts and schemas that serve both Zeen and future banking applications that plug into the intelligence platform SDK
• Own the ingestion layer for partner and third-party data sources — as the SDK expands to other banks, each will bring its own data formats and integration patterns
• Own data quality monitoring, validation, and alerting across all pipelines
• Build data lineage tracking so we can trace any agent decision back to the data that informed it

Tech Stack
• Python for pipeline development; SQL for analytics and data modeling
• Financial data sources: Plaid, partner APIs, internal domain services (banking, credit, subscriptions, journal/ledger)
• OpenTelemetry traces and structured artifacts as data sources for AI evaluation
• Cloud-native infrastructure; containerized services

Must Have
• 3+ years building and operating production data pipelines
• Strong Python and SQL; experience with data transformation frameworks
• Experience designing schemas and data contracts for consumption by application services or ML/AI systems
• Understanding of data quality practices — validation, monitoring, alerting on pipeline failures
• Experience building data infrastructure that feeds AI/ML systems (feature stores, context pipelines, evaluation datasets)
• Fintech or financial services background
• Familiarity with observability data (OpenTelemetry, structured logs) as a data source
• Experience building monitoring and analytics for LLM systems — latency tracking, cost attribution, and performance dashboards
• Experience with data lineage, audit trails, or data governance
• Exposure to real-time streaming alongside batch processing
• Experience designing data contracts for multi-tenant or multi-product platforms