Dunwoody
Role Description

As a Data Architect within our company, you will design and own the data and knowledge foundations that power enterprise-grade agentic AI systems. You will architect scalable, secure, and production-ready data ecosystems, including retrieval pipelines, vector search infrastructure, knowledge graphs, and multi-modal data orchestration, to support agents, workflows, and advanced LLM capabilities. This role requires a deep mix of data engineering, knowledge management, AI infrastructure, and enterprise architecture expertise. You will partner closely with engineering, product, evaluation, and delivery teams to define how data flows, transforms, and powers Evergreen's 0-to-1 agentic AI offerings. You will bring a strong product mindset, exceptional communication skills, and the ability to translate complex technical concepts into clear business value for Fortune 500 clients.

Key Responsibilities

• Data & Knowledge Architecture: Define and evolve our data/knowledge architecture, including vector pipelines, retrieval augmentation, embedding stores, metadata indexing, and ontology modeling.
• Architect scalable RAG pipelines, retrieval patterns, and knowledge graph integrations optimized for agentic workflows and multi-agent orchestration.
• Own schemas, ontologies, taxonomies, and data modeling standards for LLM/agent training, tuning, and augmentation.
• Platform Engineering & Framework Development: Build reusable, production-grade data and knowledge frameworks used across our agent ecosystem.
• Define standards for data ingestion, preprocessing, enrichment, normalization, and multi-modal data integration.
• Collaborate with platform engineering on storage, compute, networking, security tiers, and orchestration patterns.
• Delivery Excellence: Ensure pipelines meet SLAs for latency, security, lineage, governance, observability, and resiliency across multi-region deployments.
• Implement production MLOps/LLMOps capabilities, including feature stores, embedding pipelines, offline/online stores, and evaluation datasets.
• Governance, Security & Compliance: Establish robust data governance policies and ensure compliance with SOC 2, ISO 27001, HIPAA, and GDPR.
• Client Engagement: Engage with C-suite and architecture stakeholders to translate business problems into data-powered AI architectures; support pre-sales and strategy workshops.
• Culture & Collaboration: Work cross-functionally, contribute to accelerators, and foster a growth mindset within engineering teams.

Qualifications

Required:
• 10+ years of experience in data architecture, data engineering, or enterprise information systems.
• Master's degree in Computer Science or a related field.
• Proven track record of delivering enterprise-grade data platforms for large-scale AI, ML, or analytics solutions.
• Expertise with LLM data/knowledge systems, including RAG pipelines, metadata filtering, hybrid search, and vector databases (Pinecone, Weaviate, Milvus, FAISS, Azure Vector Search).
• Experience with embeddings, ontology modeling, and knowledge graph architectures.
• Hands-on experience designing distributed systems and data pipelines (Databricks, Spark, Kafka/Event Hub, Airflow, ADF).
• Cloud architecture expertise (Azure preferred).
• Experience with LLMOps/MLOps (MLflow, Docker/Kubernetes, Azure ML, DevOps CI/CD).
• Strong communication, documentation, and stakeholder engagement skills.

Preferred:
• Experience supporting agentic AI architectures, multi-agent orchestration, or workflow systems.
• Experience operationalizing knowledge graphs in enterprise contexts.
• Background in security, compliance, or data privacy engineering.