Enterprise Data Architect & Modeler
New York
Job Description
Job Code: 1144
Department Name: IT
Reports To: Assistant Director, Data Architecture & Data Governance
FLSA Status: Exempt
Union Code: N/A
Management: No

About Us:
Building Services 32BJ Benefit Funds (“the Funds”) is the umbrella organization responsible for administering Health, Pension, Retirement Savings, Training, and Legal Services benefits to over 100,000 SEIU 32BJ members. Our mission is to make significant contributions to the lives of our members by providing high quality benefits and services. Through our commitment, we embody five core values: Flexibility, Initiative, Respect, Sustainability, and Teamwork (FIRST). By following our core values, employees are open to different and new ways of doing things, take active steps to improve the organization, create an environment of trust and respect, approach their work with the intent of a positive outcome, and work collaboratively with colleagues.

The Funds oversees and manages $9 billion in assets, made up of many varied and complex funds. The dollars come from a number of sources, including the property owners who pay into the funds on behalf of their employees, and as such, those who oversee and manage the money must be highly skilled financial management professionals. For 2025 and beyond, 32BJ Benefit Funds will continue to drive innovation, equity, and technology insights to further help the lives of our hard-working members and their families. We use cutting-edge technology such as: M365, Dynamics 365 CRM, Dynamics 365 F&O, Azure, AWS, SQL, Snowflake, QlikView, and more.
Please take a moment to watch our video to learn more about our culture and contributions to our members: youtu.be/hYNdMGLn19A

Job Summary:
Reporting to the Assistant Director of Data Architecture & Data Governance, the Enterprise Data Architect & Modeler will lead the design, development, implementation, and management of our data architecture and modeling initiatives within the 32BJ Benefit Funds, ensuring that data is organized, integrated, and accessible across the organization. The Enterprise Data Architect & Modeler will develop enterprise data solutions that enable seamless integration, accessibility, and governance of critical data assets across hybrid cloud and on-prem environments. The ideal candidate will have hands-on experience in data modeling, data warehousing, and data pipeline development, working within Microsoft SQL Server, Azure, Databricks, and AWS ecosystems. This role will collaborate with IT teams and business stakeholders to build and optimize enterprise data models, ensure data integrity, and support master data management (MDM) and data governance initiatives.

Essential Duties and Responsibilities:
• Lead development of conceptual, logical, and physical data models that meet business and technical requirements while balancing normalization, performance, and maintainability.
• Produce implementation artifacts (ER diagrams, dimensional models, data dictionaries, model change impact analyses) to guide engineering, analytics, and operations teams.
• Apply modeling best practices and patterns (e.g., Type 2 Slowly Changing Dimensions, surrogate vs. natural keys) and define partitioning, indexing, and retention strategies to optimize query and storage performance.
• Translate logical models into platform-specific physical designs for Microsoft SQL Server and cloud targets (e.g., Databricks Delta Lake), including recommendations for table design, materialized views, and aggregation layers.
• Document architectural tradeoffs (performance, storage, consistency) and versioning/change control for models.
• Architect ETL/ELT and ingestion patterns for structured and unstructured data, including batch and streaming (event-driven) approaches.
• Define reusable integration templates and data contracts for source teams and downstream consumers; support ingestion from CRM, member portals, payroll/financial systems, and third-party APIs.
• Specify orchestration and operational patterns (e.g., Azure Data Factory, Databricks Jobs) covering idempotency, retry/backoff, checkpointing, and partitioning for scale and recoverability.
• Embed observability and quality: schema-evolution handling, automated profiling, validation gates, lineage capture, monitoring/alerting, and SLA definitions.
• Advise on performance and cost tradeoffs (e.g., file formats such as Parquet/Delta, compaction, columnar storage) and recommend optimization strategies.
• Design and operationalize MDM approaches to establish authoritative records and a Single Source of Truth (SSOT) for core domains (e.g., member, provider).
• Define entity resolution, deduplication, survivorship rules, matching confidence thresholds, and stewardship workflows for record reconciliation and updates.
• Collaborate with Data Governance on metadata management, lineage, and data quality metrics (e.g., completeness, uniqueness, accuracy), and recommend remediation and monitoring strategies.
• Ensure models and pipelines incorporate privacy, security, and compliance controls (e.g., PII tagging, RBAC, encryption), supporting applicable frameworks (e.g., HIPAA, SOC 2) where relevant.
• Define stewardship roles, SLAs for issue remediation, and integration points between MDM and the business glossary/downstream consumers.
• Translate complex data concepts into business-friendly terms to facilitate understanding among non-technical stakeholders.
• Partner with business owners, analytics, engineering, and operations to translate requirements into data products and model designs; present tradeoffs and recommended approaches in business-facing terms.
• Produce clear documentation and training artifacts (data dictionaries, runbooks, design briefs) to drive consistent adoption and model transparency.
• Mentor and coach technical staff, data stewards, and analysts on modeling standards, architecture principles, and data literacy; support training sessions to increase adoption of governance practices and promote a culture of continuous learning.
• Facilitate model reviews, architecture governance sessions, and stakeholder sign-offs for critical changes.
• Research and evaluate new database, lakehouse, modeling, and integration technologies (e.g., improvements to Databricks, cloud-native warehouses) and lead POCs to validate technical and business value.
• Advocate for DataOps/CI-CD practices around testing, automated deployments, and reproducibility for pipelines and model artifacts.
• Conduct periodic architecture and model reviews, identify process/technology gaps, and lead initiatives to increase operational maturity, reliability, and scalability.
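For illustration, the Type 2 Slowly Changing Dimension pattern named above can be sketched in a few lines of Python. The record fields (`member_id`, `plan`) and the in-memory list are hypothetical, not taken from the Funds' actual schema:

```python
def scd2_upsert(history, member_id, attrs, effective_date):
    """Apply a Type 2 SCD change: close the current row, append a new version.

    history: list of dicts carrying start_date/end_date/is_current flags.
    """
    current = next(
        (r for r in history if r["member_id"] == member_id and r["is_current"]),
        None,
    )
    # No-op when the incoming attributes match the current version.
    if current and all(current.get(k) == v for k, v in attrs.items()):
        return history
    if current:
        current["is_current"] = False
        current["end_date"] = effective_date  # close the old version
    history.append({
        "member_id": member_id,
        **attrs,
        "start_date": effective_date,
        "end_date": None,
        "is_current": True,
    })
    return history


# Example: a plan change yields a second versioned row, preserving history.
history = []
scd2_upsert(history, "M001", {"plan": "Health A"}, "2024-01-01")
scd2_upsert(history, "M001", {"plan": "Health B"}, "2025-01-01")
```

A warehouse implementation would typically express the same semantics as a `MERGE` statement with surrogate keys; this sketch only shows the versioning logic.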
Qualifications (Competencies):
• 7+ years of hands-on experience in data architecture, data modeling, or data engineering in hybrid on-prem/cloud environments.
• Deep experience across conceptual, logical, and physical modeling; dimensional modeling and OLAP/OLTP design patterns.
• Excellent SQL programming skills and experience with Microsoft SQL Server, Azure Synapse, or AWS Redshift, including performance tuning (partitioning, indexing, query optimization).
• Practical experience building and supporting data warehouses and pipelines using Microsoft SQL Server and cloud platforms such as Azure (e.g., ADF, Synapse), Databricks, and AWS (e.g., Redshift, S3, Glue).
• Proven experience with ETL/ELT patterns, schema-evolution handling, pipeline observability, and lineage.
• Experience supporting MDM, metadata management, and compliance requirements.
• Experience with orchestration/transformation tools (e.g., dbt, Airflow, SSIS).
• Basic knowledge of BI and reporting tools (Power BI, Qlik, Tableau).
• Familiarity with lakehouse patterns and file formats (e.g., Delta Lake, Parquet) and columnar warehouses (e.g., Redshift, Snowflake).
• Scripting/automation experience (e.g., Python) for prototyping and tool integration.
• Strong understanding of data governance frameworks (DAMA DMBOK, DCAM) and metadata management principles.
• Prior experience in regulated industries (healthcare, retirement/benefits, finance).
• Experience using Azure DevOps to manage Agile project work (sprint planning, backlog grooming, ticket creation/triage, and tracking).

Soft Skills (Interpersonal Skills):
• Strong verbal and written communication skills, with the ability to present technical topics to non-technical stakeholders.
• Detail-oriented with a passion for data accuracy and integrity.
• Strong problem-solving and analytical abilities.
• Ability to multi-task and work on multiple projects in a fast-paced environment.
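The retry/backoff and idempotency vocabulary used in this posting can be illustrated with a minimal Python helper. The function name and parameters are illustrative, not taken from Azure Data Factory, Databricks, or any specific orchestration tool:

```python
import time


def run_with_retry(step, max_attempts=3, base_delay=1.0):
    """Run an idempotent pipeline step, retrying with exponential backoff.

    Because the step is assumed idempotent, re-running it after a
    transient failure is safe.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return step()
        except Exception:
            if attempt == max_attempts:
                raise  # retries exhausted: surface the failure to the orchestrator
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...


# Example: a step that fails twice with a transient error, then succeeds.
calls = {"n": 0}

def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "loaded"

result = run_with_retry(flaky_step, max_attempts=5, base_delay=0)
```

Production orchestrators express the same pattern declaratively (retry counts and intervals on an activity); the sketch only shows the control flow.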
Education:
• Bachelor’s degree in Computer Science, Information Systems, Data Science, or equivalent experience.

Language Skills: Speak, read, write, and understand English.

Reasoning Ability: High

Certificates, Licenses, Registrations:
• Microsoft Certified: Azure Data Engineer Associate is preferred but not required.
• TOGAF, CDMP, or other relevant data architecture certifications are highly preferred.

Physical Demands: The physical demands described here are representative of those that must be met by an employee to successfully perform the essential functions of this job. Reasonable accommodations may be made to enable individuals to perform the essential functions.
• Under 1/3 of the time: Standing, Walking, Climbing or Balancing, Stooping, Kneeling, Crouching, or Crawling
• Over 2/3 of the time: Talking or Hearing
• 100% of the time: Using Hands

Work Environment: The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of this job. Reasonable accommodation may be made to enable individuals with disabilities to perform the essential functions.
• 1/3 to 2/3 of the time: Work near moving or mechanical parts, exposure to radiation, moderate noise.