Hadoop and Apache Spark cluster management * SQL and database development skills, using RDBMS such as MySQL, PostgreSQL, AWS RDS * NoSQL databases such as AWS DynamoDB * AWS Services: EC2, S3 ...
... Apache Spark) and cloud platforms (e.g., AWS, Azure, GCP) is a plus. - Strong communication skills and ability to work effectively in a remote, collaborative environment. - Passion for learning and a ...
Design, implement, and maintain complex ETL pipelines using tools such as Apache Spark, Apache Airflow, and AWS Glue for both batch and near real-time data processing * Collaborate with data ...
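The batch ETL work described in the posting above can be sketched in its simplest form. This is a hedged, framework-free illustration of the extract-transform-load shape only — the record layout and filtering rule are invented for the example, and a production pipeline would run on Apache Spark, Apache Airflow, or AWS Glue rather than plain Python:

```python
import csv
import io

# Hypothetical raw feed: a real pipeline would read this from S3 or a database.
RAW = """order_id,amount,currency
1,19.99,USD
2,,USD
3,5.00,EUR
"""

def extract(text):
    """Parse CSV text into dict records."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(records):
    """Drop rows with missing amounts and cast types (illustrative rule)."""
    out = []
    for r in records:
        if r["amount"]:
            out.append({"order_id": int(r["order_id"]),
                        "amount": float(r["amount"]),
                        "currency": r["currency"]})
    return out

def load(records):
    """Stand-in for a warehouse write: here, just return the cleaned rows."""
    return records

rows = load(transform(extract(RAW)))
print(len(rows))  # prints 2: the row with a missing amount is dropped
```

In a Spark or Glue job the same three stages would map onto a source read, DataFrame transformations, and a sink write, with Airflow handling scheduling.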
Strong understanding of data engineering concepts and experience with tools such as SQL, NoSQL databases, and data processing frameworks (e.g., Apache Spark). * Experience with cloud platforms (e.g ...
Proficiency in Python and Spark, and a strong understanding of Apache Spark architecture and concepts. Education * Bachelor's in IT or 10 years of technical IT experience with progressive growth Clearance ...
Apache Spark (preferred) or Apache Flink * Strong understanding of data warehousing concepts * Familiarity with implementing Machine Learning algorithms * Possess solid analytical, problem-solving ...
Deep understanding of Apache Spark, Delta Lake, and their integration within the Databricks environment. * Proficient in Terraform for implementing infrastructure as code (IaC) solutions. * Strong ...
Leverage AWS cloud services like S3, Kinesis, Redshift, EMR, Glue, Lambda, and others in combination with a Data Lakehouse platform/Apache Spark integration for advanced data processing and analytics
Working knowledge of Apache Spark's execution model, query planning, and execution, and familiarity with Big Data technology * Knowledge of Agile, SAFe, and Waterfall software development methodologies ...
Hands-on experience with big data processing frameworks like Apache Spark, Apache Hadoop * Experience with SQL and NoSQL databases for data storage and retrieval * Familiarity with Docker and ...
Demonstrated experience using big data processing tools such as Apache Spark or Trino. * Demonstrated experience using container frameworks such as Docker or Kubernetes. * Demonstrated experience ...
Familiarity with big data technologies such as Apache Spark is a plus. * Solid understanding of data warehousing concepts and relational databases. * Strong programming skills in languages such as ...
Experience with streaming and data engineering platforms such as Apache Kafka or Apache Spark is a plus * Experience with CI/CD pipelines such as Jenkins, ArgoCD, Bitbucket Preferred Qualifications ...
Knowledge of Java/Scala/Apache Spark/Python. Why Business Technology Solutions? For anyone who wants to use technology and data to make a difference in people's lives, shape the digital transformation ...
Apache Spark, Hive, Hadoop, BigQuery, BigTable, Cloud Composer, Dataflow, Google Cloud Storage, Python, SQL, Shell Scripting, Git. Good-to-have skill set: CI/CD, Jenkins, Security and Networking ...
Experience with data processing frameworks such as Apache Spark * Experience building and maintaining Machine Learning pipelines * Demonstrated ability to work independently and collaboratively in a ...
Experience with ETL tools and processes, such as SSIS, Informatica, Talend, or Apache Spark. * Familiarity with data governance and compliance requirements (e.g., GDPR, HIPAA). * Excellent problem ...
Deep understanding of data pipeline tools and frameworks (e.g., Apache Kafka, Apache Spark, Apache Airflow) and containerization technologies (e.g., Docker, Kubernetes) * Demonstrated experience with ...
Experience with big data technologies like Apache Spark or Flink, as well as cloud data platforms; experience building applications using generative AI, a plus * Skilled in data manipulation and ...
Nice to Have: 1. Demonstrated experience with cloud services, such as AWS. 2. Demonstrated experience using big data processing tools such as Apache Spark or Trino. 3. Demonstrated experience using ...
Implement data processing and transformation workflows using Databricks, Apache Spark, and SQL to support analytics and reporting requirements. * Build and maintain orchestration workflows using ...
Design, develop, and implement efficient and scalable data pipelines using Apache Spark on Databricks. * Develop and maintain data pipelines for various financial services domains, such as trading ...
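The per-domain pipelines above typically reduce to per-key aggregations. As a rough sketch with invented data (not from any posting): in the Spark DataFrame API on Databricks this would be expressed as `df.groupBy("desk").sum("notional")`; plain Python is used here only so the aggregation logic is self-contained and runnable:

```python
from collections import defaultdict

# Hypothetical trade records; in the Databricks setting these would be a Spark DataFrame.
trades = [
    {"desk": "rates", "notional": 100.0},
    {"desk": "fx",    "notional": 250.0},
    {"desk": "rates", "notional": 50.0},
]

def total_by_desk(rows):
    """Sum notional per desk, the shape of df.groupBy("desk").sum("notional")."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["desk"]] += row["notional"]
    return dict(totals)

print(total_by_desk(trades))  # prints {'rates': 150.0, 'fx': 250.0}
```

Spark distributes exactly this kind of shuffle-and-sum across executors, which is what makes the pipeline scalable.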
Working knowledge of Apache Spark * Familiarity with streaming technologies (e.g., Kafka, Kinesis, Flink) Nice-to-Haves: * Experience with Machine Learning * Familiarity with Looker a plus
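The streaming technologies named above (Kafka, Kinesis, Flink) all revolve around windowed aggregation over event time. A minimal sketch of tumbling-window counting, with invented events; a real engine such as Flink or Spark Structured Streaming would consume an unbounded stream and manage watermarks, which this illustration omits:

```python
from collections import defaultdict

# Hypothetical click events as (timestamp_seconds, key) pairs; a streaming
# engine would receive these continuously from Kafka or Kinesis.
events = [(0, "a"), (3, "b"), (7, "a"), (12, "c")]

def tumbling_window_counts(stream, width):
    """Count events per fixed (tumbling) window of `width` seconds."""
    counts = defaultdict(int)
    for ts, _key in stream:
        counts[ts // width * width] += 1  # bucket by window start time
    return dict(counts)

print(tumbling_window_counts(events, 5))  # prints {0: 2, 5: 1, 10: 1}
```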
Have experience with big data processing frameworks (e.g., Apache Spark, Hadoop) and are comfortable working with large datasets. * Are proficient in programming languages commonly used in data ...