
Globaldev Group
Responsibilities:
- Manage end-to-end data pipelines that pull data from operational systems and various endpoints, then transform and load it into the company data lake
- The current environment runs on Hadoop; the team will be responsible for completing the migration to a new Spark-based environment on AWS
Qualifications:
- 4+ years of hands-on experience as a Data Engineer
- BA/BSc in a related field such as CS, engineering, information systems, or equivalent.
- High proficiency in SQL & Python
- Experience working with cloud-hosted data warehouses (Hive, Snowflake)
- Hands-on experience with developing end-to-end ETL/ELT processes (Spark and SQL)
- Experience with the Hadoop framework and parallel computing (EMR, Hive, Presto/Trino/Athena, Glue)
- Proven experience with data warehousing, modeling paradigms, and architectures; proficient in DWH methodologies and best practices
Will be a plus:
- Familiar with Kafka, Confluent, Fluentd, Spark, and Airflow
- Experience with data visualization tools such as Tableau
- Experience in the media or TV industry
Soft skills:
- Strong analytical skills and attention to detail
- Team player
- Ability to handle multiple tasks simultaneously
- Comfortable working independently and leading projects from end to end
Apply now
To help us track our recruitment effort, please indicate in your cover/motivation letter where (skilledworkerjobs.com) you saw this job posting.