Description
• You will be responsible for building a data platform for running big data workloads at scale, collecting and combining data from various sources, and helping data consumers work with data in our data lake.
• You will create the technical foundation for Customer Journey tracking: enabling developers to collect data in the frontend and backend and making it available in real time on the data platform for data analysts.
• You will promote the usage of our data platform across the whole company by providing comprehensive documentation and by improving our tooling layer for administering the Kafka ecosystem, which is used for loading and unloading data into our Kafka cluster.
Requirements:
• 1+ years of experience as a Data Engineer or Backend Developer
• Solid expertise in at least one programming language (Scala, Java or Python)
• Experience with big data technologies (Spark, Kafka, Hive, Athena)
• Practical experience with Amazon Web Services (S3, EMR, IAM)
• Excellent SQL and data management knowledge
Nice to have:
• Previous experience in the design and implementation of complex data pipelines in the AWS environment (AWS Batch, AWS Data Pipeline, AWS Lambda)
• Hands-on experience with Docker (and Kubernetes)
• Experience writing CI/CD pipelines
• Knowledge of the data science stack (Pandas, Scikit-learn, NumPy, Keras, etc.)
• Familiarity with machine learning
Benefits:
• After work d...
• Anniversary ...
• Employee ref...
• Free bus rid...
• Health insur...
• Maternity/pa...
• Mobile langu...
• Paid sick da...
• Performance ...
• Regular team...
• Relocation s...
• Table footba...
CLOSED VACANCY