Yacht Freelance
Freelance Data Engineer (ZZP)
Data Engineer on the Operational Awareness team within the Digital Service Platform (DSP)
Job Location: the Netherlands (partially remote, partially in Veghel)
Vanderlande provides baggage handling systems for 600 airports around the globe, capable of moving over 4 billion pieces of baggage per year. For the parcel market, our systems handle 52 million parcels per day. All these systems generate data.
Do you see a challenge in building data-driven services for our customers using that data? Do you want to contribute to the fast-growing Vanderlande Technology Department on its journey to become more data-driven? If so, then join our Digital Service Platform stream!
Your Position
As a Data Engineer, you will be responsible for delivering data intelligence solutions to our customers all around the globe, based on an innovative new product that provides insights into the performance of their material handling systems. You will work on implementing and deploying the product, as well as designing solutions to fit it to our customers' needs. You will work together with an energetic and multidisciplinary team.
Your tasks and responsibilities
You will deploy and maintain solutions that bridge the gap between the data intelligence product and customer needs.
You will be deploying, testing, and documenting project solutions for our customers and customizing them for their specific needs.
You collect feedback and continually look for opportunities to improve the existing standardized product.
You will deploy and automate the data pipelines and enable further project implementation.
You will develop and deploy the configurations necessary to map raw data to domain knowledge.
You will work with multidisciplinary internal teams to develop and deploy our product and project specials.
You will monitor and support implemented project solutions at our existing customers.
You embrace working in an international, diverse team with an open and respectful atmosphere.
You will be part of an agile team that encourages you to speak up freely about improvements, concerns, and blockers. As part of the Scrum methodology, you will independently create stories and participate in the refinement process.
You enjoy an independent, self-reliant way of working and a proactive style of communication, taking ownership to provide the best possible solution.
Job requirements:
Experience in guiding engineers.
Minimum 2 years' experience with building and deploying complex data pipelines and data solutions.
Bachelor’s or Master’s degree in Computer Science, IT, or equivalent.
Experience with NoSQL and unstructured data, and with event-processing tools such as Splunk or the ELK stack.
Hands-on experience with data modeling.
Hands-on experience with programming in Python.
Experience in data engineering using DevOps principles.
Experience in deploying highly performant and secure data pipelines.
Data Schema’s (e.g. JSON/XML/Avro).
Storage technologies (e.g., Azure Blob Storage, SQL, NoSQL).
Deploying services as containers (e.g., Docker, Podman).
Streaming and/or batch storage (e.g., Kafka, Oracle) is a plus.
Stream processing (e.g., Kafka Streams, Spark Structured Streaming) is a plus.
Experience working with cloud services (preferably Azure).
Experience in building APIs is a plus.
Experience in data quality management and monitoring is a plus.