Yacht Freelance

Freelance IT Engineer IV (ZZP)

Posted 19 Mar 2026
Project ID: 9203735
Location
Amsterdam
Hours
36 hours/week
Duration
6 months
Start: 1 Apr 2026
End: 30 Sep 2026
Rate
0–101.25 €/hour
Proposal deadline: 24 Mar 2026, 13:00
  • Please include a motivation addressing the requirements


Description of activities:

We are looking for a data engineer at BUUT to design, build, and maintain a scalable, general-purpose data lake and data pipelines that enable the L&I (Learning & Insights) team to generate actionable insights. The role ensures data accessibility, quality, and performance for both analytical and reporting needs, reducing dependency on multiple engineers and creating a unified, efficient data infrastructure.

With the following expected results:


Data Pipeline Development:

  • Design and implement at least 2–3 robust data pipelines that support L&I analytics and reporting needs.

  • Ensure pipelines are automated, tested, and monitored for reliability.

Enable Analytics & Insights:

  • Provide usable datasets and query capabilities for L&I to generate insights.

  • Deliver at least one analytics feature or dashboard powered by the new data infrastructure.

Operational Excellence:

  • Implement CI/CD workflows for data jobs (GitHub Actions).

  • Set up basic monitoring and alerting for pipeline health and data quality.

Data Governance & Quality:

  • Define and apply data quality checks (row-level and aggregate).

  • Establish data lineage documentation for key pipelines.
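For orientation, the "row-level and aggregate" quality checks mentioned above could be sketched as follows. This is an illustrative example only; the field names (user_id, score) and thresholds are hypothetical and not part of the assignment:

```python
# Illustrative sketch of row-level vs. aggregate data quality checks.
# Field names and rules are hypothetical examples.

def row_passes(row: dict) -> bool:
    """Row-level check: required fields present and score within range."""
    return (
        row.get("user_id") is not None
        and row.get("score") is not None
        and 0 <= row["score"] <= 100
    )

def aggregate_checks(rows: list[dict]) -> dict:
    """Aggregate checks on the whole dataset: volume and key uniqueness."""
    keys = [r["user_id"] for r in rows if r.get("user_id") is not None]
    return {
        "min_row_count": len(rows) >= 1,
        "unique_keys": len(keys) == len(set(keys)),
    }

rows = [
    {"user_id": 1, "score": 95},
    {"user_id": 2, "score": 101},   # fails row-level range check
    {"user_id": None, "score": 50}, # fails row-level presence check
]
valid = [r for r in rows if row_passes(r)]
print(len(valid))            # → 1
print(aggregate_checks(rows))
```

In a real pipeline such checks would typically run as a validation step after each load, with failures surfaced through the monitoring and alerting described above.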


Relevant knowledge, skills & competences:

Must-haves:

Tools & Platforms:

  • Version Control & CI/CD: GitHub, GitHub Actions

  • At least one major cloud platform: AWS, Azure, or Databricks

  • Languages: Python, SQL

Experience: Minimum 3 years of creating and maintaining data pipelines

Core Knowledge:

  • Data pipelines (ETL), orchestration (batch vs streaming)

  • Data modeling (star schema, dimensional modeling, SCD)

  • OLTP vs OLAP concepts

  • Testing (unit, integration, E2E)

  • Data governance basics (lineage, quality checks)
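To illustrate one of the modeling concepts above, a minimal Slowly Changing Dimension (Type 2) update could look like this. The customer/city schema is a hypothetical example, not taken from the assignment:

```python
from datetime import date

# Minimal SCD Type 2 sketch: when a tracked attribute changes, close the
# current dimension row and append a new version. Schema is hypothetical.

def scd2_upsert(dim: list[dict], key: int, new_city: str, today: date) -> None:
    """Close the open row for `key` if the attribute changed, then add a new version."""
    for row in dim:
        if row["customer_id"] == key and row["end_date"] is None:
            if row["city"] == new_city:
                return  # no change, keep the current version
            row["end_date"] = today  # close the old version
            break
    dim.append({"customer_id": key, "city": new_city,
                "start_date": today, "end_date": None})

dim = [{"customer_id": 1, "city": "Utrecht",
        "start_date": date(2025, 1, 1), "end_date": None}]
scd2_upsert(dim, key=1, new_city="Amsterdam", today=date(2026, 4, 1))
print(len(dim))  # → 2: the closed old row plus the new current row
```

The same pattern is what warehouse-side SCD Type 2 merges implement in SQL; history is preserved because old versions are closed rather than overwritten.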



Nice-to-haves:

Tools & Platforms:

  • AWS services: Glue, Athena, DynamoDB, Step Functions

  • Languages: PySpark, Golang


Patterns & Techniques:

  • Infrastructure as Code (AWS CloudFormation)

  • Table formats: Iceberg / Delta / Hudi

  • Schema evolution, reprocessing, monitoring

  • Medallion architecture

  • Distributed data processing


Experience:

  • Improving SDLC for data teams (validation, testing automation)

  • Generating insights for end-users (e.g., personalization)


