Your tasks
Design and development of data transformation processes (ETL) within our Lakehouse architecture
Creation and ongoing development of data models (Kimball)
Design and development of data quality checks and optimizations
Implementation of regular deployments as part of the software delivery process
Documentation of data models and processing pipelines for internal purposes, service providers, and users
Coordination with the system owners responsible for the data sources to be integrated, as well as with IT and other stakeholders
Close collaboration with business intelligence engineers on management dashboards and reports, through jointly developed requirements, concepts, and work packages
Close collaboration with the data architecture, data integration, and data operations teams
Collaboration in cross-functional project teams to implement use cases
Your profile
Successfully completed degree in (business) information technology, a STEM field, or a comparable qualification
Experience in data modeling (star schema, data vault, third normal form) and its implementation in data products
In-depth knowledge of software development, ideally with Databricks, Azure Data Lake, SQL, Python, PySpark, and Git
Experience with the development and operation of big data cloud technologies (Microsoft Azure, GCP, or AWS)
A high degree of independence, diligence, and quality awareness
Experience working in agile Scrum teams
Ability to develop a clear understanding of business requirements and translate them into data-driven projects
Strong communication and consulting skills
Analytical thinking, a structured approach, and enjoyment of tackling complex new tasks
Your benefits