Job Description
In this consultative and contributive role, you will be responsible for designing the data warehouse and providing guidance on the implementation of Snowflake and supporting big data technologies.
This role will focus on the modernization of automated data pipelines through cloud-based implementation and testing. You can expect a significant amount of testing and documentation work to turn requirements into both technical and functional specifications. The role will also include written and oral presentations to stakeholders.
What you will do
Partner with teammates to create complex data processing pipelines that solve our stakeholders' most ambitious challenges
Architect the data warehouse and guide the team's implementation using Snowflake, SnowSQL, and other big data technologies
Identify and implement ways to improve data reliability, efficiency, and quality
Apply Snowflake best practices for managing load operations and performance in order to create right-sized Snowflake warehouses (see the sketch below)
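To give a concrete flavor of this part of the role, here is a minimal sketch of right-sizing a warehouse for a batch load using the snowflake-connector-python package. The account, credentials, table, and stage names are hypothetical, not a description of our actual stack.

```python
# Minimal sketch: right-size a Snowflake warehouse for a batch load.
# Account, credentials, table, and stage names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # hypothetical account identifier
    user="etl_user",        # hypothetical service user
    password="***",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()

# Size the warehouse to the workload and let it auto-suspend when idle,
# so compute is only billed while the load actually runs.
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS LOAD_WH
      WITH WAREHOUSE_SIZE = 'MEDIUM'
           AUTO_SUSPEND = 60
           AUTO_RESUME = TRUE
""")
cur.execute("USE WAREHOUSE LOAD_WH")

# COPY INTO is Snowflake's bulk load path for staged files.
cur.execute("""
    COPY INTO RAW.ORDERS
    FROM @orders_stage
    FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
""")
conn.close()
```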
Qualifications
Working experience with Snowflake: data modeling, ELT using SQL, implementing complex stored procedures, and standard data warehouse (DWH) and ETL concepts (see the first sketch after this list)
Experience with structured and unstructured data in batch and streaming use cases
Advanced fluency in Python
Previous experience with Snowflake internals and with integrating Snowflake into other data processing and reporting technologies
Proven ability to translate requirements into technical and functional specifications
Working experience optimizing the performance of Spark jobs (see the second sketch after this list)
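For context on the ELT and stored-procedure experience above, here is a minimal sketch of an incremental upsert written as a Snowflake Scripting procedure and run from Python. The database, table, and column names are hypothetical.

```python
# Minimal ELT sketch: push the transformation into Snowflake as SQL
# executed from Python. All object and column names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="CORE",
)
cur = conn.cursor()

# Wrap an incremental upsert in a stored procedure so it can be
# scheduled with a Snowflake TASK and reused across pipelines.
cur.execute("""
    CREATE OR REPLACE PROCEDURE CORE.UPSERT_CUSTOMERS()
    RETURNS STRING
    LANGUAGE SQL
    AS
    $$
    BEGIN
      MERGE INTO CORE.DIM_CUSTOMER AS tgt
      USING RAW.CUSTOMERS AS src
        ON tgt.customer_id = src.customer_id
      WHEN MATCHED THEN UPDATE SET email = src.email
      WHEN NOT MATCHED THEN
        INSERT (customer_id, email) VALUES (src.customer_id, src.email);
      RETURN 'ok';
    END;
    $$
""")
cur.execute("CALL CORE.UPSERT_CUSTOMERS()")
conn.close()
```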
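And for the Spark tuning experience, a minimal PySpark sketch of one common optimization: replacing a shuffle-heavy join with a broadcast join. File paths, column names, and configuration values are hypothetical.

```python
# Minimal sketch of a common Spark tuning step: broadcasting a small
# dimension table to avoid shuffling the large fact table. Paths and
# column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (
    SparkSession.builder
    .appName("orders-enrichment")
    # Shuffle partitions sized to the cluster, not the default 200.
    .config("spark.sql.shuffle.partitions", "64")
    .getOrCreate()
)

orders = spark.read.parquet("s3://bucket/orders/")        # large fact table
customers = spark.read.parquet("s3://bucket/customers/")  # small dimension

# broadcast() ships the small table to every executor, replacing a
# sort-merge join (full shuffle of both sides) with a map-side hash join.
enriched = orders.join(broadcast(customers), "customer_id")
enriched.write.mode("overwrite").parquet("s3://bucket/orders_enriched/")
```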