· 7+ years of experience in a Data Engineering role.
· 5+ years of hands-on experience with Data Lake/Hadoop platforms and performance tuning of Hadoop/Spark implementations.
· 5+ years of programming experience with Python/PySpark.
· 5+ years of experience writing SQL statements.
· 5+ years of experience with schema design and dimensional data modeling.
· Experience building and optimizing "big data" pipelines, architectures, and data sets.
· Experience developing at scale on cloud platforms such as Google Cloud Platform (preferred), AWS, Azure, or Snowflake.
· Excellent communication and presentation skills, strong business acumen, critical thinking, and the ability to work cross-functionally in collaboration with engineering and business partners.
· (optional) Experience designing data engineering solutions using open-source and proprietary cloud data pipeline tools such as Airflow.