Jobs at Syrinx

Cloud Data Architect - Azure, Python, Spark Req - remote within USA - EST working hours - no transfer or sponsorship available

Boston, MA

RESPONSIBILITIES

  • Lead the design and implementation of an Azure-based, configuration- and event-driven data platform, and related data pipeline, security, messaging and quality frameworks
  • Create reference implementations and/or provide guidance to data engineers creating configuration-driven data pipelines to transform source data into a common data asset and purpose-built outputs to meet key business needs
  • Work with Data Scientists and application developers to understand application data requirements and define interfaces for applications to consume data stored in the common data platform
  • Work with other technical leaders to design and implement a CI/CD framework to streamline development and accelerate business benefits by solidifying the SDLC.
  • Work with the VP of Data Management and other data stakeholders to define a common data model
  • Recommend Business Intelligence tooling and define related data presentation approaches to support analyses by users of varying skill sets
  • Oversee testing, QA, and documentation of data pipelines and systems.
  • Assist as needed with monitoring and troubleshooting of production data pipelines

QUALIFICATIONS

The ideal candidate must be enthusiastic, self-motivated, hands-on, results-oriented, and a team player.

  • Experience leading cloud data platform / solution development
  • Experience leading teams responsible for creating and maintaining data pipelines and assets
  • Expertise in object-oriented and functional programming concepts
  • Expertise with distributed systems utilizing tools such as Spark
  • Experience using cloud-based data technologies to construct multi-stage data pipelines; experience with Microsoft Azure is a plus
  • Proficiency with workflow orchestration concepts
  • Proficiency with SQL and relational databases such as SQL Server and MySQL.
  • Proficiency with Python in a data engineering context (e.g., Pandas, PySpark, SPARQL)
  • Experience with source control tools such as Git and Azure DevOps
  • Experience working in Agile Scrum environments
  • Demonstrated team leadership / mentoring experience
  • Strong listening, interaction, and collaboration skills
  • Strong written and spoken communication skills
  • Experience working with graph / NoSQL databases such as Redis or MongoDB is a plus
