Maintain and enhance the performance of existing data pipelines.
Monitor data pipelines and related systems to ensure optimized performance.
Automate data pipelines to increase efficiency.
Debug database scripts and programs and resolve conflicts.
Develop quality assurance tests to ensure pipelines and scripts are accurate.
Collaborate with researchers to understand data requirements and review results of data pipelines.
Adhere to best practices for securely storing, backing up, and archiving data.
Document processes related to data pipeline design, configuration, and performance.
Keep abreast of developments and best practices in database engineering.
Education Qualifications:
Minimum of a Bachelor’s degree in information systems, information technology, computer science, or a related field, plus 5 years of relevant work experience in a software engineering role
Technical Qualifications:
Strong experience working with databases such as Oracle, Microsoft SQL Server, and MySQL
Strong knowledge of developing complex SQL queries and stored procedures
Hands-on experience with scripting languages such as Bash and Python
Must have experience in at least one cloud environment such as AWS, GCP, or Azure, preferably AWS
Exposure to industry-standard ETL tools such as AWS Glue, Google Dataflow, Informatica, Talend, or Pentaho
Personal Qualifications:
Excellent verbal and written communication skills. Strong organizational skills and attention to detail.