Job Description:
• Work with the business lines to understand the data ingestion and cloud data engineering requirements
• Work with the data modellers to translate those requirements into cloud/on-prem data engineering workload designs, then implement them as Databricks and Azure Data Factory (ADF) pipelines.
• Very strong hands-on skills, with 5+ years’ experience developing Databricks and Azure Data Factory pipelines
• Follow the medallion architecture principle (three layers of data in Azure Data Lake), landing data in the right data zones based on the requirement
• Work and collaborate in multi-disciplinary Agile teams, adopting Agile spirit, methodology and tools
• Help code and scale applications in a multi-cloud environment integrated with on-premises systems
• Work with the platform lead to identify and remediate potential operational risks of the platform.
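The medallion architecture mentioned above can be sketched conceptually as three successive refinement steps. The snippet below is a minimal, dependency-free illustration of the bronze/silver/gold flow; the function names, field names, and zone paths are hypothetical stand-ins. In a real Databricks workload each step would be a PySpark job reading from and writing to Azure Data Lake zone paths.

```python
# Conceptual sketch of the three medallion zones
# (bronze = raw landed data, silver = cleansed/conformed, gold = curated aggregates).
# All names here are illustrative, not actual Databricks APIs.

def to_bronze(raw_records):
    """Land source data as-is, adding only ingestion metadata."""
    return [{**r, "_ingested": True} for r in raw_records]

def to_silver(bronze):
    """Cleanse: drop malformed rows and normalise types."""
    return [
        {"id": int(r["id"]), "amount": float(r["amount"])}
        for r in bronze
        if r.get("id") is not None and r.get("amount") is not None
    ]

def to_gold(silver):
    """Curate: a business-level aggregate, e.g. total amount per id."""
    totals = {}
    for r in silver:
        totals[r["id"]] = totals.get(r["id"], 0.0) + r["amount"]
    return totals

raw = [
    {"id": "1", "amount": "10.5"},
    {"id": "1", "amount": "4.5"},
    {"id": None, "amount": "3.0"},  # malformed row, filtered in silver
]
print(to_gold(to_silver(to_bronze(raw))))  # {1: 15.0}
```

The key design point the zones enforce is that raw data is never mutated in place: each layer reads from the one below it, so any silver or gold table can be rebuilt from bronze.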
• Overall 4–6 years of experience in data engineering and transformation on cloud
• 5+ years of very strong experience in Azure data engineering and Databricks
• Expertise in supporting/developing data warehouse workloads at enterprise level
• Experience in PySpark is required – developing and deploying workloads to run on Spark distributed computing