• Design, develop, and maintain data pipelines and data models using Azure Databricks and related Azure data services.
• Collaborate with data analysts and business stakeholders to understand data needs and deliver robust, high-performance solutions.
• Build and optimize data architecture to support data ingestion, processing, and analytics workloads.
• Implement best practices for data governance, security, and performance tuning in a cloud-native environment.
• Work with structured and unstructured data from various sources including APIs, files, databases, and data lakes.
• Create reusable code and components for data processing and modeling workflows.
• Monitor and troubleshoot jobs, ensuring data quality, reliability, and efficiency.
• 3+ years of experience as a Data Engineer, Big Data Engineer, or similar role.
• Excellent English communication skills, both spoken and written.
• Strong hands-on experience building and maintaining data pipelines with Azure Databricks, Apache Spark, Delta Lake, and related big data technologies.
• Proficiency in Python, SQL, and PySpark.
• Experience with Azure services: Azure Data Lake Storage (ADLS), Azure Synapse Analytics, Azure Data Factory, Azure Event Hubs, or similar.
• Solid understanding of data modeling (dimensional modeling, star/snowflake schemas).
• Familiarity with CI/CD pipelines and version control (e.g., Git).
• Experience working in Agile/Scrum teams.
• Good problem-solving, critical thinking, and presentation skills.
• Diligence and a strong sense of responsibility in day-to-day work.
• Occasionally lift objects weighing up to 25 pounds.
• Sit for long periods of time.
• Occasionally stoop, kneel, or crouch.
• Use hands and arms to reach for, grasp, and manipulate objects.
Most work time is spent in a home office environment.