Summary
Responsible for designing and developing scalable data applications, services, pipelines and database solutions to deliver near real-time analytics and insights. Collaborates with product managers, analysts, data scientists and engineering teams to drive data engineering best practices, operational excellence and business outcomes. Role is hybrid and requires in-office presence multiple days per week.
Responsibilities
- Design and develop scalable data analytics solutions using modern technologies in a cloud environment
- Build distributed data processing pipelines and implement data modeling and infrastructure technologies
- Perform code reviews and enforce software engineering principles
- Develop testing tools and CI/CD pipelines to automate delivery
- Identify and resolve data performance and data quality issues
- Provide production support and participate in on-call rotation
- Create monitoring, alerts and logging dashboards to support operations
- Collaborate with stakeholders to deliver self-service BI solutions
Requirements
- Bachelor's or Master's degree in Computer Science or Engineering, or equivalent experience
- 3+ years of experience with a modern programming language such as Java or Python
- 2+ years of experience with databases, SQL, data modeling and automated engineering solutions
- Experience with cloud platforms such as AWS or GCP and big data technologies
- Experience developing near real-time distributed processing using Kafka, Flink, Spark or similar
- Familiarity with Kubernetes, Airflow and CI/CD practices
- Strong understanding of scalable distributed systems and event-driven architectures