Description and Requirements
Data Engineer, LATAM Data Hub
Role Value Proposition:
At LATAM Data Hub (LDH), our mission is to build the next-generation data lakehouse for MetLife and to help deploy it across various LATAM countries. We have developed a world-class, cloud-native platform to enable reporting, analytics, data supply pipelines, and real-time supply of data to various digital and non-digital channels. The platform leverages cutting-edge open-source and proprietary technologies to create a highly configurable system that can be adapted to individual market needs quickly and at low cost. The platform runs in a fully containerized, elastic cloud environment and is designed to scale to serve millions of users.
We are looking for a Senior Data Engineer with a track record of designing and implementing large, complex technology projects at a global scale. The ideal candidate has a solid foundation in hands-on ETL and analytical warehouse development, understands the complexities of managing end-to-end data pipelines, and has in-depth knowledge of data governance and data management concepts. To be successful in this role, the candidate needs a balance of product-centric technical expertise and the ability to navigate complex deployments involving multiple systems and teams. This role requires interaction with technical staff and senior business and IT partners around the world. This position is also responsible for ensuring operational readiness by incorporating configuration management, exception handling, logging, and the operationalization of end-to-end batch and real-time data pipelines for ingesting, managing, and processing data in the hub.
Key Responsibilities:
· Design, build and maintain efficient and scalable extract, transform, load (ETL) pipelines to support business requirements.
· Analyze and optimize data pipelines to enhance execution speed and reliability through the development of quality code.
· Collaborate with DevOps team to implement robust monitoring and alerting solutions for critical workflows.
· Implement and manage infrastructure using Terraform (or similar) to ensure consistency and scalability.
· Interact closely with stakeholders, including Data Scientists, Developers, Analysts, and Business Teams, to align technical solutions with business goals.
· Demonstrate an appetite for learning new technologies and readiness to work with cutting-edge cloud technologies.
· Ingest large volumes of data from various platforms as needed, writing high-performance, reliable, and maintainable ETL and ELT code.
· Provide technical support, investigate and resolve production issues in data pipelines, ensuring minimal disruption to operations.
Essential Business Experience and Technical Skills:
Required
- 5+ years of ETL and data warehousing development experience
- 3+ years of experience designing ETL, ELT, and data lakes on cloud or big data platforms
- Demonstrated experience implementing and deploying scalable, performant data hubs at global scale
- Demonstrated experience with cutting-edge database technologies and cloud services such as Azure, GCP, Databricks, or Snowflake; deep experience in technologies such as Spark (Scala/Python/Java), ADLS, Kafka, SQL, Synapse SQL, Cosmos DB, graph databases, or others
- Hands-on expertise in implementing analytical data stores on the Azure platform using ADLS, Azure Data Factory, Databricks, and Cosmos DB (Mongo/Graph API), or others
- Strong analytical skills related to working with unstructured datasets
- Strong problem-solving abilities and effective collaboration in cross-functional teams.
- Strong Python language knowledge.
- Strong SQL knowledge and data analysis skills for data anomaly detection and data quality assurance.
- 3+ years of experience with Terraform for infrastructure provisioning.
- Eagerness to learn new technologies on the fly and ship to production
- Excellent communication skills: Demonstrated ability to explain complex technical content to both technical and non-technical audiences
- Experience working on complex, large-scale, multi-team enterprise programs using agile development methodologies
- Experience in solution implementation, performance testing, and tuning: ADLS, Synapse SQL, or GCP BigQuery and GCS management and performance tuning (partitioning/bucketing).
- Knowledge of computational complexity
- Bachelor’s degree in computer science or related field
Preferred:
- Working knowledge of English
- Experience in Data Warehouse projects.
Our benefits are designed to care for your holistic well-being, with programs for physical and mental health, financial wellness, and family support. We offer major medical insurance and life insurance together with a competitive compensation package, including performance bonuses, a savings fund, and a pension plan. We also offer extended parental and adoption leave, as well as additional benefits such as volunteer time off, days off for your birthday and Cultural Heritage Day, cultural and sporting events, and much more!