Skills: Data Engineer | Location: Melbourne, Victoria, Australia
Role: Data Engineer
Location: Remote
Type: Contract
NOTE: Must have experience implementing Spark pipelines.

Job Description:
Implementing Spark pipelines in Scala.

Spark:
· Solid understanding of, and hands-on experience with, Spark.
· Able to create, deploy, and execute Spark applications that apply complex transformations to data.
· At least mid-level: for example, can create, join, filter, and transform DataFrames efficiently.

Scala (focused on Spark applications):
· Enough Scala to implement Spark pipelines; expert-level knowledge is not required, but good fundamentals are.
· Understands the primary types and structures: case classes, tuples, and collections.
· Can transform collections (map, flatMap, foreach, etc.) and handle absent values, since nulls are discouraged in Scala.

Reference: Data Engineer - Remote jobs
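The DataFrame skills the posting asks for (create, join, filter, transform) can be sketched in a few lines. This is a minimal illustration, not part of the posting: it assumes Spark 3.x running in local mode, and the `customers`/`orders` schema and the `bigSpenders` helper are invented for the example.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object PipelineSketch {
  // Aggregate order totals per customer, join in customer names,
  // and keep only totals above the threshold (hypothetical pipeline).
  def bigSpenders(customers: DataFrame, orders: DataFrame, threshold: Double): DataFrame =
    orders
      .groupBy(col("customerId")).agg(sum(col("amount")).as("total")) // transform: aggregate
      .join(customers, col("customerId") === col("id"))               // join on customer id
      .filter(col("total") > threshold)                               // filter rows
      .select(col("name"), col("total"))                              // project columns

  def main(args: Array[String]): Unit = {
    // local[*] is for illustration only; a real job would get its master from spark-submit
    val spark = SparkSession.builder().appName("pipeline-sketch").master("local[*]").getOrCreate()
    import spark.implicits._

    // Create DataFrames from in-memory data (invented sample rows)
    val customers = Seq((1, "Alice"), (2, "Bob")).toDF("id", "name")
    val orders    = Seq((1, 250.0), (1, 80.0), (2, 40.0)).toDF("customerId", "amount")

    bigSpenders(customers, orders, 100.0).show()
    spark.stop()
  }
}
```

In production the same pipeline would read from and write to external storage (e.g. Parquet) instead of in-memory sequences, but the DataFrame operations are identical.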
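The Scala fundamentals listed above (case classes, tuples, collection transforms, avoiding nulls) look roughly like this. A minimal sketch added for illustration; the `User` type and helper names are invented, not from the posting.

```scala
object ScalaFundamentals {
  // Case class: an immutable record with structural equality. The possibly
  // missing email is modelled as Option[String] rather than a nullable String.
  case class User(id: Int, name: String, email: Option[String])

  // map: transform every element of a collection
  def names(users: List[User]): List[String] = users.map(_.name)

  // flatMap: transform and flatten in one step; None emails simply disappear
  def emails(users: List[User]): List[String] = users.flatMap(_.email)

  // Option instead of null: absence is explicit and must be handled
  def firstEmailOrDefault(users: List[User]): String =
    users.headOption.flatMap(_.email).getOrElse("n/a")

  def main(args: Array[String]): Unit = {
    val users = List(
      User(1, "Alice", Some("alice@example.com")),
      User(2, "Bob", None) // missing email is None, never null
    )

    // foreach is for side effects only; it returns Unit
    names(users).foreach(println)

    // Tuples group a few values without declaring a class
    val pair: (Int, String) = (users.head.id, users.head.name)
    println(pair._2)
  }
}
```

The same `map`/`flatMap`/`filter` vocabulary carries over directly to Spark's Dataset API, which is one reason these collection fundamentals matter for Spark work.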