Skills:Data Engineer | Location: United States Of America
Position: Data Engineer
Location: Remote
Contract

Job Description: Implementing Spark pipelines using Scala.

Spark:
· The candidate must have a solid understanding of and hands-on experience with Spark.
· Able to create, deploy, and execute Spark applications that apply complex transformations to data.
· At least mid-level proficiency is required: for example, creating, joining, filtering, and transforming DataFrames efficiently.

Scala (focused on Spark applications):
· The candidate should know enough Scala to implement Spark pipelines; expert-level Scala is not required, but good fundamentals are.
· Understands the primary types of objects/structures: case classes, tuples, and collections.
· Knows how to transform collections (map, flatMap, foreach, etc.) and how to handle nulls, which are discouraged in Scala.

Reference: Data Engineer Remote jobs
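As a rough illustration of the Scala fundamentals listed above (a minimal sketch for candidates, not part of the posting itself — names and values are invented), the collection transforms and the idiomatic Option-based alternative to nulls look like this:

```scala
object ScalaFundamentalsDemo extends App {
  // A simple case class, one of the core structures mentioned in the posting
  case class User(id: Int, name: String)

  val nums: List[Int] = List(1, 2, 3)

  // map: transform each element
  val doubled: List[Int] = nums.map(_ * 2) // List(2, 4, 6)

  // flatMap: transform each element into a collection, then flatten
  val repeated: List[Int] = nums.flatMap(n => List(n, n)) // List(1, 1, 2, 2, 3, 3)

  // foreach: perform a side effect per element (returns Unit)
  doubled.foreach(println)

  // Nulls are discouraged in Scala: Option(...) converts a possibly-null
  // value into Some(value) or None, so callers never touch a null reference.
  val fromJava: String = null // e.g. a value coming from a Java API
  val maybeName: Option[String] = Option(fromJava) // None here

  val greeting: String =
    maybeName.map(n => s"Hello, $n").getOrElse("Hello, stranger")
  println(greeting) // prints "Hello, stranger"
}
```

The same map/flatMap/filter vocabulary carries over to Spark's Dataset API, which is one reason the posting asks for these collection fundamentals.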