Job Description:

Mandatory Skills

DB: Hive, Impala, HBase.

Data Processing: Spark Core and Spark SQL.

Build Tool: Maven; Testing Framework: Cucumber

Required Skills :

· Overall 8 to 10 years of IT experience.

· Extensive experience in Big Data, Analytics, and ETL technologies.

· Application development background, with knowledge of analytics, statistical, and big data computing libraries.

· Minimum of 3 years of experience in Spark/PySpark and Python/Scala/Java programming.

· Hands-on experience in designing, coding, and developing complex data pipelines using big data technologies.

· Experience in developing Big Data applications and in designing and building highly scalable data pipelines.

· Expertise in Python, SQL databases, Spark, and non-relational databases.

· Ingest data from files, streams, and databases, and process it using Spark and Python.

· Develop programs in PySpark and Python for data cleaning and processing.

· Design and develop distributed, high-volume, high-velocity, multi-threaded event processing systems.

· Develop efficient software code leveraging Python and Big Data technologies for the various use cases built on the platform.

Job Category: Developer
Job Location: Raleigh/Charlotte, NC

Apply for this position

Allowed Type(s): .pdf, .doc, .docx