Role: Data Engineer
• Minimum 5 years of work experience building data pipelines using Python, PySpark, and Django.
• Hands-on experience with Python and related packages (e.g., NumPy, pandas) to load and scrape data.
• Hands-on experience with at least one tool in the Hadoop ecosystem (HDFS, AWS Glue, MapReduce, YARN, Hive, Pig, Impala, Spark, Kafka).
• Working experience with relational and non-relational databases and familiarity with data modeling concepts.
• Experience working as part of a larger Scrum team and understanding of related Scrum ceremonies.
• Working knowledge of Unix/Linux.
• Knowledge of cloud platforms (e.g., AWS, Azure, GCP).
Organization Details
TCS / Tata Consultancy Services