Data Engineer | EATON | Job Alert | Latest Job Update Career Height 2022

About Company

The world runs on critical infrastructure and technologies: planes, hospitals, factories, data centres, vehicles, and the electrical grid. People rely on these things every day, and the firms that support them rely on us to help them overcome some of the world’s most difficult power management challenges. Eaton is committed to improving people’s lives and the environment by developing power management technologies that are more reliable, efficient, safe, and sustainable.

We are a power management company with operations in more than 175 countries. Our energy-efficient solutions and services help our customers manage electrical, hydraulic, and mechanical power more effectively, efficiently, safely, and sustainably. We do this by empowering people to use power more effectively, helping businesses operate more sustainably, and encouraging every Eaton employee to think differently about our company, our communities, and the positive influence we can have on the world.

Data Engineer Job Description

Responsibilities

  • Develops solutions to gather data and deliver business tools that add value to the business.
  • Designs, creates, deploys, and manages data architectures and data models.
  • Defines, in alignment with the Business Intelligence team, how data will be stored, consumed, integrated, and managed within Cloudera or other data hub repositories, as well as by any applications using or processing that data.
  • Handles the linkages and transformations between different databases, merging and aggregating data.
  • Locates data sources, analyzes statistics, and implements data quality procedures.
  • Performs data studies of new and diverse data sources.
  • Finds new uses for existing data sources.
  • Conducts scalable data research.
  • Designs, modifies, and builds new data wrangling processes.
  • Collaborates with stakeholders, other teams, database engineers, and data scientists.
  • Develops and maintains scalable data pipelines and builds out new integrations to support continuing increases in data volume and complexity.
  • Collaborates with analytics and business teams to improve the data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.
  • Implements processes and systems to monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.
  • Writes unit/integration tests and documents work.
  • Performs the data analysis required to troubleshoot data-related issues and assists in their resolution.
  • Works closely with a team of frontend and backend engineers, product managers, and analysts.
  • Defines company data assets (data models) and the Spark, SparkSQL, Impala, and HiveSQL jobs that populate them.
  • Designs and develops data integrations and a data quality framework.
  • Designs and evaluates open-source and vendor tools for data lineage.
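To make the data-quality and data-wrangling duties above concrete, here is a minimal, illustrative sketch in plain Python. It is not Eaton's actual stack or code; the function names (`check_rows`, `summarize`) and the toy rows are invented for the example, and a real pipeline would run such rules at scale on a platform like Cloudera or Spark.

```python
# Toy data-quality check and aggregation step of the kind a data
# engineer might build. Purely illustrative; names are hypothetical.

def check_rows(rows):
    """Split rows into valid and rejected based on simple quality rules."""
    valid, rejected = [], []
    for row in rows:
        # Quality rules: plant must be set, quantity must be a positive number.
        if row.get("plant") and isinstance(row.get("qty"), (int, float)) and row["qty"] > 0:
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected

def summarize(rows):
    """Aggregate valid rows into per-plant totals (a tiny 'data model')."""
    totals = {}
    for row in rows:
        totals[row["plant"]] = totals.get(row["plant"], 0) + row["qty"]
    return totals

source = [
    {"plant": "A", "qty": 10},
    {"plant": "A", "qty": 5},
    {"plant": "B", "qty": 7},
    {"plant": "", "qty": 3},    # rejected: missing plant
    {"plant": "B", "qty": -1},  # rejected: non-positive quantity
]
valid, rejected = check_rows(source)
print(summarize(valid))  # {'A': 15, 'B': 7}
print(len(rejected))     # 2
```

The same pattern (validate, quarantine bad records, aggregate into a model) is what the bullets on data quality monitoring and data wrangling describe, just scaled up.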

Eligibilities

  • We are looking for a candidate with strong Python skills who holds a graduate degree in Computer Science/IT.
  • Displays superior technical coding abilities.
  • Strong Python-based data manipulation skills are a must.
  • Knowledge of the supply chain domain is a big plus (not mandatory).
  • Advanced working SQL knowledge and experience with relational databases, including query authoring (SQL) and working familiarity with a variety of databases.
  • Experience with data manipulation on a big data platform such as Cloudera is a plus (not mandatory).
  • Comfortable writing complex Python code implementing business rules on large datasets.
  • Experience building and optimizing data pipelines and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytical skills.
  • Builds processes supporting data transformation, data structures, metadata, dependency, and workload management.
  • A successful history of manipulating, processing, and extracting value from large, disconnected datasets.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Knowledge of data visualization tools such as Power BI, Tableau, etc.
  • Knowledge of ETL tools such as Informatica, Talend, etc. (not mandatory).
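As a rough illustration of the "query authoring (SQL)" requirement above, the sketch below runs a join-plus-aggregation query using Python's built-in sqlite3 module so it is self-contained. The tables, columns, and values are invented for the example; production work would target a warehouse or Cloudera rather than SQLite.

```python
# Illustrative only: authoring a join + aggregation query against a
# relational database. Schema and data are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER, plant TEXT, qty INTEGER);
CREATE TABLE plants (plant TEXT, region TEXT);
INSERT INTO orders VALUES (1,'A',10),(2,'A',5),(3,'B',7);
INSERT INTO plants VALUES ('A','EMEA'),('B','APAC');
""")

# Join orders to their plant's region and total quantities per region.
rows = conn.execute("""
    SELECT p.region, SUM(o.qty) AS total_qty
    FROM orders o
    JOIN plants p ON o.plant = p.plant
    GROUP BY p.region
    ORDER BY p.region
""").fetchall()
print(rows)  # [('APAC', 7), ('EMEA', 15)]
```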

The apply link is given below. Join us for recent updates.