Job Update: BA/BS in Computer Science Vacancy at PepsiCo

Kajal | Sep 27, 2022 | Views 78143

Overview:

PepsiCo is hiring an experienced Analyst at its Hyderabad office. As an Analyst, Data Modeling, your focus will be to partner with D&A Data Foundation team members to create data models for Global projects. This includes analyzing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse while satisfying project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of Data Modeling, working closely with the Data Governance, Data Engineering, and Data Architecture teams.

The complete details of this job are as follows:

Roles and Responsibilities:

The Ideal Candidate should be able to:

Complete conceptual, logical, and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse, or other Cloud data warehousing technologies.

Govern data design/modeling – documenting metadata (business definitions of entities and attributes) and constructing database objects, for baseline and investment-funded projects, as assigned.

Support data analysis, requirements gathering, solution development, and design reviews for enhancements to, or new, applications/reporting.

Support assigned project contractors (both on- & off-shore), orienting new contractors to standards, best practices, and tools.

Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of the changes or new development.

Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.

Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.

Partner with IT, data engineering, and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management: business and financial policies, security, local-market regulatory rules, and consumer privacy-by-design principles (PII management), all linked across fundamental identity foundations.

Assist with data planning, sourcing, collection, profiling, and transformation.

Create Source To Target Mappings for ETL and BI developers.

Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing.

Partner with the Data Governance team to standardize the classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.

Support data lineage and mapping of source system data to canonical data stores for research, analysis and productization.
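One of the responsibilities above is creating Source-To-Target Mappings (STMs) for ETL and BI developers. The sketch below shows, in plain Python, the kind of information such a mapping typically captures; all table names, column names, and transformation rules here are hypothetical examples, not PepsiCo's actual schema:

```python
# Minimal sketch of a Source-To-Target Mapping (STM), expressed as plain
# Python data. All table and column names are hypothetical examples.
from dataclasses import dataclass

@dataclass(frozen=True)
class ColumnMapping:
    source_table: str
    source_column: str
    target_table: str
    target_column: str
    transformation: str  # rule the ETL developer implements

stm = [
    ColumnMapping("src.orders", "ord_dt", "dw.fact_orders", "order_date",
                  "CAST to DATE; reject rows where NULL"),
    ColumnMapping("src.orders", "cust_id", "dw.fact_orders", "customer_key",
                  "look up surrogate key in dw.dim_customer"),
    ColumnMapping("src.customers", "cust_nm", "dw.dim_customer", "customer_name",
                  "TRIM and title-case"),
]

# An ETL or BI developer can filter the mapping by target table:
fact_mappings = [m for m in stm if m.target_table == "dw.fact_orders"]
print(len(fact_mappings))  # 2
```

In practice an STM is often maintained as a spreadsheet or in a modeling tool, but the content is the same: source field, target field, and the transformation rule between them.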

The Ideal Candidate should also have:

Excellent communication skills, both verbal and written.

Comfortable with change, especially that which arises through company growth.

Ability to understand and translate business requirements into data and technical requirements with minimal help from senior members of the team.

Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment.

Good interpersonal skills; comfortable managing trade-offs.

Foster a team culture of accountability, communication, and self-management.

Consistently attain/exceed individual and team goals.

Eligibility:

5+ years of overall technology experience, including 3+ years of data modeling and systems architecture.

1+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools.

3+ years of experience developing enterprise data models.

Experience building solutions in the retail or supply chain space is a plus.

Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models).

Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.

Experience building/operating highly available, distributed systems of data extraction, ingestion, and processing of large data sets.

Experience with at least one MPP database technology such as Redshift, Synapse, Teradata, or Snowflake.

Experience with version control systems like GitHub, and with deployment and CI tools.

Experience with Azure Data Factory, Databricks, and Azure Machine Learning is a plus.

Experience with metadata management, data lineage, and data glossaries is a plus.

Working knowledge of agile development, including DevOps and DataOps concepts.

Familiarity with business intelligence tools (such as PowerBI).

BA/BS in Computer Science, Math, Physics, or other technical fields.
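The data profiling and quality tools named in the requirements above (Apache Griffin, Deequ, Great Expectations) share a common idea: declarative expectations evaluated against a dataset. The following is a minimal illustrative sketch of that idea in plain Python; it does not use any of those tools' actual APIs, and the sample data is hypothetical:

```python
# Minimal sketch of declarative data-quality checks, in the spirit of tools
# like Deequ or Great Expectations (plain Python, not the real libraries' APIs).
def expect_no_nulls(rows, column):
    """Every row must have a non-null value in `column`."""
    return all(r.get(column) is not None for r in rows)

def expect_values_between(rows, column, low, high):
    """Every value in `column` must fall within [low, high]."""
    return all(low <= r[column] <= high for r in rows)

# Hypothetical sample data:
rows = [
    {"order_id": 1, "quantity": 3},
    {"order_id": 2, "quantity": 5},
]

results = {
    "order_id not null": expect_no_nulls(rows, "order_id"),
    "quantity in [0, 100]": expect_values_between(rows, "quantity", 0, 100),
}
print(results)  # both checks evaluate to True
```

The real tools add scheduling, reporting, and rich expectation catalogs on top of this basic evaluate-expectations-per-dataset pattern.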

To Apply for this Job Click Here

Disclaimer: The recruitment information provided above is for informational purposes only and has been taken from the official site of the organisation. We do not provide any recruitment guarantee. Recruitment is to be done as per the official recruitment process of the company or organisation that posted the vacancy. We do not charge any fee for providing this job information. Neither the author nor Studycafe and its affiliates accept any liability for any loss or damage of any kind arising out of any information in this article, or for any actions taken in reliance thereon.
