
Dillip Ku. Nayak Email: dilliphss@gmail.com
Software Engineer Mobile: +91-8095920176
PROFESSIONAL SUMMARY:

 Over 5.8 years of IT experience in Application Development, including 3 years of
experience in Big Data processing as a Spark and Hadoop developer, with its ecosystem
components Spark Core, Spark SQL, Kafka, Hive, HDFS, Sqoop, and YARN. Knowledge of
Hadoop HDFS architecture and the MapReduce framework.
 Worked on the Cloudera Distribution of Hadoop.
 In-depth knowledge and hands-on experience with Apache Hadoop components such as
HDFS, Hive (HiveQL), Pig, and Sqoop.
 Capable of processing large sets of structured and semi-structured data.
 Loaded data into HDFS from dynamically generated files and relational database
management systems using Sqoop.
 Expertise in client-server application development using Oracle 11g/10g/9i, PL/SQL,
SQL*Plus, TOAD, and SQL*Loader.
 Creation/modification of database objects such as tables, packages, procedures, functions,
triggers, views, global temporary tables, sequences, and synonyms.
 Experience using PL/SQL Collections, Ref Cursors, BULK COLLECT, bulk binding, PRAGMA
AUTONOMOUS_TRANSACTION, and Oracle-supplied packages (see the sketch after this list).
 Experience in performance tuning of queries.
 Good knowledge of table partitioning and materialized views.
 Involved in all phases of the SDLC (Software Development Life Cycle), from analysis and
design through development, testing, implementation, and maintenance, with timely delivery
against aggressive deadlines.
 Experienced with UNIX, including basic commands and shell scripting.
 Worked in 24/7 production support.
 Involved in resolving production problems for applications and ensuring all support
Service Level Agreements are met.
 Excellent problem-solving and interpersonal skills; a quick learner and an excellent team
player.
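
A minimal PL/SQL sketch of the BULK COLLECT / bulk-bind pattern mentioned above; the table
and column names (emp_staging, emp_target) are hypothetical placeholders, not from an
actual project:

    -- Hypothetical example: fetch rows in batches with BULK COLLECT,
    -- then apply them with a single bulk-bound FORALL statement.
    DECLARE
      TYPE t_ids  IS TABLE OF emp_staging.employee_id%TYPE;
      TYPE t_sals IS TABLE OF emp_staging.salary%TYPE;
      l_ids  t_ids;
      l_sals t_sals;
      CURSOR c_src IS SELECT employee_id, salary FROM emp_staging;
    BEGIN
      OPEN c_src;
      LOOP
        FETCH c_src BULK COLLECT INTO l_ids, l_sals LIMIT 500;  -- 500-row batches
        EXIT WHEN l_ids.COUNT = 0;
        FORALL i IN 1 .. l_ids.COUNT  -- one context switch per batch, not per row
          UPDATE emp_target
             SET salary = l_sals(i)
           WHERE employee_id = l_ids(i);
      END LOOP;
      CLOSE c_src;
      COMMIT;
    END;
    /

Batching with LIMIT keeps memory bounded, while FORALL avoids the per-row SQL/PL-SQL
context switches that make row-by-row loops slow.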
PROFESSIONAL EXPERIENCE:
 Working as a Software Engineer at DXC Technology (formerly Hewlett Packard Enterprise),
Bangalore, from Dec 2015 to date.
TECHNICAL SKILLS:

Apache Spark      : Spark Core, Spark SQL, Spark Streaming
Hadoop Ecosystem  : HDFS, YARN, Hive, Pig Latin, Sqoop, MapReduce
Languages         : SQL, PL/SQL, Scala (Spark)
Databases         : Oracle 9i, 10g, and 11g
Operating Systems : Windows, Linux
Tools             : IntelliJ, Eclipse, SQL Developer, SBT, SVN, GitHub, WinSCP, PuTTY

EDUCATION:

 Completed B.Tech in ETC from BPUT.


PROJECT #3

Project Name : WWCLASS (Supply Chain Operations Universal Tracking)
Domain       : Supply Chain
Role         : Hadoop/Spark Developer

Description:

WWCLASS records the classification information for parts and products, applicable to both
HPE and HPI across all regions (APJ, AMS, EMEA). The classification data contains HTS and
ECCN codes, which are import and export codes with respect to the countries involved.

Roles and Responsibilities:

 Analyzed the functional specifications as per the requirements to understand and
identify the core functional needs.
 Created Hive tables with dynamic partitions and buckets for sampling, and worked on
them using HiveQL (see the sketch after this list).
 Worked with compression techniques such as Snappy and Gzip to save storage and
optimize data transfer over the network using Parquet and ORC files.
 Developed HiveQL scripts for data analysis and ETL purposes, and extended the default
functionality by writing User Defined Functions (UDFs) for data-specific processing.
 Worked on migrating HiveQL into Spark to minimize query response time.
 Implemented scoring in Spark using Spark Core, Spark SQL, and Scala for faster scoring
and processing of data.
 Worked on streaming concepts to load batch files into the database.
 Used the Spark API over Cloudera Hadoop YARN to perform analytics on data in Hive.
 Fixed defects.
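
A minimal HiveQL sketch of the dynamically partitioned, bucketed table pattern described
above; the table and column names (parts_class, parts_class_staging) are illustrative
assumptions, not the actual project schema:

    -- Hypothetical example: a bucketed table partitioned by region,
    -- loaded via dynamic partitioning and stored as Parquet.
    SET hive.exec.dynamic.partition = true;
    SET hive.exec.dynamic.partition.mode = nonstrict;

    CREATE TABLE parts_class (
      part_id   STRING,
      hts_code  STRING,   -- import code
      eccn_code STRING    -- export code
    )
    PARTITIONED BY (region STRING)           -- e.g. APJ, AMS, EMEA
    CLUSTERED BY (part_id) INTO 16 BUCKETS   -- buckets enable sampling and faster joins
    STORED AS PARQUET;

    INSERT OVERWRITE TABLE parts_class PARTITION (region)
    SELECT part_id, hts_code, eccn_code, region
    FROM parts_class_staging;                -- hypothetical source table

Partition pruning on region and bucketed sampling on part_id both cut the amount of data
each HiveQL query has to scan.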
PROJECT #2

Project Name : JPOPS (Japan Order Prioritization System)
Customer     : HPI
Domain       : Manufacturing
Role         : SQL/PL-SQL Developer
Languages    : Oracle (PL/SQL, SQL)
Database     : Oracle 11g
Environment  : SQL Developer, TOAD, UNIX shell scripting, JIRA, Java
Description:

This system is used by HPI, whereas the existing OPS system is used by HPE; it serves
Japan only. Orders are taken there and prioritized locally. APJ Fusion is the single
order-taking system: normally all HPI orders come to APJ Fusion, where order prioritization
is done, meaning the customer gets a delivery date based on the availability of material.

In Japan there are two types of process:

1st: Japan has its own specific process based on local conditions. Whatever orders come to
Fusion, they take, perform their own prioritization on, and send the available date back
to Fusion.
2nd: Fixed TAT (turnaround time, i.e. 5 days in Japan), meaning that once they take an
order, they deliver within 5 days.

On weekdays the critical jobs run, whereas on weekends non-critical jobs, i.e. data load
jobs, run. During the day, users work with the front end, which is written in .NET; at
night, the necessary data is loaded and the important batch jobs run.

Roles and Responsibilities:

• Involved in developing and handling PL/SQL packages, procedures, and functions.
• Coordinated with the front-end design team to provide them with the necessary stored
packages and procedures, and the necessary insight into the data.
• Interacted with clients to understand the requirements.
• Created materialized views and partitioned tables for performance reasons (see the
sketch after this list).
• Designed SQL*Loader control files to load data from multiple flat files into the
database.
• Used Ref Cursors, BULK COLLECT, dynamic SQL, and dynamic Ref Cursors; involved in
unit testing.
• Monitored day-to-day processing for different data loads and resolved issues.
• Resolved issues on a priority basis.
• Involved in the whole life cycle of the project.
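
A minimal sketch of the materialized-view pattern noted above; the object names
(mv_order_priority, orders) are hypothetical:

    -- Hypothetical example: precompute an aggregate once a day
    -- instead of recomputing it in every report query.
    CREATE MATERIALIZED VIEW mv_order_priority
    BUILD IMMEDIATE
    REFRESH COMPLETE
    START WITH SYSDATE NEXT SYSDATE + 1   -- refresh daily
    AS
    SELECT priority,
           COUNT(*)           AS order_cnt,
           MIN(delivery_date) AS earliest_delivery
    FROM   orders                          -- hypothetical table
    GROUP BY priority;

Reports then read the precomputed rows rather than aggregating the base table on every run.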

PROJECT #1

Project Name : FIT (Eiffel)
Customer     : HPI
Languages    : Oracle (PL/SQL, SQL)
Database     : Oracle 10g
Environment  : SQL Developer, UNIX shell scripting, JIRA, Java

Description:

EIFFEL is a centralized booking system that processes intercompany, revenue, and inventory
invoices and uses business logic to book invoice data to the book accounts. EIFFEL is the
system of record for IC invoicing associated with buy/sell Level A and Level B, Inventory,
and Revenue. EIFFEL is part of the Financial Operations Systems and is a middleware system.
EIFFEL has three subsystems, the first being EIFFEL Intercompany (IC).

Roles and Responsibilities:

 Involved in developing and handling PL/SQL packages, procedures, and database
triggers.
 Involved in tuning SQL queries using Quest control tools and manually via EXPLAIN
PLAN.
 Coordinated with the front-end design team to provide them with the necessary stored
packages and procedures, and the necessary insight into the data.
 Involved in developing UNIX shell scripts for loading database tables.
 Loaded data from flat files into database tables using SQL*Loader (see the sketch
after this list).
 Prepared documentation for requirements, design, installation, unit testing, and
system integration.
 Created materialized views and partitioned tables for performance reasons.
 Monitored day-to-day processing for different data loads and resolved issues.
 Resolved issues on a priority basis.
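
A minimal SQL*Loader control-file sketch for the flat-file loads described above; the file
and table names (invoices.dat, invoice_staging) are illustrative:

    -- Hypothetical example: load a comma-delimited flat file into a
    -- staging table; run as: sqlldr userid=... control=invoices.ctl
    LOAD DATA
    INFILE 'invoices.dat'          -- hypothetical flat file
    BADFILE 'invoices.bad'         -- rejected records land here
    APPEND
    INTO TABLE invoice_staging     -- hypothetical staging table
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    TRAILING NULLCOLS
    (
      invoice_id,
      invoice_date  DATE "YYYY-MM-DD",
      amount
    )

A control file like this maps each delimited field to a column, so the same loader handles
multiple flat-file layouts by swapping control files.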
Declaration:

I hereby declare that all statements made herein are true and correct to the best of my knowledge
and belief.

Place: Bangalore                                                    Dillip Kumar Nayak
