Xerago, Chennai is on the lookout for a person with
#Datamodelling skills, having 8 to 10 years of experience, including a few years of managing teams. Should have a good working knowledge of tools like SPSS, SAS, R, or Python. Exposure to the Banking & Insurance industries would be preferred. Please mail your resume to hr@xerago.com #hiring #python
We are #hiring at D Cube Analytics for Data Engineer & Associate Product Architect roles. Preference will be given to those who can join within 15 days.
Send your #resume to hr_india@d3analytics.com
If you are a #dataengineer who likes taking on challenges, then we are the right place for you. Come and join one of the fastest-growing organizations in the biopharma domain.
Job Location: Bangalore (India), USA
Qualification: Bachelor's / Master's in Computer Science
Experience: 3-8 years
SKILL SET
• #sqldatabase, #datamodelling techniques & Data Lake projects, #etltools processes, and performance optimization techniques are a must.
• Minimum of 2 years of experience working with batch / real-time systems using technologies like #databricks, #redshift, #hadoop, #apache #spark, #hive / #impala, and #hdfs
• Minimum of 2 years of experience in #AWS, #azure, or Google Cloud, plus programming languages (preferably Python)
• Experience in the #biopharma domain will be a very big plus.