
Lead Data Scientist, Jubilee Insurance

Role Purpose

The position holder will drive Jubilee's Health Insurance Data Management and Business Intelligence solutions within the data science and analytics team. The right candidate will have experience across all stages of the project lifecycle: data mining, data analytics, requirements gathering, logical and physical design, implementation, testing and deployment. He/She will also oversee data consolidation, ensure reports are designed and developed to specification, and maintain high quality through a quality assurance process, working closely with information managers and leads.


Main Responsibilities

  • Oversee the design, development and deployment of the ETL layer to ensure up-to-date data is available
  • Support the data warehouse team by modifying datasets as per requirements
  • Support initiatives for data integrity and normalization
  • Assess, test and implement new or upgraded software, and assist with strategic decisions on new systems
  • Identify valuable data sources and automate collection processes
  • Undertake pre-processing of structured and unstructured data from different data sources
  • Analyse large amounts of information to discover trends and patterns
  • Build predictive models and machine-learning algorithms
  • Combine models through ensemble modelling
  • Present information using data visualization techniques
  • Maintain detailed and up-to-date specialist knowledge of data management techniques and tools, and implement these within the organization as appropriate
  • Propose solutions and strategies to business challenges
  • Collaborate with engineering and product development teams
  • Provide technical expertise on data storage structures, data mining and data cleansing

Functional skills

  • Data mining techniques using Python and R
  • NoSQL databases (Sybase, Cassandra, MongoDB, CouchDB, Apache Hive)
  • Relational databases and SQL (PL/SQL)
  • Extraction, Transformation and Loading (ETL)
  • Data Warehouse Solution Design
  • Dimensional Modeling
  • Analytics / OLAP Cube Development (MDX)
  • Reports & Dashboard Development
  • Knowledge of Big Data applications, e.g. Apache Kafka and Hadoop technologies, with a clear understanding of MapReduce and HDFS
  • Strong knowledge of and experience with reporting and data mining packages (e.g. Business Objects), databases (SQL etc.), programming (XML, CSS, HTML, JavaScript) and ETL frameworks (e.g. Oracle Data Integrator, Informatica PowerCenter, Microsoft SQL Server Integration Services (SSIS), Apache NiFi, Oracle Warehouse Builder, Sybase ETL)
  • Knowledge of enterprise reporting tools such as OBIEE, BI Publisher, Tableau, Qlik Sense and Power BI


Qualifications

  • Bachelor of Science degree in Computer Science or its equivalent

Relevant Experience

  • Minimum of three (3) years' work experience in data science and database design, or integration experience with both relational and unstructured databases

How To Apply

Applications should be sent quoting the Job Reference Number and Designation given above, before 24th June 2019.

Only shortlisted candidates will be contacted.