Employee turnover prediction
Introduction
One challenge that large organizations face today is understanding and predicting which employees are going to leave the business: this is employee turnover prediction. With a large workforce, you want to know which employees are at risk of leaving at any given time, how long they are expected to stay, and which interventions stand a chance of reducing the attrition of valuable employees. Moreover, frequent turnover causes major financial losses [1], so you want to identify and address quickly the issues that push employees out of your company.
Survival analysis is one of the best-suited approaches for predicting employee turnover: contrary to classification methods, it lets us estimate each employee's individual risk of quitting over time.
Packages
Code
library(tidyverse)
library(formatR)
library(corrplot)
library(RColorBrewer)
library(survival)
library(ggfortify)
library(caret)
library(riskRegression)
library(stringr)
library(zoo)
library(randomForestSRC)
Presentation of the dataset
Here is the dataset [2] we will study in R:
Code
# Dataset importation
turnover <- read.csv("turnover2.csv", sep = ";", header = TRUE)

# Conversion of categorical variables into "factors"
turnover <- turnover %>% mutate(gender = as.factor(gender),
                                industry = as.factor(industry),
                                profession = as.factor(profession),
                                traffic = as.factor(traffic),
                                coach = as.factor(coach),
                                head_gender = as.factor(head_gender),
                                greywage = as.factor(greywage),
                                Transportation = as.factor(Transportation)) %>%
  rename(transport = Transportation)

# First rows of the dataset
# glimpse(turnover)
turnover
And here are the features' descriptions (a short sketch right after this list shows how duration and event are combined into the survival outcome used throughout):
- duration: Experience in months.
- event: Censorship flag (1 if quit, 0 otherwise).
- gender: Gender (f for female or m for male).
- age: Age in years.
- industry: Employee's industry (Agriculture, Banks, Building, Consult, HoReCa, IT, manufacture, Mining, Pharma, PowerGeneration, RealEstate, Retail, State, Telecom, transport, etc).
- profession: Employee's profession (Accounting, BusinessDevelopment, Commercial, Consult, Engineer, Finanñe, HR, IT, Law, manage, Marketing, PR, Sales, Teaching, etc).
- traffic: How the employee came to the company: advert (direct contact of one's own initiative), recNErab (direct contact on the recommendation of a friend who is not an employee of the company), referal (direct contact on the recommendation of a friend who is an employee of the company), youjs (applied on a job site), KA (brought in by a recruiting agency), rabrecNErab (the employer contacted the employee on the recommendation of someone who knows them), empjs (the employer reached the employee on a job site).
- coach: Presence of a coach during probation (my head, yes or no).
- head_gender: Gender of the supervisor (f for female or m for male).
- greywage: Whether the salary is fully registered with tax authorities (white if it is, grey otherwise).
- transport: Employee's means of transportation (bus, car or foot).
- extraversion, independ, selfcontrol, anxiety and novator: Scores between 1 and 10 given by the Big Five Personality Test.
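Since duration and event jointly encode a right-censored time-to-event outcome, here is a minimal sketch (using the survival package loaded above and the turnover data frame imported above) of the Surv object on which every model below is built:
Code
# Minimal sketch: duration (follow-up in months) and event (1 = quit, 0 = censored)
# are combined into a single right-censored outcome object.
surv.obj <- Surv(turnover$duration, turnover$event)
head(surv.obj)  # censored experiences are printed with a trailing "+"
This Surv(duration, event) construct is exactly what appears on the left-hand side of every model formula in the rest of the post.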
Data preprocessing
Let's look for missing values and duplicate observations:
Code
cat("Number of missing values :", sum(is.na(turnover)))
Number of missing values : 0
Code
cat("Number of duplicats: ", turnover %>%
duplicated() %>%
sum())
Number of duplicats: 13
There are no missing values, but we have found some duplicates that need to be removed from our dataset in order to improve its quality.
Code
turnover <- unique(turnover)
Exploratory data analysis
Now, let's investigate our dataset with data visualization methods: they help us better understand its main characteristics, discover patterns, spot anomalies and check assumptions.
First, we are interested in the variable duration according to the variable event:
Code
turnover %>%
  ggplot(aes(x = duration, color = factor(event),
             fill = factor(event))) +
  geom_histogram(aes(y = ..density..), alpha = 0.5) +
  geom_density(alpha = 0.05) + scale_color_brewer(palette = "Dark2") +
  scale_fill_brewer(palette = "Dark2") + theme_minimal() +
  theme(legend.position = "top")
It seems that almost half of the data is censored. Let's verify this by hand:
Code
n <- dim(turnover)[1]
cat((n - sum(turnover$event))/n * 100, "% of observations are censored")
49.82079 % of observations are censored
We must be careful with our analysis: it would be too hasty to conclude that half of the employees of any company end up leaving it, since turnover also depends on the other variables. Keeping that in mind, let's plot histograms for the continuous covariates and bar charts for the discrete ones, colored according to the value of event:
Code
# Selection of covariates (that are only discrete)
turnover.cat <- turnover %>%
  select_if(is.factor) %>%
  mutate(event = turnover$event)

# Continuous covariates
turnover %>% ggplot(aes(x = duration, color = factor(event),
                        fill = factor(event))) +
  geom_histogram(aes(y = ..density..), alpha = 0.5) +
  geom_density(alpha = 0.05) +
  scale_color_brewer(palette = "Dark2") + scale_fill_brewer(palette = "Dark2") +
  theme_minimal() + theme(legend.position = "top")

turnover %>% ggplot(aes(x = event, color = factor(event),
                        fill = factor(event))) +
  geom_histogram(aes(y = ..density..), alpha = 0.5) +
  geom_density(alpha = 0.05) +
  scale_color_brewer(palette = "Dark2") + scale_fill_brewer(palette = "Dark2") +
  theme_minimal() + theme(legend.position = "top")

turnover %>% ggplot(aes(x = age, color = factor(event),
                        fill = factor(event))) +
  geom_histogram(aes(y = ..density..), alpha = 0.5) +
  geom_density(alpha = 0.05) +
  scale_color_brewer(palette = "Dark2") + scale_fill_brewer(palette = "Dark2") +
  theme_minimal() + theme(legend.position = "top")

turnover %>% ggplot(aes(x = extraversion, color = factor(event),
                        fill = factor(event))) +
  geom_histogram(aes(y = ..density..), alpha = 0.5) +
  geom_density(alpha = 0.05) +
  scale_color_brewer(palette = "Dark2") + scale_fill_brewer(palette = "Dark2") +
  theme_minimal() + theme(legend.position = "top")

turnover %>% ggplot(aes(x = independ, color = factor(event),
                        fill = factor(event))) +
  geom_histogram(aes(y = ..density..), alpha = 0.5) +
  geom_density(alpha = 0.05) +
  scale_color_brewer(palette = "Dark2") + scale_fill_brewer(palette = "Dark2") +
  theme_minimal() + theme(legend.position = "top")

turnover %>% ggplot(aes(x = selfcontrol, color = factor(event),
                        fill = factor(event))) +
  geom_histogram(aes(y = ..density..), alpha = 0.5) +
  geom_density(alpha = 0.05) +
  scale_color_brewer(palette = "Dark2") + scale_fill_brewer(palette = "Dark2") +
  theme_minimal() + theme(legend.position = "top")

turnover %>% ggplot(aes(x = anxiety, color = factor(event),
                        fill = factor(event))) +
  geom_histogram(aes(y = ..density..), alpha = 0.5) +
  geom_density(alpha = 0.05) +
  scale_color_brewer(palette = "Dark2") + scale_fill_brewer(palette = "Dark2") +
  theme_minimal() + theme(legend.position = "top")

turnover %>% ggplot(aes(x = novator, color = factor(event),
                        fill = factor(event))) +
  geom_histogram(aes(y = ..density..), alpha = 0.5) +
  geom_density(alpha = 0.05) +
  scale_color_brewer(palette = "Dark2") + scale_fill_brewer(palette = "Dark2") +
  theme_minimal() + theme(legend.position = "top")

# Discrete covariates
turnover.cat %>% ggplot(aes(x = gender, color = factor(event),
                            fill = factor(event))) +
  geom_bar(alpha = 0.5) +
  scale_color_brewer(palette = "Dark2") + scale_fill_brewer(palette = "Dark2") +
  theme_minimal() + theme(legend.position = "top")

turnover.cat %>% ggplot(aes(x = industry, color = factor(event),
                            fill = factor(event))) +
  geom_bar(alpha = 0.5) +
  scale_color_brewer(palette = "Dark2") + scale_fill_brewer(palette = "Dark2") +
  theme_minimal() + theme(legend.position = "top")

turnover.cat %>% ggplot(aes(x = profession, color = factor(event),
                            fill = factor(event))) +
  geom_bar(alpha = 0.5) +
  scale_color_brewer(palette = "Dark2") + scale_fill_brewer(palette = "Dark2") +
  theme_minimal() + theme(legend.position = "top")

turnover.cat %>% ggplot(aes(x = traffic, color = factor(event),
                            fill = factor(event))) +
  geom_bar(alpha = 0.5) +
  scale_color_brewer(palette = "Dark2") + scale_fill_brewer(palette = "Dark2") +
  theme_minimal() + theme(legend.position = "top")

turnover.cat %>% ggplot(aes(x = coach, color = factor(event),
                            fill = factor(event))) +
  geom_bar(alpha = 0.5) +
  scale_color_brewer(palette = "Dark2") + scale_fill_brewer(palette = "Dark2") +
  theme_minimal() + theme(legend.position = "top")

turnover.cat %>% ggplot(aes(x = head_gender, color = factor(event),
                            fill = factor(event))) +
  geom_bar(alpha = 0.5) +
  scale_color_brewer(palette = "Dark2") + scale_fill_brewer(palette = "Dark2") +
  theme_minimal() + theme(legend.position = "top")

turnover.cat %>% ggplot(aes(x = greywage, color = factor(event),
                            fill = factor(event))) +
  geom_bar(alpha = 0.5) +
  scale_color_brewer(palette = "Dark2") + scale_fill_brewer(palette = "Dark2") +
  theme_minimal() + theme(legend.position = "top")

turnover.cat %>% ggplot(aes(x = transport, color = factor(event),
                            fill = factor(event))) +
  geom_bar(alpha = 0.5) +
  scale_color_brewer(palette = "Dark2") + scale_fill_brewer(palette = "Dark2") +
  theme_minimal() + theme(legend.position = "top")
Finally, let's check for correlations between our variables:
Code
# Other encoding of discrete variables
turnover.num <- turnover.cat[-9]

levels(turnover.num$gender) <- 1:length(levels(turnover.cat$gender))
levels(turnover.num$industry) <- 1:length(levels(turnover.cat$industry))
levels(turnover.num$profession) <- 1:length(levels(turnover.cat$profession))
levels(turnover.num$traffic) <- 1:length(levels(turnover.cat$traffic))
levels(turnover.num$coach) <- 1:length(levels(turnover.cat$coach))
levels(turnover.num$head_gender) <- 1:length(levels(turnover.cat$head_gender))
levels(turnover.num$greywage) <- 1:length(levels(turnover.cat$greywage))
levels(turnover.num$transport) <- 1:length(levels(turnover.cat$transport))

turnover.num <- as.data.frame(apply(turnover.num, 2, as.numeric))

turnover.num <- cbind(turnover.num, turnover %>%
                        select_if(is.numeric))

# Correlation matrix
corrplot(cor(turnover.num), col = brewer.pal(10, "BrBG"), method = "square", diag = FALSE)
Admittedly, our approach (i.e. recoding the discrete variables as numbers in order to compute a correlation matrix) is questionable, but it gives a quick intuition about how the variables relate to each other. We notice that no covariates are strongly correlated (i.e. there are no extreme values), so we decided to keep all features in the dataset.
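For the categorical pairs specifically, a more appropriate association measure would be Cramér's V; the helper below is a hedged sketch (it is not part of the original analysis and relies only on base R's chisq.test):
Code
# Hypothetical helper: Cramér's V measures association between two categorical variables
# on a 0-1 scale, avoiding the arbitrary numeric recoding used above.
cramers_v <- function(x, y) {
  tab <- table(x, y)
  chi2 <- suppressWarnings(chisq.test(tab, correct = FALSE)$statistic)
  k <- min(nrow(tab), ncol(tab))
  as.numeric(sqrt(chi2 / (sum(tab) * (k - 1))))
}
# Example: association between industry and profession
cramers_v(turnover$industry, turnover$profession)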
Survival & longitudinal data analysis
Let's graphically represent the survival functions in the subgroups defined by all the categorical variables of our dataset:
Code
v.cat <- colnames(turnover.cat[-9])

# Survival function for each covariates
for(v in v.cat){
  f <- as.formula(paste("Surv(duration, event) ~ ", v))
  # print(f)
  # print(survdiff(f, data = turnover))
  print(autoplot(survfit(f, data = turnover)) +
          theme_minimal() + ggtitle(paste("Survival plot for the", v, "variable")))
  # cat("============================================================\n")
}
We see that some variables (like industry) might have an influence on employee turnover, because the survival curves of their subgroups seem to differ.
However, we can do more than analyze survival curves at this aggregated level, which only uses a fraction of what survival analysis can offer for turnover modeling. Instead, we can build survival curves for individual employees, based on the effects of the covariates, and try to predict when each of them will leave (see the sketch below).
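To make "individual survival curves" concrete, here is a minimal sketch (with hypothetical variable choices, and assuming a Cox model of the kind fitted in the next section): survfit() applied to a coxph object with newdata returns one estimated curve per supplied employee profile.
Code
# Sketch only: a small Cox model fitted on the cleaned data
cox.sketch <- coxph(Surv(duration, event) ~ age + greywage + traffic, data = turnover)

# One hypothetical employee profile (values chosen purely for illustration)
new.emp <- data.frame(age = 30, greywage = "white", traffic = "youjs")

# Estimated survival curve for this specific employee
fit.emp <- survfit(cox.sketch, newdata = new.emp)
plot(fit.emp, xlab = "Months", ylab = "Estimated probability of still being employed")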
Hazard modeling
Now we will compare survival analysis methods, namely the Cox model and survival Random Forests (from the randomForestSRC library [3]).
To compare their performance, we create a 75/25% train/test partition of the data with the caret library; the partition is stratified so that the proportion of censored observations is approximately the same in the train and test samples.
Code
inTrain <- createDataPartition(y = turnover$event, p = 0.75, list = FALSE)

# Stratified train-test split
train <- turnover[inTrain, ]
test <- turnover[-inTrain, ]

# Verify whether the stratification was done correctly or not
cat(sum(train$event)/dim(train)[1] * 100, "% of train data corresponds to observed quits (events)")
50.89606 % of train data corresponds to observed quits (events)
Code
cat(sum(test$event)/dim(test)[1] * 100, "% of test data corresponds to observed quits (events)")
48.02867 % of test data corresponds to observed quits (events)
Cox model
As a first hazard model, we might consider a Cox model with all the variables, as follows:
Code
cox.model <- coxph(Surv(duration, event) ~ ., data = train)
summary(cox.model)
Call:
coxph(formula = Surv(duration, event) ~ ., data = train)
n= 837, number of events= 426
coef exp(coef) se(coef) z Pr(>|z|)
genderm -0.041341 0.959502 0.147833 -0.280 0.779748
age 0.022910 1.023174 0.008570 2.673 0.007513 **
industryAgriculture 0.606685 1.834340 0.594356 1.021 0.307376
industryBanks 0.373461 1.452754 0.494704 0.755 0.450298
industryBuilding -0.009373 0.990671 0.534129 -0.018 0.986000
industryConsult 0.120358 1.127900 0.508961 0.236 0.813063
industryetc 0.101029 1.106309 0.499626 0.202 0.839753
industryIT -0.752624 0.471129 0.523505 -1.438 0.150530
industrymanufacture -0.365176 0.694075 0.493813 -0.740 0.459602
industryMining -0.161143 0.851171 0.587682 -0.274 0.783931
industryPharma -1.009596 0.364366 0.654035 -1.544 0.122675
industryPowerGeneration -0.617098 0.539508 0.585208 -1.054 0.291657
industryRealEstate -0.851400 0.426817 0.775771 -1.097 0.272427
industryRetail -0.543500 0.580712 0.487374 -1.115 0.264781
industryState -0.172350 0.841684 0.543160 -0.317 0.751008
industryTelecom -0.628101 0.533604 0.566704 -1.108 0.267715
industrytransport -0.232227 0.792766 0.545980 -0.425 0.670589
professionBusinessDevelopment 0.501232 1.650754 0.586060 0.855 0.392409
professionCommercial 0.905927 2.474226 0.575448 1.574 0.115418
professionConsult 0.444814 1.560200 0.593312 0.750 0.453427
professionEngineer 0.888817 2.432250 0.630370 1.410 0.158542
professionetc 0.333452 1.395778 0.551172 0.605 0.545188
professionFinanñe 0.054414 1.055922 0.603885 0.090 0.928202
professionHR 0.132027 1.141140 0.493058 0.268 0.788874
professionIT 0.063012 1.065040 0.567537 0.111 0.911595
professionLaw 0.167523 1.182373 0.782185 0.214 0.830412
professionmanage 1.098265 2.998957 0.568419 1.932 0.053342 .
professionMarketing 0.879458 2.409594 0.550229 1.598 0.109965
professionPR 0.682559 1.978936 0.885274 0.771 0.440698
professionSales 0.272958 1.313845 0.540962 0.505 0.613854
professionTeaching 1.126087 3.083567 0.666417 1.690 0.091073 .
trafficempjs 1.368925 3.931124 0.374261 3.658 0.000255 ***
trafficfriends 0.515974 1.675270 0.408603 1.263 0.206669
trafficKA 0.483011 1.620947 0.417989 1.156 0.247862
trafficrabrecNErab 1.043585 2.839377 0.369329 2.826 0.004719 **
trafficrecNErab 0.191535 1.211108 0.439362 0.436 0.662881
trafficreferal 0.813081 2.254844 0.378460 2.148 0.031682 *
trafficyoujs 1.026777 2.792053 0.367956 2.790 0.005263 **
coachno 0.035736 1.036382 0.131116 0.273 0.785197
coachyes -0.005985 0.994033 0.176398 -0.034 0.972934
head_genderm 0.017973 1.018135 0.121618 0.148 0.882515
greywagewhite -0.436745 0.646136 0.165845 -2.633 0.008452 **
transportcar -0.248336 0.780098 0.121902 -2.037 0.041632 *
transportfoot -0.330709 0.718415 0.200812 -1.647 0.099588 .
extraversion 0.009508 1.009554 0.040012 0.238 0.812164
independ -0.046770 0.954307 0.042213 -1.108 0.267876
selfcontrol -0.022534 0.977718 0.040036 -0.563 0.573531
anxiety -0.084763 0.918730 0.041396 -2.048 0.040598 *
novator 0.029039 1.029465 0.035258 0.824 0.410160
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
exp(coef) exp(-coef) lower .95 upper .95
genderm 0.9595 1.0422 0.7181 1.2820
age 1.0232 0.9774 1.0061 1.0405
industryAgriculture 1.8343 0.5452 0.5722 5.8803
industryBanks 1.4528 0.6883 0.5509 3.8308
industryBuilding 0.9907 1.0094 0.3478 2.8222
industryConsult 1.1279 0.8866 0.4160 3.0584
industryetc 1.1063 0.9039 0.4155 2.9455
industryIT 0.4711 2.1226 0.1689 1.3145
industrymanufacture 0.6941 1.4408 0.2637 1.8270
industryMining 0.8512 1.1749 0.2690 2.6931
industryPharma 0.3644 2.7445 0.1011 1.3130
industryPowerGeneration 0.5395 1.8535 0.1713 1.6987
industryRealEstate 0.4268 2.3429 0.0933 1.9524
industryRetail 0.5807 1.7220 0.2234 1.5094
industryState 0.8417 1.1881 0.2903 2.4406
industryTelecom 0.5336 1.8740 0.1757 1.6203
industrytransport 0.7928 1.2614 0.2719 2.3114
professionBusinessDevelopment 1.6508 0.6058 0.5234 5.2064
professionCommercial 2.4742 0.4042 0.8010 7.6430
professionConsult 1.5602 0.6409 0.4877 4.9912
professionEngineer 2.4322 0.4111 0.7070 8.3672
professionetc 1.3958 0.7164 0.4739 4.1113
professionFinanñe 1.0559 0.9470 0.3233 3.4487
professionHR 1.1411 0.8763 0.4342 2.9994
professionIT 1.0650 0.9389 0.3502 3.2393
professionLaw 1.1824 0.8458 0.2552 5.4771
professionmanage 2.9990 0.3334 0.9843 9.1371
professionMarketing 2.4096 0.4150 0.8196 7.0843
professionPR 1.9789 0.5053 0.3490 11.2196
professionSales 1.3138 0.7611 0.4551 3.7933
professionTeaching 3.0836 0.3243 0.8352 11.3844
trafficempjs 3.9311 0.2544 1.8878 8.1863
trafficfriends 1.6753 0.5969 0.7521 3.7315
trafficKA 1.6209 0.6169 0.7145 3.6776
trafficrabrecNErab 2.8394 0.3522 1.3767 5.8559
trafficrecNErab 1.2111 0.8257 0.5119 2.8653
trafficreferal 2.2548 0.4435 1.0739 4.7344
trafficyoujs 2.7921 0.3582 1.3574 5.7429
coachno 1.0364 0.9649 0.8015 1.3401
coachyes 0.9940 1.0060 0.7035 1.4046
head_genderm 1.0181 0.9822 0.8022 1.2922
greywagewhite 0.6461 1.5477 0.4668 0.8943
transportcar 0.7801 1.2819 0.6143 0.9906
transportfoot 0.7184 1.3920 0.4847 1.0649
extraversion 1.0096 0.9905 0.9334 1.0919
independ 0.9543 1.0479 0.8785 1.0366
selfcontrol 0.9777 1.0228 0.9039 1.0575
anxiety 0.9187 1.0885 0.8471 0.9964
novator 1.0295 0.9714 0.9607 1.1031
Concordance= 0.662 (se = 0.014 )
Likelihood ratio test= 141 on 49 df, p=8e-11
Wald test = 139.8 on 49 df, p=1e-10
Score (logrank) test = 146.7 on 49 df, p=1e-11
Code
cat("BIC =", BIC(cox.model))
BIC = 5082.27
But we can do better thanks to a forward selection, in order to pick a sparse subset of covariates (chosen according to the BIC):
Code
# Forward selection ran by hand

# Model with no covariate
cox.model <- coxph(Surv(duration, event) ~ 1, data = train)
cat("Surv(duration, event) ~ 1\n")
print(summary(cox.model))
cat("BIC =", BIC(cox.model)) # BIC = 4830.165
cat("\n============================================================\n")

v.all <- colnames(turnover)[3:16]

# 1st iteration of forward selection
for(v in v.all){
  f <- as.formula(paste("Surv(duration, event) ~ ", v))
  print(f)
  cox.model <- coxph(f, data = train)
  print(summary(cox.model))
  cat("BIC =", BIC(cox.model))
  cat("\n============================================================\n")
}

# 2nd iteration of forward selection / with "profession"
# BIC = 4892.65
v.all <- v.all[-4]
for(v in v.all){
  f <- as.formula(paste("Surv(duration, event) ~ profession + ", v))
  print(f)
  cox.model <- coxph(f, data = train)
  print(summary(cox.model))
  cat("BIC =", BIC(cox.model))
  cat("\n============================================================\n")
}

# 3rd iteration of forward selection / with "industry"
# BIC = 4937.944
v.all <- v.all[-3]
for(v in v.all){
  f <- as.formula(paste("Surv(duration, event) ~ profession + industry + ", v))
  print(f)
  cox.model <- coxph(f, data = train)
  print(summary(cox.model))
  cat("BIC =", BIC(cox.model))
  cat("\n============================================================\n")
}

# 4th iteration of forward selection / with "coach"
# BIC = 4949.908
v.all <- v.all[-4]
for(v in v.all){
  f <- as.formula(
    paste("Surv(duration, event) ~ profession + industry + coach + ", v))
  print(f)
  cox.model <- coxph(f, data = train)
  print(summary(cox.model))
  cat("BIC =", BIC(cox.model))
  cat("\n============================================================\n")
}

# 5th iteration of forward selection / with "independ"
# BIC = 4955.896
v.all <- v.all[-8]
for(v in v.all){
  f <- as.formula(
    paste("Surv(duration, event) ~ profession + industry + coach + independ + ", v))
  print(f)
  cox.model <- coxph(f, data = train)
  print(summary(cox.model))
  cat("BIC =", BIC(cox.model))
  cat("\n============================================================\n")
}

# 6th iteration of forward selection / with "head_gender"
# BIC = 4961.574
v.all <- v.all[-4]
for(v in v.all){
  f <- as.formula(
    paste("Surv(duration, event) ~ profession + industry + coach + independ + head_gender + ", v))
  print(f)
  cox.model <- coxph(f, data = train)
  print(summary(cox.model))
  cat("BIC =", BIC(cox.model))
  cat("\n============================================================\n")
}

# 7th iteration of forward selection / with "anxiety"
# BIC = 4966.105
v.all <- v.all[-8]
for(v in v.all){
  f <- as.formula(
    paste("Surv(duration, event) ~ profession + industry + coach + independ + head_gender + anxiety + ", v))
  print(f)
  cox.model <- coxph(f, data = train)
  print(summary(cox.model))
  cat("BIC =", BIC(cox.model))
  cat("\n============================================================\n")
}

# 8th iteration of forward selection / with "transport"
# BIC = 4969.996
v.all <- v.all[-5]
for(v in v.all){
  f <- as.formula(
    paste("Surv(duration, event) ~ profession + industry + coach + independ + head_gender + anxiety + transport + ", v))
  print(f)
  cox.model <- coxph(f, data = train)
  print(summary(cox.model))
  cat("BIC =", BIC(cox.model))
  cat("\n============================================================\n")
}

# 9th iteration of forward selection / with "gender"
# BIC = 4973.29
v.all <- v.all[-1]
for(v in v.all){
  f <- as.formula(
    paste("Surv(duration, event) ~ profession + industry + coach + independ + head_gender + anxiety + transport + gender + ", v))
  print(f)
  cox.model <- coxph(f, data = train)
  print(summary(cox.model))
  cat("BIC =", BIC(cox.model))
  cat("\n============================================================\n")
}

# 10th iteration of forward selection / with "novator"
# BIC = 4976.8
v.all <- v.all[-6]
for(v in v.all){
  f <- as.formula(
    paste("Surv(duration, event) ~ profession + industry + coach + independ + head_gender + anxiety + transport + gender + novator + ", v))
  print(f)
  cox.model <- coxph(f, data = train)
  print(summary(cox.model))
  cat("BIC =", BIC(cox.model))
  cat("\n============================================================\n")
}

# 11th iteration of forward selection / with "extraversion"
# BIC = 4981.899
v.all <- v.all[-4]
for(v in v.all){
  f <- as.formula(
    paste("Surv(duration, event) ~ profession + industry + coach + independ + head_gender + anxiety + transport + gender + novator + extraversion + ", v))
  print(f)
  cox.model <- coxph(f, data = train)
  print(summary(cox.model))
  cat("BIC =", BIC(cox.model))
  cat("\n============================================================\n")
}

# 12th iteration of forward selection / with "selfcontrol"
# BIC = 4987.378
v.all <- v.all[-4]
for(v in v.all){
  f <- as.formula(
    paste("Surv(duration, event) ~ profession + industry + coach + independ + head_gender + anxiety + transport + gender + novator + extraversion + selfcontrol + ", v))
  print(f)
  cox.model <- coxph(f, data = train)
  print(summary(cox.model))
  cat("BIC =", BIC(cox.model))
  cat("\n============================================================\n")
}

# 13th iteration of forward selection / with "traffic"
# BIC = 4988.465
v.all <- v.all[-2]
for(v in v.all){
  f <- as.formula(
    paste("Surv(duration, event) ~ profession + industry + coach + independ + head_gender + anxiety + transport + gender + novator + extraversion + selfcontrol + traffic + ", v))
  print(f)
  cox.model <- coxph(f, data = train)
  print(summary(cox.model))
  cat("BIC =", BIC(cox.model))
  cat("\n============================================================\n")
}

# We stop, because the BIC criterion doesn't improve by adding covariates to the model
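As a side note, the same kind of forward search can be automated. The sketch below uses MASS::stepAIC with a log(n) penalty to make the criterion BIC-like; it is a hedged illustration only (whether it reproduces the exact hand-picked sequence above depends on the stopping rule, and whether n should be the number of observations or of events is a modeling choice):
Code
# Hedged sketch: automated forward selection for the Cox model with a BIC-like penalty.
# MASS::stepAIC is called with an explicit namespace to avoid masking dplyr::select.
cox.null <- coxph(Surv(duration, event) ~ 1, data = train)

cox.step <- MASS::stepAIC(
  cox.null,
  scope = list(lower = ~ 1,
               upper = ~ gender + age + industry + profession + traffic + coach +
                 head_gender + greywage + transport + extraversion + independ +
                 selfcontrol + anxiety + novator),
  direction = "forward",
  k = log(nrow(train)),  # log(n) penalty instead of the default 2 (AIC)
  trace = FALSE)

summary(cox.step)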
Thus, we have the following model:
cox.model <- coxph(Surv(duration, event) ~ profession + industry + coach + independ +
                     head_gender + anxiety + transport + gender + novator +
                     extraversion + selfcontrol + traffic,
                   data = train, x = TRUE, y = TRUE)
summary(cox.model)
Call:
coxph(formula = Surv(duration, event) ~ profession + industry +
coach + independ + head_gender + anxiety + transport + gender +
novator + extraversion + selfcontrol + traffic, data = train,
x = TRUE, y = TRUE)
n= 837, number of events= 426
coef exp(coef) se(coef) z Pr(>|z|)
professionBusinessDevelopment 0.804252 2.235025 0.582112 1.382 0.16709
professionCommercial 1.250108 3.490718 0.572204 2.185 0.02891 *
professionConsult 0.730897 2.076942 0.588569 1.242 0.21430
professionEngineer 1.010589 2.747219 0.629624 1.605 0.10848
professionetc 0.555846 1.743416 0.552618 1.006 0.31449
professionFinanñe 0.304144 1.355464 0.598918 0.508 0.61158
professionHR 0.361479 1.435451 0.493587 0.732 0.46395
professionIT 0.213951 1.238562 0.570343 0.375 0.70757
professionLaw 0.484502 1.623366 0.780587 0.621 0.53480
professionmanage 1.323416 3.756232 0.569652 2.323 0.02017 *
professionMarketing 1.083883 2.956137 0.553376 1.959 0.05015 .
professionPR 0.778745 2.178737 0.886466 0.878 0.37968
professionSales 0.576262 1.779375 0.539106 1.069 0.28511
professionTeaching 1.451104 4.267822 0.663512 2.187 0.02874 *
industryAgriculture 0.542734 1.720705 0.594487 0.913 0.36127
industryBanks 0.337672 1.401680 0.493094 0.685 0.49347
industryBuilding -0.045609 0.955415 0.532830 -0.086 0.93179
industryConsult 0.129622 1.138398 0.507555 0.255 0.79843
industryetc 0.056447 1.058071 0.498556 0.113 0.90986
industryIT -0.731992 0.480950 0.521156 -1.405 0.16015
industrymanufacture -0.282805 0.753667 0.491720 -0.575 0.56520
industryMining -0.215132 0.806435 0.585165 -0.368 0.71314
industryPharma -1.035875 0.354916 0.651072 -1.591 0.11160
industryPowerGeneration -0.631509 0.531789 0.583183 -1.083 0.27887
industryRealEstate -0.708711 0.492278 0.777631 -0.911 0.36210
industryRetail -0.559354 0.571578 0.485238 -1.153 0.24902
industryState -0.179655 0.835559 0.541329 -0.332 0.73998
industryTelecom -0.655324 0.519274 0.565064 -1.160 0.24616
industrytransport -0.257112 0.773282 0.545542 -0.471 0.63743
coachno 0.123277 1.131197 0.127921 0.964 0.33520
coachyes -0.028167 0.972226 0.176331 -0.160 0.87308
independ -0.044005 0.956949 0.042054 -1.046 0.29538
head_genderm 0.104084 1.109693 0.117997 0.882 0.37773
anxiety -0.072896 0.929697 0.041314 -1.764 0.07766 .
transportcar -0.236672 0.789250 0.122503 -1.932 0.05336 .
transportfoot -0.373243 0.688498 0.200218 -1.864 0.06230 .
genderm -0.055568 0.945947 0.145867 -0.381 0.70324
novator 0.027505 1.027887 0.035154 0.782 0.43397
extraversion -0.003132 0.996873 0.039844 -0.079 0.93734
selfcontrol -0.023150 0.977116 0.039742 -0.583 0.56022
trafficempjs 1.521871 4.580786 0.374885 4.060 4.92e-05 ***
trafficfriends 0.642292 1.900833 0.410632 1.564 0.11778
trafficKA 0.600502 1.823035 0.418791 1.434 0.15160
trafficrabrecNErab 1.146138 3.146020 0.371798 3.083 0.00205 **
trafficrecNErab 0.348865 1.417457 0.437721 0.797 0.42545
trafficreferal 0.816850 2.263359 0.381239 2.143 0.03214 *
trafficyoujs 1.157530 3.182064 0.370107 3.128 0.00176 **
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
exp(coef) exp(-coef) lower .95 upper .95
professionBusinessDevelopment 2.2350 0.4474 0.71415 6.995
professionCommercial 3.4907 0.2865 1.13724 10.715
professionConsult 2.0769 0.4815 0.65529 6.583
professionEngineer 2.7472 0.3640 0.79975 9.437
professionetc 1.7434 0.5736 0.59022 5.150
professionFinanñe 1.3555 0.7378 0.41907 4.384
professionHR 1.4355 0.6966 0.54557 3.777
professionIT 1.2386 0.8074 0.40499 3.788
professionLaw 1.6234 0.6160 0.35154 7.496
professionmanage 3.7562 0.2662 1.22988 11.472
professionMarketing 2.9561 0.3383 0.99929 8.745
professionPR 2.1787 0.4590 0.38339 12.381
professionSales 1.7794 0.5620 0.61856 5.119
professionTeaching 4.2678 0.2343 1.16258 15.667
industryAgriculture 1.7207 0.5812 0.53663 5.517
industryBanks 1.4017 0.7134 0.53324 3.684
industryBuilding 0.9554 1.0467 0.33624 2.715
industryConsult 1.1384 0.8784 0.42098 3.078
industryetc 1.0581 0.9451 0.39824 2.811
industryIT 0.4810 2.0792 0.17318 1.336
industrymanufacture 0.7537 1.3268 0.28749 1.976
industryMining 0.8064 1.2400 0.25614 2.539
industryPharma 0.3549 2.8176 0.09907 1.272
industryPowerGeneration 0.5318 1.8804 0.16956 1.668
industryRealEstate 0.4923 2.0314 0.10722 2.260
industryRetail 0.5716 1.7495 0.22082 1.479
industryState 0.8356 1.1968 0.28920 2.414
industryTelecom 0.5193 1.9258 0.17156 1.572
industrytransport 0.7733 1.2932 0.26544 2.253
coachno 1.1312 0.8840 0.88034 1.454
coachyes 0.9722 1.0286 0.68814 1.374
independ 0.9569 1.0450 0.88124 1.039
head_genderm 1.1097 0.9012 0.88057 1.398
anxiety 0.9297 1.0756 0.85738 1.008
transportcar 0.7893 1.2670 0.62078 1.003
transportfoot 0.6885 1.4524 0.46502 1.019
genderm 0.9459 1.0571 0.71073 1.259
novator 1.0279 0.9729 0.95945 1.101
extraversion 0.9969 1.0031 0.92199 1.078
selfcontrol 0.9771 1.0234 0.90390 1.056
trafficempjs 4.5808 0.2183 2.19704 9.551
trafficfriends 1.9008 0.5261 0.84999 4.251
trafficKA 1.8230 0.5485 0.80227 4.143
trafficrabrecNErab 3.1460 0.3179 1.51805 6.520
trafficrecNErab 1.4175 0.7055 0.60106 3.343
trafficreferal 2.2634 0.4418 1.07212 4.778
trafficyoujs 3.1821 0.3143 1.54054 6.573
Concordance= 0.653 (se = 0.015 )
Likelihood ratio test= 127.5 on 47 df, p=2e-09
Wald test = 126.2 on 47 df, p=4e-09
Score (logrank) test = 132.2 on 47 df, p=5e-10
cat("BIC =", BIC(cox.model))
BIC = 5083.611
The likelihood ratio test p-value is far below 5%, so the model differs significantly from the null model (i.e. the model without any covariate).
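The Cox model also rests on the proportional hazards assumption. As a quick sanity check (a minimal sketch, not part of the original analysis), survival::cox.zph tests this assumption for each covariate of the selected model:
Code
# Sketch: test the proportional hazards assumption of the selected Cox model.
# A small p-value for a covariate suggests its effect varies over time.
ph.check <- cox.zph(cox.model)
print(ph.check)
plot(ph.check)  # scaled Schoenfeld residuals, one panel per covariate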
Then, for model comparison, we computed the Brier score [4] (thanks to the riskRegression library [5]) as a function of time:
Code
<- Score(list("final_cox" = cox.model),
cox.brier formula = Surv(duration, event) ~ 1, data = test,
metrics = "brier", times = sort(unique(test$duration)))
$Brier$score %>% select(model, times, Brier) %>%
cox.brierggplot(aes(x = times, y = Brier, color = model)) +
geom_line() +
scale_colour_manual(values=c("darkgrey", "orange")) +
theme_minimal() +
ggtitle("Brier Score for the Cox Model") + xlab("Time")
But we prefer to compute the Integrated Brier Score (IBS) [4], which gives a better overview of a model's performance over the whole follow-up period (we will later compare this value to the IBS of the Random Forest). The IBS was computed using the formula in [4], with the trapezoid method [6] to approximate the integral of the Brier score (i.e. the area under the curve).
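Concretely, writing BS(t) for the Brier score at time t, t_max for the largest observed duration, and t_k for the sorted unique durations of the test set, the quantity computed below is

$$\mathrm{IBS} \;=\; \frac{1}{t_{\max}} \int_0^{t_{\max}} \mathrm{BS}(t)\,dt \;\approx\; \frac{1}{t_{\max}} \sum_{k} \frac{t_{k+1}-t_{k}}{2}\,\bigl(\mathrm{BS}(t_k)+\mathrm{BS}(t_{k+1})\bigr).$$

The implementation below follows this recipe: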
Code
# Scores extraction
cox.brier.data <- summary(Score(list("final_cox" = cox.model),
                                formula = Surv(duration, event) ~ 1,
                                data = test, metrics = "brier",
                                times = sort(unique(test$duration))),
                          models = "final_cox")$score[, - c("Model")]
colnames(cox.brier.data) <- c("times", "brier")

# Removal of confidence interval
extract_brier <- function(x) {
  if (str_sub(x, 2, 2) == ".") { return(str_sub(x, 1, 3)) }
  else { return(str_sub(x, 1, 4)) }
}
cox.brier.data$brier <- as.numeric(lapply(cox.brier.data$brier, extract_brier))

# Approximation of the area under the curve with trapezoidal method
trapezoidal <- function(x, y) {
  return(sum(diff(x) * (head(y, -1) + tail(y, -1)))/2)
}

# Computation of the Integrated Brier Score
cox.brier.ibs <- trapezoidal(cox.brier.data$times,
                             cox.brier.data$brier)/tail(cox.brier.data$times, 1)
cat("Integrated Brier Score of Cox model =", cox.brier.ibs)
Integrated Brier Score of Cox model = 18.19555
Random Forest
Here, we will consider a Random Forest for survival analysis purposes:
rf.model <- rfsrc(Surv(duration, event) ~ ., data = train)

# Visualization of the importance
rf.importance <- data.frame(colnames(turnover)[3:16],
                            vimp(rf.model)$importance)
rownames(rf.importance) <- 1:(dim(rf.importance)[1])
colnames(rf.importance) <- c("variable", "importance")

ggplot(rf.importance, aes(variable, importance, fill = variable)) +
  geom_bar(stat = "identity", alpha = 0.5) + coord_flip() + theme_minimal() +
  geom_text(aes(label = sprintf("%0.3f", round(importance, digits = 3))),
            color = "black", size = 3.4, hjust = 1.1)
Above, the variable importance (VIMP) computed by our Random Forest gives us an insight into the most "important" variables. We notice that these are not necessarily the same as the sparse subset of covariates selected for the Cox model by forward selection.
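Before comparing Brier scores, the forest's out-of-bag error offers a rough discrimination check; the snippet below is a minimal sketch (for a survival rfsrc fit, err.rate is one minus Harrell's concordance, which can be loosely set against the concordance values reported in the Cox summaries above):
Code
# Sketch: out-of-bag error of the survival forest (1 - Harrell's C-index)
oob.err <- tail(na.omit(rf.model$err.rate), 1)
cat("OOB error =", oob.err, "-> OOB concordance ~", 1 - oob.err)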
Likewise, we then computed the Brier score as a function of time:
Code
<- Score(list("rfsrc" = rf.model),
rf.brier formula = Surv(duration, event) ~ 1, data = test,
metrics = "brier", times = sort(unique(test$duration)))
$Brier$score %>%
rf.brierselect(model, times, Brier) %>%
ggplot(aes(x = times, y = Brier, color = model)) +
geom_line() +
scale_colour_manual(values=c("darkgrey", "orange")) +
theme_minimal() +
ggtitle("Brier Score for the Random Forest") + xlab("Time")
And we also obtained the following IBS:
Code
# Scores extraction
rf.brier.data <- summary(Score(list("rfsrc" = rf.model),
                               formula = Surv(duration, event) ~ 1,
                               data = test, metrics = "brier",
                               times = sort(unique(test$duration))),
                         models = "rfsrc")$score[, - c("Model")]
colnames(rf.brier.data) <- c("times", "brier")

# Removal of confidence interval
rf.brier.data$brier <- as.numeric(lapply(rf.brier.data$brier, extract_brier))

# Computation of the Integrated Brier Score
rf.brier.ibs <- trapezoidal(rf.brier.data$times,
                            rf.brier.data$brier)/tail(rf.brier.data$times, 1)
cat("Integrated Brier Score of Random Forest =", rf.brier.ibs)
Integrated Brier Score of Random Forest = 15.83226
Therefore, we kept the model given by the survival Random Forest, because it yielded the lowest IBS (15.8322565 < 18.1955542).
Predictions
In this final section, let's test the Random Forest (varying the industry) on the following profiles:
- A female employee of age 30, referred by an employee of the company (referal), working in HR, commuting by bus, with a coach during probation, a male supervisor, and characteristic scores of 5 in every category: let's estimate the probability that she will stay for longer than 3 years.
- Another employee with the same profile who has already worked for one year: let's estimate the probability that she will stay for another 2 years.
Code
# Imputation of the first individual
new_individu <- data.frame(gender = "f", age = 30, traffic = "referal",
                           profession = "HR", transport = "bus", coach = "yes",
                           greywage = "white", head_gender = "m",
                           extraversion = 5, independ = 5, selfcontrol = 5,
                           anxiety = 5, novator = 5, industry = " HoReCa",
                           duration = 0, event = 0)

industry <- levels(turnover$industry)
new_individu.surv <- NULL

# Random Forest with all the dataset
final.model <- rfsrc(Surv(duration, event) ~ ., data = turnover)

# Survival computation
for (i in industry) {
  new_individu$industry <- i
  p <- predict(final.model, new_individu)
  # Return the index which gives the closest time to 36 months
  j <- which(p$time.interest > 36)[1] - 1
  new_individu.surv <- c(new_individu.surv, p$survival[j])
}

# Visualization of survival probability according to industries
industry.surv <- data.frame(industry, new_individu.surv)
rownames(industry.surv) <- 1:(dim(industry.surv)[1])
colnames(industry.surv) <- c("industry", "survival")
p1 <- industry.surv[industry.surv$industry == "IT", "survival"]

ggplot(industry.surv, aes(industry, survival, fill = industry)) +
  geom_bar(stat = "identity", alpha = 0.5) + coord_flip() + theme_minimal() +
  ggtitle("The probability that the employee will stay for longer than 3 years") +
  geom_text(aes(label = sprintf("%0.3f", round(survival, digits = 3))),
            color = "black", size = 3.5, hjust = 1.1) + ylab("Probability")
Code
# Imputation of the second individual
new_individu2 <- data.frame(gender = "f", age = 30, traffic = "referal",
                            profession = "HR", transport = "bus", coach = "yes",
                            greywage = "white", head_gender = "m",
                            extraversion = 5, independ = 5, selfcontrol = 5,
                            anxiety = 5, novator = 5, industry = " HoReCa",
                            duration = 12, event = 0)
new_individu2.surv <- NULL

# Survival computation
for (i in industry) {
  new_individu2$industry <- i
  p <- predict(final.model, new_individu2)
  j <- which(p$time.interest > 24)[1] - 1
  new_individu2.surv <- c(new_individu2.surv, p$survival[j])
}

# Visualization of survival probability according to industries
industry.surv <- data.frame(industry, new_individu2.surv)
rownames(industry.surv) <- 1:(dim(industry.surv)[1])
colnames(industry.surv) <- c("industry", "survival")
p2 <- industry.surv[industry.surv$industry == "IT", "survival"]

ggplot(industry.surv, aes(industry, survival, fill = industry)) +
  geom_bar(stat = "identity", alpha = 0.5) + coord_flip() + theme_minimal() +
  ggtitle("The probability that the employee will stay for another 2 years") +
  geom_text(aes(label = sprintf("%0.3f", round(survival, digits = 3))),
            color = "black", size = 3.5, hjust = 1.1) + ylab("Probability")
We observe that, for both profiles, the industry has an effect on the probabilities. More generally, this is the main reason why we should not decide beforehand (as in the EDA section) whether a covariate influences a survival outcome or not.
Now let's fix their industry profile as IT, for instance. In theory, we might think that the longer you stay in a company, the higher your chances of leaving become (for example, because you want to move elsewhere or work for another company). Yet we notice the opposite in our case, because the second probability is higher than the first one (0.8444626 > 0.761017). In other words, a person who has already worked one year has a high chance of staying two more years within the same company (i.e. three years in total).
As a consequence, our predictions are not consistent with that intuition. This may be because the data is biased or not up to date with current generations.
Conclusion
Turnover is an important business metric. The benefits of applying an employee turnover prediction model like this one extend beyond its pure predictive capabilities towards insights that can change how the organization operates as a whole. The cost savings are two-fold: HR professionals can use the model's explanations to develop retention policies across the business, and they can target high-risk individuals with retention initiatives.
References
[1] Work Institute’s 2019 retention report
[2] Employee turnover dataset shared from Edward Babushkin’s blog
[3] randomForestSRC documentation
[4] Brier score and IBS documentation from Python library pysurvival
[5] riskRegression documentation
[6] A brief explanation of trapezoid method