F1 Score in R Programming

Hello readers! In this article, we will focus on an important error metric in Machine Learning – the F1 score in R programming – in detail! We recently covered another error metric – Recall. Now, let us understand the F1 score error metric further!! 🙂


So, what is F1 score?

F1 score is a Classification Error metric used to evaluate classification machine learning algorithms. It is especially useful for evaluating predictions in binary classification problems.

Have a look at the formula below:

F1 = 2 * (precision * recall) / (precision + recall)

The F1 score combines two important error metrics: Precision and Recall. It is the harmonic mean of Precision and Recall, which makes it a particularly useful metric for imbalanced datasets in binary classification.
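
To see the formula in action, here is a quick sketch that computes the F1 score by hand for made-up precision and recall values (the numbers are purely illustrative):

# Hypothetical precision and recall values, for illustration only
precision = 0.75
recall = 0.60
# F1 is the harmonic mean of the two
f1 = 2 * (precision * recall) / (precision + recall)
print(f1)   # 0.6666667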


Calculating F1 score in R for Logistic Regression model

In this article, we will use the Loan Defaulter Prediction problem to demonstrate how the F1 score error metric works. You can find the dataset here!

In this problem statement, our task is to predict whether a particular customer is a loan defaulter or not.

So, let us begin!

1. Load the dataset

We load the dataset into the R environment using the read.csv() function.

rm(list = ls())
#Setting the working directory
setwd("D:/Loan_Defaulter/")
getwd()
#Load the dataset
dta = read.csv("bank-loan.csv",header=TRUE)

2. Exploratory Data Analysis in R

At first, we try to understand the data types and the kinds of values each column contains, using the str() function.

Using the R summary() function, we get an insight into the statistical distribution of the variables.

The dim() function gives us the dimensions (number of rows and columns) present in the dataset.

From the dataset, we can say that the bank loan defaulter prediction problem is a Classification Problem.

### Exploratory Data Analysis ###
#  Understanding the data values of the dataset
str(dta)

# Understanding the data distribution of the dataset
summary(dta)

#  Checking the dimensions of the dataset
dim(dta)

##### Insights from EDA --
# 'default' is the response variable and is categorical in nature.
# Thus, the given problem statement is a Classification Problem.

3. Sampling of dataset

Prior to modelling, it is very important to split the dataset into training and testing sets. We use the createDataPartition() method from the caret package to split the dataset into training and testing data. In addition, we use the dummies library to create dummy variables for the categorical variables, which enables better modelling of the data values.

### Data SAMPLING ###
categorical_col = c('ed')
library(dummies)
data = dta
# One-hot encode the categorical column 'ed'
data = dummy.data.frame(data, categorical_col)
dim(data)
library(caret)
set.seed(101)
# 80/20 train-test split, stratified on the response
split = createDataPartition(data$default, p = 0.80, list = FALSE)
train_data = data[split,]
test_data = data[-split,]
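
A side note: the dummies package has since been archived on CRAN, so it may not install on newer R versions. As an alternative sketch, the same one-hot encoding can be done with caret's dummyVars() (assuming the same dta object and 'ed' column):

# Alternative one-hot encoding with caret::dummyVars,
# in case the dummies package is unavailable
library(caret)
dta$ed = factor(dta$ed)            # dummyVars only expands factor columns
dummy_spec = dummyVars(~ ., data = dta)
data = data.frame(predict(dummy_spec, newdata = dta))
dim(data)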

4. Evaluation – Error Metrics

We create a customized function, err_metric(), that takes a Confusion Matrix and evaluates the model through Precision, Recall, Accuracy, the False Positive and False Negative rates, and the F1 score.

# Error metrics -- computed from a Confusion Matrix
err_metric = function(CM)
{
  TN = CM[1,1]
  TP = CM[2,2]
  FP = CM[1,2]
  FN = CM[2,1]
  precision = (TP)/(TP+FP)
  recall_score = (TP)/(TP+FN)

  f1_score = 2*((precision*recall_score)/(precision+recall_score))
  accuracy_model = (TP+TN)/(TP+TN+FP+FN)
  False_positive_rate = (FP)/(FP+TN)
  False_negative_rate = (FN)/(FN+TP)

  print(paste("Precision value of the model: ",round(precision,2)))
  print(paste("Accuracy of the model: ",round(accuracy_model,2)))
  print(paste("Recall value of the model: ",round(recall_score,2)))
  print(paste("False Positive rate of the model: ",round(False_positive_rate,2)))
  print(paste("False Negative rate of the model: ",round(False_negative_rate,2)))
  print(paste("f1 score of the model: ",round(f1_score,2)))
}
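
As a quick sanity check, the function can be called on a small hand-made confusion matrix before using it on real predictions (the counts below are purely hypothetical):

# Toy confusion matrix: rows = actual (0, 1), columns = predicted (0, 1)
# TN = 50, FP = 5, FN = 10, TP = 35 -- hypothetical counts
toy_CM = matrix(c(50, 5, 10, 35), nrow = 2, byrow = TRUE)
err_metric(toy_CM)
# Expected: precision = 35/40 = 0.88, recall = 35/45 = 0.78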

5. Modelling

Now it is time to apply a model to the dataset. We will apply Logistic Regression to the data. The glm() function with family = 'binomial' enables us to perform binary classification using Logistic Regression.

Further, we use the predict() function to make predictions on the testing data and threshold the predicted probabilities at 0.5 to obtain class labels.

Finally, we call the err_metric() function created above to evaluate the model. Having done this, we plot the ROC curve for the model using the pROC library.

######################## MODELLING ####################################
# 1. Logistic regression
logit_m = glm(formula = default ~ ., data = train_data, family = 'binomial')
summary(logit_m)
# Predicted probabilities on the test data (column 13 is 'default')
logit_P = predict(logit_m, newdata = test_data[-13], type = 'response')
logit_P <- ifelse(logit_P > 0.5, 1, 0) # Convert probabilities to class labels at a 0.5 threshold
CM = table(test_data[,13], logit_P)
print(CM)
err_metric(CM)
library(pROC)
roc_score = roc(test_data[,13], logit_P) # ROC object, used for the AUC score
plot(roc_score, main = "ROC curve -- Logistic Regression")

Output:

[1] "Precision value of the model:  0.67"
[1] "Accuracy of the model:  0.84"
[1] "Recall value of the model:  0.04"
[1] "False Positive rate of the model:  0.04"
[1] "False Negative rate of the model:  0.64"
[1] "f1 score of the model:  0.08"
[Plot: ROC Curve -- Logistic Regression]
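
To also report the area under the curve as a number, pROC's auc() function can be applied to the same object. As a further sketch (not part of the original walkthrough), the ROC curve is more informative when built from the raw predicted probabilities rather than the thresholded 0/1 labels:

# Numeric AUC for the fitted model
auc(roc_score)
# Sketch: ROC from the raw predicted probabilities instead of 0/1 labels
logit_prob = predict(logit_m, newdata = test_data[-13], type = 'response')
roc_prob = roc(test_data[,13], logit_prob)
plot(roc_prob, main = "ROC curve (probabilities) -- Logistic Regression")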

F1 score using an inbuilt R function

In the example below, we use an inbuilt function to calculate the F1 score in R: the F1_Score() function.

You can find the function in the 'MLmetrics' library.

rm(list = ls())
library(MLmetrics)
y_pred = c(1,2,3,4,5)
y_actual = c(1,2,7,1,5)
# F1_Score(y_true, y_pred): the true labels come first
res = F1_Score(y_actual, y_pred)
print(res)

As seen below, we obtain an F1 score of about 0.67 (66.7%) for this data.

Output:

0.6666667
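
The same function can also score the loan-defaulter model from earlier. A sketch, assuming test_data and logit_P from the modelling step are still in the workspace (the rm(list = ls()) call above clears everything, so re-run that step first):

library(MLmetrics)
# Name the positive class explicitly ("1" = defaulter), since
# F1_Score computes the score for one class at a time
F1_Score(y_true = test_data[,13], y_pred = logit_P, positive = "1")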

Conclusion

With this, we have come to the end of this topic. Feel free to comment below in case you come across any questions.

Do let us know about your experience in implementing the concept of F1 score with other machine learning algorithms.

For more such posts related to R, stay tuned and till then, Happy Learning!! 🙂

