Enhancing Supervised Model Performance in Credit Risk Classification Using Sampling Strategies and Feature Ranking
Journal article
Publication Details
Author list: Wattanakitrungroj, Niwan; Wijitkajee, Pimchanok; Jaiyen, Saichon; Sathapornvajana, Sunisa; Tongman, Sasiporn
Publisher: MDPI
Publication year: 2024
Journal acronym: Big Data Cogn. Comput.
Volume number: 8
Issue number: 3
Start page: 1
End page: 22
Number of pages: 22
ISSN: 2504-2289
eISSN: 2504-2289
Languages: English-Great Britain (EN-GB)
Abstract
For the financial health of lenders and institutions, credit risk assessment involves correctly predicting whether a borrower will default on a loan. It not only supports the approval or denial of loan applications but also aids in managing the trend of non-performing loans (NPLs). In this study, experiments were conducted on a dataset provided by the LendingClub company (San Francisco, CA, USA) covering 2007 to 2020, consisting of 2,925,492 records and 141 attributes, with the loan status categorized as “Good” or “Risk”. To obtain highly effective credit risk predictions, three widely adopted supervised machine learning techniques were applied: logistic regression, random forest, and gradient boosting. In addition, to address the imbalanced data problem, three sampling algorithms were employed: under-sampling, over-sampling, and combined sampling. The results show that the gradient boosting technique achieves nearly perfect accuracy, precision, recall, and F1-score values, all better than 99.92%, while its MCC values exceed 99.77%. All three imbalanced data handling approaches enhance the performance of the models trained by the three algorithms. Moreover, reducing the number of features based on mutual information revealed only slightly decreased performance for 50 data features, with accuracy values greater than 99.86%. For 25 data features, the smallest feature set, the random forest model still yielded 99.15% accuracy. Both the sampling strategies and feature selection help improve the supervised models for accurately predicting credit risk, which may be beneficial in the lending business. © 2024 by the authors.
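As a brief illustration of the workflow summarized in the abstract, the sketch below combines the three components it names: a sampling strategy for class imbalance, mutual-information feature ranking, and the three supervised classifiers. This is a minimal sketch assuming scikit-learn and imbalanced-learn; the file name lendingclub_preprocessed.csv, the loan_status column encoding, the specific samplers (RandomUnderSampler, SMOTE, SMOTEENN), and the hyperparameters are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch: imbalance handling + supervised models + mutual-information
# feature ranking. Library choices and column names are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import f1_score, matthews_corrcoef
from imblearn.under_sampling import RandomUnderSampler
from imblearn.over_sampling import SMOTE
from imblearn.combine import SMOTEENN

# Hypothetical preprocessed LendingClub data: numeric features plus a binary
# target where 1 = "Risk" and 0 = "Good".
df = pd.read_csv("lendingclub_preprocessed.csv")
X, y = df.drop(columns=["loan_status"]), df["loan_status"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Rank features by mutual information with the target and keep the top k
# (the paper reports reduced feature sets of, e.g., 50 and 25 features).
mi = mutual_info_classif(X_train, y_train, random_state=42)
top_k = pd.Series(mi, index=X_train.columns).nlargest(50).index

samplers = {
    "under": RandomUnderSampler(random_state=42),  # under-sampling
    "over": SMOTE(random_state=42),                # over-sampling
    "combined": SMOTEENN(random_state=42),         # combined sampling
}
models = {
    "logreg": LogisticRegression(max_iter=1000),
    "rf": RandomForestClassifier(n_estimators=200, random_state=42),
    "gb": GradientBoostingClassifier(random_state=42),
}

for s_name, sampler in samplers.items():
    # Resample only the training split so the test set keeps its natural ratio.
    X_res, y_res = sampler.fit_resample(X_train[top_k], y_train)
    for m_name, model in models.items():
        model.fit(X_res, y_res)
        pred = model.predict(X_test[top_k])
        print(s_name, m_name,
              "F1=%.4f" % f1_score(y_test, pred),
              "MCC=%.4f" % matthews_corrcoef(y_test, pred))

In this sketch each sampler is applied only to the training partition, so the reported F1 and MCC reflect the original class distribution of the held-out test set, which is the usual way to evaluate imbalance-handling strategies.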
Keywords
credit risk classification, feature ranking, imbalance handling, machine learning