Enhancing Supervised Model Performance in Credit Risk Classification Using Sampling Strategies and Feature Ranking

Journal article


Publication Details

Author list: Wattanakitrungroj, Niwan; Wijitkajee, Pimchanok; Jaiyen, Saichon; Sathapornvajana, Sunisa; Tongman, Sasiporn

Publisher: MDPI

Publication year: 2024

Journal acronym: Big Data Cogn. Comput.

Volume number: 8

Issue number: 3

Start page: 1

End page: 22

Number of pages: 22

ISSN: 2504-2289

eISSN: 2504-2289

URL: https://www.scopus.com/inward/record.uri?eid=2-s2.0-85188840289&doi=10.3390%2fbdcc8030028&partnerID=40&md5=3f39120ec9f714db783152caaaa96147

Languages: English-Great Britain (EN-GB)


Abstract

Credit risk assessment, the task of correctly deciding whether or not a borrower will fail to repay a loan, is important for the financial health of lenders and institutions. It not only informs the approval or denial of loan applications but also helps manage the non-performing loan (NPL) trend. In this study, experiments were conducted on a dataset provided by the LendingClub company, based in San Francisco, CA, USA, covering 2007 to 2020 and consisting of 2,925,492 records and 141 attributes. The loan status was categorized as “Good” or “Risk”. To obtain highly effective credit risk predictions, three widely adopted supervised machine learning techniques were applied: logistic regression, random forest, and gradient boosting. In addition, to address the imbalanced data problem, three sampling algorithms were employed: under-sampling, over-sampling, and combined sampling. The results show that the gradient boosting technique achieves nearly perfect (Formula presented.), (Formula presented.), (Formula presented.), and (Formula presented.) values, all better than 99.92%, while its (Formula presented.) values are greater than 99.77%. All three imbalanced data handling approaches enhanced the performance of models trained with the three algorithms. Moreover, reducing the number of features based on mutual information calculation revealed only slightly decreased performance for 50 data features, with (Formula presented.) values greater than 99.86%. For 25 data features, the smallest size considered, the random forest model yielded 99.15% (Formula presented.). Both sampling strategies and feature selection help improve supervised models for accurately predicting credit risk, which may benefit the lending business. © 2024 by the authors.
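The pipeline described in the abstract — rebalance an imbalanced binary loan dataset, rank features by mutual information, and train a supervised classifier — can be sketched as follows. This is a minimal illustration using scikit-learn on synthetic data standing in for the LendingClub records; the sampling step shown is plain random over-sampling of the minority class (the paper's under- and combined-sampling variants are analogous), and all dataset sizes and parameters here are illustrative, not the authors' settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the loan data: imbalanced binary labels,
# "Good" = 0 (majority), "Risk" = 1 (minority).
X, y = make_classification(n_samples=5000, n_features=40, n_informative=10,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Step 1: handle the class imbalance — here, random over-sampling of the
# minority class until both classes are equally represented.
rng = np.random.default_rng(0)
minority = np.flatnonzero(y_tr == 1)
majority = np.flatnonzero(y_tr == 0)
resampled = rng.choice(minority, size=len(majority), replace=True)
idx = np.concatenate([majority, resampled])
X_bal, y_bal = X_tr[idx], y_tr[idx]

# Step 2: rank features by mutual information with the label and keep
# only the top-k (the paper reduces 141 attributes to 50 and 25).
mi = mutual_info_classif(X_bal, y_bal, random_state=0)
top = np.argsort(mi)[::-1][:15]

# Step 3: train one of the supervised models (gradient boosting) on the
# rebalanced, reduced data and evaluate on the untouched test split.
clf = GradientBoostingClassifier(random_state=0).fit(X_bal[:, top], y_bal)
print(f"F1 on held-out data: {f1_score(y_te, clf.predict(X_te[:, top])):.3f}")
```

The key point the sketch preserves is that only the training split is resampled; the held-out test split keeps its natural class ratio, so the reported score reflects realistic imbalance.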


Keywords

credit risk classification; feature ranking; imbalance handling; machine learning


Last updated on 2024-06-21 at 00:00