INTERPRETABLE MACHINE LEARNING FOR STATISTICAL MODELING: BRIDGING CLASSICAL AND MODERN APPROACHES
Keywords: Interpretable Machine Learning, Statistical Modeling, Model Explainability, Predictive Analytics

Abstract
As data-driven decision-making becomes increasingly central across domains, the balance between predictive performance and model interpretability has grown critical. This study explores the intersection of classical statistical modeling and interpretable machine learning (IML), comparing their performance across healthcare, finance, and synthetic datasets. We categorize models into three groups: classical models (logistic and linear regression), inherently interpretable machine learning models (decision trees, GAMs), and high-performance models enhanced with post-hoc interpretation methods (e.g., SHAP and LIME applied to XGBoost and Random Forests). Our results reveal that while complex ML models achieve higher predictive accuracy, classical and interpretable models retain an essential role in transparency, inference, and user trust. By evaluating each model type on both performance metrics and interpretability criteria, we provide a practical framework for choosing the right model depending on the analytical goal: prediction, explanation, or both. The paper concludes with recommendations for model selection in real-world applications where interpretability and accuracy must be balanced.
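The comparison described above can be sketched in a minimal form: an inherently interpretable model (a shallow decision tree) versus a higher-capacity ensemble (a random forest) on a synthetic classification task. The dataset, hyperparameters, and metric choices here are illustrative assumptions for demonstration only, not the study's actual experimental setup.

```python
# Hedged sketch of the paper's model-group comparison (assumed setup, not
# the study's actual datasets or hyperparameters).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the healthcare/finance/synthetic datasets.
X, y = make_classification(n_samples=1000, n_features=10,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Inherently interpretable: a depth-3 tree whose splits can be read directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

# Higher-capacity ensemble: typically more accurate, but opaque, so it
# would be paired with post-hoc tools such as SHAP or LIME in practice.
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("tree accuracy:  ", accuracy_score(y_te, tree.predict(X_te)))
print("forest accuracy:", accuracy_score(y_te, forest.predict(X_te)))
```

In this kind of comparison, the tree's structure itself serves as the explanation, while the forest's predictions require a separate post-hoc interpretation step; the paper's framework weighs exactly this trade-off against each model's predictive accuracy.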

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.