
Examining and Improving Model Efficiency for Tabular Data with XGBoost and Ensembles: A Step Further than Deep Learning

Model selection is a critical part of addressing real-world data science problems. Traditionally, tree ensemble models such as XGBoost have been favored for tabular data analysis. However, deep learning models have been gaining traction, purporting to offer superior performance on certain tabular datasets. Recognizing the potential inconsistency in benchmarking and evaluation methods, a team of researchers from Intel’s IT AI Group embarked on a comprehensive study comparing the performance of XGBoost and deep learning models on tabular data.

The research found that, in general, XGBoost outperformed deep learning models across a variety of datasets, including ones initially used to demonstrate the effectiveness of deep models. The study also noted that XGBoost required significantly less hyperparameter tuning. However, the highest levels of performance were achieved when XGBoost was used in conjunction with deep learning models in an ensemble.

Gradient-Boosted Decision Trees (GBDT) like XGBoost have been successful in applications that deal with tabular data. Despite the emergence of specialized deep learning models for tabular data like TabNet, NODE, and DNF-Net, XGBoost continues to be competitive. The synergistic combination of multiple models, known as ensemble learning, has the potential to push performance even further.
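For readers less familiar with GBDTs, a minimal sketch of fitting XGBoost to a tabular classification task looks something like the following. The synthetic dataset and hyperparameter values here are illustrative only, not drawn from the study:

```python
# A minimal sketch of fitting XGBoost to a tabular classification task.
# The synthetic dataset and hyperparameter values are illustrative only.
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = XGBClassifier(
    n_estimators=300,    # number of boosted trees
    max_depth=6,         # depth of each tree
    learning_rate=0.1,   # shrinkage applied to each tree's contribution
    eval_metric="logloss",
)
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```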

In this extensive study, 11 disparate tabular datasets were scrutinized, with deep learning models like NODE, DNF-Net, and TabNet pitted against conventional methods like XGBoost and ensemble approaches. The datasets, culled from well-regarded repositories and Kaggle competitions, presented a broad range of features, classes, and sample sizes. Examined criteria included accuracy, efficiency during training and inference, and the time required for hyperparameter tuning.
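A benchmarking loop in the spirit of these criteria might look like the sketch below, which records accuracy alongside training and inference wall-clock time for a single dataset. This helper is a generic illustration, not the study's actual evaluation harness:

```python
# A generic benchmarking helper in the spirit of the study's criteria:
# accuracy plus training and inference wall-clock time for one dataset.
# This is an illustration, not the study's actual evaluation harness.
import time

from sklearn.metrics import accuracy_score

def benchmark(model, X_train, y_train, X_test, y_test):
    t0 = time.perf_counter()
    model.fit(X_train, y_train)
    train_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    preds = model.predict(X_test)
    infer_time = time.perf_counter() - t0

    return {
        "accuracy": accuracy_score(y_test, preds),
        "train_seconds": train_time,
        "inference_seconds": infer_time,
    }
```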

Notably, each deep learning model performed best on the datasets featured in its own original paper, suggesting that these models may be overfit to their introductory benchmarks. By contrast, XGBoost outperformed the deep learning models on 8 of the 11 datasets, demonstrating its adaptability across varied domains.

The study also assessed ensemble methods that combine deep learning models with XGBoost, and found that these combinations often outperformed any single-model approach. The complementary strengths of the two families became apparent: deep networks capture complex feature interactions, while XGBoost provides robust, well-generalized baseline performance.
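In code, the simplest form of such an ensemble is a weighted average of each model's predicted class probabilities. In the sketch below, scikit-learn's MLPClassifier stands in for a deep tabular model, and the 0.6/0.4 weights are purely illustrative assumptions:

```python
# A minimal sketch of a probability-averaging ensemble. MLPClassifier stands
# in for a deep tabular model; the 0.6/0.4 weights are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

xgb = XGBClassifier(eval_metric="logloss").fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300).fit(X_tr, y_tr)

# Weighted average of class probabilities; weights normalized to sum to 1.
weights = np.array([0.6, 0.4])
avg = np.average(
    [xgb.predict_proba(X_te), mlp.predict_proba(X_te)],
    axis=0,
    weights=weights / weights.sum(),
)
preds = avg.argmax(axis=1)  # class with the highest averaged probability wins
print("ensemble accuracy:", accuracy_score(y_te, preds))
```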

Even though deep models can draw on hardware acceleration, XGBoost was quicker and more efficient during hyperparameter optimization, reaching near-optimal performance in fewer search iterations and with fewer compute resources. These findings underscore the importance of careful model selection and of exploiting the distinct strengths of different algorithmic approaches across varied tabular data problems.
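Automated search frameworks make this comparison concrete. Below is a minimal sketch of Bayesian-style hyperparameter tuning for XGBoost using Optuna; Optuna is a stand-in tuner chosen for illustration, and the search ranges and 50-trial budget are assumptions rather than the study's actual configuration:

```python
# A minimal sketch of automated hyperparameter search for XGBoost with Optuna.
# Optuna is a stand-in tuner chosen for illustration; the search ranges and
# 50-trial budget are assumptions, not the study's actual configuration.
import optuna
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

def objective(trial):
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 1000),
        "max_depth": trial.suggest_int("max_depth", 2, 10),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "subsample": trial.suggest_float("subsample", 0.5, 1.0),
    }
    model = XGBClassifier(eval_metric="logloss", **params)
    return cross_val_score(model, X, y, cv=3, scoring="accuracy").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print("best parameters:", study.best_params)
```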

The study suggested that future research should focus on developing deep learning models that are easier to optimize and that can compete more effectively with XGBoost. It also emphasized the need to test models on a diverse range of datasets to obtain a fuller picture of their capabilities. Despite the strides made in deep learning, the study demonstrated that XGBoost remains an effective and efficient solution for tabular data challenges.
