Over the last decade, deep learning models have surpassed classical machine learning models on a variety of text classification tasks. A primary challenge in text classification is selecting the most appropriate deep learning classifier. Numerous studies have incorporated ensemble learning to boost performance, reduce errors, and avoid overfitting. However, the performance of ensemble methods is limited by the baseline classifiers and the fusion method. The current study makes the following contributions. First, it proposes a new meta-learning ensemble method that fuses baseline deep learning models using two tiers of meta-classifiers. Second, it conducts several experiments on six public benchmark datasets to evaluate the performance of the proposed ensemble. For each benchmark dataset, committees of different deep baseline classifiers are trained, and their best performance is compared with that of the proposed ensemble. The paper further compares the proposed ensemble method against other state-of-the-art ensemble methods. The findings indicate that the proposed ensemble method significantly improves the classification accuracy of the baseline deep models and outperforms the state-of-the-art ensemble methods. Finally, feeding the per-class probability distributions of the deep baseline models to the meta-classifiers further improves the performance of the proposed ensemble.
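The two-tier fusion idea can be sketched as follows. This is an illustrative toy example, not the paper's actual implementation: the deep baselines are simulated by synthetic per-class probability matrices, logistic regression stands in for both tiers of meta-classifiers, and the pairwise grouping of baselines in tier 1 is an assumption made here for illustration.

```python
# Two-tier stacking sketch: tier-1 meta-classifiers fuse class-probability
# outputs of baseline models; a tier-2 meta-classifier fuses the tier-1 outputs.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, n_classes, n_base = 600, 3, 4

# Ground-truth labels and simulated probability distributions from
# n_base hypothetical deep baseline classifiers (softmax over noisy logits,
# with the true class's logit boosted so each baseline beats chance).
y = rng.integers(0, n_classes, n)
base_probs = []
for _ in range(n_base):
    logits = rng.normal(size=(n, n_classes))
    logits[np.arange(n), y] += 1.5
    p = np.exp(logits)
    base_probs.append(p / p.sum(axis=1, keepdims=True))

# Hold out the last third of the samples for evaluation.
tr, te = np.arange(400), np.arange(400, 600)

# Tier 1: one meta-classifier per pair of baselines, each trained on the
# concatenated per-class probability vectors of its pair.
tier1_tr, tier1_te = [], []
for i, j in combinations(range(n_base), 2):
    Xp = np.hstack([base_probs[i], base_probs[j]])
    clf = LogisticRegression(max_iter=1000).fit(Xp[tr], y[tr])
    tier1_tr.append(clf.predict_proba(Xp[tr]))
    tier1_te.append(clf.predict_proba(Xp[te]))

# Tier 2: a final meta-classifier fuses the tier-1 probability outputs.
# (For simplicity both tiers are fit on the same split; real stacking
# would use out-of-fold predictions to avoid leakage.)
X2_tr, X2_te = np.hstack(tier1_tr), np.hstack(tier1_te)
final = LogisticRegression(max_iter=1000).fit(X2_tr, y[tr])
acc = (final.predict(X2_te) == y[te]).mean()
print(f"two-tier ensemble accuracy: {acc:.3f}")
```

Passing probability distributions rather than hard labels between tiers is what lets the meta-classifiers exploit each baseline's confidence, which the abstract identifies as a driver of the performance gain.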