- Learning with ensembles
- Combining classifiers via majority vote
- Implementing a simple majority vote classifier (see the first sketch after this list)
- Using the majority voting principle to make predictions
- Evaluating and tuning the ensemble classifier
- Bagging – building an ensemble of classifiers from bootstrap samples
- Bagging in a nutshell
- Applying bagging to classify samples in the Wine dataset (see the bagging sketch after this list)
- Leveraging weak learners via adaptive boosting
- How boosting works
- Applying AdaBoost using scikit-learn (see the AdaBoost sketch after this list)
- Gradient boosting – training an ensemble based on loss gradients (see the gradient boosting sketch after this list)
- Comparing AdaBoost with gradient boosting
- Outlining the general gradient boosting algorithm
- Explaining the gradient boosting algorithm for classification
- Illustrating gradient boosting for classification
- Using XGBoost (see the XGBoost sketch after this list)
- Summary
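
As a quick, self-contained preview of the majority-voting items above: the chapter builds its own majority vote classifier, but scikit-learn's built-in `VotingClassifier` implements the same principle. This sketch compares three base classifiers against their hard majority vote; the dataset choice and all hyperparameter values are illustrative assumptions, not the chapter's own settings:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Three diverse base classifiers; scaling where the model needs it
clf1 = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf2 = DecisionTreeClassifier(max_depth=1, random_state=0)
clf3 = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=1))

# 'hard' voting = plain majority vote over the predicted class labels
mv_clf = VotingClassifier(
    estimators=[('lr', clf1), ('dt', clf2), ('knn', clf3)],
    voting='hard',
)

for label, clf in [('LogisticRegression', clf1), ('DecisionTree', clf2),
                   ('KNN', clf3), ('MajorityVote', mv_clf)]:
    scores = cross_val_score(clf, X, y, cv=10, scoring='accuracy')
    print(f'{label}: {scores.mean():.3f} +/- {scores.std():.3f}')
```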
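A minimal bagging sketch in the spirit of the Wine item above, using scikit-learn's `BaggingClassifier`. This simplified version uses the full Wine dataset, and the hyperparameter values are illustrative assumptions:

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)

# An unpruned decision tree as the base learner,
# refit on 500 bootstrap samples of the training set
tree = DecisionTreeClassifier(random_state=1)
bag = BaggingClassifier(tree, n_estimators=500, max_samples=1.0,
                        bootstrap=True, n_jobs=-1, random_state=1)

for label, clf in [('Single tree', tree), ('Bagging', bag)]:
    clf.fit(X_train, y_train)
    print(f'{label} test accuracy: '
          f'{accuracy_score(y_test, clf.predict(X_test)):.3f}')
```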
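A short AdaBoost sketch matching the "Applying AdaBoost using scikit-learn" item. Decision stumps (depth-1 trees) and the hyperparameter values below are common choices, not necessarily the chapter's own settings:

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)

# Decision stumps are the classic weak learner for AdaBoost;
# each round reweights the training examples the previous
# stumps misclassified
stump = DecisionTreeClassifier(max_depth=1, random_state=1)
ada = AdaBoostClassifier(stump, n_estimators=500,
                         learning_rate=0.1, random_state=1)
ada.fit(X_train, y_train)
print(f'AdaBoost test accuracy: '
      f'{accuracy_score(y_test, ada.predict(X_test)):.3f}')
```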
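For the gradient boosting items, a sketch using scikit-learn's `GradientBoostingClassifier`, which fits each new tree to the loss gradients of the current ensemble; the hyperparameters are again illustrative assumptions:

```python
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)

# Each new tree is fit to the negative gradient of the log loss
# with respect to the current ensemble's predictions
gbm = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                 max_depth=3, random_state=1)
gbm.fit(X_train, y_train)
print(f'Gradient boosting test accuracy: '
      f'{accuracy_score(y_test, gbm.predict(X_test)):.3f}')
```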
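And for the "Using XGBoost" item, a sketch with the separate `xgboost` package (assumed installed via `pip install xgboost`); the many-shallow-trees, small-learning-rate configuration below is an illustrative assumption:

```python
# Requires: pip install xgboost
from sklearn.datasets import load_wine
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)

# Many shallow trees with a small learning rate,
# each fit on the loss gradients of the ensemble so far
model = XGBClassifier(n_estimators=1000, learning_rate=0.01,
                      max_depth=4, random_state=1)
model.fit(X_train, y_train)
print(f'XGBoost test accuracy: '
      f'{accuracy_score(y_test, model.predict(X_test)):.3f}')
```
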
Please refer to the README.md file in ../ch01
for more information about running the code examples.