```bash
pip install bbox
```
BLO-Toolbox is an extensive PyTorch-based toolbox designed to facilitate the exploration and development of bi-level optimization (BLO) applications in machine learning and signal processing. The repository is associated with the tutorial paper, "An Introduction to Bi-level Optimization: Foundations and Applications in Signal Processing and Machine Learning." The toolbox supports a number of large-scale applications including adversarial training, model pruning, wireless resource allocation, and invariant representation learning. It contains code, tools, and examples that are built upon state-of-the-art methods.
Bi-level optimization (BLO) has a growing presence in the fields of machine learning and signal processing. It serves as a bridge between traditional optimization techniques and novel problem formulations. The BLO-Toolbox aims to provide researchers, developers, and enthusiasts a flexible platform to build and experiment with various BLO algorithms.
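In its generic form, a BLO problem nests a lower-level (inner) problem inside an upper-level (outer) one:

$$
\min_{x}\; f\big(x, y^{*}(x)\big) \quad \text{s.t.} \quad y^{*}(x) \in \operatorname*{arg\,min}_{y}\; g(x, y),
$$

where $f$ and $g$ are the upper- and lower-level objectives and the outer variable $x$ enters the inner problem as a parameter. BLO algorithms differ mainly in how they approximate the hypergradient of $f$ with respect to $x$ through the inner solution $y^{*}(x)$.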
We provide reference implementations of various BLO applications, including:
- Wireless Resource Allocation
- Wireless Signal Demodulation
- Invariant Representation Learning
- Adversarial Robust Training
- Model Pruning
While each of these applications traditionally calls for a distinct implementation style, our implementations share a unified code structure. More examples are on the way!
We also provide implementations of a broad suite of BLO algorithms:

- Implicit Gradient
  - Hessian-Free Approximation (Revisiting and Advancing Fast Adversarial Training Through The Lens of Bi-Level Optimization)
  - WoodFisher Approximation (WoodFisher: Efficient second-order approximation for neural network compression)
  - Finite Difference (or T1-T2) (DARTS: Differentiable Architecture Search)
  - Neumann Series (Optimizing Millions of Hyperparameters by Implicit Differentiation)
  - Conjugate Gradient (Meta-Learning with Implicit Gradients)
- Gradient Unrolling
  - Forward Gradient Unrolling (MetaPoison: Practical General-Purpose Clean-Label Data Poisoning)
  - Backward Gradient Unrolling (Model-Agnostic Meta-Learning (MAML))
  - Truncated Gradient Unrolling (Truncated back-propagation for bilevel optimization)
  - K-step Truncated Back-propagation (K-RMD)
  - Sign-based Gradient Unrolling (Sign-MAML: Efficient model-agnostic meta-learning by signSGD)
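As a toy illustration of backward gradient unrolling, the sketch below differentiates through K inner gradient steps of a scalar bilevel problem using plain PyTorch autograd. All names and constants here are illustrative, not part of the toolbox API:

```python
import torch

# Toy bilevel problem (illustrative only):
#   inner:  w*(lam) = argmin_w (w - lam)^2
#   outer:  minimize (w*(lam) - 1)^2 over lam
lam = torch.tensor(0.0, requires_grad=True)
w = torch.tensor(5.0, requires_grad=True)

inner_lr, K = 0.1, 20
for _ in range(K):
    inner_loss = (w - lam) ** 2
    # create_graph=True keeps each inner update in the autograd graph,
    # so the outer gradient can be back-propagated through all K steps
    g, = torch.autograd.grad(inner_loss, w, create_graph=True)
    w = w - inner_lr * g  # functional update (no in-place .data writes)

outer_loss = (w - 1.0) ** 2
hypergrad, = torch.autograd.grad(outer_loss, lam)
# hypergrad is negative here: increasing lam moves w*(lam) = lam toward 1
```

The key design point is that the inner loop avoids a stateful optimizer: each update produces a new differentiable tensor, which is what lets `torch.autograd.grad` reach back to `lam`.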
The toolbox additionally supports standard large-scale training techniques:

- Gradient accumulation
- FP16/BF16 mixed-precision training
- Gradient clipping
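These techniques compose with an ordinary PyTorch training loop. The sketch below shows gradient accumulation with clipping; the model, data, and constants are placeholders, not toolbox API:

```python
import torch

# Sketch: gradient accumulation + clipping in plain PyTorch.
# Model, data, and constants are placeholders, not toolbox API.
torch.manual_seed(0)
model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
accum_steps, max_norm = 4, 1.0

w_before = model.weight.detach().clone()  # snapshot to confirm updates happen
data = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(8)]
for step, (x, y) in enumerate(data, start=1):
    loss = torch.nn.functional.mse_loss(model(x), y)
    (loss / accum_steps).backward()  # accumulate scaled gradients
    if step % accum_steps == 0:
        # clip the *accumulated* gradient once, just before the step
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
        opt.step()
        opt.zero_grad()
```

For FP16/BF16, the same loop would wrap the forward pass in `torch.autocast` (with a gradient scaler for FP16).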
We welcome contributions from the community! Please see our contributing guidelines for details on how to contribute to BLO-Toolbox.
If you use this toolbox in your research, please cite our paper using the following BibTeX entry:
```bibtex
@article{zhang2023introduction,
  title={An introduction to bi-level optimization: Foundations and applications in signal processing and machine learning},
  author={Zhang, Yihua and Khanduri, Prashant and Tsaknakis, Ioannis and Yao, Yuguang and Hong, Mingyi and Liu, Sijia},
  journal={arXiv preprint arXiv:2308.00788},
  year={2023}
}
```
Feel free to reach out with any questions, comments, or inquiries. You can contact us or open an issue in the repository.