diff --git a/README.md b/README.md
index b89a16d..0a3cbeb 100644
--- a/README.md
+++ b/README.md
@@ -125,137 +125,156 @@ Useful Tools
Please run experiments or find results on each config page. Refer to [Mixup Benchmarks](docs/en/mixup_benchmarks) for benchmarking results of mixup methods. View [Model Zoos Sup](docs/en/model_zoos/Model_Zoo_sup.md) and [Model Zoos SSL](docs/en/model_zoos/Model_Zoo_selfsup.md) for a comprehensive collection of mainstream backbones and self-supervised algorithms. We also provide the paper lists of [Awesome Mixups](docs/en/awesome_mixups) and [Awesome MIM](docs/en/awesome_selfsup/MIM.md) for your reference. Please view the config files and links to models at the following config pages. Checkpoints and training logs are being updated continuously!
-* Backbone architectures for supervised image classification on ImageNet.
-
-
- Currently supported backbones
-
- - [x] [AlexNet](https://dl.acm.org/doi/10.1145/3065386) (NIPS'2012) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/alexnet/)]
- - [x] [VGG](https://arxiv.org/abs/1409.1556) (ICLR'2015) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/vgg/)]
- - [x] [InceptionV3](https://arxiv.org/abs/1512.00567) (CVPR'2016) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/inception_v3/)]
- - [x] [ResNet](https://openaccess.thecvf.com/content_cvpr_2016/html/He_Deep_Residual_Learning_CVPR_2016_paper.html) (CVPR'2016) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/resnet/)]
- - [x] [ResNeXt](https://arxiv.org/abs/1611.05431) (CVPR'2017) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/resnet/)]
- - [x] [SE-ResNet](https://arxiv.org/abs/1709.01507) (CVPR'2018) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/resnet/)]
- - [x] [SE-ResNeXt](https://arxiv.org/abs/1709.01507) (CVPR'2018) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/resnet/)]
- - [x] [ShuffleNetV1](https://arxiv.org/abs/1807.11164) (CVPR'2018) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/shufflenet_v1/)]
- - [x] [ShuffleNetV2](https://arxiv.org/abs/1807.11164) (ECCV'2018) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/shufflenet_v2/)]
- - [x] [MobileNetV2](https://arxiv.org/abs/1801.04381) (CVPR'2018) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mobilenet_v2/)]
- - [x] [MobileNetV3](https://arxiv.org/abs/1905.02244) (ICCV'2019) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mobilenet_v3/)]
- - [x] [EfficientNet](https://arxiv.org/abs/1905.11946) (ICML'2019) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/efficientnet/)]
- - [x] [EfficientNetV2](https://arxiv.org/abs/2104.00298) (ICML'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/efficientnet_v2/)]
- - [x] [HRNet](https://arxiv.org/abs/1908.07919) (TPAMI'2019) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/hrnet/)]
- - [x] [Res2Net](https://arxiv.org/abs/1904.01169) (ArXiv'2019) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/res2net/)]
- - [x] [CSPNet](https://arxiv.org/abs/1911.11929) (CVPRW'2020) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/cspnet/)]
- - [x] [RegNet](https://arxiv.org/abs/2003.13678) (CVPR'2020) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/regnet/)]
- - [x] [Vision-Transformer](https://arxiv.org/abs/2010.11929) (ICLR'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/vision_transformer/)]
- - [x] [Swin-Transformer](https://arxiv.org/abs/2103.14030) (ICCV'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/swin_transformer/)]
- - [x] [PVT](https://arxiv.org/abs/2102.12122) (ICCV'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/pvt/)]
- - [x] [T2T-ViT](https://arxiv.org/abs/2101.11986) (ICCV'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/t2t_vit/)]
- - [x] [LeViT](https://arxiv.org/abs/2104.01136) (ICCV'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/levit/)]
- - [x] [RepVGG](https://arxiv.org/abs/2101.03697) (CVPR'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/repvgg/)]
- - [x] [DeiT](https://arxiv.org/abs/2012.12877) (ICML'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/deit/)]
- - [x] [MLP-Mixer](https://arxiv.org/abs/2105.01601) (NIPS'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mlp_mixer/)]
- - [x] [Twins](https://proceedings.neurips.cc/paper/2021/hash/4e0928de075538c593fbdabb0c5ef2c3-Abstract.html) (NIPS'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/twins/)]
- - [x] [ConvMixer](https://arxiv.org/abs/2201.09792) (Openreview'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/convmixer/)]
- - [x] [BEiT](https://arxiv.org/abs/2106.08254) (ICLR'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/beit/)]
- - [x] [UniFormer](https://arxiv.org/abs/2201.09450) (ICLR'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/uniformer/)]
- - [x] [MobileViT](http://arxiv.org/abs/2110.02178) (ICLR'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mobilevit/)]
- - [x] [PoolFormer](https://arxiv.org/abs/2111.11418) (CVPR'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/poolformer/)]
- - [x] [ConvNeXt](https://arxiv.org/abs/2201.03545) (CVPR'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/convnext/)]
- - [x] [MViTV2](https://arxiv.org/abs/2112.01526) (CVPR'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mvit/)]
- - [x] [RepMLP](https://arxiv.org/abs/2105.01883) (CVPR'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/repmlp/)]
- - [x] [VAN](https://arxiv.org/abs/2202.09741) (CVMJ'2023) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/van/)]
- - [x] [DeiT-3](https://arxiv.org/abs/2204.07118) (ECCV'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/deit3/)]
- - [x] [LITv2](https://arxiv.org/abs/2205.13213) (NIPS'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/lit_v2/)]
- - [x] [HorNet](https://arxiv.org/abs/2207.14284) (NIPS'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/hornet/)]
- - [x] [DaViT](https://arxiv.org/abs/2204.03645) (ECCV'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/davit/)]
- - [x] [EdgeNeXt](https://arxiv.org/abs/2206.10589) (ECCVW'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/edgenext/)]
- - [x] [EfficientFormer](https://arxiv.org/abs/2206.01191) (ArXiv'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/efficientformer/)]
- - [x] [MogaNet](https://arxiv.org/abs/2211.03295) (ICLR'2024) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/moganet/)]
- - [x] [MetaFormer](http://arxiv.org/abs/2210.13452) (ArXiv'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/metaformer/)]
- - [x] [ConvNeXtV2](http://arxiv.org/abs/2301.00808) (ArXiv'2023) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/convnext_v2/)]
- - [x] [CoC](https://arxiv.org/abs/2303.01494) (ICLR'2023) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/context_cluster/)]
- - [x] [MobileOne](http://arxiv.org/abs/2206.04040) (CVPR'2023) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mobileone/)]
- - [x] [VanillaNet](http://arxiv.org/abs/2305.12972) (NeurIPS'2023) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/vanillanet/)]
- - [x] [RWKV](https://arxiv.org/abs/2305.13048) (ArXiv'2023) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/rwkv/)]
- - [x] [UniRepLKNet](https://arxiv.org/abs/2311.15599) (CVPR'2024) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/unireplknet/)]
- - [x] [TransNeXt](https://arxiv.org/abs/2311.17132) (CVPR'2024) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/transnext/)]
- - [x] [StarNet](https://arxiv.org/abs/2403.19967) (CVPR'2024) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/starnet/)]
-
-
-* Mixup methods for supervised image classification.
-
-
- Currently supported mixup methods
-
- - [x] [Mixup](https://arxiv.org/abs/1710.09412) (ICLR'2018) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mixups/)]
- - [x] [CutMix](https://arxiv.org/abs/1905.04899) (ICCV'2019) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mixups/)]
- - [x] [ManifoldMix](https://arxiv.org/abs/1806.05236) (ICML'2019) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mixups/)]
- - [x] [FMix](https://arxiv.org/abs/2002.12047) (ArXiv'2020) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mixups/)]
- - [x] [AttentiveMix](https://arxiv.org/abs/2003.13048) (ICASSP'2020) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mixups/)]
- - [x] [SmoothMix](https://openaccess.thecvf.com/content_CVPRW_2020/papers/w45/Lee_SmoothMix_A_Simple_Yet_Effective_Data_Augmentation_to_Train_Robust_CVPRW_2020_paper.pdf) (CVPRW'2020) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mixups/)]
- - [x] [SaliencyMix](https://arxiv.org/abs/2006.01791) (ICLR'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mixups/)]
- - [x] [PuzzleMix](https://arxiv.org/abs/2009.06962) (ICML'2020) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mixups/)]
- - [x] [SnapMix](https://arxiv.org/abs/2012.04846) (AAAI'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/cifar100/mixups/)]
- - [x] [GridMix](https://www.sciencedirect.com/science/article/pii/S0031320320303976) (Pattern Recognition'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mixups/)]
- - [x] [ResizeMix](https://arxiv.org/abs/2012.11101) (CVMJ'2023) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mixups/)]
- - [x] [AlignMix](https://arxiv.org/abs/2103.15375) (CVPR'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mixups/)]
- - [x] [TransMix](https://arxiv.org/abs/2111.09833) (CVPR'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mixups/)]
- - [x] [AutoMix](https://arxiv.org/abs/2103.13027) (ECCV'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/automix)]
- - [x] [SAMix](https://arxiv.org/abs/2111.15454) (ArXiv'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/samix)]
- - [x] [DecoupleMix](https://arxiv.org/abs/2203.10761) (NeurIPS'2023) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mixups/decouple)]
- - [ ] [SMMix](https://arxiv.org/abs/2212.12977) (ICCV'2023) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mixups/)]
- - [x] [AdAutoMix](https://arxiv.org/abs/2312.11954) (ICLR'2024) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/adautomix)]
- - [ ] [SUMix](https://arxiv.org/abs/2407.07805) (ECCV'2024)
-
-
-
- Currently supported datasets for mixups
-
- - [x] [ImageNet](https://arxiv.org/abs/1409.0575) [[download (1K)](http://www.image-net.org/challenges/LSVRC/2012/)] [[download (21K)](https://image-net.org/data/imagenet21k_resized.tar.gz)] [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/imagenet/mixups/)]
- - [x] [CIFAR-10](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf) [[download](https://www.cs.toronto.edu/~kriz/cifar.html)] [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/cifar10/)]
- - [x] [CIFAR-100](https://www.cs.toronto.edu/~kriz/learning-features-2009-TR.pdf) [[download](https://www.cs.toronto.edu/~kriz/cifar.html)] [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/cifar100/)]
- - [x] [Tiny-ImageNet](https://arxiv.org/abs/1707.08819) [[download](http://cs231n.stanford.edu/tiny-imagenet-200.zip)] [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/tiny_imagenet/)]
- - [x] [FashionMNIST](https://arxiv.org/abs/1708.07747) [[download](https://github.com/zalandoresearch/fashion-mnist)]
- - [x] [STL-10](http://proceedings.mlr.press/v15/coates11a/coates11a.pdf) [[download](https://cs.stanford.edu/~acoates/stl10/)]
- - [x] [CUB-200-2011](https://resolver.caltech.edu/CaltechAUTHORS:20111026-120541847) [[download](http://www.vision.caltech.edu/datasets/cub_200_2011/)] [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/cub200/)]
- - [x] [FGVC-Aircraft](https://arxiv.org/abs/1306.5151) [[download](https://www.robots.ox.ac.uk/~vgg/data/fgvc-aircraft/)] [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/aircrafts/)]
- - [x] [Stanford-Cars](http://ai.stanford.edu/~jkrause/papers/3drr13.pdf) [[download](http://ai.stanford.edu/~jkrause/cars/car_dataset.html)]
- - [x] [Places205](http://places2.csail.mit.edu/index.html) [[download](http://places.csail.mit.edu/downloadData.html)] [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/place205/)]
- - [x] [iNaturalist-2017](https://arxiv.org/abs/1707.06642) [[download](https://github.com/visipedia/inat_comp/tree/master/2017)] [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/inaturalist2017/)]
- - [x] [iNaturalist-2018](https://arxiv.org/abs/1707.06642) [[download](https://github.com/visipedia/inat_comp/tree/master/2018)] [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/classification/inaturalist2018/)]
- - [x] [AgeDB](https://ieeexplore.ieee.org/document/8014984) [[download](https://ibug.doc.ic.ac.uk/resources/agedb/)] [[download (baidu)](https://pan.baidu.com/s/1XdibVxiGoWf46HLOHKiIyw?pwd=0n6p)] [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/regression/agedb)]
- - [x] [IMDB-WIKI](https://link.springer.com/article/10.1007/s11263-016-0940-3) [[download (imdb)](https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/static/imdb_crop.tar)] [[download (wiki)](https://data.vision.ee.ethz.ch/cvl/rrothe/imdb-wiki/static/wiki_crop.tar)] [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/regression/imdb_wiki)]
- - [x] [RCFMNIST](https://arxiv.org/abs/2210.05775) [[download](https://github.com/zalandoresearch/fashion-mnist)] [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/regression/rcfmnist)]
-
-
-* Self-supervised algorithms for visual representation learning.
-
-
- Currently supported self-supervised algorithms
-
- - [x] [Relative Location](https://arxiv.org/abs/1505.05192) (ICCV'2015) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/relative_loc/)]
- - [x] [Rotation Prediction](https://arxiv.org/abs/1803.07728) (ICLR'2018) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/rotation_pred/)]
- - [x] [DeepCluster](https://arxiv.org/abs/1807.05520) (ECCV'2018) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/deepcluster/)]
- - [x] [NPID](https://arxiv.org/abs/1805.01978) (CVPR'2018) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/npid/)]
- - [x] [ODC](https://arxiv.org/abs/2006.10645) (CVPR'2020) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/odc/)]
- - [x] [MoCov1](https://arxiv.org/abs/1911.05722) (CVPR'2020) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/mocov1/)]
- - [x] [SimCLR](https://arxiv.org/abs/2002.05709) (ICML'2020) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/simclr/)]
- - [x] [MoCoV2](https://arxiv.org/abs/2003.04297) (ArXiv'2020) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/mocov2/)]
- - [x] [BYOL](https://arxiv.org/abs/2006.07733) (NIPS'2020) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/byol/)]
- - [x] [SwAV](https://arxiv.org/abs/2006.09882) (NIPS'2020) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/swav/)]
- - [x] [DenseCL](https://arxiv.org/abs/2011.09157) (CVPR'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/densecl/)]
- - [x] [SimSiam](https://arxiv.org/abs/2011.10566) (CVPR'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/simsiam/)]
- - [x] [Barlow Twins](https://arxiv.org/abs/2103.03230) (ICML'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/barlowtwins/)]
- - [x] [MoCoV3](https://arxiv.org/abs/2104.02057) (ICCV'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/mocov3/)]
- - [x] [DINO](https://arxiv.org/abs/2104.14294) (ICCV'2021) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/dino/)]
- - [x] [BEiT](https://arxiv.org/abs/2106.08254) (ICLR'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/beit/)]
- - [x] [MAE](https://arxiv.org/abs/2111.06377) (CVPR'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/mae/)]
- - [x] [SimMIM](https://arxiv.org/abs/2111.09886) (CVPR'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/simmim/)]
- - [x] [MaskFeat](https://arxiv.org/abs/2112.09133) (CVPR'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/maskfeat/)]
- - [x] [CAE](https://arxiv.org/abs/2202.03026) (ArXiv'2022) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/cae/)]
- - [x] [A2MIM](https://arxiv.org/abs/2205.13943) (ICML'2023) [[config](https://github.com/Westlake-AI/openmixup/tree/main/configs/selfsup/a2mim/)]
-
+
+| Supported Backbone Architectures | Mixup Data Augmentations |
+| :---: | :---: |
+| Self-supervised Learning Algorithms | Supported Datasets |
+
(back to top)
diff --git a/docs/en/awesome_mixups/Mixup_SL.md b/docs/en/awesome/awesome_mixup.md
similarity index 54%
rename from docs/en/awesome_mixups/Mixup_SL.md
rename to docs/en/awesome/awesome_mixup.md
index b9aedc5..935e902 100644
--- a/docs/en/awesome_mixups/Mixup_SL.md
+++ b/docs/en/awesome/awesome_mixup.md
@@ -10,18 +10,35 @@ The list of awesome mixup methods is summarized in chronological order and is on
## Table of Contents
- - [Sample Mixup Methods](#sample-mixup-methods)
- + [Pre-defined Policies](#pre-defined-policies)
- + [Saliency-guided Policies](#saliency-guided-policies)
- - [Label Mixup Methods](#label-mixup-methods)
+- [Awesome-Mixup](#awesome-mixup)
+ - [Introduction](#introduction)
+ - [Table of Contents](#table-of-contents)
+  - [Fundamental Methods](#fundamental-methods)
+ - [Sample Mixup Methods](#sample-mixup-methods)
+ - [**Pre-defined Policies**](#pre-defined-policies)
+ - [**Adaptive Policies**](#adaptive-policies)
+ - [Label Mixup Methods](#label-mixup-methods)
+ - [Mixup for Self-supervised Learning](#mixup-for-self-supervised-learning)
+ - [Mixup for Semi-supervised Learning](#mixup-for-semi-supervised-learning)
+ - [Mixup for Regression](#mixup-for-regression)
+ - [Mixup for Robustness](#mixup-for-robustness)
+ - [Mixup for Multi-modality](#mixup-for-multi-modality)
- [Analysis of Mixup](#analysis-of-mixup)
+ - [Natural Language Processing](#natural-language-processing)
+ - [Graph Representation Learning](#graph-representation-learning)
- [Survey](#survey)
+ - [Benchmark](#benchmark)
- [Contribution](#contribution)
+ - [License](#license)
+ - [Acknowledgement](#acknowledgement)
- [Related Project](#related-project)
-## Sample Mixup Methods
-### Pre-defined Policies
+## Fundamental Methods
+
+### Sample Mixup Methods
+
+#### **Pre-defined Policies**
* **mixup: Beyond Empirical Risk Minimization**
*Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz*
@@ -261,9 +278,17 @@ NIPS'2022 [[Paper](https://arxiv.org/abs/2206.14502)]
+* **ContextMix: A context-aware data augmentation method for industrial visual inspection systems**
+*Hyungmin Kim, Donghun Kim, Pyunghwan Ahn, Sungho Suh, Hansang Cho, Junmo Kim*
+EAAI'2024 [[Paper](https://arxiv.org/abs/2401.10050)]
+
+ ContextMix Framework
+
+
+
(back to top)
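All of the pre-defined policies above build on the original mixup interpolation. A minimal NumPy sketch of that core operation (the function name, shapes, and default `alpha` are illustrative, not OpenMixup's API):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mixup (Zhang et al., ICLR'2018): convex-combine two samples and
    their one-hot labels with a ratio drawn from Beta(alpha, alpha)."""
    if rng is None:
        rng = np.random.default_rng()
    lam = float(rng.beta(alpha, alpha))  # mixing ratio in [0, 1]
    x = lam * x1 + (1.0 - lam) * x2      # interpolated input
    y = lam * y1 + (1.0 - lam) * y2      # interpolated soft label
    return x, y, lam
```

The pre-defined policies below differ mainly in *how* the two inputs are combined (linear interpolation, masks, cut-and-paste regions), while the label is mixed with the same ratio.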
-### Saliency-guided Policies
+#### **Adaptive Policies**
* **SaliencyMix: A Saliency Guided Data Augmentation Strategy for Better Regularization**
*A F M Shahab Uddin and Mst. Sirazam Monira and Wheemyung Shin and TaeChoong Chung and Sung-Ho Bae*
@@ -455,6 +480,7 @@ AAAI'2023 [[Paper](https://arxiv.org/abs/2306.16612)]
* **MixPro: Data Augmentation with MaskMix and Progressive Attention Labeling for Vision Transformer**
*Qihao Zhao, Yangyu Huang, Wei Hu, Fan Zhang, Jun Liu*
ICLR'2023 [[Paper](https://openreview.net/forum?id=dRjWsd3gwsm)]
+[[Code](https://github.com/fistyee/MixPro)]
MixPro Framework
@@ -478,9 +504,10 @@ ICCV'2023 [[Paper](https://arxiv.org/abs/2212.12977)]
-* **Teach me how to Interpolate a Myriad of Embeddings**
+
+* **Embedding Space Interpolation Beyond Mini-Batch, Beyond Pairs and Beyond Examples**
*Shashanka Venkataramanan, Ewa Kijak, Laurent Amsaleg, Yannis Avrithis*
-Arxiv'2022 [[Paper](https://arxiv.org/abs/2206.14868)]
+NeurIPS'2023 [[Paper](https://arxiv.org/abs/2206.14868)]
MultiMix Framework
@@ -494,9 +521,35 @@ ICME'2023 [[Paper](https://ieeexplore.ieee.org/abstract/document/10219625)]
+* **LGCOAMix: Local and Global Context-and-Object-Part-Aware Superpixel-Based Data Augmentation for Deep Visual Recognition**
+*Fadi Dornaika, Danyang Sun*
+TIP'2023 [[Paper](https://ieeexplore.ieee.org/document/10348509)]
+[[Code](https://github.com/DanielaPlusPlus/LGCOAMix)]
+
+ LGCOAMix Framework
+
+
+
+* **Catch-Up Mix: Catch-Up Class for Struggling Filters in CNN**
+*Minsoo Kang, Minkoo Kang, Suhyun Kim*
+AAAI'2024 [[Paper](https://arxiv.org/abs/2401.13193)]
+
+ Catch-Up-Mix Framework
+
+
+
+* **Adversarial AutoMixup**
+*Huafeng Qin, Xin Jin, Yun Jiang, Mounim A. El-Yacoubi, Xinbo Gao*
+ICLR'2024 [[Paper](https://arxiv.org/abs/2312.11954)]
+[[Code](https://github.com/jinxins/adversarial-automixup)]
+
+ AdAutoMix Framework
+
+
+
(back to top)
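Many of the adaptive policies above refine CutMix-style region replacement by choosing *where* to cut. A minimal sketch of the underlying box sampling (uniform box location here, as in CutMix; saliency-guided variants bias the location instead — names are illustrative):

```python
import numpy as np

def rand_bbox(h, w, lam, rng=None):
    """Sample a box whose area fraction is roughly (1 - lam)."""
    if rng is None:
        rng = np.random.default_rng()
    cut_rat = np.sqrt(1.0 - lam)
    cut_h, cut_w = int(h * cut_rat), int(w * cut_rat)
    cy, cx = rng.integers(h), rng.integers(w)  # box centre (uniform here)
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)
    return y1, y2, x1, x2

def cutmix(xa, xb, lam, rng=None):
    """Paste a box from xb into xa; return the mixed image and the
    label weight corrected for boxes clipped at the image border."""
    h, w = xa.shape[:2]
    y1, y2, x1, x2 = rand_bbox(h, w, lam, rng)
    out = xa.copy()
    out[y1:y2, x1:x2] = xb[y1:y2, x1:x2]
    lam_adj = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)
    return out, lam_adj
```

Saliency-guided methods keep this paste-and-reweight structure but pick `(cy, cx)` (or a free-form mask) from a saliency map rather than uniformly.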
-## Label Mixup Methods
+### Label Mixup Methods
* **mixup: Beyond Empirical Risk Minimization**
*Hongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz*
@@ -564,7 +617,7 @@ ArXiv'2022 [[Paper](https://arxiv.org/abs/2201.02354)]
-* **Decoupled Mixup for Data-efficient Learning**
+* **Harnessing Hard Mixed Samples with Decoupled Regularizer**
*Zicheng Liu, Siyuan Li, Ge Wang, Cheng Tan, Lirong Wu, Stan Z. Li*
NIPS'2023 [[Paper](https://arxiv.org/abs/2203.10761)]
[[Code](https://github.com/Westlake-AI/openmixup)]
@@ -615,7 +668,8 @@ arXiv'2022 [[Paper](https://arxiv.org/abs/2211.15846)]
* **MixupE: Understanding and Improving Mixup from Directional Derivative Perspective**
*Vikas Verma, Sarthak Mittal, Wai Hoh Tang, Hieu Pham, Juho Kannala, Yoshua Bengio, Arno Solin, Kenji Kawaguchi*
-arXiv'2022 [[Paper](https://arxiv.org/abs/2212.13381)]
+UAI'2023 [[Paper](https://arxiv.org/abs/2212.13381)]
+[[Code](https://github.com/onehuster/mixupe)]
MixupE Framework
@@ -654,54 +708,586 @@ arXiv'2023 [[Paper](https://arxiv.org/abs/2308.03236)]
+
+(back to top)
+
+## Mixup for Self-supervised Learning
+
+* **MixCo: Mix-up Contrastive Learning for Visual Representation**
+*Sungnyun Kim, Gihun Lee, Sangmin Bae, Se-Young Yun*
+NIPSW'2020 [[Paper](https://arxiv.org/abs/2010.06300)]
+[[Code](https://github.com/Lee-Gihun/MixCo-Mixup-Contrast)]
+
+ MixCo Framework
+
+
+
+* **Hard Negative Mixing for Contrastive Learning**
+*Yannis Kalantidis, Mert Bulent Sariyildiz, Noe Pion, Philippe Weinzaepfel, Diane Larlus*
+NIPS'2020 [[Paper](https://arxiv.org/abs/2010.01028)]
+[[Code](https://europe.naverlabs.com/mochi)]
+
+ MoCHi Framework
+
+
+
+* **i-Mix: A Domain-Agnostic Strategy for Contrastive Representation Learning**
+*Kibok Lee, Yian Zhu, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin, Honglak Lee*
+ICLR'2021 [[Paper](https://arxiv.org/abs/2010.08887)]
+[[Code](https://github.com/kibok90/imix)]
+
+ i-Mix Framework
+
+
+
+* **Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation**
+*Zhiqiang Shen, Zechun Liu, Zhuang Liu, Marios Savvides, Trevor Darrell, Eric Xing*
+AAAI'2022 [[Paper](https://arxiv.org/abs/2003.05438)]
+[[Code](https://github.com/szq0214/Un-Mix)]
+
+ Un-Mix Framework
+
+
+
+* **Beyond Single Instance Multi-view Unsupervised Representation Learning**
+*Xiangxiang Chu, Xiaohang Zhan, Xiaolin Wei*
+BMVC'2022 [[Paper](https://arxiv.org/abs/2011.13356)]
+
+ BSIM Framework
+
+
+
+* **Improving Contrastive Learning by Visualizing Feature Transformation**
+*Rui Zhu, Bingchen Zhao, Jingen Liu, Zhenglong Sun, Chang Wen Chen*
+ICCV'2021 [[Paper](https://arxiv.org/abs/2108.02982)]
+[[Code](https://github.com/DTennant/CL-Visualizing-Feature-Transformation)]
+
+ FT Framework
+
+
+
+* **Piecing and Chipping: An effective solution for the information-erasing view generation in Self-supervised Learning**
+*Jingwei Liu, Yi Gu, Shentong Mo, Zhun Sun, Shumin Han, Jiafeng Guo, Xueqi Cheng*
+OpenReview'2021 [[Paper](https://openreview.net/forum?id=DnG8f7gweH4)]
+
+ PCEA Framework
+
+
+
+* **Contrast and Mix: Temporal Contrastive Video Domain Adaptation with Background Mixing**
+*Aadarsh Sahoo, Rutav Shah, Rameswar Panda, Kate Saenko, Abir Das*
+NIPS'2021 [[Paper](https://arxiv.org/abs/2011.02697)]
+[[Code](https://cvir.github.io/projects/comix)]
+
+ CoMix Framework
+
+
+
+* **Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup**
+*Siyuan Li, Zicheng Liu, Di Wu, Zihan Liu, Stan Z. Li*
+ArXiv'2021 [[Paper](https://arxiv.org/abs/2111.15454)]
+[[Code](https://github.com/Westlake-AI/openmixup)]
+
+ SAMix Framework
+
+
+
+* **MixSiam: A Mixture-based Approach to Self-supervised Representation Learning**
+*Xiaoyang Guo, Tianhao Zhao, Yutian Lin, Bo Du*
+OpenReview'2021 [[Paper](https://arxiv.org/abs/2111.02679)]
+
+ MixSiam Framework
+
+
+
+* **Mix-up Self-Supervised Learning for Contrast-agnostic Applications**
+*Yichen Zhang, Yifang Yin, Ying Zhang, Roger Zimmermann*
+ICME'2021 [[Paper](https://arxiv.org/abs/2204.00901)]
+
+ MixSSL Framework
+
+
+
+* **Towards Domain-Agnostic Contrastive Learning**
+*Vikas Verma, Minh-Thang Luong, Kenji Kawaguchi, Hieu Pham, Quoc V. Le*
+ICML'2021 [[Paper](https://arxiv.org/abs/2011.04419)]
+
+ DACL Framework
+
+
+
+* **Center-wise Local Image Mixture For Contrastive Representation Learning**
+*Hao Li, Xiaopeng Zhang, Hongkai Xiong*
+BMVC'2021 [[Paper](https://arxiv.org/abs/2011.02697)]
+
+ CLIM Framework
+
+
+
+* **Contrastive-mixup Learning for Improved Speaker Verification**
+*Xin Zhang, Minho Jin, Roger Cheng, Ruirui Li, Eunjung Han, Andreas Stolcke*
+ICASSP'2022 [[Paper](https://arxiv.org/abs/2202.10672)]
+
+ Mixup Framework
+
+
+
+* **ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning**
+*Jun Xia, Lirong Wu, Ge Wang, Jintao Chen, Stan Z.Li*
+ICML'2022 [[Paper](https://arxiv.org/abs/2110.02027)]
+[[Code](https://github.com/junxia97/ProGCL)]
+
+ ProGCL Framework
+
+
+
+* **M-Mix: Generating Hard Negatives via Multi-sample Mixing for Contrastive Learning**
+*Shaofeng Zhang, Meng Liu, Junchi Yan, Hengrui Zhang, Lingxiao Huang, Pinyan Lu, Xiaokang Yang*
+KDD'2022 [[Paper](https://sherrylone.github.io/assets/KDD22_M-Mix.pdf)]
+[[Code](https://github.com/Sherrylone/m-mix)]
+
+ M-Mix Framework
+
+
+
+* **A Simple Data Mixing Prior for Improving Self-Supervised Learning**
+*Sucheng Ren, Huiyu Wang, Zhengqi Gao, Shengfeng He, Alan Yuille, Yuyin Zhou, Cihang Xie*
+CVPR'2022 [[Paper](https://arxiv.org/abs/2206.07692)]
+[[Code](https://github.com/oliverrensu/sdmp)]
+
+ SDMP Framework
+
+
+
+* **On the Importance of Asymmetry for Siamese Representation Learning**
+*Xiao Wang, Haoqi Fan, Yuandong Tian, Daisuke Kihara, Xinlei Chen*
+CVPR'2022 [[Paper](https://arxiv.org/abs/2204.00613)]
+[[Code](https://github.com/facebookresearch/asym-siam)]
+
+ ScaleMix Framework
+
+
+
+* **VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix**
+*Teng Wang, Wenhao Jiang, Zhichao Lu, Feng Zheng, Ran Cheng, Chengguo Yin, Ping Luo*
+ICML'2022 [[Paper](https://arxiv.org/abs/2206.08919)]
+
+ VLMixer Framework
+
+
+
+* **CropMix: Sampling a Rich Input Distribution via Multi-Scale Cropping**
+*Junlin Han, Lars Petersson, Hongdong Li, Ian Reid*
+ArXiv'2022 [[Paper](https://arxiv.org/abs/2205.15955)]
+[[Code](https://github.com/JunlinHan/CropMix)]
+
+ CropMix Framework
+
+
+
+* **i-MAE: Are Latent Representations in Masked Autoencoders Linearly Separable?**
+*Kevin Zhang, Zhiqiang Shen*
+ArXiv'2022 [[Paper](https://arxiv.org/abs/2210.11470)]
+[[Code](https://github.com/vision-learning-acceleration-lab/i-mae)]
+
+ i-MAE Framework
+
+
+
+* **MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers**
+*Jihao Liu, Xin Huang, Jinliang Zheng, Yu Liu, Hongsheng Li*
+CVPR'2023 [[Paper](https://arxiv.org/abs/2205.13137)]
+[[Code](https://github.com/Sense-X/MixMIM)]
+
+ MixMAE Framework
+
+
+
+* **Mixed Autoencoder for Self-supervised Visual Representation Learning**
+*Kai Chen, Zhili Liu, Lanqing Hong, Hang Xu, Zhenguo Li, Dit-Yan Yeung*
+CVPR'2023 [[Paper](https://arxiv.org/abs/2303.17152)]
+
+ MixedAE Framework
+
+
+
+* **Inter-Instance Similarity Modeling for Contrastive Learning**
+*Chengchao Shen, Dawei Liu, Hao Tang, Zhe Qu, Jianxin Wang*
+ArXiv'2023 [[Paper](https://arxiv.org/abs/2306.12243)]
+[[Code](https://github.com/visresearch/patchmix)]
+
+ PatchMix Framework
+
+
+
+* **Guarding Barlow Twins Against Overfitting with Mixed Samples**
+*Wele Gedara Chaminda Bandara, Celso M. De Melo, Vishal M. Patel*
+ArXiv'2023 [[Paper](https://arxiv.org/abs/2312.02151)]
+[[Code](https://github.com/wgcban/mix-bt)]
+
+ Mix-BT Framework
+
+
+
(back to top)
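A recurring pattern in the self-supervised methods above is mixing inputs while tracking soft "virtual labels" over the instances in a batch, as in i-Mix. A hedged NumPy sketch of that bookkeeping (names and shapes are illustrative, not any paper's reference code):

```python
import numpy as np

def imix_batch(x, alpha=1.0, rng=None):
    """i-Mix-style input mixing: mix each sample with a shuffled partner
    and keep a soft label distribution over batch instances."""
    if rng is None:
        rng = np.random.default_rng()
    n = x.shape[0]
    lam = float(rng.beta(alpha, alpha))
    perm = rng.permutation(n)
    x_mix = lam * x + (1.0 - lam) * x[perm]
    # row i puts weight lam on instance i and (1 - lam) on instance perm[i];
    # += handles the fixed points where perm[i] == i
    y = np.zeros((n, n))
    y[np.arange(n), np.arange(n)] = lam
    y[np.arange(n), perm] += 1.0 - lam
    return x_mix, y, lam
```

The soft instance labels `y` then replace the one-hot instance targets in a contrastive (e.g. N-pair or InfoNCE-style) loss.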
+## Mixup for Semi-supervised Learning
+
+* **MixMatch: A Holistic Approach to Semi-Supervised Learning**
+*David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel*
+NIPS'2019 [[Paper](https://arxiv.org/abs/1905.02249)]
+[[Code](https://github.com/google-research/mixmatch)]
+
+ MixMatch Framework
+
+
+
+* **Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy**
+*Ke Sun, Bing Yu, Zhouchen Lin, Zhanxing Zhu*
+ArXiv'2019 [[Paper](https://arxiv.org/abs/1911.09307)]
+
+ Pani VAT Framework
+
+
+
+* **ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring**
+*David Berthelot, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, Colin Raffel*
+ICLR'2020 [[Paper](https://openreview.net/forum?id=HklkeR4KPB)]
+[[Code](https://github.com/google-research/remixmatch)]
+
+ ReMixMatch Framework
+
+
+
+* **DivideMix: Learning with Noisy Labels as Semi-supervised Learning**
+*Junnan Li, Richard Socher, Steven C.H. Hoi*
+ICLR'2020 [[Paper](https://arxiv.org/abs/2002.07394)]
+[[Code](https://github.com/LiJunnan1992/DivideMix)]
+
+ DivideMix Framework
+
+
+
+* **Epsilon Consistent Mixup: Structural Regularization with an Adaptive Consistency-Interpolation Tradeoff**
+*Vincent Pisztora, Yanglan Ou, Xiaolei Huang, Francesca Chiaromonte, Jia Li*
+ArXiv'2021 [[Paper](https://arxiv.org/abs/2104.09452)]
+
+ Epsilon Consistent Mixup (ϵmu) Framework
+
+
+
+* **Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning**
+*Yifan Zhang, Bryan Hooi, Dapeng Hu, Jian Liang, Jiashi Feng*
+NIPS'2021 [[Paper](https://arxiv.org/abs/2102.06605)]
+[[Code](https://github.com/vanint/core-tuning)]
+
+ Core-Tuning Framework
+
+
+
+* **MUM: Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection**
+*JongMok Kim, Jooyoung Jang, Seunghyeon Seo, Jisoo Jeong, Jongkeun Na, Nojun Kwak*
+CVPR'2022 [[Paper](https://arxiv.org/abs/2111.10958)]
+[[Code](https://github.com/jongmokkim/mix-unmix)]
+
+ MUM Framework
+
+
+
+* **Harnessing Hard Mixed Samples with Decoupled Regularizer**
+*Zicheng Liu, Siyuan Li, Ge Wang, Cheng Tan, Lirong Wu, Stan Z. Li*
+NeurIPS'2023 [[Paper](https://arxiv.org/abs/2203.10761)]
+[[Code](https://github.com/Westlake-AI/openmixup)]
+
+ DFixMatch Framework
+
+
+
+* **Manifold DivideMix: A Semi-Supervised Contrastive Learning Framework for Severe Label Noise**
+*Fahimeh Fooladgar, Minh Nguyen Nhat To, Parvin Mousavi, Purang Abolmaesumi*
+ArXiv'2023 [[Paper](https://arxiv.org/abs/2308.06861)]
+[[Code](https://github.com/Fahim-F/ManifoldDivideMix)]
+
+ MixEMatch Framework
+
+
+
+* **LaserMix for Semi-Supervised LiDAR Semantic Segmentation**
+*Lingdong Kong, Jiawei Ren, Liang Pan, Ziwei Liu*
+CVPR'2023 [[Paper](https://arxiv.org/abs/2207.00026)]
+[[Code](https://github.com/ldkong1205/LaserMix)] [[project](https://ldkong.com/LaserMix)]
+
+ LaserMix Framework
+
+
+
+* **Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation for Semi-Supervised Medical Image Segmentation**
+*Yuanbin Chen, Tao Wang, Hui Tang, Longxuan Zhao, Ruige Zong, Tao Tan, Xinlin Zhang, Tong Tong*
+ArXiv'2023 [[Paper](https://arxiv.org/abs/2308.16573)]
+
+ DCPA Framework
+
+
+
+* **Mixed Pseudo Labels for Semi-Supervised Object Detection**
+*Zeming Chen, Wenwei Zhang, Xinjiang Wang, Kai Chen, Zhi Wang*
+ArXiv'2023 [[Paper](https://arxiv.org/abs/2312.07006)]
+[[Code](https://github.com/czm369/mixpl)]
+
+ MixPL Framework
+
+
+
+* **PCLMix: Weakly Supervised Medical Image Segmentation via Pixel-Level Contrastive Learning and Dynamic Mix Augmentation**
+*Yu Lei, Haolun Luo, Lituan Wang, Zhenwei Zhang, Lei Zhang*
+ArXiv'2024 [[Paper](https://arxiv.org/abs/2405.06288)]
+[[Code](https://github.com/Torpedo2648/PCLMix)]
+
+ PCLMix Framework
+
+
+
+## Mixup for Regression
+
+* **RegMix: Data Mixing Augmentation for Regression**
+*Seong-Hyeon Hwang, Steven Euijong Whang*
+ArXiv'2021 [[Paper](https://arxiv.org/abs/2106.03374)]
+
+* **C-Mixup: Improving Generalization in Regression**
+*Huaxiu Yao, Yiping Wang, Linjun Zhang, James Zou, Chelsea Finn*
+NeurIPS'2022 [[Paper](https://arxiv.org/abs/2210.05775)]
+[[Code](https://github.com/huaxiuyao/C-Mixup)]
+
+* **ExtraMix: Extrapolatable Data Augmentation for Regression using Generative Models**
+*Kisoo Kwon, Kuhwan Jeong, Sanghyun Park, Sangha Park, Hoshik Lee, Seung-Yeon Kwak, Sungmin Kim, Kyunghyun Cho*
+OpenReview'2022 [[Paper](https://openreview.net/forum?id=NgEuFT-SIgI)]
+
+* **Anchor Data Augmentation**
+*Nora Schneider, Shirin Goshtasbpour, Fernando Perez-Cruz*
+NeurIPS'2023 [[Paper](https://arxiv.org/abs/2311.06965)]
+
+* **Rank-N-Contrast: Learning Continuous Representations for Regression**
+*Kaiwen Zha, Peng Cao, Jeany Son, Yuzhe Yang, Dina Katabi*
+NeurIPS'2023 [[Paper](https://arxiv.org/abs/2210.01189)]
+[[Code](https://github.com/kaiwenzha/Rank-N-Contrast)]
+
+* **Mixup Your Own Pairs**
+*Yilei Wu, Zijian Dong, Chongyao Chen, Wangchunshu Zhou, Juan Helen Zhou*
+ArXiv'2023 [[Paper](https://arxiv.org/abs/2309.16633)]
+[[Code](https://github.com/yilei-wu/supremix)]
+
+ SupReMix Framework
+
+
+
+* **Tailoring Mixup to Data using Kernel Warping functions**
+*Quentin Bouniot, Pavlo Mozharovskyi, Florence d'Alché-Buc*
+ArXiv'2023 [[Paper](https://arxiv.org/abs/2311.01434)]
+[[Code](https://github.com/ENSTA-U2IS/torch-uncertainty)]
+
+ Kernel Warping Mixup Framework
+
+
+
+* **OmniMixup: Generalize Mixup with Mixing-Pair Sampling Distribution**
+*Anonymous*
+OpenReview'2023 [[Paper](https://openreview.net/forum?id=6Uc7Fgwrsm)]
+
+* **Augment on Manifold: Mixup Regularization with UMAP**
+*Yousef El-Laham, Elizabeth Fons, Dillon Daudert, Svitlana Vyetrenko*
+ICASSP'2024 [[Paper](https://arxiv.org/abs/2210.01189)]
+
+## Mixup for Robustness
+
+* **Mixup as directional adversarial training**
+*Guillaume P. Archambault, Yongyi Mao, Hongyu Guo, Richong Zhang*
+NeurIPS'2019 [[Paper](https://arxiv.org/abs/1906.06875)]
+[[Code](https://github.com/mixupAsDirectionalAdversarial/mixup_as_dat)]
+
+* **Mixup Inference: Better Exploiting Mixup to Defend Adversarial Attacks**
+*Tianyu Pang, Kun Xu, Jun Zhu*
+ICLR'2020 [[Paper](https://arxiv.org/abs/1909.11515)]
+[[Code](https://github.com/P2333/Mixup-Inference)]
+
+* **Addressing Neural Network Robustness with Mixup and Targeted Labeling Adversarial Training**
+*Alfred Laugros, Alice Caplier, Matthieu Ospici*
+ECCV'2020 [[Paper](https://arxiv.org/abs/2008.08384)]
+
+* **Mixup Training as the Complexity Reduction**
+*Masanari Kimura*
+OpenReview'2021 [[Paper](https://openreview.net/forum?id=xvWZQtxI7qq)]
+
+* **Adversarial Vertex Mixup: Toward Better Adversarially Robust Generalization**
+*Saehyung Lee, Hyungyu Lee, Sungroh Yoon*
+CVPR'2020 [[Paper](https://arxiv.org/abs/2003.02484)]
+[[Code](https://github.com/Saehyung-Lee/cifar10_challenge)]
+
+* **MixACM: Mixup-Based Robustness Transfer via Distillation of Activated Channel Maps**
+*Muhammad Awais, Fengwei Zhou, Chuanlong Xie, Jiawei Li, Sung-Ho Bae, Zhenguo Li*
+NeurIPS'2021 [[Paper](https://arxiv.org/abs/2111.05073)]
+
+* **On the benefits of defining vicinal distributions in latent space**
+*Puneet Mangla, Vedant Singh, Shreyas Jayant Havaldar, Vineeth N Balasubramanian*
+CVPRW'2021 [[Paper](https://arxiv.org/abs/2003.06566)]
+
+## Low-level Vision
+
+* **Robust Image Denoising through Adversarial Frequency Mixup**
+*Donghun Ryou, Inju Ha, Hyewon Yoo, Dongwan Kim, Bohyung Han*
+CVPR'2024 [[Paper](https://openaccess.thecvf.com/content/CVPR2024/papers/Ryou_Robust_Image_Denoising_through_Adversarial_Frequency_Mixup_CVPR_2024_paper.pdf)]
+[[Code](https://github.com/dhryougit/AFM)]
+
+(back to top)
+
+## Mixup for Multi-modality
+
+* **MixGen: A New Multi-Modal Data Augmentation**
+*Xiaoshuai Hao, Yi Zhu, Srikar Appalaraju, Aston Zhang, Wanqian Zhang, Bo Li, Mu Li*
+arXiv'2023 [[Paper](https://arxiv.org/abs/2206.08358)]
+[[Code](https://github.com/amazon-research/mix-generation)]
+
+* **VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix**
+*Teng Wang, Wenhao Jiang, Zhichao Lu, Feng Zheng, Ran Cheng, Chengguo Yin, Ping Luo*
+ICML'2022 [[Paper](https://arxiv.org/abs/2206.08919)]
+
+* **Geodesic Multi-Modal Mixup for Robust Fine-Tuning**
+*Changdae Oh, Junhyuk So, Hoyoon Byun, YongTaek Lim, Minchul Shin, Jong-June Jeon, Kyungwoo Song*
+NeurIPS'2023 [[Paper](https://arxiv.org/abs/2203.03897)]
+[[Code](https://github.com/changdaeoh/multimodal-mixup)]
+
+* **PowMix: A Versatile Regularizer for Multimodal Sentiment Analysis**
+*Efthymios Georgiou, Yannis Avrithis, Alexandros Potamianos*
+arXiv'2023 [[Paper](https://arxiv.org/abs/2312.12334)]
+
+ PowMix Framework
+
+
+
## Analysis of Mixup
-* Sunil Thulasidasan, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, Sarah Michalak.
- - On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks. [[NIPS'2019](https://arxiv.org/abs/1905.11001)] [[code](https://github.com/paganpasta/onmixup)]
+* **On Mixup Training: Improved Calibration and Predictive Uncertainty for Deep Neural Networks**
+*Sunil Thulasidasan, Gopinath Chennupati, Jeff Bilmes, Tanmoy Bhattacharya, Sarah Michalak*
+NeurIPS'2019 [[Paper](https://arxiv.org/abs/1905.11001)]
+[[Code](https://github.com/paganpasta/onmixup)]
Framework
-* Luigi Carratino, Moustapha Cissé, Rodolphe Jenatton, Jean-Philippe Vert.
- - On Mixup Regularization. [[ArXiv'2020](https://arxiv.org/abs/2006.06049)]
+
+* **On Mixup Regularization**
+*Luigi Carratino, Moustapha Cissé, Rodolphe Jenatton, Jean-Philippe Vert*
+ArXiv'2020 [[Paper](https://arxiv.org/abs/2006.06049)]
Framework
-* Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, James Zou.
- - How Does Mixup Help With Robustness and Generalization? [[ICLR'2021](https://arxiv.org/abs/2010.04819)]
+
+* **How Does Mixup Help With Robustness and Generalization?**
+*Linjun Zhang, Zhun Deng, Kenji Kawaguchi, Amirata Ghorbani, James Zou*
+ICLR'2021 [[Paper](https://arxiv.org/abs/2010.04819)]
Framework
-* Muthu Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge.
- - Towards Understanding the Data Dependency of Mixup-style Training. [[ICLR'2022](https://openreview.net/pdf?id=ieNJYujcGDO)] [[code](https://github.com/2014mchidamb/Mixup-Data-Dependency)]
+
+* **Towards Understanding the Data Dependency of Mixup-style Training**
+*Muthu Chidambaram, Xiang Wang, Yuzheng Hu, Chenwei Wu, Rong Ge*
+ICLR'2022 [[Paper](https://openreview.net/pdf?id=ieNJYujcGDO)]
+[[Code](https://github.com/2014mchidamb/Mixup-Data-Dependency)]
Framework
-* Linjun Zhang, Zhun Deng, Kenji Kawaguchi, James Zou.
- - When and How Mixup Improves Calibration. [[ICML'2022](https://arxiv.org/abs/2102.06289)]
+
+* **When and How Mixup Improves Calibration**
+*Linjun Zhang, Zhun Deng, Kenji Kawaguchi, James Zou*
+ICML'2022 [[Paper](https://arxiv.org/abs/2102.06289)]
Framework
-* Zixuan Liu, Ziqiao Wang, Hongyu Guo, Yongyi Mao.
- - Over-Training with Mixup May Hurt Generalization. [[ICLR'2023](https://openreview.net/forum?id=JmkjrlVE-DG)]
+
+* **Over-Training with Mixup May Hurt Generalization**
+*Zixuan Liu, Ziqiao Wang, Hongyu Guo, Yongyi Mao*
+ICLR'2023 [[Paper](https://openreview.net/forum?id=JmkjrlVE-DG)]
Framework
-* Junsoo Oh, Chulhee Yun.
- - Provable Benefit of Mixup for Finding Optimal Decision Boundaries. [[ICML'2023](https://chulheeyun.github.io/publication/oh2023provable/)]
-* Deng-Bao Wang, Lanqing Li, Peilin Zhao, Pheng-Ann Heng, Min-Ling Zhang.
- - On the Pitfall of Mixup for Uncertainty Calibration. [[CVPR'2023](https://openaccess.thecvf.com/content/CVPR2023/papers/Wang_On_the_Pitfall_of_Mixup_for_Uncertainty_Calibration_CVPR_2023_paper.pdf)]
-* Hongjun Choi, Eun Som Jeon, Ankita Shukla, Pavan Turaga.
- - Understanding the Role of Mixup in Knowledge Distillation: An Empirical Study. [[WACV'2023](https://arxiv.org/abs/2211.03946)] [[code](https://github.com/hchoi71/mix-kd)]
-* Soyoun Won, Sung-Ho Bae, Seong Tae Kim.
- - Analyzing Effects of Mixed Sample Data Augmentation on Model Interpretability. [[arXiv'2023](https://arxiv.org/abs/2303.14608)]
+
+* **Provable Benefit of Mixup for Finding Optimal Decision Boundaries**
+*Junsoo Oh, Chulhee Yun*
+ICML'2023 [[Paper](https://chulheeyun.github.io/publication/oh2023provable/)]
+
+* **On the Pitfall of Mixup for Uncertainty Calibration**
+*Deng-Bao Wang, Lanqing Li, Peilin Zhao, Pheng-Ann Heng, Min-Ling Zhang*
+CVPR'2023 [[Paper](https://openaccess.thecvf.com/content/CVPR2023/papers/Wang_On_the_Pitfall_of_Mixup_for_Uncertainty_Calibration_CVPR_2023_paper.pdf)]
+
+* **Understanding the Role of Mixup in Knowledge Distillation: An Empirical Study**
+*Hongjun Choi, Eun Som Jeon, Ankita Shukla, Pavan Turaga*
+WACV'2023 [[Paper](https://arxiv.org/abs/2211.03946)]
+[[Code](https://github.com/hchoi71/mix-kd)]
+
+* **Analyzing Effects of Mixed Sample Data Augmentation on Model Interpretability**
+*Soyoun Won, Sung-Ho Bae, Seong Tae Kim*
+arXiv'2023 [[Paper](https://arxiv.org/abs/2303.14608)]
+
+* **Selective Mixup Helps with Distribution Shifts, But Not (Only) because of Mixup**
+*Damien Teney, Jindong Wang, Ehsan Abbasnejad*
+arXiv'2023 [[Paper](https://arxiv.org/abs/2305.16817)]
+
+(back to top)
+
+## Natural Language Processing
+
+* **Augmenting Data with Mixup for Sentence Classification: An Empirical Study**
+*Hongyu Guo, Yongyi Mao, Richong Zhang*
+arXiv'2019 [[Paper](https://arxiv.org/abs/1905.08941)]
+[[Code](https://github.com/dsfsi/textaugment)]
+
+* **Mixup-Transformer: Dynamic Data Augmentation for NLP Tasks**
+*Lichao Sun, Congying Xia, Wenpeng Yin, Tingting Liang, Philip S. Yu, Lifang He*
+COLING'2020 [[Paper](https://arxiv.org/abs/2010.02394)]
+
+* **Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution Data**
+*Lingkai Kong, Haoming Jiang, Yuchen Zhuang, Jie Lyu, Tuo Zhao, Chao Zhang*
+EMNLP'2020 [[Paper](https://arxiv.org/abs/2010.11506)]
+[[Code](https://github.com/Lingkai-Kong/Calibrated-BERT-Fine-Tuning)]
+
+* **Augmenting NLP Models using Latent Feature Interpolations**
+*Amit Jindal, Arijit Ghosh Chowdhury, Aniket Didolkar, Di Jin, Ramit Sawhney, Rajiv Ratn Shah*
+COLING'2020 [[Paper](https://aclanthology.org/2020.coling-main.611/)]
+
+* **MixText: Linguistically-informed Interpolation of Hidden Space for Semi-Supervised Text Classification**
+*Jiaao Chen, Zichao Yang, Diyi Yang*
+ACL'2020 [[Paper](https://arxiv.org/abs/2004.12239)]
+[[Code](https://github.com/GT-SALT/MixText)]
+
+* **TreeMix: Compositional Constituency-based Data Augmentation for Natural Language Understanding**
+*Le Zhang, Zichao Yang, Diyi Yang*
+NAACL'2022 [[Paper](https://arxiv.org/abs/2205.06153)]
+[[Code](https://github.com/magiccircuit/treemix)]
+
+* **STEMM: Self-learning with Speech-text Manifold Mixup for Speech Translation**
+*Qingkai Fang, Rong Ye, Lei Li, Yang Feng, Mingxuan Wang*
+ACL'2022 [[Paper](https://arxiv.org/abs/2203.10426)]
+[[Code](https://github.com/ictnlp/STEMM)]
+
+* **Enhancing Cross-lingual Transfer by Manifold Mixup**
+*Huiyun Yang, Huadong Chen, Hao Zhou, Lei Li*
+ICLR'2022 [[Paper](https://arxiv.org/abs/2205.04182)]
+[[Code](https://github.com/yhy1117/x-mixup)]
+
+## Graph Representation Learning
+
+* **Fused Gromov-Wasserstein Graph Mixup for Graph-level Classifications**
+*Xinyu Ma, Xu Chu, Yasha Wang, Yang Lin, Junfeng Zhao, Liantao Ma, Wenwu Zhu*
+NeurIPS'2023 [[Paper](https://arxiv.org/abs/2306.15963)]
+[[Code](https://github.com/ArthurLeoM/FGWMixup)]
+
+* **G-Mixup: Graph Data Augmentation for Graph Classification**
+*Xiaotian Han, Zhimeng Jiang, Ninghao Liu, Xia Hu*
+ICML'2022 [[Paper](https://arxiv.org/abs/2202.07179)]
(back to top)
@@ -711,11 +1297,6 @@ arXiv'2023 [[Paper](https://arxiv.org/abs/2308.03236)]
*Connor Shorten and Taghi Khoshgoftaar*
Journal of Big Data'2019 [[Paper](https://www.researchgate.net/publication/334279066_A_survey_on_Image_Data_Augmentation_for_Deep_Learning)]
-* **Survey: Image Mixing and Deleting for Data Augmentation**
-*Humza Naveed, Saeed Anwar, Munawar Hayat, Kashif Javed, Ajmal Mian*
-ArXiv'2021 [[Paper](https://arxiv.org/abs/2106.07085)]
-[[Code](https://github.com/humza909/survery-image-mixing-and-deleting-for-data-augmentation)]
-
* **An overview of mixing augmentation methods and augmentation strategies**
*Dominik Lewy and Jacek Mańdziuk*
Artificial Intelligence Review'2022 [[Paper](https://link.springer.com/article/10.1007/s10462-022-10227-z)]
@@ -729,6 +1310,22 @@ ArXiv'2022 [[Paper](https://arxiv.org/abs/2204.08610)]
ArXiv'2022 [[Paper](https://arxiv.org/abs/2212.10888)]
[[Code](https://github.com/ChengtaiCao/Awesome-Mix)]
+* **A Survey of Automated Data Augmentation for Image Classification: Learning to Compose, Mix, and Generate**
+*Tsz-Him Cheung, Dit-Yan Yeung*
+TNNLS'2023 [[Paper](https://ieeexplore.ieee.org/abstract/document/10158722)]
+
+* **Survey: Image Mixing and Deleting for Data Augmentation**
+*Humza Naveed, Saeed Anwar, Munawar Hayat, Kashif Javed, Ajmal Mian*
+Engineering Applications of Artificial Intelligence'2024 [[Paper](https://arxiv.org/abs/2106.07085)]
+
+## Benchmark
+
+* **OpenMixup: A Comprehensive Mixup Benchmark for Visual Classification**
+*Siyuan Li, Zedong Wang, Zicheng Liu, Di Wu, Cheng Tan, Weiyang Jin, Stan Z. Li*
+ArXiv'2022 [[Paper](https://arxiv.org/abs/2209.04851)]
+[[Code](https://github.com/Westlake-AI/openmixup)]
+
+(back to top)
## Contribution
diff --git a/docs/en/awesome_selfsup/MIM.md b/docs/en/awesome/awesome_selfsup.md
similarity index 100%
rename from docs/en/awesome_selfsup/MIM.md
rename to docs/en/awesome/awesome_selfsup.md
diff --git a/docs/en/awesome_mixups/Mixup_SSL.md b/docs/en/awesome_mixups/Mixup_SSL.md
deleted file mode 100644
index 6a28a60..0000000
--- a/docs/en/awesome_mixups/Mixup_SSL.md
+++ /dev/null
@@ -1,342 +0,0 @@
-# Awesome Mixup Methods for Self- and Semi-supervised Learning
-
-![PRs Welcome](https://img.shields.io/badge/PRs-Welcome-green) [![Awesome](https://awesome.re/badge.svg)](https://awesome.re) ![GitHub stars](https://img.shields.io/github/stars/Westlake-AI/openmixup?color=blue) ![GitHub forks](https://img.shields.io/github/forks/Westlake-AI/openmixup?color=yellow&label=Fork)
-
-**We summarize mixup methods proposed for self- and semi-supervised visual representation learning.**
-We are working on a survey of mixup methods. The list is on updating.
-
-* To find related papers and their relationships, check out [Connected Papers](https://www.connectedpapers.com/), which visualizes the academic field in a graph representation.
-* To export BibTeX citations of papers, check out [ArXiv](https://arxiv.org/) or [Semantic Scholar](https://www.semanticscholar.org/) of the paper for professional reference formats.
-
-## Table of Contents
-
- - [Mixup for Self-supervised Learning](#mixup-for-self-supervised-learning)
- - [Mixup for Semi-supervised Learning](#mixup-for-semi-supervised-learning)
- - [Contribution](#contribution)
- - [Related Project](#related-project)
-
-## Mixup for Self-supervised Learning
-
-* **MixCo: Mix-up Contrastive Learning for Visual Representation**
-*Sungnyun Kim, Gihun Lee, Sangmin Bae, Se-Young Yun*
-NIPSW'2020 [[Paper](https://arxiv.org/abs/2010.06300)]
-[[Code](https://github.com/Lee-Gihun/MixCo-Mixup-Contrast)]
-
- MixCo Framework
-
-
-
-* **Hard Negative Mixing for Contrastive Learning**
-*Yannis Kalantidis, Mert Bulent Sariyildiz, Noe Pion, Philippe Weinzaepfel, Diane Larlus*
-NIPS'2020 [[Paper](https://arxiv.org/abs/2010.01028)]
-[[Code](https://europe.naverlabs.com/mochi)]
-
- MoCHi Framework
-
-
-
-* **i-Mix A Domain-Agnostic Strategy for Contrastive Representation Learning**
-*Kibok Lee, Yian Zhu, Kihyuk Sohn, Chun-Liang Li, Jinwoo Shin, Honglak Lee*
-ICLR'2021 [[Paper](https://arxiv.org/abs/2010.08887)]
-[[Code](https://github.com/kibok90/imix)]
-
- i-Mix Framework
-
-
-
-* **Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation**
-*Zhiqiang Shen, Zechun Liu, Zhuang Liu, Marios Savvides, Trevor Darrell, Eric Xing*
-AAAI'2022 [[Paper](https://arxiv.org/abs/2003.05438)]
-[[Code](https://github.com/szq0214/Un-Mix)]
-
- Un-Mix Framework
-
-
-
-* **Beyond Single Instance Multi-view Unsupervised Representation Learning**
-*Xiangxiang Chu, Xiaohang Zhan, Xiaolin Wei*
-BMVC'2022 [[Paper](https://arxiv.org/abs/2011.13356)]
-
- BSIM Framework
-
-
-
-* **Improving Contrastive Learning by Visualizing Feature Transformation**
-*Rui Zhu, Bingchen Zhao, Jingen Liu, Zhenglong Sun, Chang Wen Chen*
-ICCV'2021 [[Paper](https://arxiv.org/abs/2108.02982)]
-[[Code](https://github.com/DTennant/CL-Visualizing-Feature-Transformation)]
-
- FT Framework
-
-
-
-* **Piecing and Chipping: An effective solution for the information-erasing view generation in Self-supervised Learning**
-*Jingwei Liu, Yi Gu, Shentong Mo, Zhun Sun, Shumin Han, Jiafeng Guo, Xueqi Cheng*
-OpenReview'2021 [[Paper](https://openreview.net/forum?id=DnG8f7gweH4)]
-
- PCEA Framework
-
-
-
-* **Contrast and Mix: Temporal Contrastive Video Domain Adaptation with Background Mixing**
-*Aadarsh Sahoo, Rutav Shah, Rameswar Panda, Kate Saenko, Abir Das*
-NIPS'2021 [[Paper](https://arxiv.org/abs/2011.02697)]
-[[Code](https://cvir.github.io/projects/comix)]
-
- CoMix Framework
-
-
-
-* **Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup**
-*Siyuan Li, Zicheng Liu, Di Wu, Zihan Liu, Stan Z. Li*
-Arxiv'2021 [[Paper](https://arxiv.org/abs/2111.15454)]
-[[Code](https://github.com/Westlake-AI/openmixup)]
-
- SAMix Framework
-
-
-
-* **MixSiam: A Mixture-based Approach to Self-supervised Representation Learning**
-*Xiaoyang Guo, Tianhao Zhao, Yutian Lin, Bo Du*
-OpenReview'2021 [[Paper](https://arxiv.org/abs/2111.02679)]
-
- MixSiam Framework
-
-
-
-* **Mix-up Self-Supervised Learning for Contrast-agnostic Applications**
-*Yichen Zhang, Yifang Yin, Ying Zhang, Roger Zimmermann*
-ICME'2021 [[Paper](https://arxiv.org/abs/2204.00901)]
-
- MixSSL Framework
-
-
-
-* **Towards Domain-Agnostic Contrastive Learning**
-*Vikas Verma, Minh-Thang Luong, Kenji Kawaguchi, Hieu Pham, Quoc V. Le*
-ICML'2021 [[Paper](https://arxiv.org/abs/2011.04419)]
-
- DACL Framework
-
-
-
-* **Center-wise Local Image Mixture For Contrastive Representation Learning**
-*Hao Li, Xiaopeng Zhang, Hongkai Xiong*
-BMVC'2021 [[Paper](https://arxiv.org/abs/2011.02697)]
-
- CLIM Framework
-
-
-
-* **Contrastive-mixup Learning for Improved Speaker Verification**
-*Xin Zhang, Minho Jin, Roger Cheng, Ruirui Li, Eunjung Han, Andreas Stolcke*
-ICASSP'2022 [[Paper](https://arxiv.org/abs/2202.10672)]
-
- Mixup Framework
-
-
-
-* **ProGCL: Rethinking Hard Negative Mining in Graph Contrastive Learning**
-*Jun Xia, Lirong Wu, Ge Wang, Jintao Chen, Stan Z.Li*
-ICML'2022 [[Paper](https://arxiv.org/abs/2110.02027)]
-[[Code](https://github.com/junxia97/ProGCL)]
-
- ProGCL Framework
-
-
-
-* **M-Mix: Generating Hard Negatives via Multi-sample Mixing for Contrastive Learning**
-*Shaofeng Zhang, Meng Liu, Junchi Yan, Hengrui Zhang, Lingxiao Huang, Pinyan Lu, Xiaokang Yang*
-KDD'2022 [[Paper](https://sherrylone.github.io/assets/KDD22_M-Mix.pdf)]
-[[Code](https://github.com/Sherrylone/m-mix)]
-
- M-Mix Framework
-
-
-
-* **A Simple Data Mixing Prior for Improving Self-Supervised Learning**
-*Sucheng Ren, Huiyu Wang, Zhengqi Gao, Shengfeng He, Alan Yuille, Yuyin Zhou, Cihang Xie*
-CVPR'2022 [[Paper](https://arxiv.org/abs/2206.07692)]
-[[Code](https://github.com/oliverrensu/sdmp)]
-
- SDMP Framework
-
-
-
-* **On the Importance of Asymmetry for Siamese Representation Learning**
-*Xiao Wang, Haoqi Fan, Yuandong Tian, Daisuke Kihara, Xinlei Chen*
-CVPR'2022 [[Paper](https://arxiv.org/abs/2204.00613)]
-[[Code](https://github.com/facebookresearch/asym-siam)]
-
- ScaleMix Framework
-
-
-
-* **VLMixer: Unpaired Vision-Language Pre-training via Cross-Modal CutMix**
-*Teng Wang, Wenhao Jiang, Zhichao Lu, Feng Zheng, Ran Cheng, Chengguo Yin, Ping Luo*
-ICML'2022 [[Paper](https://arxiv.org/abs/2206.08919)]
-
- VLMixer Framework
-
-
-
-* **CropMix: Sampling a Rich Input Distribution via Multi-Scale Cropping**
-*Junlin Han, Lars Petersson, Hongdong Li, Ian Reid*
-ArXiv'2022 [[Paper](https://arxiv.org/abs/2205.15955)]
-[[Code](https://github.com/JunlinHan/CropMix)]
-
- CropMix Framework
-
-
-
-* **- i-MAE: Are Latent Representations in Masked Autoencoders Linearly Separable**
-*Kevin Zhang, Zhiqiang Shen*
-ArXiv'2022 [[Paper](https://arxiv.org/abs/2210.11470)]
-[[Code](https://github.com/vision-learning-acceleration-lab/i-mae)]
-
- i-MAE Framework
-
-
-
-* **MixMAE: Mixed and Masked Autoencoder for Efficient Pretraining of Hierarchical Vision Transformers**
-*Jihao Liu, Xin Huang, Jinliang Zheng, Yu Liu, Hongsheng Li*
-CVPR'2023 [[Paper](https://arxiv.org/abs/2205.13137)]
-[[Code](https://github.com/Sense-X/MixMIM)]
-
- MixMAE Framework
-
-
-
-* **Mixed Autoencoder for Self-supervised Visual Representation Learning**
-*Kai Chen, Zhili Liu, Lanqing Hong, Hang Xu, Zhenguo Li, Dit-Yan Yeung*
-CVPR'2023 [[Paper](https://arxiv.org/abs/2303.17152)]
-
- MixedAE Framework
-
-
-
-* **Inter-Instance Similarity Modeling for Contrastive Learning**
-*Chengchao Shen, Dawei Liu, Hao Tang, Zhe Qu, Jianxin Wang*
-ArXiv'2023 [[Paper](https://arxiv.org/abs/2306.12243)]
-[[Code](https://github.com/visresearch/patchmix)]
-
- PatchMix Framework
-
-
-
-(back to top)
-
-## Mixup for Semi-supervised Learning
-
-* **MixMatch: A Holistic Approach to Semi-Supervised Learning**
-*David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, Colin Raffel*
-NIPS'2019 [[Paper](https://arxiv.org/abs/1905.02249)]
-[[Code](https://github.com/google-research/mixmatch)]
-
- MixMatch Framework
-
-
-
-* **Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy**
-*Ke Sun, Bing Yu, Zhouchen Lin, Zhanxing Zhu*
-ArXiv'2019 [[Paper](https://arxiv.org/abs/1911.09307)]
-
- Pani VAT Framework
-
-
-
-* **ReMixMatch: Semi-Supervised Learning with Distribution Matching and Augmentation Anchoring**
-*David Berthelot, dberth@google.com, Nicholas Carlini, Ekin D. Cubuk, Alex Kurakin, Kihyuk Sohn, Han Zhang, Colin Raffel*
-ICLR'2020 [[Paper](https://openreview.net/forum?id=HklkeR4KPB)]
-[[Code](https://github.com/google-research/remixmatch)]
-
- ReMixMatch Framework
-
-
-
-* **DivideMix: Learning with Noisy Labels as Semi-supervised Learning**
-*Junnan Li, Richard Socher, Steven C.H. Hoi*
-ICLR'2020 [[Paper](https://arxiv.org/abs/2002.07394)]
-[[Code](https://github.com/LiJunnan1992/DivideMix)]
-
- DivideMix Framework
-
-
-
-* **Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning**
-*Yifan Zhang, Bryan Hooi, Dapeng Hu, Jian Liang, Jiashi Feng*
-NIPS'2021 [[Paper](https://arxiv.org/abs/2102.06605)]
-[[Code](https://github.com/vanint/core-tuning)]
-
- Core-Tuning Framework
-
-
-
-* **MUM : Mix Image Tiles and UnMix Feature Tiles for Semi-Supervised Object Detection**
-*JongMok Kim, Jooyoung Jang, Seunghyeon Seo, Jisoo Jeong, Jongkeun Na, Nojun Kwak*
-CVPR'2022 [[Paper](https://user-images.githubusercontent.com/44519745/225082975-4143e7f5-8873-433c-ab6f-6caa615f7120.png)]
-[[Code](https://github.com/jongmokkim/mix-unmix)]
-
- MUM Framework
-
-
-
-* **Decoupled Mixup for Data-efficient Learning**
-*Zicheng Liu, Siyuan Li, Ge Wang, Cheng Tan, Lirong Wu, Stan Z. Li*
-NIPS'2023 [[Paper](https://arxiv.org/abs/2203.10761)]
-[[Code](https://github.com/Westlake-AI/openmixup)]
-
- DFixMatch Framework
-
-
-
-* **Manifold DivideMix: A Semi-Supervised Contrastive Learning Framework for Severe Label Noise**
-*Fahimeh Fooladgar, Minh Nguyen Nhat To, Parvin Mousavi, Purang Abolmaesumi*
-Arxiv'2023 [[Paper](https://arxiv.org/abs/2308.06861)]
-[[Code](https://github.com/Fahim-F/ManifoldDivideMix)]
-
- MixEMatch Framework
-
-
-
-* **LaserMix for Semi-Supervised LiDAR Semantic Segmentation**
-*Lingdong Kong, Jiawei Ren, Liang Pan, Ziwei Liu*
-CVPR'2023 [[Paper](https://arxiv.org/abs/2207.00026)]
-[[Code](https://github.com/ldkong1205/LaserMix)] [[project](https://ldkong.com/LaserMix)]
-
- LaserMix Framework
-
-
-
-* **Dual-Decoder Consistency via Pseudo-Labels Guided Data Augmentation for Semi-Supervised Medical Image Segmentation**
-*Yuanbin Chen, Tao Wang, Hui Tang, Longxuan Zhao, Ruige Zong, Tao Tan, Xinlin Zhang, Tong Tong*
-ArXiv'2023 [[Paper](https://arxiv.org/abs/2308.16573)]
-
- DCPA Framework
-
-
-
-(back to top)
-
-## Contribution
-
-Feel free to send [pull requests](https://github.com/Westlake-AI/openmixup/pulls) to add more links with the following Markdown format. Notice that the Abbreviation, the code link, and the figure link are optional attributes. Current contributors include: Siyuan Li ([@Lupin1998](https://github.com/Lupin1998)) and Zicheng Liu ([@pone7](https://github.com/pone7)).
-
-```markdown
-* **TITLE**
-*AUTHER*
-PUBLISH'YEAR [[Paper](link)] [[Code](link)]
-
- ABBREVIATION Framework
-
-
-```
-
-## Related Project
-
-- [Awesome-Mixup](https://github.com/Westlake-AI/Awesome-Mixup): Awesome List of Mixup Augmentation Papers for Visual Representation Learning.
-- [Awesome-Mix](https://github.com/ChengtaiCao/Awesome-Mix): An awesome list of papers for `A Survey of Mix-based Data Augmentation: Taxonomy, Methods, Applications, and Explainability, we categorize them based on our proposed taxonomy`.
-- [survery-image-mixing-and-deleting-for-data-augmentation](https://github.com/humza909/survery-image-mixing-and-deleting-for-data-augmentation): An awesome list of papers for `Survey: Image Mixing and Deleting for Data Augmentation`.
-- [awesome-mixup](https://github.com/demoleiwang/awesome-mixup): A collection of awesome papers about mixup.
-- [awesome-mixed-sample-data-augmentation](https://github.com/JasonZhang156/awesome-mixed-sample-data-augmentation): A collection of awesome things about mixed sample data augmentation.
-- [data-augmentation-review](https://github.com/AgaMiko/data-augmentation-review): List of useful data augmentation resources.