
Benchmarks used for `HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO`


For HPOBench: A Collection of Reproducible Multi-Fidelity Benchmark Problems for HPO we used the following existing community benchmarks:

| family | container version | benchmarks |
| --- | --- | --- |
| nas.nasbench_201 | 0.0.5 | Cifar10ValidNasBench201BenchmarkOriginal, Cifar100NasBench201BenchmarkOriginal, ImageNetNasBench201BenchmarkOriginal |
| nas.nasbench_101 | 0.0.4 | NASCifar10ABenchmark, NASCifar10BBenchmark, NASCifar10CBenchmark |
| nas.tabular_benchmarks | 0.0.5 | SliceLocalizationBenchmarkOriginal, ProteinStructureBenchmarkOriginal, NavalPropulsionBenchmarkOriginal, ParkinsonsTelemonitoringBenchmarkOriginal |
| nas.nasbench_1shot1 | 0.0.4 | NASBench1shot1SearchSpace1Benchmark, NASBench1shot1SearchSpace2Benchmark, NASBench1shot1SearchSpace3Benchmark |
| ml.pybnn | 0.0.4 | BNNOnProteinStructure, BNNOnYearPrediction |
| rl.cartpole | 0.0.4 | CartpoleReduced |
| surrogates.paramnet_benchmark | 0.0.4 | ParamNetReducedAdultOnTimeBenchmark, ParamNetReducedHiggsOnTimeBenchmark, ParamNetReducedLetterOnTimeBenchmark, ParamNetReducedMnistOnTimeBenchmark, ParamNetReducedOptdigitsOnTimeBenchmark, ParamNetReducedPokerOnTimeBenchmark |
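
For reference, a benchmark from the table above can be instantiated through its container and queried at a given fidelity. The following is a minimal sketch assuming the general HPOBench API (`get_configuration_space` / `objective_function`); the module path is derived from the family name in the table, and the fidelity key (`budget` here) is an assumption that differs between benchmark families:

```python
# Minimal sketch of running one of the containerized benchmarks listed above.
# Assumes HPOBench is installed and a container runtime (Singularity) is
# available. The fidelity key "budget" is an assumption; check the
# benchmark's fidelity space for the exact name.
from hpobench.container.benchmarks.nas.tabular_benchmarks import SliceLocalizationBenchmarkOriginal

b = SliceLocalizationBenchmarkOriginal(rng=1)     # pulls the container on first use
cs = b.get_configuration_space(seed=1)            # ConfigSpace search space
config = cs.sample_configuration()                # draw a random configuration
result = b.objective_function(configuration=config,
                              fidelity={"budget": 100},  # assumed fidelity name
                              rng=1)
print(result["function_value"], result["cost"])
```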

and the following new benchmarks:

| family | container version | benchmarks |
| --- | --- | --- |
| ml.tabular_benchmark | - | TabularBenchmark for model in ['lr', 'svm', 'xgb', 'rf', 'nn'] |
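
As a sketch, this benchmark family is parameterized by the model type listed above plus a dataset identifier; the `task_id` value below is a placeholder, and the exact constructor signature is an assumption to verify against the HPOBench `ml` package:

```python
# Minimal sketch of the tabular ML benchmark family; `model` selects one of
# 'lr', 'svm', 'xgb', 'rf', 'nn'. The task_id is a placeholder OpenML task
# id and the keyword names are assumptions.
from hpobench.benchmarks.ml.tabular_benchmark import TabularBenchmark

b = TabularBenchmark(model="xgb", task_id=31)     # placeholder OpenML task id
config = b.get_configuration_space(seed=1).sample_configuration()
result = b.objective_function(configuration=config, rng=1)
print(result["function_value"])
```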

We host all code to recreate the experiments in this repo: https://github.com/automl/HPOBenchExperimentUtils