Multi-fidelity tabular benchmark compatible additions #112
Conversation
* Scikit-learn showed a warning every time the benchmark was created, stating that the surrogate was created with a different scikit-learn version. We cannot suppress it the way we suppress other deprecation warnings, since scikit-learn makes sure these warnings are not suppressed. (!)
* Increase benchmark version
* Add a reduced configuration space for the paramnet benchmark
* Codestyle: Add new line at end of file
SVM Surrogate Benchmark
* Initial commit of a SVM Surrogate Benchmark.
* Remove some wrong information from the docstrings (also Paramnet Benchmark).
* Change an old test case which tested for actual runtime (SVM): the test compared the actual time needed for running a SVM configuration and was therefore often failing on different machines.
* SVM Surrogate: Add more references + min dataset fraction
* SVM: Add Container Recipe + ClientInterface
* Readme + small fix in client
* Test different singularity versions
* Print Container Version + HPOBench Version in log
Improve Container Integration
* Container-Client-Communication: The container does not read the hpobenchrc file anymore. We now bind the directories (socket, data, ...) to fixed paths in the container.
* Increase version number for each container
* Equalize client abstract benchmark function calls
* Update logging procedure on client and in container
* Update recipe files: Add a clean-up step to the recipe to reduce the size of the containers
* Add a container configuration test
* Update recipe template + fix typo
Could you please rebase to development?
Hey Neeratyoy, thanks for your work! Couldn't we also solve it via inheritance? I am thinking of having a BaseMLBenchmark() and creating new classes which override the corresponding fidelity space.
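A minimal sketch of that inheritance-based design, under the assumption that a shared base class exists; all class names and fidelity parameters below are illustrative, not the actual HPOBench API, and plain dicts stand in for ConfigSpace objects:

```python
class BaseMLBenchmark:
    """Shared benchmark logic; subclasses only override the fidelity space.
    (Illustrative sketch, not the actual HPOBench class hierarchy.)"""

    def get_fidelity_space(self):
        raise NotImplementedError

    def objective_function(self, config, fidelity):
        # Shared evaluation logic would live here; this stub just echoes
        # its inputs so the control flow is visible.
        return {"config": config, "fidelity": fidelity}


class SVMBenchmarkDatasetFraction(BaseMLBenchmark):
    """Variant whose fidelity is the fraction of the training set."""

    def get_fidelity_space(self):
        # Real code would return a ConfigSpace.ConfigurationSpace.
        return {"dataset_fraction": (0.1, 1.0)}


class SVMBenchmarkSubsample(BaseMLBenchmark):
    """Variant with a (hypothetical) subsampling fidelity instead."""

    def get_fidelity_space(self):
        return {"subsample": (0.1, 1.0)}
```

Each fidelity variant becomes its own class, so the fidelity space is fixed at class level rather than chosen at construction time.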
Since the other benchmarks don't use the […]
Sure, I think we should. I went with that design since I believe there are checks on the abstract class function signatures. And, to avoid potentially affecting containerization, I thought of changing the abstract class definition, where I presumed a […]. As for the multiple class definitions for different fidelity spaces: sure, that is an alternate solution. However, I felt it makes the code more verbose and might make things more brittle for code using these classes. Hence, I went with the parameterization approach. In any case, these are probably design choices more than functionality, so we just need to come to an agreement w.r.t. the scope of the package. Happy to discuss!
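For contrast, a minimal sketch of the parameterization approach, keeping `fidelity_choice=None` as the backwards-compatible default as the PR description states; the fidelity names and ranges here are illustrative stand-ins, not the PR's actual spaces:

```python
class SVMBenchmark:
    """One class; the fidelity space is selected via a constructor
    parameter instead of via subclassing. (Illustrative sketch.)"""

    def __init__(self, fidelity_choice=None):
        # fidelity_choice=None keeps the pre-existing default behaviour,
        # so code written before this parameter existed keeps working.
        self.fidelity_choice = fidelity_choice

    def get_fidelity_space(self):
        # Real code would build a ConfigSpace.ConfigurationSpace here.
        if self.fidelity_choice is None:
            return {"dataset_fraction": (0.1, 1.0)}  # backwards-compatible default
        if self.fidelity_choice == "subsample":
            return {"subsample": (0.1, 1.0)}
        raise ValueError(f"unknown fidelity_choice: {self.fidelity_choice!r}")
```

The trade-off discussed above is visible here: one class covers all variants with less boilerplate, but the valid choices live in runtime branches rather than in the type system.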
@PhMueller shall we close this?
Deprecated. |
Main changes/updates (wrt ML benchmarks primarily):
* fidelity_choice=None, which shouldn't break existing code
* ConfigurationSpace and just the model definition

TODO: