From 427174b474fd4a2894c72df66f4f5a90459cea2f Mon Sep 17 00:00:00 2001
From: Markus Bilz
Date: Mon, 30 Oct 2023 21:04:37 +0100
Subject: [PATCH] remove outdated readme + fix test

---
 README.md                                | 65 ------------------------
 src/otc/models/transformer_classifier.py |  2 +-
 2 files changed, 1 insertion(+), 66 deletions(-)

diff --git a/README.md b/README.md
index 78ea79a9..6b7f1d8e 100644
--- a/README.md
+++ b/README.md
@@ -12,71 +12,6 @@ This repository contains all the resources for my thesis on option trade classif
 | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------- |
 | See [`references`](https://github.com/KarelZe/thesis/tree/main/references) folder. Download obsidian from [obsidian.md](https://obsidian.md/) to easily browse the notes. | Link to [tasks and mile stones](https://github.com/KarelZe/thesis/milestones?direction=asc&sort=due_date&state=open). | Link to [weights & biases](https://wandb.ai/fbv/thesis) (requires login). | Link to [gcp](https://console.cloud.google.com/welcome?project=flowing-mantis-239216) (requires login), and to [bwHPC](https://bwhpc.de/) (requires login). | see [`releases`](https://github.com/KarelZe/thesis/releases/). |
-
-## How to use
-
-Locally, or on [Jupyter cluster](https://uc2-jupyter.scc.kit.edu/jhub/):
-```shell
-
-# clone project
-git clone https://github.com/KarelZe/thesis.git --depth=1
-
-# set up consts for wandb + gcp
-nano prod.env
-
-# set up virtual env and install requirements
-cd thesis
-
-python -m venv thesis
-source thesis/bin/activate
-python -m pip install .
-
-# run training script
-python src/otc/models/train_model.py --trials=100 --seed=42 --model=gbm --dataset=fbv/thesis/ise_supervised_log_standardized_clipped:latest --features=classical-size --pretrain
-2022-11-18 10:25:50,920 - __main__ - INFO - Connecting to weights & biases. Downloading artifacts. 📦
-2022-11-18 10:25:56,180 - __main__ - INFO - Start loading artifacts locally. 🐢
-2022-11-18 10:26:07,562 - __main__ - INFO - Start with study. 🦄
-...
-```
-
-Using [`SLURM`](https://wiki.bwhpc.de/e/BwUniCluster2.0/Slurm) on [bwHPC](https://bwhpc.de/).
-Set up `submit_thesis_gpu.sh` to batch job:
-```shell
-#!/bin/bash
-#SBATCH --job-name=gpu
-#SBATCH --partition=gpu_8    # See: https://wiki.bwhpc.de/e/BwUniCluster2.0/Batch_Queues
-#SBATCH --gres=gpu:1         # number of requested GPUs in node allocated by job
-#SBATCH --time=10:00         # wall-clock time limit e. g. 10 minutes. Max is 48 hours.
-#SBATCH --mem=128000         # memory in mbytes
-#SBATCH --nodes=1            # no of nodes requested
-#SBATCH --mail-type=ALL
-#SBATCH --mail-user=uxxxx@student.kit.edu
-
-# Set up modules
-module purge                 # Unload all currently loaded modules.
-module load devel/cuda/10.2  # Load required modules e. g., cuda 10.2
-module load compiler/gnu/11.2
-
-# start venv and run script
-cd thesis
-
-source thesis/bin/activate   # Activate venv with dependencies already installed
-
-python -u src/otc/models/train_model.py ...
-```
-
-Submit job:
-```shell
-# submit job to queue
-sbatch ./submit_thesis_gpu.sh
-Submitted batch job 21614924
-
-# interact with job
-scontrol show job
-
-# view job logs
-nano slurm-21614924.out
-```
-
 ## Development
 
 ### Set up git pre-commit hooks 🐙
 
diff --git a/src/otc/models/transformer_classifier.py b/src/otc/models/transformer_classifier.py
index 356205bc..c512f997 100644
--- a/src/otc/models/transformer_classifier.py
+++ b/src/otc/models/transformer_classifier.py
@@ -104,7 +104,7 @@ def _checkpoint_restore(self) -> None:
         """Restore weights and biases from checkpoint."""
         print("restore from checkpoint.")
         cp = Path("checkpoints/").glob("tf_clf*")
-        self.clf.load_state_dict(torch.load(cp[0]))
+        self.clf.load_state_dict(torch.load(next(cp)))
 
     def array_to_dataloader_finetune(
         self,
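
The one-line fix above works because `pathlib.Path.glob` returns a lazy generator, not a list, so subscripting it with `cp[0]` raises a `TypeError`; `next(cp)` pulls the first match from the generator instead. A minimal sketch reproducing both the bug and the fix (the checkpoint filename and temporary directory are hypothetical stand-ins for the real `checkpoints/` directory):

```python
from pathlib import Path
from tempfile import TemporaryDirectory

with TemporaryDirectory() as tmp:
    # Hypothetical checkpoint file matching the "tf_clf*" pattern.
    Path(tmp, "tf_clf_epoch1.ptx").touch()

    cp = Path(tmp).glob("tf_clf*")
    try:
        cp[0]  # old code: a generator is not subscriptable
    except TypeError:
        print("TypeError: glob() yields a generator, not a list")

    # fixed code: take the first match from the generator
    first = next(Path(tmp).glob("tf_clf*"))
    print(first.name)  # → tf_clf_epoch1.ptx
```

Note that `next(cp)` raises `StopIteration` when no checkpoint matches the pattern; `next(cp, None)` with an explicit `None` check would fail more gracefully, though the patch does not go that far.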