Forecaster
L2: 1e-06
Linear Window: 0
Linear Shared Weights: False
RevIN: False
Decomposition: False
/home/vdesai/spacetimeformer/spacetimeformer/spacetimeformer_model/nn/decoder.py:43: UserWarning: The implementation of Local Cross Attn with exogenous variables
makes an unintuitive assumption about variable order. Please see
spacetimeformer_model.nn.decoder.DecoderLayer source code and comments
warnings.warn(
GlobalSelfAttn: AttentionLayer(
(inner_attention): PerformerAttention(
(kernel_fn): ReLU()
)
(query_projection): Linear(in_features=200, out_features=800, bias=True)
(key_projection): Linear(in_features=200, out_features=800, bias=True)
(value_projection): Linear(in_features=200, out_features=800, bias=True)
(out_projection): Linear(in_features=800, out_features=200, bias=True)
(dropout_qkv): Dropout(p=0.0, inplace=False)
)
GlobalCrossAttn: AttentionLayer(
(inner_attention): PerformerAttention(
(kernel_fn): ReLU()
)
(query_projection): Linear(in_features=200, out_features=800, bias=True)
(key_projection): Linear(in_features=200, out_features=800, bias=True)
(value_projection): Linear(in_features=200, out_features=800, bias=True)
(out_projection): Linear(in_features=800, out_features=200, bias=True)
(dropout_qkv): Dropout(p=0.0, inplace=False)
)
LocalSelfAttn: AttentionLayer(
(inner_attention): PerformerAttention(
(kernel_fn): ReLU()
)
(query_projection): Linear(in_features=200, out_features=800, bias=True)
(key_projection): Linear(in_features=200, out_features=800, bias=True)
(value_projection): Linear(in_features=200, out_features=800, bias=True)
(out_projection): Linear(in_features=800, out_features=200, bias=True)
(dropout_qkv): Dropout(p=0.0, inplace=False)
)
LocalCrossAttn: AttentionLayer(
(inner_attention): PerformerAttention(
(kernel_fn): ReLU()
)
(query_projection): Linear(in_features=200, out_features=800, bias=True)
(key_projection): Linear(in_features=200, out_features=800, bias=True)
(value_projection): Linear(in_features=200, out_features=800, bias=True)
(out_projection): Linear(in_features=800, out_features=200, bias=True)
(dropout_qkv): Dropout(p=0.0, inplace=False)
)
Using Embedding: spatio-temporal
Time Emb Dim: 6
Space Embedding: True
Time Embedding: True
Val Embedding: True
Given Embedding: True
Null Value: -1
Pad Value: -1
Reconstruction Dropout: Timesteps 0.05, Standard 0.1, Seq (max len = 5) 0.2, Skip All Drop 1.0
*** Spacetimeformer (v1.5) Summary: ***
Model Dim: 200
FF Dim: 800
Enc Layers: 3
Dec Layers: 3
Embed Dropout: 0.2
FF Dropout: 0.3
Attn Out Dropout: 0.0
Attn Matrix Dropout: 0.0
QKV Dropout: 0.0
L2 Coeff: 1e-06
Warmup Steps: 0
Normalization Scheme: batch
Attention Time Windows: 1
Shifted Time Windows: False
Position Emb Type: abs
Recon Loss Imp: 0.0
/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/loops/utilities.py:91: PossibleUserWarning: max_epochs was not set. Setting it to 1000 epochs. To train without an epoch limit, set max_epochs=-1.
rank_zero_warn(
GPU available: True, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py:1823: PossibleUserWarning: GPU available but not used. Set accelerator and devices using Trainer(accelerator='gpu', devices=2).
rank_zero_warn(
Trainer(limit_val_batches=1.0) was configured so 100% of the batches will be used.
| Name | Type | Params
0 | spacetimeformer | Spacetimeformer | 13.5 M
13.5 M Trainable params
0 Non-trainable params
13.5 M Total params
54.080 Total estimated model params size (MB)
Sanity Checking DataLoader 0: 0%| | 0/2 [00:01<?, ?it/s]
Traceback (most recent call last):
File "train.py", line 181, in
trainer.fit(model, datamodule=data_module)
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 771, in fit
self._call_and_handle_interrupt(
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 724, in _call_and_handle_interrupt
return trainer_fn(*args, **kwargs)
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 812, in _fit_impl
results = self._run(model, ckpt_path=self.ckpt_path)
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1237, in _run
results = self._run_stage()
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1324, in _run_stage
return self._run_train()
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1346, in _run_train
self._run_sanity_check()
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1414, in _run_sanity_check
val_loop.run()
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
self.advance(*args, **kwargs)
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 153, in advance
dl_outputs = self.epoch_loop.run(self._data_fetcher, dl_max_batches, kwargs)
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 204, in run
self.advance(*args, **kwargs)
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 127, in advance
output = self._evaluation_step(**kwargs)
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 222, in _evaluation_step
output = self.trainer._call_strategy_hook("validation_step", *kwargs.values())
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1766, in _call_strategy_hook
output = fn(*args, **kwargs)
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 344, in validation_step
return self.model.validation_step(*args, **kwargs)
File "/home/vdesai/spacetimeformer/spacetimeformer/forecaster.py", line 256, in validation_step
stats = self.step(batch, train=False)
File "/home/vdesai/spacetimeformer/spacetimeformer/spacetimeformer_model/spacetimeformer_model.py", line 183, in step
loss_dict = self.compute_loss(
File "/home/vdesai/spacetimeformer/spacetimeformer/spacetimeformer_model/spacetimeformer_model.py", line 228, in compute_loss
forecast_out, recon_out, (logits, labels) = self(
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/vdesai/spacetimeformer/spacetimeformer/forecaster.py", line 204, in forward
preds, *extra = self.forward_model_pass(
File "/home/vdesai/spacetimeformer/spacetimeformer/spacetimeformer_model/spacetimeformer_model.py", line 286, in forward_model_pass
forecast_output, recon_output, (logits, labels), attn = self.spacetimeformer(
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/vdesai/spacetimeformer/spacetimeformer/spacetimeformer_model/nn/model.py", line 266, in forward
enc_vt_emb, enc_s_emb, enc_var_idxs, enc_mask_seq = self.enc_embedding(
File "/home/vdesai/spacetimeformer/spacetimeformer/spacetimeformer_model/nn/embed.py", line 91, in call
return emb(y=y, x=x)
File "/home/vdesai/spacetimeformer/spacetimeformer/spacetimeformer_model/nn/embed.py", line 239, in spatio_temporal_embed
space_emb = self.space_emb(var_idx)
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/torch/nn/modules/sparse.py", line 158, in forward
return F.embedding(
File "/home/vdesai/anaconda3/envs/bats/lib/python3.8/site-packages/torch/nn/functional.py", line 2183, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
IndexError: index out of range in self
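For what it's worth, the failing call is an nn.Embedding lookup: torch raises exactly this IndexError when the embedding table is asked for an index at or past num_embeddings (or a negative one). A minimal sketch of that failure mode, with made-up sizes rather than my actual config:

```python
import torch
import torch.nn as nn

# Hypothetical sizes, for illustration only: suppose space_emb was
# built for 3 variables...
num_model_vars = 3
d_model = 8
space_emb = nn.Embedding(num_embeddings=num_model_vars, embedding_dim=d_model)

# ...but the batch carries variable indices 0..4 (5 variables).
var_idx = torch.arange(5)

space_emb(var_idx)  # IndexError: index out of range in self
```

So my guess is that self.space_emb is sized from the number of variables the model was configured for, while my data is producing var_idx values outside that range (more columns than expected, or perhaps the -1 null/pad value leaking into the index tensor?).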
Any help would be appreciated.
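(Side note: I know the run is also falling back to CPU. Fixing that per the PossibleUserWarning above would look roughly like the snippet below, with devices=2 being the warning's own guess at my GPU count; it seems unrelated to the crash.)

```python
import pytorch_lightning as pl

# Opt in to the GPU explicitly, as the warning suggests; adjust
# devices to the actual number of GPUs on the machine.
trainer = pl.Trainer(accelerator="gpu", devices=2)
```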