Why is the train loss higher than the validation and test loss? #116

Open
bhagiradh opened this issue Aug 8, 2024 · 1 comment

Comments

@bhagiradh

I was running the supervised PatchTST code on ETTh2 in Colab with a T4 GPU.

```
Args in experiment:
Namespace(random_seed=2021, is_training=1, model_id='336_96', model='PatchTST', data='ETTh2', root_path='./dataset//all_six_datasets/ETT-small', data_path='ETTh2.csv', features='M', target='OT', freq='h', checkpoints='./checkpoints/', seq_len=336, label_len=48, pred_len=96, fc_dropout=0.3, head_dropout=0.0, patch_len=16, stride=8, padding_patch='end', revin=1, affine=0, subtract_last=0, decomposition=0, kernel_size=25, individual=0, embed_type=0, enc_in=7, dec_in=7, c_out=7, d_model=16, n_heads=4, e_layers=3, d_layers=1, d_ff=128, moving_avg=25, factor=1, distil=True, dropout=0.3, embed='timeF', activation='gelu', output_attention=False, do_predict=False, num_workers=10, itr=1, train_epochs=100, batch_size=128, patience=100, learning_rate=0.0001, des='Exp', loss='mse', lradj='type3', pct_start=0.3, use_amp=False, use_gpu=True, gpu=0, use_multi_gpu=False, devices='0,1,2,3', test_flop=False)
Use GPU: cuda:0

start training : 336_96_PatchTST_ETTh2_ftM_sl336_ll48_pl96_dm16_nh4_el3_dl1_df128_fc1_ebtimeF_dtTrue_Exp_0>>>>>>>>>>>>>>>>>>>>>>>>>>
train 8209
val 2785
test 2785
Epoch: 1 cost time: 2.1643295288085938
Epoch: 1, Steps: 64 | Train Loss: 0.6763266 Vali Loss: 0.3606680 Test Loss: 0.4046726
Validation loss decreased (inf --> 0.360668). Saving model ...
Updating learning rate to 0.0001
Epoch: 2 cost time: 1.7166812419891357
Epoch: 2, Steps: 64 | Train Loss: 0.5844589 Vali Loss: 0.2609031 Test Loss: 0.3279925
Validation loss decreased (0.360668 --> 0.260903). Saving model ...
Updating learning rate to 0.0001
Epoch: 3 cost time: 1.7202339172363281
Epoch: 3, Steps: 64 | Train Loss: 0.4991248 Vali Loss: 0.2373687 Test Loss: 0.2988000
Validation loss decreased (0.260903 --> 0.237369). Saving model ...
Updating learning rate to 0.0001
Epoch: 4 cost time: 1.7069125175476074
Epoch: 4, Steps: 64 | Train Loss: 0.4619388 Vali Loss: 0.2229405 Test Loss: 0.2888060
Validation loss decreased (0.237369 --> 0.222940). Saving model ...
Updating learning rate to 9e-05
Epoch: 5 cost time: 1.7258367538452148
Epoch: 5, Steps: 64 | Train Loss: 0.4415994 Vali Loss: 0.2171950 Test Loss: 0.2827333
Validation loss decreased (0.222940 --> 0.217195). Saving model ...
Updating learning rate to 8.1e-05
Epoch: 6 cost time: 1.7511725425720215
Epoch: 6, Steps: 64 | Train Loss: 0.4314346 Vali Loss: 0.2150504 Test Loss: 0.2809727
Validation loss decreased (0.217195 --> 0.215050). Saving model ...
Updating learning rate to 7.290000000000001e-05
Epoch: 7 cost time: 1.7013769149780273
Epoch: 7, Steps: 64 | Train Loss: 0.4257249 Vali Loss: 0.2145968 Test Loss: 0.2784564
Validation loss decreased (0.215050 --> 0.214597). Saving model ...
Updating learning rate to 6.561e-05
Epoch: 8 cost time: 1.7248353958129883
Epoch: 8, Steps: 64 | Train Loss: 0.4217108 Vali Loss: 0.2132926 Test Loss: 0.2776138
Validation loss decreased (0.214597 --> 0.213293). Saving model ...
Updating learning rate to 5.904900000000001e-05
Epoch: 9 cost time: 1.7173840999603271
Epoch: 9, Steps: 64 | Train Loss: 0.4192884 Vali Loss: 0.2128644 Test Loss: 0.2769505
Validation loss decreased (0.213293 --> 0.212864). Saving model ...
Updating learning rate to 5.3144100000000005e-05
Epoch: 10 cost time: 1.8021619319915771
Epoch: 10, Steps: 64 | Train Loss: 0.4169750 Vali Loss: 0.2115794 Test Loss: 0.2763964
Validation loss decreased (0.212864 --> 0.211579). Saving model ...
Updating learning rate to 4.782969000000001e-05
Epoch: 11 cost time: 1.7141728401184082
Epoch: 11, Steps: 64 | Train Loss: 0.4154261 Vali Loss: 0.2118243 Test Loss: 0.2759447
EarlyStopping counter: 1 out of 100
Updating learning rate to 4.304672100000001e-05
Epoch: 12 cost time: 1.706538200378418
Epoch: 12, Steps: 64 | Train Loss: 0.4144730 Vali Loss: 0.2105129 Test Loss: 0.2758228
Validation loss decreased (0.211579 --> 0.210513). Saving model ...
Updating learning rate to 3.874204890000001e-05
Epoch: 13 cost time: 1.717226505279541
Epoch: 13, Steps: 64 | Train Loss: 0.4131817 Vali Loss: 0.2107882 Test Loss: 0.2753261
EarlyStopping counter: 1 out of 100
Updating learning rate to 3.486784401000001e-05
Epoch: 14 cost time: 1.7399539947509766
Epoch: 14, Steps: 64 | Train Loss: 0.4124877 Vali Loss: 0.2098478 Test Loss: 0.2757070
Validation loss decreased (0.210513 --> 0.209848). Saving model ...
Updating learning rate to 3.138105960900001e-05
Epoch: 15 cost time: 1.800302267074585
Epoch: 15, Steps: 64 | Train Loss: 0.4110149 Vali Loss: 0.2112466 Test Loss: 0.2749723
EarlyStopping counter: 1 out of 100
Updating learning rate to 2.824295364810001e-05
Epoch: 16 cost time: 1.7359976768493652
Epoch: 16, Steps: 64 | Train Loss: 0.4097658 Vali Loss: 0.2106580 Test Loss: 0.2749432
EarlyStopping counter: 2 out of 100
Updating learning rate to 2.541865828329001e-05
Epoch: 17 cost time: 1.7370517253875732
Epoch: 17, Steps: 64 | Train Loss: 0.4094254 Vali Loss: 0.2105749 Test Loss: 0.2748891
EarlyStopping counter: 3 out of 100
Updating learning rate to 2.287679245496101e-05
Epoch: 18 cost time: 1.7441661357879639
Epoch: 18, Steps: 64 | Train Loss: 0.4085028 Vali Loss: 0.2104510 Test Loss: 0.2746597
EarlyStopping counter: 4 out of 100
Updating learning rate to 2.0589113209464907e-05
Epoch: 19 cost time: 1.7556688785552979
Epoch: 19, Steps: 64 | Train Loss: 0.4091201 Vali Loss: 0.2114273 Test Loss: 0.2745993
EarlyStopping counter: 5 out of 100
Updating learning rate to 1.8530201888518416e-05
Epoch: 20 cost time: 1.740110158920288
Epoch: 20, Steps: 64 | Train Loss: 0.4080682 Vali Loss: 0.2092140 Test Loss: 0.2747724
Validation loss decreased (0.209848 --> 0.209214). Saving model ...
Updating learning rate to 1.6677181699666577e-05
Epoch: 21 cost time: 1.7416722774505615
Epoch: 21, Steps: 64 | Train Loss: 0.4075525 Vali Loss: 0.2114391 Test Loss: 0.2744769
EarlyStopping counter: 1 out of 100
Updating learning rate to 1.5009463529699919e-05
Epoch: 22 cost time: 1.7436606884002686
Epoch: 22, Steps: 64 | Train Loss: 0.4080877 Vali Loss: 0.2100424 Test Loss: 0.2745726
EarlyStopping counter: 2 out of 100
Updating learning rate to 1.3508517176729929e-05
Epoch: 23 cost time: 1.7580888271331787
Epoch: 23, Steps: 64 | Train Loss: 0.4069620 Vali Loss: 0.2099439 Test Loss: 0.2743274
EarlyStopping counter: 3 out of 100
Updating learning rate to 1.2157665459056936e-05
Epoch: 24 cost time: 1.7594590187072754
Epoch: 24, Steps: 64 | Train Loss: 0.4066456 Vali Loss: 0.2096140 Test Loss: 0.2744710
EarlyStopping counter: 4 out of 100
Updating learning rate to 1.0941898913151242e-05
Epoch: 25 cost time: 1.7444617748260498
Epoch: 25, Steps: 64 | Train Loss: 0.4067152 Vali Loss: 0.2105653 Test Loss: 0.2744342
EarlyStopping counter: 5 out of 100
Updating learning rate to 9.847709021836118e-06
Epoch: 26 cost time: 1.7528653144836426
Epoch: 26, Steps: 64 | Train Loss: 0.4071409 Vali Loss: 0.2101815 Test Loss: 0.2743181
EarlyStopping counter: 6 out of 100
Updating learning rate to 8.862938119652508e-06
Epoch: 27 cost time: 1.7296009063720703
Epoch: 27, Steps: 64 | Train Loss: 0.4065424 Vali Loss: 0.2101335 Test Loss: 0.2743031
EarlyStopping counter: 7 out of 100
Updating learning rate to 7.976644307687255e-06
Epoch: 28 cost time: 1.7801082134246826
Epoch: 28, Steps: 64 | Train Loss: 0.4055321 Vali Loss: 0.2105002 Test Loss: 0.2742938
EarlyStopping counter: 8 out of 100
Updating learning rate to 7.178979876918531e-06
Epoch: 29 cost time: 1.752854585647583
Epoch: 29, Steps: 64 | Train Loss: 0.4062736 Vali Loss: 0.2107588 Test Loss: 0.2743371
EarlyStopping counter: 9 out of 100
Updating learning rate to 6.4610818892266776e-06
Epoch: 30 cost time: 1.7650232315063477
Epoch: 30, Steps: 64 | Train Loss: 0.4063427 Vali Loss: 0.2104823 Test Loss: 0.2742883
EarlyStopping counter: 10 out of 100
Updating learning rate to 5.8149737003040096e-06
Epoch: 31 cost time: 1.7491166591644287
Epoch: 31, Steps: 64 | Train Loss: 0.4054611 Vali Loss: 0.2088941 Test Loss: 0.2742677
Validation loss decreased (0.209214 --> 0.208894). Saving model ...
Updating learning rate to 5.23347633027361e-06
Epoch: 32 cost time: 1.8110325336456299
Epoch: 32, Steps: 64 | Train Loss: 0.4051607 Vali Loss: 0.2109981 Test Loss: 0.2741714
EarlyStopping counter: 1 out of 100
Updating learning rate to 4.710128697246249e-06
Epoch: 33 cost time: 1.814404010772705
Epoch: 33, Steps: 64 | Train Loss: 0.4040284 Vali Loss: 0.2108487 Test Loss: 0.2741989
EarlyStopping counter: 2 out of 100
Updating learning rate to 4.239115827521624e-06
Epoch: 34 cost time: 1.7568163871765137
Epoch: 34, Steps: 64 | Train Loss: 0.4055045 Vali Loss: 0.2101851 Test Loss: 0.2741915
EarlyStopping counter: 3 out of 100
Updating learning rate to 3.815204244769462e-06
Epoch: 35 cost time: 1.7631947994232178
Epoch: 35, Steps: 64 | Train Loss: 0.4058457 Vali Loss: 0.2100938 Test Loss: 0.2742004
EarlyStopping counter: 4 out of 100
Updating learning rate to 3.4336838202925152e-06
Epoch: 36 cost time: 1.7559540271759033
Epoch: 36, Steps: 64 | Train Loss: 0.4042997 Vali Loss: 0.2104393 Test Loss: 0.2741138
EarlyStopping counter: 5 out of 100
Updating learning rate to 3.090315438263264e-06
Epoch: 37 cost time: 1.8273789882659912
Epoch: 37, Steps: 64 | Train Loss: 0.4051889 Vali Loss: 0.2105755 Test Loss: 0.2742157
EarlyStopping counter: 6 out of 100
Updating learning rate to 2.7812838944369375e-06
Epoch: 38 cost time: 1.7628061771392822
Epoch: 38, Steps: 64 | Train Loss: 0.4054591 Vali Loss: 0.2107125 Test Loss: 0.2742336
EarlyStopping counter: 7 out of 100
Updating learning rate to 2.503155504993244e-06
Epoch: 39 cost time: 1.7907121181488037
Epoch: 39, Steps: 64 | Train Loss: 0.4051344 Vali Loss: 0.2105941 Test Loss: 0.2741874
EarlyStopping counter: 8 out of 100
Updating learning rate to 2.2528399544939195e-06
Epoch: 40 cost time: 1.778426170349121
Epoch: 40, Steps: 64 | Train Loss: 0.4048844 Vali Loss: 0.2100071 Test Loss: 0.2742252
EarlyStopping counter: 9 out of 100
Updating learning rate to 2.0275559590445276e-06
Epoch: 41 cost time: 1.8147203922271729
Epoch: 41, Steps: 64 | Train Loss: 0.4053577 Vali Loss: 0.2090154 Test Loss: 0.2742776
EarlyStopping counter: 10 out of 100
Updating learning rate to 1.8248003631400751e-06
Epoch: 42 cost time: 1.8106694221496582
Epoch: 42, Steps: 64 | Train Loss: 0.4058025 Vali Loss: 0.2098368 Test Loss: 0.2742032
EarlyStopping counter: 11 out of 100
Updating learning rate to 1.6423203268260676e-06
Epoch: 43 cost time: 1.7889034748077393
Epoch: 43, Steps: 64 | Train Loss: 0.4042823 Vali Loss: 0.2099250 Test Loss: 0.2741432
EarlyStopping counter: 12 out of 100
Updating learning rate to 1.4780882941434609e-06
Epoch: 44 cost time: 1.7906208038330078
Epoch: 44, Steps: 64 | Train Loss: 0.4041702 Vali Loss: 0.2096843 Test Loss: 0.2741599
EarlyStopping counter: 13 out of 100
Updating learning rate to 1.3302794647291146e-06
Epoch: 45 cost time: 1.7851519584655762
Epoch: 45, Steps: 64 | Train Loss: 0.4053837 Vali Loss: 0.2096516 Test Loss: 0.2742423
EarlyStopping counter: 14 out of 100
Updating learning rate to 1.1972515182562034e-06
Epoch: 46 cost time: 1.8485832214355469
Epoch: 46, Steps: 64 | Train Loss: 0.4050943 Vali Loss: 0.2101437 Test Loss: 0.2741430
EarlyStopping counter: 15 out of 100
Updating learning rate to 1.077526366430583e-06
Epoch: 47 cost time: 1.777778148651123
Epoch: 47, Steps: 64 | Train Loss: 0.4059505 Vali Loss: 0.2098159 Test Loss: 0.2741998
EarlyStopping counter: 16 out of 100
Updating learning rate to 9.697737297875248e-07
Epoch: 48 cost time: 1.8135738372802734
Epoch: 48, Steps: 64 | Train Loss: 0.4047638 Vali Loss: 0.2100287 Test Loss: 0.2741700
EarlyStopping counter: 17 out of 100
Updating learning rate to 8.727963568087723e-07
Epoch: 49 cost time: 1.8284835815429688
Epoch: 49, Steps: 64 | Train Loss: 0.4055362 Vali Loss: 0.2096763 Test Loss: 0.2741248
EarlyStopping counter: 18 out of 100
Updating learning rate to 7.855167211278951e-07
Epoch: 50 cost time: 1.798884391784668
Epoch: 50, Steps: 64 | Train Loss: 0.4044474 Vali Loss: 0.2092623 Test Loss: 0.2741421
EarlyStopping counter: 19 out of 100
Updating learning rate to 7.069650490151056e-07
Epoch: 51 cost time: 1.8458547592163086
Epoch: 51, Steps: 64 | Train Loss: 0.4046243 Vali Loss: 0.2099508 Test Loss: 0.2741397
EarlyStopping counter: 20 out of 100
Updating learning rate to 6.36268544113595e-07
Epoch: 52 cost time: 1.8398337364196777
Epoch: 52, Steps: 64 | Train Loss: 0.4045866 Vali Loss: 0.2097042 Test Loss: 0.2741141
EarlyStopping counter: 21 out of 100
Updating learning rate to 5.726416897022355e-07
Epoch: 53 cost time: 1.7948007583618164
Epoch: 53, Steps: 64 | Train Loss: 0.4026537 Vali Loss: 0.2108110 Test Loss: 0.2740823
EarlyStopping counter: 22 out of 100
Updating learning rate to 5.15377520732012e-07
Epoch: 54 cost time: 1.796067237854004
Epoch: 54, Steps: 64 | Train Loss: 0.4041865 Vali Loss: 0.2096090 Test Loss: 0.2741497
EarlyStopping counter: 23 out of 100
Updating learning rate to 4.6383976865881085e-07
Epoch: 55 cost time: 1.8623394966125488
Epoch: 55, Steps: 64 | Train Loss: 0.4055809 Vali Loss: 0.2102339 Test Loss: 0.2740812
EarlyStopping counter: 24 out of 100
Updating learning rate to 4.174557917929298e-07
Epoch: 56 cost time: 1.819002628326416
Epoch: 56, Steps: 64 | Train Loss: 0.4046756 Vali Loss: 0.2101153 Test Loss: 0.2742102
EarlyStopping counter: 25 out of 100
Updating learning rate to 3.7571021261363677e-07
Epoch: 57 cost time: 1.8020164966583252
Epoch: 57, Steps: 64 | Train Loss: 0.4042921 Vali Loss: 0.2098823 Test Loss: 0.2742269
EarlyStopping counter: 26 out of 100
Updating learning rate to 3.381391913522731e-07
Epoch: 58 cost time: 1.781646728515625
Epoch: 58, Steps: 64 | Train Loss: 0.4023844 Vali Loss: 0.2097825 Test Loss: 0.2741038
EarlyStopping counter: 27 out of 100
Updating learning rate to 3.043252722170458e-07
Epoch: 59 cost time: 1.801849365234375
Epoch: 59, Steps: 64 | Train Loss: 0.4056277 Vali Loss: 0.2095982 Test Loss: 0.2742191
EarlyStopping counter: 28 out of 100
Updating learning rate to 2.7389274499534124e-07
Epoch: 60 cost time: 1.8106637001037598
Epoch: 60, Steps: 64 | Train Loss: 0.4036580 Vali Loss: 0.2098972 Test Loss: 0.2741767
EarlyStopping counter: 29 out of 100
Updating learning rate to 2.465034704958071e-07
Epoch: 61 cost time: 1.7544925212860107
Epoch: 61, Steps: 64 | Train Loss: 0.4042776 Vali Loss: 0.2092048 Test Loss: 0.2741373
EarlyStopping counter: 30 out of 100
Updating learning rate to 2.218531234462264e-07
Epoch: 62 cost time: 1.799748182296753
Epoch: 62, Steps: 64 | Train Loss: 0.4044871 Vali Loss: 0.2099020 Test Loss: 0.2741574
EarlyStopping counter: 31 out of 100
Updating learning rate to 1.9966781110160376e-07
Epoch: 63 cost time: 1.8225650787353516
Epoch: 63, Steps: 64 | Train Loss: 0.4040244 Vali Loss: 0.2100120 Test Loss: 0.2739767
EarlyStopping counter: 32 out of 100
Updating learning rate to 1.797010299914434e-07
Epoch: 64 cost time: 1.8303875923156738
Epoch: 64, Steps: 64 | Train Loss: 0.4052188 Vali Loss: 0.2099989 Test Loss: 0.2741511
EarlyStopping counter: 33 out of 100
Updating learning rate to 1.6173092699229907e-07
Epoch: 65 cost time: 1.79451322555542
Epoch: 65, Steps: 64 | Train Loss: 0.4053198 Vali Loss: 0.2093047 Test Loss: 0.2741008
EarlyStopping counter: 34 out of 100
Updating learning rate to 1.4555783429306916e-07
Epoch: 66 cost time: 1.7962408065795898
Epoch: 66, Steps: 64 | Train Loss: 0.4035531 Vali Loss: 0.2104188 Test Loss: 0.2741628
EarlyStopping counter: 35 out of 100
Updating learning rate to 1.3100205086376224e-07
Epoch: 67 cost time: 1.8021390438079834
Epoch: 67, Steps: 64 | Train Loss: 0.4055579 Vali Loss: 0.2104181 Test Loss: 0.2741275
EarlyStopping counter: 36 out of 100
Updating learning rate to 1.1790184577738603e-07
Epoch: 68 cost time: 1.810896396636963
Epoch: 68, Steps: 64 | Train Loss: 0.4058300 Vali Loss: 0.2097863 Test Loss: 0.2741243
EarlyStopping counter: 37 out of 100
Updating learning rate to 1.0611166119964742e-07
Epoch: 69 cost time: 1.810041904449463
Epoch: 69, Steps: 64 | Train Loss: 0.4048844 Vali Loss: 0.2094992 Test Loss: 0.2741615
EarlyStopping counter: 38 out of 100
Updating learning rate to 9.550049507968268e-08
Epoch: 70 cost time: 1.7844066619873047
Epoch: 70, Steps: 64 | Train Loss: 0.4039441 Vali Loss: 0.2104695 Test Loss: 0.2741447
EarlyStopping counter: 39 out of 100
Updating learning rate to 8.595044557171442e-08
Epoch: 71 cost time: 1.7985899448394775
Epoch: 71, Steps: 64 | Train Loss: 0.4049230 Vali Loss: 0.2101472 Test Loss: 0.2741033
EarlyStopping counter: 40 out of 100
Updating learning rate to 7.735540101454298e-08
Epoch: 72 cost time: 1.8101081848144531
Epoch: 72, Steps: 64 | Train Loss: 0.4044941 Vali Loss: 0.2099488 Test Loss: 0.2740869
EarlyStopping counter: 41 out of 100
Updating learning rate to 6.961986091308869e-08
Epoch: 73 cost time: 1.8383808135986328
Epoch: 73, Steps: 64 | Train Loss: 0.4044523 Vali Loss: 0.2102703 Test Loss: 0.2741061
EarlyStopping counter: 42 out of 100
Updating learning rate to 6.265787482177981e-08
Epoch: 74 cost time: 1.770869493484497
Epoch: 74, Steps: 64 | Train Loss: 0.4057866 Vali Loss: 0.2096028 Test Loss: 0.2741747
EarlyStopping counter: 43 out of 100
Updating learning rate to 5.639208733960184e-08
Epoch: 75 cost time: 1.797130823135376
Epoch: 75, Steps: 64 | Train Loss: 0.4045157 Vali Loss: 0.2106047 Test Loss: 0.2741838
EarlyStopping counter: 44 out of 100
Updating learning rate to 5.075287860564165e-08
Epoch: 76 cost time: 1.8030922412872314
Epoch: 76, Steps: 64 | Train Loss: 0.4034024 Vali Loss: 0.2100367 Test Loss: 0.2741746
EarlyStopping counter: 45 out of 100
Updating learning rate to 4.567759074507749e-08
Epoch: 77 cost time: 1.7932078838348389
Epoch: 77, Steps: 64 | Train Loss: 0.4046166 Vali Loss: 0.2097695 Test Loss: 0.2740767
EarlyStopping counter: 46 out of 100
Updating learning rate to 4.1109831670569744e-08
Epoch: 78 cost time: 1.8020799160003662
Epoch: 78, Steps: 64 | Train Loss: 0.4030784 Vali Loss: 0.2091450 Test Loss: 0.2741565
EarlyStopping counter: 47 out of 100
Updating learning rate to 3.6998848503512764e-08
Epoch: 79 cost time: 1.7950432300567627
Epoch: 79, Steps: 64 | Train Loss: 0.4049116 Vali Loss: 0.2105238 Test Loss: 0.2741542
EarlyStopping counter: 48 out of 100
Updating learning rate to 3.3298963653161496e-08
Epoch: 80 cost time: 1.7550172805786133
Epoch: 80, Steps: 64 | Train Loss: 0.4059545 Vali Loss: 0.2089495 Test Loss: 0.2741537
EarlyStopping counter: 49 out of 100
Updating learning rate to 2.996906728784534e-08
Epoch: 81 cost time: 1.7906570434570312
Epoch: 81, Steps: 64 | Train Loss: 0.4031522 Vali Loss: 0.2097577 Test Loss: 0.2741734
EarlyStopping counter: 50 out of 100
Updating learning rate to 2.697216055906081e-08
Epoch: 82 cost time: 2.3685061931610107
Epoch: 82, Steps: 64 | Train Loss: 0.4041427 Vali Loss: 0.2094271 Test Loss: 0.2741428
EarlyStopping counter: 51 out of 100
Updating learning rate to 2.427494450315473e-08
Epoch: 83 cost time: 1.779212474822998
Epoch: 83, Steps: 64 | Train Loss: 0.4040385 Vali Loss: 0.2094695 Test Loss: 0.2741523
EarlyStopping counter: 52 out of 100
Updating learning rate to 2.1847450052839257e-08
Epoch: 84 cost time: 1.778573751449585
Epoch: 84, Steps: 64 | Train Loss: 0.4038752 Vali Loss: 0.2098419 Test Loss: 0.2741503
EarlyStopping counter: 53 out of 100
Updating learning rate to 1.9662705047555332e-08
Epoch: 85 cost time: 1.7763221263885498
Epoch: 85, Steps: 64 | Train Loss: 0.4048391 Vali Loss: 0.2088083 Test Loss: 0.2741368
Validation loss decreased (0.208894 --> 0.208808). Saving model ...
Updating learning rate to 1.7696434542799797e-08
Epoch: 86 cost time: 1.82301926612854
Epoch: 86, Steps: 64 | Train Loss: 0.4055296 Vali Loss: 0.2092915 Test Loss: 0.2740907
EarlyStopping counter: 1 out of 100
Updating learning rate to 1.5926791088519817e-08
Epoch: 87 cost time: 1.782050371170044
Epoch: 87, Steps: 64 | Train Loss: 0.4054545 Vali Loss: 0.2093168 Test Loss: 0.2741685
EarlyStopping counter: 2 out of 100
Updating learning rate to 1.4334111979667836e-08
Epoch: 88 cost time: 1.797612190246582
Epoch: 88, Steps: 64 | Train Loss: 0.4031411 Vali Loss: 0.2100785 Test Loss: 0.2740988
EarlyStopping counter: 3 out of 100
Updating learning rate to 1.2900700781701054e-08
Epoch: 89 cost time: 1.8229186534881592
Epoch: 89, Steps: 64 | Train Loss: 0.4051060 Vali Loss: 0.2101304 Test Loss: 0.2741939
EarlyStopping counter: 4 out of 100
Updating learning rate to 1.161063070353095e-08
Epoch: 90 cost time: 1.8107361793518066
Epoch: 90, Steps: 64 | Train Loss: 0.4056356 Vali Loss: 0.2109791 Test Loss: 0.2741085
EarlyStopping counter: 5 out of 100
Updating learning rate to 1.0449567633177854e-08
Epoch: 91 cost time: 1.8436458110809326
Epoch: 91, Steps: 64 | Train Loss: 0.4044930 Vali Loss: 0.2085991 Test Loss: 0.2741357
Validation loss decreased (0.208808 --> 0.208599). Saving model ...
Updating learning rate to 9.404610869860069e-09
Epoch: 92 cost time: 1.8070034980773926
Epoch: 92, Steps: 64 | Train Loss: 0.4049517 Vali Loss: 0.2098995 Test Loss: 0.2741151
EarlyStopping counter: 1 out of 100
Updating learning rate to 8.464149782874063e-09
Epoch: 93 cost time: 1.791562795639038
Epoch: 93, Steps: 64 | Train Loss: 0.4044298 Vali Loss: 0.2099953 Test Loss: 0.2741124
EarlyStopping counter: 2 out of 100
Updating learning rate to 7.617734804586658e-09
Epoch: 94 cost time: 1.8094542026519775
Epoch: 94, Steps: 64 | Train Loss: 0.4042742 Vali Loss: 0.2098850 Test Loss: 0.2740951
EarlyStopping counter: 3 out of 100
Updating learning rate to 6.855961324127991e-09
Epoch: 95 cost time: 1.828190565109253
Epoch: 95, Steps: 64 | Train Loss: 0.4050080 Vali Loss: 0.2096854 Test Loss: 0.2741583
EarlyStopping counter: 4 out of 100
Updating learning rate to 6.170365191715193e-09
Epoch: 96 cost time: 1.8119406700134277
Epoch: 96, Steps: 64 | Train Loss: 0.4039116 Vali Loss: 0.2106151 Test Loss: 0.2741201
EarlyStopping counter: 5 out of 100
Updating learning rate to 5.5533286725436726e-09
Epoch: 97 cost time: 1.808847427368164
Epoch: 97, Steps: 64 | Train Loss: 0.4049788 Vali Loss: 0.2096705 Test Loss: 0.2741672
EarlyStopping counter: 6 out of 100
Updating learning rate to 4.997995805289306e-09
Epoch: 98 cost time: 1.7819063663482666
Epoch: 98, Steps: 64 | Train Loss: 0.4048662 Vali Loss: 0.2096142 Test Loss: 0.2741997
EarlyStopping counter: 7 out of 100
Updating learning rate to 4.498196224760375e-09
Epoch: 99 cost time: 1.8054022789001465
Epoch: 99, Steps: 64 | Train Loss: 0.4050980 Vali Loss: 0.2100110 Test Loss: 0.2741385
EarlyStopping counter: 8 out of 100
Updating learning rate to 4.048376602284338e-09
Epoch: 100 cost time: 1.8459892272949219
Epoch: 100, Steps: 64 | Train Loss: 0.4042387 Vali Loss: 0.2093764 Test Loss: 0.2740815
EarlyStopping counter: 9 out of 100
Updating learning rate to 3.643538942055904e-09
testing : 336_96_PatchTST_ETTh2_ftM_sl336_ll48_pl96_dm16_nh4_el3_dl1_df128_fc1_ebtimeF_dtTrue_Exp_0<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
test 2785
mse:0.2741357684135437, mae:0.3360080420970917, rse:0.41826650500297546
```
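As an aside, the "Updating learning rate" lines follow from `lradj='type3'`. A hedged reconstruction, inferred purely from the logged values (the name `type3_lr` is mine, not from the repo): hold the base rate for the first few epochs, then decay it by a factor of 0.9 per epoch.

```python
# Reconstruction of the lr values printed in the log above (assumption:
# 1-indexed epochs; epoch e uses base_lr for e <= 4, then base_lr * 0.9**(e-4)).
def type3_lr(base_lr: float, epoch: int) -> float:
    return base_lr if epoch <= 4 else base_lr * 0.9 ** (epoch - 4)

# Matches the log: epoch 5 -> 9e-05, epoch 6 -> 8.1e-05, epoch 7 -> 7.29e-05
for e in (5, 6, 7):
    print(e, type3_lr(1e-4, e))
```

This explains why the loss keeps oscillating in the tail of the log: by epoch ~50 the learning rate is below 1e-7, so the model barely moves.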

@erikalien5595

I think the reason is an out-of-distribution problem (a distribution shift between the training split and the validation/test splits), isn't it?
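Besides a possible distribution shift, note that the config above sets `dropout=0.3` and `fc_dropout=0.3`. The train loss is computed with the model in train mode (dropout active), while validation/test losses are computed in eval mode (dropout disabled), which by itself inflates the reported train loss. A minimal PyTorch sketch with a toy model (not PatchTST itself) showing the two modes:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Toy regression model with the same dropout rate as the experiment config.
model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(), nn.Dropout(p=0.3), nn.Linear(32, 1)
)
x, y = torch.randn(256, 16), torch.randn(256, 1)
criterion = nn.MSELoss()

model.train()  # dropout active: this is how the train loss is measured
train_mode_loss = criterion(model(x), y).item()

model.eval()   # dropout disabled: this is how vali/test losses are measured
with torch.no_grad():
    eval_mode_loss = criterion(model(x), y).item()

print(train_mode_loss, eval_mode_loss)
```

In train mode the forward pass is stochastic (a different dropout mask each call), so the measured train loss carries extra noise that the eval-mode losses do not.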
