Index error (r0 to dl1 process) on the base production (dec_931) #371

Open
SeiyaNozaki opened this issue Nov 13, 2022 · 7 comments

@SeiyaNozaki
Contributor

I found an error in the r0 to dl1 process for a single file, and the merge job failed as a consequence.

/fefs/aswg/data/mc/DL1/AllSky/20221027_v0.9.9_base_prod/TrainingDataset/dec_931/Protons/node_corsika_theta_31.589_az_122.714_/job_logs_r0dl1/job_20543571_6.e

Traceback (most recent call last):
  File "/fefs/aswg/software/conda/envs/lstchain-v0.9.9/bin/lstchain_mc_r0_to_dl1", line 8, in <module>
    sys.exit(main())
  File "/fefs/aswg/software/conda/envs/lstchain-v0.9.9/lib/python3.8/site-packages/lstchain/scripts/lstchain_mc_r0_to_dl1.py", line 74, in main
    r0_to_dl1.r0_to_dl1(
  File "/fefs/aswg/software/conda/envs/lstchain-v0.9.9/lib/python3.8/site-packages/lstchain/reco/r0_to_dl1.py", line 459, in r0_to_dl1
    for i, event in enumerate(source):
  File "/fefs/aswg/software/conda/envs/lstchain-v0.9.9/lib/python3.8/site-packages/ctapipe/io/eventsource.py", line 278, in __iter__
    for event in self._generator():
  File "/fefs/aswg/software/conda/envs/lstchain-v0.9.9/lib/python3.8/site-packages/ctapipe/io/simteleventsource.py", line 376, in _generator
    yield from self._generate_events()
  File "/fefs/aswg/software/conda/envs/lstchain-v0.9.9/lib/python3.8/site-packages/ctapipe/io/simteleventsource.py", line 393, in _generate_events
    for counter, array_event in enumerate(self.file_):
  File "/fefs/aswg/software/conda/envs/lstchain-v0.9.9/lib/python3.8/site-packages/eventio/simtel/simtelfile.py", line 291, in iter_array_events
    self.next_low_level()
  File "/fefs/aswg/software/conda/envs/lstchain-v0.9.9/lib/python3.8/site-packages/eventio/simtel/simtelfile.py", line 155, in next_low_level
    self.current_mc_event = o.parse()
  File "/fefs/aswg/software/conda/envs/lstchain-v0.9.9/lib/python3.8/site-packages/eventio/simtel/objects.py", line 1413, in parse
    d = MCEvent.parse_mc_event(self.read(), self.header.version)
  File "src/eventio/simtel/parsing.pyx", line 53, in eventio.simtel.parsing.parse_mc_event
  File "src/eventio/simtel/parsing.pyx", line 66, in eventio.simtel.parsing.parse_mc_event
IndexError: Out of bounds on buffer access (axis 0)
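
For reference, a minimal sketch (not part of the original report) of how one could check the input simtel file directly with eventio, using the same array-event iteration that appears in the traceback; the path below is a placeholder, since the failing input file is not named in the job log:

from eventio import SimTelFile

# Placeholder path: substitute the simtel file processed by the failing job.
path = "corsika_theta_31.589_az_122.714_runXXX.simtel.gz"

n_events = 0
try:
    for _ in SimTelFile(path):  # same array-event iteration as in the traceback above
        n_events += 1
    print(f"OK: read {n_events} array events")
except IndexError as err:
    print(f"parsing failed after {n_events} array events: {err}")

If the input file is truncated or corrupted, this should reproduce the IndexError above without going through lstchain.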

/fefs/aswg/data/mc/DL1/AllSky/20221027_v0.9.9_base_prod/TrainingDataset/dec_931/Protons/merging-output.e

 31%|███       | 2618/8391 [13:48<35:41,  2.70it/s]Can't append node /dl1/event/telescope/parameters/LST_LSTCam from file /fefs/aswg/data/mc/DL1/AllSky/20221027_v0.9.9_base_prod/TrainingDataset/dec_931/Protons/node_corsika_theta_31.589_az_122.714_/dl1_simtel_corsika_theta_31.589_az_122.714_run192.h5
Traceback (most recent call last):
  File "/fefs/aswg/software/conda/envs/lstchain-v0.9.9/bin/lstchain_merge_hdf5_files", line 8, in <module>
    sys.exit(main())
  File "/fefs/aswg/software/conda/envs/lstchain-v0.9.9/lib/python3.8/site-packages/lstchain/scripts/lstchain_merge_hdf5_files.py", line 88, in main
    auto_merge_h5files(
  File "/fefs/aswg/software/conda/envs/lstchain-v0.9.9/lib/python3.8/site-packages/lstchain/io/io.py", line 341, in auto_merge_h5files
    out_node.append(in_node.read().astype(out_node.dtype))
ValueError: structures must have the same size
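
A minimal sketch (my assumption, not taken from the logs) of how one could locate the DL1 file whose parameters table has a different structure, by comparing the on-disk dtype of /dl1/event/telescope/parameters/LST_LSTCam across the files in the node directory with PyTables:

from pathlib import Path
import tables

# Node directory and table key taken from the merge log above.
node_dir = Path(
    "/fefs/aswg/data/mc/DL1/AllSky/20221027_v0.9.9_base_prod/TrainingDataset/"
    "dec_931/Protons/node_corsika_theta_31.589_az_122.714_"
)
key = "/dl1/event/telescope/parameters/LST_LSTCam"

# Group files by the dtype of the parameters table; the odd one out is the file
# that makes auto_merge_h5files fail with "structures must have the same size".
dtypes = {}
for f in sorted(node_dir.glob("dl1_*.h5")):
    with tables.open_file(str(f)) as h5:
        dtypes.setdefault(str(h5.get_node(key).dtype), []).append(f.name)

for dt, files in dtypes.items():
    print(len(files), "file(s) with dtype", dt)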
@vuillaut
Member

Thank you for reporting, @SeiyaNozaki!
I will look into it.

@vuillaut
Member

Hi @SeiyaNozaki
Are there other analyses/productions affected by this error?

@SeiyaNozaki
Contributor Author

No, I found this error by chance. :)

@vuillaut
Member

I have restarted the DL1 production for node node_corsika_theta_31.589_az_122.714_, as well as the merging.

@vuillaut
Member

OK, this seems to be fixed.

@SeiyaNozaki
Contributor Author

@vuillaut It seems other base productions have the same issue:
20221215_v0.9.12_base_prod and 20230127_v0.9.12_base_prod_az_tel

@SeiyaNozaki
Contributor Author

@vuillaut Did you already reprocess those productions? I heard the gammaness distribution looked strange when Estelle used this production (dec_931 in 20230127_v0.9.12_base_prod_az_tel), so I suspect this issue affects her analysis.
