RuntimeError: DataLoader worker (pid(s) 22100) exited unexpectedly #173

Open
am-official opened this issue Sep 7, 2022 · 1 comment

am-official commented Sep 7, 2022

Dear all,

I have tried all the possibilities, but no luck. Could anyone help me figure out the issue? Your help would be very much appreciated!

I am using a Windows 11 PC with an NVIDIA GeForce RTX 3090 GPU.

------------ Options -------------
add_face_disc: False
aspect_ratio: 1.0
basic_point_only: False
batchSize: 1
checkpoints_dir: ./checkpoints
dataroot: datasets/Cityscapes/
dataset_mode: temporal
debug: False
densepose_only: False
display_id: 0
display_winsize: 512
feat_num: 3
fg: True
fg_labels: [26]
fineSize: 512
fp16: False
gpu_ids: [0]
how_many: 300
input_nc: 3
isTrain: False
label_feat: False
label_nc: 35
loadSize: 1024
load_features: False
load_pretrain: 
local_rank: 0
max_dataset_size: inf
model: vid2vid
nThreads: 2
n_blocks: 9
n_blocks_local: 3
n_downsample_E: 3
n_downsample_G: 2
n_frames_G: 3
n_gpus_gen: 1
n_local_enhancers: 1
n_scales_spatial: 3
name: label2city_1024_g1
ndf: 64
nef: 32
netE: simple
netG: composite
ngf: 128
no_canny_edge: False
no_dist_map: False
no_first_img: False
no_flip: False
no_flow: False
norm: batch
ntest: inf
openpose_only: False
output_nc: 3
phase: test
random_drop_prob: 0.05
random_scale_points: False
remove_face_labels: False
resize_or_crop: scaleWidth
results_dir: ./results/
serial_batches: False
start_frame: 0
tf_log: False
use_instance: True
use_real_img: False
use_single_G: True
which_epoch: latest
-------------- End ----------------
CustomDatasetDataLoader
dataset [TestDataset] was created
vid2vid
---------- Networks initialized -------------
-----------------------------------------------
Doing 28 frames
[]
Num GPUs Available:  0
2022-09-07 18:43:37.812230: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX AVX2
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-09-07 18:43:37.818042: I tensorflow/compiler/xla/service/service.cc:170] XLA service 0x2d8390eac00 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2022-09-07 18:43:37.818151: I tensorflow/compiler/xla/service/service.cc:178]   StreamExecutor device (0): Host, Default Version
[0]
Device ID (unmasked): 0
Device ID (masked): 0
a+b=42
2022-09-07 18:43:37.823294: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:354] MLIR V1 optimization pass is not enabled
------------ Options -------------
add_face_disc: False
aspect_ratio: 1.0
basic_point_only: False
batchSize: 1
checkpoints_dir: ./checkpoints
dataroot: datasets/Cityscapes/
dataset_mode: temporal
debug: False
densepose_only: False
display_id: 0
display_winsize: 512
feat_num: 3
fg: True
fg_labels: [26]
fineSize: 512
fp16: False
gpu_ids: [0]
how_many: 300
input_nc: 3
isTrain: False
label_feat: False
label_nc: 35
loadSize: 1024
load_features: False
load_pretrain: 
local_rank: 0
max_dataset_size: inf
model: vid2vid
nThreads: 2
n_blocks: 9
n_blocks_local: 3
n_downsample_E: 3
n_downsample_G: 2
n_frames_G: 3
n_gpus_gen: 1
n_local_enhancers: 1
n_scales_spatial: 3
name: label2city_1024_g1
ndf: 64
nef: 32
netE: simple
netG: composite
ngf: 128
no_canny_edge: False
no_dist_map: False
no_first_img: False
no_flip: False
no_flow: False
norm: batch
ntest: inf
openpose_only: False
output_nc: 3
phase: test
random_drop_prob: 0.05
random_scale_points: False
remove_face_labels: False
resize_or_crop: scaleWidth
results_dir: ./results/
serial_batches: False
start_frame: 0
tf_log: False
use_instance: True
use_real_img: False
use_single_G: True
which_epoch: latest
-------------- End ----------------
CustomDatasetDataLoader
dataset [TestDataset] was created
vid2vid
---------- Networks initialized -------------
-----------------------------------------------
Doing 28 frames
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\multiprocessing\spawn.py", line 125, in _main
    prepare(preparation_data)
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\multiprocessing\spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\runpy.py", line 265, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\runpy.py", line 97, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "D:\Python_Code\venv\vid2vid-master\test.py", line 75, in <module>
    for i, data in enumerate(dataset):
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\site-packages\torch\utils\data\dataloader.py", line 368, in __iter__
    return self._get_iterator()
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\site-packages\torch\utils\data\dataloader.py", line 314, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\site-packages\torch\utils\data\dataloader.py", line 927, in __init__
    w.start()
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\multiprocessing\popen_spawn_win32.py", line 45, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\multiprocessing\spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\multiprocessing\spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError: 
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.
Traceback (most recent call last):
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\site-packages\torch\utils\data\dataloader.py", line 1011, in _try_get_data
    data = self._data_queue.get(timeout=timeout)
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\multiprocessing\queues.py", line 108, in get
    raise Empty
_queue.Empty

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:/Python_Code/venv/vid2vid-master/test.py", line 75, in <module>
    for i, data in enumerate(dataset):
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\site-packages\torch\utils\data\dataloader.py", line 530, in __next__
    data = self._next_data()
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\site-packages\torch\utils\data\dataloader.py", line 1207, in _next_data
    idx, data = self._get_data()
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\site-packages\torch\utils\data\dataloader.py", line 1173, in _get_data
    success, data = self._try_get_data()
  File "C:\ProgramData\Miniconda3\envs\tf2.4\lib\site-packages\torch\utils\data\dataloader.py", line 1024, in _try_get_data
    raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 22100) exited unexpectedly
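The first traceback spells out the likely cause: on Windows, Python's multiprocessing uses "spawn", so each DataLoader worker re-imports the main module (which is also why the options block is printed twice above). Code that creates workers must therefore run under an `if __name__ == '__main__':` guard. Below is a minimal sketch of that idiom; `ToyDataset` and `main` are placeholders for illustration, not the actual vid2vid code in test.py. If the repo's `nThreads` option maps to the DataLoader's `num_workers` (an assumption), passing `--nThreads 0` would sidestep worker spawning entirely.

```python
# Windows-safe DataLoader entry-point idiom (sketch, not the vid2vid code).
# On Windows, workers are started with "spawn", which re-imports this module;
# anything that starts workers must therefore live under the __main__ guard.
import torch
from torch.utils.data import DataLoader, Dataset


class ToyDataset(Dataset):
    """Placeholder dataset standing in for the repo's TestDataset."""

    def __len__(self):
        return 4

    def __getitem__(self, idx):
        return torch.tensor(idx)


def main():
    # num_workers > 0 spawns child processes; num_workers=0 falls back to
    # single-process loading, a common workaround when spawning fails.
    loader = DataLoader(ToyDataset(), batch_size=2, num_workers=2)
    for i, batch in enumerate(loader):
        print(i, batch.tolist())


if __name__ == '__main__':
    main()
```

With the guard in place, the re-imported module in each worker stops at the `if __name__ == '__main__':` check instead of re-running the loading loop and raising the bootstrapping error.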
@wangjinbo0929

Have you solved the problem? I have the same problem. @am-official
