I am trying to train a YOLOX-Pose model on a custom dataset that was annotated with the CVAT tool and exported to COCO format.
I then followed the MMPose_Tutorial.ipynb notebook and created a dataset class similar to TinyCocoDataset (a rough sketch is included below).
While training, I get the error shown in the traceback that follows.
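For context, the dataset class follows the registration pattern from the tutorial. This is only a minimal sketch of that pattern; the class name and the METAINFO path are copied from the tutorial notebook, not my exact code or paths:

```python
# Sketch only: registers a COCO-style dataset the same way the MMPose tutorial does.
from mmpose.registry import DATASETS
from mmpose.datasets.datasets.base import BaseCocoStyleDataset


@DATASETS.register_module()
class TinyCocoDataset(BaseCocoStyleDataset):
    # The tutorial reuses the standard COCO keypoint metainfo; a custom dataset
    # would point this at its own metainfo file describing its keypoints.
    METAINFO: dict = dict(from_file='configs/_base_/datasets/coco.py')
```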
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[6], line 11
      8 runner = Runner.from_cfg(cfg)
     10 # start training
---> 11 runner.train()

File ~/anaconda3/envs/mmpose-env/lib/python3.8/site-packages/mmengine/runner/runner.py:1777, in Runner.train(self)
   1775 self._maybe_compile('train_step')
-> 1777 model = self.train_loop.run()  # type: ignore
   1778 self.call_hook('after_run')

File ~/anaconda3/envs/mmpose-env/lib/python3.8/site-packages/mmengine/runner/loops.py:96, in EpochBasedTrainLoop.run(self)
     95 while self._epoch < self._max_epochs and not self.stop_training:
---> 96     self.run_epoch()

File ~/anaconda3/envs/mmpose-env/lib/python3.8/site-packages/mmengine/runner/loops.py:113, in EpochBasedTrainLoop.run_epoch(self)
    112 for idx, data_batch in enumerate(self.dataloader):
--> 113     self.run_iter(idx, data_batch)

File ~/anaconda3/envs/mmpose-env/lib/python3.8/site-packages/mmengine/runner/loops.py:129, in EpochBasedTrainLoop.run_iter(self, idx, data_batch)
--> 129 outputs = self.runner.model.train_step(
    130     data_batch, optim_wrapper=self.runner.optim_wrapper)

File ~/anaconda3/envs/mmpose-env/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py:114, in BaseModel.train_step(self, data, optim_wrapper)
    113 data = self.data_preprocessor(data, True)
--> 114 losses = self._run_forward(data, mode='loss')  # type: ignore

File ~/anaconda3/envs/mmpose-env/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py:361, in BaseModel._run_forward(self, data, mode)
    360 if isinstance(data, dict):
--> 361     results = self(**data, mode=mode)

File ~/anaconda3/envs/mmpose-env/lib/python3.8/site-packages/torch/nn/modules/module.py:1518, in Module._wrapped_call_impl(self, *args, **kwargs)
-> 1518 return self._call_impl(*args, **kwargs)

File ~/anaconda3/envs/mmpose-env/lib/python3.8/site-packages/torch/nn/modules/module.py:1527, in Module._call_impl(self, *args, **kwargs)
-> 1527 return forward_call(*args, **kwargs)

File ~/mmpose/mmpose/models/pose_estimators/base.py:155, in BasePoseEstimator.forward(self, inputs, data_samples, mode)
    154 if mode == 'loss':
--> 155     return self.loss(inputs, data_samples)

File ~/mmpose/mmpose/models/pose_estimators/bottomup.py:70, in BottomupPoseEstimator.loss(self, inputs, data_samples)
     68 if self.with_head:
     69     losses.update(
---> 70         self.head.loss(feats, data_samples, train_cfg=self.train_cfg))

File ~/mmpose/mmpose/models/heads/hybrid_heads/yoloxpose_head.py:319, in YOLOXPoseHead.loss(self, feats, batch_data_samples, train_cfg)
    318 # 2. generate targets
--> 319 targets = self._get_targets(flatten_priors,
    320                             flatten_cls_scores.detach(),
    321                             flatten_objectness.detach(),
    322                             flatten_bbox_decoded.detach(),
    323                             flatten_kpt_decoded.detach(),
    324                             flatten_kpt_vis.detach(),
    325                             batch_data_samples)

File ~/anaconda3/envs/mmpose-env/lib/python3.8/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
--> 115 return func(*args, **kwargs)

File ~/mmpose/mmpose/models/heads/hybrid_heads/yoloxpose_head.py:410, in YOLOXPoseHead._get_targets(self, priors, batch_cls_scores, batch_objectness, batch_decoded_bboxes, batch_decoded_kpts, batch_kpt_vis, batch_data_samples)
    409 for i in range(num_imgs):
--> 410     target = self._get_targets_single(priors, batch_cls_scores[i],
    411                                       batch_objectness[i],
    412                                       batch_decoded_bboxes[i],
    413                                       batch_decoded_kpts[i],
    414                                       batch_kpt_vis[i],
    415                                       batch_data_samples[i])

File ~/anaconda3/envs/mmpose-env/lib/python3.8/site-packages/torch/utils/_contextlib.py:115, in context_decorator.<locals>.decorate_context(*args, **kwargs)
--> 115 return func(*args, **kwargs)

File ~/mmpose/mmpose/models/heads/hybrid_heads/yoloxpose_head.py:532, in YOLOXPoseHead._get_targets_single(self, priors, cls_scores, objectness, decoded_bboxes, decoded_kpts, kpt_vis, data_sample)
    525 pred_instances = InstanceData(
    526     bboxes=decoded_bboxes,
    527     scores=scores.sqrt_(),
    (...)
    531 )
--> 532 assign_result = self.assigner.assign(
    533     pred_instances=pred_instances, gt_instances=gt_instances)

File ~/mmpose/mmpose/models/task_modules/assigners/sim_ota_assigner.py:85, in SimOTAAssigner.assign(self, pred_instances, gt_instances, **kwargs)
     84 gt_bboxes = gt_instances.bboxes
---> 85 gt_labels = gt_instances.labels
     86 gt_keypoints = gt_instances.keypoints
     87 gt_keypoints_visible = gt_instances.keypoints_visible

AttributeError: 'InstanceData' object has no attribute 'labels'
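The last frame shows `SimOTAAssigner.assign` reading `gt_instances.labels`, so one way to narrow this down is to check which fields the dataloader actually packs into the ground-truth instances. A minimal inspection sketch, assuming `cfg` is the same Config object used for training below:

```python
# Sketch only: build the train dataloader from the config and print which
# instance fields each data sample carries, to see whether `labels` is packed.
from mmengine.runner import Runner
from mmpose.utils import register_all_modules

register_all_modules()  # make sure MMPose modules are registered outside a Runner

train_loader = Runner.build_dataloader(cfg.train_dataloader)
batch = next(iter(train_loader))

sample = batch['data_samples'][0]
for field in ('gt_instances', 'gt_instance_labels'):
    if field in sample:
        print(field, list(getattr(sample, field).keys()))
```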
Training is launched from the notebook with:

from mmengine.config import Config, DictAction
from mmengine.runner import Runner

cfg.model.setdefault('data_preprocessor', cfg.get('preprocess_cfg', {}))

runner = Runner.from_cfg(cfg)
runner.train()
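For completeness, `cfg` above is loaded from a YOLOX-Pose config file and pointed at the custom annotations before building the runner, roughly like the following; the config path, dataset type, and data paths here are placeholders rather than my exact values:

```python
# Sketch only: every path and the dataset type below are placeholders.
from mmengine.config import Config

cfg = Config.fromfile('path/to/yoloxpose_config.py')  # local YOLOX-Pose config

cfg.work_dir = 'work_dirs/yoloxpose_custom'
cfg.train_dataloader.dataset.type = 'TinyCocoDataset'
cfg.train_dataloader.dataset.data_root = 'data/custom_coco/'
cfg.train_dataloader.dataset.ann_file = 'annotations/train.json'
```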