
Solo on 10x genomics scRNA data #67

Closed

rpauly opened this issue Aug 10, 2021 · 3 comments

@rpauly

rpauly commented Aug 10, 2021

Hi,
I am trying to run solo on the 10x genomics data. This is the command I used:
solo -d /projects/lihc_hiseq/active/SingleCell/processedDATA/12_M491_PBMC_CTC_cDNArep/outs/filtered_feature_bc_matrix -j model_json.json -o 12_M491_output
The model_json.json is the default you have suggested.
This is the error I get:
Cuda is not available, switching to cpu running!
Min cell depth: 500.0, Max cell depth: 68659.0
INFO     No batch_key inputted, assuming all cells are same batch
INFO     No label_key inputted, assuming all cells have same label
INFO     Using data from adata.X
INFO     Computing library size prior per batch
INFO     Successfully registered anndata object containing 7333 cells, 32738 vars, 1 batches, 1 labels, and 0 proteins. Also registered 0 extra categorical covariates and 0 extra continuous covariates.
INFO     Please do not further modify adata until model is trained.
GPU available: False, used: False
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/tqdm/std.py:538: TqdmWarning: clamping frac to range [0, 1]
  colour=colour)
Epoch 1/2000: -0%| | -1/2000 [00:00<?, ?it/s]
/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/pytorch_lightning/trainer/connectors/logger_connector/result.py:398: LightningDeprecationWarning: One of the returned values {'reconstruction_loss_sum', 'kl_global', 'kl_local_sum', 'n_obs'} has a `grad_fn`. We will detach it automatically but this behaviour will change in v1.6. Please detach it manually: `return {'loss': ..., 'something': something.detach()}`
  f"One of the returned values {set(extra.keys())} has a `grad_fn`. We will detach it automatically"
Traceback (most recent call last):
  File "/home/paulyr2/miniconda/envs/solo/bin/solo", line 33, in <module>
    sys.exit(load_entry_point('solo-sc', 'console_scripts', 'solo')())
  File "/home/paulyr2/solo/solo/solo.py", line 240, in main
    callbacks=scvi_callbacks,
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/scvi/model/base/_training_mixin.py", line 70, in train
    return runner()
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/scvi/model/base/_trainrunner.py", line 75, in __call__
    self.trainer.fit(self.training_plan, train_dl, val_dl)
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/scvi/lightning/_trainer.py", line 131, in fit
    super().fit(*args, **kwargs)
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 553, in fit
    self._run(model)
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 918, in _run
    self._dispatch()
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 986, in _dispatch
    self.accelerator.start_training(self)
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/pytorch_lightning/accelerators/accelerator.py", line 92, in start_training
    self.training_type_plugin.start_training(trainer)
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 161, in start_training
    self._results = trainer.run_stage()
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 996, in run_stage
    return self._run_train()
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 1045, in _run_train
    self.fit_loop.run()
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/pytorch_lightning/loops/base.py", line 111, in run
    self.advance(*args, **kwargs)
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/pytorch_lightning/loops/fit_loop.py", line 200, in advance
    epoch_output = self.epoch_loop.run(train_dataloader)
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/pytorch_lightning/loops/base.py", line 118, in run
    output = self.on_run_end()
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 235, in on_run_end
    self._on_train_epoch_end_hook(processed_outputs)
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 275, in _on_train_epoch_end_hook
    trainer_hook(processed_epoch_output)
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/pytorch_lightning/trainer/callback_hook.py", line 109, in on_train_epoch_end
    callback.on_train_epoch_end(self, self.lightning_module)
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/pytorch_lightning/callbacks/early_stopping.py", line 170, in on_train_epoch_end
    self._run_early_stopping_check(trainer)
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/pytorch_lightning/callbacks/early_stopping.py", line 185, in _run_early_stopping_check
    logs
  File "/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/pytorch_lightning/callbacks/early_stopping.py", line 134, in _validate_condition_metric
    raise RuntimeError(error_msg)
RuntimeError: Early stopping conditioned on metric `reconstruction_loss_validation` which is not available. Pass in or modify your `EarlyStopping` callback to use any of the following: `elbo_train`, `reconstruction_loss_train`, `kl_local_train`, `kl_global_train`
/home/paulyr2/miniconda/envs/solo/lib/python3.6/site-packages/tqdm/std.py:538: TqdmWarning: clamping frac to range [0, 1]
Epoch 1/2000: -0%|

Suggestions?
Thanks!

@njbernstein
Contributor

Hi there

Another user just had this error as well. I'll be trying to track it down today.

Best,
Nick

@njbernstein
Contributor

Hi there

It seems like the most recent version of scvi-tools broke this. Please roll back to:

pip install scvi-tools==0.11.0
pip install pytorch-lightning==1.2.3

I'll be pinning these in requirements.txt shortly.
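For reference, the pins would look something like this in requirements.txt (just a sketch of the versions named above; the file in the repo is the authoritative version once the change lands):

scvi-tools==0.11.0
pytorch-lightning==1.2.3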

@rpauly
Author

rpauly commented Aug 12, 2021

Thanks! I tried to roll back, but I get an error:

pip install pytorch-lightning==1.2.3

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
scvi-tools 0.11.0 requires pytorch-lightning>=1.3, but you have pytorch-lightning 1.2.3 which is incompatible.
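For what it's worth, the standard pip commands below show which versions actually ended up installed and whether pip still flags a conflict (generic pip usage, nothing solo-specific):

pip show scvi-tools pytorch-lightning
pip check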
