
dada2 drops samples with no filtered reads from the table, but not from the denoising stats. #136

Open
dwthomas opened this issue Mar 30, 2021 · 1 comment · Fixed by #139

@dwthomas
Contributor

Bug Description
dada2-paired appears to drop samples that have 0 sequences after filtering (or perhaps after denoising), but keeps samples that reach zero sequences at the merging step or later.
As a result, the affected samples are dropped from the table even though they remain in the denoising stats.

I haven't checked the behavior with dada2 single or dada2 pyro, or with older versions of dada2.

Steps to reproduce the behavior
Run a dataset that includes samples with no reads passing the filtering step. I had a sample with 0 reads and a sample with 6 poor-quality reads; both were dropped.

Expected behavior
The samples should be retained with 0 frequency in the table.

Computation Environment

  • OS: Linux
  • QIIME 2 Release 2021.2

Questions

  1. Is this a qiime2 issue or a dada2 issue?

Comments

  1. Ideally blanks should stay blank, so discarding them in dada2 means the user has to actively notice that they are missing.
  2. A trivial fix is something like:
    for i in set(denoising_stats.index) - set(table.index): table.loc[i] = 0
    though there may be a clearer way of getting the dropped samples.
    I can have a go at implementing that, but I wanted opinions on whether this is something to change in dada2, or here.
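The trivial fix above can also be sketched with pandas' `reindex`, which avoids looping over sample IDs. This is only an illustration, assuming (as in the q2-dada2 plugin) that both the feature table and the denoising stats are pandas DataFrames indexed by sample ID; `restore_dropped_samples` is a hypothetical helper name:

```python
import pandas as pd

def restore_dropped_samples(table: pd.DataFrame,
                            denoising_stats: pd.DataFrame) -> pd.DataFrame:
    """Return `table` with every sample present in `denoising_stats`,
    filling samples dropped by DADA2 with all-zero rows."""
    # reindex() adds any missing sample IDs as new rows; fill_value=0
    # gives them zero counts for every feature.
    return table.reindex(denoising_stats.index, fill_value=0)

# Toy example: s2 (6 poor-quality reads) and s3 (0 reads) were dropped
# from the table but still appear in the denoising stats.
stats = pd.DataFrame({"input": [100, 6, 0]}, index=["s1", "s2", "s3"])
tab = pd.DataFrame({"ASV1": [42]}, index=["s1"])
fixed = restore_dropped_samples(tab, stats)
```

After this, `fixed` contains all three samples, with `s2` and `s3` carrying zero frequency as suggested under "Expected behavior".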

Here is the data I noticed this behavior in: https://unh.box.com/s/pyz250peix6hiyrcye2fnwycidx171sz

@benjjneb
Collaborator

benjjneb commented Apr 1, 2021

Is this a qiime2 issue or a dada2 issue?

It's sort of a dada2 issue, in that we haven't robustly implemented processing of zero-read samples through the whole pipeline, so they need to be removed before entering the post-filtering denoising workflow.

That said, it is probably an easy fix on the Q2 side: just add those zero-read samples back into whatever final table they are still wanted in, as you seem to be suggesting.

3 participants