---
title: "Amplicon Analysis Workflow"
author: 'Author: Linton Freund ([email protected])'
date: "Last update: `r format(Sys.time(), '%d %B, %Y')`"
output:
  html_document:
    code_folding: show
    collapsed: yes
    highlight: pygments
    number_sections: yes
    smooth_scroll: yes
    theme: cosmo
    toc: yes
    toc_depth: 3
    toc_float: yes
fontsize: 15pt
bibliography: mybib.bib
always_allow_html: yes
---
<style type="text/css">
body{
font-size: 13pt;
}
</style>
<!--
Render from R:
rmarkdown::render("Amplicon_Workflow.Rmd", clean=TRUE, output_format="html_document")
Note: if you render as a PDF then try to render a website, change the date format above (see here https://stackoverflow.com/questions/39415294/avoid-yaml-header-change-when-switching-knitr-output-format)
Rendering from the command-line. To render to PDF format, use the argument setting: output_format="pdf_document".
$ Rscript -e "rmarkdown::render('Amplicon_Workflow.Rmd', output_format='html_document', clean=TRUE)"
Add logo:
htmltools::img(src = knitr::image_uri("mylogo.png"),
alt = 'logo',
style = 'position:absolute; top:0; center:0; padding:10px;')
-->
```{r setup, include=FALSE}
#options(tinytex.verbose = TRUE)
knitr::opts_chunk$set(echo = TRUE)
```
# Background
This is a tutorial on how to process your 16S, ITS1, or ITS2 amplicon sequences and determine the taxonomic identity of the ASVs (i.e., amplicon sequence variants, also known as zOTUs, for zero-radius OTUs, or ESVs, for exact sequence variants) in your sequence data.
To create this tutorial, I have assembled scripts I've used to analyze 16S amplicon sequence data provided by Dr. Emma Aronson's lab. The data I am working with to create this workflow comes from a project that examined soil microbial community composition in Mount Saint Helens. The target region was the V4 region within the 16S gene, and sequencing was performed with an Illumina MiSeq (2x300).
This tutorial would not have been possible without [Dr. Benjamin Callahan's](https://callahanlab.cvm.ncsu.edu/) `DADA2` [program](https://github.com/benjjneb/dada2) [@Callahan2016] and [tutorials](https://benjjneb.github.io/dada2/tutorial.html). Additionally, I would like to especially thank [Dr. Mike Lee](https://astrobiomike.github.io/research/) for his guidance, his patience, and his Happy Belly Bioinformatics tutorial, [**Amplicon Analysis**](https://astrobiomike.github.io/amplicon/dada2_workflow_ex#removing-likely-contaminants) [@Lee2019].
**You can find all of the scripts used in this workflow in this [GitHub repository](https://github.com/hlfreund/Amplicon_Sequencing_Worfklow) or you can download it from SourceForge or Zenodo, which is linked in the [About Me](#about-me) section.**
**If you do use this workflow, please cite it using the DOI, which is included in the [About Me](#about-me) section.**
## Considerations before you begin
I was able to analyze these sequences on a High Performance Computing cluster (HPCC) that uses a Slurm scheduler. The minimum amount of total memory I used (not per CPU, but overall) for each step in this workflow (i.e., each step as a separate 'job' via Slurm) was 400GB. Having enough memory is essential to running most of these programs, so please keep this in mind before embarking on this workflow!
These steps are also time consuming depending on your memory constraints, so do not be concerned if this process takes a while. If you plan to run through multiple steps in the workflow in one sitting, then I suggest loading **tmux** before you run your scripts. Here is a handy [tmux cheat sheet](https://tmuxcheatsheet.com/) that I refer to often. For more information on what tmux is and how to utilize it, check this [tmux crash course](https://thoughtbot.com/blog/a-tmux-crash-course) by Josh Clayton.
I also suggest exploring a program called **neovim** (aka nvim), which allows you to use Vim (a text editor) to edit R code and run the code simultaneously. Though nvim is not necessary to run through this workflow, I find that it makes my life a bit easier when running through the `DADA2` portion of the workflow. I will get more into the usage of nvim once we get to the `DADA2` step(s), but for more information please view the [Nvim-R GitHub](https://github.com/jalvesaq/Nvim-R) as well as its [documentation](https://github.com/jamespeapen/Nvim-R/wiki#overview). You can also find a helpful nvim tutorial [here](https://girke.bioinformatics.ucr.edu/GEN242/tutorials/linux/linux/#nvim-r-tmux-essentials) created by Dr. Thomas Girke from UC Riverside.
Additionally, you will need to change your path to each of these programs depending on where they are stored on your computer or HPCC. If you are running these steps locally (which, if you are, then you have one badass computer!), then you can skip the module loading lines in each step -- loading modules is specifically for running these scripts on an HPCC that uses a Slurm Workload Manager.
## Submitting Scripts as Jobs with Slurm
If you are unsure as to how to set up the script for submitting on your HPCC, check the code chunk below. This is the information I use when submitting a job to our Slurm system. Again, this is specifically for a system that uses the Slurm scheduler. For more information on what the arguments mean and how to set up your job submissions, please refer to this handy [cheatsheet](https://slurm.schedmd.com/pdfs/summary.pdf) made by Slurm.
**NOTE**: If you are running these scripts on an HPCC, please load the module you need before running, or add `module load name_of_module` to your script before you call on the program you want to use.
```{bash slurm_job_prep, eval=FALSE}
#!/bin/bash -l
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4 # must match the # of threads if program allows threading (-t #)
##SBATCH --mem-per-cpu=500G # memory per cpu - * if threading, do not let this line run (use ##). Cannot ask for too much memory per cpu!
#SBATCH --mem=500GB # overall memory - if you're threading, keep this line
#SBATCH --time=1-00:00:00 # time requested; this example is 1 day, 0 hrs
#SBATCH --output=name_of_log_file_6.27.21.stdout # name of your log file
#SBATCH [email protected] # your email address
#SBATCH --mail-type=ALL # will send you email updates when job starts and ends (and if it runs successfully or not)
#SBATCH --job-name="Name of Job 1/1/21" # name of your job for Slurm
#SBATCH -p node_name_here # partition node name
```
When I don't know exactly what a program's output will look like, I will run the program via an interactive job on the HPCC. I also suggest running programs interactively if the program requires multiple lines of code to run, and you want to make sure each step has the correct input (whether it be a file, an object, or the output of a previous step in the code). For some more information on interactive jobs in a Slurm system, check out this [blog post](https://yunmingzhang.wordpress.com/2015/06/29/how-to-use-srun-to-get-an-interactive-node/) by Yunming Zhang.
This is how I set up an interactive job on the HPCC (that uses Slurm).
```{bash interactive_job, eval=FALSE}
srun -p node_name_here --mem=500gb --time=1-00:00:00 --pty bash -l
# -p = partition
# --mem = overall memory being requested, not memory per CPU
# --time = overall time requested; 1 day, 0 hrs
# --pty = gives you a pseudo terminal that runs bash
# bash -l = run bash as a login shell (so your usual environment and modules are loaded)
```
## A bash scripting tip for before we start
I wanted to share a bit of code that you will see implemented in every script throughout the tutorial. This little bit of code will help you pull out the sample names from your files, allowing you to easily run through your files while also keeping track of which samples those files belong to.
```{bash get_sample_name, eval=FALSE}
for FILE in path/*.fastq;
do
f=$(basename $FILE)
SAMPLE=${f%.fastq*}
echo $SAMPLE
done
```
Here I am using a for loop to loop through each fastq file in a specific directory. In each iteration of the loop, an `$f` variable is created, which uses the `basename` function to get the file name from the `$FILE` variable. Then `$SAMPLE` is created by using `%` to remove the `.fastq` extension and everything that follows, keeping only the file name (minus the extension). We can then use the `$SAMPLE` variable in place of the file names, which comes in handy for running these scripts over multiple samples at one time. This concept should become clearer as we move through the workflow. If you'd like more information on string substitution (i.e., using `%` to remove parts of a string), please see this helpful [link](https://tldp.org/LDP/abs/html/string-manipulation.html).
# Sample pre-processing
## Demultiplex your samples
When preparing sequencing libraries, we typically multiplex our samples. This means that during library preparation, we've attached barcodes to our sequences that help us trace the sample that these sequences came from. This allows us to pool multiple libraries together in one sequencing run. After sequencing, the sequences are *demultiplexed*, meaning the individual sequences are separated out by sample into individual FASTQ files.
Typically your samples will be returned to you already demultiplexed. However, if your samples are still pooled into one large FASTQ file, do not panic! You can follow the [**demultiplexing tutorial**](https://astrobiomike.github.io/amplicon/demultiplexing) by [Dr. Mike Lee](https://astrobiomike.github.io/research/) which utilizes the [`sabre` tool](https://github.com/najoshi/sabre). Or, you can use `bcl2fastq2` by Illumina (more information [here](https://support.illumina.com/content/dam/illumina-support/documents/documentation/software_documentation/bcl2fastq/bcl2fastq2-v2-20-software-guide-15051736-03.pdf)).
## Sequence Quality and Where to Trim
### Check the quality of your sequences with `FastQC`
It's always a good idea to check the quality of your sequences before you start your analysis, regardless of the type of sequences they are (metagenomes, RNA-seq data, etc). `FastQC` [@Andrews] provides a comprehensive report on the quality of your sequences and is helpful for the following: identifying primers or adapters still attached to your sequences; determining the quality of your reverse reads; etc. You can also use the FastQC reports to determine if you should attempt to merge your forward and reverse reads, or just proceed with only the forward reads.
```{bash fastqc_check, eval=FALSE}
# my 16S and ITS2 sequences are in separate directories, which is why I loop through them separately below
# create a directory to store your FastQC results in
if [[ ! -d ./FastQC_Results ]]; then
mkdir FastQC_Results
fi
# create directory within results directory for 16S FastQC Results
if [[ ! -d ./FastQC_Results/16S_FastQC ]]; then
mkdir FastQC_Results/16S_FastQC
fi
# create directory within results directory for ITS2 (or ITS1) FastQC Results
if [[ ! -d ./FastQC_Results/ITS2_FastQC ]]; then
mkdir FastQC_Results/ITS2_FastQC
fi
# loop through each 16S fastq.gz file and run through FastQC
for FILE in 16S_Seqs/*.fastq.gz;
do
# extract out just the sample name from the file name
f=$(basename $FILE)
SAMPLE=${f%.fastq*} #string manipulation to drop .fastq and everything that comes after
fastqc $FILE --outdir=./FastQC_Results/16S_FastQC
done
# loop through each ITS2 fastq.gz file and run through FastQC
for FILE in ITS2_Seqs/*.fastq.gz;
do
f=$(basename $FILE)
SAMPLE=${f%.fastq*}
fastqc $FILE --outdir=./FastQC_Results/ITS2_FastQC
done
```
FastQC will return a report assessing the per base and per sequence quality of your sequences, as well as the GC and N (i.e., unidentified base) content across your sequences, the distribution of your sequence lengths, and whether or not adapters are still attached to your sequences. The second tab of the report details the per base sequence quality across all of your sequences. The per base quality score (**Q score**), also known as a **Phred score**, is the estimated probability that the base call is wrong. The following equation is used for calculating the Q score:
$$
Q = -10\log_{10}(E)
$$
Here, E is the estimated probability of the base call being wrong. The higher the Q score, the smaller the probability of a base call error. A quality score of 30 (Q30) means that the probability of an incorrect base call is 1 in 1000, and that the base call accuracy (1 - probability of incorrect base call) is 99.9%. For more information on quality scores, please see this info from [Illumina](https://www.illumina.com/science/technology/next-generation-sequencing/plan-experiments/quality-scores.html).
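To make this relationship concrete, here is a small R snippet (an illustration, not part of the original workflow) that rearranges the equation above to convert a few Q scores into their corresponding error probabilities:
```{r q_score_example, eval=FALSE}
# Rearranging Q = -10*log10(E) gives E = 10^(-Q/10)
q_scores   <- c(20, 30, 40)
error_prob <- 10^(-q_scores / 10)   # probability that a base call is wrong
accuracy   <- 1 - error_prob        # base call accuracy
data.frame(Q = q_scores, error_prob, accuracy)
# Q30 -> error_prob = 0.001 (1 in 1000), accuracy = 99.9%
```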
Below is an example of the "per base sequence quality" portion of the report. This portion of the report helps me to determine where I should trim my sequences as I move forward with the analysis. It can also give you a sense of whether there was an error in your sequencing run. For example, if the average quality score (i.e., the blue line in the report) across all of the bases dips below 30 for half of the sequence length in all of my samples, that could indicate that there was an error with the sequencing run itself.
<center>
![](amplicon_workflow/fastqc_base_seq_qual_plot.png)
</center>
<div align="center">Figure 1: Per Base Quality Scores from FastQC Report</div></br>
Another useful piece of the FastQC report is the adapter content tab, which is the very last tab in the report. This portion of the report tells us the percentage of reads that have adapter sequences at specific base positions along the reads. The following snapshot from a FastQC report shows that the Small RNA 3' adapter sequence is found in ~2% of the sequences starting at around the 160th base. We can use this information to then decide exactly which adapter sequences to cut from our samples in the trimming step.
<center>
![](amplicon_workflow/fastqc_adapter_content.png)
</center>
<div align="center">Figure 2: Frequency of Adapter Sequences from FastQC Report</div></br>
For more on how to interpret FastQC reports, please check out this helpful [FastQC tutorial](https://rtsf.natsci.msu.edu/genomics/tech-notes/fastqc-tutorial-and-faq/) from Michigan State University.
### Expected Error Filtering of Sequences with `eestats`
The `eestats2` program [@Edgar2015] creates a report detailing the percentage of reads that will pass an expected error filter when the reads are truncated to different lengths. Specifically, the program determines how many reads at each length (i.e., 50 bp, 100 bp, 150 bp, etc.) have good enough quality to pass the three default maximum expected error thresholds: 0.5, 1.0, and 2.0 expected errors per read.
Before you run the `eestats` program, be sure to *gunzip* (aka decompress) your fastq.gz files! You can do that by running the following command: `gunzip /path/to/*.fastq.gz`.
```{bash eestats_check, eval=FALSE}
# Create directory to store eestats results
if [[ ! -d ./EEstats_Results ]]; then
mkdir EEstats_Results
fi
# Create specific directory within eestats results for 16S eestats results
if [[ ! -d ./EEstats_Results/16S_EEstats ]]; then
mkdir EEstats_Results/16S_EEstats
fi
# Create specific directory within eestats results for ITS2 (or ITS1) eestats results
if [[ ! -d ./EEstats_Results/ITS2_EEstats ]]; then
mkdir EEstats_Results/ITS2_EEstats
fi
# Run eestats2 in loop with 16S fastq files
for FILE in 16S_Seqs/*.fastq;
do
f=$(basename $FILE)
SAMPLE=${f%.fastq*}
usearch -fastq_eestats2 $FILE -output ${SAMPLE}_eestats2.txt
# move results to EEstats_Results directory
mv ${SAMPLE}_eestats2.txt EEstats_Results/16S_EEstats
done
# Run eestats2 in loop with ITS2 fastq files
for FILE in ITS2_Seqs/*.fastq;
do
f=$(basename $FILE)
SAMPLE=${f%.fastq*}
usearch -fastq_eestats2 $FILE -output ${SAMPLE}_eestats2.txt
# move results to EEstats_Results directory
mv ${SAMPLE}_eestats2.txt EEstats_Results/ITS2_EEstats
done
```
The following image shows a sample eestats2 report. For this specific sample, the ideal trimming length would be around 250 basepairs. This is because, at the expected error threshold of 1.0, more than 80.8% of the sequences pass the filter. Though the 300 bp length also allows high retention of reads, we know that the per base quality of this sample drops as we approach the 300 bp position. Thus, trimming these sequences to 250 bp seems ideal moving forward.
<center>
![](amplicon_workflow/eestats_example_report.png)
</center>
<div align="center">Figure 3: eestats Report Example</div></br>
For more information on the `eestats2` program by [USEARCH](https://www.drive5.com/usearch/), please read the documentation [here](https://www.drive5.com/usearch/manual/cmd_fastq_eestats2.html).
## Decontaminate & Trim sequences and cut adapters, primers, etc
There are plenty of programs out there that can be used for trimming, and the following three are the most popular for amplicon analyses: [`cutadapt`](https://cutadapt.readthedocs.io/en/stable/), [`trimmomatic`](http://www.usadellab.org/cms/?page=trimmomatic), and [`bbduk`](https://jgi.doe.gov/data-and-tools/bbtools/bb-tools-user-guide/bbduk-guide/) [@Bushnell]. All of these programs are reputable, but I personally like `bbduk` and will use this tool for trimming and adapter removal.
Before I trim my sequences, I refer to the FastQC reports to find out exactly which adapters I should remove from my sequences. For example, when looking at the adapter content portion of the FastQC report above, I can see that the Nextera Transposase Sequence is still present in that particular sample. Thankfully Illumina shares their adapter sequences on their [website](https://support-docs.illumina.com/SHARE/AdapterSeq/Content/SHARE/AdapterSeq/AdapterSequencesIntro.htm), allowing us to easily find common adapters in sequences, like the [Nextera Transposase Sequence](https://support-docs.illumina.com/SHARE/AdapterSeq/Content/SHARE/AdapterSeq/Nextera/SequencesNXTILMPrepAndPCR.htm) for example.
I also know that, for the sequences I am analyzing, the PCR primers are still attached (FastQC may flag these primers in your report's Overrepresented Sequences tab, but not necessarily identify their origin). I can either remove these primer sequences by supplying the actual sequences to the `literal=` flag, or I can trim from the right (`ftr=`) and/or the left (`ftl=`) of the reads if I know exactly how long the primer sequences were.
It is recommended to check the overrepresented sequences from the FastQC report to see if there are contaminating sequences present in your data. I suggest taking the most frequent overrepresented sequence and running it through [BLASTn](https://blast.ncbi.nlm.nih.gov/Blast.cgi?PROGRAM=blastn&BLAST_SPEC=GeoBlast&PAGE_TYPE=BlastSearch) if the source of this overrepresented sequence says "No Hit" (meaning that FastQC cannot attribute this sequence to its list of adapter sequences). If the sequence comes up as a contaminant (i.e., a different gene than the amplicon you're looking at) or adapter/primer of some kind, you can add this to the `literal=` flag in `bbduk` to remove the contaminant.
In addition to removing adapter and primer sequences with the `literal=` flag, I also include a reference file provided by `bbduk` (given in the `ref=` flag) that contains the Illumina TruSeq adapters. The sequences in the reference file, in addition to the given adapters and primers, will be removed from the reads. `bbduk` works by matching kmers (i.e., subsequences of a specific length k) from your reads against kmers built from the reference sequences. The longer the kmer, the higher the specificity -- but there is a limit to this, since the chance that very long kmers are shared across multiple reads is low. If your kmer length is too short, you could accidentally retain sequences that are adapters, primers, etc. (this is why using the literal sequence flag together with the adapter reference file is helpful in `bbduk`).
Below the shell script is a description of all of the flags used by `bbduk` and exactly what they mean. For more information on the `bbduk` flags, please see the `bbduk` [documentation](https://jgi.doe.gov/data-and-tools/bbtools/bb-tools-user-guide/bbduk-guide/).
```{bash bbduk_trim, eval=FALSE}
path=/path/to/sequences/here # replace with the path to your files
# my sequence files are in $path/16S_Seqs/ -- see for loop below
if [[ ! -d ./Trimmed_Seqs ]]; then # creating directory to store trimmed sequences in
mkdir Trimmed_Seqs
fi
if [[ ! -d ./Trimmed_Seqs/16S_Trimmed ]]; then # creating directory for specifically trimmed 16S sequences
mkdir Trimmed_Seqs/16S_Trimmed
fi
for i in ${path}/16S_Seqs/*_R1.fastq;
do
f=$(basename $i)
SAMPLE=${f%_R*}
bbduk.sh -Xmx10g in1=${path}/16S_Seqs/${SAMPLE}_R1.fastq in2=${path}/16S_Seqs/${SAMPLE}_R2.fastq out1=${path}/Trimmed_Seqs/16S_Trimmed/${SAMPLE}_R1_clean.fastq out2=${path}/Trimmed_Seqs/16S_Trimmed/${SAMPLE}_R2_clean.fastq literal=TCGTCGGCAGCGTCAGATGTGTATAAGAGACAG,GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAG ref=/bigdata/aronsonlab/shared/bbmap_resources/adapters.fa rcomp=t ktrim=r k=23 maq=10 minlength=200 mink=13 hdist=1 tpe tbo
done
# ref ---> file provided by bbduk that holds collection of Illumina TruSeq adapters
# literal=(sequence here) ---> literal adapter sequences to remove; "N" represents any base -- in this case, they are indexes within the adapters
# rcomp=t ---> Rcomp looks for kmers and their reverse-complements, rather than just forward kmer, if set to true
# ktrim=r ---> “ktrim=r” is for right-trimming (3′ adapters)
# k=23 ---> look for kmer that is 23 bp long
# mink=13 ---> in addition to kmers of length k, also look for shorter kmers (from 23 down to 13 in this case) at the read ends
# maq=10 ---> This will discard reads with average quality below 10
# hdist=1 ---> hamming distance of 1
# mlf=50 ---> (minlengthfraction=50) would discard reads under 50% of their original length after trimming
# trimq=10 ---> quality-trim to Q10 using the Phred algorithm, which is more accurate than naive trimming.
# qtrim=r ---> means it will quality trim the right side only
# tpe ---> which specifies to trim both reads to the same length
# tbo ---> which specifies to also trim adapters based on pair overlap detection using BBMerge (which does not require known adapter sequences)
# mm ----> Maskmiddle ignores the middle base of a kmer, can turn off with mm=f
```
To be extra cautious and ensure that the trimming step was successful, I will run the trimmed sequences through FastQC and compare the reports. If the per base and per sequence qualities have improved and/or the adapters are absent, then I will move forward with the workflow. However, if I am still not happy with the quality of the trimmed reads, I will run the *trimmed* reads through `bbduk` again in hopes of removing persistent, unwanted sequences. I will also check the overrepresented sequences and their frequencies again, and run the most frequent overrepresented sequence(s) through BLASTn.
# ASV Assignment with `DADA2`
All of the steps in this portion of the workflow (excluding the tmux and nvim code chunks) have been adapted from Dr. Callahan's `DADA2` [tutorial](https://benjjneb.github.io/dada2/tutorial.html) and Dr. Lee's amplicon [tutorial](https://astrobiomike.github.io/amplicon/dada2_workflow_ex).
To prepare for running `DADA2`, we want to separate our sequence files by locus and region. For example, you do not want to analyze your 16S and ITS2 sequences together in `DADA2` -- combining loci, or even different regions of the same locus, can interfere with the `DADA2` algorithm. Even if you have one set of 16S sequences covering just the V3 region and another set covering the V3-V4 region, you would want to run these regions separately through the `DADA2` pipeline. The reason for this will become clearer when we get to the filtering and trimming step and the error rate estimation step.
To create separate directories for your sequence data, I first ensure that the file names include the amplicon that was sequenced for that particular sample (e.g., the 16S V4 data for Sample1 is in the file `Sample1_16S.V4_R1_001.fastq`). Then I would run the following line of code.
```{bash move_files, eval=FALSE}
# make sure you are in the correct directory before doing this
mv *_16S.V4_* 16S.V4_Seqs
# format of move command: mv file_name directory_name
```
Here I am using a `*`, which is a special character that can represent any character or set of characters. In this case, I am telling the `mv` command to move any files that have the `_16S.V4_` pattern anywhere in the file name to a directory called `16S.V4_Seqs`. After running this command, I make sure that my script containing the following `DADA2` R code is in the directory with the specific files I want to analyze. My `DADA2` R script is called `DADA2_tutorial_16S_pipeline.R`, which you will see me reference in a couple of code chunks.
## Run Interactive Job + `tmux` on HPCC
Personally, I like to run through `DADA2` via an interactive job on our HPCC. This allows us to run scripts line by line and check the output, rather than submitting a job to run on the cluster without our supervision. Basically, this is an easy way to constantly check our progress and (ideally) catch errors as soon as they happen. Again, your HPCC must use Slurm to run an interactive job in this manner.
```{bash start_interactive_job, eval=FALSE}
srun --partition=node_name_here --mem=400gb --cpus-per-task 4 --ntasks 1 --time 08:00:00 --pty bash -l
# --cpus-per-task and --ntasks are not necessary
# --cpus-per-task is needed for multithreading
```
Once the interactive job is running, we can use **tmux** and **nvim** to start running through the `DADA2` R script.
```{bash tmux_nvim_start, eval=FALSE}
tmux new -s mysession # start new tmux session named mysession; do this if you have not started running tmux already
nvim DADA2_tutorial_16S_pipeline.R # load R script using nvim
```
<center>
![](amplicon_workflow/nvim_openR.png)
</center>
<div align="center">Figure 4a: Neovim Display of R script</div></br>
Almost immediately after the R script opens in nvim, I type `\rf`. This will open another window showing your terminal. You can toggle the horizontal versus vertical alignment of the windows by typing `Ctrl-w shift-H` for a horizontal alignment or `Ctrl-w shift-V` for a vertical alignment. Below is what the screen should look like after typing `\rf` followed by `Ctrl-w shift-H`. You can see the R script is open in the left window, and my terminal is open in the right window.
<center>
![](amplicon_workflow/nvim_horizontal.png)
</center>
<div align="center">Figure 4b: View of Neovim Display after typing `\rf`</div></br>
Now that we are in nvim, all you need to do to run a line of code is to just hit the `space` bar! You can also toggle between windows using `Ctrl-w w`, edit or type code by pressing `i` to insert code, and leaving the editing mode by pressing `esc`. To quit and save changes to your R file, just type `:wq`, or to quit without saving changes to your file, just type `:q!`. When using nvim, I keep Dr. Girke's handy [tmux/nvim tutorial](https://girke.bioinformatics.ucr.edu/GEN242/tutorials/linux/linux/#basic-usage-of-nvim-r-tmux) open as a reference just in case.
### Load the path & FASTQ files
We can start by loading the libraries we need as well as the path to the sequences you want to analyze. In this example I will be analyzing 16S V3-V4 sequences, so I set the path object to be the path to those specific sequences.
```{r load_packages_path, eval=FALSE}
getwd() # double check that we are in the correct directory, where our trimmed sequences are stored.
packageVersion("dada2") # see which version of DADA2 you have installed
suppressPackageStartupMessages({ # load packages quietly
library(dada2)
library(tidyr)
library(ggpubr)
library(decontam)
})
path <- "/path/to/fastq/files" # CHANGE ME to the directory containing the fastq files after trimming
list.files(path)
## Read in sample names
fnFs <- sort(list.files(path, pattern="_R1_clean.fastq", full.names = TRUE))
fnFs # sanity check to see what the file names are
fnRs <- sort(list.files(path, pattern="_R2_clean.fastq", full.names = TRUE)); save.image(file = "mydada_16S.V4.Rdata") # saves all objects in global env.; runs after portion of code before ";"
# Extract sample names, assuming filenames have format: SAMPLENAME_XXX.fastq
sample.names <- sapply(strsplit(basename(fnFs), "_R1"), `[`, 1) #pattern where you want to split the name; i.e., at the _R1 level
## want to split by R1 so that you do not get duplicate sample names for each R1/R2 paired-end sequences
save.image(file = "mydada_16S.V4.Rdata") # save global env.
sample.names # sanity check
```
If you have already started the `DADA2` workflow and want to pick up from where you left off, then you can run this next code chunk to load everything that was in your global environment and saved to an .Rdata object (I will explain this code when we save our first .Rdata file).
```{r load_data, eval=FALSE}
## If you are picking up where you left off, load your mydada_16S.V4.Rdata file now
load("/path/to/fastq/files/mydada_16S.V4.Rdata")
```
## Check sequence quality
Though we have already done this with `FastQC`, there is a step here to check the quality of our forward and reverse reads. These per base sequence quality reports look nicer than the `FastQC` output and tell you the total number of reads in each sample. We use `ggsave()` (from `ggplot2`, which loads along with the `ggpubr` [package](https://rpkgs.datanovia.com/ggpubr/index.html) [@Kassambara2020]) to save these reports as high quality PDFs.
```{r check_seq_quality, eval=FALSE}
plot1<-plotQualityProfile(fnFs[1:2]) # check quality of Forward reads (2 samples)
plot2<-plotQualityProfile(fnRs[1:2]) # check quality of Reverse reads (2 samples)
ggsave(plot1,filename = "16S_pretrim_DADA2_F_quality.pdf", width=15, height=12, dpi=600)
ggsave(plot2,filename = "16S_pretrim_DADA2_R_quality.pdf", width=15, height=12, dpi=600)
```
<center>
![](amplicon_workflow/16S_pretrim_DADA2_F_quality.png)
</center>
<div align="center">Figure 5a: Per Base Sequence Quality Reports for 2 Samples (R1 Only)</div></br>
<center>
![](amplicon_workflow/16S_pretrim_DADA2_R_quality.png)
</center>
<div align="center">Figure 5b: Per Base Sequence Quality Reports for 2 Samples (R2 Only)</div></br>
## Filter and Trim
Now that we've set up our file paths and checked the quality of our sequences, we can set them up for the `DADA2` filter and trim step. First, we create objects that will hold the file names of filtered sequences based on the sample names we have provided. Then the `filterAndTrim` command will filter the reads based upon the following: read quality, read length, the number of Ns (i.e., unknown bases) in a read, the maximum number of expected errors after truncating the reads, and whether or not reads in your sample match the PhiX genome (i.e., a small virus genome used as a control in Illumina sequencing runs; more information [here](https://dornsife.usc.edu/uscgenomecore/faq/)). The maximum expected errors is calculated by solving for E in the quality score equation (see above). We have also specified here that we would like our output FASTQ files to be compressed, and that we want to multithread this filter and trimming process. Keep in mind that if you are using **Windows** to run your analyses locally, then multithreading is not available for this step.
Some crucial things to consider at this step are your read lengths (i.e., 2x250, 2x300), the locus and region(s) you're sequencing, the per base quality of your sequences, and whether you are using paired-end reads or just forward reads.
For the **ITS1** and **ITS2** regions, their lengths vary so widely that truncating the sequences to a specific length is not recommended. However, for the **16S gene**, its regions (i.e., V1-V9) are a bit more reliable in their average lengths. The **16S V4** region varies between ~250 - 283 nucleotides in length [@Illumina; @Vargas-Albores2017], whereas the **V3** region varies between ~130 to 190 nucleotides [@Vargas-Albores2017]. The **V3-V4** region ranges between ~400 - 470 nucleotides in length. For more information, check out this `DADA2` GitHub [issue](https://github.com/benjjneb/dada2/issues/1033) as well as @Rausch2019 and @Bukin2019.
Forward and reverse reads are merged if they have at least a *12 base pair* overlap. If you are using paired-end reads, then your merged read lengths (considering the 12 nucleotide overlap) need to total up to these region lengths. For example, let's say you're truncating your 16S V3-V4 forward reads to 250 base pairs (bp) long and your reverse reads to 160 bp long. If your reads are merged, the total length will be 250 + (160-12) = 398 bp long. This total read length of 398 bp would be a decent minimum read length considering that the range of the 16S V3-V4 region is ~400 - 470 bp.
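As a quick back-of-the-envelope check (an illustration, not part of the original workflow), you can compute the expected merged length from your chosen truncation lengths before committing to them:
```{r merged_length_check, eval=FALSE}
# Hypothetical truncation lengths for forward (R1) and reverse (R2) reads
trunc_F     <- 250
trunc_R     <- 160
min_overlap <- 12                         # DADA2's minimum overlap for merging
merged_length <- trunc_F + trunc_R - min_overlap
merged_length                             # 398 bp; compare to your target region (~400 - 470 bp for 16S V3-V4)
```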
Lastly, when setting your expected errors per forward and reverse reads (`maxEE=c(R1,R2)`), it is important to consider the per base sequence quality of your reads. Because reverse reads typically have lower per base sequence quality than your forward reads, you may want to relax the expected errors for your reverse reads.
If too few reads are surviving this step, then consider changing your expected errors per read parameter or adjusting the `truncLen` of your reads. Referring to your FastQC and eestats2 reports may provide even more clarity on how you want to define these parameters. For more information on the `filterAndTrim` function, please view this [documentation](https://rdrr.io/github/benjjneb/dada2/man/filterAndTrim.html).
```{r filter_trim, eval=FALSE}
path # double check that your path is correct
# Create objects that will hold filtered file names in directory called "Filtered"
## these files will be created in the filter + trim step w/ filterAndTrim command
filtFs <- file.path(path, "Filtered", paste0(sample.names, "_F_filtered.fastq.gz"))
filtRs <- file.path(path, "Filtered", paste0(sample.names, "_R_filtered.fastq.gz"))
# giving these file name elements the names of the samples
names(filtFs) <- sample.names
names(filtRs) <- sample.names; save.image(file = "mydada_16S.V4.Rdata")
filtFs # let's see what this object looks like
# Filter & Trim!
out <- filterAndTrim(fnFs, filtFs, fnRs, filtRs, truncLen=c(250,235),
maxN=0, maxEE=c(2,3), truncQ=2, rm.phix=TRUE,
compress=TRUE, multithread=TRUE); save.image(file = "mydada_16S.V4.Rdata") # On Windows set multithread=FALSE
# filterAndTrim notes:
## The maxEE parameter sets the maximum number of “expected errors” allowed in a read, which is a better filter than simply averaging quality scores.
## Standard filtering parameters: maxN=0 (DADA2 requires no Ns), truncQ=2, rm.phix=TRUE and maxEE=2.
# truncLen=c(250,235) -- trim F reads to 250 bp, trim R reads to 235 bp (match the values used in the command above)
## Notes for trunc length of 2x300 PE reads: https://github.com/benjjneb/dada2/issues/236
head(out)
# * if you are only using F reads, give truncLen a single value (e.g., truncLen=240) instead of a pair like truncLen=c(240,160) [for PE reads]
# sometimes there is a trimLeft=15 argument here, but I removed this because I already trimmed my sequences with bbduk
```
### Learn the Error Rates
Dr. Callahan developed an algorithm for a parametric error model that can use both inference and estimations to determine the error rates for each sample. Here is the excerpt of Dr. Callahan describing this function in his `DADA2` [tutorial](https://benjjneb.github.io/dada2/tutorial.html).
> The DADA2 algorithm makes use of a parametric error model (`err`) and every amplicon dataset has a different set of error rates. The learnErrors method learns this error model from the data, by alternating estimation of the error rates and inference of sample composition until they converge on a jointly consistent solution. As in many machine-learning problems, the algorithm must begin with an initial guess, for which the maximum possible error rates in this data are used (the error rates if only the most abundant sequence is correct and all the rest are errors).
```{r learn_plot_errors, eval=FALSE}
errF <- learnErrors(filtFs, multithread=TRUE); save.image(file = "mydada_16S.V4.Rdata")
errR <- learnErrors(filtRs, multithread=TRUE); save.image(file = "mydada_16S.V4.Rdata")
# The learnErrors method learns this error model from the data by alternating estimation of the error rates and inference of sample composition until they converge on a jointly consistent solution.
# As in many machine-learning (ML) problems, the algorithm must begin with an initial guess, for which the maximum possible error rates in this data are used (the error rates if only the most abundant sequence is correct and all the rest are errors)
plot_error<-plotErrors(errF, nominalQ=TRUE)## sanity check by visualizing estimated error rates -- should see error rates drop w/ increased quality
ggsave(plot_error,filename = "16S_errormodel_DADA2.pdf", width=15, height=15, dpi=600)
```
<center>
![](amplicon_workflow/16S_errormodel_DADA2.png)
</center>
<div align="center">Figure 6: DADA2 Estimated Error Rates</div></br>
Once we have constructed the error model from the reads, we can plot the observed frequency of each possible base substitution (i.e., A2A indicates an A being read as an A) as a function of the consensus quality score at that position in the read. The individual points in black represent the observed error rates for each consensus quality score. The black line shows the estimated error rates after convergence of the machine-learning algorithm, and the red line shows the error rates expected under the nominal definition of the Q-score. One thing we notice is that as the consensus quality score increases, the error rates (black lines) decrease, which is expected.
If it seems like we are doing a lot of sanity checks throughout this workflow, it's because we are! This process can take a while and require some troubleshooting, so it's good to constantly check your work as you make your way through the workflow. Coding without sanity checks is never recommended.
### Identify ASVs in Reads
Before we run the Divisive Amplicon Denoising Algorithm (`dada()`), we have to remove the files from our `filtFs/filtRs` objects that were dropped in the filtering step. The algorithm will not work if we include file names of files that do not actually exist in the **Filtered** directory.
Now it's time to use the `dada` algorithm to infer our amplicon sequence variants (ASVs) from our sequences. For more information on how the Divisive Amplicon Denoising Algorithm works, please see @Callahan2016 and @Rosen2012. To increase the signal of sequence variants with very low abundance across samples, you can choose to pool (`pool=TRUE`) or pseudo-pool (`pool="pseudo"`) your sample sequences together.
The output of the algorithm will be a `dada-class` object, containing the number of ASVs inferred out of the number of total input sequences.
```{r dada_time, eval=FALSE}
filtFs <- filtFs[file.exists(filtFs)] # removes files that were not included in output because 0 reads passed filter step
filtRs <- filtRs[file.exists(filtRs)]
dadaFs <- dada(filtFs, err=errF, multithread=TRUE, pool=TRUE); save.image(file = "mydada_16S.V4.Rdata") # pseudo pooling is computationally more efficient but similar in results to pooling; pool = True will pool samples together before sample inference
dadaRs <- dada(filtRs, err=errR, multithread=TRUE, pool=TRUE); save.image(file = "mydada_16S.V4.Rdata")
dadaFs[1] # Returns first section of dada-class object {one sample}
```
The wonderful thing about ASVs is that, because they are resolved exactly (down to single-nucleotide differences) rather than clustered at an arbitrary similarity threshold, they are true representatives of biological sequences and thus directly comparable across workflows [@Prodan2020; @Callahan2017]. I highly recommend reading these papers for more information on the benefits of using ASVs/ESVs/zOTUs.
## Merge Forward + Reverse Reads
At this point in the workflow, we are now going to merge our denoised Forward and Reverse reads to get our contiguous sequences (i.e., contigs). Sequences will be merged if they share at least 12 nucleotides. These sequences must be identical to each other in these overlapping regions or else they will not be merged.
```{r merge_reads, eval=FALSE}
mergers <- mergePairs(dadaFs, filtFs, dadaRs, filtRs, verbose=TRUE); save.image(file = "mydada_16S.V4.Rdata")
head(mergers[[1]])
```
The `mergers` object is a `data.frame` containing the merged sequence, its abundance, and some statistics about the sequences themselves. If most of your reads do not merge, then you should revisit the filter and trimming step. It could be that you cut too much off of your sequencing reads, making it more difficult to successfully merge your reads.
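As a rough check (an illustrative sketch, not part of the original workflow), you can compare how many denoised reads made it into merged contigs for each sample before moving on:
```{r merge_rate_check, eval=FALSE}
# Fraction of denoised forward reads that ended up in merged contigs, per sample
merge_rate <- sapply(seq_along(mergers), function(i) {
  sum(mergers[[i]]$abundance) / sum(getUniques(dadaFs[[i]]))
})
names(merge_rate) <- names(mergers)
merge_rate   # very low values suggest revisiting your truncation lengths in filterAndTrim
```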
## Create Sequence Table & Remove Chimeras
Let's make an ASV table (similar to an OTU table but using ASVs), which will have our samples as rows and our ASVs as columns.
Using this table, we get a sense of how long our ASVs are and the distribution of these ASV lengths. We can also determine the percentage of reads that fall within our desired range of region lengths.
```{r make_seq_table, eval=FALSE}
seqtab <- makeSequenceTable(mergers); save.image(file = "mydada_16S.V4.Rdata")
dim(seqtab)
table(nchar(getSequences(seqtab)))
# we can filter out ASVs that are not within our target range of lengths
seqtab2 <- seqtab[,nchar(colnames(seqtab)) %in% 250:290] # here looking at ASVs that are between 250 - 290 nucleotides long
dim(seqtab2)
table(nchar(getSequences(seqtab2)))
# how many reads fall within our desired length range
sum(seqtab2)/sum(seqtab)
# Look at merged sequences in a plot -- see their distribution and frequency of sequences of certain length
## x axis - total number of reads per sample; y axis - density of samples w/ specific # of total reads
compare_reads_plot1 = ggdensity(rowSums(seqtab), fill = "blue4", alpha = 0.7); save.image(file = "mydada_16S.V4.Rdata")
compare_reads_plot2 = ggdensity(rowSums(seqtab2), fill = "red4", alpha = 0.7)
comp_plots<-ggarrange(compare_reads_plot1, compare_reads_plot2,labels=c("All Reads", "Reads of Desired Length"),ncol=1, nrow=2)
ggsave(comp_plots,filename = "16S.V4_compare_total_reads.pdf", width=10, height=20, dpi=600)
dev.off()
```
## Remove Chimeras
Now we are going to remove all chimeric sequences from our data. **Chimeras** are the result of two or more biological sequences incorrectly joining together, which often happens during PCR. In `DADA2` specifically, chimeras are identified if they can be reconstructed from right and left segments of two or more "parent" sequences. The object `seqtab.nochim` will be our sequence table with the chimeras removed. Most of your reads should not be removed during this step. However, according to the `DADA2` [FAQ page](https://benjjneb.github.io/dada2/faq.html), if more than 25% of your reads are removed, then it is likely that primers are still attached to your sequences. Be sure to remove these primers in the trimming step with either `bbduk`, `cutadapt`, or `trimmomatic` and begin the workflow again.
```{r remove_chimeras, eval=FALSE}
seqtab.nochim <- removeBimeraDenovo(seqtab2, method="consensus", multithread=TRUE, verbose=TRUE); save.image(file = "mydada_16S.V4.Rdata")
# Chimeric sequences are identified if they can be exactly reconstructed by combining a left-segment and a right-segment from two more abundant “parent” sequences
dim(seqtab.nochim)
dim(seqtab)
dim(seqtab2)
sum(seqtab.nochim)/sum(seqtab) # comparing reads after chimera removal over total reads (after filtering)
sum(seqtab.nochim)/sum(seqtab2) # comparing reads after chimera removal over reads that are our desired length
```
## Track the Reads - Sanity Check
Time for a sanity check to see how many reads we have at this point in our workflow. This is a great place to see if we have lost any reads, and at which steps they were lost -- which can really help us determine if we trimmed our reads to the appropriate length. If a lot of reads are lost, it is recommended to check whether primers and adapters are still attached to your sequences, and to revisit the truncation lengths used in the filter and trimming step.
```{r read_tracking, eval=FALSE}
getN <- function(x) sum(getUniques(x)) # function to get the number of unique sequences per object
track <- cbind(out, sapply(dadaFs, getN), sapply(dadaRs, getN), sapply(mergers, getN), rowSums(seqtab.nochim)); save.image(file = "mydada_16S.V4.Rdata")
# If processing a single sample, remove the sapply calls: e.g. replace sapply(dadaFs, getN) with getN(dadaFs) ; sapply(dadaRs, getN), sapply(mergers, getN),
head(track)
colnames(track) <- c("input", "filtered", "denoisedF", "denoisedR", "merged", "nonchim") # remove whichever labels you didn't include
rownames(track) <- sample.names
head(track)
write.table(track,"16S_tracking_reads_dada2.txt",sep="\t",row.names=TRUE,col.names=TRUE); save.image(file = "mydada_16S.V4.Rdata")
```
## Assign Taxonomy to ASVs
Now it's time to assign taxonomic identities to our ASVs. Dr. Callahan utilizes the [naive Bayesian classifier](https://pubmed.ncbi.nlm.nih.gov/17586664/) in his `assignTaxonomy` function, which is the same classifier used by the [Ribosomal Database Project](https://rdp.cme.msu.edu/) (RDP) for taxonomic assignment. For more information on how this classifier works, please read @Wang2007.
In order to assign the taxonomic IDs to our ASVs, we need to have a reference database FASTA file to use as our known training data. These training data with known references will help the classifier determine which taxa our ASVs belong to. Currently the options for reference databases include the latest versions of the **Silva** database (for 16S), the **Ribosomal Database Project** database (for 16S), and the **UNITE** database (which should be used specifically for ITS sequences). A **Green Genes** database file is also included (for 16S analyses), but Green Genes has not been updated in a long time and thus is not the best choice for our reference training dataset. Dr. Callahan has included these reference database files [here](https://benjjneb.github.io/dada2/training.html).
The `assignTaxonomy()` command has multiple arguments that can be adjusted as described [here](https://rdrr.io/bioc/dada2/man/assignTaxonomy.html). One specific argument is `tryRC`, which is an option to try the reverse complement of your sequences when querying against your database of choice. If `tryRC=TRUE`, then the reverse complement of the query sequence will be used for taxonomic classification of the ASVs if it is a better match to the reference than the forward (or original) sequence. By default `tryRC=FALSE`; however, **if you are analyzing ITS2 sequences and used the 5.8S-Fun/ITS4-Fun primer set created by @Taylor2016**, you must set `tryRC=TRUE`. This is because the "Illumina forward adaptor and barcodes were added to the ITS4-Fun primer rather than the 5.8S-Fun primer to avoid excessive hairpin formation" [@Taylor2016]. That means that your F and R reads are in reverse orientation, and that the reverse complement of these reads should be used when comparing these reads to the UNITE database. Thank you to [Dr. Fabiola Pulido-Chavez](https://scholar.google.com/citations?user=EqvoNK4AAAAJ&hl=en) for sharing this helpful info with me!
```{r assign_taxa_asv, eval=FALSE}
taxa <- assignTaxonomy(seqtab.nochim, "/bigdata/aronsonlab/shared/DADA2_Silva_Files/silva_nr99_v138.1_wSpecies_train_set.fa.gz", multithread=TRUE); save.image(file = "mydada_16S.V4.Rdata")
# ITS2 w/ Taylor et al 2016 5.8S-Fun/ITS4-Fun primers
# taxa <- assignTaxonomy(seqtab.nochim, "/bigdata/aronsonlab/shared/DADA2_Silva_Files/silva_nr99_v138.1_wSpecies_train_set.fa.gz", multithread=TRUE,tryRC=TRUE); save.image(file = "mydada_16S.V4.Rdata")
taxa.print <- taxa # Removing sequence rownames for display only
rownames(taxa.print) <- NULL
taxa.print<-as.data.frame(apply(taxa.print,2, function(x) gsub("[^.]__", "", x))) # remove leading letters and __ with gsub
head(taxa.print)
```
If you are only seeing taxonomic identification at the Phylum level (i.e., the rest of the columns are filled with `NA`s), then this could indicate that we have not trimmed our sequences correctly. For example, let's say we have reads that are 300 bp long (from 2x300 PE sequencing), but we are interested in the 16S V3 region, which ranges from ~130-190 nucleotides in length. If we have not trimmed our sequences down to our desired region length (here 130-190 nucleotides), then our merged reads are no longer reliable, and the classifier will incorrectly identify our ASVs. Remember, the reads need at least a 12 nucleotide overlap to merge -- so if we are not trimming our reads correctly, we could create merged sequences that are not accurate representations of the regions we are trying to identify, which will hurt us in the taxonomic assignment step.
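A quick way to see how far down the ranks your assignments go (again, just an illustrative check, not part of the original workflow) is to tally the proportion of ASVs left unassigned at each rank:
```{r check_taxa_depth, eval=FALSE}
# Proportion of ASVs with no assignment (NA) at each taxonomic rank
colSums(is.na(taxa)) / nrow(taxa)
# If Order/Family/Genus are mostly NA, revisit your trimming and truncation choices
```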
## Save DADA2 Output for future analysis
We have finished the `DADA2` portion of the workflow! We can save the output from `DADA2` as R objects, text files, and tsv files for future import into R.
First we create a vector of the ASV labels called `asv_headers` that we will use to make the ASV IDs easier to read. We then use a for loop to add an "ASV" prefix to our `asv_headers` so that they are easily identifiable by an ASV number instead of just a number to represent each ASV. We then combine `asv_headers` with the sequences themselves to make an object called `asv_fasta`, which now holds our ASV sequences and their respective IDs.
```{r make_asv_headers, eval=FALSE}
# giving our seq headers more manageable names (ASV_1, ASV_2...)
asv_seqs <- colnames(seqtab.nochim)
asv_headers <- vector(dim(seqtab.nochim)[2], mode="character")
head(seqtab.nochim)
head(asv_headers)
for (i in 1:dim(seqtab.nochim)[2]) {
asv_headers[i] <- paste(">ASV", i, sep="_")
}
asv_fasta <- c(rbind(asv_headers, asv_seqs))
```
Now we can save our ASV count table, our ASV taxonomy table, and the ASV sequences themselves as separate files and R objects. I wanted to provide several file options because some people have a preference as to how they import data into R.
```{r save_dada2_output, eval=FALSE}
# making and writing out a fasta file of our final ASV seqs w/ ASV IDs:
write(asv_fasta, "16S_ASVs_dada2.fa") # write fasta file
write.table(asv_fasta,"16S_ASVs_dada2.txt",sep="\t",row.names=TRUE,col.names=TRUE)
# ASV count table:
asv_counts <- t(seqtab.nochim)
row.names(asv_counts) <- sub(">", "", asv_headers)
# For Vegan format: sample IDs as rows, ASVs as columns
asv_tab<-t(asv_counts)
write.table(asv_tab, "16S.V4_ASVs_Table_dada2.tsv", sep="\t", quote=F, col.names=NA)
write.table(asv_tab,"16S.V4_ASVs_Table_dada2.txt",sep="\t",row.names=TRUE,col.names=TRUE)
# For Phyloseq format: ASVs as row IDs, sample IDs as columns
write.table(asv_counts, "16S.V4_ASVs_Counts_dada2.tsv", sep="\t", quote=F, col.names=NA)
write.table(asv_counts,"16S.V4_ASVs_Counts_dada2.txt",sep="\t",row.names=TRUE,col.names=TRUE)
# taxa ID table:
asv_tax <- taxa
row.names(asv_tax) <- sub(">", "", asv_headers)
write.table(asv_tax, "16S.V4_ASVs_Taxonomy_dada2.tsv", sep="\t", quote=F, col.names=NA)
write.table(asv_tax,"16S.V4_ASVs_Taxonomy_dada2.txt",sep="\t",row.names=TRUE,col.names=TRUE)
#### Save all ASV objects as R objects ####
saveRDS(asv_tax, file = "16S.V4_ASVs_Taxonomy_dada2_Robject.rds", ascii = FALSE, version = NULL,
compress = TRUE, refhook = NULL)
saveRDS(asv_tab, file = "16S.V4_ASVs_Counts_dada2_Robject.rds", ascii = FALSE, version = NULL,
compress = TRUE, refhook = NULL)
saveRDS(asv_fasta, file = "16S.V4_ASV_Sequences_dada2_Robject.rds", ascii = FALSE, version = NULL,
compress = TRUE, refhook = NULL)
#### Save everything from global environment into .Rdata file
save.image(file = "mydada_16S.V4.Rdata")
```
Personally, I like using R objects (file extension .rds) in my analyses. In order to import R objects into R, you can run `data.frame(readRDS("path/to/Robject.rds", refhook = NULL))` to create a data frame holding the contents of your R object file.
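For example, here is a minimal sketch (using the file names saved above; adjust the paths to wherever you stored the .rds files) of how you might read these objects back in:
```{r import_rds_example, eval=FALSE}
# Read the saved DADA2 outputs back into R
asv_tab  <- data.frame(readRDS("16S.V4_ASVs_Counts_dada2_Robject.rds", refhook = NULL))    # ASV count table
asv_tax  <- data.frame(readRDS("16S.V4_ASVs_Taxonomy_dada2_Robject.rds", refhook = NULL))  # ASV taxonomy table
asv_seqs <- readRDS("16S.V4_ASV_Sequences_dada2_Robject.rds", refhook = NULL)              # character vector of headers + sequences
```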
# Statistical Analysis
At this point you should have R objects, text files, and/or tsv files containing the following: 1. your ASV sequences in FASTA format, 2. your ASV count table, and 3. your ASV taxonomy table. You should also have some metadata for your samples that will allow for deeper investigation into your microbial data.
## Import and Prepare Data for Analyses
First, before we import any data, let's make sure that we are in the right directory (where our `DADA2` files are stored) and that we have all of the necessary R libraries loaded.
```{r load_libraries, eval=FALSE}
getwd() # use setwd("path/to/files") if you are not in the right directory
suppressPackageStartupMessages({ # load packages quietly
library(phyloseq)
library(ggplot2)
library(vegan)
library(dendextend)
library(ggpubr)
library(scales)
library(grid)
library(ape)
library(plyr)
library(dplyr)
library(readxl)
library(tidyr)
library(reshape)
library(reshape2)
library(shades)
library(microbiome)
library(devtools)
library(decontam)
library(pairwiseAdonis)
library(corrplot)
})
```
Now let's import the `DADA2` output into R for some statistical analyses. We will import our ASV count table, our ASV taxonomic table, and our metadata for this dataset. We are also going to create an object called `colorset1` to contain the color labels for each of our categories. This will help us keep the colors consistent for each category in all of our figures.
```{r import_data, eval=FALSE}
## Import bacterial ASV count data
bac.ASV_counts<-data.frame(readRDS("16S.V4_MSH_ASVs_Counts_dada2_9.20.2021_Robject.rds", refhook = NULL))
dim(bac.ASV_counts)
head(bac.ASV_counts)
colnames(bac.ASV_counts)<-gsub("X1", "1", colnames(bac.ASV_counts)) # remove the "X" prefix R adds to sample names that start with a number, so they match the sample names in the metadata file
bac.ASV_counts$ASV_ID<-rownames(bac.ASV_counts)
head(bac.ASV_counts)
## Import metadata
metadata<-as.data.frame(read_excel("MSH_MappingFile_for_Workflow.xlsx"), header=TRUE)
head(metadata)
#metadata<-na.omit(metadata) # drop NAs from metadata
head(metadata)
metadata$SampleID<-gsub("(\\_.*?)\\_([0-9])","\\1.\\2", metadata$SampleID) # replace second _ with .
rownames(metadata)<-metadata$SampleID
# create color variable(s) to identify variables by colors
## color for Category
colorset1 = melt(c(ClearCutSoil="#D00000",Gopher="#f8961e",NoGopher="#4ea8de",OldGrowth="#283618"))
colorset1$Category<-rownames(colorset1)
colnames(colorset1)[which(names(colorset1) == "value")] <- "Category_col"
colorset1
metadata<-merge(metadata, colorset1, by="Category")
head(metadata)
metadata$Category_col <- as.character(metadata$Category_col)
rownames(metadata)<-metadata$SampleID
## Import ASV taxonomic data
bac.ASV_taxa<-data.frame(readRDS("16S.V4_MSH_ASVs_Taxonomy_dada2_9.20.2021_Robject.rds", refhook = NULL))
head(bac.ASV_taxa)
bac.ASV_taxa[is.na(bac.ASV_taxa)]<- "Unknown" # turn all NAs into "Unknown"
bac.ASV_taxa$Species<-gsub("Unknown", "unknown", bac.ASV_taxa$Species) # change uppercase "Unknown" to lowercase "unknown" for unknown species classifications
head(bac.ASV_taxa)
bac.ASV_taxa$ASV_ID<-rownames(bac.ASV_taxa) # create ASV ID column to use for merging data frames
head(bac.ASV_taxa)
```
With our data imported, we now need to remove any potential contaminants from our ASV table. These are ASVs that were identified in your positive and negative controls. Fortunately we can use the `decontam` package to do this and create new, "clean" ASV count and taxonomy tables [@Davis2018]. Be sure to have a column in your metadata that tells you exactly which samples are controls. For information on how to properly use `decontam`, view the tutorial [here](https://benjjneb.github.io/decontam/vignettes/decontam_intro.html).
**NOTE**: In the particular project we are using for this workflow, there were NO PCR or sequencing controls included. Running the code below will lead to an error. Please use the following section of code as a guide for removing contaminants from your own ASV counts data frame.
```{r remove_contam,eval=FALSE}
## Identify & Remove Contaminants
# Create a df that contains which samples in your data set are positive/negative controls
ControlDF<-metadata[metadata$SampleType=="Control",]
# create a vector for the decontam package that tells us which samples are controls (TRUE) or not (FALSE)
vector_for_decontam<-metadata$Sample_or_Control # use for decontam package
#convert bac.ASV_counts data frame to numeric
bac.ASV_counts[,-length(bac.ASV_counts)] <- as.data.frame(sapply(bac.ASV_counts[,-length(bac.ASV_counts)], as.numeric))
# transpose so that rows are Samples and columns are ASVs
bac.ASV_c2<-t(bac.ASV_counts[,-length(bac.ASV_counts)])
# create a data frame identifying which ASVs are contaminants (TRUE) or not (FALSE)
contam_df <- isContaminant(bac.ASV_c2, neg=vector_for_decontam)
table(contam_df$contaminant) # identify contaminants aka TRUE
# pull out ASV IDs for contaminating ASVs
contam_asvs <- (contam_df[contam_df$contaminant == TRUE, ])
# see which taxa are contaminants
bac.ASV_taxa[row.names(bac.ASV_taxa) %in% row.names(contam_asvs),]
## Create new files that EXCLUDE contaminants!!!
# making new fasta file (if you want)
#contam_indices <- which(asv_fasta %in% paste0(">", contam_asvs))
#dont_want <- sort(c(contam_indices, contam_indices + 1))
#asv_fasta_no_contam <- asv_fasta[- dont_want]
# making new count table
bac.ASV_counts_no.contam <- bac.ASV_counts[!row.names(bac.ASV_counts) %in% row.names(contam_asvs), ] # drop ASVs found in contam_asvs
head(bac.ASV_counts_no.contam)
# making new taxonomy table
bac.ASV_taxa.no.contam <- bac.ASV_taxa[!row.names(bac.ASV_taxa) %in% row.names(contam_asvs), ] # drop ASVs found in contam_asvs
head(bac.ASV_taxa.no.contam)
# Remove ASVs found in Controls from samples (in addition to contaminants previously ID'd)
Control_counts<-bac.ASV_counts_no.contam[,colnames(bac.ASV_counts_no.contam) %in% ControlDF$SampleID] # pull out the ASV counts for the control samples
Control_counts
Control_counts<-Control_counts[which(rowSums(Control_counts) > 0),] # drop ASVs that don't appear in Controls
dim(Control_counts)
head(Control_counts)
bac.ASV_counts_CLEAN<-bac.ASV_counts_no.contam[!bac.ASV_counts_no.contam$ASV_ID %in% row.names(Control_counts),!colnames(bac.ASV_counts_no.contam) %in% colnames(Control_counts)]
bac.ASV_taxa_CLEAN<-bac.ASV_taxa.no.contam[!bac.ASV_taxa.no.contam$ASV_ID %in% row.names(Control_counts),]
# sanity check
colnames(bac.ASV_counts_CLEAN) # check for control sample IDs
## and now writing them out to files
#write(asv_fasta_no_contam, "ASVs-no-contam.fa")
write.table(bac.ASV_counts_CLEAN, "data/EnvMiSeq_W23_16S.V3V4_ASVs_Counts_NoContam.tsv",
sep="\t", quote=F, col.names=NA)
saveRDS(bac.ASV_counts_CLEAN, file = "data/EnvMiSeq_W23_16S.V3V4_ASVs_Counts_NoContam_Robject.rds", ascii = FALSE, version = NULL,
compress = TRUE, refhook = NULL)
write.table(bac.ASV_taxa_CLEAN, "data/EnvMiSeq_W23_16S.V3V4_ASVs_Taxa_NoContam.tsv",
sep="\t", quote=F, col.names=NA)
saveRDS(bac.ASV_taxa_CLEAN, file = "data/EnvMiSeq_W23_16S.V3V4_ASVs_Taxa_NoContam_Robject.rds", ascii = FALSE, version = NULL,
compress = TRUE, refhook = NULL)
```
### Data Formatting, Filtering, and Transformation
Now that we have imported our ASV count table, our ASV taxonomy table, and our metadata, we can start to reformat the actual data objects in R to get them ready for running through the `vegan` suite of tools. First we are going to merge our ASV count table and our ASV taxonomy tables together and filter out some unwanted taxa.
Even though we are analyzing bacterial data, sometimes chloroplast and mitochondrial sequences are attributed to 16S genes. For example, in the Silva database, Chloroplast sequences attributed to Eukaryotes are found within the database's set of Cyanobacteria sequences. Some sequences within this Chloroplast group are actually labeled as bacteria, but they have not been phylogenetically connected to a reference genome. It's important to filter out these eukaryotic sequences before we start any statistical analyses. We are also going to drop all rows with zero counts; if you wish, you could additionally remove "singletons", i.e. ASVs with only 1 count in the entire data set.
**NOTE**: Here we are merging the original ASV counts and ASV taxa data frames. If you followed the `decontam()` step, you should be merging the CLEAN versions of these objects. There is a commented-out line of code in the section below that shows this step.
```{r merge_filter_data, eval=FALSE}
# first we merge the ASV count object and the ASV taxonomy object together by column called "ASV_ID"
# merge CLEAN aka contaminants/controls removed count & taxa tables
#bac.ASV_all<-merge(bac.ASV_counts_CLEAN,bac.ASV_taxa_CLEAN, by="ASV_ID")
bac.ASV_all<-merge(bac.ASV_counts,bac.ASV_taxa, by="ASV_ID")
head(bac.ASV_all)
dim(bac.ASV_all)
bac.ASV_all<-bac.ASV_all[, !duplicated(colnames(bac.ASV_all))] # remove col duplicates
dim(bac.ASV_all)
bac.ASV_dat<-melt(bac.ASV_all)
head(bac.ASV_dat)
# rename columns
colnames(bac.ASV_dat)[which(names(bac.ASV_dat) == "variable")] <- "SampleID"
colnames(bac.ASV_dat)[which(names(bac.ASV_dat) == "value")] <- "Count"
# Drop all rows with zero counts (to also drop singletons, filter on each ASV's total count across samples)
dim(bac.ASV_dat)
bac.ASV_dat<-bac.ASV_dat[which(bac.ASV_dat$Count > 0),]
dim(bac.ASV_dat)
# Drop Unknowns and Eukaryotic hits
bac.ASV_dat<-subset(bac.ASV_dat, Kingdom!="Unknown") ## drop Unknowns from Kingdom
bac.ASV_dat<-subset(bac.ASV_dat, Phylum!="Unknown") ## drop Unknowns from Phylum
head(bac.ASV_dat)
dim(bac.ASV_dat)
# Create ASV count file that is filtered of eukaryotic taxa - for later use just in case
bac.ASV_dat.with.euks<-bac.ASV_dat
colnames(bac.ASV_dat.with.euks)
# Drop chloroplast & mitochondria seqs
bac.ASV_dat<-subset(bac.ASV_dat, Class!="Chloroplast") ## exclude Chloroplast sequences
bac.ASV_dat<-subset(bac.ASV_dat, Order!="Chloroplast") ## exclude Chloroplast sequences
bac.ASV_dat<-subset(bac.ASV_dat, Family!="Mitochondria") ## exclude Mitochondrial sequences just in case
# sanity check that Chloroplast, Mitochondria, and Unknown taxa are really gone
'Chloroplast' %in% bac.ASV_dat$Order # should be FALSE because Chloroplast rows have been removed
'Mitochondria' %in% bac.ASV_dat$Family # should be FALSE because Mitochondria rows have been removed
'Unknown' %in% bac.ASV_dat$Phylum # should be FALSE because Unknown Phyla have been removed
#any(is.na(bac.ASV_dat)) # optional: check for NAs
head(bac.ASV_dat)
```
After dropping unknown or undesired sequences from our combined ASV data frame, it's time to create an ASV table that is properly formatted for the `vegan` package. This ASV table must be a **Samples x Species** matrix, with our Sample IDs as the row names and our ASV IDs as the column names.
**NOTE**: We could have made this ASV table earlier immediately after importing the ASV count data by transposing the table with the `t()` function. However, I want to have an ASV table that excludes taxa I do not want in my data set, like ASVs attributed to Chloroplast sequences or ASVs attributed to unknown Phyla.
```{r create_asv_table, eval=FALSE}
bac.ASV_table<-as.data.frame(dcast(bac.ASV_dat, SampleID~ASV_ID, value.var="Count", fun.aggregate=sum)) ###
head(bac.ASV_table)
rownames(bac.ASV_table)<-bac.ASV_table$SampleID
bac.ASV_table<-subset(bac.ASV_table, select=-c(SampleID))
head(bac.ASV_table)
```
Now we can reformat our metadata to be in the same order (by rows) as our ASV table. **This is a crucial step!** Though it may appear minor, certain functions (such as `adonis2()` for example) will not correctly analyze your data if your metadata and your ASV table are not arranged in the same order by rows. This next step will only work if the two data frames we are reordering have the same number of rows AND the same row names.
```{r reorder_metadata, eval=FALSE}
# double check dimensions of metadata and ASV table
dim(metadata)
dim(bac.ASV_table)
# double check that the rownames exist + match
rownames(metadata)
rownames(bac.ASV_table)
# Find rows in metadata that are not in ASV table, and vice versa --> sanity check
setdiff(rownames(metadata), rownames(bac.ASV_table)) # check rows in metadata not in bac.ASV_table
setdiff(rownames(bac.ASV_table), rownames(metadata)) # check rows in bac.ASV_table not in metadata
# reorder metadata based off of ASV table
metadata=metadata[rownames(bac.ASV_table),]
# here we are reordering our metadata by rows, using the rownames from our ASV table as a guide
# this indexing method will only work if the two dfs have the same # of rows AND the same row names!
# sanity check to see if this indexing step worked
head(metadata)
head(bac.ASV_table)
```
Before we jump into statistically analyzing our sequence data, we will want to standardize our environmental data. You may have multiple environmental variables that you've recorded (i.e., temperature, pH, dissolved oxygen concentration, etc.), all of which could be in different units or vary widely in their relative concentrations, variances, etc. It's important that we scale and center our environmental variables so that we can compare variables of different units/concentrations/etc.
```{r scale_metadata, eval=FALSE}
head(metadata)
meta_scaled<-metadata
meta_scaled[,17:19]<-scale(meta_scaled[,17:19],center=TRUE,scale=TRUE) # only scale the chemical/environmental variables (here columns 17:19); adjust the column indices for your own metadata
head(meta_scaled)
```
Now that all of our files are in R and correctly formatted, we can start some statistical analyses!
## Alpha Diversity & Species Richness
### Rarefaction Curves
To calculate species richness and alpha diversity (using the Shannon-Wiener index), we will use functions from the `vegan` [package](https://cran.r-project.org/web/packages/vegan/vegan.pdf) [@Oksanen2020]. Before I get to the alpha diversity and species richness, I will calculate a rarefaction curve for my ASV table. A rarefaction curve shows, for each sample, how many species (y-axis) we would expect to identify after resampling a pool of N individuals from that sample (x-axis). This can give us an idea of whether any sample is more or less species rich than the others, which is useful for identifying outliers.
```{r draw_rarecurve, eval=FALSE}
sort(colSums(bac.ASV_table))
png('rarecurve_example.png', width = 1000, height = 900, res=100)
rarecurve(as.matrix(bac.ASV_table), col=metadata$Category_col, step=1000, label=F, ylab="ASVs")
# to show sample labels per curve, change to label=T
dev.off()
```
<center>
![](amplicon_workflow/rarecurve_example.png)
</center>
<div align="center">Figure 7: Rarefaction Curve Example, Colored by Sample Category </div></br>
In this rarefaction curve, each curve is colored based on its sample category: red represents "Clear Cut Soil", yellow represents the "Gopher" category, light blue represents the "No Gopher" category, and dark green represents "Old Growth" soil. Based on this rarefaction curve, it appears that samples within the Old Growth category have, on average, a smaller sample size (fewer reads) than the other sample categories.
### Shannon Diversity & Species Richness
Now that I've viewed the rarefaction curve and looked for outliers, we can move onto the alpha diversity and species richness steps.
For alpha diversity and species richness measurements, we are going to use raw counts. Using raw counts for any kind of analysis is somewhat controversial because not all samples have the same number of observations: one sample may have tens of thousands of ASV counts while another has far fewer. Transforming our data can allow us to view the actual distribution of our data, revealing patterns that may be difficult to observe in the raw counts. You can find a helpful overview of data transformations and their benefits on the Wikipedia [page](https://en.wikipedia.org/wiki/Data_transformation_(statistics)).
Some microbiologists would tell you that we should rarefy our data before moving onto any diversity assessments or downstream analyses. *Rarefying* is a type of data transformation that involves finding the sample with the minimum number of counts across all of your samples, then scaling every other sample's counts down to this size. As described with the rarefaction curve, rarefying allows you to see 1. the number of species across samples and 2. the abundance of said species across samples when sampling based on a given minimum. Historically, rarefaction was the strategy used to transform microbial data. However, many statisticians now advise against rarefaction because we tend to lose a lot of information regarding low-abundance OTUs/ASVs. For more information on why rarefaction is not a useful transformation method, please read "Waste Not, Want Not: Why Rarefying Microbiome Data Is Inadmissible" by @McMurdie2014.
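Purely for illustration (we will not rarefy in this workflow), here is a minimal sketch of what rarefying could look like with `vegan`'s `rrarefy()` function, assuming the Samples x ASVs table `bac.ASV_table` created earlier:
```{r rarefy_sketch, eval=FALSE}
# for illustration only -- we do NOT use rarefied counts in this workflow
raremax <- min(rowSums(bac.ASV_table))        # smallest library size across samples
set.seed(123)                                 # rrarefy() subsamples randomly, so set a seed for reproducibility
bac.ASV_rarefied <- rrarefy(bac.ASV_table, sample = raremax) # scale every sample down to raremax reads
rowSums(bac.ASV_rarefied)                     # sanity check: every sample now has raremax reads
```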
When we get to the section on calculating [beta diversity](#beta-div-section), I will provide more insight into which transformation method(s) I use and why.
Alpha diversity is a way to measure within-sample diversity, using an equation that considers both the richness of species and the evenness of those species. The `vegan` package has a `diversity()` function that lets you specify which diversity index to use for calculating alpha diversity. Here I use the Shannon-Wiener index. To calculate Shannon diversity, we first calculate Shannon entropy, then take the exponential of the Shannon entropy (*e* raised to the power of the Shannon entropy). We can also calculate species richness (i.e., how many species are present in each sample) using the `specnumber()` function from the `vegan` package. Once we've found species richness and Shannon diversity, we can combine these values into one data frame, then merge this data frame with our metadata to create one data frame containing: Shannon entropy, Shannon diversity, species richness, and your sample metadata.
```{r shan_div, eval=FALSE}
# if you have another package loaded that has a diversity function, you can specify that you want to use vegan's diversity function as shown below
Shan_ent.16s<-vegan::diversity(bac.ASV_table, index="shannon") # Shannon entropy
Shan_div.16s<- exp(Shan_ent.16s) # Shannon Diversity aka Hill number 1
# create data frame with Shannon entropy and Shannon diversity values
div_16s<-data.frame(Bac_Shannon_Entropy=Shan_ent.16s,Bac_Shannon_Diversity=Shan_div.16s)
class(div_16s)
div_16s$SampleID<-rownames(div_16s)
head(div_16s)
# create a data frame with species richness
S_16s<-data.frame(Bac_Species_Richness=specnumber(bac.ASV_table), SampleID=rownames(bac.ASV_table)) # finds # of species per sample using RAW count data; if MARGIN = 2 it finds frequencies of species
# merge richness and diversity dataframes together
d.r_16s<-merge(div_16s, S_16s, by.x="SampleID", by.y="SampleID")
# merge w/ metadata
bac.div.metadat <- merge(d.r_16s,metadata, by.x="SampleID", by.y="SampleID")
head(bac.div.metadat)
class(bac.div.metadat) # want data frame
```
We can now use the data frame we made with our alpha diversity, species richness, and our metadata to create some nice figures. First we want to ensure that the category of interest (in this example, "Category") is the right `class` of variable for generating this figure. Because we are using a categorical identifier, it is wise to make sure that our `Category` variable is stored as a `factor`.
```{r div_fig_prep, eval=FALSE}
unique(bac.div.metadat$Category) # see how many elements there are in the Category variable
bac.div.metadat$Category <- factor(bac.div.metadat$Category, levels = c("ClearCutSoil","Gopher","NoGopher","OldGrowth"))
class(bac.div.metadat$Category)
```
Now let's make some pretty figures with `ggplot2` [@Wickham2016]! Using `ggplot2`, we can specify what type of plot to make, the color palette you'll use, the size(s) of your font, etc. If you're interested in everything that `ggplot2` can do, please check out this amazing `ggplot2` [Cheat Sheet](https://www.maths.usyd.edu.au/u/UG/SM/STAT3022/r/current/Misc/data-visualization-2.1.pdf). We are also using `ggpubr`, a wrapper for `ggplot2` that allows for easy manipulation and export of `ggplot` figures. For more information on `ggpubr`, please check out the package [website](https://rpkgs.datanovia.com/ggpubr/#:~:text=ggplot2%2C%20by%20Hadley%20Wickham%2C%20is,elegant%20data%20visualization%20in%20R.&text=The%20'ggpubr'%20package%20provides%20some,'%2D%20based%20publication%20ready%20plots).
Here we are going to create box-and-whisker plots of our alpha diversity and species richness data. The first plot will display the alpha diversity across of our groups, and the second plot will display the species richness of these groups. The `y-axis` will show the Shannon diversity in the first plot, and the species richness in the second plot. For both plots, the `x-axis` will display the Category labels.
Each of the individual box-and-whisker plots will be assigned a different color based on the Category variable using the `$Category_col` variable we created earlier.
```{r alphadiv_fig, eval=FALSE}
# shannon diversity by year
bac.a.div<-ggplot(bac.div.metadat, aes(x=Category, y=Bac_Shannon_Diversity, fill=Category)) +
geom_boxplot(color="black")+scale_x_discrete(labels=c("ClearCutSoil"="Clear Cut Soil", "Gopher"="Gopher", "NoGopher"="No Gopher", "OldGrowth"="Old Growth"))+
scale_fill_manual( values=unique(bac.div.metadat$Category_col[order(bac.div.metadat$Category)]), name ="Sample Category", labels=c("ClearCutSoil"="Clear Cut Soil", "Gopher"="Gopher", "NoGopher"="No Gopher", "OldGrowth"="Old Growth"), )+
theme_classic()+
labs(title = "Bacterial Shannon Diversity by Sample Category", x="Category", y="Shannon Diversity", fill="Category")+theme(axis.title.x = element_text(size=13),axis.title.y = element_text(size=13),axis.text = element_text(size=11),axis.text.x = element_text(vjust=1),legend.title.align=0.5, legend.title = element_text(size=13),legend.text = element_text(size=11),plot.title = element_text(hjust=0.5, size=15))
ggsave(bac.a.div,filename = "Bacterial_alpha_diversity.png", width=12, height=10, dpi=600)
# ggplot code break down:
# ggplot(bac.div.metadat, aes(x=Category, y=Bac_Shannon_Diversity, fill=Category)) -- dataset is bac.div.metadat; set aesthetics aka x variable, y variable, and variable for filling in box-whisker plots
# geom_boxplot(color="black") -- outline of box-whisker plots will be black
# scale_x_discrete(labels=c()) -- fix the labels of groups in x-axis
# theme_classic() -- removes grid lines in background of figure
# labs() -- fix plot labels
# theme(...) -- these are changing the settings of the fig: setting axis title size, legend title size, alignment of axes and legend labels, etc
```
<center>
![](amplicon_workflow/Bacterial_alpha_diversity.png)
</center>
<div align="center">Figure 8a: Boxplot of Alpha Diversity by Sample Category</div></br>
We observe the highest average Shannon diversity within the Clear Cut Soil category, followed by the Gopher category, then the No Gopher category, and finally the Old Growth category. Though this figure is helpful for comparing these categories to one another, we cannot really glean meaningful statistical information from this boxplot.
Not only does `ggpubr` help us with arranging and saving figures, but we can also use some of its functions to add statistics directly onto our figures with ease. In the next code chunk we use a function called `stat_compare_means()`, which allows you to compare different sets of samples to each other. Because we have already assigned our samples to Categories, we can compare the means across our sample groups.
We can compare the means of each group to each other in a pairwise fashion using either a **T-test** (`t.test`) or a **Wilcoxon test** (`wilcox.test`), or compare the means across all of our groups at once using an **Analysis of Variance** aka ANOVA (`anova`) or a **Kruskal-Wallis test** (`kruskal.test`). For more information on how to use the `stat_compare_means()` function to add statistics to your plots, please see this [website](http://www.sthda.com/english/articles/24-ggpubr-publication-ready-plots/76-add-p-values-and-significance-levels-to-ggplots/).
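If you would like to run the same comparisons outside of a figure, base R can do so directly. Here is a minimal sketch using the `bac.div.metadat` data frame we built above (the "BH" p-value adjustment is just one reasonable choice):
```{r compare_means_base_r, eval=FALSE}
# overall nonparametric test across all four categories
kruskal.test(Bac_Species_Richness ~ Category, data = bac.div.metadat)
# pairwise Wilcoxon tests between categories, with a multiple-testing correction
pairwise.wilcox.test(bac.div.metadat$Bac_Species_Richness, bac.div.metadat$Category,
                     p.adjust.method = "BH")
```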
Deciding whether to use a T-test versus a Wilcoxon test, or an ANOVA versus a Kruskal-Wallis test, depends on whether your data fulfill certain assumptions held by these statistical tests. One of the assumptions of a T-test and an ANOVA is that the data are *normally distributed*. We can test for normality using the **Shapiro-Wilk test**.
The null hypothesis of the Shapiro-Wilk test is that the data are normally distributed. This means that if your p-value for the Shapiro-Wilk test is > 0.05, we fail to reject the null hypothesis and treat the data as consistent with a normal distribution. However, if p is < 0.05, the null hypothesis is rejected and your data are not normally distributed.
We can also use a **Q-Q plot** to compare our data with a theoretical normal distribution. These plots show the quantiles for our data in the y-axis, and the theoretical quantiles for a normal distribution on the x-axis. If our data points lie on the line in the Q-Q plot, then the data is considered normally distributed. Skewed data will contain points that are further from the line, curving one way or another.
Let's run a Shapiro-Wilk test using our species richness results, and use a Q-Q plot to see the distribution of these richness values.
**NOTE:** diversity and richness are usually *not* normally distributed, but it's still important to always see how the data are distributed if you plan on running statistical tests that [assume normality](https://www.statisticshowto.com/assumption-of-normality-test/).
```{r test_for_normality, eval=FALSE}
## Using Shapiro-Wilk test for normality
shapiro.test(bac.div.metadat$Bac_Species_Richness) # what is the p-value?
# in this example, the p-value was 0.4429
# visualize Q-Q plot for species richness
png('qqplot.png', width = 1000, height = 900, res=100)
qqnorm(bac.div.metadat$Bac_Species_Richness, pch = 1, frame = FALSE)
qqline(bac.div.metadat$Bac_Species_Richness, col = "steelblue", lwd = 2)
dev.off()
```
<center>
![](amplicon_workflow/qqplot.png)
</center>
<div align="center">Figure 8b: Normal Q-Q Plot: Species Richness</div></br>
Because our p-value for the Shapiro-Wilk test is > 0.05, we fail to reject the null hypothesis: our species richness values are consistent with a normal distribution. Even so, diversity and richness metrics often violate other parametric assumptions (and our group sizes are small), so we will take the conservative route and use a Wilcoxon test (rather than a T-test) to compare our sample groups in a pairwise fashion. Because we have four categories in this example, we can also run a Kruskal-Wallis test across all groups at once; Kruskal-Wallis tests and ANOVAs are used when comparing three or more groups.
In the boxplot below, I have only included a few pairwise group comparisons as to not overwhelm the plot. If you'd rather use `*` as indicators of statistical significance instead of using the p-values themselves, you can change the label parameter in the `stat_compare_means()` function from `"p.format"` to `"p.signif"`.
```{r specrich_fig, eval=FALSE}
bac.spec.rich<-ggplot(bac.div.metadat, aes(x=Category, y=Bac_Species_Richness, fill=Category)) +
  geom_boxplot(color="black")+
  scale_x_discrete(labels=c("ClearCutSoil"="Clear Cut Soil", "Gopher"="Gopher", "NoGopher"="No Gopher", "OldGrowth"="Old Growth"))+
  theme_bw()+
  scale_fill_manual( values=unique(bac.div.metadat$Category_col[order(bac.div.metadat$Category)]), name ="Sample Category", labels=c("ClearCutSoil"="Clear Cut Soil", "Gopher"="Gopher", "NoGopher"="No Gopher", "OldGrowth"="Old Growth"), )+
  theme_classic()+
  labs(title = "Bacterial Species Richness by Sample Category", x="Category", y="Species Richness", fill="Category")+
  theme(axis.title.x = element_text(size=13),axis.title.y = element_text(size=13),axis.text = element_text(size=11),axis.text.x = element_text(vjust=1),legend.title.align=0.5, legend.title = element_text(size=13),legend.text = element_text(size=11),plot.title = element_text(hjust=0.5, size=15))+
  stat_compare_means(comparisons = list(c(3,4), c(2,3), c(1,4)), hide.ns = FALSE,label = "p.signif",method="wilcox.test")+
  stat_compare_means(label.y=3600)
ggsave(bac.spec.rich,filename = "Bacterial_species_richness.png", width=12, height=10, dpi=600)
```
<center>
![](amplicon_workflow/Bacterial_species_richness.png)
</center>
<div align="center">Figure 8c: Boxplot of Species Richness by Sample Category</div></br>
I did not want to overwhelm you with multiple pairwise group comparisons on this figure, so you are only seeing the results of three Wilcoxon tests comparing Clear Cut Soil to Old Growth samples, Gopher to No Gopher samples, and No Gopher to Old Growth samples. The `*` indicate significance levels, with `*` meaning *p <= 0.05*, `**` meaning *p <= 0.01*, `***` meaning *p <= 0.001*, and `****` meaning *p <= 0.0001*. The symbol `ns` stands for *not significant*. Figure 8c shows us that the Clear Cut Soil samples have a significantly higher average species richness than the Old Growth samples. We can also see that our No Gopher samples are significantly higher in species richness than the Old Growth samples, but the difference in average species richness between the Gopher and No Gopher samples is not statistically significant. You can also see a printed value for the Kruskal-Wallis test, which here is p = 0.00008. This test indicates that there is a significant difference in average species richness across the sample categories.
## Beta Diversity {#beta-div-section}
### Data Transformation
Before going any further, we should transform our data. Data transformation helps us to better interpret our data by changing the scales in which we view our data, as well as reducing the impact of skewed data and/or outliers in our data set. We can also perform a transformation that normalizes our data, aka changing the distribution of our data to be a normal (i.e., Gaussian) distribution, which is useful for running certain statistical tests that assume normality (like T-tests, ANOVAs). For more on why you should transform your data and what kind of transformations are out there, check out the resources included in this very helpful [Medium article](https://medium.com/analytics-vidhya/a-guide-to-data-transformation-9e5fa9ae1ca3). I have also found this [Medium article](https://medium.com/@kyawsawhtoon/log-transformation-purpose-and-interpretation-9444b4b049c9#:~:text=Log%20transformation%20is%20a%20data,on%20the%20natural%20log%20transformation.) on log transformations helpful as well.
Two useful transformations I have seen used are the **variance stabilizing transformation** (i.e., **VST**) and the **centered log-ratio** transformation (i.e., **CLR**). For an example of how to employ the VST, please check out this [tutorial](https://astrobiomike.github.io/amplicon/dada2_workflow_ex#beta-diversity) by the legendary bioinformatician [Dr. Mike Lee](https://astrobiomike.github.io/research/). Though I won't be using the VST here, I have not found any literature saying that either transformation is clearly better than the other. The CLR transformation appears to be popular among statisticians, which is why I am choosing to go this route.
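We do not use the VST anywhere downstream, but purely for reference, here is a minimal sketch of what a VST might look like using the `DESeq2` package (this assumes `DESeq2` is installed, and the `~ Category` design formula is just a placeholder for your own variable of interest):
```{r vst_sketch, eval=FALSE}
# a sketch of the variance stabilizing transformation with DESeq2 -- NOT used downstream
library(DESeq2)
# DESeq2 expects a taxa x samples matrix, so transpose our Samples x ASVs table
count_mat <- t(as.matrix(bac.ASV_table))
dds <- DESeqDataSetFromMatrix(countData = count_mat,
                              colData = metadata,      # rownames must match the count matrix columns
                              design = ~ Category)     # placeholder design
# "poscounts" size factors handle sparse tables where every ASV has at least one zero
dds <- estimateSizeFactors(dds, type = "poscounts")
vst_counts <- assay(varianceStabilizingTransformation(dds)) # taxa x samples matrix of VST values
```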
We will use the `vegan` package to CLR transform our count data for creating clustering dendrograms and ordinations. The CLR transformation is recommended in the paper "Microbiome Datasets Are Compositional: And This Is Not Optional" by @Gloor2017, which proposes that microbiome data sets are compositional, meaning they describe relationships between multiple components. @Gloor2017 argues that the reason that CLR transformations are ideal for compositional data is because 1. ratio transformations are useful for detecting relationships in both count data and proportion data, and 2. because log-ratios specifically make the data symmetric in a log space.
It's important to deal with the 0s in our ASV table so that the log of 0 is never taken during the transformation (@Quinn2021). To account for this, we include the `pseudocount = 1` argument in the `decostand()` function, which adds a pseudocount to the count matrix so that no zeros remain before the log is taken. To better understand how the CLR transformation works and why it's a useful transformation for microbiome data, watch this great [YouTube video](https://www.youtube.com/watch?v=fQPCeV4MUe4) created by [Dr. Thomas Quinn](https://tpq.github.io/) and read "Normalization and microbial differential abundance strategies depend upon data characteristics" by @Weiss2017. For more information on the pros and cons of the CLR transformation, please read "A field guide for the compositional analysis of any-omics data" by @Quinn2019.
```{r clr_transformation, eval=FALSE}
# CLR transformation of ASV table
# df must have SampleIDs as rownames and ASV IDs as columns for the vegan functions below
b.clr<-decostand(bac.ASV_table, method = "clr", pseudocount = 1) # CLR transformation (the SampleID column was already removed above, so we transform the whole table)
b.clr[1:4,1:4]
```
### Hierarchical Clustering
Now that we have our CLR-transformed ASV counts, we can create a Euclidean distance matrix that will describe how close (aka similar) our samples are to each other based on their microbial composition. The Euclidean distance between CLR-Transformed compositional data is known as **Aitchison Distance** [@Quinn2018].
```{r clustering_exposure, eval=FALSE}
# check rownames of CLR transformed ASV data & metadata
rownames(b.clr) %in% rownames(meta_scaled)
meta_scaled=meta_scaled[rownames(b.clr),] ## reorder metadata to match order of CLR data
# calculate our Euclidean distance matrix using CLR data (aka Aitchison distance)
b.euc_dist <- dist(b.clr, method = "euclidean")
# creating our hierarcical clustering dendrogram
b.euc_clust <- hclust(b.euc_dist, method="ward.D2")
# let's make it a little nicer...
b.euc_dend <- as.dendrogram(b.euc_clust, hang=0.1)
b.dend_cols <- as.character(metadata$Category_col[order.dendrogram(b.euc_dend)])
labels_colors(b.euc_dend) <- b.dend_cols
png(file="16S_CLR_cluster_Category.png",width = 1000, height = 900, res=100)
par(cex=1)
plot(b.euc_dend, ylab="CLR Euclidean Distance") # base graphics: draw the dendrogram first...
title(main = "Bacteria/Archaea Clustering Dendrogram", sub = "Colored by Sample Category", cex.main = 2, font.main= 2, cex.sub = 0.8, font.sub = 3) # ...then add titles with a separate title() call (not ggplot's +)
legend("topright",legend = c("Clear Cut Soil", "Gopher", "No Gopher", "Old Growth"),cex=.8,col = c("#D00000", "#f8961e", "#4ea8de", "#283618"),pch = 15, bty = "n")
dev.off()
```
<center>
![](amplicon_workflow/16S_CLR_cluster_Category.png)
</center>
<div align="center">Figure 9: Hierarchical Clustering (with Centered Log-Ratio Transformed Data)</div></br>
Though some samples do not cluster within their sample categories, overall it appears that samples from specific categories form distinct clusters. This indicates that, generally, samples from the same category have similar microbial community composition. We can also see that most of the samples in the Clear Cut Soil category are more similar to the Gopher and No Gopher samples than to the Old Growth samples. Though this dendrogram is helpful, it's not as informative as other visualizations we can do.
### Principal Coordinate Analysis (PCoA)
To learn more about how these sample categories' microbiomes compare to one another, we can use our Euclidean distance matrix (created from CLR transformed ASV counts) to generate an ordination known as **Principal Coordinate Analysis**, aka a **PCoA**.
I am not going to get into the math behind a PCoA, but you can learn more by watching this excellent [StatQuest YouTube video](https://youtu.be/GEn-_dAyYME) and this helpful [link](https://mb3is.megx.net/gustame/dissimilarity-based-methods/principal-coordinates-analysis) that describes what a PCoA is and its uses. If you're interested in learning more about ordinations in general and the impacts they can have on microbiome data, please read "Uncovering the Horseshoe Effect in Microbial Analyses" by @Morton2017.
Generally, a PCoA is used to look at how similar your samples are to each other, and at the variability exhibited by your samples, in a reduced-dimensional space. The closer two points are in your ordination, the more similar they are. A PCoA yields multiple axes (i.e., principal coordinates) that capture the variation within your data set; each axis is associated with an **eigenvalue** that represents the magnitude of the variation along that axis. These eigenvalues are relative measures of how important each axis of variation is for describing the data set.
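As a quick preview, here is a minimal sketch of computing a PCoA from the Aitchison distance matrix `b.euc_dist` created above, using base R's `cmdscale()` (the `ape` package's `pcoa()` function works similarly); the `pcoa_df` and `b.pcoa` names are just for illustration:
```{r pcoa_sketch, eval=FALSE}
# classical multidimensional scaling (aka PCoA) on the CLR Euclidean distance matrix
b.pcoa <- cmdscale(b.euc_dist, k = 2, eig = TRUE)      # keep the first two axes and the eigenvalues
# percent of total variation captured by each of the first two axes
round(100 * b.pcoa$eig[1:2] / sum(b.pcoa$eig), 1)
# combine the sample coordinates with the (reordered) metadata for plotting
pcoa_df <- data.frame(PC1 = b.pcoa$points[,1],
                      PC2 = b.pcoa$points[,2],
                      metadata[rownames(b.pcoa$points), ])
ggplot(pcoa_df, aes(x = PC1, y = PC2, color = Category)) +
  geom_point(size = 3) +
  theme_classic()
```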