# (PART) Tools for answering RQs {-}
```{r}
source("R/showMakeDecisions.R")
```
# Making decisions: an introduction {#MakingDecisions}
```{r, child = if (knitr::is_html_output()) {'./introductions/15-Tools-DecisionMaking-HTML.Rmd'} else {'./introductions/15-Tools-DecisionMaking-LaTeX.Rmd'}}
```
## Introduction {#Chap15-Intro}
In Sect.\ \@ref(NHANESCaseStudyQual), the NHANES data [@data:NHANES3:Data] were numerically summarised.
The *sample mean* direct HDL cholesterol concentration was different for smokers ($\bar{x} = 1.31$ mmol/L) and for non-smokers ($\bar{x} = 1.39$ mmol/L).
Importantly, the sample studied is one of *countless* possible samples that could have been chosen.
If a different sample of people had been chosen, different values for the two sample means would have been produced.
This leads to one of the most important observations about sampling.
::: {.importantBox .important data-latex="{iconmonstr-warning-8-240.png}"}
Studying a sample leads to the following observations:
\vspace{-2ex}
* Every sample is likely to be different.
* Our sample is one of countless possible samples from the population.
* Every sample is likely to produce a different value for the sample statistic.
* Hence we only observe one of the many possible values for the sample statistic.
\vspace{-2ex}
Since many values for the sample statistic are possible, the possible values of the sample statistic vary (called *sampling variation*) and have a *distribution* (called a *sampling distribution*).
:::
::: {.definition #SamplingVariation name="Sampling variation"}
*Sampling variation* refers to how the sample estimates (statistics) vary from sample to sample, because each sample is different.
:::
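To make the definition concrete, here is a minimal simulation sketch (the population here is artificial, a standard normal distribution, not the NHANES data): five samples from the same population produce five different sample means.

```{r, eval = FALSE, echo = TRUE}
# Sampling variation in miniature: five samples of size 30 from the
# *same* population give five different sample means
set.seed(42)                    # an arbitrary seed, for reproducibility
replicate(5, mean(rnorm(30)))   # the population mean is 0, yet every
                                # sample mean differs slightly from 0
```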
We observe just one of the many possible samples, and hence observe just one of the many possible mean direct HDL concentrations for smokers.
Similarly, we observe just one of the many possible mean direct HDL concentrations for non-smokers.
So what does this difference between the two *sample* means imply about the difference between the two *population* means?
Two reasons could explain why the *sample* means are different:
1. The two *population* means are the *same*; the difference between the *sample* means is due to *sampling variation*.
That is, we just happen to have---by chance---one of those samples where the difference between the two means is quite noticeable.
The *sample* means are different only because we have data from one of the many possible samples, and every sample is likely to be different.
2. Alternatively, the two *population* means are *different*, and the *sample* means reflect this.
How do we decide which of these explanations is supported by the data?
Similarly, in Sect.\ \@ref(NHANESCaseStudyQual) the *odds* of being diabetic were different for smokers ($0.181$) and non-smokers ($0.084$).
What does this difference between the *sample* odds of having diabetes imply about the *population* odds?
Again, two possible reasons could explain why the sample odds are different:
1. The *population* odds are the same.
We just happen to have---by chance---one of those samples where the difference between the odds is quite noticeable.
The *sample* odds are different only because we have data from one of the many possible samples; every sample is likely to be different, so sometimes the sample odds differ simply by chance.
Again, this is called *sampling variation*.
2. Alternatively, the odds are different in the *population*, and the *sample* odds reflect this.
In both situations (means; odds), the two possible explanations ('statistical hypotheses'^[The word 'hypothesis' just means 'a possible explanation'.]) have special names:
1. *No difference exists between the population parameters*: the difference between the statistics is simply due to *sampling variation*.
This is the *null hypothesis*, or $H_0$.
2. *A difference exists between the population parameters*.
This is the *alternative hypothesis*, or $H_1$.
How do we decide which of these explanations is supported by the data?
What is the decision-making *process*?
The usual approach to *decision making* in science begins by assuming the null hypothesis is true.
Then the data are examined to see if sufficient information exists to support the alternative hypothesis.
Conclusions drawn about the *population* from the *sample* can never be certain, since the sample studied is just one of many possible samples that could have been taken.
## The need for making decisions {#NeedForDecisionMaking}
In research, decisions need to be made about *population [parameters](#StatisticsAndParameters)* based on *sample [statistics](#StatisticsAndParameters)*.
The difficulty is that the decision must be made using only one of the many possible samples; every sample is likely to be different (comprising different individuals from the population), so each sample will produce different summary *statistics*.
This is called *sampling variation*.
::: {.importantBox .important data-latex="{iconmonstr-warning-8-240.png}"}
[Sampling variation](#def:SamplingVariation) refers to how much a sample estimate (a [*statistic*](#def:Statistic)) is likely to vary across all possible samples, because each sample is different.
:::
However, sensible decisions *can* be made (and *are* made) about population parameters based on sample statistics.
To do this though, the process of *how* decisions are made needs to be articulated, which is the purpose of this chapter.
To begin, suppose I produce a pack of cards, and shuffle them well.
The pack of cards can be considered a *population*.
Suppose I draw a *sample* of $15$ cards from the pack.
Define $\hat{p}$ as the proportion of red cards in the *sample*.
::: {.tipBox .tip data-latex="{iconmonstr-info-6-240.png}"}
A 'standard' pack of cards has 52 cards, organised into four *suits*: spades, clubs (both black),
hearts and diamonds (both red).
Each *suit* has 13 *denominations*: 2, 3, 4, 5, 6, 7, 8, 9, 10, Jack (J), Queen (Q), King (K), Ace (A).
The Ace, King, Queen and Jack are called *picture cards*.
(Most packs also contain two jokers, which are not considered part of a *standard* pack.)
:::
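In R, drawing one such hand could be sketched as follows (a hypothetical snippet for illustration, not part of the chapter's own code):

```{r, eval = FALSE, echo = TRUE}
# One possible sample of 15 cards from a fair 52-card pack
pack <- rep(c("red", "black"), each = 26)   # the 'population': 26 of each colour
hand <- sample(pack, size = 15)             # deal 15 cards, without replacement
p.hat <- mean(hand == "red")                # the sample statistic
```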
In this sample, you find $\hat{p} = 1$; that is, *all* are red cards.
What should you conclude?
How likely is it that this would happen simply by chance from a fair
`r if (knitr::is_latex_output()) {
'pack (see Fig.\\ \\@ref(fig:Draw15Cards))?'
} else {
'pack; see the animation below.'
}`
Is this evidence that the pack of cards is somehow unfair, or rigged?
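As a quick side calculation (not part of the chapter's own analysis), the chance can be computed directly; the exact form assumes dealing without replacement from a standard $52$-card pack.

```{r, eval = FALSE, echo = TRUE}
# Chance of 15 red cards in a hand of 15 dealt from a fair pack:
dhyper(15, m = 26, n = 26, k = 15)   # exact, without replacement: about 1.7e-06
0.5^15                               # independent-draws approximation: about 3.1e-05
```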
```{r animation.hook = "gifski", fig.width = 4, interval = 0.25, fig.align = "center", dev=if (is_latex_output()){"pdf"}else{"png"}}
### CARDS ARE OF SIZE 500 x 726
asp.cards <- 726/500
### LOAD CARD IMAGES
im1 <- png::readPNG("Cards/queen_of_hearts.png")
im2 <- png::readPNG("Cards/3_of_diamonds.png")
im3 <- png::readPNG("Cards/4_of_diamonds.png")
im4 <- png::readPNG("Cards/7_of_hearts.png")
im5 <- png::readPNG("Cards/ace_of_diamonds.png")
im6 <- png::readPNG("Cards/2_of_diamonds.png")
im7 <- png::readPNG("Cards/ace_of_hearts.png")
im8 <- png::readPNG("Cards/jack_of_diamonds.png")
im9 <- png::readPNG("Cards/10_of_hearts.png")
im10 <- png::readPNG("Cards/4_of_hearts.png")
im11 <- png::readPNG("Cards/king_of_hearts.png")
im12 <- png::readPNG("Cards/queen_of_diamonds.png")
im13 <- png::readPNG("Cards/3_of_hearts.png")
im14 <- png::readPNG("Cards/5_of_diamonds.png")
im15 <- png::readPNG("Cards/5_of_hearts.png")
if (knitr::is_html_output()){
for (i in (1:18)){ # Deal 15 cards, one per frame, then hold the final frame
# Set up canvas
par(mar = rep(0.05, 4))
plot( c(1.2, 2.7), c(1.2, 1.8),
type = "n",
xlab = "",
asp = asp.cards,
ylab = "",
axes = FALSE)
if (i >= 1 ) rasterImage(im1, 1.20, 1.27, 1.80, 1.9)
if (i >= 2 ) rasterImage(im2, 1.25, 1.27, 1.85, 1.9)
if (i >= 3 ) rasterImage(im3, 1.30, 1.27, 1.90, 1.9)
if (i >= 4 ) rasterImage(im4, 1.35, 1.27, 1.95, 1.9)
if (i >= 5 ) rasterImage(im5, 1.40, 1.27, 2.00, 1.9)
if (i >= 6 ) rasterImage(im6, 1.45, 1.27, 2.05, 1.9)
if (i >= 7 ) rasterImage(im7, 1.50, 1.27, 2.10, 1.9)
if (i >= 8 ) rasterImage(im8, 1.55, 1.27, 2.15, 1.9)
if (i >= 9 ) rasterImage(im9, 1.60, 1.27, 2.20, 1.9)
if (i >= 10) rasterImage(im10, 1.65, 1.27, 2.25, 1.9)
if (i >= 11) rasterImage(im11, 1.70, 1.27, 2.30, 1.9)
if (i >= 12) rasterImage(im12, 1.75, 1.27, 2.35, 1.9)
if (i >= 13) rasterImage(im13, 1.80, 1.27, 2.40, 1.9)
if (i >= 14) rasterImage(im14, 1.85, 1.27, 2.45, 1.9)
if (i >= 15) rasterImage(im15, 1.90, 1.27, 2.50, 1.9)
text(1.95, 1, "How likely is it that we get\n15 red cards in a row?")
}
}
```
```{r Draw15Cards, fig.align="center", fig.width=7, fig.height=4.5, out.width = "25%",fig.cap="How likely is it that you would get 15 red cards in a row from a fair pack?" }
if (knitr::is_latex_output()){
# Set up canvas
par(mar = rep(0.05, 4))
plot( x = c(1.2, 2.7),
y = c(1, 2),
type = "n",
xlab = "",
asp = asp.cards,
ylab = "",
axes = FALSE)
rasterImage(im1, 1.20, 1.27, 1.80, 1.9)
rasterImage(im2, 1.25, 1.27, 1.85, 1.9)
rasterImage(im3, 1.30, 1.27, 1.90, 1.9)
rasterImage(im4, 1.35, 1.27, 1.95, 1.9)
rasterImage(im5, 1.40, 1.27, 2.00, 1.9)
rasterImage(im6, 1.45, 1.27, 2.05, 1.9)
rasterImage(im7, 1.50, 1.27, 2.10, 1.9)
rasterImage(im8, 1.55, 1.27, 2.15, 1.9)
rasterImage(im9, 1.60, 1.27, 2.20, 1.9)
rasterImage(im10, 1.65, 1.27, 2.25, 1.9)
rasterImage(im11, 1.70, 1.27, 2.30, 1.9)
rasterImage(im12, 1.75, 1.27, 2.35, 1.9)
rasterImage(im13, 1.80, 1.27, 2.40, 1.9)
rasterImage(im14, 1.85, 1.27, 2.45, 1.9)
rasterImage(im15, 1.90, 1.27, 2.50, 1.9)
}
```
Getting $15$ red cards out of $15$ (i.e., $\hat{p} = 1$) from a well-shuffled pack seems very unlikely; you probably conclude that the pack is somehow unfair.
Importantly, *how* did you reach that decision?
Your unconscious decision-making process may have been this:
1. *Assumption*:
You *assumed*, quite reasonably, that I used a standard, well-shuffled pack of cards, where half the cards are red and half the cards are black.
That is, you assumed the *population proportion* of red cards is $p = 0.5$.
2. *Expectation*:
Based on that assumption, you *expected* about half the cards in the sample of $15$ to be red (i.e., expect $\hat{p}$ to be about $0.5$).
You wouldn't necessarily expect *exactly* half red and half black, but you'd probably expect something close to that.
That is, you would expect that $\hat{p}$ would be close to $0.5$.
3. *Observation*:
You *observed* that *all* $15$ cards were red.
That is, $\hat{p} = 1$.
4. *Decision*:
You were expecting that $\hat{p}$ would be about $0.5$, but the observed value was $\hat{p} = 1$.
Since what you observed ('all red cards') was not like what you were expecting ('about half red cards'), the sample *contradicts* what you were expecting, based on your assumption of a fair pack... so your assumption of a fair pack is probably wrong.
Of course, getting $15$ red cards in a row is *possible*... but very *unlikely*.
For this reason, you would probably conclude that the evidence strongly suggests the pack is not a fair pack.
You probably didn't *consciously* go through this process, but it seems reasonable.
This process of decision making is similar to the process used in research.
## How decisions are made {#DecisionMaking}
Based on the ideas in the last section, a formal process of decision making in research can be described.
To expand:
1. **Assumption**:
Make an assumption about the population parameter.
Initially, assume that *sampling variation* explains any discrepancy between the observed statistic and the assumed value of the population parameter.
The initial assumption is that there has been 'no change, no difference, no relationship', depending on the context.
<!-- For example: -->
<!-- - the [*population* parameters](#StatisticsAndParameters) are the same in various groups; sampling variation explains the difference between the [*sample* statistics](#StatisticsAndParameters); -->
<!-- - the [*population* parameter](#StatisticsAndParameters) is some given value; sampling variation explains why the [*sample* statistic](#StatisticsAndParameters) is not equal to this parameter value. -->
2. **Expectation**:
Based on the assumption about the parameter, describe what values of the *statistic* might reasonably be observed from all the possible samples from the population (due to sampling variation).
3. **Observation**:
Compute the observed sample statistic from the data obtained from one of the many possible samples.
4. **Decision**:
If the observed *sample statistic* is:
- *unlikely* to have been observed by chance, the statistic (i.e., the evidence) *contradicts* the assumption about the *population parameter*: the assumption is probably *wrong* (but it is not *certainly* wrong).
- *likely* to have been observed by chance, the statistic (i.e., the evidence) is *consistent with* the assumption about the *population parameter*, and the assumption may be *correct* (though it may be wrong).
This is one way to describe the process of decision making in science
`r if( knitr::is_html_output() ) {
'(Fig.\\ \\@ref(fig:DecisionFlow)).'
} else {
'(Fig.\\ \\@ref(fig:DecisionFlow2)).'
}`
```{r DecisionFlow2, fig.cap = "A way to make decisions", fig.align="center", out.width='100%', fig.width = 9.5, fig.height = 5}
showDecisionMaking()
```
```{r DecisionFlow, animation.hook="gifski", interval=1.5, progress=TRUE, fig.cap="A way to make decisions", fig.align="center", fig.width=6, fig.height=3, dev=if (is_latex_output()){"pdf"}else{"png"}}
if( knitr::is_html_output() ) {
for (i in (1:2)){
par( mar = c(0.15, 0.15, 0.15, 0.15))
openplotmat()
pos <- array(NA, dim = c(6, 2))
pos[1, ] <- c(0.10, 0.85) # Assumption
pos[2, ] <- c(0.40, 0.85) # Expectation
pos[3, ] <- c(0.40, 0.15) # Observation
pos[4, ] <- c(0.40, 0.50) # Consistency?
pos[5, ] <- c(0.80, 0.85) # YES
pos[6, ] <- c(0.80, 0.15) # NO
straightarrow(from = pos[1, ], to = pos[2, ],
lty = 1,
lwd = 2)
straightarrow(from = pos[4, ], to = pos[2, ],
lcol = "black",
lty = 2,
lwd = 2)
straightarrow(from = pos[4, ], to = pos[3, ],
lcol = grey(0.4),
arr.pos = 0.5, # Then cover with box
lty = 2)
if (i == 1 ) {
curvedarrow(from = pos[4, ] + c(0, 0.065), to = pos[5, ],
lcol = grey(0.4),
curve = 0.35,
arr.pos = 0.5, # Then cover with box
lty = 2)
textrect( pos[5, ],
radx = 0.11,
rady = 0.1,
shadow.size = 0,
lcol = "darkseagreen2",
box.col = "darkseagreen2",
lab = "Yes: Supports\nassumption",
col = grey(0)) # CROSS
}
if (i == 2 ) {
curvedarrow(from = pos[4, ] - c(0, 0.065), to = pos[6, ] ,
lcol = grey(0.4),
curve = -0.35,
arr.pos = 0.5, # Then cover with box
lty = 2)
textrect( pos[6, ],
radx = 0.11,
rady = 0.1,
shadow.size = 0,
lcol = "darksalmon",
box.col = "darksalmon",
lab = "No: Contradicts\nassumption",
col = grey(0)) # CROSS
}
textrect( pos[4, ],
radx = 0.11,
rady = 0.1,
shadow.size = 0,
lcol = "snow2",
box.col = "snow2",
lab = "Compare:\nConsistency?",
col = grey(0)) # CHECKMARK
textrect( pos[1, ],
lab = "Population:\nAssumption",
radx = 0.11,
rady = 0.1,
shadow.size = 0,
lcol = "slategray1",
box.col = "slategray1",
cex = 1)
textrect( pos[2, ],
lab = "Sample:\nExpectation",
radx = 0.11,
rady = 0.1,
shadow.size = 0,
lcol = "slategray2",
box.col = "slategray2",
cex = 1)
textrect( pos[3, ],
box.col = "slategray3",
lcol = "slategray3",
shadow.size = 0,
radx = 0.11,
rady = 0.1,
lab = "Sample:\nObservation",
cex = 1)
}
}
```
<div style="float:right; width: 222x; border: 1px; padding:10px">
<img src="Illustrations/pexels-ketut-subiyanto-4546136.jpg" width="200px"/>
</div>
This approach is similar to how we unconsciously make decisions every day.
For example, suppose I ask my son to brush his teeth [@data:Budgett:RandomizationTest], and later I want to decide if he really did.
1. **Assumption**: I *assume* my son brushed his teeth (because I told him to).
2. **Expectation**: Based on that assumption, I *expect* to find a damp toothbrush when I check.
3. **Observation**: When I check later, I observe a *dry* toothbrush.
4. **Decision**: The evidence *contradicts* what I expected to find based on my assumption, so my assumption is probably *false*.
He probably *didn't* brush his teeth.
I may have made the wrong decision: He may have brushed his teeth, but dried his brush with a hair dryer.
However, based on the evidence, he quite probably has not brushed his teeth.
The situation may have ended differently: When I check later, suppose I observe a *damp* toothbrush.
In this case, the evidence seems *consistent* with what I expected to find based on my assumption, so my assumption is probably *true*.
He probably did brush his teeth.
Again, I may be wrong: he may have run his toothbrush under a tap... but I have no evidence that he didn't brush his teeth.
Similar logic underlies most decision making in science^[Other ways exist to make decisions, such as using prior knowledge. For example, if my son had a reputation for wetting his toothbrush under the tap instead of brushing his teeth, that information can be incorporated into the decision making. This approach is called *Bayesian statistics*.].
::: {.example #DecisionMakingProcess name="The decision-making process"}
Consider the cards example from Sect.\ \@ref(NeedForDecisionMaking) again.
The formal process might look like this:
1. **Assumption**: *Assume* the pack is a fair, well-shuffled pack of cards: the population proportion of red cards is $p = 0.5$ (the value of the *parameter*).
2. **Expectation**: Based on this assumption, roughly (but not necessarily *exactly*) equal numbers of red and black cards would be expected in a sample of $15$ cards.
The sample proportion of red cards $\hat{p}$ (the value of the *statistic*) is expected to be close to, but maybe not exactly, $0.5$.
3. **Observation**: Suppose I then deal $15$ cards, and *all* $15$ are red cards: $\hat{p} = 1$.
4. **Decision**: $15$ red cards from $15$ cards seems unlikely to occur if the pack is fair and well-shuffled.
The data seem *inconsistent* with what I was expecting based on the assumption (Fig.\ \@ref(fig:DecisionFlowCards)).
The evidence suggests that the assumption is probably false.
Of course, getting $15$ red cards out of $15$ is not *impossible*, so my conclusion may be wrong... but it is *very* unlikely.
Based on the evidence, concluding that a problem exists with the pack of cards seems reasonable.
:::
```{r DecisionFlowCards, fig.cap = "A way to make decisions for the cards example", fig.align="center", out.width='100%', fig.width = 9.5, fig.height = 4.95}
showDecisionMaking(populationText = expression( atop(I~bold(assume)~the,
pack~is~fair)),
expectationText = expression(atop(I~bold(expect)~to~find,
about~half~red~cards)),
oneSampleText = expression( atop(Deal~one~hand,
of~15~cards) ),
oneStatisticText = expression( atop(bold(All)~cards,
are~red)),
Decision = "Reject"
)
```
<!-- ```{r DecisionFlowCards, echo = FALSE, fig.cap = "A way to make decisions for the cards example", fig.align = "center", fig.width = 6.25, fig.height = 2.75, out.width = '75%'} -->
<!-- showMakeDecisions(arrowYes = FALSE, -->
<!-- assumptionText = "Assumption:\n Fair pack", -->
<!-- expectationText = "Expectation:\n Half red") -->
<!-- ``` -->
## Making decisions in research {#MakingDecisionsInResearch}
Let's think about each step in the decision-making process
`r if( knitr::is_html_output()){
'(Fig.\\ \\@ref(fig:DecisionFlow))'
} else {
'(Fig.\\ \\@ref(fig:DecisionFlow2))'
}`
individually.
* The **assumption** about the parameter (Sect.\ \@ref(Assumption));
* The **expectation** of the statistic (Sect.\ \@ref(ExpectationOf));
* The **observations** (Sect.\ \@ref(Observation)); and
* The **decision** (Sect.\ \@ref(MakeDecision)).
### Assumption about the population parameter {#Assumption}
<div style="float:right; width: 222x; border: 1px; padding:10px">
<img src="Pics/iconmonstr-delivery-6-240.png" width="50px"/>
</div>
The initial assumption about the population parameter is that there has been 'no change, no difference, no relationship', depending on the context.
Using this idea, a reasonable assumption can be made about the *population* parameter:
* We might **assume** that *no difference* exists between the parameter for two groups in the *population*, since we don't have any evidence yet to say there *is* a difference.
For example, we might assume that the mean HDL cholesterol concentration is the same for current smokers and non-smokers in the *population*, for the NHANES data.
(If we already *knew* there was a difference in the population, why would we be performing a study to see if there is a difference?)
* We might **assume** there has been *no change* in the value of some population parameter (for example, after using a new method, or studying a different group).
These assumptions about the population parameter are called *null hypotheses*.
<div style="float:right; width: 222x; border: 1px; padding:10px">
<img src="Illustrations/pexels-ketut-subiyanto-4546136.jpg" width="200px"/>
</div>
::: {.example #Assumptions name="Assumptions about the population"}
Many dental associations recommend brushing teeth for two minutes.
One study [@data:Macgregor1979:BrushingDurationKids] recorded the tooth-brushing time for 85 uninstructed schoolchildren from England.
We could *assume* the *population* mean tooth-brushing time is two minutes, as recommended.
After all, we don't have evidence to suggest any other value for the mean.
A sample can then be obtained to determine if the sample mean is consistent with, or contradicts, this assumption.
:::
### Expectations of sample statistics {#ExpectationOf}
<div style="float:right; width: 222x; border: 1px; padding:10px">
<img src="Pics/iconmonstr-christmas-42-240.png" width="50px"/>
</div>
Having assumed a value for the population parameter, we then determine *what values to expect from the sample statistic* for all the possible samples we could end up observing, based on the assumption being true.
Since many samples are possible, and every sample is likely to be different (sampling variation), the value of the sample statistic depends on which one of the possible samples we obtain: the sample statistic is likely to be different for every sample.
Think about the cards in Sect.\ \@ref(NeedForDecisionMaking).
Assuming a fair pack, *half* the cards in the *population* (the pack of cards) are red, so the population proportion is assumed to be $p = 0.5$.
In a *sample* of $15$ cards, what values could be reasonably expected for the *sample* proportion $\hat{p}$ of red cards (the statistic)?
If samples of size $15$ were repeatedly taken, the sample proportion of red cards would vary from hand to hand, of course.
But *how* would $\hat{p}$ vary from sample to sample?
Perhaps $15$ red cards out of $15$ cards happens reasonably frequently... or perhaps not.
How could we find out?
We could:
* use mathematical theory.
* shuffle a pack of cards and deal $15$ cards many hundreds of times, then count how often we see $15$ red cards out of $15$ cards.
* *simulate* (using a computer) dealing $15$ cards many hundreds of times from a fair pack (i.e., $p = 0.5$), and count how often we get $15$ red cards out of $15$ cards (i.e., $\hat{p} = 1$).
The third option is the most practical.
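A minimal sketch of the simulation idea, treating the draws as independent (an approximation to dealing without replacement), might look like this; the book's own simulations follow.

```{r, eval = FALSE, echo = TRUE}
# Simulate many hands of 15 cards from a fair pack (p = 0.5), and record
# the sample proportion of red cards in each hand
set.seed(1)                                       # arbitrary seed
phat <- rbinom(5000, size = 15, prob = 0.5) / 15  # 5000 simulated hands
mean(phat == 1)                                   # how often all 15 cards are red
hist(phat, xlab = "Sample proportion of red cards")
```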
To begin, suppose we simulated only ten hands of $15$ cards each;
`r if (knitr::is_latex_output()) {
'Fig.\\ \\@ref(fig:RollDice10) shows the sample proportion of red cards from each of the ten repetitions.'
} else {
'the animation below shows the sample proportion of red cards from each of the ten repetitions.'
}`
Not one of those ten hands produced $15$ red cards in $15$ cards.
```{r animation.hook="gifski", interval=0.25, dev=if (is_latex_output()){"pdf"}else{"png"}}
if (knitr::is_html_output()){
set.seed(99999)
num.in.hand <- 15
num.sims <- 10
x.loc <- 1:(num.in.hand)
y.loc <- 1
prop.red <- array(dim = num.sims)
all.hands <- array( dim = c(num.sims, num.in.hand))
for (i in 1:num.sims){
par ( mar = c(5.1, 5.1, 4.1, 2.1)) # DEFAULT is 5.1, 4.1, 4.1, 2.1
plot( x = c(1, (num.in.hand + 2)),
y = c(1, num.sims),
type = "n",
las = 1,
xlab = "",
ylab = "",
main = paste("Hand number",i),
axes = FALSE)
hand <- sample( c("B", "R"),
num.in.hand,
replace = TRUE)
all.hands[i, ] <- hand
prop.red[i] <- mean(hand == "R")   # proportion of red cards in this hand
for (j in 1:i){
text(y = j,
x = 1:num.in.hand,
labels = all.hands[j, ],
col = ifelse(all.hands[j, ] == "R", "red", "grey") )   # show red cards in red
}
# Add p-hat heading
mtext(expression(hat( italic(p) ) ),
side = 3,
line = 0,
at = num.in.hand + 2 )
# Add the hand number to the left-hand side
axis(side = 2,
at = 1:i,
las = 1,
labels = paste("Hand ", 1:i, sep = "") )
# Add the sample proportion to right-hand side
text(num.in.hand + 2, 1 : i,
labels = format(round(prop.red[1 : i], 2), nsmall = 2) )
#Add dividing line
abline(v = num.in.hand + 1,
col = "grey")
# Divide each hand set
abline(h = (1 : i) - 0.5,
col = "grey")
}
}
```
```{r RollDice10, fig.align = "center", fig.width = 6.5, out.width='75%', fig.cap = "Ten hands of $15$ cards: The sample proportion that is red varies from hand to hand (as shown on the right-hand side)"}
if (knitr::is_latex_output()){
set.seed(99999)
num.in.hand <- 15
num.sims <- 10
x.loc <- 1:(num.in.hand)
y.loc <- 1
prop.red <- array(dim = num.sims)
all.hands <- array( dim = c(num.sims, num.in.hand))
par( mar = c(0.1, 4.3, 4.1, 0.1))   # set margins before plotting
plot( x = c(1, (num.in.hand + 2)),
y = c(1, num.sims),
type = "n",
las = 1,
xlab = "",
ylab = "",
main = "Ten example hands of 15 cards",
axes = FALSE)
for (i in 1:num.sims){
hand <- sample( c("B", "R"),
num.in.hand,
replace = TRUE)
all.hands[i, ] <- hand
prop.red[i] <- mean(hand == "R")   # proportion of red cards in this hand
}
for (j in 1:i){
text(y = j,
x = 1:num.in.hand,
labels = all.hands[j, ],
col = ifelse(all.hands[j, ] == "R", "black", grey(0.7)),
font = ifelse(all.hands[j, ] == "R", 2, 1) )   # red cards in bold black
}
# Add p-hat heading
mtext(expression(hat( italic(p) ) ),
side = 3,
line = 0,
at = num.in.hand + 2 )
# Add the hand number to the left-hand side
axis(side = 2,
at = 1:i,
las = 1,
labels = paste("Hand ", 1:i, sep = "") )
# Add the sample proportion to right-hand side
text(num.in.hand+2, 1:i,
labels = format( round(prop.red[1:i], 2), nsmall = 2) )
#Add dividing line
abline(v = num.in.hand + 1,
col = "grey")
# Divide each hand set
abline(h = (1:i) - 0.5,
col = "grey")
}
```
Suppose we repeated this for *hundreds* of hands of $15$ cards (rather than the ten above), and for each hand we recorded $\hat{p}$, the sample proportion of cards that were red.
The value of $\hat{p}$ would vary from sample to sample (sampling variation), and we could record the value of $\hat{p}$ from each of those hundreds of hands.
A histogram of these hundreds of sample proportions could be constructed;
`r if (knitr::is_latex_output()) {
'Fig.\\ \\@ref(fig:HandRedHist1000) shows a histogram of the sample proportions from $1000$ simulations of a hand of $15$ cards.'
} else {
'the animation below shows a histogram of the sample proportions from $1000$ repetitions of a hand of $15$ cards.'
}`
This histogram shows how we might expect the sample proportions $\hat{p}$ to vary from sample to sample, when the *population* proportion of red cards is $p = 0.5$.
```{r DiceHist, animation.hook = "gifski", interval = 0.025, dev=if (is_latex_output()){"pdf"}else{"png"}}
if (knitr::is_html_output()){
set.seed(9900991)
num.in.hand <- 15   # hands of 15 cards, matching the text
num.sims <- 1000
prop.red <- array(dim=num.sims)
for (i in 1:num.sims){
hand <- sample( c("B", "R"),
num.in.hand,
replace = TRUE)
prop.red[i] <- mean(hand == "R")   # proportion of red cards in this hand
out <- hist( prop.red,
breaks = seq(-1/30, 1 + 1/30, by = 1/15),   # bins centred on the attainable proportions k/15
las = 1,
ylim = c(0, 250),
xlim = c(0, 1),
col = plot.colour,
main = paste("Histogram of sample proportions\nHand number:", i),
xlab = "Proportion of the 15 cards that are red",
sub = paste("(For this hand: proportion of cards red: ", format(round(prop.red[i], 3), nsmall = 3), ")", sep = "" ),
ylab = "Frequency",
right = FALSE,
axes = FALSE)
axis(side = 1)
axis(side = 2,
las = 1)
xx <- seq(0, 1, length = 500)
yy <- dnorm(xx,
mean = 0.5,
sd = sqrt(0.5 * 0.5 / num.in.hand) )   # sd of p-hat when p = 0.5
yy <- yy/max(yy) * max(out$count)
lines(yy ~ xx,
col = "grey",
lwd = 2)
}
}
```
```{r HandRedHist1000, fig.align="center", fig.height=3.75,fig.width=6.25, out.width='70%', fig.cap="A histogram of the sample proportion of red cards, in hands of $15$ cards, for $1000$ repetitions" }
if (knitr::is_latex_output()){
set.seed(99030991)
num.in.hand <- 15   # hands of 15 cards, matching the text
num.sims <- 1000
prop.red <- array(dim = num.sims)
for (i in 1:num.sims){
hand <- sample( c("B", "R"), num.in.hand,
replace = TRUE)
prop.red[i] <- mean(hand == "R")   # proportion of red cards in this hand
}
out <- hist( prop.red,
breaks = seq(-1/30, 1 + 1/30, by = 1/15),   # bins centred on the attainable proportions k/15
las = 1,
ylim = c(0, 250),
xlim = c(0, 1),
col = plot.colour,
main = "Histogram of sample proportions\nfor 1000 hands",
xlab = "Proportion of 15 cards that are red",
ylab = "Frequency",
right = FALSE,
axes = FALSE)
axis(side = 1)
axis(side = 2,
las = 1)
xx <- seq(0, 1, length = 500)
yy <- dnorm(xx,
mean = 0.5,
sd = sqrt(0.5 * 0.5 / num.in.hand) )   # sd of p-hat when p = 0.5
yy <- yy/max(yy) * max(out$count)
lines(yy ~ xx,
col = "grey",
lwd = 2)
# Locate p-hat
arrows(x0 = 0.9,
y0 = 125,
x1 = 1,
y1 = 0,
angle = 15,
length = 0.1)
text(x = 0.9,
y = 125,
pos = 3,
labels = expression( hat(italic(p)) == 1) )
points(x = 1,
y = 0,
pch = 19)
}
```
### Observations about our sample {#Observation}
From a sample of $15$ cards (one of the many samples that are possible), suppose the sample statistic is $\hat{p} = 1$ (i.e., $15$ red cards out of $15$ cards).
### Making a decision {#MakeDecision}
<div style="float:right; width: 222x; border: 1px; padding:10px">
<img src="Pics/iconmonstr-shipping-box-7-240.png" width="50px"/>
</div>
You then need to make a decision: is it reasonable to observe $\hat{p} = 1$ when $p = 0.5$ is assumed?
Observing $15$ red cards out of $15$ cards is quite rare: it did not happen even once in the $1000$ simulations, so a sample producing $\hat{p} = 1$ almost *never* occurs.
So, based on simulating one thousand hands, you could conclude that finding $\hat{p} = 1$ *almost never occurs*... *if the assumption of a fair pack were true*.
But we *did* find $\hat{p} = 1$ (i.e., $15$ red cards in $15$ cards)... so the assumption of a fair pack (i.e., $p = 0.5$) is probably wrong.
What if we had observed $11$ red cards in a hand of $15$ cards, a sample proportion of $\hat{p} = 11/15 = 0.733$ (rather than $15$ red cards out of $15$)?
The conclusion is not quite so obvious then:
`r if (knitr::is_latex_output()) {
'Fig.\\ \\@ref(fig:HandRedHist1000) shows that $\\hat{p} = 0.733$ is unlikely... but certainly possible.'
} else {
'the animation above shows that $\\hat{p} = 0.733$ is unlikely... but certainly possible.'
}`
What if $9$ red cards were found in the $15$ (i.e., $\hat{p} = 0.6$)?
`r if (knitr::is_latex_output()) {
'Figure\\ \\@ref(fig:HandRedHist1000) shows that $\\hat{p} = 0.6$'
} else {
'The animation above shows that $\\hat{p} = 0.6$'
}`
could reasonably be observed, since there are many possible samples that lead to $\hat{p} = 0.6$, or even higher.
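To attach rough numbers to these comparisons (a side calculation, again using the independent-draws approximation to dealing without replacement), the chance of a sample proportion at least as large as each observed value can be computed:

```{r, eval = FALSE, echo = TRUE}
# P(at least this many red cards in a hand of 15) when the pack is fair:
sum(dbinom(15, 15, 0.5))      # p-hat = 1:      about 0.00003
sum(dbinom(11:15, 15, 0.5))   # p-hat >= 0.733: about 0.059
sum(dbinom(9:15, 15, 0.5))    # p-hat >= 0.6:   about 0.304
```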
More generally, a better approach is required for making a decision.
Special tools are needed to describe what to *expect* from the *sample* statistic after making *assumptions* about the *population* parameter.
These special tools are discussed in the next chapters.
::: {.example #SamplingVariation name="Sampling variation"}
Many dental associations recommend brushing teeth for two minutes.
One study [@data:Macgregor1979:BrushingDurationKids] recorded the tooth-brushing time for $85$ uninstructed schoolchildren from England.
Of course, every possible sample of $85$ children will include different children, and so produce a different sample mean $\bar{x}$.
Even if the *population* mean tooth-brushing time really is two minutes ($\mu = 2$), the *sample* mean probably won't be exactly two minutes, because of sampling variation.
*Assume* the population mean tooth-brushing time is two minutes ($\mu = 2$).
*If* this is true, we then could describe what values of the sample statistic $\bar{x}$ to *expect* from all possible samples.
Then, after obtaining a sample and computing the sample mean, we could determine if the sample mean seems *consistent* with the assumption of two minutes, or whether it seems to *contradict* this assumption.
:::
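The expectation step in this example could be sketched as a simulation; the population standard deviation used below ($0.6$ minutes) is invented purely for illustration, since the chapter does not give one.

```{r, eval = FALSE, echo = TRUE}
# What sample means to expect from samples of 85 children if mu = 2?
set.seed(1)
xbar <- replicate(1000, mean(rnorm(85, mean = 2, sd = 0.6)))
hist(xbar, xlab = "Sample mean brushing time (mins)", main = "")
range(xbar)   # sample means cluster near 2, but are rarely exactly 2
```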
## Tools for describing sampling variation
Making decisions about population parameters based on a sample statistic is difficult:
only one of the many possible samples is observed.
Since every sample is likely to be different, different values of the sample statistic are possible.
In this chapter, though, a process for making decisions has been studied
`r if( knitr::is_html_output()){
'(Fig.\\ \\@ref(fig:DecisionFlow)).'
} else {
'(Fig.\\ \\@ref(fig:DecisionFlow2)).'
}`
To apply this process to research, describing *how* sample statistics vary from sample to sample (sampling variation) is necessary.
Some of those tools are discussed in the following chapters:
* tools to describe the distribution of the population and the sample: Chap.\ \@ref(SamplingDistributions).
* tools to describe how sample statistics vary from sample to sample (sampling variation), and hence what to expect from the sample statistic: Chap.\ \@ref(SamplingVariation).
* tools to describe the random nature of what happens with sample statistics, and so determine if the sample statistic is consistent with the assumption: Chap.\ \@ref(Probability).
## Summary {#Chap15-Summary}
Decisions are often made by making an *assumption* about the population parameter, which leads to an *expectation* of what might occur in the sample statistics.
We can then make *observations* about our sample, and then make a *decision* about whether the sample data support or contradict the initial assumption.
## Quick review questions {#Chap15-QuickReview}
::: {.webex-check .webex-box}
1. True or false: Parameters describe *populations*.\tightlist
`r if( knitr::is_html_output() ) {torf(answer = TRUE )}`
1. True or false: Both $\bar{x}$ and $\mu$ are *statistics*.
`r if( knitr::is_html_output() ) {torf(answer = FALSE )}`
1. True or false: The value of a statistic is likely to be different in every sample.
`r if( knitr::is_html_output() ) {torf(answer = TRUE )}`
1. True or false: *Sampling variation* describes how the value of a *statistic* varies from sample to sample.
`r if( knitr::is_html_output() ) {torf(answer = TRUE )}`
1. True or false: The initial assumption is made about the *sample statistic*.
`r if( knitr::is_html_output() ) {torf(answer = FALSE )}`
1. True or false: The variation in statistics from sample to sample is called *sampling variation*.
`r if( knitr::is_html_output() ) {torf(answer = TRUE )}`
1. True or false: If the sample results seem inconsistent with what was expected, then the assumption about the population is probably true.
`r if( knitr::is_html_output() ) {torf(answer = FALSE )}`
1. True or false: In the sample, we know exactly what to expect.
`r if( knitr::is_html_output() ) {torf(answer = FALSE )}`
1. True or false: Hypotheses are made about the population.
`r if( knitr::is_html_output() ) {torf(answer = TRUE )}`
:::
## Exercises {#MakingDecisionsExercises}
Selected answers are available in Sect.\ \@ref(MakingDecisionsAnswer).
::: {.exercise #MakingDecisionsDice}
Suppose you are playing a die-based game, and your opponent rolls a `r include_graphics("Dice/die6.png", dpi=1250)` ten times in a row.
1. Do you think there is a problem with the die?
1. Explain how you came to this decision.
:::
::: {.exercise #MakingDecisionsClaim}
In a 2012 advertisement, an Australian pizza company claimed that their 12-inch pizzas were 'real 12-inch pizzas' [@mypapers:Dunn:PizzaSize].
1. What is a reasonable assumption to make to test this claim?
1. The claim is based on a sample of 125 pizzas, for which the sample mean pizza diameter was $\bar{x} = 11.48$ inches.
What are the two reasons why the sample mean is not 12 inches?
1. Does the claim appear to be supported by, or contradicted by, the data?
Why?
1. Would your conclusion change if the sample mean was $\bar{x} = 11.25$ inches, rather than 11.48 inches?
Does the claim appear to be supported by, or contradicted by, the data?
Why?
1. Does your answer depend on the sample size?
For example, is observing a sample mean of 11.25 inches from a sample of size 10 equivalent to observing a sample mean of 11.25 inches from a sample of size 125?
:::
<!-- QUICK REVIEW ANSWERS -->
`r if (knitr::is_html_output()) '<!--'`
::: {.EOCanswerBox .EOCanswer data-latex="{iconmonstr-check-mark-14-240.png}"}
**Answers to in-chapter questions:**
- \textbf{\textit{Quick Revision} questions:}
**1.** True.
**2.** False.
**3.** True.
**4.** True.
**5.** False.
**6.** True.
**7.** False.
**8.** False.
**9.** True.
:::
`r if (knitr::is_html_output()) '-->'`