clarifying when approximations are used by effectsize
#184
effectsize gives a warning, but this is suppressed by report... :/

If not suppressed, can it be muted?

We should use tryCatch() to catch the warning and then adjust the report output accordingly.
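A minimal sketch of that idea, using a hypothetical helper (this is not `report`'s actual code): a calling handler lets the computation finish while recording and muffling the warning, which a plain `tryCatch()` would not, since it unwinds evaluation at the warning.

```r
# Hypothetical helper: evaluate an expression, muffle any warning,
# and record whether one occurred so the caller can flag the output.
compute_with_flag <- function(expr) {
  approximate <- FALSE
  value <- withCallingHandlers(
    expr,
    warning = function(w) {
      approximate <<- TRUE           # remember that a warning fired
      invokeRestart("muffleWarning") # suppress the warning message
    }
  )
  list(value = value, approximate = approximate)
}

res <- compute_with_flag(as.numeric("abc"))  # as.numeric() warns here
res$approximate  # TRUE, so the printed label could say "(approximate)"
```

`tryCatch()` would also work for detection, but it aborts the computation when the warning is raised, so the result would have to be recomputed; a calling handler records the condition and continues.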
If I might be so bold as to make a suggestion: if an error message does get generated in the future, it might be useful to display the "correct" format for getting the "right" results as part of the message, as @mattansb posted in the other thread.
Thinking out loud here: how about marking approximate values directly in the output? For example, here it would change to `Cohen's d (approximate) = 2.03, 95% CI [1.13, 2.91]`
That would be great, and maybe (instead of what I suggested above) it could describe how to "correct" the approximation in the docs or in an example or something :)

This would essentially be "don't use formula notation for htest functions" - that's the solution! (:

You are operating on the assumption that novices like me know what "formula notation" is ;)
Or better, don't use htest functions, just fit a linear model 😜 |
The formula notation vs. passing values directly:

```r
t.test(mpg ~ am, data = mtcars)                                # formula
t.test(mtcars$mpg[mtcars$am == 0], mtcars$mpg[mtcars$am == 1]) # pass values
```

But really, @leighclark, don't use the `*.test` functions. Use `lm()` instead: https://lindeloev.github.io/tests-as-linear/
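To make the `lm()` suggestion concrete (a worked sketch on the built-in `mtcars` data, not code from the thread): a two-sample t-test and a regression on the binary group indicator are the same model, so the slope's t statistic matches the equal-variance test statistic.

```r
# Two-sample t-test expressed as a linear model: mpg regressed on am.
fit <- lm(mpg ~ am, data = mtcars)
t_lm <- summary(fit)$coefficients["am", "t value"]

# Classic pooled-variance t-test on the same grouping
t_classic <- unname(t.test(mpg ~ am, data = mtcars, var.equal = TRUE)$statistic)

abs(t_lm) - abs(t_classic)  # essentially 0: it is the same test
```

The signs can differ (the slope is group 1 minus group 0, while `t.test()` reports group 0 minus group 1), but the magnitude and the p-value are identical.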
Hmm, this is good advice maybe for inference in general - but

Hey @bwiernik, I think (know) we are at totally different stages of our psychology and statistics careers, so I am still at the stage where I have to do the things that are given to me. But thanks for the tip, and I will take a look at it! Cheers,
This would be a great addition to the docs / examples ... I just needed to find this issue to figure out how to report t.tests.
Results from `report` and `effectsize` can differ: since `report` calls `effectsize` on model objects to compute effect sizes, `effectsize` often needs to use an approximation in case `insight::get_data()` doesn't work. I think the `report` output should somehow note this for the user, lest they get confused as to why the values don't match with other package outputs (including `effectsize`). For example: #183 (comment)
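To illustrate why the numbers can drift apart (my own worked example, not taken from the issue): when only an `htest` object is available, the effect size has to be back-converted from the test statistic, commonly via d ≈ 2t/√df, whereas with the raw data the exact pooled-SD Cohen's d can be computed. With unequal group sizes the two disagree slightly:

```r
x <- mtcars$mpg[mtcars$am == 0]  # n = 19
y <- mtcars$mpg[mtcars$am == 1]  # n = 13

# Exact Cohen's d from the raw data (pooled SD)
n1 <- length(x); n2 <- length(y)
sp <- sqrt(((n1 - 1) * var(x) + (n2 - 1) * var(y)) / (n1 + n2 - 2))
d_exact <- (mean(x) - mean(y)) / sp

# Approximation from the test statistic alone (implicitly assumes equal n)
tt <- t.test(x, y, var.equal = TRUE)
d_approx <- 2 * unname(tt$statistic) / sqrt(unname(tt$parameter))

c(exact = d_exact, approx = d_approx)  # close, but not identical
```

The gap is small here, but it is exactly the kind of discrepancy a reader would notice when comparing `report` output against `effectsize` run on the raw data, which is why flagging the approximation matters.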