FDR calculations care about ratios of null/total
# make 10 small p-values (p = 0.01)
# and 90 big p-values (p = 0.99)
p = rep(c(.01,.99), c(10,90))
# adjust the p-values with Benjamini-Hochberg method
# and then tabulate them
table(p.adjust(p, method="BH"))
 0.1 0.99 
  10   90 
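Why 0.1? The BH step-up arithmetic adjusts the largest of the 10 small p-values to .01 * 100 / 10 = 0.1, and monotonicity pulls the other nine up to the same value. A minimal sketch of what p.adjust(method="BH") computes (my own restatement of the standard step-up procedure, not code from the gist):

bh_adjust = function(p) {
  n = length(p)
  o = order(p)                  # sort p-values ascending
  adj = p[o] * n / seq_len(n)   # raw BH ratios p_(i) * n / i
  adj = rev(cummin(rev(adj)))   # step-up: running minimum from the largest down
  pmin(1, adj)[order(o)]        # cap at 1, return to the original order
}
table(bh_adjust(p))             # reproduces the table above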
# assume we have higher resolution of the exact same signal
# simulate this by just repeating each p-value from before 100 times
p.higher.resolution = rep(p, each=100)
# adjust the p-values and tabulate
table(p.adjust(p.higher.resolution, method="BH"))
 0.1 0.99 
1000 9000 
I'm not claiming that on real data there won't be a hit from multiple test correction. My point is just that increasing the number of tests doesn't necessarily hurt you in terms of power, as long as the null/total ratio remains the same.
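To see the ratio doing the work, here is a sketch (my own illustration, not from the gist) where the extra tests are all null, so the null/total ratio rises from 90/100 to 990/1000:

# add 900 extra null p-values to the original 100
p.more.nulls = c(p, rep(.99, 900))
table(p.adjust(p.more.nulls, method="BH"))
0.99 
1000 

Now even the 10 real signals are adjusted up to 0.99: the hit comes from changing the ratio, not from the number of tests per se.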
Two points for real data would be: rep(p, each=100) is not realistic.
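As an illustration of that point, a hedged sketch (the function name, effect size, and cutoff are my own assumptions, not from the gist) of a slightly more realistic setup: null p-values drawn from Uniform(0,1), alternative p-values from one-sided z-tests, with the null/total ratio held at 90/10 while the number of tests m grows:

# estimate power among the true alternatives at an FDR cutoff of 0.1;
# frac.null and effect are arbitrary choices for illustration
simulate.power = function(m, frac.null = 0.9, effect = 3) {
  n.null = round(m * frac.null)
  n.alt = m - n.null
  # nulls: Uniform(0,1); alternatives: one-sided z-tests of N(effect, 1) draws
  p = c(runif(n.null), pnorm(rnorm(n.alt, mean = effect), lower.tail = FALSE))
  mean(p.adjust(p, method = "BH")[(n.null + 1):m] < 0.1)
}
set.seed(1)
sapply(c(100, 1000, 10000), function(m) mean(replicate(50, simulate.power(m))))

With the ratio fixed, the estimated power should stay roughly flat as m increases, which is the point of the gist.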