We're all on the spectrum
Submitted 1 month ago by fossilesque@mander.xyz to science_memes@mander.xyz
https://mander.xyz/pictrs/image/79e3a196-fe93-4ea5-80fb-cba385c6ba82.jpeg
Comments
taiyang@lemmy.world 1 month ago
Frequentist statistics are really… silly in a way. And that's coming from someone who has to teach it. Sure, p is less than 0.05, but you sampled 100,000 people; at that sample size, even an effect size of 0.05 comes out significant. "bUt ItS sIgNiFiCaNt"… Oy.
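A quick sketch of that complaint in numbers (Python; the group means, SD, and seed are all made up for illustration):

```python
# With n = 100,000 per group, a negligible effect (Cohen's d = 0.05)
# still sails past p < 0.05. Numbers are invented, not from any study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000
control = rng.normal(loc=0.00, scale=1.0, size=n)
treated = rng.normal(loc=0.05, scale=1.0, size=n)  # true d = 0.05

t, p = stats.ttest_ind(treated, control)
d = (treated.mean() - control.mean()) / np.sqrt((treated.var() + control.var()) / 2)
print(f"p = {p:.2e}, observed d = {d:.3f}")
# p comes out astronomically small even though d is practically meaningless.
```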
Contramuffin@lemmy.world 1 month ago
I get very suspicious if a paper samples multiple groups and still uses p-values. You would use q-values in that case to correct for the multiple comparisons, and the fact that they didn't suggests that nothing came up positive once corrected.
Still, in my opinion it’s generally OK if they only use the screen as a starting point and do follow-up experiments afterwards
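For anyone curious what that p-vs-q distinction looks like in practice, here's a minimal sketch using statsmodels' Benjamini-Hochberg FDR correction; the raw p-values are invented for illustration:

```python
# Adjusting raw p-values from multiple comparisons for the false
# discovery rate; fdr_bh is the Benjamini-Hochberg procedure.
from statsmodels.stats.multitest import multipletests

pvals = [0.001, 0.012, 0.034, 0.049, 0.21, 0.44]  # six hypothetical tests
reject, qvals, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

for p, q, r in zip(pvals, qvals, reject):
    print(f"p = {p:.3f} -> q = {q:.3f}  significant after FDR: {r}")
# Some entries with raw p < 0.05 stop being significant once corrected.
```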
taiyang@lemmy.world 1 month ago
Yeah, I used to work in a field with huge samples, so significance wasn't really all that useful. I usually just report the significant coefficients and try to make clear what changes from model to model. For instance, if a type of curriculum showed improvements in test scores, you simply say how much and, possibly, illustrate it by saying a student went from the 50th percentile to the 55th.
Every field varies, though. I find it crazy how much the psychologists I've worked with care about r-squared. To each their own, I guess.
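As a rough sketch of that percentile framing (the 0.125 SD effect is a made-up number chosen to produce about a five-point move):

```python
# A coefficient in standard-deviation units maps to a percentile shift
# via the normal CDF; 0.125 SD is hypothetical, picked for the example.
from scipy.stats import norm

effect_sd = 0.125  # hypothetical curriculum effect in SD units
start = 0.50       # student starting at the 50th percentile

new_percentile = norm.cdf(norm.ppf(start) + effect_sd)
print(f"percentile: {start:.0%} -> {new_percentile:.0%}")
# Roughly 50% -> 55%, which is easier to communicate than a raw beta.
```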
OrnateLuna@lemmy.blahaj.zone 1 month ago
The fun part is that we don’t
marcos@lemmy.world 1 month ago
We don’t. We keep just doing things and good things keep happening afterwards.
We don’t even know if those two facts are linked in any way.
degen@midwest.social 1 month ago
Nearly irrelevant xkcd
At least in software we know where the linchpins are on some level.
Azuth@lemmy.today 1 month ago
Descartes said it best. The only thing I can know for sure is that I do, in fact, exist.