A z-value (also known as a z-score) is the signed distance between an observation and your model's prediction, measured in standard deviations.
If your model is the mean (the average), each z-score is the difference between a value and the mean, divided by the standard deviation of the values used to compute that mean.
If your model is a regression (relating, say, two variables x and y), the same idea applies: you take the residuals, the differences between the fitted regression line and the observed values, and standardize them.
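A minimal sketch of the mean case described above, using made-up numbers (the data values here are purely illustrative):

```python
# Hypothetical data; the point is the standardization step, not the values.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

mean = sum(data) / len(data)                                # 5.0
variance = sum((x - mean) ** 2 for x in data) / len(data)   # population variance
std = variance ** 0.5                                       # 2.0

# Each z-score is the signed distance from the mean, in units of std.
z_scores = [(x - mean) / std for x in data]
print(z_scores[0])  # (2.0 - 5.0) / 2.0 = -1.5
```

The sign tells you which side of the mean a value falls on; the magnitude tells you how far, in standard deviations.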
whosepoopisonmybuttocks@sh.itjust.works 19 hours ago
My limited knowledge on this subject: The z-score is how many standard deviations you are from the mean.
In statistical analysis, findings are often evaluated against a p (probability) of 0.05 (or 5%), which corresponds to a two-tailed z-score of 1.96 (or roughly 2).
So, when you're looking at your data, things with a z-score > 2 or < −2 would correspond to findings that are "statistically significant," in that you're at least 95% sure that your findings aren't due to random chance.
As others here have pointed out, z-scores closer to 0 would correspond to findings where the researchers couldn't be confident that whatever was being tested was any different from the control, akin to a boring paper that wouldn't be published: "We tried some stuff but idk, didn't seem to make a difference."
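The rule of thumb above can be sketched as a simple check, assuming a two-tailed test at alpha = 0.05 (the 1.96 cutoff comes from the standard normal distribution):

```python
# Two-tailed significance check at alpha = 0.05.
# 1.96 is the standard normal critical value; |z| > 1.96 -> "significant".
def is_significant(z: float, threshold: float = 1.96) -> bool:
    return abs(z) > threshold

print(is_significant(2.3))   # True: likely not random chance
print(is_significant(0.4))   # False: the "boring paper" case
print(is_significant(-2.5))  # True: significant in the other direction
```

Note that the cutoff catches large deviations on either side of 0, which is why both > 2 and < −2 count.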
HeyThisIsntTheYMCA@lemmy.world 7 hours ago
i’m in a couple “we tried some stuff but it really didn’t work” medical “research” papers, which we published so no one would try the same thing again.
Passerby6497@lemmy.world 18 hours ago
But then you have competing bad outcomes:
whosepoopisonmybuttocks@sh.itjust.works 13 hours ago
There’s certainly a lot to discuss relative to experimental design and ethics. Peer review and good design hopefully minimize the clearly undesirable scenarios you describe, as well as other subtle sources of error.
I was really just trying to explain what we’re looking at on OP’s graph.