
WHAT IF WHAT IF

226 likes

Submitted 1 day ago by FactChecker@lemmy.world to [deleted]

https://lemmy.world/pictrs/image/430306f9-47f5-42a4-99ef-e265bc9bde1b.jpeg


Comments

  • TropicalDingdong@lemmy.world 1 day ago

    Image

    • chuckleslord@lemmy.world 18 hours ago

      Yes, but one should be on Mr. Peanutbutter’s head instead of Princess Carolyn’s

  • jeffw@lemmy.world 1 day ago

    It’s called publication bias, idiots. Research that has no strong result doesn’t get published.

    • skibidi@lemmy.world 10 hours ago

      And it is a terrible thing for science and contributes greatly to the crisis of irreproducibility plaguing multiple fields.

      If 1000 researchers study the same thing, and 950 of them find insignificant results and don’t publish, and 50 of them publish their significant (95% confidence) results, then we have collectively deluded ourselves into accepting spurious conclusions.

      This is a massive problem that is rarely acknowledged and even more rarely discussed.
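
      A minimal simulation sketch of that 1000-researcher scenario (hypothetical numbers, plain NumPy): when there is no real effect at all, roughly 5% of studies still clear the usual z ≈ 1.96 bar by chance, and if only those get published, the published record consists entirely of false positives.

      ```python
      # Sketch of the publication-bias scenario above (illustrative only).
      # Assumes the true effect is exactly zero, so every "significant"
      # study is a false positive.
      import numpy as np

      rng = np.random.default_rng(0)
      n_studies, n_per_study = 1000, 50

      # Each study measures a quantity with zero mean and known unit variance.
      samples = rng.normal(loc=0.0, scale=1.0, size=(n_studies, n_per_study))
      z = samples.mean(axis=1) * np.sqrt(n_per_study)   # z-statistic per study

      significant = np.abs(z) > 1.96      # the ~5% that clear the bar by luck
      published = z[significant]          # what the literature ends up seeing

      print(f"studies run:       {n_studies}")
      print(f"studies published: {published.size} (about 5% expected)")
      print("false positives among published studies: all of them")
      ```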

    • LanguageIsCool@lemmy.world 1 day ago

      “What if we kissed and it was so average that science didn’t talk about us?”

      There we go

      • msage@programming.dev 18 hours ago

        Hey! I feel seen!

    • ethaver@kbin.earth 1 day ago

      which is such a shame because there really should be more evidence for what is and isn’t a placebo

      • udon@lemmy.world 1 day ago

        Easier said than done, though. If the results are non-significant, that can be due to all sorts of things, only one of which is the lack of an actual effect: the measure is bad, noisy, or poorly calibrated; the research plan has flaws; the sample is too small; and so on. Most non-significant results are due to bad research, and it’s hard to identify the other ones. Preregistration and registered reports are some ideas to change that.
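
        A small sketch of the small-sample point (the effect size and sample sizes here are made up): with a real but modest effect, most underpowered studies still come back non-significant, so a null result by itself cannot distinguish "no effect" from "too little data".

        ```python
        # Sketch: a genuine effect often yields a non-significant result
        # when the sample is small (effect size and sample sizes are made up).
        import numpy as np

        rng = np.random.default_rng(1)
        true_effect = 0.3                # a real, modest effect in SD units

        for n in (20, 80, 320):          # hypothetical sample sizes
            sims = rng.normal(loc=true_effect, scale=1.0, size=(5000, n))
            z = sims.mean(axis=1) * np.sqrt(n)     # z-statistic, sigma known = 1
            power = np.mean(np.abs(z) > 1.96)      # share reaching p < 0.05
            print(f"n = {n:3d}: {power:.0%} of studies detect the (real) effect")
        ```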

    • pendel@feddit.org 1 day ago

      Big research hates this trick

  • humorlessrepost@lemmy.world 4 hours ago

    The Y axis is humor.

    The X axis is how wet your fart was.

  • InternetCitizen2@lemmy.world 10 hours ago

    Might be too late, but can we also grope and make out under the 10 commandments?

    • HeyThisIsntTheYMCA@lemmy.world 5 hours ago

      could we pick somewhere that’s not a splash zone?
      Image

  • LodeMike@lemmy.today 1 day ago

    Z score for what? What are these numbers?

    I know what a Z score is, I just don’t know what this means.

    • whosepoopisonmybuttocks@sh.itjust.works 17 hours ago

      My limited knowledge on this subject: The z-score is how many standard deviations you are from the mean.

      In statistical analysis, things are often evaluated against a p (probability) of 0.05 (or 5%), which also corresponds to a z-score of 1.96 (or roughly 2).

      So, when you’re looking at your data, things with a z-score > 2 or < -2 would correspond to findings that are “statistically significant,” in that you’re at least 95% sure that your findings aren’t due to random chance.

      As others here have pointed out, z-scores closer to 0 would correspond to findings where they couldn’t be confident that whatever was being tested was any different than the control, akin to a boring paper which wouldn’t be published: “We tried some stuff but idk, didn’t seem to make a difference.”
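
      For anyone wanting to check the 1.96 figure, a two-line sketch with SciPy (any other threshold works the same way):

      ```python
      # Relating the two-tailed p-value threshold to the z-score cut-off.
      from scipy.stats import norm

      alpha = 0.05
      z_crit = norm.ppf(1 - alpha / 2)       # about 1.96: |z| needed for p < 0.05
      p_from_z = 2 * (1 - norm.cdf(1.96))    # about 0.05: two-tailed p at |z| = 1.96

      print(f"|z| cut-off for p < {alpha}: {z_crit:.2f}")
      print(f"two-tailed p at |z| = 1.96: {p_from_z:.3f}")
      ```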

      • HeyThisIsntTheYMCA@lemmy.world 5 hours ago

        I’m in a couple of “we tried some stuff but it really didn’t work” medical “research” papers, which we published so no one would try the same thing again.

      • Passerby6497@lemmy.world 16 hours ago

        But it could also make for an interesting paper, “We tried putting healing crystals above cancer patients but it didn’t seem to make any difference.”

        But then you have competing bad outcomes:

        1. The cancer patients aren’t given any other treatment, so you’re effectively harming them through lack of action/treatment
        2. The cancer patients are given other (likely real) treatments, meaning your paper is absolutely meaningless
    • TropicalDingdong@lemmy.world 1 day ago

      Z value (also known as z-score) is the signed distance between an observation and your model’s prediction, measured in standard deviations.

      If your model is a mean (the average), the z-scores are the differences between the individual values and that mean, divided by their standard deviation.

      If your model is a regression (relating, say, two variables x and y), then the z-scores are the differences between the regression line and the values used to fit it, again scaled by the standard deviation of those differences.
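
      A compact sketch of both cases with made-up data (the differences are divided by the standard deviation so they are proper z-scores):

      ```python
      # z-scores relative to a mean, and standardized residuals from a simple fit.
      import numpy as np

      rng = np.random.default_rng(2)

      # Case 1: the "model" is just the mean of the data.
      x = rng.normal(10, 2, size=100)
      z_mean = (x - x.mean()) / x.std(ddof=1)

      # Case 2: the "model" is a least-squares line relating x and y.
      y = 3 * x + rng.normal(0, 2, size=100)
      slope, intercept = np.polyfit(x, y, 1)
      residuals = y - (slope * x + intercept)
      z_fit = residuals / residuals.std(ddof=1)

      print(z_mean[:5].round(2), z_fit[:5].round(2))
      ```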

    • marcos@lemmy.world 1 day ago

      As I understand it, the data there are a histogram of the z-values observed in some census of published papers.

      They should form a roughly normal curve, but the publishing process is biased.
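
      A rough sketch of that idea (the numbers are purely illustrative): draw z-values that form a smooth curve, keep the significant ones plus, say, 10% of the rest, and a "missing middle" around zero appears.

      ```python
      # Sketch: how selective publication carves a dip out of an otherwise
      # smooth distribution of z-values (all numbers here are made up).
      import numpy as np

      rng = np.random.default_rng(3)
      z_all = rng.normal(0, 2, size=100_000)     # z-values from all studies run

      keep = (np.abs(z_all) > 1.96) | (rng.random(z_all.size) < 0.1)
      z_published = z_all[keep]                  # significant ones, plus a lucky 10%

      # Share of |z| < 1.96 before and after the "publication filter".
      print(f"all studies:       {np.mean(np.abs(z_all) < 1.96):.0%} non-significant")
      print(f"published studies: {np.mean(np.abs(z_published) < 1.96):.0%} non-significant")
      ```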

      • Squirrelsdrivemenuts@lemmy.world 22 hours ago

        But we also prioritize research where we suspect/hypothesize differences, so I think even if all research were published it wouldn’t necessarily be a normal distribution.

    • ivanafterall@lemmy.world 1 day ago

      A Z score is a type of airplane, I believe.

    • iAvicenna@lemmy.world 22 hours ago

      I came here with the same question, but now I realize that if I ask it I will only get replies explaining to me what a Z-score is and not a Z-score of what. So I will just assume it is something akin to the h-index. It still does not make much sense to me why average h-index papers “don’t survive” (i.e. get rejected because no one is interested, let’s say) whereas negative ones do.

  • Kolanaki@pawb.social 1 day ago

    We might not survive.
