
just p-hack it.

323 likes

Submitted 2 days ago by fossilesque@mander.xyz to science_memes@mander.xyz

https://mander.xyz/pictrs/image/ae9da594-9219-4cc4-868a-35a587ce1246.jpeg

Comments

  • bratorange@feddit.org 1 day ago

    Funny thing: there are actually attempts at modeling uncertainty in deep learning, but they are rarely used because they are either super inaccurate or super slow to converge (MCMC, Bayesian neural networks). The problem is essentially that learning algorithms cannot properly integrate over uncertainty distributions, so only an approximation can be trained, which is often pretty slow.
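
    For intuition, a minimal sketch (not from the thread; it assumes only NumPy, and the toy model is made up) of what MCMC over model parameters looks like for a one-weight linear model. Even with a single parameter it takes thousands of sampling steps to turn a point estimate into a posterior, which hints at why it scales so badly to deep networks:

    ```python
    # Toy Metropolis-Hastings over the single weight of y = w * x + noise,
    # giving a posterior distribution (uncertainty) instead of a point estimate.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=50)
    y = 2.0 * x + rng.normal(scale=0.5, size=50)  # true w = 2.0

    def log_posterior(w):
        # Gaussian likelihood (sigma = 0.5) plus a standard normal prior on w.
        log_lik = -np.sum((y - w * x) ** 2) / (2 * 0.5 ** 2)
        log_prior = -0.5 * w ** 2
        return log_lik + log_prior

    samples, w = [], 0.0
    for _ in range(10_000):  # thousands of steps, for a single parameter
        proposal = w + rng.normal(scale=0.1)
        if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(w):
            w = proposal
        samples.append(w)

    burned_in = np.array(samples[2_000:])  # discard burn-in
    print(f"posterior for w: {burned_in.mean():.3f} +/- {burned_in.std():.3f}")
    ```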

    • uuldika@lemmy.ml 1 day ago

      If they existed, they'd be killer for RL. RL is insanely unstable when the distribution shifts as the policy starts exploring different parts of the state space. You'd think there'd be some clean approach to learning P(Xs|Ys) that can handle continuous shift of the Ys distribution in the training data, but there doesn't seem to be. Just replay buffers and other kludges.
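
      (A replay buffer, for anyone unfamiliar with the kludge, is just a bounded store of past transitions that you sample from uniformly, which smooths the shifting data distribution a bit without actually modeling it. A minimal Python sketch; the class and method names are illustrative, not from any particular RL library.)

      ```python
      # Minimal replay buffer sketch: old transitions age out, and batches mix
      # data gathered under old and new policies.
      import random
      from collections import deque

      class ReplayBuffer:
          def __init__(self, capacity=100_000):
              self.buffer = deque(maxlen=capacity)  # oldest entries are dropped

          def push(self, state, action, reward, next_state, done):
              self.buffer.append((state, action, reward, next_state, done))

          def sample(self, batch_size):
              # Uniform sampling dampens (but does not solve) the distribution
              # shift described above; the list copy keeps the sketch simple.
              return random.sample(list(self.buffer), batch_size)

          def __len__(self):
              return len(self.buffer)
      ```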

  • TropicalDingdong@lemmy.world 2 days ago

    Cross validation: “What am I, a joke to you?”

    • icelimit@lemmy.ml 1 day ago

      What’s cross validation?

      • TropicalDingdong@lemmy.world 1 day ago

        Cross validation is a way of estimating the uncertainty of any model (it doesn’t have to be a machine learning model).

        A common cross validation approach for small datasets is LOOCV (leave-one-out cross validation); another is K-fold cross validation. In either case, the basic idea is to leave out some amount of your data, keep it totally removed from the training process, train your model on the rest, and then validate the trained model on the held-out data. You repeat this over each of the K folds (or each held-out point) to get a valid uncertainty estimate.

        So a few things. First, this is a standard approach in machine learning because, once you stop making the assumptions of frequentism (and you probably should), you no longer get things like uncertainty for free, since those assumptions aren’t met.

        In some machine learning approaches this is necessary because there really isn’t a tractable way to get uncertainty from the model itself (although in others, like random forests, you get cross validation essentially for free).

        Cross validation is great because you really don’t need to understand anything about the model itself; you just implement the validation strategy and you get a valid estimate of the model’s uncertainty.
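
        As a concrete sketch of the strategy (assuming scikit-learn; the dataset and model here are just placeholders), K-fold cross validation is only a few lines:

        ```python
        # 5-fold cross validation with a placeholder regression dataset and model.
        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import KFold, cross_val_score

        X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
        model = RandomForestRegressor(n_estimators=100, random_state=0)
        cv = KFold(n_splits=5, shuffle=True, random_state=0)

        # Each fold is held out of training and used only for validation,
        # so the spread of the scores estimates the model's uncertainty.
        scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
        print("R^2 per fold:", np.round(scores, 3))
        print(f"mean {scores.mean():.3f} +/- {scores.std():.3f}")
        ```

        The spread of the per-fold scores is the uncertainty estimate; swapping KFold for LeaveOneOut would give the LOOCV variant mentioned above.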

      • Sirius006@sh.itjust.works 1 day ago

        A joke to you.

  • ZkhqrD5o@lemmy.world 1 day ago

    Haha, random sampling go brr. :)

  • minoscopede@lemmy.world 1 day ago

    Thank you for your service, brave memer.
