Bayesian purists cope and seethe.
Most machine learning is closer to universal function approximation via autodifferentiation. Backpropagation just lets you create numerical models with insane parameter dimensionality.
Submitted 4 months ago by driving_crooner@lemmy.eco.br to science_memes@mander.xyz
https://lemmy.eco.br/pictrs/image/2395bbc1-a540-443a-af8e-78c9d0095de4.webp
I like your funny words, magic man.
erm, in English, please!
- Universal function approximation: neural networks.
- Auto-differentiation: algorithmic calculation of partial derivatives (aka gradients).
- Backpropagation: when using a neural network (or most ML algorithms, actually), you find the difference between the model's prediction and the original labels, and that difference is sent back through the model as gradients (of the loss function).
- Parameter dimensionality: the “neurons” in the neural network, i.e., the weight matrices.
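If it helps to see it, here's a minimal made-up sketch of auto-differentiation and backprop in action (PyTorch syntax; the tiny model and numbers are just for illustration):

```python
# Toy model y = w * x + b, a squared-error loss, and autograd computing
# the gradients that backpropagation "sends back".
import torch

w = torch.tensor(2.0, requires_grad=True)   # a "weight" parameter
b = torch.tensor(0.5, requires_grad=True)   # a bias parameter

x, label = torch.tensor(3.0), torch.tensor(10.0)
prediction = w * x + b                      # model prediction (6.5)
loss = (prediction - label) ** 2            # difference from the label

loss.backward()                             # backpropagation
print(w.grad, b.grad)                       # partial derivatives of the loss
```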
If that's your argument, it's worse than statistics, imo. At least statistics has solid theorems and proofs (albeit under very controlled distributions). All DL has right now is a bunch of papers, published most often by large tech companies, which may or may not work for the problem you're working on.
The universal function approximation theorem is pretty dope, tho. I'm not saying ML isn't interesting; some of it is, but most of it is meh. It's fine.
A monad is just a monoid in the category of endofunctors, after all.
No, no, everyone knows that a monad is like a burrito.
(Joke is referencing this: blog.plover.com/prog/burritos.html )
pee pee poo poo wee wee
Any practical universal function approximation will go against entropy.
Eh. Even heat is a statistical phenomenon, at some reference frame or another. I’ve developed model-dependent apathy.
Yes, this is much better because it reveals the LLMs are laundering bias from a corpus of dimwittery.
Inb4 the AI suggests putting glue on pizza
iT’s JuSt StAtIsTiCs
But it is, and it always has been. Absurdly complexly layered statistics, calculated faster than a human could.
This whole “we can’t explain how it works” is bullshit from software engineers too lazy to unwind the emergent behavior caused by their code.
I agree with your first paragraph, but unwinding that emergent behavior really can be impossible. It's not just a matter of taking spaghetti code and deciphering it; ML usually works by generating weights in something like a decision tree, neural network, or statistical model.
Assigning any sort of human logic to why particular weights ended up where they are is educated guesswork at best.
It’s totally statistics, but that second paragraph really isn’t how it works at all. You don’t “code” neural networks the way you code up a website or a game. There’s no “if (userAskedForThis) {DoThis()}”. All the coding you do in neural networks is to define a model and a training process, but that’s it; before training, the behavior is completely random.
The neural network engineer isn’t directly coding up behavior. They’re architecting the model (random weights by default), setting up an environment (training and evaluation datasets, tweaking some training parameters), and letting the model’s weights be trained or “fit” to the data. Its behavior isn’t designed; the virtual environment it evolved in was. Bigger, cleaner datasets, model architectures suited to the data, and an appropriate number of training iterations (epochs) can improve results, but they’ll never be perfect, just an approximation.
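As a hedged toy sketch of what that actually looks like (PyTorch, with made-up stand-in data), this is roughly all the “coding” there is; everything else is fitting:

```python
# Define an architecture (weights start random) and a training setup,
# then fit the weights to data. No behavior is coded by hand.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

X = torch.randn(256, 4)                 # stand-in training data
y = X.sum(dim=1, keepdim=True)          # stand-in labels

for epoch in range(100):                # training iterations (epochs)
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)         # prediction vs. labels
    loss.backward()                     # gradients flow backward
    optimizer.step()                    # weights get fit to the data
```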
Well sure, but as someone else said even heat is ultimately statistics. Yeah, ML is statistics, but so what?
This is exactly how I explain the AI (i.e., what the current AI buzzword refers to) to common folk.
And what that means in terms of use cases.
When you indiscriminately take human outputs (knowledge? opinions? excrements?) as an input, an average is just a shitty approximation of pleb opinion.
or stolen data
**AND** stolen data
It’s curve fitting.
But it’s fitting to millions of sets in hundreds of dimensions.
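The one-dimensional toy version of the same idea, as a quick sketch (NumPy, made-up noisy samples); a neural net is this with millions of parameters and hundreds of input dimensions:

```python
# Least-squares curve fit: same principle as training a network,
# just with 3 parameters and 1 dimension.
import numpy as np

x = np.linspace(-1, 1, 50)
y = 2 * x**2 - x + 0.3 + np.random.normal(0, 0.05, x.shape)

coeffs = np.polyfit(x, y, deg=2)   # fit a parabola by least squares
print(coeffs)                      # roughly [2, -1, 0.3]
```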
Well, lots of people blinded by hype here… Obviously it is not simply a statistical machine, but imo it is something worse: some approximation machinery that happens to work, but gobbles up energy as the cost. Something only possible because we are not charging companies enough for electricity, smh.
We’re in the “computers take up entire rooms in a university to do basic calculations” stage of modern AI development. It will improve, but only if we let it develop.
Moore’s law died a long time ago, and AI isn’t getting any more efficient
Honestly, if this massive energy need for AI helps accelerate modular/smaller nuclear reactors, I’m all for it. With some of these insane data centers companies want to build, each one will need its own power plant.
I’ve seen tons of articles on small/modular reactor companies but never seen any make it to the real world yet.
nathanfillionwithhandupmeme.jpg
Neural nets, including LLMs, have almost nothing to do with statistics. There are many different methods in Machine Learning. Many of them are applied statistics, but neural nets are not. If you have any ideas about how statistics are at the bottom of LLMs, you are probably thinking about some other ML technique. One that has nothing to do with LLMs.
That’s where the almost comes in. Unfortunately, there are many traps for the unwary stochastic parrot.
Training a neural net can be seen as a generalized regression analysis. But that’s not where it comes from. Inspiration comes mainly from biology, and also from physics. It’s not a result of developing better statistics. Training algorithms, like Backprop, were developed for the purpose. It’s not something that the pioneers could look up in a stats textbook. This is why the terminology is different. Where the same terms are used, they don’t mean quite the same thing, unfortunately.
Many developments crucial for LLMs have no counterpart in statistics, like fine-tuning, RLHF, or self-attention. Conversely, what you typically want from a regression - such as neatly interpretable parameters with error bars - is conspicuously absent in ANNs.
Any ideas you have formed about LLMs, based on the understanding that they are just statistics, are very likely wrong.
Software developer here: the more I learn about neural networks, the more they seem like very convoluted statistics. They’re also just a simplified model of neurons, so I advise against overhumanizing them, even if they’re called “neurons” and/or Alex.
> The more I learn about neural networks, the more they seem like very convoluted statistics
How so?
I wouldn’t say it is statistics; statistics is much more precise in its calculation of uncertainties. AI depends more on calculus, i.e., automated differentiation, which is also cool but not statistics.
Just because you don’t know what the uncertainties are doesn’t mean they’re not there.
Most optimization problems can trivially be turned into a statistics problem.
> Most optimization problems can trivially be turned into a statistics problem.
Sure, if you mean turning your error function into some sort of likelihood by employing probability distributions that relate to your error function.
But that is only the beginning. Apart from maybe Bayesian neural networks, I haven’t seen much if any work on stuff like confidence intervals for your weights or prediction intervals for the response (that said, I am only a casual follower of this topic).
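For what it’s worth, the standard correspondence being pointed at there (assuming i.i.d. Gaussian noise on the response) is that minimizing squared error is the same as maximizing a Gaussian likelihood:

```latex
\hat{\theta}
  = \arg\max_{\theta} \prod_{i} \frac{1}{\sqrt{2\pi\sigma^{2}}}
      \exp\!\left(-\frac{\bigl(y_i - f_\theta(x_i)\bigr)^{2}}{2\sigma^{2}}\right)
  = \arg\min_{\theta} \sum_{i} \bigl(y_i - f_\theta(x_i)\bigr)^{2}
```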
iTs JusT iF/tHEn 🥴🥴🥴
Also linear algebra and vector calculus
My biggest issue is that a lot of physical models for natural phenomena are being solved using deep learning, and I am not sure how that helps deepen understanding of the natural world. I am for DL solutions, but maybe the DL solutions would benefit from being explainable in some form. For example, it’s kinda old, but I really like all the work around Grad-CAM and its successors: arxiv.org/abs/1610.02391
How is it different from using numerical methods to find solutions to problems for which analytic solutions are difficult, infeasible, or simply impossible to find?
Any tool that helps us understand our universe. All models suck. Some of them are useful nevertheless.
I admit my bias to the problem space though: I’m an AI engineer—classically trained in physics and engineering though.
In my experience, papers which propose numerical solutions cover in great detail the methodology (which relates to some underlying physical phenomena), and also explain boundary conditions to their solutions. In ML/DL papers, they tend to go over the network architecture in great detail as the network construction is the methodology. But the problem I think is that there’s a disconnect going from raw data to features to outputs. I think physics informed ML models are trying to close this gap somewhat.
Well, numerical models have to come up with some model that explains how the relevant observables behave. With AI you don’t even build the model that explains the system physically and mathematically, let alone the solution.
It is basically like having Newton’s Equations vs an AI that produces coordinates with respect to time, given initial conditions and force fields.
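A toy contrast of the two, as a made-up sketch (scikit-learn; not anyone’s actual pipeline): both produce similar coordinates, but only one of them explains anything.

```python
# Newton's closed-form projectile solution vs. a regressor that only
# learns to output coordinates, with no physics inside it.
import numpy as np
from sklearn.neural_network import MLPRegressor

g, v0 = 9.81, 20.0
t = np.linspace(0, 4, 200).reshape(-1, 1)
height = v0 * t - 0.5 * g * t**2             # the explanatory model

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                   random_state=0)
net.fit(t, height.ravel())                   # learns coordinates vs. time

print(height[100, 0], net.predict(t[100:101])[0])  # similar numbers, but
# only the first one tells you *why* the ball comes back down
```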
Moldy Monday!
Aka fudge
FaceDeer@fedia.io 4 months ago
The meme would work just the same with the "machine learning" label replaced with "human cognition."
wizardbeard@lemmy.dbzer0.com 4 months ago
Have to say that I love how this idea congealed into “popular fact” as soon as people’s paychecks started relying on massive investor buy-in to LLMs.
I have a hard time believing that anyone truly convinced that humans operate as stochastic parrots or statistical analysis engines has any significant experience interacting with other human beings.
Less dismissively, are there any studies that actually support this concept?
essell@lemmy.world 4 months ago
Speaking as someone whose professional life depends on an understanding of human thoughts, feelings and sensations, I can’t help but have an opinion on this.
To offer an illustrative example:
When I’m writing feedback for my students, which is a repetitive task with individual elements, it’s original and different every time.
And yet, anyone reading it would soon learn to recognise my style, the same as they could learn to recognise someone else’s, or the way many people have already learned to spot text written by AI.
I think it’s fair to say that this is because we do have a similar system for creating text, especially in response to a given prompt, just like these things called AI. This is why people who read a lot develop their writing skills and style.
But, and this is really significant, that’s not all I have. There’s so much more than that going on in a person.
So you’re both right in a way, I’d say. This is how humans develop their individual style of expression: through data collection and stochastic methods, happening outside of awareness. And as you suggest, just because humans can do this doesn’t mean the two structures are the same.
FaceDeer@fedia.io 4 months ago
I'd love to hear about any studies explaining the mechanism of human cognition.
Right now it's looking pretty neural-net-like to me. That's kind of where we got the idea for neural nets from in the first place.
mosiacmango@lemm.ee 4 months ago
If by “human cognition” you mean “tens of thousands of impoverished people manually checking and labeling images and text” so that the AI can exist, then yes.
If you mean “it’s a living, thinking being,” then no.
Schmeckinger@lemmy.world 4 months ago
My dude it’s math all the way down. Brains are not magic.
jherazob@beehaw.org 4 months ago
Not at all