Did OP mean accomplishing the connectivity with software rather than hardware? No, we don’t have hardware that can modify itself like a brain does, but I think it is possible to accomplish that in code.
los_chill@programming.dev 7 months ago
Neurons undergo physical change in their interconnectivity. New connections (synapses) are created, strengthened, and lost over time. We don’t have circuits that can do that.
RememberTheApollo_@lemmy.world 7 months ago
palebluethought@lemmy.world 7 months ago
Sure, but now you’re talking about running a physical simulation of neurons. Real neurons aren’t just electrical circuits. Not only do they evolve rapidly over time, they’re powerfully influenced by their chemical environment, which is controlled by your body’s other systems, and so on. These aren’t just minor factors, they’re central parts of how your brain works.
Yes, in principle, we can (and have, to some extent) run physical simulations of neurons down to the molecular resolution necessary to accomplish this. But the computational cost of doing that is massively higher, like millions of times higher, than the “neural networks” we have today, which are really just us anthropomorphizing a bunch of matrix multiplication.
It’s simply not feasible to do this at a scale large enough to be useful, even with all the computation on Earth.
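To put very rough numbers on that, here’s a back-of-envelope sketch; every constant below is an order-of-magnitude guess for illustration, not a measured figure:

```python
# Back-of-envelope sketch of the scale gap. All constants are loose assumptions.

NEURONS = 8.6e10               # ~86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 1e4      # ~10,000 synapses per neuron (rough average)
TIMESTEPS_PER_SECOND = 1e4     # 0.1 ms resolution for membrane/chemical dynamics
OPS_PER_SYNAPSE_STEP = 1e6     # assumed cost of molecular-level modelling per
                               # synapse per timestep, vs. ~1 multiply-add per
                               # "synapse" in an artificial neural network

detailed = NEURONS * SYNAPSES_PER_NEURON * TIMESTEPS_PER_SECOND * OPS_PER_SYNAPSE_STEP
matrix_multiply_ann = NEURONS * SYNAPSES_PER_NEURON * TIMESTEPS_PER_SECOND * 1

print(f"molecular-level simulation: ~{detailed:.0e} ops per simulated second")
print(f"matrix-multiply network:    ~{matrix_multiply_ann:.0e} ops per simulated second")
print(f"ratio:                      ~{detailed / matrix_multiply_ann:.0e}x")
```

With those (made-up but plausible) numbers, the detailed simulation lands around 10^24 operations per simulated second, which is why it isn’t feasible even with all the compute on Earth.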
RememberTheApollo_@lemmy.world 7 months ago
Thanks for putting it at a scale I can grok. If we could create such a device, it would just be a literal (digital) brain.
Dkarma@lemmy.world 7 months ago
Performance suffers. Basically, we don’t have the computing power to scale the software to the performance level of the human brain.
masterspace@lemmy.ca 7 months ago
Yes we do. FPGAs and memristors can both recreate those effects at the hardware level. The problem is scaling it to the number of neurons in the human brain.
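As a very loose illustration of the memristor idea: the device’s conductance acts like a synaptic weight that drifts with use. A toy model (the constants are made up, not from any real device) might look like:

```python
# Toy sketch of memristor-style plasticity: conductance acts as a synaptic
# weight that drifts up or down with the applied voltage and stays within a
# physical range. Model and constants are illustrative assumptions only.

def update_conductance(g, voltage, dt, g_min=1e-6, g_max=1e-3, rate=1e-4):
    """Nudge conductance g (siemens) by an amount set by voltage over dt seconds."""
    g += rate * voltage * dt           # positive pulses potentiate, negative depress
    return min(max(g, g_min), g_max)   # conductance can't leave its physical range

g = 5e-4                               # starting "synaptic strength"
for _ in range(1000):                  # repeated positive pulses strengthen it
    g = update_conductance(g, voltage=1.0, dt=1e-3)
print(f"conductance after potentiation: {g:.2e} S")
```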
Neuromancer49@midwest.social 7 months ago
Actually, neuron-based machine learning models can handle this. The connections between the fake neurons can be modeled as a “strength”, or the probability that activating neuron A leads to activation of neuron B. Advanced learning models just change the strength of these connections. If the probability is zero, that’s a “lost” connection.
Those models don’t have physical connections between neurons, but mathematical/programmed connections. Those are easy to change.
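A minimal sketch of that idea, assuming nothing more than a weight matrix (names and sizes are arbitrary):

```python
import numpy as np

# Connections between model neurons are just numbers in a matrix, so creating,
# strengthening, or losing a connection is an ordinary in-place update.

rng = np.random.default_rng(0)
n = 5
strength = rng.random((n, n))        # strength[a, b]: how strongly A drives B

strength[0, 3] *= 1.5                # strengthening a connection is multiplying a number
strength[2, 4] = 0.0                 # a strength of zero is effectively a lost connection
strength[1, 0] = 0.8                 # creating (or rewiring) a connection is just assignment

active = np.array([1.0, 0.0, 1.0, 0.0, 0.0])   # which neurons fired this step
drive = active @ strength                      # input each neuron receives next step
print(drive)
```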
FooBarrington@lemmy.world 7 months ago
That’s a vastly simplified model. Real neurons can’t be approximated with a couple of weights; each neuron is at least as complex as a multi-layer RNN.
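As a toy contrast of the two levels of description (the sizes are arbitrary assumptions, and the recurrent “surrogate” only stands in for the finding that matching a real neuron’s input/output behaviour takes a deep temporal model):

```python
import numpy as np

# The "point neuron" is the usual ML abstraction: one weight per input plus a
# nonlinearity. The recurrent surrogate represents ONE biological neuron and
# needs orders of magnitude more parameters to capture temporal/dendritic
# dynamics. All sizes here are arbitrary and for illustration only.

rng = np.random.default_rng(1)
n_inputs, hidden, steps = 100, 64, 50
x = rng.random((steps, n_inputs))          # incoming activity over time

# 1) Point neuron: ~100 parameters.
w = rng.standard_normal(n_inputs)
point_output = np.tanh(x @ w)

# 2) Recurrent surrogate for a single biological neuron.
W_in = rng.standard_normal((n_inputs, hidden)) * 0.1
W_rec = rng.standard_normal((hidden, hidden)) * 0.1
W_out = rng.standard_normal(hidden) * 0.1
h = np.zeros(hidden)
surrogate_output = []
for t in range(steps):
    h = np.tanh(x[t] @ W_in + h @ W_rec)   # hidden state carries temporal context
    surrogate_output.append(h @ W_out)

print("point-neuron parameters:", w.size)
print("surrogate parameters:   ", W_in.size + W_rec.size + W_out.size)
```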
TempermentalAnomaly@lemmy.world 7 months ago
I’d love to know more.
I recently read “The brain is a computer is a brain: neuroscience’s internal debate and the social significance of the Computational Metaphor” and found it compelling. It ruffled a lot of feathers on Lemmy, but I think their critique is valid.
Do you have any review resources? I have a bit of knowledge around biology and biochemistry, but haven’t studied neuroscience.
And no pressure. It’s a lot to ask of some random person on the internet. Cheers!
FooBarrington@lemmy.world 7 months ago
Here’s the video that introduced me to the idea: www.youtube.com/watch?v=hmtQPrH-gC4
He explains it very well and gives a lot of references :)