rtfm_modular
@rtfm_modular@lemmy.world
- Comment on Why can't people make ai's by making a neuron sim and then scaling it up with a supercomputer to the point where it has a humans number of neurons and then raise it like a human? 7 months ago:
All fair points, and I don’t deny predictive text generation is at the core of what’s happening. I think it’s fair to say that most people hear “predictive text” and think of the suggested words in a text message, when it’s much more than that.
I also don’t think Turing Tests are particularly useful long term because humans are so fallible. We, too, hallucinate all the time, holding convictions built on false memories. Getting an AI to show what seems like an emotional response, or uncertainty and confusion, in a Turing test is a great way to trick people.
The algorithm is already a black box, as are the mechanics of our own intelligence. We have no idea where the ceiling is for this technology yet. This debate quickly turns into an ontological and epistemological discussion about what it means to be intelligent: if the AI’s predictive text generation is complex enough that you simply cannot tell a difference, then is there a meaningful difference? What if we are just insanely complex algorithms?
I also don’t trust that what the market sees in AI products is indicative of the current limits. AGI isn’t here yet, but LLMs are a scary big step in that direction.
Pragmatically, I will maintain that AI is a different form of intelligence because I think it shortcuts to better discussions around policy and how we want this tech in our lives. I would gladly welcome the news that tells me I’m wrong.
- Comment on Why can't people make ai's by making a neuron sim and then scaling it up with a supercomputer to the point where it has a humans number of neurons and then raise it like a human? 7 months ago:
Talk to anyone who consumes Fox News daily and you’ll get incorrect predictive text generated quite confidently. You might be tempted to deny their intelligence and humanity, too, given the fallacies they uphold.
I also think intelligence is a gradient—is an ant intelligent? What about a dog? Chimp? Who gets to draw the line?
It may very well be a very complex predictive text generator that hallucinates, but I’m concerned that framing minimizes its capabilities, for better or worse. Its ability to maintain context, and the plasticity to reason and change its responses, points to something more, even if we’re at an early stage.
- Comment on Why can't people make ai's by making a neuron sim and then scaling it up with a supercomputer to the point where it has a humans number of neurons and then raise it like a human? 7 months ago:
First, we don’t understand our own neurons enough to model them.
AI’s “neuron” or node is a math equation that takes numeric inputs, each with a variable “weight” that affects the output. An actual neuron is a cell with something like 6,000 synaptic connections, and the brain has around 600 trillion synapses total. How do you simulate that? I’d argue the magic of AI is how much more efficient it is by comparison, with only 176 billion parameters in GPT4.
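To make the contrast concrete, here is a minimal sketch of that kind of node: the textbook artificial neuron, a weighted sum of inputs passed through an activation function. This is a generic illustration, not any specific model’s implementation; the input values, weights, and choice of sigmoid activation are all assumptions for the example.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One AI 'node': a weighted sum of inputs squashed by an activation.

    Generic textbook sketch; the sigmoid activation is an arbitrary choice
    here (real networks often use ReLU or other functions).
    """
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid maps any total into (0, 1)

# Three inputs, each scaled by a learned weight (example values)
out = artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.3, -0.5], bias=0.1)
```

A whole network is just billions of these simple units wired together, which is what the parameter counts above are counting — each weight is one parameter.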
They’re two fundamentally different systems, and so is the resulting knowledge. AI doesn’t need to learn like a baby, because the model is the brain. The magic of our neurons is their plasticity and our ability to freely move around in this world and be creative. AI is just a model of what it’s been fed, so how do you get new ideas? But it seems that with LLMs, the more data and parameters, the more emergent abilities. So maybe we just need to scale it up until we can “raise” it like a human.
AI already does pretty amazing and bizarre things we don’t understand, and it takes giant, expensive server farms to do it. AI is extremely compute-heavy and requires a ton of energy to run, so cost is rate-limiting the scale of AI.
There are also issues around getting more data. Generative AI output is already everywhere, and what good is it to train on its own shit? Also, how do you ethically or legally get that data? Does collecting it violate our right to privacy?
Finally, I think AI actually possesses an intelligence with an ability to reason, like us. But it’s fundamentally a different form of intelligence.
- Comment on What the Hell Happened to my Cookies? 1 year ago:
This is what I thought as well. Creaming butter and sugar properly gives the cookie better structure and less spread.
The butter also needs to be the right temperature before baking, and chilling the dough is sometimes needed. Also, regularly scraping the sides of the bowl while mixing is important for a nice, homogeneous cookie without gobs of dry flour or butter.