It could be several spectra, and what we call consciousness is a series of multiple phenomena
Comment on The Sensory Biology of Plants
m_f@discuss.online 4 days ago
It all depends on what you mean by “conscious”, which IMO doesn’t fall under “Maybe everything is conscious” because that’s wrongly assuming that “conscious” is a binary property instead of a spectrum that humans and plants are both on while clearly being at vastly different levels. Maybe I just have a much looser definition of “conscious” than most people, but why don’t tropisms count as a very primitive form of consciousness?
funkless_eck@sh.itjust.works 4 days ago
agamemnonymous@sh.itjust.works 4 days ago
I’m inclined to believe every dynamic interconnected system is “conscious” to some degree. Not 1:1 with human consciousness obviously, but the same base phenomenon.
The main problem is that there aren’t very good metrics to distinguish how primitive a consciousness is. Where do you draw the line between consciousness and reflex? Is each of your cells conscious in its own impossibly tiny way?
auraithx@lemmy.dbzer0.com 3 days ago
The line is whether it's a conscious effort.
Reflexes aren't a conscious effort.
agamemnonymous@sh.itjust.works 3 days ago
Sure, how do you detect conscious effort from outside?
m_f@discuss.online 3 days ago
Yeah, reflexes could be considered a conscious effort of a part of your body. Or your immune system might be considered “conscious” of a virus that it’s fighting off. What’s a testable definition of “conscious” that excludes those?
I think that “conscious” is also a relative term, i.e. “Conscious of what?” A cell in your body could be said to be conscious of a few things, like its immediate environment. It’s clearly not conscious of J-pop though. But to be fair to it, none of us are “really” conscious of say Sagittarius B2 or an organism living at the bottom of the ocean.
The best way I’ve found to think about it is that consciousness can be thought of as a world model. The bigger the world model, the more consciousness it could be said to have. Some world models might be smaller, but contain things that bigger ones don’t though. Worms don’t understand what an airplane is, but humans also don’t really understand the experience of wriggling through soil.
auraithx@lemmy.dbzer0.com 3 days ago
The decision would be made by measurable neurological means, not chemical ones.
stray@pawb.social 4 days ago
I think the big dividing line between what many animals do and what cells or plants do is the ability to react in different ways by considering stimuli in conjunction with memory, and then the next big divide is metacognition. I feel like there should be concrete words for these categories. “Sentient” and “conscious” have pretty much lost meaning at this point, as demonstrated by this discussion’s existence.
I will call them reactive awareness, decisive awareness, and reflective awareness in the absence of a better idea.
angrystego@lemmy.world 4 days ago
Cells are very diverse, though. Some can get over your first divide.
stray@pawb.social 4 days ago
That’s not a problem. The idea is to define practical categories along the spectrum of consciousness so that they can be discussed without having to re-define terms prior to every discussion. There’s no reason any given organism should or shouldn’t fall into a particular category except for its properties directly regarding that category.
auraithx@lemmy.dbzer0.com 3 days ago
“Conscious” means being aware of oneself, one’s surroundings, thoughts, or feelings, being awake, or acting with deliberate intention, like a “conscious effort”. It refers to subjective experience and internal knowledge, differentiating from unconsciousness (sleep, coma).
It’s a spectrum, sure. But the spectrum is between ants and humans, not animals and plants.
m_f@discuss.online 3 days ago
What does “aware” mean, or “knowledge”? I think those are going to be circular definitions, maybe filtered through a few other words like “comprehend” or “perceive”.
Does a plant act with deliberate intention when it starts growing from a seed?
To be clear, my beef is more with the definition of “conscious” being useless and/or circular in most cases. I’m not saying “woah, what if plants have thoughts dude” as in the meme, but whatever definition you come up with, you have to evaluate why it does or doesn’t include plants, simple animals, or AI.
auraithx@lemmy.dbzer0.com 3 days ago
Aware means it has a sense of self. They are circular because we use these words to define how that is perceived.
Plants do not act deliberately when they do anything, because they do not have a sense of self and are not conscious.
m_f@discuss.online 2 days ago
What do “sense” and “perceived” mean? I think they both loop back to “aware”, and the reason I point that out is that circular definitions are useless. How can you say that plants lack a sense of self and consciousness, if you can’t even define those terms properly? What about crown shyness? Trees seem to be able to tell the difference between themselves and others.
As an example of the circularity, “sense” means (using Wiktionary, but pick your favorite if you don’t like it) “Any of the manners by which living beings perceive the physical world”. What does “perceive” mean? “To become aware of, through the physical senses”. So in your definition, “aware” loops back to “aware” (Wiktionary also has a definition of “sense” that just defines it as “awareness”, for a more direct route, too).
I meant that plants don’t have thoughts more in the sense of “woah, dude”, pushing back on something without any explanatory power. But really, how do you define “thought”? I actually think Wiktionary is slightly more helpful here, in that it defines “thought” as “A representation created in the mind without the use of one’s faculties of vision, sound, smell, touch, or taste”. That’s kind of getting at what I’ve commented elsewhere, with trying to come up with a more objective definition based around “world model”. Basing all of these definitions on “representation” or “world model” seems to be the closest to an objective definition we can get.
Of course, that brings up the question of “What is a model?” / “What does represent mean?”. Is that just pushing the circularity elsewhere? I think not, if you accept a looser definition. If anything has an internal state that appears to correlate to external state, then it has a world model, and is at some level “conscious”. You have to accept things that many people don’t want to, such as that AI is conscious of much of the universe (albeit experienced through the narrow peephole of human-written text). I just kind of embraced that though and said “sure, why not?”. Maybe it’s not satisfying philosophically, but it’s pragmatically useful.
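To make that looser definition concrete, here’s a toy sketch (the class name and numbers are made up purely for illustration): a system whose internal state comes to correlate with an external variable has, in this loose sense, a minimal “world model”.

```python
class MinimalWorldModel:
    """Toy system whose internal state comes to correlate with an
    external variable -- the loosest possible 'world model'."""

    def __init__(self):
        self.estimate = 0.0  # internal representation of the external state

    def observe(self, external_value, rate=0.5):
        # Nudge the internal state toward the observed external state,
        # so the two come to correlate over repeated observations.
        self.estimate += rate * (external_value - self.estimate)

model = MinimalWorldModel()
for _ in range(20):
    model.observe(10.0)

# After repeated observations, the internal state tracks the external one.
print(abs(model.estimate - 10.0) < 0.01)  # True
```

By this criterion a thermostat, an immune system, and an LLM all qualify, just at wildly different scales, which is exactly the “sure, why not?” position described above.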
ameancow@lemmy.world 3 days ago
The foundational idea behind what the user is talking about is called panpsychism: the idea that consciousness or awareness is actually a fundamental quality of the universe, like fields or forces, in that it’s in everything, but only complex systems have actual thoughts.
The theory(?) states that even a single electron or proton has a state of awareness, but without any functional way to remember information or think, it’s just some kind of flash of experience, as if you suddenly developed perpetual amnesia about literally everything… while hurtling through the universe at high speed.
I get the concept, but I don’t get the usefulness of it. It feels too close to people wishing The Force was real.
Guys. You are not getting your light sabers this way.
m_f@discuss.online 3 days ago
I’m not advocating for consciousness as a fundamental quality of the universe. I think that lacks explanatory power and isn’t really in the realm of science. I’m kind of coming at it the opposite way and pushing for a more concrete and empirical definition of consciousness.
icelimit@lemmy.ml 3 days ago
So basically everything is tripping and only a few things can be legit sober?
auraithx@lemmy.dbzer0.com 3 days ago
Sounds like anthropomorphism to me.
ameancow@lemmy.world 3 days ago
Not sure if you know, but what you’re describing has a name: it’s called panpsychism. It is gaining some popularity, but not because there’s any reason to believe in it or any evidence; it’s a fanciful idea about the universe that doesn’t really help or connect anything. I.e., panpsychism doesn’t make for a better explanation of anything than the idea that you are just a singular consciousness living in its most probable state to be able to observe or experience anything.
I’m not shooting it down, it’s one of those things we just will never know, but that’s a pretty huge list of things and possibilities so I just don’t know if it’s more or less useful than any other philosophical view.
m_f@discuss.online 3 days ago
I don’t think I’m talking about panpsychism. To me, that’s just giving up and hand wavey. I’m much more interested in trying to come up with a more concrete, empirical definition. I think questions like “Well, why aren’t plants conscious” or “Why isn’t an LLM conscious” are good ways to explore the limits of any particular definition and find things it fails to explain properly.
I don’t think a rock or electron could be considered conscious, for example. Neither has an internal model of the world in any way.
agamemnonymous@sh.itjust.works 3 days ago
Panpsychism seems logically more possible than the alternative. If consciousness is an emergent property of complex systems, the universe is probably conscious because it’s the most complex system there is.
ameancow@lemmy.world 3 days ago
It depends on whether you think consciousness is something that emerges from information exchange systems or some higher-level “thing” we don’t understand yet, and I lean towards the idea that it emerges from information exchange systems. If that’s the case, then the universe, while containing massive areas of complexity, isn’t entirely exchanging information, only in isolated areas that are borrowing energy even as entropy broadly increases. I would be more open to the idea of some possibility of consciousness occurring in the hyper-low entropy state of the very early universe, when everything was much closer together and there was enough energy to connect a whole universe worth of information.
agamemnonymous@sh.itjust.works 3 days ago
Who knows what energetic structures exist within galactic super clusters? Energy is constantly exchanged in the universe.
AnarchoEngineer@lemmy.dbzer0.com 4 days ago
Personally, I’m more a fan of the binary/discrete idea. I tend to go with the following definitions:
If you could prove that plants have the ability to choose to scream rather than it being a reflexive response, then they would be sentient. Like a tree “screaming” only when other trees are around to hear.
If I cut myself, my body will move away reflexively, and the wound will scab over. My immune system might “remember” some of the bacteria or viruses that get in and respond accordingly. But I don’t experience it as an action under my control. I’m not aware of all the work my body does in the background. I’m not sentient because my body can live on its own and respond to stimuli; I’m sentient because I am aware that stimuli exist and can choose how to react to some of them.
If you could prove that the tree as a whole or that part of a centralized control system in the tree could recognize the difference between itself and another plant or some mycorrhiza, and choose to respond to those encounters, then it would be conscious. But it seems more likely that the sharing of nutrients with others, the networking of the forest is not controlled by the tree but by the natural reflexive responses built into its genome.
Also, if something is conscious, then it will exhibit individuality. You should be able to identify changes in behavior due to the self-referential systems required for the recognition of self. Plants and fungi grown in different circumstances should later respond differently to the same conditions.
If you taught a conscious fungus to play chess and then put it in a typical environment, you would expect to see it respond very differently than another member of its species who was not cursed with the knowledge of chess.
If a plant is conscious, you should be able to teach it to collaborate in ways that it normally would not, and again, after placing it in a natural environment, you should see it attempt those collaborations while its untrained peers would not.
Damn now I want to do some biology experiments…
stray@pawb.social 4 days ago
I don’t know whether this applies to plants and fungi, but it applies to just about every animal. There’s a minimum basic sense of self required in distinguishing one’s own movements from the approach of an attacker. Even earthworms react differently when they touch something vs when something touches them.
AnarchoEngineer@lemmy.dbzer0.com 4 days ago
Yes most definitely, I’d imagine most animals are conscious.
In fact, by my definition of sapience, several animals like crows, parrots, and rats are capable of it.
m_f@discuss.online 3 days ago
When you say “aware of the delineation between self and not self”, what do you mean by “aware”? I’ve found that it’s often a circular definition, maybe with a few extra words thrown in to obscure the chain, like “know”, “comprehend”, “perceive”, etc.
Also, is a computer program that knows which process it is self aware? If not, why? It’s so simple, and yet without a concrete definition it’s hard to really reject that.
On the other extreme, are we truly self aware? As you point out, our bodies just kind of do stuff without our knowledge. Would an alien species laugh at the idea of us being self-aware, having just faint glimmers of self awareness compared to them, much like the computer program seems to us?
AnarchoEngineer@lemmy.dbzer0.com 3 days ago
Anything dealing with perception is going to be somewhat circular and vague. Qualia are the elements of perception and by their nature it seems they are incommunicable by any means.
Awareness in my mind deals with the lowest level of abstract thinking. Can you recognize this thing and both compare and contrast it with other things, learning about its relation to other things on a basic level?
You could hardcode a computer to recognize its own process. But it’s not comparing itself to other processes, experiencing similarities and dissimilarities. Furthermore, unless it has some way to change at least the other processes that are not itself, it can’t really learn its own features/abilities.
A cat can tell its paws are its own, likely in part because it can move them. If you gave a cat shoes, do you think the cat would think the shoes are part of itself? No. And yet the cat can learn that in certain ways it can act as though the shoes are part of itself, the same way we can recognize that tools are not us but are within our control.
We notice that there is a self that is unlike our environment in that it does not control the environment directly, and then there are the actions of the self that can influence or be influenced directly by the environment. And that there are things which we do not control at all directly.
That is the delineation I’m talking about. It’s more the delineation of control than just “this is me and that isn’t” because the term “self” is arbitrary.
We as social beings correlate self with identity, with the way we think we act compared to others, but to be conscious of one’s own existence only requires that you can sense your own actions and learn to delineate between this thing that appears within your control and those things that are not. Your definition of self depends on where you’ve learned to think the lines are.
If you created a computer program capable of learning patterns in the behavior of its own process(es) and learning how those behaviors are similar/dissimilar or connected to those of other processes, then yes, I’d say your program is capable of consciousness. But just adding the ability to detect its process id is simply like adding another built in sense; it doesn’t create conscious self awareness.
Furthermore, on the note of aliens, I think a better question to ask is “what do you think ‘self’ is?” Because that will determine your answer. If you think a system must be consciously aware of all the processes that make it up, I doubt you’ll ever find a life form like that. The reason those systems are subconscious is because that’s the most efficient way to be. Furthermore, those processes are mostly useful only to the self internally, and not so much the rest of reality.
To be aware of self is to be aware of how the self relates to that which is not part of it. Knowing more about your own processes could help with this if you experienced those same processes outside of the self (like noticing how other members of your society behave similarly to you), but fundamentally, you’re not necessarily creating a more accurate idea of self-awareness just by having more senses of your automatic bodily processes.
It is equally important, if not more so, to experience more that is not the self rather than to experience more of what would be described as self, because it’s what’s outside that you use to measure and understand what’s inside.
m_f@discuss.online 2 days ago
I made another comment pointing this out for a similar definition, but OK so awareness is being able to “recognize”, and recognize in turn means “To realize or discover the nature of something” (using Wiktionary, but pick your favorite dictionary), and “realize” means “To become aware of or understand”, completing the loop. I point that out because IMO the circularity means the whole thing is useless from an empirical perspective and should be discarded. I also think qualia are just philosophical navel-gazing, for what it’s worth, much like common definitions of “awareness”. I think it’s perfectly possible in theory to read someone’s brain to see how something is represented and then twiddle someone else’s brain in the same way to cause the same experience, or compare the two to see if they’re equivalent.
As far as a computer process recognizing itself, it certainly can compare itself to other processes. It can e.g. iterate through the list of processes and kill everything that isn’t itself. It can look at processes and say “this other process consumes more memory than I do”. It’s super primitive and hardcoded, but why doesn’t that count? I also think learning is separate but related. If we take the definition of “consciousness” as a world model or representation, learning is simply how you expand that world model based on input. Something can have a world model without any ability to learn, such as a chess engine. It models chess very well and better than humans, but is incapable of learning anything else, i.e. expanding its world model beyond chess.
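That “kill everything that isn’t me” test is trivial to sketch with just the standard library (the process table below is a made-up stand-in; a real version would enumerate live pids):

```python
import os

def everyone_but_me(pids):
    """Return the pids that are not this process's own: the crudest
    possible self/not-self delineation."""
    me = os.getpid()
    return [pid for pid in pids if pid != me]

# A hypothetical process table that happens to include this process.
me = os.getpid()
process_table = [1, me, me + 1]
others = everyone_but_me(process_table)

print(me not in others)  # True: the process has picked itself out of the list
```

Whether that counts as a glimmer of self-awareness or just another hardcoded sense is exactly the question being argued in this thread.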
I think we largely agree then, other than my quibble about learning not being necessary. A lot of people want to reject the idea of machines being conscious, but I’ve reached the “Sure, why not?” stage. To be a useful definition though, we need to go beyond that and start asking questions like “Conscious of what?”