Joe,

Too many essays and too little time! I am sorry I did not get to this during the contest.

First, you have a clear writing style that tells a story; that alone makes this a very good essay.

Second, you talk about thermodynamics beyond just noise in communication. This sharp temperature boundary within which life thrives is central to understanding the formation of life.

You get a little too much into human intentions for my taste (I start at minimum and stay there), but how you walk us through it step by step is wonderful.

All the best,

Jeff

Thanks for the compliments, Jeff, and believe me, I didn't get around to nearly as many of the essays as I would have liked. I'm less bothered about the contest and happier to know that my ideas find even a small receptive audience. I focused on human intentions in the second half of the essay because I felt that's what we really want from this essay topic, and why this question intrigues us: at bottom it's about whether our intentions can be "reduced" to the laws of physics and, if so, what that means. I tried to indicate, first, that modern thermodynamics can explain the emergence of intentionality, but second, and perhaps more importantly, that our experience of the world is not demeaned by being explicable in terms of mathematical laws. Indeed, I think this explicability is proof that we belong in nature, and my hope is that understanding physics and how humanity emerges naturally would ultimately make us feel less alienated within nature. Getting the physics right should come first and foremost, but we should not forget why our effort matters and what we hope to accomplish while doing it.

Thanks again for taking the time to read and reply!

Joe

12 days later

Hi Joe,

Thank you for this excellent essay. You're covering a lot of ground in a short space but nevertheless manage to remain very clear.

I really like what you did with memory and selves. It raises interesting questions. Does a pushdown automaton have more selfhood than a DFA, or does selfhood start a little higher up the food chain than that? Conversely, is there a point at which excessive memory (perhaps relative to some other dimension) decreases selfhood? There are certainly clinical cases of people with perfect recall, and it appears to impede them in more ways than one.

It is true that the self is very much unstable, and I like the irony you point out, but at the same time it just keeps coming back! You don't really hear stories of people who suddenly just went blank (except perhaps in a Brian Catling novel). This process persistence is interesting in itself.

One thing I am unsure of: if plants are every bit as subtle and ingenious as animals, can we really conclude that they are mindless? I think I might be missing something in how you define a mind. Given the same purposefulness and barring access to their internal experience, I would tend to grant them the same mindfulness. They might not have motility, which might annoy Merleau-Ponty (and then again, rhizomes can cover quite some ground), but at first sight I wonder, only partly facetiously, if the distinction is not somewhat "kingdomist"?

PS: there appear to be a few participants who are around New York here; maybe we should organise some form of get-together :)

    6 days later

    Hey Robin!

    Thanks for the compliments and for taking the time to read my work! I agree with your comments about memory, and if I could have written another 10,000 words I would have unpacked things in much greater detail; I definitely let clarity suffer in a few places in favor of poetic effect. I wouldn't want to suggest that there's anything like a monotonic relationship between memory and subjectivity (however quantified), but certainly there are some critical limits. A pure Markov process with no past-correlations affecting its transition probabilities is never going to manifest a "sense of self," but the point that perhaps too much memory can be detrimental to stable identity is interesting too; it reminds me of the Borges story "Funes the Memorious." So I just wanted to say that memory is a critical parameter, but I never intended to suggest it was the only requirement for subjectivity.
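    To make the memory point concrete, here's a toy sketch (my own illustration, not anything from the essay, and `cond_entropy`, `first_order`, and `second_order` are hypothetical names of mine): a first-order Markov chain versus a process whose transitions depend two steps back. For the Markov chain, conditioning on an extra symbol of history buys nothing; for the second-order process, the immediate past looks like pure noise and the structure only appears once you remember further back.

```python
import random
from collections import Counter
from math import log2

def cond_entropy(seq, k):
    """Empirical conditional entropy H(X_t | X_{t-1}, ..., X_{t-k}) in bits."""
    ctx, joint = Counter(), Counter()
    for i in range(k, len(seq)):
        c = tuple(seq[i - k:i])
        ctx[c] += 1
        joint[(c, seq[i])] += 1
    n = sum(ctx.values())
    # H = -sum p(context, x) * log2 p(x | context)
    return -sum((m / n) * log2(m / ctx[c]) for (c, x), m in joint.items())

rng = random.Random(0)

def first_order(n, p=0.9):
    # x_t repeats x_{t-1} with probability p: a genuine first-order Markov chain.
    x = [rng.randint(0, 1)]
    for _ in range(n - 1):
        x.append(x[-1] if rng.random() < p else 1 - x[-1])
    return x

def second_order(n, p=0.9):
    # x_t repeats x_{t-2} with probability p: one step of history is uninformative.
    x = [rng.randint(0, 1), rng.randint(0, 1)]
    for _ in range(n - 2):
        x.append(x[-2] if rng.random() < p else 1 - x[-2])
    return x

a = first_order(200_000)
b = second_order(200_000)
print(cond_entropy(a, 1), cond_entropy(a, 2))  # both ~0.47 bits: extra memory adds nothing
print(cond_entropy(b, 1), cond_entropy(b, 2))  # ~1.0 vs ~0.47 bits: memory depth matters
```

    The point of the sketch is just that "how much past the dynamics actually use" is a measurable parameter, which is the sense in which memory is one axis of the space rather than the whole story.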

    I think computational complexity might be a better measure, but even then I'm sure you can contrive counterexamples of things with high computational complexity and virtually zero intelligence. In the end, I'm less interested in there being any single criterion and more in defining a high-dimensional parameter space *within which* self-awareness is possible, and memory is certainly one of those dimensions.

    The counterpoint that, despite its tendency to disappear from time to time, subjectivity also stubbornly persists is insightful too. I suppose if my point could be summarized as saying that the "free energy of formation of self-awareness is extremely small, hence the self can fluctuate out of existence from time to time," its tendency to come right back once a person "regains their faculties" also suggests that it is some kind of preferred steady state for the dynamical system that is our brain/nervous system. I don't think there's any physical contradiction between those two assertions; they could both be true or both false (I think they are probably both true), but it does lead to an amusing consequence at the level of our mental life: we are always rushing ahead of ourselves and coming back to ourselves simultaneously, so to speak.

    Last point, about "kingdomism": I hope not! I think it turns on the semantics of words like "mind" and "intelligence," and that's a place where I was definitely not clear enough. I took "mind" as a stand-in for the action of an animal nervous system, so my definition was perhaps kingdomist. But I wanted to make an anti-kingdomist point: "intelligence," meaning the ability to perform internal computations on input data that result in some output action which changes the state of the system relative to its environment in a way that is meaningful to the system, just as in the E. coli example I described, is not limited to animal nervous systems at all. If you defined "mind" the way I defined intelligence, then I would also agree that plants, and indeed all organisms, have minds. That just sounded a little hokey to me, so I decided to reserve "mind" for things that operate in the scale-neighborhood of animal brains and "intelligence" for the general adaptive capacity of any dynamical system that shares mutual information with its environment and acts on the results of its internal computations. It could be just wording, but the connotation is important to me: I'm comfortable saying the universe as a whole is a computer; I'm not comfortable saying the universe as a whole has a mind. That's why I feel the same way about plants.
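    Since "shares mutual information with its environment" does real work in that definition, here is a minimal numerical sketch, entirely my own toy: `internal_state` and `mutual_information` are hypothetical names, and the 0.95 sensor fidelity is an arbitrary assumption, not a claim about any real organism. A one-bit internal state tracks a binary environment through a noisy channel, and we estimate the mutual information between the two from samples.

```python
import random
from collections import Counter
from math import log2

rng = random.Random(1)

def mutual_information(pairs):
    """Empirical I(E; S) in bits from (environment, sensor) sample pairs."""
    n = len(pairs)
    pe, ps, pes = Counter(), Counter(), Counter(pairs)
    for e, s in pairs:
        pe[e] += 1
        ps[s] += 1
    # I = sum p(e, s) * log2( p(e, s) / (p(e) * p(s)) )
    return sum((c / n) * log2((c / n) / ((pe[e] / n) * (ps[s] / n)))
               for (e, s), c in pes.items())

def internal_state(env, accuracy):
    # The internal state copies the environment bit with some fidelity --
    # a one-bit caricature of a sensor, not a model of a real organism.
    return env if rng.random() < accuracy else 1 - env

samples = [(e, internal_state(e, 0.95))
           for e in (rng.randint(0, 1) for _ in range(100_000))]
print(mutual_information(samples))  # ~0.71 bits; a perfect copy gives 1.0,
                                    # an uncorrelated state gives ~0.0
```

    On this picture, "intelligence" is graded rather than binary: any system whose internal state carries nonzero information about its surroundings, and whose actions depend on that state, sits somewhere on the scale, whether it has a nervous system or not.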

    Ok, and finally: I'm definitely down for an NYC FQXi meet-up. Erik Hoel is at Columbia, I'm just one stop north at City College, and we have discussed this possibility on Twitter already. Message me and we can set it up!

    Joe
