Hey Robin!
Thanks for the compliments and for taking the time to read my work! I agree with your comments about memory; if I could have written another 10,000 words I would have unpacked things in much greater detail, and I definitely let clarity suffer in a few places in favor of poetic effect. I wouldn't want to suggest that there's anything like a monotonic relationship between memory and subjectivity (however quantified), but certainly there are some critical limits. A pure Markov process with no past correlations affecting its transition probabilities is never going to manifest a "sense of self." But the point that perhaps too much memory can be detrimental to stable identity is interesting too; it reminds me of the Borges story "Funes the Memorious." So I just wanted to say that memory is a critical parameter, never that it was the only requirement for subjectivity.
I think computational complexity might be a better measure, but even then I'm sure you can contrive counterexamples of things with high computational complexity and virtually zero intelligence. In the end, I'm less interested in any single criterion and more in defining a high-dimensional parameter space *within which* self-awareness is possible, and memory is certainly one of those dimensions.
The counterpoint that, despite its tendency to disappear from time to time, subjectivity also stubbornly persists is insightful too. If my point could be summarized as "the free energy of formation of self-awareness is extremely small, hence the self can fluctuate out of existence from time to time," then its tendency to come right back once a person "regains their faculties" suggests that it is also some kind of preferred steady state for the dynamical system that is our brain and nervous system. I don't think there's any physical contradiction between those two assertions; they could both be true or both be false (I think they are probably both true), but it does lead to an amusing consequence at the level of our mental life: we are always rushing ahead of ourselves and coming back to ourselves simultaneously, so to speak.
Last point, about "kingdomism": I hope not! I think it turns on the semantics of words like "mind" and "intelligence," and that's a place where I was definitely not clear enough. I took "mind" as a stand-in for the action of an animal nervous system, so my definition was perhaps kingdomist. But I wanted to make an anti-kingdomist point: "intelligence," meaning the ability to perform internal computations on input data that result in some output action changing the system's state relative to its environment in some way that is meaningful to the system, just as in the E. coli example I described, is not limited to animal nervous systems at all. If you defined "mind" the way I defined intelligence, then I would also agree that plants, and indeed all organisms, have minds. That just sounded a little hokey to me, so I decided to reserve "mind" for things that operate in the scale-neighborhood of animal brains, and "intelligence" for the general adaptive capacity of any dynamic system that shares mutual information with its environment and acts on the results of its internal computations. It could be just wording, but the connotation is important to me: I'm comfortable saying the universe as a whole is a computer; I'm not comfortable saying the universe as a whole has a mind. That's why I feel the same way about plants.
OK, and finally: I'm definitely down for an NYC FQXi meet-up. Erik Hoel is at Columbia and I'm just one stop north at City College, and we have already discussed this possibility on Twitter. Message me and we can set it up!
Joe