Dear Simon,
I address your comments/questions below:
"So an example would be running two deterministic systems, with identical initial conditions, and with one started a second after the first. The first machine would be a fantastic predictor and learner. There's correlation, but some kind of causal connection, once initial conditions are fixed, is missing from the pair. Minimally dissipative."
--> Please bear with me; I take my time to understand all the small details. Correct me if I am mischaracterizing what you are saying: if the two systems are run in the manner you describe, are you saying that the joint system is minimally dissipative, or just the second one? If the joint system is minimally dissipative, then the correlation between the two would be plastic, as expected. I mention this at the start of Section 5, where I discuss how subsystem relationships should be plastic if the joint system is minimally dissipative, so the correlation will vary depending on the input provided. Does that answer your point?
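To make sure we are picturing the same setup, here is a toy sketch of your example (my own construction; the logistic map and all parameters are just illustrative stand-ins for any deterministic dynamics). Two uncoupled copies of the same map share an initial condition, with the second started one step behind the first, so the first "predicts" the second perfectly despite there being no causal connection between them:

```python
# Two uncoupled copies of a deterministic map, identical initial
# conditions, with system B started one step after system A.
# A's state at step t equals B's state at step t+1, so A is a
# perfect one-step-ahead "predictor" of B with no coupling at all.

def logistic(x, r=3.7):
    # An arbitrary deterministic update rule (chaotic logistic map).
    return r * x * (1.0 - x)

x0 = 0.123            # shared initial condition
a, b = x0, x0
trace_a, trace_b = [], []
for t in range(20):
    trace_a.append(a)
    trace_b.append(b)
    a = logistic(a)
    if t >= 1:        # B lags A by one step
        b = logistic(b)

# Correlation is perfect: trace_a[t] == trace_b[t + 1] for every t,
# yet nothing about A influences B once the initial condition is fixed.
```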
"Another example (more complicated, but works for probabilistic/non-deterministic evolution) would be the Waterfall (or Wordstar) problems."
--> Let me get back to you on this once I have a firmer grasp of what these problems are exactly. I remember reading about them on Aaronson's blog a while ago and need to revisit it. Thank you for that particular link; I am an avid fan of his blog and work, and the updated version of the essay references his blog post on the Integrated Information Theory.
"You see a version of this in the ways in which deep learning algorithms are able to do amazing prediction/classification tasks. System 1 it turns out, with a lot of calculations and work, really does predict System 2. But if System 1 is the X-ray image of an aircraft part and System 2 is in-flight airplane performance, does it really make sense to say that System 1 has "learned", or is inferring, or doing anything agent-like? Really the effort is in the map-maker."
--> I agree that although deep learning networks learn in a manner similar to us, there are large differences between us and such algorithms. Along the lines of John Searle's Chinese room argument, I would argue that these algorithms are purely syntactic and that there are no semantics there. Furthermore, running them on von Neumann architecture GPUs (as is traditionally done) means these are not minimally dissipative systems. I think plastic subsystem connections are needed for any system to be minimally dissipative, and the von Neumann architecture does not have that. If we moved to systems with a neuromorphic architecture, then it becomes a lot more interesting, I think.
I agree with you that the effort is really in the map-making, and this is why I am very interested in unsupervised learning with arrays of devices called memristors (look up Prof. Yang's group at UMass Amherst; they are doing cool things along these lines). Short of starting with an artificial primordial soup and evolving/self-organizing an artificial brain on silicon in an accelerated manner, I think such an approach is the best way to test my ideas and build an agent remotely close to us. (Since we know some things about the final product, namely our brain, we can cheat and start with an array of memristors, since they can behave as neurons and synapses. How to impose other thermodynamic constraints on this array is something I am thinking about now.) We just set up the array of physical devices without any preprogramming or map-making, let it run, supply it with inputs, and allow it to make its own maps and provide outputs. If such a system were able to answer questions about flight performance based on an X-ray image of the airplane, I think (a) that would be amazing, and (b) we would have to seriously entertain the possibility that it is an agent like us. (I am not touching the question of whether such an agent is conscious with a ten-foot pole, haha.)
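To illustrate why memristors are attractive here, the following is a crude toy model of a single memristive synapse (entirely my own sketch; the class name, update rule, and parameters are hypothetical and not a model of the actual devices from Prof. Yang's group). The key property is that the device's conductance depends on its input history, so the "map" lives in the physics of the device rather than in any preprogrammed software:

```python
# Hypothetical toy model of a memristive synapse: conductance G changes
# with the history of applied voltage, so repeated stimulation leaves a
# physical trace (the plasticity discussed above). Parameters are
# illustrative only, not fitted to any real device.

class ToyMemristor:
    def __init__(self, g=0.5, g_min=0.01, g_max=1.0, rate=0.1):
        self.g = g            # current conductance (state variable)
        self.g_min = g_min    # lower bound on conductance
        self.g_max = g_max    # upper bound on conductance
        self.rate = rate      # how strongly inputs reshape the state

    def apply(self, v):
        """Apply voltage v: return the read current and update the state."""
        i = self.g * v        # Ohmic read: I = G * V
        # Voltage-driven drift toward g_max, saturating at the bounds;
        # this is what makes the device "remember" its input history.
        self.g += self.rate * v * (self.g_max - self.g)
        self.g = min(self.g_max, max(self.g_min, self.g))
        return i

m = ToyMemristor()
currents = [m.apply(0.2) for _ in range(5)]
# The same input produces a growing response each time: the device's
# state encodes its past inputs without any stored program or map.
```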
I hope I didn't miss anything and have answered your questions. Let me know if I need to clarify anything further.
Cheers
Natesh
PS: In all of this, I think I might have to seriously step back and see if there is some fundamental difference between self-organized systems and systems designed by another 'intelligent' system, and whether that changes things.