Natesh,
Great essay: a well-informed and organised analysis of a very interesting hypothesis.
I like most of it and agree with much, certainly with the adage that 'our aims and goals are shaped by our history', and with the importance of efficiency, neuronal avalanches, branching parameters, and critical regions, and that a 'hierarchical predictive coding' model is possible.
I'm not yet convinced that minimal dissipation itself is a precondition of learning and 'inference dynamics', and I didn't feel you proved that. Do you not think it might be more a ubiquitous characteristic than a driver, in a similar way to 'sum over paths'? Suppose three lifeguards head off to save a drowning girl 100 m down the beach: one heads straight for her, one runs to the shore point opposite her position, and one takes the angle that allows for swimming more slowly than running. Of course the third gets there with the least energy, but I feel that underlying that may be a still greater and more useful truth and meaning.
Also, might the error/feedback mechanism be better described as iterative value-judgement comparisons? Perhaps there is no 'right or wrong', just different outcomes, which we can't value until they are compared with previous runs.
Say the first 'run' of consequences is imaginative, drawn from input history. 'Do I want a PhD, Y/N?' gives an aim (Y1). We run a scenario to imagine it and its implications, then keep running it as more data comes in. Subsequent lower-level, consequential Y/N decisions work the same way, taking Y1 into the loop, and so on hierarchically. If it turns out not to be as envisaged, or we win millions and want to be a playboy instead, we change the aim to N and form new ones.
May it be that you're a little too enamoured of the formulae, and suggest conclusions from those rather than from the deeper meaning they're abstracted from?
i.e. might Lifeguard 3 have asked 'how do I get there fastest?' rather than '..by using least energy'? (Or he may just have done that because of its successful outcome in past re-runs of the scenario.)
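As an aside, the three lifeguards' choices can be compared numerically. The sketch below assumes illustrative distances and speeds (only the 100 m along the beach is from my example above; the 20 m to the waterline, 30 m out to sea, and the 7 m/s run and 1.5 m/s swim speeds are my own assumptions) and finds the water-entry point that minimises total time:

```python
import math

# Illustrative assumptions (only D = 100 m is from the scenario above):
# lifeguard stands H = 20 m up the beach from the waterline,
# the girl is D = 100 m along the shore and W = 30 m out to sea,
# running at 7 m/s, swimming at 1.5 m/s.
H, W, D = 20.0, 30.0, 100.0
V_RUN, V_SWIM = 7.0, 1.5

def rescue_time(x):
    """Total time if the lifeguard enters the water x metres along the shore."""
    run = math.hypot(x, H) / V_RUN          # run from start to entry point
    swim = math.hypot(D - x, W) / V_SWIM    # swim from entry point to the girl
    return run + swim

# Lifeguard 1: straight line, crossing the waterline at x = D*H/(H+W).
t_straight = rescue_time(D * H / (H + W))
# Lifeguard 2: run to the shore point opposite her, then swim straight out.
t_opposite = rescue_time(D)
# Lifeguard 3: choose the entry point minimising total time (grid search).
t_best = min(rescue_time(i / 100) for i in range(int(D * 100) + 1))

print(f"straight line:    {t_straight:.1f} s")
print(f"opposite point:   {t_opposite:.1f} s")
print(f"least-time entry: {t_best:.1f} s")
```

With these numbers the angled path wins by a clear margin, and notably the condition for the optimum is Snell's law of refraction, which is the 'fastest, not least-energy' reading of the choice.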
Lastly, on 'agency', which I see as a semantic 'shroud': if all subsequent Y/N decisions in a cascade are simply consequential on the first Y/N decision, and the branches lead to a mechanical motor-neuron or chemical response, all repeated in iterative loops, might the concept of 'agency' not fade away along with most of the mystery?
All 'leading edge' questions, I think, and thank you for a brilliant essay leading to them. Top mark coming.
Best
Peter