Essay Abstract

What does it take for a system, biological or not, to have goals? Here, this question is approached in the context of in silico artificial evolution. By examining the informational and causal properties of artificial organisms ("animats") controlled by small, adaptive neural networks (Markov Brains), this essay discusses necessary requirements for intrinsic information, autonomy, and meaning. The focus lies on comparing two types of Markov Brains that evolved in the same simple environment: one with purely feedforward connections between its elements, the other with an integrated set of elements that causally constrain each other. While both types of brains 'process' information about their environment and are equally fit, only the integrated one forms a causally autonomous entity above a background of external influences. This suggests that to assess whether goals are meaningful for a system itself, it is important to understand what the system is, rather than what it does.

Author Bio

Larissa Albantakis is an Assistant Scientist at the Wisconsin Institute for Sleep and Consciousness, at the University of Wisconsin--Madison. She obtained her Diploma in physics from Ludwig-Maximilians University in Munich in 2007, and her PhD in Computational Neuroscience from Universitat Pompeu Fabra in Barcelona in 2011. She has been at the University of Wisconsin since 2012, working together with Giulio Tononi on Integrated Information Theory, and has recently been awarded a 'Power of Information' Independent Research Fellowship by the Templeton World Charity Foundation.


Hi Larissa,

Nice example, with a fun discussion. I am not (yet?) a "Tononi believer" myself, but it does perhaps get more convincing where you explain that more complicated environments give rise to more "integrated" architectures (when adding more types of blocks etc.).

If there is indeed such a trend (likely), then I wonder whether you have investigated a possible connection here with information compression or even Kolmogorov complexity?

Kind Regards

Rene Ahn


    Larissa Albantakis,

    Good essay!

    The questions that remain are: 1) how does the system acquire stability with togetherness? 2) what internal state creates a goal? 3) what is the internal disciplining principle? and 4) how does the system replicate?

      Dear Rene,

      Thank you for your comment and for pointing to compression / Kolmogorov complexity. On a practical level there is indeed a connection. In fact, we use compression as a proxy for integrated information $\Phi$ in real neural recordings (see Casali AG, Gosseries O, et al. (2013) A theoretically based index of consciousness independent of sensory processing and behavior. Sci Transl Med 5:198ra105). The idea is that a perturbation will have a complex (incompressible) response in a highly differentiated and integrated system, but only a local or homogeneous (highly compressible) response in a modular, disconnected, or homogeneous system.

      We also found a correlation between compression measures and $\Phi$ in a study on elementary cellular automata (Albantakis & Tononi, 2015).
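      If a concrete toy example helps, here is a minimal Python sketch of that compression intuition (purely illustrative; it is not the actual PCI pipeline of Casali et al., and the response matrices below are made up): a differentiated response pattern yields many distinct Lempel-Ziv phrases, a homogeneous one very few.

      import numpy as np

      def lz_phrase_count(s: str) -> int:
          # Simplified Lempel-Ziv parsing: count phrases, where each new phrase is
          # the shortest block starting at the current position that has not yet
          # appeared in the already-parsed history.
          i, n, phrases = 0, len(s), 0
          while i < n:
              j = i + 1
              while j <= n and s[i:j] in s[:i]:
                  j += 1
              phrases += 1
              i = j
          return phrases

      rng = np.random.default_rng(0)
      # Hypothetical binarized responses to a perturbation (channels x time bins)
      differentiated = rng.integers(0, 2, size=(8, 64))                # rich, varied response
      homogeneous = np.tile(rng.integers(0, 2, size=(1, 64)), (8, 1))  # same response everywhere

      for name, resp in (("differentiated", differentiated), ("homogeneous", homogeneous)):
          bits = "".join(map(str, resp.flatten()))
          print(name, lz_phrase_count(bits))  # the differentiated response compresses far less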

      With respect to the theoretical issues discussed here, intrinsic information and meaning, what is important is characterizing the entire cause-effect structure of the system rather than just its $\Phi$ value (which is just a number). As I argue in the essay, intrinsic information must be physical, and the actual mechanisms of the system matter. By contrast, algorithmic information is, by definition, a measure of extrinsic information: it explicitly disregards the actual mechanisms of the system (neural network) and seeks the shortest program with the same output. For intrinsic information and intrinsic meaning, the implementation matters. To recap the essay, the proposal is that meaning is not in what the system is doing, but in what it is, and algorithmic information only captures the "doing".

      I'm looking forward to reading your interesting essay more thoroughly soon.

      Best regards,

      Larissa

      Dear Shaikh,

      Thank you! And indeed, those are very important questions. As admitted in the essay, there is still a long way to go before we understand what kind of cause-effect structure would correspond to goals. As part of the integrated information research project, before we get to goals, we are currently exploring what kind of cause-effect structure would be required to have intrinsic information about spatial relations.

      With respect to 1), applying the IIT framework, we can assess whether a system is a stable integrated complex across its dynamics (we recently did so for the fission yeast cell cycle network, to appear soon; see ref 18 in the essay). In this way we can also gain insights into which mechanisms contribute to the stability, as opposed to the function, of the system.

      About 3), the animat experiments show that integrated structures have an advantage in complex environments even if the selection is purely based on fitness. As outlined in the essay, the main reasons are that integrated systems are more economical and more flexible (for more details see the refs given in the essay).

      Finally, with respect to 4), in the artificial evolution scenario described, the animats are simply copied into the next generation with a fitness-dependent probability. In general, however, the notion of intrinsic information outlined here applies to artificial systems just as much as to biological systems. Accordingly, being a self-replicator is not a necessary requirement for having goals. But of course it is crucial for the question of how those systems developed in nature in the first place.

      Best regards,

      Larissa

      Larissa

      Thanks for your reply.

      To move from high potentiality to low potentiality is the inborn nature of matter and is the inborn goal of matter.

      The goal in question is to differentiate between internal potentiality and external potentiality and to steer motion.

      Dear Larissa,

      Why is the "fitness" 47% at the start, when there are no connections between elements, sensors and motors? Surely the fitness should be 0 if the Figure 1 model has no connections i.e. if there is no ability to catch food or avoid danger?

      If the animats weren't already fully fit enough to survive in the environment, then how did they survive to generation 2, let alone survive to generation 60,000?

        Dear Lorraine,

        Thanks for your thorough reading. The initial 47% is a technical issue. If the animat is just sitting still (which it is without connections), it gets hit by ("catches") some blocks correctly and correctly avoids some blocks. 0% fitness would correspond to doing the task entirely wrong, i.e. catching all the large blocks and avoiding all the small blocks. One could rescale fitness to 0 for no connections, with negative values if the animats do worse than doing nothing at all. That wouldn't affect any of the results.

        As for your second question, after each generation the animats are selected by the algorithm probabilistically, depending on their fitness. If they all do terribly, then each of them has the same probability of 'reproducing' into the next generation.

        The population size is kept fixed at 100 animats. So it can be the case that some animats are copied several times, while others are not copied at all.

        The genomes of the animats in the new population are then mutated with low probability, and some of the mutated animat offspring may now have a first connection that allows them to have a little bit higher fitness in generation 1 (or whenever such a mutation first happens).

        These slightly fitter animats then have a higher probability of 'reproducing' into the next generation and so on. The way to see this is that it's not the animat itself that is put into the next generation, but its mutated offspring, which can be fitter than its parent.
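        In case pseudocode makes the procedure clearer, here is a minimal, hypothetical sketch of this kind of fitness-proportional selection with mutation (just an illustration with a toy genome and toy fitness function, not the actual Markov Brain genome or block-catching task):

        import random

        POP_SIZE = 100        # population size is kept fixed at 100 animats
        GENOME_LEN = 50
        MUTATION_RATE = 0.01  # low per-site mutation probability

        def fitness(genome):
            # toy placeholder: fraction of 1s in the genome; in the animat task it
            # would be the fraction of blocks correctly caught or avoided
            return sum(genome) / len(genome)

        def next_generation(population):
            scores = [fitness(g) for g in population]
            if sum(scores) == 0:
                # if all do terribly, each has the same chance of 'reproducing'
                parents = random.choices(population, k=POP_SIZE)
            else:
                # fitness-proportional selection: fitter animats may be copied
                # several times, others not at all
                parents = random.choices(population, weights=scores, k=POP_SIZE)
            # it is the mutated offspring, not the parent itself, that enters
            # the next generation
            return [[1 - bit if random.random() < MUTATION_RATE else bit for bit in p]
                    for p in parents]

        population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
        for _ in range(100):
            population = next_generation(population)
        print(sum(fitness(g) for g in population) / POP_SIZE)  # mean fitness rises over generations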

        I hope this made sense! Let me know if you still have questions.

        Best,

        Larissa


        Dear Dr. Albantakis,

        I read your essay with great interest. Your studies of even very small model neural networks show clearly that they evolve adaptive behavior which mimics that of biological organisms.

        I also address the issue of adaptation in my own essay, "No Ghost in the Machine". I argue that recognition of self, other agents, and a causal narrative are built into specific evolved brain structures, based on neural networks, which create a sense of consciousness as part of a dynamic model of the environment. The reason that this is such a difficult problem is that we are being misled by the subjective perceptions of our own minds.

        Also, I noticed that you work at an Institute for Sleep and Consciousness. In my essay, I cited the work of Prof. Allan Hobson at Harvard, who emphasizes the importance of the dream state as an alternative conscious state that can provide essential insights. Do you have any thoughts about this?

        Alan Kadin

          Dear Dr. Kadin,

          Thank you for your interest! Indeed, sleep is a very interesting state for consciousness research, as it is possible to compare conscious vs. unconscious levels within the same state using so-called non-response paradigms. Taking consciousness as phenomenology, dreaming clearly counts as being conscious. I also happened to notice that the Scientific American article about sleep you cited in your essay in fact describes research performed at the Wisconsin Center for Sleep and Consciousness. (Please see our website http://centerforsleepandconsciousness.med.wisc.edu/index.html for more interesting experimental work being done in this field.)

          It was a pleasure reading through your essay, and I hope you found the notion of causal control/autonomy advocated in my essay of interest. While the dynamical system as a whole (including the agent) may be dynamically determined, from the intrinsic perspective of the agent itself in its current state within that environment, there are causal constraints on its mechanisms from within the agent and from the environment. In this way, systems with the right kind of recurrent connections can causally constrain themselves above the background of influences from the environment.

          The animats are so relevant to ideas and theories about "dynamic models of the environment" because they provide an excellent model system to test the proposed ideas. What kind of mechanistic structure would be necessary to have any kind of "model of the environment"? Do the simple animats have it, some of them, or none? And if not, then why not? What is missing?

          Best regards,

          Larissa

          Dear Larissa,

          It was fun to catch up on your animats work. You make an unusual move here--at least from the point of view of many biologists, who follow Dan Dennett and like to reveal goal-directed behavior to be nothing but selection. We take the "intentional stance" because it's so useful as a prediction tool.

          By contrast, you want to locate goals through the causal powers that a system's internal representations possess. A lot of the essays this year have invoked information processing as a source of something meaningful. Yet it's never been entirely clear to me how we can really distinguish dynamics from computation (I try a different tack in my essay, talking about memory vs. memorylessness, but this only works as a negative case while you have an explicitly positive criterion).

          A while ago at SFI I remember a debate about whether the gas in the seminar room was performing a computation or not. Many of the computer scientists said "sure, why not." But nobody really felt satisfied by it. Computer scientists are great at recognizing what's a paper in computer science, but are not so great at telling us how to spot a computation in the wild.

          You've just jumped in and said, hey, there are certain causal features we expect to see in a system that's actually thinking. And then (if I understand correctly) you've attacked the "meaning from selection" story by showing that your animats might appear to have goals, but under this stricter notion, some actually don't.

          Your essay makes me want to suggest an experiment: what happens when animats interact? A concern with the setup as it stands is that if Phi is going up as the environment becomes more interesting, it could just be that complexity is leaking in from the environment--the system is mirroring interesting things that happen outside. But if you give animats a very simple game-theoretic problem, and they evolve towards high-Phi systems regardless, that would be a lovely ex nihilo demonstration. Famously, the Prisoner's Dilemma leads to all sorts of complexity, while being (at least on the game-specification side) a zero-memory, one-bit process. What would happen? It would be fun to correlate properties of the payoff matrix with Phi.

          Yours,

          Simon

            Hello Larissa

            Your project sounds fascinating, and must have been enjoyable.

            As you know, a crucial element in the experiment is the designer's goal. Without the designer there is no seeking, and no experiment.

            I'm not suggesting a religious significance to seeking, or intention, but rather, that there seems to be a presumption that seeking and avoiding, however rudimentary, can develop in a truly deterministic system. Goal-seeking behavior may seem unproblematic in a deterministic world just because it has emerged in ours, but try an experiment of any complexity without programming an appearance of goal-seeking and watch how many generations it takes for it to emerge on its own(!)

            You write of "goal-directed behavior" that "by the principle of sufficient reason, something must cause this behavior." You might be interested in my essay about spontaneity being more fundamental than causation, that it may be causally influenced, but essentially free of causation.

              Dear Simon,

              Good to hear from you. Your comment made my day, as you indeed captured the essence of my essay. The animats are such a great model system as they force one to consider the implementation of suggested potential solutions to intrinsic meaning, based on "information processing", "models about the environment", etc. Most of the time these ideas are presented abstractly, sound really great, and resonate with many people, but on closer consideration fail to pass the implementation test.

              With respect to the question of dynamics vs. computation, and whether the gas in the seminar room performs a computation, David Chalmers addressed a similar point here: Chalmers, D.J. (1996). Does a rock implement every finite-state automaton? Synthese 108, 309-333. It's about mapping any kind of computation onto a system that can assume various states. I think the conclusion is that in order to say that two systems perform the same computation, it is not sufficient for them to have a dynamical sequence of states that can be mapped onto each other. Instead, there has to be a mapping of all possible state transitions, which basically means the same causal structure, i.e. a mapping of the causal implementation.

              Along these lines, computation, in my view, requires knowing all counterfactuals. That is, to know that an AND gate is an AND gate and performs the AND computation, it is not sufficient to know that it transitions from 11 -> 1. One needs to know all possible input states (all possible counterfactuals) and the resulting output states.
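              A toy illustration of that point (my own example, not taken from the essay): several different gates agree with the single observed transition 11 -> 1, and only the full set of counterfactual input states tells them apart.

              gates = {
                  "AND":    lambda a, b: int(a and b),
                  "OR":     lambda a, b: int(a or b),
                  "XNOR":   lambda a, b: int(a == b),
                  "COPY_A": lambda a, b: a,
              }

              # every gate reproduces the observed transition 11 -> 1 ...
              print({name: g(1, 1) for name, g in gates.items()})

              # ... but only the complete truth table (all counterfactuals) distinguishes them
              for name, g in gates.items():
                  print(name, [g(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])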

              With respect to game theory, I know that Chris Adami and Arend Hintze have successfully applied the animats to games such as the prisoner's dilemma, but we haven't measured their integrated information in such environments yet. Memory does play a crucial role for evolving integration: games that can be solved merely by "reflexes" based on current sensory inputs will produce mostly feedforward systems. Evaluating the animats on multi-game versions with different pay-off matrices should indeed be interesting. Thank you for bringing that up! Relatedly, we are currently evaluating "social" animats that can sense other agents, and have so far mostly replicated the past results.

              Best regards,

              Larissa

              Dear Larissa Albantakis,

              Nice essay on animats,

              Your ideas and thinking are excellent, for example...

              By examining the informational and causal properties of artificial organisms ("animats") controlled by small, adaptive neural networks (Markov Brains), this essay discusses necessary requirements for intrinsic information, autonomy, and meaning.

              Some of the animats even lack the conditions to be separate causal entities from their environment. Yet, observing their behavior affects our intrinsic mechanisms. For this reason, describing certain types of directed behaviors as goals, in the extrinsic sense, is most likely useful to us from an evolutionary perspective.

              A good idea, I fully agree with you.

              At this point I want to ask you to please have a look at my essay, where reproduction of Galaxies in the Universe is described. The Dynamic Universe Model is another mathematical model of the Universe. Its mathematics show that the movement of masses will have a purpose or goal; different Galaxies will be born and die (quench), etc. Just have a look at my essay, "Distances, Locations, Ages and Reproduction of Galaxies in our Dynamic Universe", where UGF (Universal Gravitational Force), acting on each and every mass, will create a direction and purpose of movement.

              I think intention is inherited from the Universe itself by all biological systems.

              For your information, the Dynamic Universe Model is totally based on experimental results. In the Dynamic Universe Model, space is space and time is time, at the cosmological level or at any level. In classical general relativity, space and time are convertible into each other.

              Many papers and books on the Dynamic Universe Model have been published by the author on unsolved problems of present-day physics, for example 'Absolute Rest frame of reference is not necessary' (1994), 'Multiple bending of light ray can create many images for one Galaxy: in our dynamic universe', about "SITA" simulations, 'Missing mass in Galaxy is NOT required', "New mathematics tensors without Differential and Integral equations", "Information, Reality and Relics of Cosmic Microwave Background", "Dynamic Universe Model explains the Discrepancies of Very-Long-Baseline Interferometry Observations", and in 2015 'Explaining Formation of Astronomical Jets Using Dynamic Universe Model', 'Explaining Pioneer anomaly', 'Explaining Near luminal velocities in Astronomical jets', 'Observation of super luminal neutrinos', 'Process of quenching in Galaxies due to formation of hole at the center of Galaxy, as its central dense mass dries up', and "Dynamic Universe Model Predicts the Trajectory of New Horizons Satellite Going to Pluto", among other papers from the Dynamic Universe Model. Four books have also been published: Book 1 shows that the Dynamic Universe Model is singularity free and body-to-body collision free; Books 2 and 3 explain the equations of the Dynamic Universe Model; Book 4 deals with the prediction and finding of blue-shifted Galaxies in the universe.

              With axioms like... No Isotropy; No Homogeneity; No Space-time continuum; Non-uniform density of matter (the Universe is lumpy); No singularities; No collisions between bodies; No black holes; No wormholes; No Big Bang; No repulsion between distant Galaxies; Non-empty Universe; No imaginary or negative time axis; No imaginary X, Y, Z axes; No differential and integral equations mathematically; No General Relativity, and the Model does not reduce to General Relativity under any condition; No creation of matter as in the Big Bang or steady-state models; No many mini Big Bangs; No Missing Mass; No Dark Matter; No Dark Energy; No Big Bang-generated CMB detected; No Multi-verses, etc.

              Many predictions of the Dynamic Universe Model came true, like blue-shifted Galaxies and no dark matter. The Dynamic Universe Model gave many results that are otherwise difficult to explain.

              Have a look at my essay on the Dynamic Universe Model, and also at its blog, where all my books and papers are available for free download...

              http://vaksdynamicuniversemodel.blogspot.in/

              Best wishes for your essay.

              For your blessings please................

              =snp. gupta


              Dear James,

              Thank you for your comment and for taking the time to read my essay! Indeed, in these artificial evolution experiments, some kind of selection bias has to be assumed that leads to certain systems being preferred over others. In the absence of biased selection, causal structure may emerge, but will not be stable for more than a couple of generations.

              I read your essay about spontaneity with much interest. A possible connection could be that, in the causal analysis described, any element within the system that is not being constrained is assumed to be at maximum entropy, and the cause-effect power of a mechanism is also evaluated against maximum entropy. Certainly, though, my analysis starts by assuming physical elements with at least two states that can causally constrain each other, and leaves room for more fundamental concepts.

              The point I want to make with the essay is actually quite similar to Searle's Chinese Room argument, but it aims at least at a partial solution. The two animats perform the same task, but in the feedforward case there is no system that could possibly have any understanding of the environment (or anything else), as there is no system from the intrinsic perspective in the first place. This animat would correspond to the lookup table. The other animat does have a small but nevertheless integrated core that constrains itself and thus at least forms a minimal system that exists from the intrinsic perspective above a background of influences from the environment.

              Best regards,

              Larissa

              Dear Larissa,

              nice and dense essay! One of the aspects that intrigued me most, and that, I believe, adds much originality to your work, is the attempt to tackle goal-oriented behaviour from the perspective of the 'intrinsic' features of the agent - beyond what appears to the external observer. However, I'm still trying to understand clearly the sense in which the use of internal cause-effect information, based on conditional state distributions and the IIT tools, should yield a 'more internalised' notion of goal-oriented behaviour for an open subsystem than, say, the plain detection of a local entropy decrease. In which sense is the former more internal? Does it refer to an issue of internal interconnection architecture, high Phi values, and ultimately growing consciousness?

              One of the most attractive (at least to me) hard questions related to the 2017 Essay Contest is the difference between re-acting and acting: when and how does the ability to act spontaneously, as opposed to reacting (to, say, the arrival of pieces of different sizes) arise in artificial or natural systems? As far as I have seen, none of the essays has tackled this issue directly. What (new?) information-theoretic 'trick' is required for obtaining an animat that starts doing something autonomously and for no reason, i.e., not as a reaction to some external stimulus? In your opinion, is it conceivable to characterize (and synthesize) this skill just in the framework of IIT [... yielding an animat that stops catching pieces and says "Larissa, give me a break!" :-] ?

              Another small question: in the simulation of [8] it seems that fitness increases visibly, while Phi doesn't. In general, shouldn't one expect them to roughly grow together?

              Thank you!

              Tommaso

              http://fqxi.org/community/forum/topic/2824

                Dear Tommaso,

                Thank you very much for your comment and insightful questions. In contrast to something like a measure of local entropy decrease, the IIT formalism does not just yield a quantity (integrated information) but also a characterization of the system, its cause-effect structure, which is the set of all system mechanisms that constrain the past and future states of the system itself. The mechanisms specify everything within the system that makes a difference to the system itself. In this way I don't just find out whether there is an intrinsic system in the first place, but also get a picture of its capacity for 'understanding', of what matters to the system and what cannot possibly matter to it because it doesn't have the right mechanism to pick it up. I hope this helps. Characterizing more precisely how intrinsic meaning could arise from the cause-effect structure is work in progress in our lab.

                I completely agree with your point regarding 'acting' vs. 'reacting'. In fact, this is basically the topic of my fellowship project for the next 3 years. Our goal is to quantify when and how strongly an action was caused by the system as opposed to the environment. Autonomous action here means that the system's action is not entirely driven by its current sensory inputs from the environment. Making a choice based on memory, however, would count as autonomous. If you look at the website (ref [8]) under the 'task simulation' tab and set it to trial 55, for example, you can see that the animat already shows a little bit of autonomous behavior in that sense: it first follows the block, then goes in the other direction, then follows again. This means that its behavior doesn't just depend on the sensory inputs, but is context-dependent on its own internal state. This is a little different from your definition of autonomy ('doing something for no reason'), which could be achieved with just a little noise inside the system.
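                To make the distinction concrete, here is a hypothetical toy contrast (my own sketch, not the animats' actual Markov Brain update rules): a purely reactive controller maps the same sensory input to the same action every time, whereas a controller with one bit of internal state can respond differently to identical inputs depending on its history.

                def reflex_policy(sensor):
                    # output depends only on the current sensory input
                    return "follow" if sensor else "stay"

                class StatefulPolicy:
                    def __init__(self):
                        self.memory = 0  # one bit of internal state

                    def step(self, sensor):
                        # identical sensor values can yield different actions,
                        # depending on the state left behind by past inputs
                        action = "follow" if (sensor and not self.memory) else "move_away"
                        self.memory = sensor
                        return action

                agent = StatefulPolicy()
                for sensor in (1, 1, 0, 1):
                    print(sensor, reflex_policy(sensor), agent.step(sensor))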

                As for your last question: the issue with a trial-by-trial correlation of Phi and fitness is that an animat can always have more Phi than is necessary, as there is no real cost to being more integrated than needed, the way the simulation is set up. Moreover, fitness can also increase due to, e.g., a connection from a sensor to a motor (a reflex), which would not increase the integration. In practice, for complex tasks, there should be a lower limit on the amount of integration required, given constraints on the number of elements, connections, and time available to perform the computations, since integrated systems are more economical than feedforward systems.

                Best regards,

                Larissa

                Dear Larissa,

                I carefully read your essay. Your approach and mine are radically different, but precisely this could be a sufficient reason for a good discussion.

                Your essay has a great merit. You honestly describe the constraints a given system has to master so that we can ascribe goals to the system in question: "A system can only 'process' information to the extent that it has mechanisms to do so." And "The cause-effect structure of a system in a state specifies the information intrinsic to the system, as opposed to correlations between internal and external variables. If the goals that we ascribe to a system are indeed meaningful from the intrinsic perspective of the system, they must be intrinsic information, contained in the system's cause-effect structure (if there is no mechanism for it, it does not matter to the system)." Finally, "Yet, the system itself does not 'have' this intrinsic information. Just by 'processing' information, a system cannot evaluate its own constraints. This is simply because a system cannot, at the same time, have information about itself in its current state and also other possible states."

                Shortly speaking, for the concept of a "goal" related to any system to have a meaning, the system in question must be equipped with a lot of natural or artificial devices, and the set of the latter is supposed to be configured in an exactly determined way.

                In suggesting that the foregoing is easy to say, but much less easy to realize, or even to model, you are absolutely right.

                Well, but do you not agree that the problem is much more fundamental?

                To specify the information intrinsic to any system, the required internal causal structure of this system must be "able to specify" information, and this "ability" presupposes other "abilities", like the "ability" to recognize information before and after it is specified. So, the more fundamental question is: where do these "abilities" come from?

                Yes, "by 'processing' information, a system cannot evaluate its own constraints", but the very fact of evoking systems "'processing' information" already implies the presence of "information processors" within these systems, and once again, we have to ask the more fundamental question is: where do these "information processors" come from?

                And so on.

                These "more fundamental" questions which until further notice have no answers converge to the problem of generalized irreversibility. In a classical manner going back to Clausius, generalized irreversibility can be formulated as follows: For any system S apparently violating irreversibility, there is a "wider" system S' "comprising" S, so that at the level of S', irreversibility is reestablished. In the classical formulation, notions like "wider systems" or "systems 'comprising' other systems" are rather vague, and so not really appropriated for taking into account intrinsic information or integrated information you are focusing on.

                Now, in order to touch on the essential without too many formal developments, let us consider the good old Maxwell's Demon operating on its Boltzmannian gas. In your eyes, Maxwell's Demon perhaps belongs to ancient history, and most authors, for diverging motivations going from Landauer's "principle" to whatever it may be, believe that the Demon is not able to accomplish its mission. But on the other hand, the Demon represents an intuitive means to grasp the more fundamental problem behind all superstructure problems concerning integrated information. So let us proceed as if Maxwell's Demon could do its job.

                Operating in the well-known way, the Demon pushes the gas back to its ordered initial state. Under single-step selection conditions, the improbability of the transition would be tremendously high. Considered alone, the gas expresses a genuine irreversibility violation. In fact, the gas is not alone, because of the Demon's presence. Here the "wider system" reestablishing irreversibility is to be interpreted as a system with integrated information, and so all the questions arising with regard to information integration arise again. It is easy to "imagine" - like Maxwell - the existence of the Demon. By contrast, it would be hard - infinitely hard - to equip a mesoscopic, perhaps I should say microscopic, device so that it is able to detect instantaneously the motion state - velocity, acceleration, direction - of each molecule present in the neighborhood of the gate, knowing that in the sole neighborhood of the gate you find an unimaginable number of molecules. Further, the microscopic Demon has to be able to take the right decision instantaneously. And then, the Demon must be conditioned to be a serious, conscientious scientist, meticulously respecting the experimental protocol, and not a trouble-maker misusing its quasi-infinite intelligence to make bad jokes or something else, and this point presupposes moral qualities. And so on. Yet the foregoing is not just an easy caricature. A task like re-ordering a disordered gas - a simple task in comparison with other tasks implying aims and/or intentions - needs information integration we cannot master, neither technologically nor intellectually. I think you agree.

                But now we arrive at the essential: beyond integration problems, there remains the more fundamental problem of generalized irreversibility. Even if the Demon, against Landauer, Szilard, Brillouin, Costa de Beauregard ..., actually managed to "generate work by entropy reduction", generalized irreversibility would not be violated: the transition of the gas from maximal disorder to initial order under single-step selection conditions is tremendously improbable, yes, but the "emergence" of the Demon under the same single-step selection conditions is infinitely more improbable. So, as in any case of generalized irreversibility, the apparent irreversibility violation by the gas is "paid for" by a correspondingly higher improbability at the level of the "wider" system consisting of the gas and the Demon.

                As long as the devices required by information integration are given, information integration is hard to formalize and hard to realize, but at least we can conceive of it to some extent.

                By contrast, in a context like evolution, where the devices required by information integration are not given, we have to ask where they come from, and at this level of analysis we are simply lost.

                So, in my own paper, Daring Group-theoretic Foundations of Biological Evolution despite Group Theory, I try to tackle the problem at its source, at the fundamental level concerning irreversibility.

                Precisely because of the differences between your really interesting paper and mine, a discussion about both papers would be a pleasure for me.

                All the best; good luck

                Peter