Let me anticipate your response by raising the following issue. In Peircean terms one can describe the action of a room thermostat as follows: in this context the room temperature is the significant quantity, or sign, and is the input to a process that, if the temperature is excessive, responds by turning off the heat. Would you reject such descriptions and insist only on mathematical ones (which are of little utility to someone trying to fix a problem)? Or is your objection that they add nothing to common sense, in which case I have to suggest that you read more, so you can see that in more complicated situations semiotic accounts are by no means trivial.

Dear Professor Josephson,

W. V. O. Quine made clear why positivism failed (see 'Two Dogmas of Empiricism'). Things have meaning only in the widest context of other things, i.e. the sign is never attached to a single thing, nor even to a single observation sentence. When a woman is sent a bunch of red roses, it is usually taken (in Western culture) as a sign of romantic love. When a woman was sent a bunch of red roses by Al Capone, it was a sign of her becoming a widow soon...

Your thermostat description is an example of a well-formed sentence adhering to syntax and semantics, which is why it has meaning, but I can't see how it relates to semiotics. If, however, you think that the thermostat can be objectively (pre-linguistically) described by signs (as the term biosemiotics suggests), then Peirce, who described his mature ideas as being very close to Kant's, would most likely disagree. Things have meaning (are signs) for US; what they are beyond that... we cannot know.

So, I read your essay with great interest, because it carries 'meaning' (without which all is nothing indeed) in its title, but was a bit disappointed to find it reduced to 'objectifying' semiotics.

Heinrich Luediger

Somewhere (sorry I can't locate the reference) the point is made that for a long time sign theory and science were kept separate: sign theory was considered relevant only to human thought, while biology thought only in terms of information, ignoring the concept of sign. Then some biologists realised that the two could be fitted together and so biosemiotics came into existence. In other words, science has taken semiotics beyond what was envisaged by Peirce, though the utility of his ideas remains.

Professor Josephson,

I found your essay on meaning fascinating, provocative, and, alas, troubling, due to the highly unsettling details of one of your major references. That said, your more formal explanation of that same reference led me to an interpretation that relies only on specific examples from well-established fundamental physics. I believe that re-interpretation both broadens your definition of meaning and is directly relevant to it.

While my perspectives on meaning would likely fall under the "meaning is emergent" category of your essay, my version of meaning emergence is a bit more subtle than that. That is because I accept the various anthropic arguments that calculate unbelievably high probabilities against the emergence of life in a more randomly parameterized universe, let alone one with radically different fundamental rules. I do not find it plausible that the existence of life can be separated from the existence of meaning. So, when I say that meaning is "emergent" in terms of experimentally testable information theory definitions, I am referring only to our own limited human mechanisms for the discovery of meaning. Meaning itself appears to be inherent and pre-programmed into the very fabric of our cosmos, both at the level of the Standard Model and deeper. I do not think we are remotely close to understanding how that works, or even how to frame the question properly.

When I say that I reflexively re-interpreted your comments on "oppositional dynamics" and "scaffolding" in terms of fundamental physics, I would point out as a simple example the property of stability (persistence) that is characteristic of our universe from the fermion level up. That stability is in turn a fundamental prerequisite for all forms of information and meaning, and after reading your essay I found myself unexpectedly agreeing that this stability stems from a curious process of two (or more) opposing entities coming together... but only in certain very specific ways, which I would state as follows:

The Persistence Principle. Within our universe, persistence and stability emerge as a result of incomplete cancellation of fundamentally conserved quantities.

While it is not the most fundamental example of this principle, the hydrogen atom illustrates it beautifully. The simplest bound positive-negative system is positronium, an electron and a positron in close proximity. In the singlet (para-positronium) state it mostly decays into two gamma photons. When this occurs, any long-term stability or persistence is lost, with the two gammas sailing off in opposite directions to perpetuate a smoother, more plasma-like state of the universe.
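As a sketch of the bookkeeping involved (standard textbook values, added here for illustration rather than taken from the essay):

$$ e^- + e^+ \;\to\; \gamma + \gamma, \qquad E_\gamma \approx m_e c^2 \approx 511\ \mathrm{keV}\ \text{per photon (center-of-mass frame)}. $$

Charge, lepton number, energy, momentum, and angular momentum all balance in this decay, so nothing forbids the annihilation and nothing persistent is left behind.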

In sharp contrast, an electron bound to a proton cancels charge, but does not self-annihilate due to imbalances in other conserved quantities. A sort of stalemate is reached, one in which the universe as a whole becomes quieter and less dynamic due to fewer long-range electric fields. The hydrogen atom itself then persists in this quieter medium, no longer as subject to the overwhelming influence of such powerful fields. This is the first step in the creation of classical, history-creating information, since classical information is after all nothing more than the particular configuration of a local system after wave function collapse (or, in David Deutsch's terminology, after universe selection).
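A one-line way to see the stalemate (standard conservation-law bookkeeping, added here for illustration) is to tally the conserved quantities of the electron-proton pair:

$$ Q:\ (-1) + (+1) = 0, \qquad B:\ 0 + 1 = 1, \qquad L:\ 1 + 0 = 1, $$

so the pair cannot convert entirely into photons; the uncancelled baryon and lepton numbers survive, and the lowest-energy configuration available is the bound state, the hydrogen atom.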

This emergence of persistence is, I'm fairly sure, a physics-level example of how our universe seems custom-designed (the anthropic numbers again) to create what you call scaffolding, that is, forms of persistence upon which still higher levels of persistence, information, and meaning can then exist. Your oppositional dynamics then become these incomplete cancellations of conserved quantities, at many different scales and levels of complexity.

It is important to note that scaffolding -- useful, usable persistences -- emerges only when two (or more, e.g. quarks) mutually canceling quantities are not exact mirror images. In effect, the incomplete cancellation allows the remaining fundamentally conserved quantities to emerge as first-order entities in their own right. Thus hydrogen atoms are characterized at a distance primarily by mass and location, both of which persist in ways that allow new forms of complexity to emerge at still higher levels.

I would humbly suggest that this path might be a way of translating some of your intriguing ideas into both a more fundamental and more accessible form.

If so, it means that your oppositional dynamics can (and really, must) be generalized to numbers beyond just two. A split circle at best represents only the binary case of incomplete oppositional cancellation, and that binary case becomes commonplace only at the less fundamental level of atoms. Quarks forming protons and neutrons are examples, and proof, that trinary cancellation works extremely well for creating scaffolding, since protons are arguably among the most common and enduring information-conserving artifacts within our universe.

The chemical elements can also be interpreted as n-ary cancellations of charge, though of course one could also interpret them as bundles of proton-electron pairs. The intriguing aspect of this n-ary incomplete-cancellation interpretation of atoms is that their cancellations are a bit flexible, allowing a tiny bit of charge-cancellation variation through which compounds can emerge to provide still higher levels of complexity.

Jumping to an enormously higher level of complexity, your quotation of Hoffmeyer:

"This network of [local] semiotic controls establishes an enormously complex semiotic scaffolding for living systems."

... invokes far more complicated networked and multi-level forms of cancellation that in economic theory would be called a "free market economy." Such economies produce new products that quickly discard (cancel out) the details of how they emerged, and instead become new, persistent components with higher levels of meaning, which then enable new levels of interaction and emergence.

So what is the bottom line? I would simply suggest that your main ideas, especially if generalized to the n-ary cases instead of focusing solely on the limited binary case, are much more deeply embedded in fundamental physics than meets the eye at first glance. To repeat my proposed generalization:

The Persistence Principle. Within our universe, persistence and stability emerge as a result of incomplete cancellation of fundamentally conserved quantities.

Sincerely,

Terry Bollinger

(FQXi topic 3099, "Fundamental as Fewer Bits")

    Dear Professor Josephson,

    The point of 'objectivity' is indeed a crucial one: Newton's laws are inter-subjective, because everyone equipped with a yardstick, a clock, and a balance can try to falsify them or simply use them.

    Now, theories of evolution in the widest sense are 'objective' inasmuch as they are logical constructions definitely ruling out the inter-subjective observer - they are object-centered. However, what they claim WOULD be observable IS not observable, because the 'objective' vantage point cannot be taken by any subject. The conclusion, then, is that by being 'objective', i.e. not inter-subjective, such theories are subjective - a matter of belief or, rather, persuasion.

    What keeps our thinking apart is (in my opinion) that you think of signs in terms of communication theory, whereas I consider language to be constitutive of experience and a lucky misunderstanding at best when it comes to communication.

    Despite these differences I'm glad that we agree on the importance of meaning...

    Heinrich Luediger

    Oppositional dynamics, like semiosis, does involve triads, though this was not very explicit in my brief account, so I am not dealing with just your 'limited binary case'. Where the triad enters is in the statement 'this coordination has itself a cause'. Yardley talks about triads quite a lot in her book, and persistence is an essential characteristic of biological systems, so that is implicit also. I agree in principle with much of what you say above, but I will be expressing it rather differently. As I said, triads play an important role in my approach and in Yardley's; 'trinary cancellation' looks like a good phrase and I may work it in (though if I understand your term correctly the idea is already present in Peirce, who as I recall refers to correlations of three entities that cannot be reduced to basic correlations of two of the three). By the way, parametric amplification is a very simple case of a triad (pump-signal-idler) and, as I see it, this is more or less how it all begins.

    Dr Josephson,

    By "trinary cancellation" I mean the red-green-blue color charge cancellation of the strong force. This particular type of cancellation is particularly powerful due to color confinement, which makes color invisible anywhere in the universe outside of nucleons and mesons. Structures (scaffolding) that emerge from this striking partial cancellation include electric charge, mass, and spin.

    Thank you for the excellent (I was not clear at all) question and helpful link advice! I must also apologize for my slow response. I seem not to get notifications about responses from other essay threads, so I had to manually search to find responses.

    A bit more elaboration about "fundamental circles" is provided below. My apologies for the length, but the relevance of your annihilation-emergence concept, especially with some easy generalizations, seems to have engaged my interest more than I anticipated. I think it is very relevant in particular to the anthropic probability issue.

    ----------

    Assuming I understood your concepts rightly -- I do not presume this, but hope so -- then the three-part mutual cancellation of the color or strong force would seem very much to fit with your idea that persistence emerges from mutual... opposition? cancellation? Even if it is not an exact fit with your ideas, your essay has convinced me that this type of almost-complete cancellation is a deep and vital component of how our universe manages to meet the astonishing anthropic probabilities.

    I would like to suggest an important and I think complementary addition, which is this: Your concept (along with your major reference; I acknowledge that it is primarily her idea) not only creates scaffolding, but also creates flatness. By flatness I mean the ability for persistent entities to spread out without penalty or excessive cost over large spaces. Within those spaces, which are just as much a creation of the incomplete cancellation as are the scaffolding structures, the emergent structures are able to exist independently and subsequently to interact in very interesting ways.

    I would suggest that this emergence of flatness from your circle scenario is every bit as important as the emergence of the scaffolding itself, and that the two are in fact complementary to each other. The "mutual consumption" of the entities (two or more) is what clears the field and makes a flat, expansive space possible, while the incomplete part of the cancellation creates the relatively isolated entities (e.g., atoms) that reside within that "burned out" space.

    I can think of no more literal example of emergent flatness than the formation of hydrogen atoms at recombination, a few hundred thousand years after the big bang, which cleared space for photons and enabled the formation of far more interesting entities, such as galaxies and stars and planets. Space itself was already flat, but not electromagnetically: long-range electric fields are incredibly powerful and, at large scales, hostile to chemistry or anything else resembling our world. This event mostly cleared out those powerful fields and enabled entities made of atoms -- the emergent scaffolding -- to exist in relative isolation and with far greater persistence of their states over time. In short, plasma became memory, some of the earliest fabric of classical, information-rich history.
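    The reaction in question is ordinary recombination (standard cosmology, added here only for concreteness):

    $$ p^+ + e^- \;\to\; \mathrm{H} + \gamma \qquad (T \sim 3000\ \mathrm{K},\ \text{roughly } 380{,}000\ \text{years after the big bang}), $$

    after which the photons decouple and the long-range fields of the plasma largely vanish.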

    (SIDE NOTE: If you believe in space itself as emergent, which I do incidentally, then at some deeper level not covered by the Standard Model there must exist yet another circle of annihilation-emergence that quite literally creates the flat xyz space that makes our entire expansive universe possible. There is lively physics dialog going on these days about quantum entanglement as a possible path to that emergence. Alas, that dialog is sadly encumbered by a completely unnecessary historical insistence on pushing the argument down to the astronomically energetic Planck level, nominally in order to include gravity, even though that approach has for 40 years failed to yield a meaningful theory. Since entanglement works very well indeed at the ordinary particle level, insisting that it be pushed down to the astronomical energy levels of Planck space violates Occam's razor about as emphatically as any proposal of which I am aware. So: Entanglement as a possibility for emergent space is intriguing. Entanglement when forced down to the astronomically energetic Planck level is... not persuasive at all.)

    ----------

    On a separate point, there is an easy way to unify oppositions of two or more as part of a single model. Here's how:

    If you take your circle and imagine the two charges as points on opposite sides of the circle, you have the binary cancellation case.

    If you instead take three equal charges and distribute them in an equilateral triangle around the same circle, you have the trinary cancellation case.

    However, there is no reason to stop there. Placing charges at the vertices of any regular polygon with 4, 5, etc. points also works fine. That those cases do not seem to occur in fundamental physics does not preclude them from occurring in your higher levels of organization. I would argue for example that the benzene ring, which is to me one of the most delightful and important stabilizing structures in all of biochemistry, is an example of a six-point mutual cancellation yielding a new form of scaffolding. For biochemistry, the idea of the stable benzene ring as "scaffolding" is about as literal as it gets. We would not exist without it, because without the stabilizing effect of partial implementations of this ring there would be, at a minimum, no amino acids and no strong, persistent way to make "interesting" molecules (ones more complex than, say, polyethylene).

    Finally, your circle charges ("entities in conflict") need not even be regular polygons. You could have two pairs of nearby charges on opposite sides, for example.

    The full generalization, including unequal charges and even distribution over three-dimensional space (!), is to treat the charges like angular momentum vectors that collectively cancel out to zero angular momentum. The angular momentum model is really inherently 3D, with the 2D (circle) case just a subset, since there is a very special relationship between angular momentum and 3-space due to 3D space's unique interchangeability of rotations and vectors. The model conspicuously does not generalize in any simple way to any dimensionality other than 3D and its subsets of 2D and 1D.
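    As a toy numerical check of this picture (my own illustrative sketch in Python, not anything taken from your essay or its references): equal charges at the vertices of a regular polygon cancel exactly for the binary, trinary, and benzene-like six-fold cases, an arbitrary set of 3D vectors plus one balancing vector cancels just as well, and perturbing any single member leaves exactly the kind of uncancelled residue discussed above.

    import numpy as np

    def polygon_charges(n):
        """Unit 'charge' vectors at the vertices of a regular n-gon (the 2D circle case)."""
        angles = 2 * np.pi * np.arange(n) / n
        return np.stack([np.cos(angles), np.sin(angles)], axis=1)

    def residual(vectors):
        """Magnitude of the vector sum; zero means complete mutual cancellation."""
        return np.linalg.norm(vectors.sum(axis=0))

    # Binary (n=2), trinary (n=3, the color-charge analogue), and benzene-like (n=6) cases:
    for n in (2, 3, 6):
        print(n, residual(polygon_charges(n)))    # all effectively zero

    # Full 3D generalization: arbitrary vectors plus one balancing vector also cancel.
    rng = np.random.default_rng(0)
    vecs = rng.normal(size=(5, 3))
    vecs = np.vstack([vecs, -vecs.sum(axis=0)])
    print("3D residual:", residual(vecs))         # effectively zero

    # An "incomplete" cancellation: nudge one member and a persistent residue remains.
    vecs[0] *= 1.1
    print("after perturbation:", residual(vecs))  # > 0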

    Incidentally, I should note that your original circle model, if presented in terms of angular momentum vectors, is really the 1-space (1D) case; the circle is... well... not really necessary? You just have two entities at opposite ends of a line, after all.

    The deeply fundamental trinary color force example does, however, require an actual circle or 2-space subset. So arguably, the circle begins not at the 2-charge electromagnetic level, but at the smaller scale in which nucleons emerge. One could thus say the circle is more fundamental... but only for three opposing forces, rather than for just two.

    I do not know what the potential of the full 3-space model is. However, once again I would point to biochemistry for a very interesting high-number 3-space example of mutual cancellation leading to stability: C60, also known as buckminsterfullerene. These marvelous little geodesic spheres have no fewer than 60 fully symmetric vertices (the carbon atoms) that collectively form one of the most stable overall molecules (scaffolding again) known in chemistry.

    ----------

    Enough. Thank you for your excellent question. I was in retrospect very far from being clear when I casually dropped in the term "trinary".

    Cheers,

    Terry Bollinger (Essay 3099)

    Just to deal with a technicality first of all: each essay has a 'subscribe' button at the top of the page, which you can use to be notified automatically of postings re that essay. Unfortunately the notification you get does not give you a specific link to the new posting, which is sheer incompetence as every posting has its own 'anchor' (you have to look at the source code of a web page to find out what it is so you can make a link from it).

    Your comments on flatness and cancellation raise interesting issues in regard to invariance and symmetry, which I will elaborate on separately. Watch this space!

    Professor Josephson,

    I am very much looking forward to seeing your elaborations, particularly since you will be addressing invariance and symmetry!

    Cheers, Terry Bollinger

    ----------

    Tangent Warning! Below are some observations on FQXi software support.

    I share your frustrations. I suspect that what happened to FQXi is a classic case of legacy lock-in. That is, many years ago FQXi took a "pretty good first whack" at their software, figuring they would improve it later. But after they got it working, they quickly began accumulating a large "customer base" of user data (essays, etc.) for that early design. This made it difficult to change any of the basics without also upsetting or even losing legacy data. Over time, that just gets worse, making change even more difficult.

    It would likely be complicated, risky, and costly now for FQXi to update even simple functions. I also strongly suspect they have only very minimal software update support, which is not unusual for a smallish group like FQXi.

    FQXi likely needs to transition to an updated and more open-source-based model to break the lock-in, but that is of course easier said than done. A good open source tool expert (not me!) could likely move them to a new model at low cost, though, since there are some impressive blogging and customer support tools out there these days.

    Dear Brian,

    I'm very much a fan of the Bohm and Wheeler elements.

    Two small points: (1) I wonder whether the focus on meaning is a bit of a red herring? Wouldn't any emergent phenomenon make the same point (e.g. money, wetness, hurricanes, swarms, etc.)? What is special about meaning as distinct from other examples of emergence?

    Also: you mention general relativity cannot be fundamental because all it does is gravity (ditto, mutatis mutandis, for the standard model that doesn't do gravity). But it should maybe be noted that attempts were made to get particle physics out of general relativity (Einstein and Wheeler), and attempts were made to get gravity from particle physics. They didn't work, of course, which is why you probably ignore them - but perhaps a mention is in order.

    Best,

    Dean

      Brian,

      I read all your linked papers, and delighted in them. Sorry I hadn't read more of your work before, aside of course from technical work on superconductors and Josephson junctions. I'll do my best to correct that.

      Anyway, the notion that "Biosemiotics has a concept 'code duality' which is roughly the idea that codes and their references generate each other" resonates. And I think it goes deeper than organic life.

      Dear Brian,

      I write all my comments in Word. When finished, I log in and copy and paste them. To read the comment(s) while writing in Word, I switch windows by simultaneously pressing Alt and Shift.

      Terry,

      there are some things that need to be improved in these contests, in my humble opinion. The logout problem seems to me a secondary issue. As I wrote recently on Ilja Schmelzer's essay page, I agree that the voting as such - and especially its use for determining the finalists - is absurd, since it heavily involves psychological inertia such as likes and dislikes (just as on Facebook) as well as group dynamics, mutual up- or down-voting, and the like, as the essay contest's timeline proceeds. In my opinion, results that are labeled 'scientific' should not be a matter of some Darwinian selection process.

      But obviously they nonetheless are a matter of Darwinian selection in this contest, since otherwise the optional criteria of Acceptability and Relevance would be among the initial eligibility criteria for submissions. Since FQXi declines to proceed in that way and instead delegates this to the contestants themselves through a voting process, it installs a Darwinian competition process with all its highly subjective pros and cons. If FQXi is convinced that Acceptability and Relevance can be judged more objectively, beyond such a Darwinian process of mutually excluding subjective interests, I would like to ask why it does not proceed accordingly. From the 'bird's-eye view' of the FQXi expert panel of judges, the criteria of Acceptability, Relevance and Interesting must be valid, since otherwise they could not adopt them at all for the final judging process.

      So why not abandon this absurd voting process and extend the mentioned eligibility criteria to serve as the final judging criteria from the very start, before the essays are published? It would enhance the readability as well as the understandability of the entries.