Thanks, Valeria, you've given a thoughtful review.

You generally prefer to speak frankly (you say) without worrying how your essay might be rated as a consequence. I feel the same; I no longer worry about the ratings. You said, "you will read and rate one's essay who reciprocate you. I keep your freedom being a bit more constrained by your own rule, than what this essay contest allows for learning something or see an other people view." To explain: I like sincere critique. Not everyone wants to give that, or receive it, so I made my policy explicit: I want frank reviews, and that's what I give in return. It's worked pretty well so far. I still read other essays; I just don't review them.

To answer your points:

1. I agree premise P1 (limit of light speed) can't be proved, only falsified. People do try to falsify it, but no one has yet succeeded, so it feels like a strong premise.

2, 4. You warn we might in future develop artificial beings more intelligent than we, but less moral. They'd undermine the steering practice by rejecting the moral theory and instead enslaving us, or destroying us. Do I understand you?

Why would they reject the moral theory? What possible reason could they give?

3. I'm happy you agree, "The tipping points between success and failure are many, and each hinges on the freedom of an individual whose identity is unknown." I'm a poor philosopher and my arguments are simple (maybe too simple), but they work for me.

So it was your own highlighter you recommended. Thank you! This was the first time I used one. It was far from perfect. If I need it again, I'll try Sticky Plus instead.

Mike

Hi Petio, Thanks for inviting me. Unfortunately I almost never join groups, associations and such. But I wish you luck, - Mike

Dear Mike!

Thank you for receiving my review, which wasn't truly a critique, only frank speaking :)

1. About falsifying premise P1 (the limit of light speed): I'm not a physicist at all, so I can't describe or prove it in any way. However, I frankly have concrete experiences of non-local thought communication (not restricted to time, space, or light speed), generally called telepathy. So I can agree with you in this: in normal, everyday human matters, this kind of experience and communication can be more disturbing than helpful.

2, 4. I answered these points on my essay page too.

"You warn we might in future develop artificial beings more intelligent than we, but less moral. They'd undermine the steering practice by rejecting the moral theory and instead enslaving us, or destroying us. Do I understand you?

Why would they reject the moral theory? What possible reason could they give?"

There may be many reasons. However, it seems almost certain that artificial beings will first come into existence by human creation. There may be humans who reject the moral theory and use artificial means for purposes of supremacy. As a consequence, their creatures, being their artificial likeness, may also reject the moral laws, not only enslaving or destroying us but eventually themselves. (See the Terminator film series; I also suggest watching the Eureka TV series, mainly season 5. The latter is closer to our present human understanding.)

3. Perhaps, if that individual could be known, the tipping points between success and failure might be far fewer.

I wrote my StickyPlusHighlighter Firefox extension for research purposes of any kind. Feel free to try it for any goal of yours.

I rated your essay.

Kind regards,

Valeria

Thanks for answering, Valeria, and for rating me. (If I were you, I wouldn't tell people when you rate them, because they might back-calculate the value of your vote. I couldn't do that in your case, only because I received a bunch of high and low votes all at the same time. Just so you know.)

1. I've no experience of telepathy, but I believe you speak sincerely, and I admire your sense of humour on the topic.

2, 4. I replied to Ross on this point (May 8), but didn't really explain my thinking. So here I try to explain: The moral theory states a principle of action for rational beings, which is to respect personal freedom (M2). To kill or enslave would obviously violate that principle. So either these cruel beings (the ones you warn of) are irrational and therefore not intelligent, or they think the theory invalid and non-binding on them, and therefore have a reason for thinking this. What reason? Why do they think the theory is invalid?

My argument here (based on Kant's insight) is that immoral behaviour is necessarily irrational. Therefore we've nothing to fear from an alien intelligence (artificial or natural) insofar as it's rational.

3. Yes, or the unknown tipping points would be fewer. But then morality might fail, or at least my theory of it, because it depends on not knowing. It's like blind Lady Justice, who treats all equally because she cannot see their differences.

Mike

Dear Mike!

About rating: I told you my sincere stance on it at length. I'm not concerned with who calculates what, or why. I only wished to let you know of my admiration.

1. I hold that good humour is an admirable gift. I'm lucky because I have much of it :) However, my telepathic experiences are not a joke. I hold it to be a natural ability, never proved by accepted science, which may be controlled by conscious intent and further extended by various means (just google 'synthetic telepathy').

2. "The moral theory states a principle of action for rational beings, which is to respect personal freedom (M2)." I can agree. However I think your further statements and explanations are not so logically direct deductions.

What or who, exactly, would be considered a 'rational being'? What exactly does the notion 'rational' mean? Is a rational being consequently moral? I think these questions still require more clarification. Yes, Kant was the father of arguing these very crucial matters. But I did not find his concrete statement of what you argue here, that "immoral behaviour is necessarily irrational".

You ask, and mention: "...they think the theory invalid and non-binding on them, and therefore have a reason for thinking this. What reason? Why do they think the theory is invalid?..."

Mike: Not everything has a pure reason! I think Kant is crucial here, because he felt/experienced this by instinct, but he wished to explain it to himself in rationalized form. Perhaps that is why he wrote the 'Critique of Pure Reason'.

About your question: I think they have no rational reason; they may simply be 'cruel' ones, and likely they think they are not bound by morals. Simply, there are those who have no morals, but this does not mean they are not quite intelligent, or that they are irrational; on the contrary, they may be quite sharp. Okay? I just warn that there may be such ones, who can act for supremacy beyond reason over nature and their own nature, and I think a moral theory won't stop them.

Kind regards,

Valeria

Hi Mike,

First of all, thank you again for your comments and critique of my essay. I see that you've been fairly successful in getting other entrants to critique your essay. I will give you my thoughts, which echo those of others, but is there anything in particular you would like me to give you feedback on?

Others have commented on your premise that "reason is the supreme value". I think I take a similar position to others in thinking that reason is a means rather than an end. Reason can't exist without something to do the reasoning. Is life just a means to reason? Do we value things, such as animals and plants, that cannot reason?

I see that taking reason as the supreme value means that we value the endless continuity of rational being (M0). We could possibly take, by definition, that morality relates personal action to a universally collective end (M1). But I don't think it follows that we should have a "maximum personal freedom compatible with equal freedom for all" (M2).

Just as we don't know what actions will lead us to success (interplanetary colonisation), we might not know what actions will lead us to failure (extinction). I'm not sure that maximum personal freedom is a good idea without: a) knowledge of what outcomes might result from available actions, and b) values that adequately assess the preferability of each outcome. Nevertheless, freedom of thought is something I think is fairly safe.

I like your development of ideas of how to find a consensus while maximising freedom of thought and expression. I'm not sure if it might be putting the cart before the horse though. Does the system encourage people to be rational, or assume that they already are? Perhaps the first direction humanity should steer is toward being composed of rational individuals.

Though we may ask other questions then: If everyone is rational and working from the same knowledge and understanding, what would anyone disagree on? How much we value certain outcomes or actions? Can we all rationally accept the same values?

I agree that freedom of thought is important and allowing people to dissociate themselves from their ideas by using "pipes" is a good way to have a rational debate about ideas. So, given that this essay is your text or pipe, what would you change in your next draft?

Toby

    Hi Mike,

    What an awesome article, I really enjoyed it and your vision of how we might better build consensus. You obviously put a great deal of thought into how to maximize the fairness of consensus through networked drafting of consensus norms. This is a neglected area and I'm glad to see that someone is filling the void. I would love to see a longer work focused just on that which explains your reasoning and conclusions in greater detail, or better yet, that and the software to put it into practice. You have provided a very valuable idea for the future of humanity, and for that I would be wholly satisfied if you were to win this contest.

    Now, I do have one quibble. It turns out that superluminal signaling has a long history, stretching back to the 19th century in the work of the ultimate modern scientific genius and pioneer, Nikola Tesla. Now, no one believed him then, but there have been numerous replications and other methods which have confirmed everything he discovered. I recommend reading "Transmit radio messages faster than light," by Ishii & Giakos (Microwaves & RF, 1991). This article describes non-transverse (i.e., longitudinal) electromagnetic waves and provides equations which show that they can be superluminal. Not only that, but they produced non-transverse radio waves and measured pulse transit times corresponding to 5.0242 × 10^8 m/s in one experiment, and 4.43 × 10^8 m/s in another.
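
    For scale (a quick illustrative check of my own, assuming c = 2.998 × 10^8 m/s), those reported figures come out to roughly 1.68c and 1.48c:

        # Rough check (not from the article): ratio of the reported pulse
        # velocities to the vacuum speed of light.
        c = 2.998e8                      # speed of light in vacuum, m/s
        reported = [5.0242e8, 4.43e8]    # velocities reported by Ishii & Giakos, m/s
        for v in reported:
            print(f"{v:.4e} m/s  ->  {v / c:.2f} c")   # prints ~1.68 c and ~1.48 c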

    Now, marginal superluminality such as that would not change your first premise much, but other technologies involving tight gravity wave beams, found in experiments carried out in Russia by Eugene Podkletnov, demonstrated pulse velocities in excess of 64c! (64c was the limit of what they could measure, and the signal maxed out their instruments.) These signals were certainly robust--in one experiment they were able to punch a hole in a steel plate with them.

    As you research this topic, you'll find a lot of talk about the distinction between group velocity and phase velocity, along with an argument to the effect that this distinction means that--even though superluminal phase velocities have been detected--no communicative signal can be transmitted superluminally. However, as Ishii and Giakos point out at the beginning of their article, this distinction is only relevant for analog signals. A digital signal can be transmitted at the phase velocity. Of course, all the cold water you will find being thrown around in an attempt to discredit the significance of longitudinal wave superluminality isn't even relevant to what Podkletnov and his team demonstrated years ago.

    Now, in my opinion, even if you were to remove premise one and the paragraphs based on it, the thrust of your article would not change significantly. What you constructed from premise one was a nice idea, but it does not correspond to the true limitations of conceivable communication technologies.

    All in all, I really found your article to be a valuable contribution, and I have rated it accordingly. I wish you all the best here and in everything you do.

    Warmly,

    Aaron

      Sure Valeria, I understand you weren't joking. But I think you spoke with a smile (which I detected in your words) and a smile is healthy and admirable.

      About those cruel beings: The moral theory states how a rational being should act (given). The beings find no fault in the moral theory and accept it as correct (given). Then they act cruelly in contradiction of the theory (given). It follows they act irrationally.

      Further they are very intelligent (given). It follows they are insane. Right? - Mike

      Dear Mike!

      I can accept (truly :) your pure logical conclusions if you wish. Right! However, I'm a woman, so sometimes I don't use pure logic to draw conclusions :) Unfortunately, there are, or may be, very intelligent beings who disregard the moral (theory) and act insanely.

      Bye - Valeria

      You're welcome, Toby; thanks in return for yours. Summary: A) answering about the feedback I seek; B) defending premise P2, supreme value on reason, and explaining how to attack it; C) defending principle M2, maximum of personal freedom; D) defending rational discourse as the horse to pull the cart; E) sampling what can and cannot be reasonably agreed; and F) planning my next essay draft.

      A. Robert says I "cover too much ground" to get my points across in the space available (Apr 30). I agree. That's a formal fault. If I could ask for more, I'd ask someone (despite the difficulty) to identify a content fault, i.e. one that invalidates the thesis, such as a principle that's unsupported in theory, or a practice that's infeasible. Or give the thesis a good denting in the attempt. Or reveal something new and interesting that's hidden to me.

      But these are tall orders, given how the thesis is overcompressed.

      B. No (to answer), I don't mean to imply in premise P2 (reason as a supreme value) that life itself has a purpose in reason.

      Yes, I agree we may value something despite it being incapable of reason. In P2, I don't mean to imply that the value on reason is exclusive of other values; we'll still have a great number of other values. The theorist might even try to deduce from that great number the supreme value of reason, as reason recognizes the value of things. Not knowing the value of something, we're in greater danger of losing it, or of failing to create it in the first place (artificial things). See also my answer (E) to Mark's May 3 post.

      For any who doubt the strength of premise P2 and wish to attack it: imagine that reason is lost from the universe leaving us behind mutatis mutandis. Now explain how we'd get along and ultimately recover unharmed. Then I'll agree that reason isn't supreme after all.

      Or identify some other value V whose loss from the universe implies the loss of reason itself. Then I'll agree that V is supreme above reason.

      Or identify some other value W whose loss from the universe we could not amend even while reason remained with us. Then I'll agree that W is co-supreme with reason.

      C. You say, "Just as we don't know what actions will lead us to success (interplanetary [should be interstellar] colonisation), we might not know what actions will lead us to failure (extinction)." Here you imply a symmetry that would neutralize the utility of a maximized personal freedom (M2). But I believe that symmetry is already broken by M2. Consider: "if a given action does not reduce anyone's freedom to act, then it can hardly reduce the likelihood of eventual success" (p. 2). By the same token, it can hardly increase the likelihood of eventual failure. A maximum of personal freedom "compatible with equal freedoms for all" is more likely to avoid extinction than to cause it. This is what breaks the symmetry you imply. So M2 still holds (by prudence) as a means to M0.

      D. I think the horse (to adapt your metaphor) is already competent to pull. See figure F6. The introduction of guideways (attachment of harness) would enable the rational discourse of the public sphere on the left (strength of horse) to pull the decision systems on the right (cart). Currently we see the horse off chewing weeds instead of pulling the cart, or otherwise demonstrating its strength. But where you attribute this to the beast's incompetence, I attribute it to its being detached from the cart.

      E. "If everyone is rational and working from the same knowledge and understanding, what would anyone disagree on?" I think the best answer (if I understand) is the obvious one: we might reasonably disagree about those things it would be reasonable to disagree about, such as favourite flavours of ice cream (trivial example), or certain aspects of the future (less trivial perhaps) that make no sense to agree about. But there's at least one non-trivial aspect of the future (I argue) that we cannot reasonably disagree about, which is also a timeless constant, and therefore a destination to steer for. This is M0, from which I deduce a theory and means of future steering (aka morality).

      F. In my next draft (which I'm planning now), I'll try to fix the formal fault that Robert has identified by giving the text a lot more room to breathe, and letting it answer itself the questions you and others pose. I'm grateful to you all on this account because I've generally no access to critical readers.

      Mike

      Thanks in return, Jeff. I posted a review of your essay yesterday. I'll be rating it (along with the others on my review list) some time between now and May 30.

      Awaiting your answer, - Mike

      Erratum: A hole was punched in a concrete block, not a steel plate. A steel plate was also mentioned in the context of the same set of experiments, but it was not punctured, only dented.

      Hi Aaron, It needs a longer text, I agree. I plan to start writing one shortly. There's software already (Votorola), but it's only a prototype with wires sticking out.

      Thanks for sharing these superluminal findings (new to me). I guess they aren't generally recognized yet, so my premise can still appear to be secure. I'm sure it'll eventually fail regardless (whatever doesn't?) and with it my moral/steering theory. But maybe we'll have found other destinations to steer toward by then, maybe with the help of your foreknowledge machines.

      Warmly, with best wishes in return, - Mike

      Ah, but a woman reached the same conclusion long ago (see below). We're almost there; please follow the logic a few more steps: A naturally intelligent race as a whole is unlikely to go insane. Therefore we probably created these poor, demented creatures ourselves (as you suggested earlier). So the cruelty began with us. We put them through life-and-death experiments in the lab, tormented them with unnecessary suffering, and ultimately made them insane.

      Clearly we shouldn't do that. It's obviously wrong. We should restrict ourselves to creating our intelligent beings in the natural, old-fashioned way (boy meets girl, etc.). That's what the cold logic tells us, anyway. But it's also what the Romantic writer Mary Shelley told us 200 years ago, in Frankenstein. - Mike

      Dear Mike!

      Since you mentioned on my essay page that your possible misunderstanding of me may arise from language differences, I'm using here a reference for the meanings of some relevant English words (WordWeb7, http://wordweb.info/). You can download and use it for free, and it can be a great help even for a native English speaker. I'll mark it with W7 in the text. And I apologize again for my longer comment here, but I feel we've hit on something more important than shrinking the thoughts into short statements and answers that aren't completely understood.

      That is to be expected, as you write, "...A naturally intelligent race as a whole is unlikely to go insane..", precisely because a naturally and unconditionally structured living natural system or organism acts for its whole, balanced self-sustenance even while being unconscious of it (as I stated earlier in my longer philosophical post to you above). However, what you presume in this cited statement requires conscious awareness and intelligence (W7: 1. The ability to comprehend, to understand and profit from experience) at many levels of the arrangements of nature. I mean that 'nature' (in both the psychical and physical manner, and in every meaning by W7) interweaves us at many levels, both individually and as a socially healthy (optimally ordered) or disordered working organism.

      I only wish to point out that both the applying of moral laws (W7: adjective: 1. Concerned with principles of right and wrong or conforming to standards of behaviour and character based on those principles. 2. Psychological rather than physical or tangible in effect.) requires intelligence, and that altering, even with positive intent, or encroaching on the naturally unconscious balance of an organism also necessitates intelligence and knowledge.

      Unfortunately, being moral or not would mean this: intelligent ones may act either wrongly or rightly, conforming to or refusing standards of behaviour - both are allowed! And unfortunately too, the impact is mainly psychical (W7: 1. Affecting or influenced by the human mind. 2. Outside the sphere of - presently mutually accepted - physical science), and then draws physical or tangible effects and consequences.

      Although the effects and consequences may be irrational, insane, etc., and foreseeable by intelligent ones, unfortunately the bigger problem is if those in charge are the ones who act on the wrong side, rejecting morals and doing black magic/science using very cold logic, with the basic aim of getting supremacy over Nature and their own nature.

      My personal opinion is this: acting morally depends much more on conscious intent (seeing, re-learning or rearranging our knowledge and steering our race and society as a whole, healthy and balanced organism) than on intelligence and irrationality. Basically this was the message of my essay and of our conversation.

      You are right in this: an adequately intelligent rational being (added by me), with a positively charged conscious intent to be moral, is able to apprehend that it is beyond reason to overcome the naturally given supremacy of nature, from which all knowledge arises! It is beyond reason to develop such technologies to govern us by any kind of sophisticated artificial intelligence or computers, into which some may put them with an eventual goal of destroying the whole natural system into a virtually natural environment (heaven)! This is not only irrational and driven by insane minds, but seems impossible! Because Nature, involving our nature and encompassing us as a larger whole, can resist, since Nature unconditionally and unconsciously attempts to keep balance, whether or not we recognize and comprehend how it is done. However, we can understand it, and we can consciously resist going insane.

      I'm willing to talk with you further if you wish, either here or at my given email.

      Bye - Valeria

      Thanks Valeria, this is helpful. It looks like we're speaking of different things, especially in regard to morality. You speak of two different conceptions of morality, and I speak of a third. First you speak of morality as broadly defined in dictionaries. With such a broad definition, I agree that a wrong action (morally wrong) needn't always be an irrational action. Okay.

      Second, you speak of your own conception of morality, as you might define it in your own moral theory. Here again, a wrong action needn't always be an irrational action. Okay.

      But for my part, I speak of morality as defined in my essay. I present a theory that (not unlike Kant's) binds morality and reason together in a context of action, such that right = rational, and wrong = irrational. This is a different conception of morality. Maybe you accept it, or maybe you reject it; but the question is, Would those cruel, hyper-intelligent beings (the ones you spoke of) accept it?

      Yes, suppose they accept my moral theory. Then (it follows) their immoral actions are irrational. Further, in being both hyper-intelligent and hyper-wrong, they are probably insane.

      Or no, suppose they reject it. Then have they a reason for rejecting it?

      No, suppose they've no reason; they just reject it and, turning their backs (hmmmf!), refuse to talk about it any more. Here again their actions are irrational (and we suspect insane).

      Or yes, suppose they've a reason for rejecting it. What might that reason be?

      You see, I'm just asking you to find a fault in the moral theory. If there's no fault, then hyper-rational/immoral beings are impossible. - Mike

      Thank you for writing this beautiful essay, Michael. The accompanying diagrams are similarly beautiful and very helpful in making your thoughts clear. I find the recombinant text and guideway system appealing. My main question is this: if rational discourse is valued, how does this system ensure that rational discourse is maintained (as opposed to attractive rhetoric, bribes, threats, etc.)? Are there additional systems that would need to be put in place to maintain this?

      Thanks for your entry!

      Jeff

      Dear Mike!

      I understand you, and I can accept your moral principles and theory! Albeit I hold that being moral, i.e. the acceptance of any definition, description, or law about 'morals', mainly depends on one's conscious intent and decision.

      You ask: "Would those cruel, hyper-intelligent beings (the ones you spoke of) accept it?" I don't know!

      Please understand: there is no fault in either of the moral theories themselves!

      But you are wrong in the conclusion "If there's no fault, then hyper-rational/immoral beings are impossible"! That is not so! Whether those cruel, hyper-intelligent beings accept morals or not doesn't depend on how a moral theory is defined. It only depends on whether they want to accept it or not!

      This is not a definition of 'my moral theory' - this is unfortunately a fact! Okay?

      There may be one driven insane precisely because he doesn't want to accept what he knows. He is propelled toward doing wrong = irrational (as you define it) things, even if he is intelligent enough to know very well that what he does is irrational, and insane.

      What is his reason for doing this? The basic antimony inside himself. Simply, he doesn't want to know or accept what he knows! This is much more a psychological disorder than something pure logic can settle about being moral.

      Most of us can accept and wish for moral laws, however defined, and can act using those laws! However, as I warned and mentioned to you at some length, the biggest problem is if there are only a few, or even only one, but highly in charge, doing irrational things, rejecting morals however defined, yet affecting the lives of most of us.

      Bye - Valeria

      Mike - I will use another word, antagonism (instead of antimony, although the latter may have a hidden meaning) - bye