Author En Passant replied on Apr. 24, 2015 @ 01:24 GMT

Gary,

I don't want to insult Sujatha Jagannathan in case she is not an automaton.

Your perception of the "fluency" of her language is right. Her talk seems to me to be "canned" (and I mean that in more ways than one).

But if you are right, then its creators are cheating. They intersperse regular (machine) dialogue with human intervention whenever the situation gets too complex for the AI (and purposely introduce human errors).

Have no fear of AI. Below, I copy some text that I saw on the Internet just now. Strong AI is simply preposterous.

It would mean that we can lift ourselves by our own bootstraps.

En

May 15, 2013 | Luke Muehlhauser | Analysis

Strong AI appears to be the topic of the week. Kevin Drum at Mother Jones thinks AIs will be as smart as humans by 2040. Karl Smith at Forbes and "M.S." at The Economist seem to roughly concur with Drum on this timeline. Moshe Vardi, the editor-in-chief of the world's most-read computer science magazine, predicts that "by 2045 machines will be able to do if not any work that humans can do, then a very significant fraction of the work that humans can do."

But predicting AI is more difficult than many people think.

To explore these difficulties, let's start with a 2009 bloggingheads.tv conversation between MIRI researcher Eliezer Yudkowsky and MIT computer scientist Scott Aaronson, author of the excellent Quantum Computing Since Democritus. Early in that dialogue, Yudkowsky asked:

It seems pretty obvious to me that at some point in [one to ten decades] we're going to build an AI smart enough to improve itself, and [it will] "foom" upward in intelligence, and by the time it exhausts available avenues for improvement it will be a "superintelligence" [relative] to us. Do you feel this is obvious?

Aaronson replied:

The idea that we could build computers that are smarter than us... and that those computers could build still smarter computers... until we reach the physical limits of what kind of intelligence is possible... that we could build things that are to us as we are to ants -- all of this is compatible with the laws of physics... and I can't find a reason of principle that it couldn't eventually come to pass...

The main thing we disagree about is the time scale... a few thousand years [before AI] seems more reasonable to me.

Those two estimates -- several decades vs. "a few thousand years" -- have wildly different policy implications.

If there's a good chance that AI will replace humans at the steering wheel of history in the next several decades, then we'd better put our gloves on and get to work making sure that this event has a positive rather than negative impact. But if we can be pretty confident that AI is thousands of years away, then we needn't worry about AI for now, and we should focus on other global priorities. Thus it appears that "When will AI be created?" is a question with high value of information for our species.

Let's take a moment to review the forecasting work that has been done, and see what conclusions we might draw about when AI will likely be created.

19 days later

Thanks, Gary. I just posted a reply to your well-reasoned comment.

All the best,

Akinbo

    Sorry, in addition: seeing your interest in the wave equation, what is your assessment/comment on the correctness of Thomas Erwin Phipps?

    a month later

    Hello Gary,

    Would you mind if I tapped your brain a little? I have a draft of a paper (attached) and have posted the abstract below.

    Regards and thanks,

    Akinbo

    *You may reply to me here, on my essay blog, or better still at: taojo@hotmail.com

    =========================================================================

    Abstract: Absurdities arising from Einstein's velocity-addition law have been discussed since the theory's formulation. Most of these have been dismissed as philosophical arguments, and supporters of special relativity are of the opinion that, as long as the mathematics is not faulted, they are ready to live with the paradoxes. Here we demonstrate a mathematical contradiction internal to the theory itself: we show that, when the Einstein velocity-addition law is applied to light, there is no way to reconcile it mathematically with the second postulate of the theory, which may have a fatal consequence.

    ==========================================================================

    Attachment #1: 2__Shorter_version__Application_of_the_velocity-addition_law_to_light_itself.pdf
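
    For reference, below is a minimal Python sketch of the standard textbook velocity-addition law the abstract refers to, w = (u + v) / (1 + uv/c^2), evaluated at a few inputs including v = c. The function name and test values are illustrative only and are not taken from the attached paper; the sketch takes no position on the claimed contradiction.

    # Standard special-relativistic composition of collinear velocities:
    #   w = (u + v) / (1 + u*v / c**2)
    # Illustrative only; this is not the derivation in the attached paper.

    C = 299_792_458.0  # speed of light in m/s

    def add_velocities(u: float, v: float, c: float = C) -> float:
        """Return the relativistic sum of two collinear velocities u and v."""
        return (u + v) / (1.0 + u * v / c**2)

    if __name__ == "__main__":
        # Two sub-light speeds compose to a speed still below c.
        print(add_velocities(0.5 * C, 0.5 * C) / C)  # 0.8
        # Composing any speed with c gives back c (1.0 up to rounding).
        print(add_velocities(0.5 * C, C) / C)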
