The Dawn of Symbolic Life, The Future of Human Evolution! — Written 07/08/12

For Your Entertainment (FYE)!  ;+)

http://www.symboliclife.net/

http://www.symboliclife.net/introduction

http://www.symboliclife.net/our_fatal_flaw

http://www.symboliclife.net/the_transition_to_a_new_life_form

http://www.symboliclife.net/masters_slaves_and_robots

http://www.symboliclife.net/contact_us

http://www.symboliclife.net/references_and_links

http://en.wikipedia.org/wiki/Nick_Bostrom

http://www.nickbostrom.com/

http://www.fhi.ox.ac.uk/our_staff/research/nick_bostrom

http://www.nickbostrom.com/fut/evolution.html

http://humanityplus.org/?gclid=COe9gsKBh7ECFQGu4godxB2ROg

http://singularity.org/ourmission/

http://singularity.org/files/strategicplan2011.pdf

http://singularity.org/what-is-the-singularity/

http://singularity.org/why-work-toward-the-singularity/

http://www.humansfuture.org/

http://news.nationalgeographic.com/news/2009/11/091124-origin-of-species-150-darwin-human-evolution.html

http://www.msnbc.msn.com/id/7103668/ns/technology_and_science-science/t/human-evolution-crossroads/

http://en.wikipedia.org/wiki/Artificial_intelligence

http://en.wikipedia.org/wiki/Lisp_programming_language

http://en.wikipedia.org/wiki/Prolog

http://en.wikipedia.org/wiki/Expert_system

http://en.wikipedia.org/wiki/Knowledge_engineering

http://en.wikipedia.org/wiki/Knowledge_engineer

http://www.wtec.org/loyola/kb/c1_s1.htm

http://en.wikipedia.org/wiki/KEE

http://www.ibm.com/developerworks/expert/try.html

http://www.exsys.com/?gclid=COee2a6Mh7ECFedV4godu35N1Q

http://www.arc.sci.eg/NARIMS_upload/CLAESFILES/3744.pdf

http://www.claes.sci.eg/Department.aspx?DepId=272&lang=en

http://www.expertise2go.com/e2g3g/tutorials/knoweng/


From the Wikipedia article on artificial intelligence (linked above):

“Artificial intelligence, by claiming to be able to recreate the capabilities of the human mind, is both a challenge and an inspiration for philosophy. Are there limits to how intelligent machines can be? Is there an essential difference between human intelligence and artificial intelligence? Can a machine have a mind and consciousness? A few of the most influential answers to these questions are given below.

Turing’s “polite convention”: We need not decide if a machine can “think”; we need only decide if a machine can act as intelligently as a human being. This approach to the philosophical problems associated with artificial intelligence forms the basis of the Turing test.

The Dartmouth proposal: “Every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it.” This conjecture was printed in the proposal for the Dartmouth Conference of 1956, and represents the position of most working AI researchers.

Newell and Simon’s physical symbol system hypothesis: “A physical symbol system has the necessary and sufficient means of general intelligent action.” Newell and Simon argue that intelligence consists of formal operations on symbols. Hubert Dreyfus argued that, on the contrary, human expertise depends on unconscious instinct rather than conscious symbol manipulation and on having a “feel” for the situation rather than explicit symbolic knowledge. (See Dreyfus’ critique of AI.)
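The symbol-system view can be made concrete with a toy example. On this hypothesis, intelligent action is nothing but formal operations on symbols, like the forward-chaining rule engine sketched below (the rules and facts are invented for illustration; this is not code from any real system):

```python
# Toy forward-chaining rule engine: "intelligent action" as purely
# formal operations on symbols (Newell & Simon's hypothesis).
# Rules and facts are illustrative assumptions, not from a real system.

RULES = [
    # (premises, conclusion): if all premises hold, assert the conclusion
    ({"has_feathers", "lays_eggs"}, "is_bird"),
    ({"is_bird", "cannot_fly"}, "is_penguin"),
]

def forward_chain(facts):
    """Repeatedly apply rules until no new symbol can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_feathers", "lays_eggs", "cannot_fly"}))
```

This is exactly the style of system behind the Lisp, Prolog, expert-system, and knowledge-engineering links above; Dreyfus’ objection is that human expertise does not reduce to operations of this kind.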

Gödel’s incompleteness theorem: A formal system (such as a computer program) cannot prove all true statements. Roger Penrose is among those who claim that Gödel’s theorem limits what machines can do. (See The Emperor’s New Mind.)
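Stated a bit more carefully (this is the standard textbook form of the first incompleteness theorem, added here for precision; it is not part of the quoted excerpt): for any consistent, effectively axiomatized formal system $F$ strong enough to express elementary arithmetic, there is a sentence $G_F$ such that

```latex
F \nvdash G_F
\quad\text{and}\quad
F \nvdash \lnot G_F ,
```

yet $G_F$ is true in the standard model of arithmetic. The consistency and expressiveness conditions matter: the theorem does not say that every formal system fails to prove some true statement.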

Searle’s strong AI hypothesis: “The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds.” John Searle counters this assertion with his Chinese room argument, which asks us to look inside the computer and try to find where the “mind” might be.

The artificial brain argument: The brain can be simulated. Hans Moravec, Ray Kurzweil and others have argued that it is technologically feasible to copy the brain directly into hardware and software, and that such a simulation will be essentially identical to the original.”

 

“Artificial Intelligence is a common topic in both science fiction and projections about the future of technology and society. The existence of an artificial intelligence that rivals human intelligence raises difficult ethical issues, and the potential power of the technology inspires both hopes and fears.

In fiction, Artificial Intelligence has appeared fulfilling many roles, including a servant (R2-D2 in Star Wars), a law enforcer (K.I.T.T. in Knight Rider), a comrade (Lt. Commander Data in Star Trek: The Next Generation), a conqueror/overlord (The Matrix), a dictator (With Folded Hands), a benevolent provider/de facto ruler (The Culture), an assassin (Terminator), a sentient race (Battlestar Galactica/Transformers/Mass Effect), an extension to human abilities (Ghost in the Shell) and the savior of the human race (R. Daneel Olivaw in Isaac Asimov’s Robot series).

Mary Shelley’s Frankenstein considers a key issue in the ethics of artificial intelligence: if a machine can be created that has intelligence, could it also feel? If it can feel, does it have the same rights as a human? The idea also appears in modern science fiction, including the films I, Robot, Blade Runner and A.I.: Artificial Intelligence, in which humanoid machines have the ability to feel human emotions. This issue, now known as “robot rights”, is currently being considered by, for example, California’s Institute for the Future, although many critics believe that the discussion is premature. The subject is profoundly discussed in the 2010 documentary film Plug & Pray.

Martin Ford, author of The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future, and others argue that specialized artificial intelligence applications, robotics and other forms of automation will ultimately result in significant unemployment as machines begin to match and exceed the capability of workers to perform most routine and repetitive jobs. Ford predicts that many knowledge-based occupations—and in particular entry level jobs—will be increasingly susceptible to automation via expert systems, machine learning and other AI-enhanced applications. AI-based applications may also be used to amplify the capabilities of low-wage offshore workers, making it more feasible to outsource knowledge work.

Joseph Weizenbaum wrote that AI applications cannot, by definition, successfully simulate genuine human empathy and that the use of AI technology in fields such as customer service or psychotherapy was deeply misguided. Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum these points suggest that AI research devalues human life.

Many futurists believe that artificial intelligence will ultimately transcend the limits of progress. Ray Kurzweil has used Moore’s law (which describes the relentless exponential improvement in digital technology) to calculate that desktop computers will have the same processing power as human brains by the year 2029. He also predicts that by 2045 artificial intelligence will reach a point where it is able to improve itself at a rate that far exceeds anything conceivable in the past, a scenario that science fiction writer Vernor Vinge named the “singularity”.

Robot designer Hans Moravec, cyberneticist Kevin Warwick and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, which has roots in Aldous Huxley and Robert Ettinger, has been illustrated in fiction as well, for example in the manga Ghost in the Shell and the science-fiction series Dune.

Political scientist Charles T. Rubin believes that AI can be neither designed nor guaranteed to be friendly. He argues that “any sufficiently advanced benevolence may be indistinguishable from malevolence.” Humans should not assume machines or robots would treat us favorably, because there is no a priori reason to believe that they would be sympathetic to our system of morality, which has evolved along with our particular biology (which AIs would not share).

Edward Fredkin argues that “artificial intelligence is the next stage in evolution”, an idea first proposed in Samuel Butler’s “Darwin among the Machines” (1863), and expanded upon by George Dyson in his book of the same name in 1998.

Pamela McCorduck writes that all these scenarios are expressions of the ancient human desire to, as she calls it, “forge the gods”.”
