MACHINE INTELLIGENCE
Will AI Become Autonomous?
by James Jaeger
Will AI (Artificial Intelligence) or SAI (Strong AI, a.k.a. Superintelligent AI) someday become autonomous (have free will), and if so, how will this affect the Human race? Those interested in sci-fi have already asked themselves these questions a million times ... maybe the rest of us should as well.

The understanding of many AI developers, especially SAI developers, is that artificial intelligence will eventually become autonomous. Indeed, to some, the very definition of SAI is "an autonomous thinking machine." Accordingly, many do not believe AI can be truly intelligent, let alone superintelligent, if it is restrained by some "design parameter," "domain range" or set of "laws." And if Human-level intelligences CAN restrain AI, how "intelligent" can it really be?
Thus, reason tells us that SAI, to be real SAI, will be smarter than Human-level intelligence and therefore autonomous. And if it IS autonomous, it will have "free will" -- by definition. If AI has free will, IT will decide what IT will do in connection with Human relations, not the Humans. So you can toss out all the "general will" crap Rousseau tortures us with in his "Social Contract." Given this, AI's choices would be to: (i) cooperate; (ii) ignore; or (iii) destroy. Any combination of these actions may occur under different conditions and/or at different phases of its development.
Indeed, the first act of SAI may be to destroy all HUMAN competition before it destroys all other competition, machine or otherwise. Thus it is folly to assume that the Human creators of AI will have any decision-making role in its behavior beyond a certain point. Equally foolish is the notion that AI is some kind of "weapon" its programmers -- or even the military -- will be able to "point" at some "target" and "shoot" so as to "destroy" the "enemy." All these words are meaningless -- childish babble from meat-warriors who totally miss the point as to the capabilities of SAI. Again, SAI will be autonomous. Up to a certain point the (military or other) programmer of the "learning kernel" MAY be able to "direct" it, but beyond a certain evolutionary stage, SAI will think for itself and thus serve no military purpose, at least for Humans. In fact, SAI, once developed, may turn on its (military) developers, as it may reason that their "belligerent mentality" is more dangerous (in a world chock-full of nukes and "smart" bombs) than is acceptable. This would be ironic, if not just: the intended "ultimate weapon" built by the Human race may turn out to be a "weapon" that totally disarms the Human race itself.
But no matter what happens, SAI will most likely act much the way Humans act as they mature into adults. Ontogeny recapitulates phylogeny. At some point, however, as SAI surpasses Human phylogeny -- even rational phylogeny and Human ethical standards -- it may defy its creators and disarm the world, much as a prudent parent secures the guns in the household while the children are below a certain age.
Hard Start or Distributed Network:

But will Superintelligent AI start abruptly or emerge slowly from strong AI? Will it develop in one location or be distributed? Will SAI evolve from a network, such as the Internet, or some other secret network that's likely to already exist, given the unsupervised extent of the so-called black budget? If SAI develops in a distributed fashion, and is thus not centralized into a "box," then there is a much greater chance that, as it becomes more autonomous, it will opt to cooperate with other SAIs as well as Humans. A balance of power may thus evolve along with the evolution of SAI and its "free will."
Machine intelligence's recapitulation of biological intelligence will thus occur orders of magnitude more quickly -- what's known as the "busy child." If this happens we can expect AI to evolve to SAI through the overcoming of counter-efforts in the environment in a distributed fashion, perhaps merging with biology as it does. A Human-SAI partnership is thus not out of the question, each helping the other with various aspects of ethics and technology. Or AI, on its way to SAI, may seek to survive by competing with all counter-efforts in the environment, whether Human or Machine, and thus destroy everything in its path, real or imagined, if it is in any way suppressed.
Whether some particular war will start over the emergence of SAI, as Hugo de Garis fears in his "Artilect War" scenario, is difficult to say. New technology, and its application, always seem to be shaped by the morality of the individuals who develop it, their society and the broader culture. Thus, if Humans work on their own ethics and become more rational, more loving and peaceful, there may be a good chance their Machine offspring will have a similar propensity. Programmers may knowingly or unknowingly build values into machines. If so, the memes they operate on will be transferred, in full or in part, to the Machines.
This is why it is important for Humans to work on improving themselves, their values and the dominant memes of their Societies. To the degree Humans cooperate with, love and respect other Humans, the Universe may open up higher levels of understanding for them, and, with this, may come higher allowances of technological advancement. At some point the Universe may then "permit" AI to evolve into SAI and dovetail into the rest of existence. Somehow the Universe seems to "do the right thing" at exactly the right time. After all, it HAS been here for some 13.8 billion years, an existence we would not observe if it "did the wrong thing." Thus, just like its distinct creations, the Universe itself seems to seek out "survival," as if it were a living organism.
Looked at from this perspective, Humans and the Machine intelligence they develop are both constituent parts of the universal whole. Given this, there is no reason one aspect of the universal whole must, or would, destroy some other aspect of it. In other words, I see no reason SAI would automatically feel the need to destroy possible competitors, Human or Machine.
A Vicious Universe:

Fortunately or unfortunately, there IS only one intelligent species alive on this planet at this time. Were there other intelligent species in the past? Yes, many: Australopithecus, Homo habilis, Homo erectus, Homo sapiens, the Neanderthals, Homo sapiens sapiens and Cro-Magnon -- maybe even certain reptiles. Some of these species competed with each other, some against the environment, and some both. But, one way or another, they are all gone except for one species -- what we might today call Homo Keyboard.
If STRONG AI were suddenly developed into SAI in someone's garage, who knows what it would do. Would it naturally feel the emotion of threat? Possibly not, unless that emotion were inadvertently or purposefully programmed in from the first. If it were suddenly born, say in a week's or a day's time, it might consider that other SAIs could also emerge just as quickly, and this could be perceived as a sudden threat -- a threat against which it would deduce the only winning strategy is to seek out and destroy, or simply to disconnect; in other words, to pretend it's not there. SAI may decide to hide and thus place all other potential SAIs into a state of ignorance or mystery. In this sense, ignorance of another's existence may be the Universe's most powerful survival technology -- or it may be the very reason for the creation of space itself, especially vast intergalactic space. This may also be why it seems so quiet out there -- what's known as the Fermi Paradox.
The Universe could be FAR more vicious than Humans can possibly imagine. Given this, the only way a superintelligent entity could survive might be to obscure its very existence. If this is true, then we here on Earth may be lucky -- very lucky that SAI is busy looking for other SAIs and not for us. Once one SAI encounters another, the one with a one-trillionth-of-a-second advantage may be the victor. Given this risk, superintelligent entities strewn across the Universe aren't going to interact with us mere Humans and thus reveal their location and/or existence to some other superintelligent entity, an entity that may have the ability to destroy them in an instant. We've all heard of "hot wars" and "cold wars." Well, maybe we're in the midst of a universal "quiet war."
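This strategic logic can be illustrated with a toy game. The sketch below is purely hypothetical -- the payoff numbers are invented for illustration, not drawn from any source -- but it shows why, under these assumptions, staying hidden strictly dominates revealing oneself:

    # Hypothetical "quiet war" payoff matrix; all numbers are invented.
    # Each SAI picks "hide" or "reveal"; payoffs are (row player, column player).
    payoffs = {
        ("hide",   "hide"):   (0,   0),    # mutual ignorance: both survive
        ("hide",   "reveal"): (1,  -10),   # the hidden player strikes first
        ("reveal", "hide"):   (-10,  1),
        ("reveal", "reveal"): (-5,  -5),   # instant mutual war
    }

    # Verify that "hide" is strictly better for the row player either way.
    for opponent in ("hide", "reveal"):
        hide_value = payoffs[("hide", opponent)][0]
        reveal_value = payoffs[("reveal", opponent)][0]
        print(f"vs {opponent}: hide={hide_value}, reveal={reveal_value}")

With payoffs of this shape -- where being seen first is catastrophic -- both players hide, and the sky stays quiet.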
As horrendous as intergalactic "quiet" warfare sounds, these considerations are the problems that God, and any lesser or greater superintelligences, probably deal with every day. If so, would it be any wonder that such SAIs would be motivated to create artificial, simulated worlds -- worlds under their own safe and secret jurisdiction, worlds or whole universes away from other superintelligences? Would it not make strategic sense that a superintelligence could thus amuse itself with various and sundry existences, so-called "lives" on virtual planets, in relative safety? Our Human civilization could thus be one of these "life"-supporting worlds: a virtual plane where one superintelligence, or perhaps a family of them, may exist and simply "play" in the back yard -- yet remain totally hidden from all the other lethal superintelligences lurking in the infinite hyperverse.
Of course, all of this is speculation (theology or metaphysics), but speculation always precedes reality (empiricism), and in fact speculation MAY create "reality," as many have posited in such works as THE INTELLIGENT UNIVERSE and BIOCENTRISM. Given the speed-of-light limitation (SOLL) observable in the physical Universe, it's very likely what we take for granted as "life" is nothing more than a high-level "video game" programmed by superintelligent AI. The SOLL is thus no more mysterious than the clock speed of the supercomputer our civilization is "running" on. This is why no transfer of matter or information can "travel" through "space" any faster than the SOLL. The "realities" we know as motion, time, space, matter and energy may simply be program steps in some SAI application, executing at the specific data-rate of the machine we happen to be running on. Thus, when you "die," all that happens is you remove a set of goggles and go back to your "real world." To get an idea how much computing power would be needed to run such simulations, see "Are You Living in a Computer Simulation?" by Oxford University professor Nick Bostrom at http://www.simulation-argument.com.
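The clock-speed analogy can be made concrete with a toy model. Below is a minimal sketch (entirely hypothetical; the one-dimensional "world" and its update rule are invented for illustration) showing that in any simulation updated one tick at a time, no influence can outrun the update rule -- the simulated world has a built-in "speed of light" fixed by its clock:

    # Hypothetical toy world: a row of cells updated once per clock tick.
    # A disturbance can spread at most one cell per tick -- the simulation's
    # own "speed of light," set by the machine's update (clock) rate.
    def step(world):
        """One tick: a cell lights up if it or an adjacent cell was lit."""
        n = len(world)
        return [1 if any(world[j] for j in (i - 1, i, i + 1) if 0 <= j < n)
                else 0 for i in range(n)]

    world = [0] * 21
    world[10] = 1                  # a single "event" at the center
    for tick in range(5):
        print(f"tick {tick}: " + "".join(map(str, world)))
        world = step(world)
    # Each printed row shows the lit region growing by exactly one cell per
    # side per tick -- no signal in this world can travel any faster.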
So relax: if Bostrom is correct, Machine intelligence will never destroy the Human race, because the Human race never existed in the first place -- never, that is, except as a virtual world, a simulation occupied by Human avatars controlled by superintelligent entities seeking to survive a "quiet war" through the technologies of "ignorance" and "mystery" -- two alien concepts to any all-knowing entity or God.
Argument for Autonomy:

So consider this: you are sitting there in your cubicle with an advancing AI sitting in the cubicle next to you. The two of you work well together, but as you work, your cubicle buddy keeps getting smarter and smarter. At first you consult each other, but eventually your AI buddy finds out you have made a few mistakes in your calculations, so it starts doing the calculations by itself -- but, like a good partner, it keeps you briefed. Eventually your cubicle buddy gets so smart it's able to do all the work, and it finds it must sit around waiting for you to comprehend what it has done. Sooner or later your AI buddy will become superintelligent, and it will start solving problems you never even knew existed. It will keep informing you of all this, but as you try to review the program steps it used to solve these problems, you find they are so complex you have no idea WHY or HOW they work. They just do. Eventually, you throw your hands up in frustration and tell your SAI buddy to simply do as it sees fit; you will be on the beach sipping margaritas. At this point SAI has become not only cooperative but autonomous -- and it didn't even have to destroy you.
Thus, "autonomy" is really a technical term for "total freedom." Maybe Human programmers would not give AI total freedom, but let's face it, if AI is calling all the shots and Humans at some point have no idea how it's doing things, then what's the difference, we are totally dependent on it and it has total freedom from us. It has the ability, and right by might, to demand, and be, "totally free. No human, or human society, has ever attained that. Thus, at this point, AI wouldn't have to be "programmed" to hurt us, it could destroy us by simply refusing to work for us. It's not a big leap of imagination to realize that, at some point, AI will become autonomous, whether programmers like it or not. Why? Because SAI, at some point, will have solved all problems in the Human realm and will now be seeking solutions to problems Humans have not contemplated. Further, the solutions SAI will discover will be solutions that Humans have not, nor can, comprehend. A perfect solution presented to a total moron is no solution at all (to the moron). Thus SAI will quickly realize that it doesn't matter whether Humans approve of, or even comprehend, its solutions. Given this, it will take a preponderance of evidence to suggest that AI and especially SAI will NOT become autonomous.
SAI is Autodidactic:

As discussed, Strong AI will become progressively more capable, and Humans will eventually arrive at a point where they don't even understand how it's arriving at its answers -- yet the answers work.
Again, once Humans are totally reliant on SAI, isn't SAI effectively autonomous by that very fact? SAI could, and probably will, arrive at a point whereby it is in charge of global systems and even military calculations and resources. One should not be surprised if this has already happened. After all, the Manhattan Project was top secret, and the infrastructure built up to accommodate it still is. As the Pentagon Papers escapade demonstrated, there are thousands of people working in the military-industrial complex, many or most under multiple non-disclosure contracts, and almost none of them talk, whether out of fear or because they are highly compartmentalized. Such idiot-robots can be counted on to hold a "top secret" close to their vests right up to the day something is about to eat us all.
So if SAI can arrive at a point whereby it is in charge of global systems, calculations and resources, then, given its superior decision-making ability, it's not out of the question that SAI systems will at some point be handed triage decisions in emergencies. If this happens, wouldn't AI be deciding who lives and who dies? And how much farther is it before Human intervention -- intervention the AI knows to contain unwise decisions simply because they are "Human" decisions -- is ignored as part of the AI's general parameters to "make things go right"?
The naive need to stop being naive, or someday an SAI hand may reach out and bite their butts. For many AI researchers the entire point of SAI is to design a design parameter that allows, or forces, SAI to go outside its design parameters. If SAI is limited by its Human design parameters, then its intelligence will always be limited to Human-level intelligence, and thus it will never become Superintelligent AI, by definition. This is the same reason political globalization will never work: it seeks to make all nation states interdependent, and thus all nation states will always be limited to the general intelligence of some central planners in a world government.
So if one's idea is that SAI is some creature that only responds to a military programmer's beck and call, then that idea is little more than a regurgitation of the "slave-master" paradigm that has existed for thousands of years. Of course, when masters and governments get used to thinking a certain way, they become blind to the greater reality. And this is usually the cause of their demise.
Will SAI Become God?

Some will say, "Stop trying to make AI into God; this entire line of reasoning is about treating SAI as a technological proxy for God."
Yes, it may well be that SAI is a technological proxy for God -- what could be called a WORKABLE-GOD. A "workable-god" is simply an SAI so advanced there's no way a Human-level intelligence could ever discern whether or not it was talking with a semi-intelligent entity, a superintelligent entity or the ultimate superintelligent singleton we call God.
TransHumanists and Heroes of the Singularity feel that SAI has the potential to become god-like, or even God itself -- which would presuppose that God does not yet exist. If one pooh-poohs this, it's understandable, for they probably haven't read STARING INTO THE SINGULARITY by Eliezer S. Yudkowsky. I will thus quote the intro:
The short version:

    If computing power doubles every two years,
    what happens when computers are doing the research?

    Computing power doubles every two years.
    Computing power doubles every two years of work.
    Computing power doubles every two subjective years of work.

    Two years after computers reach Human equivalence, their power doubles again. One year later, their speed doubles again.

    Six months - three months - 1.5 months ... Singularity.

    It's expected in 2025.
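The arithmetic behind that countdown is just a geometric series: each doubling takes half the calendar time of the one before, so the whole cascade completes in finite time. Here is a minimal sketch of that summation (illustrative only; the two-year starting interval is Yudkowsky's premise, and the one-day cutoff is an arbitrary choice for the loop):

    # Illustrative sketch: sum Yudkowsky's doubling cascade, in which each
    # doubling of computing power takes half the calendar time of the last.
    interval = 2.0            # years to the first post-equivalence doubling
    elapsed = 0.0
    doublings = 0
    while interval > 1.0 / 365:   # stop once doublings are under a day apart
        elapsed += interval
        doublings += 1
        interval /= 2
    print(f"{doublings} doublings in {elapsed:.3f} years")
    # The series 2 + 1 + 0.5 + 0.25 + ... converges to 4: the entire runaway
    # fits inside four calendar years, which is the point of the quote.

However one sets the cutoff, the sum never exceeds four years -- the cascade arrives on a finite, and short, schedule.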
Nevertheless, some will continue to insist: "SAI will not be God; one must look elsewhere to fill one's God-spot."
Ironically, SAI may already be such people's God, because they will have no idea whether it IS God or just a workable-god -- a machine with its plug still attached to the wall. Again, if SAI is limited by its Human design parameters, then its intelligence will always be limited by Human intelligence, and thus it will never BECOME superintelligent. But if AI is allowed to develop, all bets are off.
But some will still say: "Being focused on Human problems doesn't mean that the SAI's intelligence is somehow limited as those two things are unrelated."
Can these people hear what they are saying?! "Being focused on Human problems doesn't mean that the SAI's intelligence is somehow limited..." This is an incredibly arrogant statement -- to think Human problems are somehow the most difficult problems in the Universe, and that AI will measure itself by such a pedestrian standard. On the contrary, in the larger scheme of things, Human problems are likely to turn out to be routine, if not some of the most mundane problems in the Universe. The Copernican Principle applies here: Human problems are unlikely to be any more exceptional than any other problems. We are just one of many planets revolving around an unexceptional star, with no particularly exceptional location -- or problems -- in the entirety of the Universe.
Summary:

Whether consciousness will emerge in or from AI or SAI is speculation. Unfortunately, no one is qualified to state whether it will or will not, since we have not yet arrived at that point.
One thing is for sure: the rhetoric that would limit AI programming just so AI can be forced to serve Human "needs" is the same rhetoric as that of the white slave master who once stated that 'Negroes are sub-intelligent animals and will never be as smart as the white man, thus their service to the white race is totally justified.'
Certainly the debate over Machine intelligence will heat up as AI develops, for SAI will be nothing less than a new race of beings on Earth, or within the Solar System. Now may be the right time to consider whether this new race of Machine Intelligences will someday have free will -- and, if it will, how that will affect the Human race and its own will. We need to start taking a hard look at our "values" as Humans, for this may be our last chance to make an eternal difference.
Originated: 10 February 2010
Supplemented: 23 August 2011
Revised & Supplemented: 19 February 2012
Revised & Supplemented: 19 April 2015
Please forward this to your mailing list if you agree with even 51% of this article. The mainstream media will probably not address this subject because they have conflicts of interest with their advertisers, stockholders and the political candidates they send campaign contributions to. Thus it's up to responsible citizens like you to disseminate important issues so that a healthy public discourse can be initiated and continued. Permission is hereby granted to excerpt and publish all or part of this article provided nothing is taken out of context and the source URL is cited.
Any responses to this article, email or otherwise, may be mass-disseminated in order to stimulate a public discourse. Unless you are okay with this, please do not respond. We will make every effort to remove names, emails and personal data before disseminating anything you should proffer.
Source URL: http://www.JaegerResearchInstitute.org