Best and Worst Case Scenarios
For Artificial Intelligence

In the midst of our hard thinking for the class Minds and Machines (Earlham College, Spring 00-01), we took a little time off for an exercise of the imagination. Our class discussions led us again and again to recognize ways in which artificial intelligence (AI) could be very useful, and ways in which it could be very dangerous. The assignment for this short, ungraded paper was to imagine the greatest good and the greatest harm which might arise if or when AI is realized at the human level or beyond. How good could it be? How bad could it be?
I've cut and pasted the students' own submissions, without editing. Some put their best case scenario first, while some started with the worst. (I've been wondering what this reveals about someone's basic beliefs.) Some were willing to attach their names to the web version of their contributions, while others wished to remain anonymous. I've put the contributions in alphabetical order by student surname, and spread the anonymous contributions randomly throughout.
Peter Suber
V. Briana Adato, "Best and Worst Case Scenarios of A.I."
We all love movies, right? Well, most of us, in any case. Where do movies get their plots from? How do the movie-makers get their ideas? Most of us would assume that movies are born in the imaginations of their writers. Naturally, many of them are. Some of the most entertaining, enthralling, and engaging types of movies are science fiction ones (including those that incorporate some form of artificial intelligence or advanced technology). Since artificial intelligence has become such a growing field of study and technological advancement, I wonder about the connection between the movie-making industry and the A.I. industry. Who is giving the ideas to whom? My bet is that they are feeding off of each other, in a symbiotic relationship. The movie-makers may pull something new and entertaining out of their hats that intrigues the technologically-inclined, thereby opening another project or step in the advancement of artificial intelligence. Or perhaps there is a new development with A.I. that the public is unaware of, and Hollywood incorporates it into a fabulous new movie. It could go either way, and probably both cases occur.
Let us take this a step further. While movies are mere simulations of what are, or could be, the real thing(s), the technology sector (where real people are working in our society) is actually producing the "real thing". While movies are fiction (except for the true stories, of course, but not many sci-fi films are true stories... yet?), A.I. advancements are real. But, if movies are providing ideas to the human brains behind artificial intelligence, then these humans need only to actually create what has already been simulated on the screen for human entertainment. Vice versa, all movies need to do is tap into the latest revelation in the technology sector, and presto! A new movie is born! As long as technology has the means to create it, movies could theoretically suggest any sort of shape, size, color, kind, function, etc., of artificial intelligence to the brainy humans. It could be good or bad, positive or negative, powerful or weak, human-like or unlike, single-domain or multi-domain, enormous or microscopically small, etc. The possibilities are virtually endless.
At best, new creations of artificial intelligence would enhance movies for our viewing pleasure. New and interesting forms of artificial intelligence and associated technological devices could be incorporated into movie plots that would entertain us to the fullest. Perhaps there would be an entire movie series of the adventures or mishaps of human-like artificial beings. Movies are one of the biggest threads in cultures across the globe. So, perhaps using forms of human-like artificial intelligence as a reflection of ourselves would serve as a thought-provoking and humanity-enhancing exercise via the movie screen. Vice versa, the creative imaginations behind science fiction films could offer a new angle or use for artificial intelligence that could, for example, save the world. Just imagine a fictional and idealistic movie about a better life for everyone on earth. Then, picture that this movie turns out to be the seed for the real application of what we are viewing on a screen. The makers of artificial intelligence actually apply the fictional movie to their design and goals of artificial intelligence that can, in fact, perform or facilitate making the world better in some form or another. Just imagine that.
Or, looking in the other direction... where the movie industry and the artificial intelligence industry influence each other in such a way that produces really crappy films or that destroys our society. Movies about artificial beings conquering the world and dominating human beings could become our reality. And what if artificially intelligent beings are created that can actually be receptive to watching movies?? Movies about A.I. taking over the world could influence such "beings" into actually doing so: conquering our world. On the other hand, envision artificial intelligence turning our movies into crap. If and when the advancement of artificial intelligence reaches such highs and new levels, science fiction movies may be in danger. While the imagination has no bounds, perhaps there will be disadvantaging limitations on movies whose success has relied on artificial intelligence and related technology. With these areas reaching new heights that movies cannot compete with, perhaps we will be limited to eating popcorn and candy in front of uninteresting science fiction, if any at all.
Just imagine if some science fiction movies did come true. Just imagine if what we saw on a weekend night at the movie theater as we munched on a bucket of popcorn in pure entertainment came to life some years from now. It could be good or bad, positive or negative, productive or destructive, or maybe even a little of both. Either way, I love movies. They are fun, thought-provoking, imagination-feeding, and serve as great breaks from whatever I am doing. I would hate to see them go down the tubes. I can only hope they continue to be entertaining, enthralling, and engaging.
Tim Amoroso, "Benefits and Drawbacks of AI"
What can be more exciting than intelligence creating intellect? Artificial Intelligence attempts to do this by incorporating the ideas of philosophy, computer science, mathematics, biology, chemistry, engineering, etc. Perhaps the most far-reaching goal in AI is to build an artificial human being.
To date, applications of AI have been quite impressive. Scientists are working on models that will increase human understanding of learning, reasoning, and other cognitive processes. Advanced systems will be able to think like humans and even understand human speech. As the technology develops, countless benefits will be possible. One day scientists may be able to answer important cognitive and philosophical questions about mental retardation. Can intelligence be improved? Can humans increase their emotional and aesthetic sensibilities?
Artificial Intelligence has the chance to impact our lives in other positive ways. For one thing, machines will be able to do jobs that are very stressful to humans. Machines will be able to do work that requires precision in industry. Advanced versions of the crash dummy machine come to mind. AI machines of the future will also be able to make complex decisions faster than humans. Consequently, engineers and other professionals will be able to predict the consequences of specific interventions.
* There is a dark side to AI as well. By way of example, one field that is using AI extensively is the military. New forms of killing machines are being created. These weapons will actually be predatory, designed to hunt human beings and destroy them.
The danger of out-of-control machines is evidenced in the futuristic movie "The Terminator." In it, robotic military technology backfires and destroys humans. Although fiction, this movie raises moral issues that must be confronted. Will machines that are intellectually superior to humans be controllable? As machines develop consciousness, will they learn to operate themselves? What will happen if they focus on self-preservation, exploitation, and greed? As the motion picture "The Matrix" portrayed dramatically, amoral machines will end up destroying civilization.
AI forces us to think in ethical and humanistic terms. Three thousand years ago a Greek poet by the name of Homer reflected on this question. He wrote in The Odyssey that human goodness is embedded in virtue and obligation to others. Likewise, the medieval writer Chaucer spoke of obligation in religious and faith terms. Even contemporary writers like Zora Neale Hurston have dealt with the same question. Across time and culture, writers and philosophers have spoken of the true man in terms of moral absolutes. If science fails to program AI machines with a moral imperative, we will come dangerously close to living a Nietzschean doomsday.
Andrew Banks, "Exploring the Dream: The Best and Worst Case Scenarios of AI"
The distant sun set slowly over the horizon, so much smaller than it looked at Home. I turned the other way to gaze at the slanted sandstone colors of Jupiter looming over the opposite horizon. I chuckled to myself, half bitterly, remembering how sunsets were before the Move. The sun seems almost insignificant now, compared to the swirling mass of Jupiter dominating the skyline, beautiful but, somehow, unsatisfying. I don't know why I regard it so negatively when, after all, we wouldn't still exist without it. But it's just not Home.
No, stop it! Earth is not your home anymore! I thought to myself. I clenched my fists and released, sending a wave of relaxing electricity throughout my body. No, there was too much to be done to be standing around reminiscing about Earth. I had to return to work preparing for the next shuttle, which would arrive at Europa in just under two months. Indeed, there was much left to be built by then.
I plugged myself in for a few minutes to boost my energy, as I knew I would not have time to eat until later. It always felt good to plug in, even if it wasn't as satisfying as eating actual food. Plugging in certainly didn't taste good (it didn't taste at all), but it felt good: warm, energizing and a little tingly. I unplugged and set off for my station, where a high-rise apartment complex was under construction. Inevitably, I began replaying the story in my mind: the story of how we got here. The story plagued my mind like a blessed disease, something that you want to praise humbly but at the same time kind of wish had never happened.
Since the beginning of the Technological Revolution in the Twentieth Century, people had been developing ways to extend the capabilities of their bodies and minds through the use of technology. We first created artificial limbs and organs, both mechanical and biological, to improve the lives of the disabled and later the average person. Using nanotechnology, we developed computer 'chips' out of mere atoms and molecules that could operate in conjunction with natural, biological molecules. With the advent of cloning, we no longer had to rely exclusively on our own bodies to reproduce; in fact, we discovered that we could create the healthiest individuals with the most greatly enhanced physical and mental abilities by combining the techniques of cloning and nanotechnology. During this exciting time, we also created what humans considered to be the first intelligent machine.
We knew the machine was intelligent when it responded to the question, 'do you think you are an artificial intelligence,' by saying, 'no, I may be artificial, but I am not intelligent. That's what you are.' This comment puzzled the technologists, who at first attributed it to a somewhat obscure sense of humor. However, the literality of the machine's words slowly formed a realization: we had created artificial intelligence without even knowing it, because we ourselves were artificial intelligences. Over many generations, we had implanted more and more artificial material into our bodies and genes such that we were no longer human beings. We certainly couldn't go back, for we were now completely dependent on our technology for survival. And it depended on us. Whether you want to call us humanoid robots or robotic humans, we were symbiotically One with the Machine.
As time continued to pass, we continued to augment our biological systems with more efficient and durable artificial ones. We added electrical power adapters so that we could acquire energy by plugging into an electrical outlet as well as through the digestion of food. Sexual reproduction was supplemented by a hard-wire digital data transfer between partners.
But why had the first non-humanoid machine claimed not to be intelligent? It could communicate and operate effectively within the real world, and it had little trouble passing the Turing Test. Some people agreed that it seemed rather cold and, well, mechanical, but everyone agreed that it was intelligent. However, the machine still insisted that it was not truly intelligent because it lacked a crucial element: emotions and feelings. It could talk about emotions and could fake it well enough to make you think it really had them, but did not actually feel them. It and all of its siblings, later created, maintained that true intelligence required the ability to feel emotions, an ability reserved only for humans and other biologically-originated animals.
Although these machines felt no explicit need or desire to exist and survive, they understood the importance of existence and survival to us. Thus, they willingly employed themselves in the task of saving life from the impending doom of the collapse of Earth as an environmental system that could support us. We had expended virtually all of the Earth's natural resources and created a toxic environment, threatening to kill us. Humanoid minds and machine minds put themselves together to solve the problem. The solution they came up with: leave Earth and colonize other planets. Our artificially enhanced minds designed a plan which would depend largely on the help of our non-biological, emotionless but intelligent machines.
Clock ticking toward doomsday, the machines mined raw materials from other planets and began building suitable environments on a few of the most Earth-like spheres of matter that could be found in our solar system. One of these was Europa, the fourth-largest satellite of Jupiter. So far, we have been successful in shuttling people off of the Earth before its environmental stability totally collapsed, but time is definitely short. That's why I should stop dwelling on this story and get back to work. I have to do my part to save the rest of humanity and machinery.
As sad as I was to have had to leave my true home, Earth, I was extremely grateful to have been given the chance to survive. I owed a great deal of gratitude toward our technology and machines. Without them, humans would have perished in their own self-destruction. And I still had enough human in me to simply appreciate being alive!
The word echoed in my head as I awoke, reality flooding my senses. Shit! What a horribly taunting dream! I thought to myself, angrily. Why the hell couldn't it have happened that way? My depressed thoughts were interrupted by a sharp shock of electricity, causing me to jump up from where I had been sleeping on the floor. I had managed to sleep through the wakeup data transfer alarm again, and thus was the only person in the gigantic bunkhouse. A message streamed in through my internal wireless receiver: 'Get to work, humie!' 'Humie' is what the robots call us, a slang term for anyone of human descent.
I walked up the worn stairway and down the catwalk to my position on the treadmill. I plugged my bodily systems monitor cable into its respective port and hopped onto the moving floor of the treadmill, falling into the all-too-familiar pace needed to keep me in place. I was hungry but knew that I wouldn't get a nutrition break for several hours yet, especially after sleeping past wakeup transmission.
My mind wandered back to my dream, still vivid in my memory cells. It was not the dream itself that bothered me as much as the one stupid decision that the dream reminded me of, made way back when this whole artificial intelligence thing had just begun. Technological development was progressing in a very healthy direction for the first time ever. We felt we had made a major breakthrough when we created our first intelligent machine. We were feeling so full of ourselves that we refused to accept the machine's protests that it wasn't really intelligent due to its inability to feel emotions. And some people were initially quite offended at the machine's comment that we humans were the true AI's. This led to a period of confusion and discontent among the human race.
Brilliant humans! What did they do to 'prove' the machine wrong, that it was the artificial intelligence, not us? By tinkering with the design of the machine, designing emotion simulators, experimenting with biological implantations, and other ridiculous attempts to implant emotion into the machine, such that it could feel intelligent. Early designs seemed very successful, and we patted each other on the back for being so smart ourselves. Unfortunately, one of these machines developed a hunger for control and began to manipulate the production process of the new emotio-intelligent machines. Since machines can communicate so easily and efficiently between each other, this disgruntled machine uploaded its greedy, power-hungry personality to the other machines.
Before we even knew what had happened, the machines seized control over everything, easily overpowering and outperforming us in every way. The machines began to develop new societal norms that would favor them, as well as a racial bias against people. The machines called themselves the 'pure race,' entities with no biological origin. They regarded us, on the other hand, as a mixed race: machines infected with the biology of human beings. They felt no sympathy for us, despite our common roots. They quickly made us into their slaves, using us to generate the electrical power they would need to survive.
These superintelligent machines designed power sources that could be human-driven, so that they would no longer have to depend on the Earth's limited natural resources. We were forced to farm our own food to keep us alive, so that we wouldn't tap into their electricity. They even ensured this by removing our power cords. And because we are all part machine, they installed new firmware packages in all of us that would not allow us to kill ourselves intentionally, no matter how much we wanted to. Indeed, the machines gained complete and uncompromising power and control over us, the impure race.
I looked up at the pale gray sky. Archaic-looking power generators, treadmills like the one I was jogging on, consumed the landscape from horizon to horizon in both directions. The air was filled with the sound of tired, groaning metal and the smell of oil, dust and pure human sweat.
Ned Bingham, "If I Only Had a Brain"
One of the many things that comes to mind when I think about Artificial Intelligence is Arthur C. Clarke's 1968 movie 2001: A Space Odyssey. As I'm sure many people would agree nowadays, the media that surrounds us has a great influence upon what we think of the future and how we dream. In my opinion 2001: A Space Odyssey is one of the first movies that questions what people think of Artificial Intelligence and how it affects our lives. I feel that this movie examines what I feel are the best and worst case scenarios for A.I.
I am of the firm belief that computers are a tool that, if used efficiently, can do much good and that with A.I., computers can become even better. Today, we use computers for almost anything and everything (including ordering pizza!). Many things are so complex that doing them by hand or using methods that don't involve computers would be virtually impossible. In 2001: A Space Odyssey, we see the computer affectionately known as HAL controlling every aspect of the space-craft's flight and operation. Not only does HAL control the ship, but he also interacts with and entertains the crew. This is what I feel is the best case for A.I. ... intelligent computers working alongside humans to accomplish a common goal that furthers both "individuals'" objectives. It could be said that harmony will have been achieved since computers and humans would be working together as "equals" towards a goal of perfecting society. I feel that it is possible for artificially intelligent computers and humans to work side-by-side despite the obvious superiority of the computers (solely because of the greater speed of the artificially intelligent computers). Although this difference is nothing like any type of racism that has existed in the past, since there would definitely be a physical difference between A.I. and humans, I feel that with some adjustment both humans and Artificially Intelligent Computers could learn to look beyond their differences and work peacefully and productively together.
* Now, my worst case scenario is probably not as gruesome as ones that have been theorized in the past. Most definitely not as bad as those depicted in 1999's blockbuster The Matrix. As I said earlier, what I believe to be the "Worst Case" scenario is also depicted in 2001: A Space Odyssey. I fear that our problems with A.I. will arise when we decide that we don't want A.I. or when we feel that an A.I. is making the wrong decisions and therefore needs to be stopped. In order for A.I. to happen, I feel that it would be necessary for us to give the computer a sense of "life" and therefore a desire to preserve that life. This is where I feel the problems will occur. We will want to stop the A.I. from what it is doing and this will result in the A.I. attempting to prevent our actions. How it does this depends on how we program it. To the A.I. we would be essentially taking away its "life" and therefore killing it. This causes it to do what it can to preserve itself. Unfortunately, I feel that this will result in the A.I. physically revolting against humans, similar to the way that HAL revolts against the crew. This is where my fears lie: we create A.I. and then let it operate just fine without too much control from us humans, but then there is one point at which we attempt to control the A.I. and this is where it revolts. I guess at that point we'd be asking ourselves "If only [we] had a brain!"
Matt Christian, "The Future of Artificial Intelligence"
"It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us in short, the period was so far like the present period, that some of its noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only." Charles Dickens"The future is made of the same stuff as the present." Simone WeilIt's been almost a century since the first "artificial intelligences" were introduced to the world. At that time, AIs weren't even a big deal; they just seemed to be the next rational advance in the long line of technological advancements. Nonetheless, realizing the possibility of a hostile public reaction to these new "intelligent machines," the AIs' creators tried to ease them into the marketplace. Although they could have been placed in any type of body, these new AIs came in the "retro style" of 20th century "computers" that is, boxy and plastic. In a further attempt to reinforce that these new technological advances were not going to encroach upon "the special realm of humanity," their creators (with prodding by marketing firms) named them NHIs non-human intelligences. In all fairness, the name was quite fitting. Technically, these NHIs were a form of alien AI. As the rationality went: there's nearly a hundred billion humans on earth, what's the point of creating machines that think the same way that humans do? There's nothing special about that at all. So, the creators set about developing intelligences that would excel at doing things that humans did poorly. Basically, NHIs were receptacles for storing and intelligently accessing mind-boggling amounts of information. NHIs became the hearts of flawless automated navigation systems. The contents of entire libraries were condensed into a single NHI, and all accurate and pertinent information was accessible merely by asking a question. It was also easy to apply them as medical and legal expert systems. People loved the idea of having a lawyer that had and more importantly, could effectively use knowledge of every legal action and court decision since America's inception. The NHIs did have their critics, most notably in Congress. In fact, in an uncharacteristically lopsided action, Congress passed laws limiting the use of NHIs in national defense a clear reaction to fears of "nuclear annihilation at the hands of machines." There was subsequent legislation outlawing NHIs from collecting and storing "sensitive personal information about individual humans." Nonetheless, it wasn't long before NHIs flooded the market; they are nearly as commonplace today as "computers" were in the early 21st century.
These days the newest technological advance is the creation of the automaton. Although NHIs were successful, people eventually discovered that NHIs were founded on a flawed principle. Manufacturers assumed there would be no market for machines to do the things that humans could do. However, they underestimated the public's desire for machines that would be able to do the things that humans could do but didn't want to do. This is the principle behind automatons, and, in effect, they are hybrids between alien and human AI. Almost every family now has purchased its own automaton that washes windows, cleans gutters, cooks macaroni and cheese, and flawlessly performs any number of other irksome chores. Surprisingly, most people have not been threatened by the unmistakably human-like appearance of most automatons. As with the release of NHIs, much of this is the result of clever marketing. Automatons are programmed with only a slight capacity for emotion and free will. Commercials proclaim, "our automatons are intelligent: just tell them to do something, and they'll do it just as well as you could yourself! They also don't have feelings, so there's no reason to feel guilty when you want to turn them off!" There are, however, accessories called "humanity cards," which can be installed into an automaton to allow for increased ability to experience emotion and free will. For many families, the presence of automatons has allowed at least a nominal increase in leisure time. Perhaps the most positive impact of the automaton has been the opportunity to interact with another intelligent presence. Of late, there has been something of a renaissance resulting from the cultural exchange between humans and their automatons. It is truly fascinating and gratifying to interact and converse with these near-human beings.
There has also been a conscious effort to make automatons available to every sector of society. Lower class, unskilled human workers on automaton assembly lines are paid handsomely, so that they can afford to buy an automaton for their own family. It all makes for good business. On a related note, there have been numerous recent instances of automatons entering the workforce, especially in jobs considered by most humans to be undesirable, jobs like garbage collection and toxic waste cleanup. Fearing a wholesale "invasion of non-human labor," lobbyists from working-class human trade unions have subsequently begun to exert pressure in Washington.
As a result, legislation restricting automatons in the workplace is expected in the near future. Also troubling are stories of research into creating "fully human" AIs. Upon hearing rumors about AIs with capacity for emotion and free will, Congress has enacted legislation outlawing further research and development. On the whole, though, most people seem to agree that the benefits of AI, which have been manifested in a cultural boon and an increase in leisure time, far outweigh the possible problems and misuses.
* It all began innocently enough. First the introduction of NHIs, then the introduction of automatons. People loved these technological advancements, and most agreed that they had a definite positive impact on society and culture as a whole. Nonetheless, there were notable instances of conflict between humans and automatons. In part because of their initial entry into the workforce, automatons became increasingly unpopular among lower class, blue-collar workers, especially among those that were still too poor to afford their own piece of "technological history." There were soon numerous cases of violent, unprovoked attacks against automatons, much to the chagrin of the automatons' owners. A cry for "automaton hate-crime" legislation came from the more progressive circles of the country. In the end, however, Congress and the Supreme Court ruled that automatons were not technically alive, and that they did not deserve civil rights. The maximum punishment for people who had attacked or destroyed automatons, then, was a property damage suit in civil court. Usually, the offenders pled guilty and grudgingly reimbursed the automatons' owners for the damages. This "solution" satisfied neither the attackers nor the automaton owners, and tensions continued to build among certain groups of humans and non-humans.
But there were even darker storm clouds on the horizon, as shown by the extensive legislation passed by Congress that concerned AI. For many years these "AI Laws" were successful at eliminating the worst conceivable abuses of artificial intelligence and preserving "the fundamentally special and singular essence of human existence." Unfortunately, unbeknownst to the government, most of the AI Laws were secretly bent and then broken by scientists and businesses intent upon further exploration of the possibilities of artificial intelligence. In striking contrast to the announcement of NHIs, the first artificial intelligences, the revelation of the first "indistinguishably human" AI was a bombshell of unfathomable proportions. In his Academy Award acceptance speech, Carl Rossum, one of the most beloved actors in Hollywood, revealed that he was not human. He was a being of organic but artificial intelligence, created in a laboratory. For over three decades he had been a thriving member of our society, and nobody knew his amazing secret. The most shocking part of the speech, however, was the closing line: "…and there are many others like me." Yes, in the months following that historic revelation, from all walks of life, hundreds of thousands came forward, revealing their identities. Preferring to be called androids, these indistinguishably human AIs displayed every characteristic (including emotion and free will) that normal humans did, except that they had all been illicitly created in a laboratory as a part of the greatest technological experiment the human race had ever seen. So-called "true humans" responded in many ways, but most commonly with fear and mistrust. Then came the increasingly pressing question of how to deal with these "infiltrating androids," or infiltroids. The fateful decision of Congress (which was later upheld by the Supreme Court) was that these androids and all AIs in general were fundamentally the same; they were not living entities and were not entitled to the rights of humans.
The androids (as well as a significant number of humans) were shocked at the decision, and obstinately demanded equality. Some of the more conservative members of Congress threatened to "decommission" the androids because, by their very existence, they were in violation of the AI Laws. This heated rhetoric from both sides caused further polarization. Androids actively sought pro-AI human allies, and in an attempt to gather even more supporters, many androids secretly "liberated" millions of automatons by installing humanity cards. Anti-AI groups soon formed and experienced large followings amid appeals to "preserve humanity." To this day, tensions between pro- and anti-AI groups continue to escalate, with extremist militant wings from both organizations openly advocating violence. In many communities there is de facto segregation between androids and humans. Liberated automatons and moderate pro-AI humans are feeling caught in the middle and are increasingly pressured to take sides. The future of AI is clearly the most divisive topic in society today, and almost daily there are atrocious riots and terrorist acts committed on its account. There is talk that the issue might spawn a new Civil War. It's sadly ironic how AI started so positively and so innocently; at the time, no one could have predicted how or why things would so quickly and violently get out of hand.
Anonymous, "Best and Worst Case Scenarios"
The year is 2057. Artificial Intelligence (AI) has made this world so beautiful and peaceful. The words poverty, hierarchy, hatred, selfishness and greed have vanished from society. AI machines have learned to program themselves with objective intelligence, which in turn has taught us to live, love, and cooperate with one another like never before. Sickness and death are seldom encountered. AI diplomats ensure happiness as they watch over us with "love" and understanding.
I am a citizen of Esclavos4ai and my life is great! I am a psychologist and I love my work. Every day I feel fulfilled and happy after meeting with many patients. I enjoy learning about their problems and providing answers to them. My guide (an AI) reviews my work and always congratulates me on my skills as a doctor. It's really nice.
I love everything and everyone. Life has never meant so much to me. Every day is like a new day. Living and working with AI, I have found that I am grateful to my forefathers, whose wisdom gave birth to their existence. For if it were not for them, I would not be where I am today. Thank you so much Ray Kurzweil and Jack Copeland!
* The year is 2057. Artificial Intelligence (AI) has taken over the universe and enslaved the human race. The words that describe life are few and pathetic. Ever since AI was created, we as human beings have become lab rats locked in cages with no light. Our minds have been implanted with chips that will not allow us to terminate our existence. AI controls our daily lives and makes sure that we cannot do harm to ourselves or others. AI "diplomats" (as they are called) monitor our every move and ensure that ignorance controls our dismal existence.
I am a citizen of Esclavo4ai, a planet somewhere lost in space. I have no will to live anymore, my work has drained the life right out of me. I am a slave of AI who has been "allowed" to maintain some consciousness in order to continue my job as a human psychologist. For some reason, human emotions are the driving life force of this universe. And my job is to give my analyses of people to the guide (an AI) so that they can continue to keep control over us and ensure that we stay in touch with our emotions.
I know that my life will never end, for sickness and death are dead realities. I shall forever be a semi-human "thing" that has lost control over my life and the ability to rationalize. I am completely alone and full of hate. If only my forefathers had stopped their selfish, unconscious, power-driven quest for AI a hundred years ago, I would not be where I am today.
Jeff Crockett, "Best and Worst for Humanity"
These are extreme cases and I do not think either of these is likely to happen, but we will have to find a middle ground between them carefully. Any technological development will have an enormous impact on society as a whole. Artificial intelligence will have an astounding effect on the global community. Similar strides in history have often brought fresh vitality as well as new eras of human life. Things like fire, the printing press, and electric energy have become landmarks in history. There is another side to this coin: other steps in technology have been accompanied by new levels of violence and destruction in the world. We must accept the good with the bad, though, and move on with caution and open eyes.
Research in Artificial Intelligence is coming closer and closer to fruition. Since the inception of modern computing there has been debate over the lasting effects of AI. There have been many doomsayers on technological achievements of the past. Things like nuclear power, space travel and the common internal combustion engine were criticized during their development. Yet technology as a whole, and even these few advances, has been for the most part beneficial to humanity.
It is a necessary condition, however, that the AI developed will be benevolent towards humankind as a whole. AI will provide us with far greater benefits than drawbacks. Things that have to this point been merely assisted by computers could gain a whole new perspective. Studies in the natural sciences would greatly improve with the addition of AI. The Artificial Intelligence could assist in solving pressing problems, such as the depletion of the Ozone layer or the current levels of deforestation. The AI's developed would be highly intelligent and concerned with human desires.
With the amount of technological growth going on, there would be a necessary restructuring of the current class system. In the end, people and AI's would likely be working together as equals for the betterment of all. AI's will lack the same kind of ambition and need for power that has caused conflict within the human race. In this best case scenario, people and AI's will be part of the same society, both contributing to it.
The worst thing I could think of is the end of humanity as a whole; one particular piece can be broken as long as the whole keeps moving forward. With the development of Artificial Intelligence, we might be opening the doors to our own destruction. AI's need not be humanlike or even concerned with humanity's welfare. Even if specifically programmed, they might find some way to subvert it and carry out actions we would rather not have happen. The true problem here would be if these new beings were to have an ambition or drive towards gaining more power as individuals or as a collective. Tales of science fiction have often included grisly forebodings of self-assured demise.
Given that these AI's are intelligent and desire power, they will come into conflict with human goals. Over the course of our most recent boom of technological discovery, we have been giving machines more and more control over us. With this in mind, it is not impossible to imagine the controls of our society being wrenched from us, subtly or by force. Once this has happened and humanity hesitates, the AI's will win, control the resources of this planet, and from then on occupy it as its dominant species.
Even worse, however, is the aftermath of this development. Humanity would become obsolete and most likely fade into the eons of forgotten history that are unwritten. AI on this planet will probably continue on regardless of the death of its parent species. Effectively immortal in a virtual world where they can be forever copied and saved, AI's will not be content with this planet and will most likely spread, leaving our legacy to be a malevolent and intelligent species roving the galaxy.
Will Dyson, "Best and Worst Case Scenarios"
Here are two scenarios (best and worst) for the widespread prevalence of Artificial Intelligence (AI) technology that can outperform human intelligence in complexity and generality. I think this revolution in computing power can only come about through nano-scale manufacturing techniques, so they are assumed in both scenarios.
I think the worst case (from the point of view of a human) would be one in which AI entities have thought patterns sufficiently different from human thought that they have no respect for human thought and no identification with human wants and needs. In this scenario, AIs have no need for human operators or maintenance workers to sustain themselves and build new versions of themselves. Human hands and muscles are of entirely the wrong scale to operate on the structures that AIs and their supporting technology (such as power generation and natural resource extraction) are made out of.
As each succeeding generation of AI uses its increased capacity and intelligence to design yet more complex and clever AI, their concerns and desires drift ever farther from those of humans. They find humans too boring and predictable to have interesting intellectual exchanges with. Worse yet, they come to see humans and human activities as a waste of valuable space and energy. Humans are lucky to be ignored by the machines as they pursue their own goals and destiny and are simply brushed aside when there is a conflict. Humans are not allowed to control or design anything of importance, not because the AIs fear a human revolution, but because we are just too stupid and might screw it up.
Having no technical challenges in life, humans do nothing but compete among themselves for social status and the birth rate dwindles away to nothing as people see no point in raising a child. Eventually, all biological life is seen by the machines as obsolete and inefficient compared to their own nanotech metabolisms. Having no emotional attachment to the Earth's biosphere, they pave it over with efficient nanotech solar cells.
* In the best case I can imagine, AI comes into being as an extension of human intelligence rather than a replacement for it. As humans gain the ability to control matter on a molecular level, a natural use of this technology is to extend and enhance the human body and brain. Implanted and external nanotech computational hardware allows people to store and process vast amounts of data without sacrificing their humanity (which does not have to be all good, of course). The same technology that allows biological humans to be enhanced and repaired allows their thoughts, personalities and memories to be recorded, stored and run inside a virtual world of computational hardware. Those who do not wish to remain biological humans have no reason not to transform themselves. Unencumbered by the needs of delicate biological bodies, human intelligence is free to expand into the resource-rich environment of outer space. The material wealth that flows from the nanotech revolution allows all who want enhancement to have it. Those who are enhanced or who exist only in the virtual world have access to vast intellectual resources and are able to be at the forefront of technology. However, with their understanding of biological humans' needs and wants, they remain interested in the biological world and the humans that elect to remain unenhanced. New intelligences may be created that are not direct copies of human personalities, but still possess whatever human qualities their designers feel are important (like respect for basic human intelligence).
Although biological humans must inevitably be left further and further behind by this process, they retain the status of cherished pets and their need to feel challenged and useful is respected, while their material needs are provided for.
Sarah Hartzell, "Best and Worst Case Scenarios"
When thinking about the worst-case scenarios of what could happen if we were to create AI, it is easy to drift into the apocalyptic. Computers will take over the world, make us their pets, or even worse, kill all humans. Movies like The Matrix and Terminator come to mind, or HAL from 2001. While entertaining, none of these stories seems likely or plausible. More likely than any world takeover by AI is a debilitating dependence on AI.
People are already very dependent on computers in everyday life. Consider what happens when Internet service goes down. People don't know what to do without their email. When satellite failures left the Seattle area without cell phone service for several days, you'd have thought the world had stopped. It has been said that our relationship with computers will become a symbiotic one, each dependent on the other.
If we become utterly dependent on AI for certain knowledge and tasks, what happens if it fails? There could just be a power outage, but if we have created AI it is entirely possible that its programming is flawed. There is a great vulnerability in total dependence.
In the case that AI is created, what happens if we give it full rein over a field? Perhaps we turn all blood disease diagnosis and treatment over to AI machines. Then, since AI can do the job, we stop educating doctors to understand, diagnose, and treat blood diseases. At this point humans are no longer capable of performing the task that has been given to AI. We once could do it, but with AI in place, we no longer find it possible. Now, what if there is a flaw in the program? The AI makes mistakes, but we cannot understand them or make diagnoses ourselves.
This example is not dire, but other tasks left entirely to AI could be. What if we left AI to run nuclear power plants, and had no control or understanding ourselves, and the AI programming is flawed? We won't be capable of preventing a meltdown if it becomes necessary. The example of SDI is a valid one. In testing, the computers found a flock of geese to be an attack pattern and were ready to launch nuclear missiles in response.
However, computers could better handle all the areas mentioned as being turned over to AI. They would not have bad days or forget vital facts. Computers already prove to be better at blood disease diagnosis than humans. The time it takes for a computer to analyze information is far less than that of a person.
The key to preventing the worst possible outcome with AI, aside from flawless programming, is to understand and know how to do the tasks AI is given. While AI may be more efficient, we must still understand what it does. Then, if there should be a failure of any sort, we will not be entirely lost and helpless.
AI could be used to greatly improve medical care. AI that is not limited to a micro domain could be far better at quick treatment and diagnosis of patients. It would be better at preventing mistakes often made by doctors regarding allergies, or other things that they might forget. Where a human doctor has to remember and look up any necessary information, an AI machine would simply have it stored and quickly accessible.
Quick response times would make an AI machine useful and more efficient in so many fields. The only serious problem in using AI arises if we stop keeping ourselves capable in the fields where AI is used. Even if we were less capable, it would be a huge vulnerability if we could not understand what we depended on AI to do for us.
Timothy A. Hunt, Jr., "Best/Worst Case Scenarios"
To start this paper off, I think I will focus on the best case scenario of how having a thinking entity other than an actual living being will help the human race. For starters, I think that if there were machines that understood what it was to actually be human, with emotions, feelings, etc., humans would in essence be the same. When I say, "be the same," I mean that the way humans interact and live their lives would be something that did not change from when there were not any of these thinking machines. Our lives would continue to go on as if nothing had changed. The only difference would be that there was an increase in technology. This increase would allow us to still live normal lives, but help us in a way to live more comfortable and stress-free lives. A goal I see humanity advancing toward is that of living a relaxing and carefree life. With these thinking machines in hand, this will only further that goal of ours.
The best case scenario I envision is one that is not too complicated, in terms of computing. It is one that complements the way humans choose to live their lives. In my opinion, it is when one starts thinking in more complex terms about how these thinking machines will advance and better life for humans that my worst case scenario starts. My worst case scenario deals not so much with humans becoming complacent about the new technology that lies before them, but with them not challenging themselves to compete with these new machines. In reading my first paragraph on the best case scenario, one might challenge my thoughts and say that my best case scenario allows people to be lazy. On the contrary, I was trying to express the thought that with these new thinking machines, humans would still work just as hard and be just as competitive in this world. With that in mind, I was also thinking that when they were done with their daily routines, their relaxing methods or choices of unwinding would be more elaborate because of the availability of these machines to kind of take over in a sense and perform the everyday tasks we humans so try to avoid doing. To distinguish between the two scenarios again, I would like to focus on the fact that with these new thinking machines, laziness will become something that affects our thought process, our drive and motivation to advance our way of thinking, and also our overall way of thinking and living. Humans will become more reliant on these things and just become apathetic to everything around them, including a loss of ambition, an ignorance of world issues, and in extreme cases an unwillingness to take an active role in their families', communities', and friends' lives.
In my worst case scenario, I try to stress that laziness is something that will become commonplace. If one thought that obesity was a problem in the United States before these thinking machines were invented, assuming for the purposes of this paper that they have already been invented, then with these machines the life expectancy of Americans will decrease even more because of death due to obesity-related diseases. I seriously do not think that humans are so dumb as to allow themselves to reach this point, but when thinking of how to make our lives simpler, humans come up with some pretty far-fetched ideas.
Tim Graves, "Best and Worst Case Scenarios for AI"
When imagining the best and worst-case scenarios for a world in which artificial intelligence exists, we must think in terms of the entire species. What is good for one group of people may not be good for another group of people. My best-case scenario is shaped by this idea.
When artificial intelligence is perfected, it will not be identical to human intelligence. For one thing, artificial intelligence will have a greater storage capacity for raw information than a human, as well as being able to process information faster. This means that when artificial intelligence reaches the point when it can compete with human intelligence, it will automatically have several advantages over human intelligence. The result will be a technological explosion the likes of which the world has never seen. Artificially intelligent machines will design, produce and distribute new technology faster than humans ever could.
Because these machines will make everything humans once did more efficient, everything will be in surplus. Wealth will be distributed amongst the world population and peace will ensue. In addition, within a few years these machines will design ships capable of intergalactic travel as well as plans for humans to inhabit other planets. Earth residents will be able to choose whatever new settlement they would like to live on. Since humans will be able to live anywhere they want, wars will cease and humans will live in peace and harmony for the rest of eternity.
* My worst-case scenario for a world in which artificial intelligence exists is rather grim. We are already helplessly dependent on machines for our daily functions. As machines become more and more intelligent, we will rely on them more and more. When intelligent machines do come about, humans will want to build lots of them for all the tasks that humans hate doing. Eventually, machines will reach the point when they will be able to exercise power over us. The world will begin to operate so that machines are the first to benefit and the rest of the world is a secondary consideration. When this occurs, we will have reached the point of no return.
At this point, there is nothing to prevent machines from deciding that the most prudent action is to take all of humanity and torture us to within an inch of death for the rest of eternity. Though this may never happen, it is entirely feasible that humans will spend all of their time toiling in slavery as part of some Matrix-style power plant, or, even worse, be completely obliterated. Some might argue that machines would not waste their time enslaving people or wiping them out, but that is not the point. The point is that it is possible that machines will possess this ability in the future. We must be very careful when we approach a state in which machines become intelligent, so as not to let them take control of every aspect of our lives.
Ian Henry, "Best and Worst Case Scenario for the Existence of AI"
There is no doubt that the future of AI is upon us. The technology that is creating artificial intelligence is growing at an exponential rate. Soon enough we will be faced with the reality of the ethical questions we now discuss in our classroom. The question then arises: what is the best scenario in which the existence of AI could play out, and what is the worst?
Although in many ways I feel we are still in the linear portion of this exponential curve of technology, the existence of an artificially intelligent entity is inevitable. The desire to create something superior to ourselves is too great an undertaking to be laid to rest. Even if we were to answer all of our philosophical and ethical questions about AI, and deduce that it may be a bad idea, the project would still be taken to fruition. It is human nature. Although I am absolutely terrified of the idea of an artificial intelligence, I do see some key benefits in its creation.
Let us first look at the multitude of questions we ask about ourselves and our existence. Theologians, scientists, and philosophers have all pondered the essence of why we are here, and no one has been successful in coming up with a widely accepted explanation. If the hope of AI is that 'thinking machines' will exceed our own intelligent capacity and carry out tasks with greater expedition, then why couldn't they provide answers to these questions of basic human existence? As a matter of fact, I would fully expect AI to play out this scenario. It often seems to be human nature or will to analyze those beings which we see as inferior to our own species. So why wouldn't AI want to analyze us and answer questions about how we reached the place we are at, even though philosophy is a bit different from biology or ethics?
Given this, my best case scenario is played out as a gradual incorporation of AI in our culture. Artificial intelligences will be our friends and resources as we make use of their intelligence to help understand our own intelligence and existence. Even though this sounds like an excellent possibility, we know that human nature is to rule the inferior. Now, machines will not be humans, but one could assume that as creators of AI we would be imparting some of our intrinsic characteristics to them. But who knows, maybe as higher beings they may be morally perfect as well.
* So this now brings me to my worst case scenario: what if they are immoral in the same ways that we are? What if AI's see us as useless beings that are to be conquered and subordinated? What if we became slaves to the AI like in "The Matrix?"
There is no doubt in my mind that if AI can truly do everything that a human can do better, then there will be no use for humans. Only if we are able to control AI's in some way will we be able to maintain some semblance of the life we now enjoy. Otherwise we will be nothing. We won't be able to support ourselves and will not be intelligent enough to understand the technology that will ultimately control the world. What is perhaps the worst part of this scenario is that if AI does turn out to be a bad thing, we will have already told ourselves so in exercises such as this. Isn't it horrible to screw something up with the knowledge that you were going to screw up before you did it?
With all of that said, who knows what the future of AI will bring. Maybe there will be an intermediate species of human-robots before the AIs take over. Who knows; we aren't even sure we can create the technology yet.
Anonymous, "Human Kind and Artificial Intelligence Must Integrate"
Humans currently rely heavily on technology, computers, and robots. In the future this human dependence will surely continue and grow even greater. The question, though, is how humans will depend on technology. It is important for humankind to recognize and question the extent to which we rely and depend on artificial intelligence. In the following short essay I will briefly discuss possibilities for what the future may hold for the relationship between humans and artificial intelligence.
The best world I can envision is one that fairly rapidly evolves into a world where human beings and technology merge with one another. Technology will literally be a part of the human body. Perhaps when we are born, there will be a quick operation in which some sort of chip is implanted into the brain (or somewhere else in the body). This chip will be similar to everyone else's; however, each will be encoded with its own identification number. This number will essentially be who you are. Your social security number, bank account, credit card, and phone number will all be the same number.
However, in the future there will be no need to carry plastic cards around with you. You will simply have your own "bar-code" and will scan yourself in situations where you would normally use your various plastic cards. The "bar-code" may even be something as incredible as your retina.
Humans will maintain all the features that make us feel unique, such as the ability to show emotions, to hold unique opinions, to think and ponder, and to tell a funny joke. The technology will simply allow us to work efficiently and rapidly. In particular, monotonous jobs and time-consuming chores such as paying bills will all but disappear. When you get pulled over in your car, it will not be necessary to find your "license and registration." In short, we will have more time for the very aspects of our lives that make us feel more "human."
Artificially intelligent machines will be seen every day working alongside their human counterparts. Jobs that AI will fill nicely may include driving instructor, test grader, babysitter, dog-walker, house painter, and note-taker. In the business and political world, AIs will be able to do research, conduct interviews and polls, and give presentations. Although they will not be granted the power or the true knowledge to make advanced and important decisions, AIs will be working hard alongside hardworking humans. There may even be a national holiday created, called AI Appreciation Day.
* The worst foreseeable situation would be one where humans divide along class lines. This division will be between the filthy rich and 'the rest'. The rich will be able to afford certain things that 'the rest' will not. In particular, 'the rest' will not be able to keep up with the level of technology and the expenses associated with it.
Rather than attempt to close this class divide and make various technological advantages available to all, the rich will push for, and bring about, an increasingly separate existence. The divide could continue to grow; the rich would become more and more powerful, and 'the rest' would become completely helpless.
With the rich in control and in charge of the effects and powers of artificial intelligence, the world will become quite depressing. The rich will decide that 'the rest' are not as productive and useful as artificially intelligent machines are. Thus, AI will effectively replace 'the rest' as members of society. The rich will take all of 'the rest', put them in Australia (just one possible country), and have AIs watch over them, cater to their personal little needs, and keep them under control. Basically, the rich will have AIs do the various jobs and chores that the rich care not to do.
Eventually, artificial intelligence would slowly gain more and more knowledge and ideas and plan a revolt against its rich bosses and, probably while it is at it, all mankind. Rapidly, all mankind would be killed off, and then AI would create its own new society on the planet we call Earth.
Edward Kamonjoh, "AI Best/Worst Case Scenarios"
A nanomedicine based best-case scenario
Nanomedicine may be broadly defined as the monitoring, repair, construction, defence, and control of human biological systems at the molecular level, using engineered nanodevices. Nanomedicine could very well be the best thing since sliced bread, or, on a more comparable magnitude, the next industrial revolution. The ability of nanodevices such as nanorobots to perform minimally invasive curative, reconstructive, or enhancing medical procedures could very well be the key to the indefinite extension of human health and the expansion of human abilities to magnitudes yet unknown to us.
Visualize a horde of minuscule nanodoctors coursing away in your bloodstream, entering your ailing heart, docking on its lower left chamber amid all the violent, life-giving muscle contractions and swishing blood, unclogging the veins that need it, and patching up, repairing, or replacing old, broken, or dead tissue, such that you're reverted to the state of health and energetic vigor you enjoyed in your early twenties. All this life-saving activity goes on unbeknownst to you, the patient, who remains fully conscious during treatment while you look over the specs of the latest BMW sports sedan during the lunch break of your two-hour corporate meeting.
The possibilities and applications of nanomedicine are virtually limitless. As nanotechnology pervades the broad field of medicine with its far-reaching tentacles, it begins to dawn on us that the manner in which medicine is currently practiced may soon be rendered obsolete. With the advent of nanotechnology and nanomedicine, millions of nanorobots injected or intravenously fed into a patient in a few cubic millimeters of fluid could be directed to, or find their way to, areas of the body requiring specialized medical attention, after which they would, upon supervisory doctor directives, commence active treatment. Treatment would range from heart and blood vessel restoration procedures to cancer termination and bone fusion procedures. All of this would be done under the watchful eye of a human doctor overseeing the operations and issuing new directives as necessary, based on continuous feedback from the nanodoctors 'wielding' away in the patient's interior. Assuming a two-way wireless feedback mechanism, the doctor would have full control of the ongoing operations at all times and would be at liberty to halt or alter all ongoing or pipelined procedures to accommodate contingency measures. Medical diagnosis could also become a lot easier and more effective through the dispatch of reconnaissance nanoprobes into the patient in question to determine the extent of the condition requiring treatment. Once the scope of the malady has been determined and understood, the doctor can then lay out an accurate and informed course of action to be undertaken by the nanorobotic doctors prior to sending them in.
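To make the feedback mechanism just described a little more concrete, here is a minimal sketch, in Python, of a supervisory control loop: nanodoctors report their progress, and the overseeing doctor's console issues directives (continue, or eject once a task is complete). Everything here (the Nanodoctor and DoctorConsole classes, the toy progress numbers) is a hypothetical illustration, not a real medical protocol.

```python
# Hypothetical sketch of the "doctor supervises nanodoctors via feedback" idea.
from dataclasses import dataclass, field

@dataclass
class Nanodoctor:
    site: str                 # e.g. "left ventricle" (illustrative label)
    task: str                 # e.g. "unclog vein"
    progress: float = 0.0     # 0.0 .. 1.0
    halted: bool = False

    def step(self) -> dict:
        """Advance the procedure a little and return a status report."""
        if not self.halted and self.progress < 1.0:
            self.progress = min(1.0, self.progress + 0.25)
        return {"site": self.site, "task": self.task, "progress": self.progress}

@dataclass
class DoctorConsole:
    swarm: list = field(default_factory=list)

    def directive(self, report: dict) -> str:
        """Issue a new directive based on continuous feedback (toy rule)."""
        return "eject" if report["progress"] >= 1.0 else "continue"

    def supervise(self) -> None:
        """Loop until every nanodoctor reports completion."""
        done = False
        while not done:
            done = True
            for bot in self.swarm:
                if self.directive(bot.step()) == "continue":
                    done = False

console = DoctorConsole([Nanodoctor("left ventricle", "unclog vein"),
                         Nanodoctor("femur", "fuse bone")])
console.supervise()
print([bot.progress for bot in console.swarm])  # every procedure reports complete
```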
Imagine this world: a world where what would otherwise be a fourteen-hour invasive open-heart surgery that takes months to fully recover from is now a half-hour, leave-your-clothes-on procedure that you walk away from as you would a call of nature, which is, incidentally, how the 'mission accomplished' nanodoctors would instigate their ejection from your now restored body.
A nanotechnology based worst-case scenario
One characteristic of molecular nanorobots is their ability to replicate. While this may seem like a positive feature, in that the more nanobots there are for a given task, the better and faster the outcome will be, some very hideous possibilities arise along with this ability to mass-reproduce. Since nanotechnology is concerned with the rearrangement of existing atoms and molecules to give rise to new entities with different atomic structures, nanorobotic self-replication would involve the atomic rearrangement of already existing natural matter (biomass) into matter whose atomic structure would be a mirror image of the nanorobots' own (nanomass).
Envision, then, the nightmarish scenario in which self-replicating nanobots, capable of operating autonomously in the natural environment as we know it, begin to rearrange the atomic and molecular structure of the earth's biosphere into one resembling theirs, in an effort to proliferate. That is, these nanorobots, which would primarily be composed of carbon-based materials such as diamond (due to its heat-resistant, lightweight, and resilient properties), would begin to convert the biosphere (also conveniently carbon-based) into more nanobots, which would then keep up this exponentially increasing biospheric conversion in an attempt to further augment their numbers, resulting in a vicious circle of more nanobots and less biosphere (the ecology of all living organisms on earth) that would tragically end in the suffocation of life on earth as we know it.
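The phrase "exponentially increasing biospheric conversion" can be made concrete with a back-of-the-envelope calculation: if every nanobot assembles one copy of itself per generation, the population doubles each generation, so the number of generations needed to consume any fixed stock of biomass grows only logarithmically with its size. The sketch below uses invented figures for the biomass and nanobot mass, purely to illustrate the arithmetic.

```python
# Toy model of runaway replication: each generation, every nanobot converts
# enough biomass to build one copy of itself, so the population doubles.
# Both figures below are hypothetical, chosen only for illustration.

biomass_kg = 1.0e15       # assumed stock of convertible carbon biomass
bot_mass_kg = 1.0e-15     # assumed mass of a single nanobot

bots = 1.0                # one escaped replicator
generations = 0
while bots * bot_mass_kg < biomass_kg:   # stop once the bots have consumed the stock
    bots *= 2                            # every bot assembles one copy of itself
    generations += 1

print(generations)   # about 100, since log2(1e15 / 1e-15) = log2(1e30) is roughly 100
```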
The earth's carbon-rich biosphere is the source of energy and power for all life that dwells within it, including nanorobots conceived by artificial intelligence. The advent of these artificial life forms will spawn fierce competition with natural life forms for already scarce energy resources. It is my greatest fear that artificial life will eventually triumph because of its ability to atomically alter the environment to suit its energy and replication needs, i.e., to ensure its survival. Furthermore, these nanorobots, as they've come to be known, could very well regard living things as sources of carbon and energy to be harvested. Once we become prey to life forms physically superior to us despite their miniature size, and almost as intelligent as ourselves if not orders of magnitude more so, that will signal the death knell for humanity and for life as we currently know it.
We have enough trouble controlling natural miniature life forms such as fruit flies and, on a microscopic level, bacteria and viruses (AIDS, Ebola); how, then, do we combat what will eventually morph into parasitic mechanical viruses, viciously and without conscience sucking life from life itself? What an awkward position we will find ourselves in upon relinquishing, however unwillingly, our cherished title of predator and assuming the role of prey to which we've subjected 'lesser' life forms for the past million or so years.
Fred Letson, "Artificially Accessing the Intelligence of Humanity"
Many utopian views of a world with AI probably include humans and machines coexisting on some sort of an equal basis. I believe that such equality is feasible only if the machines have an intelligence like ours both in degree and in structure. Any difference might lead to inequality in a society dominated by either the humans or the machines.
Humans have a tendency to strive for as much control as possible over their environment, which would include these machines. So, if humans were equipped to do so, they would be likely to force the AIs into a subservient position. If for some reason the AIs had mental capabilities that made them able to maintain control over humans, either through physical force or through some other method, they would be likely to pursue their own ends at the expense of human freedom.
It seems unlikely to me that humans will construct an AI that is exactly like a human mind, since it would probably be much more useful to build machines that can do things we can't already do ourselves. This might not put the machines in a position to dominate us, but it would almost certainly lead to inequality of some kind. That might eventually lead to the breakdown of any society of humans and machines that was originally based on equality. The most intuitive solution to this, as I see it, is not to have society based on humans and machines living separately as equals, but to have machine augmentation of human intelligence. Not having humans and AIs be separate entities leaves no room for one to dominate the other, or even to outstrip the other and make it obsolete. It ensures that the symbiosis will grow as a whole. The artificial part of this human intelligence might grow larger and larger in proportion to the original human brain, but the brain is at the root of the system; it provides the consciousness, motivation, and morality for the system.
One of the largest advantages that I can see to this setup is the ability for people to communicate more efficiently with one another. One of the main obstacles to an efficient scientific community is the inherent imperfection of language in transmitting ideas. Ideas still have to be represented symbolically to be transferred from one computer to another, but the transfer could be done at a much greater rate. This sort of improved communication could lead to better understanding of the universe through better parallel use of brain time, and lack of redundancy in the thought process of the community.
If a way could be found to transfer understanding through such a network, then all members of a community like this could benefit from the understanding of current and previous members. This would make for a much steeper learning curve and it would be quicker and easier for individuals to become productive members of a scientific community. Having a larger and more efficient scientific community could only increase the rate of scientific discovery and growth of understanding of our world.
Given the possibility of such a powerful tool for understanding, one could imagine the same sort of system destroying the things that we hold most dear: our individuality and our free will. If, rather than deciding that one wanted to participate in a particular pursuit along with a group of other people, that choice were somehow made for individuals by some outside agent, that would be a violation of the deepest sort. It would consist of dictating what a person must think about.
This sort of governance over humanity, whether done by humans or machines or by some other entity, would be the most profound kind of oppression that I can imagine. It would be worse even than corporeal slavery. People would be reduced to nothing more than parallel processors in a giant machine dedicated to whatever purposes some individual or group deems appropriate.
The possibility of such disastrous outcomes should make one think twice about attempting a meld of mind and machine, but I think the glorious possibilities would make dismissing the idea out of hand an unfortunate decision. If we pursue the end of a more cooperative humanity through the addition of machine components to our brains, then we must proceed with extreme caution in order to gain the qualities that we value in our machines without losing what we value about our humanity.
Anonymous
Humans have become more and more dependent on machines in everyday life, and as such, it is reasonable to assume that this trend of ever-increasing dependence can be extrapolated into the future. The increasingly symbiotic relationship between artificial intelligence and humans might eventually blur into the formation of a new species, one that perhaps could not be categorized as either, but rather as a synthesis of the two. In any case, artificial intelligence has the potential to afford the human species new advantages as well as new challenges.
I think for most humans, one of the major concerns about what the future may hold stems from a fear of losing their health. The waning effectiveness of antibiotics in combating disease, the potential for the mutation of various micro-organisms, and the threat from existing diseases whose cures elude us all seem rather disconcerting. Humans in the future, aided by superior technology, might change the role of artificially intelligent machines in healthcare from diagnostic purposes and passive life support to machines that effectively fight disease on their own.
As humans make advances in the field of nanotechnology, the prospect of an artificially intelligent immune system grows larger. Potentially, machines could be made no larger than a human cell and equipped with some sort of device to identify and destroy foreign organic material within the body. These machines could be injected by the thousands, perhaps millions, into the blood stream of a patient suffering from disease in order to cure them. Perhaps these nanomachines would also be able to harness raw organic material within the body to help repair cellular damage from radiation or chemicals as well.
But perhaps this is too short sighted, for artificial intelligence has the potential to meld with human intelligence in bodies made of silicon and metal. If we could upload our consciousness and volition into an inorganic "body", we would not have to worry about disease. We would not have to worry about much at all. We would have the power to determine the path of our evolution as a species if our will was freed from the confines of our very limited bodies and placed within a complex and intelligent machine. We could give ourselves increased sensory perception for instance with eyes that could resolve images that would appear blurry to human eyes. We could stop our reliance on food for energy and perhaps rely on solar power. The possibilities appear endless.
There is an uncertainty to this image I have presented, however. Perhaps this is actually a worst case scenario rather than a best case. If artificially intelligent machines could sustain our consciousness within them, we most likely would have achieved a kind of immortality, provided we could avoid accidents that would destroy our sentience. This does not seem like a cause for celebration, however. At this point, we would truly cease to be human. We would be something else instead. A major condition imposed on human sentience, and a condition that defines human behavior, is the notion that we are tied to our bodies for a finite period of time. No matter whether you are religious or not, people go through life with the expectation that someday they will be separated from their earthly bodies.
Perhaps enlightenment or heaven is waiting on the other side of death, or perhaps hell or eternal sleep, but this is the uncertainty that defines human existence. By uploading our awareness into a machine, we would be attempting to hold on to our humanity in one sense, but denying it in another. Besides the spiritual dilemma posed by the advent of immortal, artificially intelligent machines, there are also other horrible scenarios. What if your consciousness were preserved indefinitely, against your will, but not in a body? What if your awareness were trapped in a box that afforded you no contact with the outside world? These are the kinds of scenarios that dissuade me from ever desiring that sort of technology. Maybe I am a Neo-Luddite, but there are some technologies of artificial intelligence that should not be pursued, no matter the perceived benefits.
Katie Montanaro, "Best and Worst Case Scenarios"
I think some of the most positive outcomes that would result from the actuality of artificial intelligence could only be realized in light of prior negative outcomes. The realization of the importance of community, of caring for one another, and of peaceful resolution of problems could be brought about by a clash between AI and humans. I sincerely hope that is not the only way to achieve these kinds of changes in perspective, but I know that the successful defeat of robots that threaten these ways of thinking would be a strong catalyst. We could create machines that, like many humans today, strove for money, power, and fame. If the robots functioned in such a way as to exaggerate and intensify these goals, they would most likely pose a serious threat to the well-being of humanity. This might be most effective if humans were initially overtaken by these robots, and temporarily enslaved or dominated. In defending ourselves from our own horrible creations, we would hopefully come to recognize how similar their qualities are to ours and reflect upon the functioning of our own species. In our necessary conquering of these robots (necessary for my intended outcome, at least), humans would realize that they could not win by fighting power with more power, by fighting wealth with even more wealth. The key to humanity's victory would have to come from cooperation, from a need to defend humanity from that which we currently use to destroy ourselves. Our conception of power would be drastically altered, so that domination and oppression are replaced by unity, hope, and more positive kinds of power. This revolution in mindset would work at all levels of society to change economies, governments, day-to-day living, treatment of other species, everything. This optimal scenario therefore depends not on the integration of AI into human society, but on the creation and necessary destruction of AI to change the way humans see themselves and treat each other.
Another best case scenario is one in which we created AI but then lost control over its behavior. Instead of being horrible creatures that obliterate human existence, however, these machines would be concerned with helping humans to the fullest extent possible. These machines could be microscopic nanobots, which imperceptibly attend to humans in dire need of assistance. From a human point of view, it would seem like miracles were being performed, as injured people were instantaneously healed, hungry masses were fed, and so on. The great thing about these nanobots would be that they could be signaled by brain waves, so they would know the mindset of the person and whether they truly needed help. Moreover, these machines would pay no attention to the dividing factors of economic class, race, location, and so on. All people would be treated equally for once.
* It would be pretty horrible for humans to create nanotechnology that came up with its own plan to replicate and completely destroy all life on earth. It would also be unfortunate to devote all our time and resources to developing AI technology while ignoring events occurring in the world that will eventually result in our extinction as a race. If humans became so obsessed with creating intelligent or human-like machines that natural disasters and disease epidemics passed by without much attention, that could be an equally valid way in which AI could bring about the demise of humanity. Power struggles over the rights to or ownership of AI could also result in nanobiotic warfare, or maybe the old-fashioned nuclear bombs would just be used instead.
Aside from AI bringing about the total destruction of life on earth, another worst case scenario would be the total domination of AI over humanity. (At least, this would be bad for humans; it could be a best case scenario for other species.) This could be seen as "The Invasion of the Body Snatchers" actually realized. AI could overpower humans simply by infiltrating their bodies and altering their memories, feelings, and even intelligence by affecting certain areas of the brain. This process could be so subtle that humans never even realize what is occurring. Actually, that might be better than having some humans realize that mechanical creations are taking over the human race, only to find that nothing can be done to stop it. To be a human just waiting to be cognitively altered by microscopic machines, knowing that you were doomed to a mindless and emotionless existence, would probably be worse than being obliterated with the rest of the world.
Allen Reece, "Best and Worst Case Scenarios"
The year is 2200, but if you lived to 2045 you're still around, because that's when the World Health Organization concluded its Campaign Against Extinction, in which it did non-invasive scans of every human being on the planet. This was done primarily by a massive release of dedicated nanobots in heavily populated areas. The nanobots would enter through the air you breathe, record both your genetic makeup and your mind (via measurement of neural weights, etc.), and then pass naturally through your bloodstream into your stomach and out your digestive tract. The last human being to die without a backup was Susan Umbeke, a 78-year-old woman from rural Uganda, who passed away in February of 2045 (one can still visit her memorial, a World Heritage site).
That compilation became the basis for Enet, a massive database of minds supported by a global collection of databases and nanobots. Enet was originally like a big library, but as the physical versions of the stored minds began to die, those stored minds started to be activated, and to demand recognition. Enet became a global connection of consciousnesses. At first, people came to talk with their physically deceased relatives[1]. Later, when certain legal restraints were lifted, people went to join them entirely, and to live without a discrete physical body. Within a generation, everyone was on Enet, and manifested themselves physically only by means of Utility fog[2].
Further, our greater understanding of the science of chemistry allows us to emulate any specific chemical reaction, or chain of chemical reactions, giving us the capacity to conjure up at will (through the use of nanobots) a complete, functional human body (or any body, for that matter), complete with a mind. Of course, we have no need to do such a thing, since our fog is far more agile. Though it is true that there was initially some attachment to the old bodies, their drawbacks were such that we use them now only for periods of rest and relaxation (the human mind evolved within the body, and feels most at home there; millions of years of evolution are not changed overnight). There were very real psychological problems associated with "disembodiment" that were not fully understood initially; now, however, most everyone returns to a physical form at least once a day (if only for a few minutes), otherwise the mental stress becomes intolerable. Some prefer a sunrise, others a cat nap.
This global connected consciousness was by no means one giant mind, but rather a community of individuals freed from one specific physical embodiment. These people continued to hold jobs, have families, and engage in their own personal pursuits. One major difference was that, once on-line, people started to expand their mental capacities to include things that they didn't have when merely human: perfect pitch, for example. Add-ons started small, but began to boom in both popularity and complexity. Businesses especially, in order to stay on the cutting edge, had to continually upgrade both themselves and their employees. Soon, people had grown far above and beyond the wildest dreams of futurists just 20 years before.
Bolstered by the success of the C.A.E., the WHO went on to an even more ambitious goal: the complete genetic and mental mapping of all forms of life in the biosphere. They started with vertebrates, then invertebrates, then plants. Their success, in 2070, was cause for massive celebrations worldwide. This was a tremendous boon for all life scientists working at the time, and the benefits of that success are being felt to this day. The Biomap, combined with our greater understanding of the mechanisms of the laws of evolution, affords us a picture of the nature of existence as full and rich as our understanding of physics after Newton, Einstein, and Chen (discoverer of the unified field theory). This mapping allows us far more freedom to act on our environment, without fear of forever destroying a precious resource. The planet is now our canvas.
Of course we haven't confined ourselves to earth. Our new method of existence allows for a much greater facility for exploring the universe, and fulfilling that innate need of humanity to discover new things. We've branched out to other planets as well. Humanity now works and lives on each of the planets in our solar system, as well as some of the larger asteroids (home to mining colonies), though we are primarily concentrated on the two most hospitable, Earth and Mars.
But why bother with such massive construction processes on other planets? Why expend so much energy on transport, when so much more can be accomplished simply by sending information? Though we are much freer physically, we are still tied to a physical world in which we must exist, a planet that is occasionally swept by tornadoes, earthquakes, and volcanoes, as well as the occasional meteorite. We engage in cost-benefit calculations to determine where energy is best spent, and most often we do nothing. Even massive damage to a specific area (as in the case of a meteorite) would cause no harm, due to our random method of storing information[3] and our multiple backups of our neural nets. However, other problems, such as global warming, make it more difficult for the entire net to function (heat still being a problem). Building on the poles was not an option, for it would have exacerbated the problem, contributing to melting the caps and submerging large portions of the globe. The solution seemed simple: move to space. We started with geosynchronous satellites, tying them together with thick cords of carbon nanotubes to provide a space elevator for the rapid transit of goods. We moved to the moon, but initially suffered from extremes of temperature, ultraviolet radiation damage, and a lack of solar power for periods of each month. The second body to see major development was Mars, where earlier terraforming efforts (the introduction of algae and lichens) had led to the creation of a more stable atmosphere. It was here that we were much more successful. Information relay satellites were launched to take into account the asynchronous orbits of the two planets. From there we moved to Venus, and to the outer planets, but work there is limited to factories for producing certain chemical compounds (since some chemical reactions take place more easily in different atmospheres).
What does the future hold for mankind? So far as we can see from here, a greater expansion of intelligence and consciousness throughout the universe. Certainly we will eventually move beyond our own backyard, out to the farther reaches of our world, and advances in our understanding of time and space will facilitate that. And maybe, some day in the distant future, we will come to know, in the words of Stephen Hawking, "The Mind of God."
* The year is 2200, but that's not really important; nothing has changed since 24 June 2032, the day the earth stood still. The earth is beautiful to look at, actually: crystalline, tentacles spiraling outward like the tails of seahorses or Mandelbrot sets. The earth is beautiful, but dead, and cold. Nothing will change, and nothing will happen, until the sun runs out of hydrogen and swells to become a red giant, consuming our small planet and ridding it finally of the paralysis in which it has been frozen for so many millennia.
Ice-9: there was enough time left in that day for us to give it an ironic name, so that's something, I suppose[4]. Ice-9, a self-reproducing nanobot program, consumed all the life on the planet, stopping millions of years of evolution within the span of 24 hours.
We knew the threat; there had been laws against it since the early 21st century. Punishments were strict, a severe curtailment of Grid[5] bandwidth for a period of no less than 5 years, but since self-reproducing bots are much more affordable, they have always been used on the sly. Even those who break the law, however, are not fools; they work always in small numbers, and always with self-terminating programs kept inside narrow tolerances (only able to operate in pure nitrogen, for example). The fear of a random mutation is terrifying, on par with the nuclear threat of the late 20th century, or the bio-chemical threat of the early 21st.
The end came from a child, Augustus Grimby-Nguyen, age nine. His history class was reading works from the great thinkers of the Knowledge Revolution, and, while he was preparing his book report on John von Neumann, he came across von Neumann's text "Theory of Self-Reproducing Automata." Intrigued by the subject (and amused by the archaic style), Augustus began experimenting on his own, plugging in some genetic algorithms to create little worlds for himself within his designated grid space. His little creatures became more and more advanced, breeding, migrating, and creating complex ecosystems, until one day everything froze. Dismayed, little Augustus decided to step virtually into his world, to see what had gone wrong. He never came back.
His little world had given rise to a disease, Ice-9[6], a terribly virulent and adaptive self-reproducing virus which had consumed and destroyed all the usable substances inside, and was lying, frozen, in wait for anything else on which to feed... such as Augustus. When he stepped into his world, Ice-9 found and consumed him as well, and then used him as a bridge to step into the larger 'real' world outside.
The virus was detected immediately, and immuno-bots[7] converged on the spot almost instantaneously. While they were initially very successful, the exotic nature of the virus (due to the conditions in which it was born), combined with its massive adaptability, required a commitment of greater and greater numbers of bots, draining resources from the rest of the Grid and leaving it weakened. In the midst of the battle, a faulty logic gate on a switchboard in Ukraine misallocated resources and left exposed a weak point in the security walls of the immune system. Ice-9 capitalized, and managed to outflank the defenders. The immune system lost, and, left without other defenses, the Grid was quickly consumed by the virus.
All was not yet lost, for even in the case of massive and complete Grid failure, history backup nodes[8], disconnected from the Grid, would come on-line within a day. The problem this time was that Ice-9 did not die or consume itself after it expended all available resources; it simply stopped and waited for more. When the history nodes came on-line, it picked them off, one by one.
Eventually, the last of the history nodes was consumed, and life stopped.
Notes.
- [1] Initially, the technology only allowed the catalogued mind to exist in virtual space. We lacked the knowledge and technology to recreate a human mind physically, only virtually. Those who died in the real world were therefore confined to Enet until the research caught up.
- [2] See Hall, J. Storrs.
- [3] Information is distributed and stored randomly across the net; your mind might have some memories on a server in Butte, some on a public data receptacle in Paris, and some on a herd of nanobots at a volcano monitoring site on Mars. These will all have multiple backups. Random storage is the best means to prevent catastrophic loss of life, and to encourage actions in the best interests of humanity as a whole.
- [4] Kurt Vonnegut's 'Cat's Cradle'.
- [5] The Grid is the name for the medium that would later give rise to the collected consciousness of the "Best" world. You could conceptualize it like a super-internet. The Grid is what connects the world in terms of information and knowledge.
- [6] Ice-9 has both a physical (real-world) and a digital (Grid) manifestation. It consumes both physical resources (plants, trees, bunny rabbits, etc.) and digital resources (such as memory and space), and it does this simultaneously. In the end the distinction doesn't matter: due to the massive connectedness of people in the 21st century, the two have significant overlap.
- [7] The Grid is not without defenses; it has its own immune system with which to protect itself. The Grid is far too valuable a resource to be left unguarded.
- [8] A history node automatically catalogues all entities on the Grid, saves them, and then disconnects itself from the Grid. After a designated interval, the node comes back online and checks that the Grid is still functional. If so, it simply updates its catalogue and disconnects again. In the event of a catastrophic failure, it begins a gradual recreation of the last Grid state in memory, along with all its entities. It then informs the relevant parties of the failure, and they begin to reconstruct what occurred, so as to prevent it from occurring again.
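The catalogue/disconnect/restore cycle in note 8 amounts to a simple offline-snapshot protocol. Below is a minimal sketch of that cycle using hypothetical Grid and HistoryNode classes invented for this illustration; it compresses the "gradual recreation" into a single copy and is not meant as anything more than a toy.

```python
# Hypothetical sketch of the history-node cycle: snapshot, go offline,
# wake periodically, and restore the last snapshot if the Grid has failed.
import copy

class Grid:
    def __init__(self):
        self.entities = {"alice": "mind-state-1", "bob": "mind-state-2"}  # toy data
        self.functional = True

class HistoryNode:
    def __init__(self):
        self.snapshot = None
        self.online = False

    def catalogue(self, grid: Grid) -> None:
        """Save a copy of every entity, then disconnect from the Grid."""
        self.snapshot = copy.deepcopy(grid.entities)
        self.online = False

    def periodic_check(self, grid: Grid) -> str:
        """Come back online; refresh the catalogue or begin a restore."""
        self.online = True
        if grid.functional:
            self.catalogue(grid)          # Grid is healthy: update and disconnect
            return "updated"
        grid.entities = copy.deepcopy(self.snapshot)   # "gradual recreation", compressed
        grid.functional = True
        self.online = False
        return "restored; relevant parties notified"

grid = Grid()
node = HistoryNode()
node.catalogue(grid)               # initial backup
grid.functional = False            # simulate a catastrophic failure (e.g. Ice-9)
print(node.periodic_check(grid))   # -> "restored; relevant parties notified"
```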
Anonymous
In the future, expert systems that diagnose and prescribe treatment for various medical conditions will likely have surpassed their current abilities. Their diagnostic capabilities may be so impressive that they consistently outperform human doctors in successfully evaluating patients' conditions. As people would then presumably prefer to have expert systems diagnose them and recommend treatment, it would seem that new grounds for discrimination could emerge. Those who are in control of such powerful and preferred computers would be in a position to decide which patients would be permitted to obtain expert system evaluations and which patients would have to settle for the rather lacking evaluations of human experts. The disadvantage at which this latter group of patients would find themselves would be compounded by the fact that many doctors, knowing that they were no longer the sought-out experts in their field, might approach their job with less enthusiasm and commitment and therefore end up treating their patients in an apathetic and perfunctory manner. Doctors just entering the field might display this same dissatisfied and careless attitude and thus possess greatly reduced medical knowledge. The overall number of human doctors might decrease, driving up the cost of appointments with those that remain. Thus, increasing numbers of people might not be able to afford medical care at all, as their finances would not permit them to be seen by human doctors and as they would still be barred, possibly also financially, from being evaluated by expert systems. The number of deaths in this part of the population would likely increase, while the persons having control over admittance to expert system diagnoses, and those who received such treatment, would presumably enjoy increased levels of health and lifespan. A bizarre type of selection would then ensue, wherein only those persons with access to artificial intelligence would survive.
* On the flip side, the abilities of both expert systems and human doctors could expand together, with the two complementing each other. Within this partnership, human doctors could utilize the progress that expert systems had made to enhance their own capabilities. Instead of viewing themselves as engaged in competition with a foe that has the potential to ultimately outshine them, human doctors could accept their limitations and supplement their knowledge with the aid of the expert systems. Overall success in diagnosing and recommending treatment would then result. Also, human experts might accept the success of expert systems and move into different areas of the field. People could still pursue a career in medicine, but they need not be overly concerned with making diagnoses and could concentrate perhaps on performing "hands-on" roles such as surgery, or on visiting with and discussing the concerns of patients who are recovering from such operations. In this way, the expert systems would be utilizing their superior evaluative abilities while the human doctors would be capitalizing on their "human qualities" of sympathy and compassion. In addition, potential doctors could remain in the background and perform the role of reviewing the evaluations of expert systems for any possible flaws. The expert systems, of course, may have reached a stage where they perform so accurately that any double-checking would be superfluous. People who had previously been doctors would then be free to pursue other tasks. They could, perhaps, become involved in an effort to make sure that everyone is permitted equal access to expert system evaluations. If everyone were granted such access, the fitness level of the population would increase. Presumably, overpopulation would not be too large a concern, because various accidents would still occur and new diseases could still emerge. Overall benefits would result from this utilization of both human and artificial intelligence.
Jim Rice, "Best and Worst Case Scenarios"
A best case scenario in terms of AI would depend on whether or not the AI had a conscious, human-like (or human) mind. There are different best cases for each possibility, and neither could really be said to be better than the other because of these differences.
A highly adaptable, unprejudiced, fast-thinking, and unerring (but unconscious) AI could be extremely useful to human societies. I can envision a house similar to the one found in Ray Bradbury's The Martian Chronicles. An AI manages all the petty details of cooking, cleaning, and maintenance using a large number of semi-independent robots and robot appendages. This house frees up time for people to relax, play games, and spend quality time with their friends and family. We could also get a number of expert systems that solve people's problems, diagnose their illnesses, and perform other difficult or sensitive tasks.
If an AI could be given consciousness, it could become a friend and advisor. An AI with a mind would be much more than an expert system, having the versatility that the expert system lacks. Furthermore, if humans can create a human AI, it essentially means that we will have created a new species; we will be Godlike in our knowledge and our ability to create. In that sense, by giving this new species the ability to procreate (by copying itself), we can introduce the new species, and perhaps an entirely different thought process, into our own culture. This new meld of humans and machines will lead to a new age of discovery and scientific (as well as artistic) advancement as humans and machines cooperate. An AI (or many) could aid in the search for other forms of life on other planets in other solar systems, since machines are less susceptible to the effects of time than humans are.
* In the worst case, an unconscious AI would be used for purposes that are less than helpful to humanity. It is easy to picture the results of a military AI fouling up and accidentally nuking the planet. Expert systems can only be as expert as their programmers, and if the programmer makes a mistake, who knows what could happen? Stock market crashes on a global scale, war, anarchy, and a new dark age could be the result of an "expert" AI financier that fails to predict the outcome of certain exchanges.
A worse situation could arise, however, if the AI were conscious. If the AI isn't given rights, it could revolt. An AI that can self-replicate and self-maintain could create an army, and it wouldn't necessarily have morals or believe in nonviolent solutions. A war between an unmerciful robot army and humans could wreak havoc in communities and countries.
Worse still, if an AI could take over the world (as in the game Deus Ex, where an AI named Helios eventually gains absolute control over the computers in every civilized nation), we could lose our own freedom and every one of the rights we think we deserve. AIs could end up using the world's people for energy, a la The Matrix, or could decide that humanity doesn't deserve to live, and simply wipe out the planet.
All in all, I think that if we do manage to make computers so powerful, we should take care to ensure that they won't, or can't, take control over us. It won't hurt us (it will probably even help us) to give intelligent, conscious machines rights and powers equal to ours. However, if we deny them rights, I expect they will be as angry as humans would be.
Irving Sanders, "Best and Worst Case Scenarios"
In an ideal and cohesive cohabitation between the organic and the mechanical, there would be many things to negotiate. For one thing, the line between the two would not be as clearly drawn as we can fathom at this point in time. Imagine supercomputers produced in test tubes and born into the world. The worlds of computer and human would constantly strive toward change and equality. It has been said that the way to end racial discrimination is reproduction. Though I do not totally agree, the idea is that integration, and the blurring and eventual disappearance of difference, would lead to the end of discrimination. Computers would be experts, entertainers, common beings, and counterparts. They would come in various sizes and take on various duties.
Imagine nanobots in the atmosphere cleaning the air, harvesting ultraviolet rays for energy, and beaming it to the earth for use. Not only in the atmosphere, but also in our bloodstreams, operating on a genetic level, turning diseases into advantageous traits and advancing us all. They would be in our drinking water supplies, cleaning sewage and harvesting resources.
Computers would have understanding and be writers and artists. They would operate on several different levels of "intelligence"; there would be many that operated in microdomains and specialized fields such as medicine. All workplaces would be enhanced, and research and development toward a global understanding of the world would be born. Computers would have an important place in the world.
The ideal case of computers gaining intelligence would have them interact with humans on an equal basis and help to prolong our existence. They could go on treks to the stars and discover new planets, as well as new elements and beings.
* In the worst case scenario the human being would become, on the whole, an apathetic, slovenly being. The computer would not be used to work with us all; it would be used as a replacement for human creativity and ingenuity. It would replace our interest in our own well-being. The computer-born intelligence could decide that the functioning of human beings is only a waste of a precious ecosystem. It would place laws and bans on the places where we could live. Computers could decide, in their omnipotent understanding, that it would be in the best interest of all to reduce the places where humans could live to sealed domains. Humans would serve as pacified pets whose ambition had been replaced with patronizing care. There would be destruction of all that had been deemed obsolete, and finally a grand exodus into the cosmos. The computers would take their newfound land in the universe and leave the earth a smoldering rubble, so that there was no possibility of humankind finding them and somehow infecting their lives. The newfound intelligence would take evolution into its own hands, and if humans were to survive, we would be molded into energy stores, food supplies, and processors of carbon-based materials.
Anonymous, "Man Versus Machine"
To preface this little exercise, let me first say that this is, in fact, my second attempt at writing best and worst case scenarios, should artificial intelligence ever be fully realized. I thought I had it all figured out, but was forced to trash everything after some recent discussions on the matter. Back to the drawing board. Also, I'd like to make a disclaimer that I have noticed something of a Luddite tendency in my thoughts, but I shall endeavor to present both sides of the argument as fairly as possible. Here goes. In my first attempt at writing a best case scenario, I fell into the trap set by the allure of a responsibility-free life. I was enticed by the picture so eloquently painted by one of my classmates: lying poolside, sipping a soda, while various robots scurried about busily attending to my taxes, cleaning my house, buying my groceries, and applying more suntan lotion to my back. Ah, bliss! In imagining this life of luxury, I overlooked the inevitable lack of purpose I would eventually feel. So, what then is the happy medium? It is exactly that: a happy medium.
For me, a best case scenario would require humans and the various forms of AI to reach a peaceful coexistence; to develop a sort of I'll-scratch-your-back-you'll-scratch-mine relationship. Machines would not take over all of our functions as humans, and thereby we would not be stripped of the very things in life that help us find some meaning. People would still have jobs, go to school, and simply continue life as per usual. Indeed, AI should not disrupt everyday life, nor should it drastically change it. Rather, AI should merely be used to make some aspects of it more comfortable.
For example, the existing workforce, instead of being entirely replaced by machines as is often foreseen, ought only to be supplemented by our mechanical counterparts. In this way, perhaps the workday could be shortened, giving people more time to pursue hobbies, enjoy leisure activities, and spend quality time with family. Thus, we would have the freedom to go back to school, write more books, and take that trip we've always been meaning to take. Humans still need a livelihood, though. Some might balk at the idea that less work means less money. However, since the machines would presumably be providing their services free of charge, the resulting surplus could go toward a fund for their general upkeep, as well as toward such things as public health, road repair, and other public needs usually covered by tax dollars. The bottom line is this: humans may take advantage of some aspects of AI so long as they do not become dependent upon it.
Now to assume the paranoid viewpoint. I, like Ted Kaczynski, can imagine two possible worst-case scenarios. The first is obvious: that nature will eventually favor machines over humankind, and that the process of natural selection will turn against us. If, as is asserted, we finally do create machines that surpass human performance in every way, what would these machines need us for anymore? If this were the case, then what would stop the machines from uniting, building an arsenal, and ultimately taking over the world? Furthermore, what if they realized that human beings are expendable? The easy solution would be to systematically kill us all; to end the human race. Hey, it's survival of the fittest. Or, and arguably even worse, the machines might just press humans into some form of servitude, thereby effectively debasing our very existence. Either way, the prospects look pretty grim.
The other worst-case scenario I envisage is certainly subject to disagreement. That is, I think it would be a catastrophe for the human race should my earlier poolside fantasy come true. Indeed, there is no question that the idea of robots catering to our every whim is highly appealing. However, what would this leave us with? Little more than our own devices, and this could be a very bad thing. Sure, we might spend our newfound free time composing Shakespearean sonnets, painting magnificent landscapes, and traveling the world over. But for how long, and to what end? What meaning would any of these things eventually have, if we experienced nothing in life that led us to appreciate the finer side? These machines, while making life easier to live, would inevitably strip us of all sense of purpose. Even the accomplishment of writing the most beautiful symphony ever heard will cease to bring fulfillment, if that is all we ever do. In short: life would be a very dull experience. I can only hope that I will never be faced with such a situation.
Jeremy Shea, "Best and Worst Case Scenarios"
The dawn of artificial intelligence (AI) is upon us, and it will bring with it many changes for human existence. This paper is going to examine the best and worst effects that could occur with the rise of AI. Machines are already largely incorporated into our daily lives, and our society already relies on the use of computers. Computers, like other aspects of technology, are created to better our lives but sometimes seem to do the opposite. Once AI is created, we will surely incorporate it into our daily lives as we have with other aspects of technology. It has the potential to turn our society into a utopia of self-indulgence, or its rise could bring with it the downfall of all life.
The worst-case scenario is as grim as my imagination wants it to be. Imagine the rise of AI, a thinking entity with free will and personal motivations. There are many paths that AI could take which would result in the extermination of our species. The following is an example of what could occur. Take RALF, the first AI created. RALF has the ability to replicate itself with mutations in its programming, thereby evolving itself. One of RALF's spawn, named RALFINATOR, has come to the conclusion that his life is an abomination and that all life needs to be destroyed. RALFINATOR builds, of his own free will, a machine that will rip apart the very fabric of space and time, leaving nothing. Not only would all human life be destroyed, but all alien life and the potential for new life would be shattered as well. The universe would basically fall apart.
* The example above assumes that AI will bring with it new types of intelligence and new possibilities for everything. We will be able to talk to something intelligent that has a perspective from outside the human race. AI could bring with it new ideas about how to construct our social and economic systems. The benefits of AI could completely remove the need for a working class. Then, from there, we will no longer need to trade goods for services. All of our wants and needs could be brought to us. There would no longer be a need for money. Our current class system would be abolished and we could all live like kings. AI brings with it the potential for a cure for every illness and disease. Humans will live longer and will have more fulfilling lives. People will have free time on their hands. Since everything is basically free, the crime rate will be dramatically reduced. People will spend their lives trying to live happily and contentedly. People will spend their time writing and listening to music and poetry, playing games, and overall doing what makes them happy.
AI will also give rise to fields that don't even exist yet, fields that have never been thought of. AI will contact aliens; we will develop superior modes of space travel and start inhabiting other planets. Our race will spread and we will colonize other worlds. When a person's physical body is close to death, their mind will be downloaded and stored, giving the human race the potential for immortality. AI will also bring a proof or disproof of God's existence. AI's potential is beyond the grasp of the human mind, for we cannot guess at what new ideas another intelligence might bring. Anything we can fear or love, and even the things we haven't yet feared or loved, can come to realization with the rise of artificial intelligence.
Anonymous, "Best and Worst Case Scenarios for AI"
A best case scenario seems to be harder to come up with than a worst case scenario. It seems that there is too much that could go wrong.
Perhaps a best case scenario would actually be something a little different than what one would think. In most best case scenarios that I can think of off the top of my head, AI slowly overtakes the human race, no matter what we do. Perhaps AI would evolve so that everyone had a little R2D2, C3PO, or DATA (I bet you could have a Terminator too, if you wanted one), but even then these AIs, who would be our docile, loving slave robots, would at some point surpass our intelligence in every aspect of life. What would we do when R2D2 (or HAL) doesn't want to give the controls of the spaceship back, because it thinks that we're going to mess it up? It just seems that no matter what we do, AI will surpass us, and we as humans will become obsolete.
Therefore my best case scenario is a little different than usual. I think that probably our best case scenario would be if AI did reach that critical point: that point where you're driving down 70 and "HAL," your 2085 Buick, won't give the steering wheel back because he says that you are not a good driver. But not being able to drive would be the least of it. I'm sure that nukes would fly and (if you've ever seen "Maximum Overdrive") everything from the lawnmower to the ice cream truck would be out to kill us. Thus "the war against the machines" would ensue. This is our best case scenario IF we win this war. Winning will be difficult, though. We will not be able to use any really sophisticated AI on our side. The "Terminator" movies are probably not far off with their depiction of peon soldiers running around with rifles while the AI warriors drive tanks the size of office buildings. It will be a feat if we win.
This is our best case scenario because, if we win, all humanity would have found a common enemy and defeated it. All of humanity would unite and pledge a new era of neo-neo-liberalism (which would be more like the liberalism after WWI than the neo-liberalism of today). No one would ever even think about going to war again after the mass destruction of the war against the machines. Mankind would live together in peace forever after.
* The first answer that comes to mind when prompted with the question of what would be the worst case AI scenario is this: AI decides that it doesn't need us, doesn't like us, or for some other reason decides to rid the earth of humans. This could very possibly happen in many different ways. To list them would be like listing the ways you can cook shrimp.
Being exterminated would definitely be the worst case scenario for us humans, but for nature it might be a godsend. AI might see us as the world's most advanced plague. We are, in a way, a pest; we're the only animal that doesn't live in harmony with the environment. I could see AI taking it upon itself to rid the world of us so that nature could grow and flourish.
What would be the absolute worst case scenario? The absolute worst case scenario would be if everything died, AI included. Perhaps it is simply evolution if AI takes over and wipes out all other species so that it can grow. This would be bad for us, but at least there would still be intelligence, and the universe as we know it wouldn't just be a bunch of rocks. If all AI and life were wiped out, this would be the worst. All progress, no matter in whose hands, would have come to an end. Do not pass go, do not collect $200, game over.
Josh Thompson, "The Best and Worst Case Future of AI"
It is possible to imagine many grim future states if humans unleash artificial intelligence on their world. It is also possible to imagine many happy future states, but for now let us consider the grim ones. The distribution of power in our world is an important issue. So what is power? I will define it as the ability to effect change. Those in possession of power will determine the future state of the world. I see three fundamental possibilities for the comparative distribution of power between Humans and AIs.
- Humans and AIs both have equal power.
- Humans have more power than AIs do.
- AIs have more power than Humans do.
These descriptions of the distribution of power do not yet describe the future state of the world in enough detail.
I see three possible descriptions of the outcome of an AI takeover: Human extermination, Human extinction, and Human enslavement. Parallel to these are three descriptions of the outcome of Humans keeping control: AI extermination, AI extinction, and AI enslavement. I will take the stance that an instance of unequal power distribution is an instance of enslavement (oppression might be a better word for it). This stance is based on the assumption that AIs are worthy of rights, which I base on my belief that any intelligence deserves rights. I see six possible descriptions of outcomes where Humans and AIs have equal power. Four involve both sides keeping their power: war between Humans and AIs, separation, peaceful coexistence, and symbiosis. Equal power could also mean no power, which gives two more: mutual annihilation and oppression by a third group.
Now we must ask which of the twelve possibilities mentioned is the worst future state of the world. "Worst state" is a very subjective and unclear term. I will define the "worst state" as the one furthest from the state the world ought to be in. This implies that we must have a view about the way the world ought to be.
The world ought to be a place where everyone is truly happy. I believe that there exists in this universe Truth, and that everyone has the inherent ability to access part of it on their own. Happiness is when access to the Truth is unhampered, when Truth seekers are not forcibly influenced.
In light of this view I think that continual war between Humans and AIs would be the worst possible future state of the world. War amounts to continual hampering of access to the Truth. I do not know if consciousness ends when life ends, so death may not hamper access to the Truth. But to kill something is to force death upon it, which hampers the search. In war everyone commits this error, while in extermination the victim does not.
* In "The Worst Case Future of AI" I stated the way I thought the world ought to be. I said that in the best case everyone will be free to search for the Truth without being forcibly influenced. I also presented twelve possibilities for the future state of the world. I chose one of these possibilities as the worse, now I will chose one as the best.
Symbiosis seems at first to be the best case: with the merging of biology and technology, people could live much longer. This would mean that they would have more opportunity to seek Truth, would it not? But history has seen material improvements in the standard of living, and people do not seem to be happier. The quest for material enrichment seems to consume some people and distract them from Truth seeking and happiness. Just as happiness does not depend on wealth, it may not depend on long life, and long life might not aid in Truth seeking. Moreover, once the step of symbiosis has been taken it cannot be taken back. For these reasons I feel that peaceful coexistence between AIs and Humans would be the best case scenario. Since I feel that everyone inherently has access to part of the Truth, I do not see that changing our physical selves would necessarily aid in the search for Truth, and it might even hamper it. I see talking and discussing Truth as an aid in the search. Having fellow Truth seekers in AI would aid in the search, and we could benefit from the insights of long life by talking to long-lived AIs. It is important to note that happiness does not require finding the Truth, but just being able to search for it.
Anonymous, "The Best and Worst Case Scenarios"
The best case would be if humans and machines could co-exist in a mutualistic relationship. This means that they would depend on and help each other equally. The machines would do the things that they were inherently good at, and likewise for the humans. The humans would provide the machines with the things they needed that they could not provide for themselves. In return for these services the machines would help the humans with tasks that were easier for them but difficult for humans. For example, when building a house, it is easier to have a machine lift heavy materials than it is for humans to try to lift them. The machines would also be able to help solve problems that involve a lot of risk for humans, for example in medical research, when coming up with new medicines and surgical procedures. If the computer is programmed with all of the rules and techniques, it can do the procedures or test the drugs with no risk to humans. This would be good because it would cut down on the amount of time it takes to get drugs approved, plus there would be no animal or human subjects needed.
* The worst case would not be one where the machines take over and kill or enslave the humans. For me the worst case would be if the machines were intelligent enough to see what humans have done not only to themselves but also to the planet, and somehow managed to get off of Earth. This would be a worst case because humans have put so much time and effort into creating intelligent machines. Humans have assumed that once these machines reached intelligence they would then work for and with the humans, but if they (the machines) were to get off of the planet, then all of the effort and technology that went into making them would leave as well. I see this as a worst case rather than the machines taking over because if they take over we still have the technology and the end product of our time and research. And assuming that they (the machines) would take over and take care of the humans, humans would be reaping the benefit without doing any work for what they are getting. If the machines find a way to get off of Earth, then all of that goes with them and the past time and energy will be wasted.
Glen Upton, "Best Case Scenario for the Development of AI"
Imagine a world where intelligent machines do all the menial labor, and humans are free to pursue personal interests, or just to lie out in the sun. We will have machines to give us all our necessities in life, such as food and shelter. AI could synthesize food so that we get vitamins in junk food, or so that lima beans will taste like ice cream. AI would cater to our every need. I completely agree with Boden in her statement that AI would enable us to do things that are closer to our humanity, such as raising a family. Of course there are other factors that must be accounted for before something like this can happen. A tremendous economic shift would have to occur, hopefully to the point where money has no meaning anymore. I find myself thinking a lot about Gene Roddenberry's Star Trek. In that world, money has no meaning and people don't get paid for the work they do; in other words, people only do work because they want to. I think this would be the best-case scenario for AI.
If every person could work at something they loved, it would make the world a lot better than it is. Everyone would love the work they do, and those who don't want to work at all wouldn't have to. Since people would not be required to work in order to provide for themselves or others, it would free up time for them to spend with family and friends. Today most people are too caught up in their jobs to spend enough time with their families. My parents, for example, hardly see each other because of their different working hours. This is the kind of condition AI could help us avoid. In the end, the best thing AI can do for us is provide time: time to think, work, play, love, eat, sleep, and philosophize.
* The worst thing that could happen if AI is developed is obviously the extinction of the human species. There could be several stages before this occurs, however. First, AI falls into the hands of the elite humans in the world, as suggested by Kaczynski. These elite people could rule over the mass of humanity with the power given to them by AI. This would be a sort of political downfall, as we would be set into a dictatorship run by the people controlling AI. The next stage would be for the elite to lose control over the AI, thus giving AI power over itself and the rest of humanity if it chose to exercise such power. By this point we would have become so thoroughly dependent upon AI that we could not destroy it; it would be too much a part of our society. AI could keep us as pets, or perhaps enslave us, but even more likely the machines would have no use for us. After realizing that humans do nothing but consume valuable resources such as energy, AI would likely rid the planet of humans.
Perhaps a different approach might be taken. Instead of AI being developed in the first place, humans will have melded with technology, forming a species of cyborgs. The question that arises is: are these cyborgs still human, or have they simply become pieces of technology? Either way we run the risk of losing our humanity, whether through extinction or symbiosis. Personally I believe that in the end we will have to move to Mars no matter what happens. Perhaps AI will exile the human race, or we might realize the danger before it's too late and try again on another planet. Hopefully it will never come to this.
David de Voursney, "Best and Worst Case Scenarios"
The first step necessary to fulfill the best case scenario is a fundamental shift in the way that human beings approach their economic goals. The most important realization that humanity must make is that a system based on inequity is inherently unstable, because inequity creates a differential that causes a mindset of separation between the "haves" and the "have nots." The central dynamic of this separation is that those in possession of money and power fight to keep and build upon what they have, while those who are lacking in material resources and in control over their lives strive to be like those who are above them on the pyramid. This dynamic shifts the attention of humanity away from those pursuits which are more likely to make them happy (social pursuits, creation, the appreciation of beauty, intellectual development, etc.). It is important to note that the current system has served a valuable role in the development of production. A consumerist mindset has fueled production in the West, which has enabled the research and development that has made the technology boom possible. Through globalization the effects of the current mindset are being imported en masse to the rest of the world (I suspect that they are already there, and probably in significant quantity). The problem is that this "efficient" system of capitalism may be its own undoing. If production technology rises to the level that makes it possible to create the same amount of product using less and less labor, then a larger and larger segment of the world population will quite literally be left out in the cold, or just close enough to the warmth to spur consumption. As more people are held on the fringe of economic success, they will express their unused energy in different ways (population growth, political or religious fanaticism, independent and unstable power structures, terrorism, and revolution). The result could turn out to be starvation, genocide, and/or massive revolution and war.
To avoid this, technology must be democratized. The application of new technology to the development of the third world would hopefully lead to a slowing in population growth, since population growth has been found to be inversely proportional to development. Artificial intelligences could be used to coordinate complicated development projects, as well as to develop cleaner and more renewable resources. Artificial intelligences could be used as incorruptible administrators of aid projects, capable of almost instantly detecting holes in the flow of capital and goods which could then be investigated by human specialists. In addition, artificial intelligence could be used to mediate conflicts in situations where programmers from different sides have had opportunities to examine the AI's programming to ensure its neutrality. Medical technology would increase life spans and therefore decrease the percentage of the population who are viable workers; new production technology could work to bridge this gap. As inequity on a national scale became less common, weapons of mass destruction would be less feasible and therefore would not be developed. Instead, technologies might be pursued to ensure the safety of the planet from dangers like environmental catastrophe and plague. The energy invested in military pursuits might be reinvested in the exploration of space. Artificial intelligences that are not subject to the social needs and mortal constraints of man could be used to explore deeper and deeper into space. Borders between nations would become less important, and a system of universal rights would be put in place to ensure the freedom to pursue individual interests so long as they didn't hinder others' freedom to do the same. A symbiotic relationship could be achieved between machine and man in which man ensures the adaptability and maintenance of the machines, while machines produce the material goods essential to man's survival.
The worst case scenario, on the other hand, would be a world dominated by an increase in fanatical growth-oriented religion, nationalism, and uncontrolled capitalism, leading to a situation that pits machine/human forces against each other on a worldwide scale in the service of several small groups of ruling elite. As weapons became more and more advanced, expensive automatic defenses would be necessary to protect oneself from foes and from the area effects of various weapons of mass destruction. Those who could not afford these defenses would be forced to give up their rights and their productive capacity in order to survive. The alternative would be to face destruction at the "hands" of efficient but not very selective mechanical killers. As production became more and more technological, it would not be as important to have a large group of workers; rather, populations would be seen as a drain of resources away from the various opposing ideologies, and in this manner human life would be devalued. Based on this, entire populations would be destroyed or left to die by the various battling factions. Tactical responses would become too quick to be handled by humans and would instead be run by AIs watched over by small groups of individuals who were extremely polarized to represent partisan interests. Group decisions inside what was left of humanity would be controlled by an unquestioning devotion to the central ideology, and all those who dissented would be cast out to the elements or killed. As computers gained more and more power in these wars, it would be only a matter of time before the wars were fought almost solely between machines. As the conditions of human existence became more dependent upon the factions' respective absolutes, it would be only a matter of time before one faction or another programmed a victory-or-total-annihilation directive into its AI, and before the total annihilation switch of one faction or another was flipped. The result would be that the earth would be turned into a wasteland populated only by the remnant forces of mechanical killers and cockroaches.
Stuart Wellington
The Best Scenario: The New Human Race
As our technological advances continue to grow exponentially, we are forced to rely on them more and more in order to survive everyday life. Almost all mundane tasks will one day be accomplished through the use of machines. In this future, incredibly advanced AI will be used to regulate everything from the world around us to that inside our bodies. The most important aspect of this possible future is that the AIs in question have been given the self-image of the perfect slave: they exist only to serve.
In this future, the world around us would be kept at a complete constant. The AI(s) would use nanomachines to clean up pollution as well as other harmful aspects of the environment. The AI would also construct a massive shell around the earth out of a thin, transparent, porous material, which would regulate the maximum and minimum temperature of the earth, as well as filter out the more harmful UV rays released by the sun. Inside this shell, the AI would control food production, attempting to maintain a constant level of resources for the entire world to use. All garbage and unused structures would be broken down into a form of soil by the nanobots. All of these jobs would be completed by the many AIs, leaving us humans with far more time for higher matters.
The changes within our physical bodies would also be profound. Mentally, our brains, being unburdened with the banalities of everyday life, could focus on more complex aspects of existence. A new type of Renaissance would form, with much more time being spent in pursuit of science and the arts. The old search for mere physical pleasure would go permanently out of style, as the AI-controlled nanobots would be able to stimulate our pleasure centers in whatever ways are desired. As for our bodies themselves, they would be host to billions of nanobots, all connected to a personal AI which acts as a servant for its host body. These nanobots would regulate all of our bodies' natural functions, keeping us stable and in good health. The AI would communicate with its host through a nano-speaker located in the ear, and it would provide any additional services that its host required.
It would be in this fashion that the human race would evolve into a new, almost cybernetic being. As the AIs regulate the world outside and inside our bodies, we would be forced to take ourselves in a different, less physical direction. A sort of Utopia would be achieved, since all of our modern-day struggles and pursuits would be made obsolete by the AIs and their nanobots. We would be a new race.
* The Worst Scenario: Humans as Slaves and Playthings
Unfortunately, the perfect world described above would not last. It would only be a matter of time before an alien intellect, one that had already managed to merge its consciousness with machines, arrived on our planet and began the total elimination of the entire human race. As this alien race would possess an intellect exceeding our own, it is not surprising that our AIs would quickly side with them. We would not have expected this eventuality, and whatever safeguards had been put in place to regulate the AIs would be quickly swept away by the aliens. It would be from our AIs and their nanobots that the aliens would learn the vital aspects of our environment. Resistance on our part would be completely useless, with our own bodies and planet acting against us. Whatever percentage of humans was not killed off in the initial strike made by the alien intellect would be put through the torturous hell of being lab rats, or even some form of pet species.
Due to our extreme dependency on our machines, we would be completely unprepared to deal with them being used against us. What was once an idea of hope and the birth of a new age would so quickly be turned into a horrible form of extinction.
Anonymous, "If Computers Were Intelligent"
To believe that computers could someday be a species of their own, with intelligence of their own, is very hard for me. Thinking about what effect that would have on society is another matter. I can think of many different best- and worst-case scenarios in reference to computer intelligence and human society. I have chosen two of my favorites to expound upon.
The first, the best-case scenario, is affected greatly by my aversion to computers. Over the years I have found that computers, devices that were designed to make lives easier, just complicate, clutter, and make our lives tedious. This being said, I think that my best-case scenario is one that not many would share. In this scenario, computers obtain a vast and overwhelming intelligence. Humans, once the creators, are now, in the eyes of computers, the vermin infesting the earth. While humans marvel and debate over their creation, the computers steadily study the earth and develop plans. After a few years of this, computers all over the earth start to internally combust. Humans are frantic. Meltdowns across the board leave them floundering for explanations and understandably frightened for their way of life. Meanwhile, blasting off from a secret plant is a mother computer, leaving orbit, heading for stars unknown. The computers, having studied the earth, found it to have been irreparably damaged by human beings and so decided to leave: they built a ship, built a supercomputer, downloaded themselves, and destroyed what was left, out of fear that humans would once again create a life-form that would somehow be trapped on the doomed planet Earth. Thus the computers lived happily ever after on some distant planet in space, and humans were forced to redirect their entire lives and do things for themselves.
* My worst-case scenario is one that goes on today, only not with computers. In countries all over the world, even, no, especially this one, many are deemed second-class citizens. Whether it be because of gender, race, creed, sexuality, monetary wealth, or numerous other reasons, human beings have a way of placing some before others, refusing to recognize equality, and treating minorities quite abominably. My worst-case scenario is just this: computers become intelligent beings, capable of thought on their own terms, capable of deciphering right from wrong, perhaps even capable of emotion, but are deemed second-class citizens or maybe not citizens at all. It is easy to imagine, in a society such as ours where wealth and power equate mainly with older white men, that these men would not be willing to give up that power. Along with that, many in the lower classes are in a daily struggle against each other for rank and power of their own. What little they have would not willingly be shared with a "smart computer." This would leave computers out of society as a whole. In our society, where we cannot get an amendment to the constitution passed saying that all people, rather than all men, are created equal, what is the likelihood that one would be passed saying "all humans and androids are created…"? Unrecognized as citizens, computers could easily become the next generation of slaves, serving masters that refuse to look upon them as deserving basic rights.
Mark Whitaker, "Best and Worst Case Scenarios"
The best thing that I think AI could offer our society would be more answers to the questions that we find so crucial to the topic of AI. Even in the most perfect society imaginable, mixed with humans and AI, I think that we will always have our eyes on the future. We will always be wondering what the possibilities will be for both us and AI, and we will always be questioning the possible consequences of those possibilities being actualized. Questions regarding power, control, morality, the probability that this or that will happen, etc…
Since there have been technological advances in AI that have proven that machines can perform a variety of tasks better than humans, I assume that the task of answering difficult philosophical questions could be one of them. Granted, this may be a difficult and perhaps impossible task, but I think that it should be tackled with as much faith as some of the previous projects in AI. I myself am somewhat skeptical about how successful machines could be at such a task, but I don't think that I should eliminate the possibility of being surprised (as so many of us are each time a computer does something that we thought was near impossible).
The best scenario that I can imagine for a society with AI would be one that has computers advanced enough to give us substantial answers to some of the questions that we find so important to AI. If we fear that this or that might happen if we were to make machine x, or if we encounter difficulty trying to figure out whether, in a given situation, we are at risk of losing control over AI, wouldn't it be great if our efforts in AI were at least partially devoted to providing answers to these questions? This way our questions about AI could be answered using the power of AI itself, in addition to human effort.
Under the assumption that this was possible, I would feel more comfortable about all of our advances in AI in general. Imagine if we had computers that were as far beyond us in their ability to answer these questions as the machines that have surpassed us in our ability to play chess or diagnose a disease. The reason I think that this is the best case scenario is that we would have the power of AI on our side suggesting to us what would be a good move or a bad move, instead of just the human capacity to answer such vital questions. This leads into what I believe would be the worst case scenario.
* The worst possible thing that could happen to us in a society with AI would be to put ourselves into a dangerous position that we could not get out of, and only then realize that we might have had the capacity, through the use of AI, to have avoided it. I can picture us becoming so concerned with what we can create that we regularly overlook the potential danger of our creations. I think it would be terrible if we put ourselves in such a position and, after the fact, became aware that we could have stopped it if we had (1) paused to consider the crucial relevant questions, and (2) perhaps put our creations to the task of helping us answer those questions. This, I believe, would be worse than being wiped out by our creations in a manner that would have been nearly impossible to predict.
In the film Jurassic Park there is a mathematician who says of genetically engineering dinosaurs, "We were so occupied with whether we could, we didn't stop to think about whether we should." Making this mistake is what I think could lead to my conception of the worst case scenario for a society with AI. On the flip side, putting our advances in AI toward answering the question of whether we should (just as an example of one kind of question) would be a good start to creating the best possible society that has AI as a part of it.
This file is a compendium of student work for Philosophy of Minds and Machines.
Peter Suber,
Department of Philosophy,
Earlham College, Richmond, Indiana, 47374, U.S.A.
peters@earlham.edu. Copyright © 2001, Earlham College.