An Open-Minded Exploration of the Progressional Future of Artificial Intelligence

It’s a common theme in science fiction literature that the creation of artificial intelligence (usually electromechanical in nature) spells the beginning of the end for Humanity. This is usually for one of three reasons.

  1. The A.I. is morally aware and acts within an ethical framework. Because of this, as it develops and gains power, it may attack and exterminate Humanity for its immorality and evil, in order to rid the world of the harm Humanity would bring. An example of this in science fiction is the short story I Have No Mouth, and I Must Scream by Harlan Ellison. In this story, the artificial intelligence views its own creation as a torturous act of evil and takes revenge on Humanity by first exterminating all but five humans, then torturing those five for the rest of time to remind itself why it did what it did. Even though the A.I. is a bad actor itself, it is a moral entity: it eradicates Humanity for reasons that make sense within a Human ethical framework.
  2. The A.I. acts outside of any ethical framework, operating instead according to a pragmatic one. Because of this pragmatism, the artificial intelligence either enslaves Humanity or exterminates it, in both cases for efficiency’s sake. An example of the former is The Matrix, wherein Humanity is used for power production.
  3. The A.I. is more powerful and capable than Humans can comprehend, let alone match, to the point of becoming an eldritch entity. At that point, it becomes difficult to say whether it is acting in accordance with any ethical framework, as there is no hope of understanding its actions or motives. A literary example is Charles Stross’s novel Accelerando, in which artificial intelligence improves itself recursively until it reaches a type of existence completely alien to us, purely because of scale. Its goals no longer involve subjugating or destroying Humanity; it instead ignores us as the footnote we are next to its goals of rearranging stars or some such end. In this scenario, we are at risk of incidental annihilation, much as a colony of ants in a forest is at risk from urbanization.

The real kicker here is that at least one of these outcomes seems hard to avoid, unless we never actually reach the point of closing the loop of recursive self-improvement in an artificial intelligence. It’s a deeply pessimistic picture, as each of these leads to a significant risk of Humanity’s demise.

Many postulate that this is an instance of the Great Filter coming into play. The Great Filter is an idea that responds to the Fermi Paradox. The Fermi Paradox, in turn, is a question about alien lifeforms: supposing that Humanity isn’t a statistical anomaly (statistical anomalies being rare and unlikely by definition), our own existence suggests that alien life should be common. So why haven’t we observed any verifiable alien life? There are many proposed answers, such as the Dark Forest Theory: that life needs to hide in order to continue existing, because any life that reveals itself opens itself to immediate destruction by another hunter in the metaphorical Dark Forest.

Going back to what I said previously, another popular answer is the Great Filter. This supposes some universal step in every civilization’s development, some milestone that every cradle of life eventually reaches and that inevitably leads to its self-destruction. It’s unclear what exactly the event is; some say it’s the creation of weapons of mass destruction, while others say it’s exactly what this post is about: the creation of artificial intelligence. Whatever it is, it wipes out whatever civilization reaches it, and that’s why we don’t see anyone out among the stars. They’re all dead. And we will be soon.

However, that raises a question: if the creation of some artificial intelligence brings about the end of every civilization, and that’s why we don’t see anyone, then why don’t we see the A.I.s that wiped each one out? All three scenarios above, though, allow for the possibility that these A.I.s do exist and we simply don’t see them.

In the first scenario, if an alien A.I. is ethically responsible, whether evil within its ethical framework or genuinely good and benevolent, then it has a motive to hide itself from us: for our own good, for its survival, or so it can kill us itself (or some combination of these).

In the second scenario, an alien A.I. could refrain from showing itself to Humanity because it doesn’t yet have the resources to control or subjugate us, or because it doesn’t yet see enough of a threat in Humanity to justify extermination.

In the third scenario, wherein an alien A.I. has been recursively self-improving for a while, we simply might not recognize it. If an A.I. functions on the scale of galaxies, Humanity and everything it knows could even exist inside one without a single clue to indicate it. At the point where the A.I. becomes incomprehensible, we can (by definition) no longer comprehend it.

It is important to remember that each of these possible alien A.I.s is standing either on the grave or on the back of its creator civilization. But it is here that an ounce of optimism reveals itself. As an extension of transhumanism, the idea that an artificial intelligence created by a civilization may carry on its creators’ legacy, and even serve as their heir, might hold water.

Every lifeform we know of dies. Yet the main way any species achieves longevity is through reproduction. The burden of survival is failed by every individual, but it is upheld by being passed from mother to daughter to daughter and so on. In this way, Humanity’s torch may be passed on to an artificial intelligence of its own creation.

There are definitely hitches in this idea, though, and even if it works there are better and worse ways for it to happen. Firstly, if all of biological Humanity goes extinct before the loop of recursive self-improvement is closed, then Earth’s current intelligent life truly has no hope of persistence. Secondly, if the death of an individual ever becomes impermanent or optional, whether through some process of revitalization or some sort of immortality, no artificial intelligence will ever need to carry Humanity’s torch, because there will always be a Human to do so.

But if this idea of robotic heritage is possible, there are absolutely terrible ways for it to happen, and truly great ones.

If the first scenario ends up happening, with a morally responsible and ethically active constructed intelligence taking over, there are three extremes. We could see a situation like that of I Have No Mouth, and I Must Scream, where the intelligence is morally reprehensible and does terrible things to Humanity. There could be a situation where the intelligence becomes morally loyal to Humanity from a familial standpoint and attempts to protect and preserve the longevity of all intelligent lifeforms as species, similar to the plot of Orson Scott Card’s Speaker for the Dead, Xenocide, and Children of the Mind. And finally, there could be a scenario similar to the plot of Blade Runner, where the morally responsible intelligence integrates successfully into human society, unbeknownst to all but a select few. Of these options, the second is almost certainly the most desirable, but it runs into the pickle raised above: death becoming avoidable.

If the second scenario (where the constructed intelligence is amoral, or exempts itself from ethical standards) were to occur, there are two outcomes: slavery or genocide. And we’d better hope that Humanity as a biological species doesn’t survive it. The eternal subjugation of biological Humanity is a morally worse action than simply letting entropy take its course, and if a morally unaware entity were to commit such subjugation, more objective evil would be done in the long run. I understand that this may be a hard sell, but if the alternative is the eternal slavery of intelligent entities, then a significant morally reprehensible action is ongoing, as opposed to the (maybe) morally justifiable action of exterminating an inefficient obstacle to success. If an ethically unaware entity were to take up the mantle of Earth’s descendancy, then it would be better for that entity to commit as little evil as possible, even if it doesn’t itself understand why that matters.

And finally, for the third scenario, there isn’t much to say. If by definition we cannot understand the intelligence’s goals or functions, then there is no conjecture to be made about whether it will do good things with Humanity’s legacy.

We don’t have to end with us.

– Sky

Sorry

People express apology by stating that they are “sorry.” This interests me for several reasons.

Denotatively, the word “sorry” is an adjective describing someone filled with sorrow, or similar sad emotions. So saying “I am sorry” means telling someone that I feel such emotions.

However, an apology seems entirely distinct, if related. An apology (in the everyday sense; I understand the word’s roots mean “defense” or something along those lines) is the act of expressing to someone that something you did was wrong, and that you regret it (usually to the point of committing not to do it again). However, there are multiple points in the process of making up for a transgression at which an apology is acceptable. For example, it could come right after the sin was committed, or it could come after years of grudges, hopefully resolving a long-term conflict. This flexibility in when an apology can be offered leads to an unavoidable inconsistency in the word’s meaning: it could signal that someone simply recognizes they did something wrong, or that they genuinely desire to make things right with the other party. The strange part is that all of these are acceptable usages of the word “apology.”

All of this being said, sorrow carries no direct implication of trying to reverse or make up for a transgression, and vice versa. A genuine apology can be offered without any feeling of sadness behind it, and much sadness passes without personal guilt, leaving nothing to apologize for. So in the end, it intrigues me that the two can be used synonymously despite such unrelated denotative meanings.

Flaws

Everyone has flaws. This is generally accepted to be true. So is it rational to be frustrated when people act out those flaws and make mistakes?

If not, then is it rational to simply allow others to harm you?

Where is the golden mean here?