How high are the stakes?

October 30, 2017 · General

Humanity is at the beginning of a technological revolution that is evolving at a much faster pace than earlier ones, and that is so far-reaching it is destined to generate transformations we can only begin to imagine. Emerging technologies will change what have seemed to be the fundamental constants of human nature: in fact, they already are. As a result, it now seems possible to drastically improve human memory, cognitive processes, and physical and intellectual capacities—even to the extent of extending our life expectancy to such a degree that it may actually change our concept of mortality. These technologies offer entirely new perspectives for the human species, including a move towards a post-human era in which people, with enormously augmented capacities, will coexist with artificial intelligences that surpass human intelligence and are able to reproduce autonomously, generating ever more intelligent offspring—a situation known as the singularity. Another possibility that is increasingly close at hand is the expansion of human, or post-human, life beyond our planet, as well as contact with other intelligences in different parts of the universe. The possibilities are enormous, but the questions they raise for humankind are equally significant.

Possibilities for all, or for some?

Many technologies currently deployed and being developed deserve more careful attention because they have the potential to (re)construct humans and society on an unprecedented scale and scope.

Vulnerability can be conceived as a precondition of being and becoming human – an ontological given – bound up with the fact that we are relational beings, exposed to one another. We are exposed, that is, by virtue of being finite, dependent, and limited; and that exposure and vulnerability are what constitute us as moral beings. Does (our) life need to undergo a final upgrade, to become master of its own destiny, finally fully free from its evolutionary shackles?

The fact is, if we’re being truly logical and expecting historical patterns to continue, we should conclude that much more will change in the coming decades than we intuitively expect. Logic also suggests that if the most advanced species on a planet keeps making larger and larger leaps forward at an ever-faster rate, at some point it will make a leap so great that it completely alters life as it knows it and its perception of what it means to be human—kind of like how evolution kept making great leaps toward intelligence until finally it made such a large leap to the human being that it completely altered what it meant for any creature to live on planet Earth.

Technology can be seen as a continuation of human evolution. As a consequence, a deep symbiosis between human and machine might emerge, up to and including the appearance of post-human entities. The distinction between human enhancement and technological innovation will fade, leading to a new paradigm of human–machine hybridization.

Finally, while there are many different types or forms of AI since AI is a broad concept, the critical categories we need to think about are based on an AI’s caliber. There are three major AI caliber categories:

AI Caliber 1) Artificial Narrow Intelligence (ANI): Sometimes referred to as Weak AI, Artificial Narrow Intelligence is AI that specializes in one area. There’s AI that can beat the world chess champion in chess, but that’s the only thing it does. Ask it to figure out a better way to store data on a hard drive, and it’ll look at you blankly.

AI Caliber 2) Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, or Human-Level AI, Artificial General Intelligence refers to a computer that is as smart as a human across the board: a machine that can perform any intellectual task that a human being can. Creating AGI is a much harder task than creating ANI, and we have yet to do it. Professor Linda Gottfredson describes intelligence as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience.” AGI would be able to do all of those things as easily as you can.

AI Caliber 3) Artificial Superintelligence (ASI): Oxford philosopher and leading AI thinker Nick Bostrom defines superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Artificial Superintelligence ranges from a computer that’s just a little smarter than a human to one that’s trillions of times smarter—across the board. ASI is the reason the topic of AI is such a spicy meatball and why the words “immortality” and “extinction” will both appear in these posts multiple times.

As of now, humans have conquered the lowest caliber of AI—ANI—in many ways, and it’s everywhere. The AI Revolution is the road from ANI, through AGI, to ASI—a road that will change many things in our lives.

In any case, the proliferation of intelligent artifacts, systems, and devices that are context-aware and self-adjusting creates a paradigm change. The priority seems to be developing intelligent technologies that improve health, comfort, and security, increasingly tailored to individual demands and market requirements. In this perspective, the premises and the main concepts of transhumanism can be easily identified: human nature itself becomes the subject of innovation and transformation. On one hand, this promotes a certain pragmatism concerning exponential technologies linked to solving pressing human problems. On the other hand, it maintains the transhumanist view of innovation by emphasizing human enhancement—something that becomes visible both in the idea of human enhancement itself and in artificial intelligence research.

After 13.8 billion years of cosmic evolution, development has accelerated dramatically here on Earth: Life 1.0 arrived about 4 billion years ago, Life 2.0 (we humans) arrived about a hundred millennia ago, and many AI researchers think that Life 3.0 may arrive during the coming century, perhaps even during our lifetime, spawned by progress in AI. What will happen, and what will this mean for us?

A change in different directions: a new consciousness, a new worldview. It will continue to be daunting. But it is doubtful that this will require a Brave New World of centralized global moral enhancement schemes; instead, managing our emerging biomedical enhancement abilities begins with the tedious real-world tasks of learning to live with human difference and meeting human needs.

On one hand, thinking about our species, it seems like we’ll have one and only one shot to get this right. The first Artificial Superintelligence we birth will also probably be the last—and given how buggy most 1.0 products are, that’s pretty terrifying. On the other hand, Nick Bostrom points out the big advantage in our corner: we get to make the first move here. It’s in our power to do this with enough caution and foresight that we give ourselves a strong chance of success. And how high are the stakes?