Artificial Intelligence: The Long-Term Human Factor
by Thomas Klaus


Why we shouldn’t be concerned about the ‘AI takeover’

The Threat

Artificial intelligence (AI) is in the headlines almost daily, nearly always in alarming form. Will AI eat my job? What will happen if and when it is used for military purposes? What do we make of the assertion of the late Stephen Hawking, later supported by Bill Gates, Elon Musk and others, that ‘the development of full artificial intelligence could spell the end of the human race’?1

When researchers are asked about the risks of an ‘AI takeover’, most answer that the technology is not currently powerful enough to compete with human intelligence, or that it still lacks important aspects of autonomous intelligence. Although this is undoubtedly true, I do not find it a comforting thought, as ongoing research and investment will certainly ensure that its capabilities continue to grow.

The Far Future

However, I want to focus on AI not in the present but in relation to the distant future.

In around 5.4 billion years, it is surmised, the Sun will expand into a red giant, in the process destroying the planets Mercury, Venus, very likely the Earth and possibly also Mars.2 We can easily imagine life on Earth as we know it today coming to an abrupt and very hot end.

So what, you may say. Well, while the end of Life As We Know It Today (LAWKIT) is not an immediate concern, the likelihood that humans will one day need to leave our home planet Earth is high – and long before the year 5 billion AD.

LAWKIT is physically bound to specific ranges of environmental conditions – a narrow temperature zone and specific levels of oxygen, water and atmospheric pressure. With that in mind, it is pertinent to know that 1 billion years from now, when the Sun’s luminosity has increased by 10%, temperatures will rise to 47 °C and ocean water will begin to evaporate.3

On a much shorter time horizon, we can already see how humans are undermining LAWKIT on this planet – indeed, some believe humanity itself is on the road to extinction. Others cling to the belief that our powers of innovation will come to the rescue by solving outstanding environmental problems and repairing the ecological damage already done. But even then, we will still face the challenge of relentless population growth coupled with limited planetary resources – such that we will almost certainly need to exploit resources on other planets and moons.

Altogether it is more than plausible that at some stage we will have to prepare ourselves to leave this planet, explore the space around us and find a new home elsewhere. After all, isn’t this why humans were born with the spirit of discovery?

Hexit

A Human Exit (Hexit) from planet Earth might seem a daunting prospect. Yet we should also recognise that it has already begun: Mars has been probed with orbiters, rovers and pathfinders, and preparations for human missions are now under way.4

This obviously raises many questions. The technological challenges of interplanetary travel lasting months or years, and of sustaining LAWKIT on a planet like Mars, are formidable. How can humans adapt to living conditions they were not made for? Alternatively, can we adapt other planets’ conditions to human needs (“terraforming”)? We may be able to create water and oxygen, but varying surface gravity is hard to handle. So is ionizing radiation: while Earth’s atmosphere and magnetic field shield us on our home planet, the journey to Mars and the Martian surface offer no such protection.

As for interstellar travel, sending humans to the nearest star system, Alpha Centauri (“only” 4.37 light years away), aboard a generation starship would require several generations of humans.5 The effects on human beings – especially on humans conceived and born in space – are completely unknown and most likely difficult to control.
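
To get a feel for why a generation starship implies “several generations”, here is a minimal back-of-the-envelope sketch in Python. The cruise speed (1% of light speed) and generation length (25 years) are illustrative assumptions of my own, not figures from the cited paper:

    # Back-of-the-envelope estimate of a generation-ship voyage to Alpha Centauri.
    # Assumed values (illustrative only): cruise speed of 1% of light speed,
    # 25 years per human generation.
    DISTANCE_LY = 4.37        # distance to Alpha Centauri in light years
    CRUISE_SPEED_C = 0.01     # assumed cruise speed as a fraction of light speed
    GENERATION_YEARS = 25     # assumed length of one human generation

    travel_time_years = DISTANCE_LY / CRUISE_SPEED_C   # light years / fraction of c = years
    generations = travel_time_years / GENERATION_YEARS

    print(f"Travel time: ~{travel_time_years:.0f} years (~{generations:.0f} generations)")
    # -> Travel time: ~437 years (~17 generations)

Even with these rather optimistic assumptions, the voyage spans many human lifetimes, which is exactly why the concept of a generation starship arises in the first place.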

The AI Way

Do we rely on the power of human ingenuity to smooth the path to Hexit? Another option I’d like to consider is an AI-based approach, in which we rely primarily on AI machines to explore other moons and planets, and later extra-solar objects. Of course, AI will be used in any case and could simply be combined with the Hexit approach.

The advantage of AI machines or robots is simple: they don’t need the specialized environmental conditions that humans do. Robots can survive without water, without atmospheric oxygen, and across a much wider range of atmospheric pressures and surface gravities.

So, let’s invest more in AI research!

But what about the threat of an AI takeover?

The Long-Term Human Factor

We can envisage several ways in which a SuperHuman AI (SHAI) could evolve and behave towards humans were we eventually to lose control. In an AI takeover, SHAI might consider humans to be

  • Irrelevant and ignore them
  • Useful and enslave or exploit them
  • A risk or threat and decide to kill them off.

But let’s assume that, irrespective of the nature of the takeover, SHAI will be able to sustain a long-term existence by overcoming all the threats that the universe might throw at humans.

In that case, we could say with confidence that, in all the variants of an AI takeover we can imagine, life as we know it today, and specifically human intelligence, would continue to play a fundamental role in any future developments that SHAI was involved in, even after humanity itself had become extinct. Why? Simply because humans created SHAI.

Consider: without the evolution of LAWKIT, SHAI wouldn’t exist. SHAI would be a genuine continuation of LAWKIT, including the human species.

We could even think of humans as a necessary evolutionary step from lower life forms to the creation of SHAI, giving a new kind of meaning to human life.

Perhaps SHAI, while spreading out into our solar system and beyond, will have to create another, even better-adapted life form, able to overcome further challenges that we cannot envisage today.

Although we cannot know whether SHAI or any subsequent species would remember humans fondly, they would not be able to deny their human origins. The human dimension, or the long-term human factor, would continue to exist. In other words, we have nothing to lose.

About the author: 

Chief Expert at SAP SE, Globalization Services

LinkedIn Profile: https://www.linkedin.com/in/thomasklaussap/

This article is one in a series related to the 10th Global Peter Drucker Forum, with the theme “management. the human dimension”, taking place on November 29 & 30, 2018 in Vienna, Austria. #GPDF18

This article first appeared in the Drucker Forum Series on LinkedIn Pulse.

2 Schröder, K. P.; Connon Smith, Robert (2008). “Distant Future of the Sun and Earth Revisited”. Monthly Notices of the Royal Astronomical Society. 386 (1): 155–163 (https://arxiv.org/abs/0801.4031)

3 See Note 6

5 Smith, C. M. “Estimation of a genetically viable population for multigenerational interstellar voyaging: Review and data for project Hyperion”. Acta Astronautica (https://doi.org/10.1016/j.actaastro.2013.12.013)

The author thanks Mr Simon Anstey for a helpful review of this article.
