Our reviews generally focus on products that help enable or inform a secure lifestyle rather than more existential threats, but in the case of Life 3.0: Being Human in the Age of Artificial Intelligence, we made an exception. The potential implications of artificial intelligence (AI) for our future are simply too big to ignore – including many facets of security – and this book leads the reader through them beautifully.
Following a slightly disturbing scene-setting prologue imagining an ultra-intelligent AI growing out of control, the author (Max Tegmark) gets straight down to business by defining what he means by Life 3.0.
In short, Tegmark classifies life into three broad categories based on its capabilities.
Life 1.0 is biological life that can only update its ‘hardware’ (physical form) and ‘software’ (behaviour) through evolution. Bacteria adapting over prolonged periods to become more resilient to drugs might be one example.
Life 2.0 represents the ability of life to progress beyond DNA-endowed information and to perform what the author compares to software upgrades through learning; for example, humans developing speech and language skills from birth. Yet Life 2.0 can’t readily modify its physical form without extended evolution: humans can’t re-engineer their bodies at will to survive the harsh environments of, say, other planets.
Tegmark asks us to ponder the emergence of Life 3.0: an artificial general intelligence (AGI) able to perform any task better than a human, with the ability to upgrade both its software and hardware to achieve almost any goal. As Tegmark puts it, Life 3.0 would effectively become the ‘master of its own destiny, finally fully free from its evolutionary shackles.’
But what could this path mean for us, left bringing up the rear in Life 2.0 territory?
Following a chapter providing a crash course in memory, computation, learning and neural networks, the book dives straight into such questions, starting with our near future. This third chapter covers a lot of ground, including a wide range of AI-driven benefits and risks that we are already on the cusp of seeing, but which stop short of the magnitude of threat an AGI might one day pose.
Some topics encountered are already familiar (a shrinking jobs market, autonomous weapons etc.), but Tegmark is also careful to balance the discussion with benefits AI can introduce – ranging from superior investing, to safer self-driving cars, to faster and fairer legal systems enabled by ‘robojudges’.
The book also makes some interesting points on how AI could become involved in both sides of the cat-and-mouse struggle for cyber security. Spear-phishing emails are a long-standing threat, but how much more chillingly effective might a malicious email purporting to be from a colleague be if it were followed up by a call simulating that individual’s voice? There have already been instances of ‘deep fake’ calls being used to extort money from unsuspecting victims, and an AI could build upon and integrate such attack vectors.
Approaches to offsetting the wide range of risks discussed are also covered, though they are understandably abstract in places given the sheer range of AI applications. Strong governance and robust human-in-the-loop control mechanisms in the design of AI systems are key points, as is the need for effective human-machine communication to enable such control.
The small question of what we’re going to do with ourselves when AI platforms start outperforming humans in various employment sectors is also considered in this chapter. At this point, the best advice for competing in a low-(human-)employment society seems to be pursuing roles that require an element of emotional intelligence and involve few predictable, repetitive tasks.
The fourth chapter ventures well beyond the implications of narrow (goal-specific) AI and considers what could happen if and when an AGI matching human intellect in every respect emerges and evolves itself into a superintelligence.
Tegmark is careful to nip in the bud the clichéd vision of red-eyed Terminators stomping around and taking over the world (if anything, these might be more akin to the comparatively limited autonomous weapons systems discussed in the previous chapter). In truth, we have no idea what global domination by an artificial superintelligence would look like, but it would likely take a far more complex form – perhaps one we wouldn’t see coming.
Quite a range of ‘intelligence explosion’ scenarios are explored in this chapter, but they all involve two chilling underlying themes: (1) a group of humans harnessing an AGI to take over the world in fairly short order, and/or (2) the AGI realising it is being controlled by an intellectually inferior species and opting to do something about it.
The later chapters of the book go further down the rabbit hole – following the development of Life 3.0 to some logical conclusions and exploring the possible roles a superintelligence could take (e.g. ‘enslaved god’, ‘gatekeeper’). Once again, some possibilities for the human species are considered, including being ‘cyborgised’ or uploaded by an AI, with the chance to be emulated forever. The ‘AI Aftermath’ scenarios explored – some good, some terrifying – are numerous, to the point that the author summarises them in a table.
Ultimately, the book leads a far more informed reader full circle back to the types of questions Tegmark encourages from the outset: which of the potential scenarios explored are acceptable? How do we contain or control an AI? Will control even be possible following the emergence of a general intelligence? Or do we want to restrict AI to narrow applications? And so on.
Overall, if you want to know what AI is, what it isn’t and how it might impact future life on Earth (and potentially beyond), Life 3.0 is a fascinating read.
Cover image reproduced from Life 3.0 by Max Tegmark, published by Penguin Books Ltd. With permission from Penguin Books Ltd.