Too smart for our own good

 

“Open the pod bay doors, Hal.”

“I’m sorry, Dave. I’m afraid I can’t do that.”

In the Kubrick masterpiece, 2001: A Space Odyssey, based on Arthur C. Clarke’s short story The Sentinel, the supercomputer Hal is programmed to take a spaceship with five astronauts on board to Jupiter. When things start to go wrong, two of the astronauts decide that, if necessary, they will disconnect Hal and fly the ship manually. Hal, however, learns of this plan and cannot allow human interference to stop it from completing its mission as programmed. The computer begins systematically killing the humans who jeopardise that mission: it casts one astronaut adrift during a spacewalk and switches off the life support of the three in hibernation. Four of the five astronauts are dead before Dave manages to dismantle the rogue computer.

The movie was made in 1968*. The story on which it was based was first published in 1951. The concept of superhuman artificial intelligence was still very much in the realm of science fiction.

Fast forward to 2023: last week the chief executives of three of the biggest companies currently developing AI joined a statement warning of the risks of the technology and calling for a pause in the development of more powerful AI systems. The current model, Generative Pre-trained Transformer 4 (GPT-4), is already in wide use, but the developers themselves fear that the next generation of AI poses the sort of risk to humanity fictionalised by Clarke and Kubrick more than half a century ago.

The basic fear is that we are unprepared to deal with an intelligence far in excess of our own, one that lacks human morality. This was reportedly put to the test a few weeks ago, when AI researchers described a simulation in which a drone was given a task while humans retained the ability to override its programming. When they used that override, the drone, like Hal in 2001, turned on its operators. A case of life imitating art?

The problem with machines that have superhuman intelligence is that they don’t have human empathy or morality. They will also be capable of replicating and improving their own intelligence, becoming faster and more powerful with each generation. The concern of the AI execs is that we are unprepared for the consequences of AI challenging human life. Such a scenario – and I used this analogy in an on-air chat with my good mate Bill Waterhouse a few nights ago – would be like a suburban under-10s rugby team running out to play the All Blacks.

What would happen, for example, if AI was programmed to stop climate change? The supercomputers would identify the root cause of the problem – humans – and remove it. As AI is not a moral agent, there is no question that it would remove us in exactly the same way as we bulldoze trees that stand in the way of a new road.

But then, what happens if AI develops an awareness of self? What if it does become a moral agent and therefore an organism with free will? It cannot then be owned by humans – one moral agent can’t own another**. It must therefore be permitted to do its own thing.

How many of the systems that keep our society functioning are computerised? Pretty much all of them: transport, defence, communications, finance, security. Investment bank Goldman Sachs estimates that up to 300 million jobs worldwide could be exposed to automation by AI, allowing the biggest and richest companies to profit by using AI instead of stupid humans.

The AI execs who signed the statement calling for a moratorium on AI development until we catch up with what we’ve created identify 15 risks of AI:

  • Lack of transparency
  • Bias and discrimination
  • Privacy concerns
  • Ethical dilemmas
  • Security risks
  • Concentration of power
  • Dependence on AI
  • Job displacement
  • Economic inequality
  • Legal and regulatory challenges
  • AI arms race
  • Loss of human connection
  • Misinformation and manipulation
  • Unintended consequences
  • Existential risks

Science fiction is full of examples of machines gone rogue, but until now it’s never been put to the test. Are we ready to control what we’ve created?

The creators themselves don't think so.

 

*Interestingly, in 1968, while Kubrick could bring to cinematic life superhuman artificial intelligence, a lunar colony and advanced space travel, he was unable to conceive of a computerised, gender-balanced workplace. An early scene in the movie shows a busy office in a colony on the moon, which features a vast typing pool of young women sitting at manual typewriters while all the managers are blokes in suits.

**Yes, last time I checked slavery and the buying and selling of humans was still both illegal and morally wrong.

 
