Roman Krome

Should We Fear AI?

About a year ago many people, including Elon Musk, called for a pause in the development of AI. The questions asked in an open letter were as follows: “Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart … and replace us? Should we risk loss of control of our civilisation?”

This letter was one example of the anxiety AI has caused across society. ChatGPT has surprised even its creators with its abilities, which range from solving logic puzzles and writing computer code to identifying films from plot summaries written in emoji.

Supporters of AI argue for its potential to solve big problems: developing new medicines, designing new materials to help fight climate change, or untangling the complexities of fusion power. To others, the fact that AI’s capabilities are already outrunning its creators’ understanding risks bringing to life the science-fiction disaster scenario of the machine that outsmarts its inventor, often with fatal consequences.

This mixture of excitement and anxiety makes it hard to weigh up the opportunities and risks, but lessons can be learned from other industries and from past technological shifts.

The fear that machines will steal jobs is centuries old, but so far new technology has created new jobs to replace the ones it has destroyed. A sudden loss of some jobs cannot be ruled out, even if there is no sign of it yet. Previous technologies have tended to replace unskilled tasks, whereas AI can already perform some skilled ones, such as summarising documents and writing code.

The nightmare is that an advanced AI could cause harm on a massive scale, by making poisons or viruses, or by persuading humans to commit terrorist acts. It need not have evil intent: researchers worry that future AIs may simply have goals that do not align with those of their human creators. And many imagine that future AIs will have uncontrolled access to energy, money and computing power.

Regulation is needed, but for more banal reasons than saving humanity. Existing AI systems raise real concerns about bias, privacy and intellectual property rights.

If AI is as important a technology as cars, planes and medicines—and there is good reason to believe that it is—then, like them, it will need new rules. Compelling disclosure of how systems are trained, how they operate and how they are monitored, and requiring inspections, would be comparable to the rules that other industries already follow. Treaties, similar to those that govern nuclear weapons, could be signed between governments. An international body of regulators could study AI safety and ethics.

A measured approach today can provide the foundations on which further rules can be added in future. But the time to start building those foundations is now.

