UK should be more positive about AI to avoid missing out on tech ‘goldrush’

2 February 2024, 03:54


The Lords Communications and Digital Committee said the UK risked falling behind other countries if it did not embrace the benefits of AI.

The UK’s approach to artificial intelligence has become too narrowly focused on AI safety and the threats the technology could pose, rather than its benefits, meaning it could “miss out on the AI goldrush”, a House of Lords Committee has warned.

In a major report on artificial intelligence and large language models (LLMs) – which power generative AI tools such as ChatGPT – the Lords Communications and Digital Committee said the technology would produce era-defining changes comparable with the invention of the internet.

However, it warned that the UK needed to rebalance its approach to also consider the opportunities AI can offer. Otherwise, it risks losing its international influence and becoming strategically dependent on overseas tech firms for a technology expected to play a key role in daily life in the years to come.

It said that some of the “apocalyptic” concerns around threats to human existence from AI were exaggerated, and should not distract policy makers from responding to more immediate issues.

The UK hosted the first AI Safety Summit at Bletchley Park in November, where the Government brought together more than 25 nations, plus representatives from the UN and EU, to discuss the long-term threats of the technology, including its potential to pose an existential threat to humans, to help criminals carry out more sophisticated cyber attacks, or to be used by bad actors to develop biological or chemical weapons.

Both the Prime Minister, Rishi Sunak, and Technology Secretary Michelle Donelan have said that in order for the UK to reap the benefits of AI, governments and tech firms must first “grip the risks”.

While calling for mandatory safety tests for high-risk AI models and more focus on safety by design, the report urged the Government to take action to prioritise open competition and transparency in the AI market, warning that failure to do so would see a small number of the largest tech firms consolidate control of the growing market and stifle new players in the sector.

The technology would produce era-defining changes comparable with the invention of the internet, the committee said (John Walton/PA)

The committee said it welcomed the Government’s work on positioning the UK as an AI leader – including through hosting the AI Safety Summit – but said a more positive vision for the sector was needed in order to reap the social and economic benefits.

The report called for greater support for AI start-ups, a boost for computing infrastructure and more work to improve digital skills, as well as exploring further the potential for an “in-house” sovereign UK large language model.

Baroness Stowell, chair of the Lords Communications and Digital Committee, said: “The rapid development of AI Large Language Models is likely to have a profound effect on society, comparable to the introduction of the internet.

“That makes it vital for the Government to get its approach right and not miss out on opportunities – particularly not if this is out of caution for far-off and improbable risks. We need to address risks in order to be able to take advantage of the opportunities – but we need to be proportionate and practical. We must avoid the UK missing out on a potential AI goldrush.

“One lesson from the way technology markets have developed since the inception of the internet is the danger of market dominance by a small group of companies. The Government must ensure exaggerated predictions of an AI-driven apocalypse, coming from some of the tech firms, do not lead it to policies that close down open-source AI development or exclude innovative smaller players from developing AI services.

“We must be careful to avoid regulatory capture by the established technology companies in an area where regulators will be scrabbling to keep up with rapidly developing technology.

“There are risks associated with the wider dissemination of LLMs. The most concerning of these are the possibility of making existing malicious actions quicker and easier – from cyber attacks to the manipulation of images for child sexual exploitation. The Government should focus on how these can be tackled and not become distracted by sci-fi end-of-the-world scenarios.

“One area of AI disruption that can and should be tackled promptly is the use of copyrighted material to train LLMs. LLMs rely on ingesting massive datasets to work properly but that does not mean they should be able to use any material they can find without permission or paying rightsholders for the privilege. This is an issue the Government can get a grip of quickly and it should do so.

“These issues will be of huge significance over the coming years and we expect the Government to act on the concerns we have raised and take the steps necessary to make the most of the opportunities in front of us.”

Bank of England Governor Andrew Bailey said AI will not be a “mass destroyer of jobs” and “there is great potential with it”.

He told the BBC he was an “optimist”, adding: “I’m an economic historian, before I became a central banker.

“Economies adapt, jobs adapt, and we learn to work with it. And I think, you get a better result by people with machines than with machines on their own.”

In response to the report, a spokesperson from the Department for Science, Innovation and Technology (DSIT), said: “We do not accept this – the UK is a clear leader in AI research and development, and as a Government we are already backing AI’s boundless potential to improve lives, pouring millions of pounds into rolling out solutions that will transform healthcare, education and business growth, including through our newly announced AI Opportunity Forum.

“The future of AI is safe AI. It is only by addressing the risks of today and tomorrow that we can harness its incredible opportunities and attract even more of the jobs and investment that will come from this new wave of technology.

“That’s why we have spent more than any other government on safety research through the AI Safety Institute and are promoting a pro-innovation approach to AI regulation.”

By Press Association
