Artificial intelligence use ‘must be transparent and accountable’
2 December 2019, 17:24
The Information Commissioner’s Office has published its first draft regulatory guidance into the use of AI.
Companies planning on using artificial intelligence (AI) in their work should ensure it is “transparent and accountable”, the Information Commissioner’s Office (ICO) has said.
The UK’s data watchdog has published its first draft regulatory guidance into the use of AI in collaboration with the Alan Turing Institute.
It warned that the public are still uneasy over the use of computer software to make decisions previously made by humans, so any systems must be transparent and provide clear explanations of decisions made.
The guidance identified four key principles for AI: transparency, accountability, consideration of context and reflection on impacts.
The ICO said it had found that more than half of people remain concerned about machines making complex, automated decisions about them.
“The potential for AI is huge, but its implementation is often complex, which makes it difficult for people to understand how it works,” said Simon McDougall, the ICO’s executive director of technology and innovation.
“And when people don’t understand a technology, it can lead to doubt, uncertainty and mistrust.”
Last year, ministers published the AI Sector Deal, a joint venture between the Government and industry to try to push the UK to the forefront of emerging technology such as AI.
The ICO and the Alan Turing Institute’s draft guidance follows an independent review led by Professor Dame Wendy Hall, after which the Government urged both organisations to provide input on the subject.
The guidance said the four main principles are “rooted” in the General Data Protection Regulation (GDPR), the EU-wide law introduced last year to hand individuals greater control over their personal data.
The principles say organisations should ensure decisions made by AI are “obvious and appropriately” explained to people in a “meaningful” way.
On accountability, it said firms should ensure “appropriate oversight of AI decision systems, and be answerable to others”.
It also called for companies to reflect on the impact their AI use would have by ensuring they “ask and answer questions about the ethical purposes and objectives of your AI project at the initial stages of formulating the problem and defining the outcome”.
The ICO said it will consult on its guidance until 24 January, and Mr McDougall encouraged industry experts to respond to the draft before then.
“The decisions made using AI need to be properly understood by the people they impact,” he said.
“This is no easy feat and involves navigating the ethical and legal pitfalls around the decision-making process built into AI systems.”