How To Deliver Trusted, Safe, And Responsible AI


By Shakeel Khan, CEO, Validate AI, and David Hand, Emeritus Professor of Mathematics and Senior Research Investigator at Imperial College

A working definition of Artificial Intelligence (AI) is that it is the ability of a machine to perform tasks regarded as requiring intelligence. Although not new, it is an idea which has suddenly become part of everyday conversation. This is largely because the power of modern computers, coupled with the size of databases that are now available, has led to remarkable achievements, with even more remarkable breakthroughs promised. These span all aspects of human enterprise: beating chess and Go champions, making scientific and medical breakthroughs, real-time language translation, facial recognition, driverless cars, and so on. Most recently, the media have become enthralled by the potential of chatbots and large language models, such as ChatGPT, Claude 2, and PaLM. These appear to have the capacity to carry out a sensible conversation and even to write documents at a level adequate to pass university examinations.

But two things about these developments are striking. One is the rate of progress. Every week we appear to read about an even more dramatic advance: whereas GPT-3.5 outperformed only 10 percent of human candidates on the Uniform Bar Exam, the improved GPT-4 beat 90 percent. The other is that the systems sometimes make silly mistakes – like an early version of ChatGPT confidently asserting that 47 was larger than 64 (and then attempting to count from 47 to 64, before giving up).

Put these two things together, and alarm bells might start ringing. Will AI take over jobs? Will it aggravate social inequality? Will it lead to disastrous mistakes? Who bears responsibility when things go wrong? What about autonomous weapons? Is what an AI system is trying to do really aligned with what we want; that is, is it solving the right problem? And even if it is, is it doing so in an ethical way? After all, hospital waiting lists are easily reduced by putting fewer patients on the list.

In short, can we trust AI, is it safe, and how can we ensure that such systems are used to benefit humanity?

Validate AI has been at the forefront of developing strategies to mitigate these risks since its formation in 2019. The risk mitigation strategy is based on six key pillars that could form an outline for businesses to follow to drive assurance:

  1. Scoping. This is the process of clarifying the problem to be solved, exploring the feasibility of a solution, looking at the appropriateness of AI to tackle the problem, and working out the steps needed to develop a solution.
  2. Data preparation. As is well known, distorted, incomplete, or inaccurate data can lead to mistakes or even disasters. The old adage "garbage in, garbage out" applies even more in a world of AI, where systems may be built on billions of data items and highly sophisticated algorithms. Time spent checking data is repaid many times over in the peace of mind that the system is doing what it is supposed to.
  3. Algorithmic development. This involves more than simply building the algorithm; it includes testing and validating it. Key questions at this stage include: is the objective function optimised by the algorithm really the one we want to optimise? Are there bugs in the software which manifest themselves only in unusual conditions? If software packages are used, are we confident that their default settings are doing what we want?
  4. Deployment and maintenance. AI assurance does not stop once the data has been captured, the system built, and the algorithm developed. Its performance needs to be evaluated, monitored, and audited. Is it still performing well enough? How does it do when circumstances change – after all, the one thing we know about human society is that change is constant. What fall-back plans are in place should the system go down?
  5. Legal requirements. It should go without saying that the system must adhere to data governance legislation and regulation. Is there adequate human oversight? What about security considerations? Is it resilient to attack and fraud attempts?
  6. Ethical considerations. Is it discriminatory? Can decisions that are made by a system be explained and justified? Does it preserve privacy appropriately? Is the wider impact of the system on society, work, and wellbeing being considered?
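Parts of the checklist above can be automated. As a minimal, illustrative sketch – not a prescription from Validate AI's toolkit – the snippet below shows how pillar 2 (data preparation) and pillar 4 (deployment monitoring) might translate into routine checks. The field names, plausibility bounds, and drift threshold are hypothetical, chosen purely for illustration.

```python
# Illustrative assurance checks: data quality (pillar 2) and a crude
# drift monitor (pillar 4). Thresholds and field names are examples only.

from statistics import mean, stdev


def check_data_quality(records, required_fields):
    """Flag records that are incomplete or obviously inaccurate.

    Returns a list of (record_index, description) issues.
    """
    issues = []
    for i, rec in enumerate(records):
        missing = [f for f in required_fields if rec.get(f) is None]
        if missing:
            issues.append((i, f"missing fields: {missing}"))
        age = rec.get("age")
        if age is not None and not (0 <= age <= 120):
            issues.append((i, f"implausible age: {age}"))
    return issues


def drift_detected(validation_scores, live_scores, z_threshold=3.0):
    """Check whether live model scores still resemble the distribution
    seen at validation time (a simple z-test on the mean)."""
    mu, sigma = mean(validation_scores), stdev(validation_scores)
    standard_error = sigma / len(live_scores) ** 0.5
    return abs(mean(live_scores) - mu) > z_threshold * standard_error
```

Real deployments would use richer schema validation and statistical drift tests, but even checks this simple catch the "garbage in" and "circumstances change" failures the pillars warn about, and they can be run automatically as part of regular auditing.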

In short, we are moving into a new world. It is a world of huge potential for benefitting humankind. However, as the recent gathering of world leaders at the UK AI Safety Summit illustrated, the technology of AI, like any other advanced technology such as nuclear or biotechnology, carries risks. For its vast promise to be fulfilled, we need to tread carefully. The risk mitigation strategy embodied in the pillars above forms the basis of a checklist which can give us confidence that the future is bright, and that safe AI can be delivered.

About the authors:

Shakeel Khan is CEO of Validate AI, a community interest company championing innovation in how we deliver Trusted, Safe and Responsible AI, working with experts from government, academia, and industry. Over 28 years he has worked extensively in the banking and government sectors, leading the development of a comprehensive practitioner-centric AI assurance toolkit. This has been adopted for projects by government departments and fiscal authorities globally. He also chairs an AI committee at the OR Society that partners with Validate AI to deliver community events and learning opportunities.

David J. Hand is Emeritus Professor of Mathematics and Senior Research Investigator at Imperial College London and Chair of Validate AI CIC. He is a past president of the Royal Statistical Society and a fellow of the British Academy. His books include Dark Data, The Improbability Principle, Information Generation, Intelligent Data Analysis, Artificial Intelligence and Psychiatry, and Principles of Data Mining.
