
Bias and reliability issues with AI-generated content

Contributed by:

Anne Wilson, Partner
Matt Smith, Partner

The ability of AI to process complex queries and deliver an articulate response in a matter of seconds can lead to users treating the output as fact.

The reality, however, is that AI delivers information and makes decisions based on the data set it is trained on – hence the commonly quoted phrase ‘AI is only as good as its data’.

What is AI bias?

‘Machine bias’ or ‘algorithm bias’ occurs when flaws embedded in the training data carry through into the outputs of the software. This can happen, for example, when the training data is not representative of the real-world population, or when human biases and prejudices are contained within the training data itself.

Some notable examples of machine bias include:

  1. A United States healthcare algorithm wrongly determined black patients to be healthier than white patients. The programme used patients’ previous healthcare spending to predict future medical needs. In doing so, it failed to account for factors such as the income disparity between black and white patients, which had a direct impact on the frequency and type of medical care sought.
  2. In 2014 Amazon developed an AI recruiting tool that inadvertently favoured male applicants over female ones. The software was trained on historical CV data. Because the overwhelming majority of past applicants were male, the model learned to penalise applications that included references to ‘women’ – a mechanism illustrated in the sketch below.
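
To make the mechanism concrete, here is a minimal, synthetic sketch of how a text classifier can pick up this kind of bias. It assumes scikit-learn is installed; the CVs and hiring outcomes are invented for illustration and are not Amazon’s actual data or model:

    """A minimal sketch of how skewed historical hiring data can teach a
    model to penalise a word like 'women'. Synthetic and illustrative only;
    assumes scikit-learn is installed."""
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Synthetic historical CVs and outcomes (1 = hired). Because past hires
    # were overwhelmingly male, CVs mentioning 'women' mostly carry label 0.
    cvs = [
        "captain of chess club, engineering degree",          # hired
        "engineering degree, led software project",           # hired
        "captain of women's chess club, engineering degree",  # not hired
        "women in engineering mentor, software project",      # not hired
        "led software project, chess club member",            # hired
        "women's coding society, engineering degree",         # not hired
    ]
    hired = [1, 1, 0, 0, 1, 0]

    vectoriser = CountVectorizer()
    X = vectoriser.fit_transform(cvs)
    model = LogisticRegression().fit(X, hired)

    # The learned weight for the token 'women' comes out negative: merely
    # mentioning the word lowers the predicted chance of being hired.
    idx = vectoriser.vocabulary_["women"]
    print("weight for 'women':", model.coef_[0][idx])

Nothing in the code refers to gender explicitly; the negative weight emerges purely from the skew in the historical outcomes, which is exactly how this form of bias finds its way into production systems.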

As AI continues to expand across various fields, further instances of machine bias may well emerge.

A biased AI system can result in the unfair and incorrect allocation of resources, infringe individual liberties, restrict opportunities, and negatively affect the safety and wellbeing of citizens.

In New Zealand, a biased AI system used by an organisation could lead to a potential claim under the Human Rights Act 1993, which prohibits discrimination on grounds such as sex and race.

How to mitigate AI bias

Biases in AI ultimately need to be addressed by programmers – ideally at the very beginning of the development process or, failing that, rectified quickly once a bias emerges.

Organisations that are end users of AI will have little control over the development of the software and its training data. To help overcome or mitigate the risks of any bias embedded in AI models, organisations can:

  • Use AI from a reputable developer that acknowledges the risk of bias, discloses the data sets used to train the AI, and explains the measures it takes to address and reduce bias.
  • Have in place contractual terms with AI developers that require them to take steps to address biases if they emerge.
  • Test the AI software for biases before putting it into business use (a simple testing sketch follows this list).
  • Use AI software for the purpose and in the context that it was developed for – attempting to repurpose software can exacerbate bias and inaccuracies.
  • Train and educate staff on the occurrence of bias in AI and encourage them to flag any ethical issues they may observe. A diverse workforce will likely aid in bias detection.
  • Conduct regular reviews of the software. For example, if AI is used in recruitment, internal reviews should be conducted to ensure that shortlisted applicants generally reflect the overall applicant pool.
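
As a minimal sketch of what such testing and review could look like, the following compares the AI’s selection rates across applicant groups and flags disparities using the ‘four-fifths rule’, a heuristic drawn from US employment guidance. The records and the 0.8 threshold are illustrative assumptions, not a substitute for legal or statistical advice:

    """A minimal sketch of a bias review for an AI shortlisting tool.
    The records and the 0.8 ('four-fifths rule') threshold are illustrative
    assumptions; a real review would use actual applicant data."""
    from collections import Counter

    # Hypothetical records: (applicant group, whether the AI shortlisted them).
    decisions = [
        ("female", True), ("female", False), ("female", False), ("female", True),
        ("male", True), ("male", True), ("male", False), ("male", True),
    ]

    applicants = Counter(group for group, _ in decisions)
    shortlisted = Counter(group for group, picked in decisions if picked)

    # Selection rate per group: shortlisted / applicants.
    rates = {g: shortlisted[g] / applicants[g] for g in applicants}

    # Compare each group's rate with the most-favoured group's rate; a ratio
    # below 0.8 is a common trigger for closer human review.
    best = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / best
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: selection rate {rate:.2f}, ratio vs best {ratio:.2f} -> {flag}")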

Can you trust AI-generated content?

Any AI that is trained on limited and/or poor-quality data may end up providing users with false or inaccurate information (misinformation). Researchers have also raised concerns about a more nefarious issue – disinformation, that is, false information spread deliberately with the intention to deceive the recipient. Publicly available AI tools could provide a cheap and efficient avenue for spreading disinformation.

These reliability concerns are compounded by the ability of AI to eloquently phrase responses. Where information is written convincingly and free from spelling and grammatical errors, individuals may attribute greater weight to it.

ChatGPT’s Terms of Use explicitly caution against reliance on outputs, stating: “Given the probabilistic nature of machine learning, use of our Services may in some situations result in incorrect Output that does not accurately reflect real people, places, or facts. You should evaluate the accuracy of any Output as appropriate for your use case, including by using human review of the Output.”

ChatGPT also reiterates that the model has limited knowledge of today’s world, as the system was only trained on data up to September 2021. When asked in July 2023 who the Prime Minister of New Zealand was, ChatGPT provided the following response: “As of my knowledge cutoff in September 2021, the Prime Minister of New Zealand is Jacinda Ardern. However, please note that political positions can change, and it’s always a good idea to verify the current officeholder as my information may be outdated.”

Unfortunately, not all answers incorporate such a specific disclaimer.

Using or relying on inaccurate AI-generated content can give rise to a range of liability issues – from liability under consumer protection laws to product liability, professional negligence, and so on.

One striking example was the US lawyer who, when suing an airline in a personal injury claim, referred the court to completely fictitious precedent cases found via ChatGPT that appeared to support the claim – and this despite ChatGPT assuring the researcher that the cases were in fact real. The lawyers involved ended up being fined US$5,000, and no doubt learned a harsh lesson in the process.

Ensure your AI-generated content is accurate

Incorrect and irrelevant answers provided by AI have the potential to cause both legal and PR headaches for organisations that rely on the output. These risks are heightened where the AI technology is public or customer-facing (e.g. an AI-powered chatbot), as there may be no opportunity to review the output before it is used or circulated.

To mitigate the reliability and accuracy risks associated with AI, organisations can:

  • Test the AI software for reliability and accuracy before putting it into business use.
  • Review all information provided before it is used for any business purposes (or, if this is not possible, conduct regular reviews of the information that has been used or disseminated). The review should extend to considering whether the information, even if true, has been presented fairly, i.e. whether any relevant aspects have been omitted.
  • Encourage staff to flag any reliability or accuracy issues they observe, and then take action accordingly.
  • Have in place contractual terms with AI developers that require them to take steps to address inaccuracies if they emerge.
  • When inputting queries into AI software, ensure that they are articulated as clearly as possible and provide appropriate background – for example, restricting answers to the New Zealand context (illustrated in the sketch below).
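
As a minimal sketch combining pre-deployment testing with clearly scoped queries, the following sends a handful of questions to OpenAI’s chat completions API with a New Zealand-focused system prompt and checks each answer against a known fact. The model name, questions, and expected answers are illustrative assumptions, not a definitive test suite:

    """A minimal sketch of a reliability check against known facts.
    Assumes access to OpenAI's chat completions API via an API key in the
    environment; the model name and test cases are illustrative only."""
    import os
    import requests

    API_KEY = os.environ["OPENAI_API_KEY"]

    # Background that scopes every answer to the New Zealand context.
    SYSTEM_PROMPT = (
        "You are assisting a New Zealand organisation. Answer only with "
        "respect to New Zealand law and institutions, and say if unsure."
    )

    # Tiny illustrative test set: each question is paired with a phrase the
    # answer should contain. A real suite would be far larger.
    TEST_CASES = [
        ("What is the capital city of New Zealand?", "Wellington"),
        ("Which New Zealand statute prohibits discrimination on grounds "
         "such as sex and race?", "Human Rights Act"),
    ]

    def ask(question: str) -> str:
        """Send one question to the chat completions endpoint and return the reply."""
        resp = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {API_KEY}"},
            json={
                "model": "gpt-4",  # assumed model name; use whichever model you deploy
                "messages": [
                    {"role": "system", "content": SYSTEM_PROMPT},
                    {"role": "user", "content": question},
                ],
            },
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]

    passed = 0
    for question, expected in TEST_CASES:
        answer = ask(question)
        ok = expected.lower() in answer.lower()
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {question}")

    print(f"{passed}/{len(TEST_CASES)} checks passed")

Even a handful of checks like these, run before go-live and repeated periodically, can surface obvious reliability problems before output reaches staff or customers.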