Legal risks associated with the use of AI
In the span of a few years, Artificial Intelligence (AI) has showcased its potential to revolutionise the way we live, work, and interact. However, for many of us, the concept of AI and what it holds for the future remains a grey area.
In this series, our experts outline some specific areas where the use of AI may pose a heightened legal risk for organisations, and what can be done to mitigate this.
What is AI and how does it work?
Broadly speaking, AI refers to computer systems that can perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making. The use of AI enables humans to instead focus on higher-level tasks that have not yet been mastered by technology, such as those involving empathy and critical thinking.
Language Processing AI
A particular form of AI that has received attention in recent years is Language Processing AI (LPAI). LPAI enables computers to process and analyse text or speech data and transform it into coherent, human-readable text.
Popular uses of LPAI include:
- Question answering – where information is extracted from vast data sets and answers are ranked and displayed based on relevance in response to a specific user query.
- Text summarisation – where AI models generate concise summaries of longer texts, enabling users to quickly grasp the main points without reading the entire document.
- Text generation – where the AI generates full and coherent text from small pieces of information or prompts, improving language fluency and enhancing the overall quality of written content.
Generative AI
Generative AI (GAI) is a field of AI focused on developing algorithms and models that can generate new and original content, such as text, images, music, or videos. Whilst ‘traditional AI’ relies on explicit programming to make decisions or solve specific problems, GAI learns patterns and structures from its training data and uses them to produce original output.
One of the most well-known AI tools that encompasses both LPAI and GAI is ChatGPT, developed by OpenAI. The tool was launched in November 2022 and reached over 100 million users by January 2023. With Microsoft investing $10 billion into OpenAI, adoption of this software looks set to keep growing rapidly.
ChatGPT is an example of ‘public AI’, meaning it is available for the public to use and interact with.
Considerations when using AI for business
While many organisations are understandably looking to embrace this cutting-edge technology and incorporate it into their operations, it is crucial that the legal and reputational risks posed by AI are not ignored. Organisations using or intending to use AI should consider:
- Privacy and confidentiality risks associated with the input of information into public AI software;
- The ownership rights and infringement risks associated with content developed by generative AI;
- The reliability of outputs and ethical concerns driven by biases contained in AI training data; and
- The need for internal policies and procedures governing the use of such software.