AI use


The Legal & HR Team at Scottish Engineering are reviewing the use of generative AI applications such as ChatGPT in the workplace. This technology is new and ever-advancing; however, we would like to draw your attention to some frequently asked questions and the risks associated with using such applications.


What is generative AI?

Generative AI applications produce text, media and other content by learning from the training data fed into them. Some tools, such as ChatGPT, focus on interactive conversation, engaging in text-based exchanges that closely resemble human dialogue. These models are trained on vast amounts of text data, which enables them to interpret context, language and nuance and to generate relevant, coherent responses.


What can I use generative AI for?

These AI tools can be used for a wide range of functions that are useful in a work context: for example, drafting documents; writing email responses; formulating social media posts; translating between languages; scheduling appointments; summarising longer documents or sources; providing information on a particular subject for learning purposes; assisting with the generation of ideas; and writing content for slides. This is only a snapshot of their capabilities, and AI applications are developing rapidly to expand their uses.


Are there any risks when using generative AI tools?

Yes – there are several risks associated with employer or employee use of AI tools. There is a chance that work produced by generative AI models is inaccurate or fictitious, particularly if the data they were trained on contained errors. Further, there are data privacy concerns if an employee inputs confidential business or personal information. Additionally, results from AI tools are based on training data with a fixed cut-off date. For example, ChatGPT was trained on data up to September 2021, which means anything that has occurred since, such as changes in legislation, is not factored into the answers the application provides. There is also a risk of biased output, which can increase the risk of discrimination. This is particularly relevant where the data the AI model has learned from reflects biases present in the real world, e.g. stereotypes, inequalities or a preference for majority groups; the model can learn and replicate these biases in its results.


What should we be doing to mitigate these risks?

There are a number of precautions that users can take to ensure they are using AI tools responsibly. These include limiting the amount of data entered into the application and anonymising all sensitive data – ideally, no sensitive personal or business information should reach the application at all. Any results produced by the application should be checked by a human for inaccuracies or fabricated content. Member companies should consider whether they want to allow employees to use AI tools at all. If employees are permitted to use the applications, it is advisable to have a policy in place which sets out what they can be used for, the rules governing their use and what happens in instances of improper use. Training is also advisable to ensure all users understand the limitations of the applications and the precautions they should exercise to protect themselves and the organisation. As always, the Legal & HR Team are happy to assist member companies as they develop their own internal AI policies and procedures.
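
For member companies whose technical teams wish to automate the anonymisation step described above, a minimal sketch in Python follows. It is purely illustrative: the three patterns shown (email addresses, UK phone numbers and National Insurance numbers) are assumptions chosen for demonstration rather than a vetted redaction scheme, and any real implementation would need patterns built and tested around the organisation's own data.

    import re

    # Illustrative redaction of common sensitive patterns before text is
    # sent to an external AI tool. The patterns are deliberately simple
    # and would need tuning for real-world use.
    REDACTION_PATTERNS = {
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "UK_PHONE": re.compile(r"\b(?:\+44\s?\d{4}|0\d{3})\s?\d{6}\b"),
        "UK_NINO": re.compile(r"\b[A-Z]{2}\s?\d{2}\s?\d{2}\s?\d{2}\s?[A-D]\b"),
    }

    def anonymise(text: str) -> str:
        """Replace each match of a sensitive pattern with a placeholder tag."""
        for label, pattern in REDACTION_PATTERNS.items():
            text = pattern.sub(f"[{label}]", text)
        return text

    if __name__ == "__main__":
        prompt = ("Draft a reply to jane.smith@example.com confirming "
                  "NI number AB 12 34 56 C and a callback on 0141 123456.")
        print(anonymise(prompt))
        # Draft a reply to [EMAIL] confirming NI number [UK_NINO]
        # and a callback on [UK_PHONE].

Even with such filters in place, human review remains essential, as automated redaction will inevitably miss context-dependent identifiers.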