Generative Artificial Intelligence Policy - March 2026
1. Introduction to Artificial Intelligence (AI) and Generative AI
1.1. Artificial Intelligence (AI) refers to the use of computer systems to perform tasks that typically require human intelligence, such as understanding language, recognising patterns, or making decisions. A specific type of AI, known as Generative AI (GenAI), is designed to create new content such as text, images, audio, or code, based on patterns it has learned from large datasets.
1.4. Common examples of publicly available GenAI tools include ChatGPT, Google Bard, Midjourney, and DeepSeek. These tools can assist with drafting emails, summarising documents, generating reports, creating presentations, and more. They are increasingly being integrated into everyday software used across the public sector.
1.3. However, it is important to understand that GenAI tools do not “know” facts in the way humans do. They generate content based on probabilities and patterns, which means they can produce inaccurate, misleading, or biased outputs. These tools do not have awareness of context, legal obligations, or local authority policies unless explicitly programmed or guided.
1.4. This policy has been developed to ensure that any use of Generative AI (GenAI) within Carmarthenshire County Council and by its workforce is ethical, secure, and compliant with legal and regulatory standards. It provides guidance on appropriate use, outlines potential risks, and sets clear expectations for accountability and oversight.
1.5. Carmarthenshire County Council may subscribe to its own generative AI tools, such as Microsoft Copilot and Magic Notes, which are integrated into the Council’s secure digital environment. These non-public internal generative AI tools are designed to enhance productivity, support collaboration, and streamline routine tasks by leveraging artificial intelligence capabilities.
1.6. While these tools operate within the Council’s secure digital infrastructure and inherit existing security, compliance, and privacy controls, it is important to recognise that their use may involve the processing of sensitive or personal data. Staff must therefore exercise caution and professional judgement when inputting information into such tools, ensuring that data shared aligns with the Council’s data protection policies and legal obligations.
1.7. All use of internal generative AI tools must comply with this policy and the Council’s broader information governance framework. Where uncertainty exists regarding the appropriateness of using sensitive data with these tools, staff should seek guidance from the Information Governance Team within Digital Services.
2. Purpose
2.1. This policy outlines the regulations for Generative AI (GenAI) usage by Carmarthenshire County Council and its employees, encompassing socioeconomic, legal, and ethical considerations, including the prevention of automated biases.
2.2. This policy governs the use of all Generative AI (GenAI) tools such as ChatGPT, Bard, and Bing, ensuring they are used ethically and in compliance with laws, regulations, and council policies.
3. Scope
3.1. All individuals, including employees, casual workers, agency staff, elected members, volunteers, and external consultants or contractors, who are granted access to digital facilities by the Council, must agree to adhere to this policy.
4. Policy Statement
4.1. This policy will be used in accordance with:
• The Information Security Policy.
• Handling Personal Information Policy and Procedure.
• Breach Reporting and Response Policy.
• Relevant legislation – including the UK General Data Protection Regulation, the Data Protection Act 2018, the Computer Misuse Act 1990, the Freedom of Information Act 2000, the Copyright, Designs and Patents Act 1988 and the Equality Act 2010.
5. Usage Principles
5.1. GenAI must be used fairly, avoiding bias to promote equality and support the council’s goals. Users may use GenAI for work-related tasks such as creating reports, emails, presentations, images, and customer service communications, following the usage principles of this policy.
5.2. Governance
Before accessing GenAI technology, users must be fully aware of this policy. Any queries should be discussed with the Information Governance Team. Users must consider the reason for use, the information expected to be input, and the generated output and how it will be distributed. Where GenAI forms part of a procurement, this must be clearly specified in any quotation or tender document.
5.3. The procurement of Digital Systems using GenAI must be in accordance and comply with the Council’s procurement rules and regulations.
5.4. Suppliers
Suppliers of GenAI technology must provide documentation on fairness, bias mitigation, security certifications (such as ISO 27001 or Cyber Essentials), and data handling practices. The Council will assess these as part of the procurement process.
5.5. Copyright
Users are required to comply with copyright laws when utilising GenAI. It is strictly prohibited to use GenAI to generate content that infringes upon the intellectual property rights of others, including but not limited to copyrighted material. If users are uncertain whether a particular use of GenAI constitutes copyright infringement, they should contact Legal Services or the Information Governance Team before using GenAI. Inquiries can be directed to legalservices@carmarthenshire.gov.uk.
5.6. Accuracy
All information generated by GenAI must be reviewed and edited for accuracy prior to use. Users of GenAI are responsible for reviewing output and are accountable for ensuring the accuracy of GenAI-generated content before it is used or released. If a user has any doubt about the accuracy of information generated by GenAI, they should not use it.
It is important to note that GenAI tools can produce “hallucinations”— outputs that appear plausible but are factually incorrect, misleading, or entirely fabricated. These hallucinations may arise due to the probabilistic nature of GenAI models and can have significant political, legal, and financial implications for the authority. Therefore, AI-generated content should never be assumed to be accurate and must always be subject to thorough verification.
5.7. Confidentiality
Confidential and personal information must not be entered into any public GenAI tools, as these are outside of the control of our organisation and information may therefore enter the public domain. Users must follow all applicable data privacy laws and the Council’s policies, such as the Handling Personal Information Policy, when using GenAI. Data Protection Impact Assessments (DPIAs) should also be considered. If a user has any doubt about the confidentiality of information, they should not use GenAI.
Examples of information that should not be entered into public GenAI tools include personal identifiers (e.g., names, addresses, NI numbers), sensitive case details, unpublished reports or draft policies, social care records, safeguarding reports, internal audit findings, and legal case notes.
5.8. Ethical Use
GenAI must be used ethically and in compliance with all applicable legislation, regulations and the Council’s policies. Users must not use GenAI to generate content that is discriminatory, offensive, or inappropriate. If there are any doubts about the appropriateness of using GenAI in a particular situation, users should consult their line manager.
5.9. Disclosure & Human Oversight
Users must ensure that significant decisions assisted by GenAI are explainable and documented. Where GenAI is used to support decision making that affects individuals or communities, a clear rationale for the use and interpretation of GenAI outputs must be recorded. All outputs must be subject to human review before application.
In accordance with Article 22 of the UK GDPR, individuals have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal or similarly significant effects.
6. Risks
6.1. Using GenAI carries risks and requires a thorough risk assessment. Consider impacts like legal compliance, bias, security protections, certifications, and data sovereignty and protection.
6.2. Legal compliance
Using public GenAI may result in data entering the public domain, potentially disclosing non-public information and violating regulatory requirements, customer or vendor contracts, or compromising intellectual property. Any release of private or personal information without the owner's permission could lead to breaches of relevant data protection laws. Additionally, using GenAI to generate content may infringe upon regulations protecting intellectual property rights. Users must ensure that their use of GenAI complies with all applicable laws and regulations, as well as the Council’s policies.
GenAI can potentially produce inaccurate results, and outputs should be thoroughly verified for accuracy. Incorrect GenAI outputs can have substantial political, legal, and financial consequences for the authority.
Examples include:
In 2021, a UK local authority faced legal challenges after relying on AI-generated data that incorrectly identified properties for council tax reassessment, leading to wrongful tax increases.
Another public sector organisation used AI to screen job applicants, which inadvertently introduced bias and led to discriminatory hiring practices.
6.3. Bias and discrimination
GenAI may make use of and generate biased, discriminatory or offensive content. Users should use GenAI responsibly and ethically, in compliance with council policies and applicable laws and regulations.
6.4. Security
Public GenAI may store sensitive data and information, which could be at risk of being breached or hacked. If a user has any doubt about the security of information input into GenAI, they should not use GenAI.
6.5. Data sovereignty and protection
Public GenAI platforms may be subject to data sovereignty laws, meaning information created or collected in a country remains under that country's jurisdiction. Conversely, information sourced from GenAI hosted abroad is subject to that country's laws.
6.6. Safeguarding
Use of GenAI in contexts involving children or vulnerable individuals must comply with safeguarding policies and the ICO Children’s Code.
6.7. Accessibility and Welsh Language
GenAI tools and outputs must comply with WCAG 2.1 accessibility standards and meet Welsh Language Standards where applicable.
7. Compliance
7.1. Compliance with this policy is mandatory. Breaches of this policy by staff may lead to disciplinary action being taken. Breaches by elected members may be reported to the Standards Committee.
8. Custodian
8.1. It is the responsibility of Digital Services to ensure that this policy is regularly reviewed and updated.
9. Ensuring Equality of Treatment
9.1. This policy must be applied consistently to all irrespective of race, colour, nationality, ethnic or national origins, language, disability, religion, age, sex, gender reassignment, gender identity or expression, sexual orientation, parental or marital status.
10. Glossary
Artificial Intelligence (AI)
The simulation of human intelligence in machines that are programmed to think and learn like humans. AI can perform tasks such as understanding language, recognising patterns, and making decisions.
Generative AI (GenAI)
A type of AI designed to create new content, such as text, images, audio, or code, based on patterns it has learned from large datasets. Examples include ChatGPT, Google Bard, and Microsoft Copilot.
Large Language Models (LLMs)
A subset of GenAI that focuses on understanding and generating human language. LLMs are trained on vast amounts of text data and can perform tasks such as translation, summarisation, and conversation.
Prompt
A piece of text or input given to a GenAI model to generate a response. The quality and specificity of the prompt can influence the accuracy and relevance of the generated output.
Bias
A tendency of AI models to produce prejudiced results due to the data they were trained on. Bias can lead to unfair or discriminatory outcomes, and it is important to mitigate it in AI applications.
Hallucination
A phenomenon where AI models generate content that is not based on the input data or real-world information. Hallucinations can result in false or misleading outputs.
Data Sovereignty
The concept that data is subject to the laws and regulations of the country in which it is collected or processed. It is important to consider data sovereignty when using AI tools hosted in different jurisdictions.
