We are getting so used to the hype around generative AI (GenAI) that it may seem like we are on the verge of it being used for all purposes, everywhere, all the time. There is significant pressure on public sector organisations, in particular, not to miss the opportunity to reap its (expected, presumed) benefits.
However, GenAI comes with many challenges and risks, especially when we talk about free-to-use, generally available GenAI models. This is not sufficiently understood or recognised, and most of the conversations I have on GenAI use with public sector leaders and procurement officials tend to quickly reach an awkward moment where I pop the bubble by stressing those risks and ranting about why I think GenAI should not simply be used as offered off the shelf (or at all, for public sector activities that need to comply with strict requirements of good administration and factuality).
In the context of public sector AI adoption, the widespread availability of these tools poses a major governance challenge, and I think we are just one bad decision away from a potentially very significant scandal or problem. The challenge comes from many directions, but especially through the embedding (or slipstreaming) of AI tools into existing systems and software packages (AI creep) and access by civil servants and public sector employees through free-to-use platforms (shadow AI).
Given this, I have been glad to see that two recent pieces of guidance on public sector AI use have clearly formulated the default position that non-contracted, generally available GenAI should not be used in the public sector, and that any exceptional use should follow a careful assessment and be accompanied by interventions to ensure compliance with rightly demanding standards and benchmarks.
The Irish Guidelines for the Responsible Use of AI in the Public Service (updated 12 May 2025), building on an earlier 2023 recommendation of the Irish National Cyber Security Centre, recommend “that access is restricted by default to GenAI tools and platforms and allowed only as an exception based on an appropriate approved business case and needs. It is also recommended that its use by any staff should not be permitted until such time as Departments have conducted the relevant risk assessments, have appropriate usage policies in place and staff awareness on safe usage has been implemented” (p 39).
In very similar terms, but perhaps based on a different set of concerns, the Dutch Ministry of Infrastructure and Water Management’s AI Impact Assessment Guidance (updated 31 Dec 2024) has also stated that GenAI use by central government organisations is in principle not permitted: “The provisional position on the use of generative AI in central government organisations currently sets strict requirements for the use of LLMs in central government: “Non-contracted generative AI applications, such as ChatGPT, Bard and Midjourney, do not generally comply demonstrably with the relevant privacy and copyright legislation. Because of this, their use by (or on behalf of) central government organisations is in principle not permitted in those cases where there is a risk of the law being broken unless the provider and the user demonstrably comply with relevant laws and regulations.”” (p 41).
I think that these are good examples of responsible default positions. Of course, monitoring and enforcing a general prohibition like this will be difficult, and more needs to be done to ensure that organisations put in place governance and technical measures to minimise the risks arising from unauthorised use. This is also a helpful default because it will force organisations that purposefully want to explore GenAI adoption to go through the necessary processes of impact assessment and careful, structured consideration, as well as to focus on the adoption (whether via procurement or not) of GenAI solutions that have appropriate safeguards and are adequately tailored and fine-tuned to the specific use case (if that is possible, which remains to be seen).
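To make the “restricted by default, allowed only as an exception” logic concrete, the short sketch below shows one way such a technical control could look in practice, for instance as part of an organisation’s web proxy or gateway. This is purely illustrative on my part: the domain names, user names and exceptions register are hypothetical assumptions, and nothing like this appears in either piece of guidance.

```python
# Illustrative sketch of a default-deny rule for GenAI endpoints,
# with exceptions granted only via an approved business case.
# All domains, users and the exceptions register are hypothetical.

# Hypothetical list of known GenAI endpoints blocked by default.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

# Hypothetical register of (user, domain) pairs cleared through an
# approved business case and the relevant risk assessments.
APPROVED_EXCEPTIONS = {("analyst.jones", "chat.openai.com")}

def is_request_allowed(user: str, domain: str) -> bool:
    """Default-deny: GenAI traffic passes only via an approved exception."""
    if domain not in GENAI_DOMAINS:
        return True  # not a known GenAI endpoint, so outside this policy
    return (user, domain) in APPROVED_EXCEPTIONS

# Blocked by default; allowed only for the registered exception.
assert not is_request_allowed("analyst.smith", "chat.openai.com")
assert is_request_allowed("analyst.jones", "chat.openai.com")
```

Of course, a control like this only catches known endpoints, which is part of why the Irish guidance pairs technical restrictions with usage policies and staff awareness on safe usage.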