Much ado about nothing? — How to Crack a Nut

The UK Government’s Department for Science, Innovation and Technology (DSIT) has recently published its Initial Guidance for Regulators on Implementing the UK’s AI Regulatory Principles (Feb 2024) (the ‘AI guidance’). This follows the Government’s response to the public consultation on its ‘pro-innovation approach’ to AI regulation (see here).

The AI guidance is meant to support regulators in developing tailored guidance for the implementation of the five principles underpinning the pro-innovation approach to AI regulation, that is: (i) Safety, security & robustness; (ii) Appropriate transparency and explainability; (iii) Fairness; (iv) Accountability and governance; and (v) Contestability and redress.

Voluntary approach and timeline for implementation

A first, perhaps surprising, element of the AI guidance comes from the way in which engagement with the principles by current regulators is framed as voluntary. The white paper describing the pro-innovation approach to AI regulation (the ‘AI white paper’) had indicated that, initially, ‘the principles will be issued on a non-statutory basis and implemented by existing regulators’, with a clear expectation for regulators to make use of their ‘domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used’.

The AI white paper made it clear that a failure by regulators to implement the principles would lead the government to introduce ‘a statutory duty on regulators requiring them to have due regard to the principles’, which would still ‘allow regulators the flexibility to exercise judgement when applying the principles in particular contexts, while also strengthening their mandate to implement them’. There seemed to be little room for regulators to decide whether to engage with the principles, even if they were expected to exercise discretion over how to implement them.

By contrast, the initial AI guidance indicates that it ‘is not intended to be a prescriptive guide on implementation as the principles are voluntary and how they are considered is ultimately at regulators’ discretion’. The response to the public consultation also clearly indicates that the introduction of a statutory duty is not on the immediate legislative horizon. Moreover, the absence of a pre-determined date for assessing whether the principles have been ‘sufficiently implemented’ on a voluntary basis (for example, in two years’ time) will make it very hard to press for such a legislative proposal (depending on the policy direction of the Government at the time).

This seems to follow from the Government’s position that ‘acknowledge[s] concerns from respondents that rushing the implementation of a duty to regard could cause disruption to responsible AI innovation. We will not rush to legislate’. At the same time, however, the response to the public consultation indicates that DSIT has asked a number of regulators to publish updates on their strategic approaches to AI by 30 April 2024. This seems to create an expectation that regulators will in fact engage—or have defined plans for engaging—with the principles in the very short term. It is hard to fathom, though, how this does not create a ‘rush to implement’, or how putting the duty to consider the principles on a statutory footing would alter any of this.

An iterative, phased approach

The very tentative approach to the issuing of guidance is also clear in the fact that the Government is taking an iterative, phased approach to the production of AI regulation guidance, with three phases foreseen. Phase one consists of the publication of the AI guidance in February 2024; phase two comprises an iteration and development of the guidance in summer 2024; and phase three (with no timeline) involves further developments in cooperation with regulators—eg to ‘encourage multi-regulator guidance’. Given the short time between phases one and two, questions arise as to how much practical experience will be accumulated in the coming 4-6 months, and whether there is much value in the high-level guidance provided in phase one, as it only goes slightly beyond the tentative steer included in the AI white paper—which already contained some indication of ‘factors that government believes regulators may wish to consider when providing guidance/implementing each principle’ (Annex A).

Indeed, the AI guidance is still rather high-level and does not provide much substantive interpretation of what the different principles mean. It is very much a ‘how to develop guidance’ document, rather than one setting out core considerations and requirements for regulators to embed within their respective remits. A significant part of the document provides guidance on ‘interpreting and applying the AI regulatory framework’ (pp 7-12), but this is really ‘meta-guidance’ on issues such as potential collaboration between regulators for the issuance of joint guidance/tools, or encouragement to benchmark and to avoid duplicating guidance where relevant. General recommendations, such as the value of publishing the guidance and keeping it updated, seem superfluous in a context where the regulatory approach is premised on ‘the expertise of [UK] world class regulators’.

The core of the AI guidance is limited to the section on ‘applying individual principles’ (pp 13-22), which sets out a series of questions to consider in relation to each of the five principles. The guidance offers no answers and very limited steer for their formulation, which is entirely left to regulators. We will probably have to wait (at least) for the summer iteration to get more detail on what substantive requirements relate to each of the principles. However, the AI guidance already raises some issues worthy of careful consideration, in particular in relation to the tunnelling of regulatory power and the unbalanced approach to the different principles that follows from its reliance on existing (and soon to emerge) technical standards.

Technical standards and interpretation of the regulatory principles

Regulatory tunnelling

As we said in our response to the public consultation on the AI white paper,

The principles-based approach to AI regulation suggested in the AI [white paper] is undeliverable, not only due to lack of detail on the meaning and regulatory implications of each of the principles, but also due to barriers to translation into enforceable requirements, and tensions with existing regulatory frameworks. The AI [white paper] indicates in Annex A that each regulator should consider issuing guidance on the interpretation of the principles within its regulatory remit, and suggests that in doing so they may want to rely on emerging technical standards (such as ISO or IEEE standards). This presumes both the adequacy of those standards and their sufficiency to translate general principles into operationalizable and enforceable requirements. This is by no means straightforward, and it is hard to see how regulators with significantly limited capabilities … can undertake that task effectively. There is a clear risk that regulators may simply rely on emerging industry-led standards. However, it has already been pointed out that this creates a privatisation of AI regulation and generates significant implicit risks (at para 27).

The AI guidance, in sticking to the same approach, confirms this risk of regulatory tunnelling. The guidance encourages regulators to explicitly and directly refer to technical standards ‘to support AI developers and AI deployers’—while at the same time stressing that ‘this guidance is not an endorsement of any specific standard. It is for regulators to consider standards and their suitability in a given situation (and/or encourage those they regulate to do so likewise).’ This does not seem to be the best approach. Leaving it to each regulator to assess the suitability of existing (and emerging) standards creates duplication of effort, as well as a risk of conflicting views and guidance. It would seem that it is precisely the role of centralised AI guidance to carry out that assessment and to identify the technical standards aligned with the overarching regulatory principles, for implementation by sectoral regulators. In failing to do that and pushing the responsibility down to each regulator, the AI guidance abdicates responsibility for the provision of meaningful policy implementation guidelines.

Additionally, the strong steer to rely on references to technical standards creates an almost default position for regulators to follow—especially those with less capability to scrutinise the implications of those standards and to formulate complementary or alternative approaches in their guidance. It can be expected that regulators will tend to refer to those technical standards in their guidance and to take them as the baseline or starting point. This effectively transfers regulatory power to the standard-setting organisations and further dilutes the regulatory approach followed in the UK, which will in fact be limited to industry self-regulation despite the appearance of regulatory intervention and oversight.

Unbalanced approach

The second implication of this approach is that some principles are likely to be more developed than others in regulatory guidance, as they also are in the initial AI guidance. The questions and considerations are more developed in relation to principles for which there are technical standards—ie ‘safety, security & robustness’, and ‘accountability and governance’—and to some aspects of other principles for which there are standards. For example, in relation to ‘appropriate transparency and explainability’, there is more of an emphasis on explainability than on transparency, and there is no indication of how to gauge the appropriateness of either. Given that transparency, in the sense of publication of details on AI use, raises difficult questions on the interaction with freedom of information legislation and the protection of trade secrets, the passing reference to the algorithmic transparency recording standard will not be sufficient to support regulators in developing nuanced and pragmatic approaches.

Similarly, in relation to ‘fairness’, the AI guidance only provides some references to AI ethics and bias, in both cases by reference to existing standards. The document falls awfully short of any meaningful consideration of the implications and requirements of the (arguably) most important principle in AI regulation. The AI guidance merely indicates that

Tools and guidance could also consider relevant law, regulation, technical standards and assurance techniques. These should be applied and interpreted similarly by different regulators where possible. For example, regulators need to consider their responsibilities under the 2010 Equality Act and the 1998 Human Rights Act. Regulators may also need to understand how AI might exacerbate vulnerabilities or create new ones and provide tools and guidance accordingly.

This is unhelpful in many ways. First, ensuring that AI development and deployment comply with existing law and regulation should not be presented as a possibility, but as an absolute minimum requirement. Second, the duties of the regulators themselves under the EA 2010 and HRA 1998 are likely to play a very small role here. What is crucial is to ensure that the development and use of AI comply with them, especially where the use is by public sector entities (for which there is no general regulator—and in relation to which a passing reference to the EHRC guidance on AI use in the public sector will not be sufficient to support regulators in developing nuanced and pragmatic approaches). In failing to explicitly acknowledge the existence of approaches to the assessment of AI and algorithmic impacts on fundamental and human rights, the guidance creates obfuscation by omission.

‘Contestability and redress’ is the most underdeveloped principle in the AI guidance, perhaps because no technical standard addresses this issue.

Final thoughts

In my view, the AI guidance does little to support regulators, especially those with less capability and fewer resources, in their (voluntary? short-term?) task of issuing guidance in their respective remits. Meaningful AI guidance needs to provide much clearer explanations of what is expected and required for the correct implementation of the five regulatory principles. It needs to address in a centralised and unified manner the assessment of existing and emerging technical standards against the regulatory benchmark. It also needs to synthesise the multiple guidance documents issued (and to be issued) by regulators—which it currently simply lists in Annex 1—to avoid multiplying the effort required to assess their (in)compatibility and duplications. By leaving all these tasks to the regulators, the AI guidance (and the centralised function from which it originates) does little to nothing to move the regulatory needle beyond industry-led self-regulation, and fails to relieve regulators of the burden of issuing AI guidance.
