Creating (positive) friction in AI procurement

I had the opportunity to participate in the Inaugural AI Commercial Lifecycle and Procurement Summit 2024 hosted by Curshaw. This was a very interesting ‘unconference’ where participants offered to lead sessions on topics they wanted to talk about. I led a session on ‘Creating friction in AI procurement’.

This was clearly a counterintuitive way of thinking about AI and procurement, given that the ‘big promise’ of AI is that it will reduce friction (eg through automation, and/or delegation of ‘non-value-added’ tasks). Why would I want to create friction in this context?

The first clarification I was thus asked for was whether this was about ‘good friction’ (as opposed to the old, bad ‘red tape’ kind of friction), which of course it was (?!); the second, what I mean by friction.

My recent research on AI procurement (eg here and here for the book-length treatment) has led me to conclude that we need to slow down the process of public sector AI adoption and to create mechanisms that bring the ‘non-AI’ option back to the table, together with several ‘stop project’ or ‘deal breaker’ trumps to push back against the tidal wave of unavoidability that seems to dominate all discussions on public sector digitalisation. My preferred solution is to do so through a system of permissioning or licensing administered by an independent authority, but I am aware, and willing to concede, that there is no political will for it. I thus started thinking about second-best approaches to slowing down public sector AI procurement. This is how I got to the idea of friction.

By creating friction, I mean establishing a structured decision-making process that allows for collective deliberation within and around the adopting institution, supported by rigorous impact assessments that tease out the second- and third-order implications of AI adoption and thoroughly interrogate first-order issues around data quality and governance, technological governance, and organisational capability, in particular around risk management and mitigation. This is complementary to, but hopefully goes beyond, emerging frameworks for determining organisational ‘risk appetite’ for AI procurement, such as that developed by the AI Procurement Lab and the Centre for Inclusive Change.

The conversation around this focus on ‘good friction’ moved in different directions, but there are some takeaways and ideas that stuck with me (or that I managed to jot down in my notes while chatting to others), such as (in no particular order of importance or potential):

  • the potential for ‘AI minimisation’ or ‘non-AI equivalence’ to test the need for (specific) AI solutions—if you can sufficiently approximate, or replicate, the same functional outcome without AI, or with a simpler type of AI, why not do it that way?;

  • the need for a structured catalogue of solutions (and components of solutions) that are already available (sometimes in open access, where there is lots of duplication) to inform such considerations;

  • the importance of asking whether procuring AI is driven by considerations such as the availability of funding (is this funded if done with AI but not funded, or hard to fund at the same level, if done in other ways?), which can clearly skew decision-making; in other words, the importance of considering the effects of ‘digital industrial policy’ on these decisions;

  • the power (and relevance) of the deceptively simple question: ‘is there an interdisciplinary team dedicated to this, and exclusively to this?’;

  • the importance of knowledge and understanding of the tech and its implications from the beginning, and of expertise in the translation of technical and governance requirements into procurement requirements, to avoid ‘games of chance’ whereby the use of ‘trendy terms’ (such as ‘agile’ or ‘responsible’) may or may not lead to the award of the contract to the best-placed and best-fitting (tech) provider;

  • the possibility of adapting civic monitoring or social witnessing mechanisms used in other contexts, such as large infrastructure projects, so they can be embedded in the contract performance and auditing phases;

  • the importance of understanding displacement effects and whether deploying a solution (AI or automation, or similar) to deal with a bottleneck will simply displace the issue to another (new) bottleneck somewhere along the process;

  • the importance of understanding the broader organisational changes required to capture the hoped-for (productivity) gains arising from the tech deployment;

  • the importance of carefully considering and resourcing the much-needed engagement of the ‘intelligent person’ who needs to check the design and outputs of the AI, including frontline workers, those at the receiving end of the relevant decisions or processes, and the affected communities; in other words, the importance of creating meaningful and effective deliberative engagement mechanisms;

  • relatedly, the need to ensure organisational engagement and alignment at every level and every step of the AI (pre)procurement process (on which I would recommend reading this recent piece by Kawakami and colleagues);

  • the need to assess the impacts of changes in scale, complexity, and error exposure;

  • the need to create adequate circuit-breakers throughout the process.

Certainly lots to reflect on and to try to embed in future research and outreach efforts. Thanks to all those who participated in the conversation; those interested in joining it can do so in a structured way through this LinkedIn group.
