Introduction
The introduction of new technologies has always thrust the issue of public procurement into the spotlight, subtly or otherwise. Recently, the conversation in tech circles has been dominated by one topic über alles: Artificial Intelligence.
So far, AI has already gained a credible influence over the way public procurement tenders can be drafted and submitted. Over the next few years, many anticipate larger strides, in the hope of a more efficient and smoother procedure for any individual or entity interested in bidding for a public service or product.
However, this technology has not yet registered a significant foothold when it comes to the evaluation process. Indeed, unlike other elements related to the monitoring and effectiveness of spending control, the use of AI tools here carries its own legal and ethical quandaries, the answers to which remain not only elusive in practice but also the subject of considerable debate in theory. Approaching this debate is no longer an option; it is becoming a necessity.
Quis custodiet ipsos custodes?
The apex of any public procurement procedure is its decision stage, be it through a sole individual, a tribunal or a committee. The forms and procedures may differ between countries, but the crucial elements of transparency, expediency and adherence to regulations remain universal.
The adjudicators or evaluators of a tender act as gatekeepers of the tendering process. At the same time, they are entrusted by the State, through the Executive, with ensuring that the process remains smooth and cost-effective.
This process, at present, is not without its faults. Evaluation Committees have many a time been deemed improperly constituted, owing to cases of prima facie partiality or conflicts of interest. In other instances, evaluation organs have acted ultra vires, or have been found at fault on substantive and/or procedural grounds. This in turn has led to, inter alia, increased costs for supplier and Government alike, significant delays, and further public discontent over stalled national projects.
Aside from these identifiable failings, there is also the likelihood of implicit biases (personal experiences, instances of groupthink, political and ideological agendas) that members of an evaluation committee, unintentionally or otherwise, fail to address properly.
These are issues inherent in human nature, and yet, more often than not, they play a subtle but important role in decision making. The end result is that evaluators, knowingly or otherwise, adjudicate without the true objectivity that is fundamental to any evaluation.
Consistent Criteria, or Intransigent Checklisting?
Proponents of Artificial Intelligence point to the issues above as further ailments of a system in need of a revamp. Another point, argued specifically by those with some experience of tender evaluation within their local Governments, is that such adjudication more often than not ends up akin to a checklist exercise.
=IF A, THEN B.
Such an inflexible state of affairs may serve well in the bulk of tendering processes, and not only for the least economically significant procurements: many elements of a tendering process do indeed require formulaic, standardised requirements, which help ensure compatibility with external political, legal and environmental constraints. Indeed, any type of software, especially one as intelligent as AI, can alleviate considerable pressure in ensuring such conformity, leaving room for even more ambitious projects to materialise.
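To make the point concrete, the formulaic screening described above can be reduced to a handful of pass/fail rules. The sketch below is purely illustrative and hypothetical: the criteria, thresholds and field names are invented for the example and do not reflect any real tendering framework or evaluation software.

```python
# Illustrative sketch only: a hypothetical, rule-based compliance screen of the
# "=IF A, THEN B" kind described above. All criteria and thresholds are invented.

from dataclasses import dataclass

@dataclass
class Bid:
    has_valid_licence: bool
    bid_bond_submitted: bool
    price: float
    delivery_weeks: int

# Each rule is a (description, predicate) pair: pass or fail, nothing in between.
COMPLIANCE_RULES = [
    ("Valid trade licence attached", lambda b: b.has_valid_licence),
    ("Bid bond submitted", lambda b: b.bid_bond_submitted),
    ("Price within published budget", lambda b: b.price <= 500_000),
    ("Delivery within 26 weeks", lambda b: b.delivery_weeks <= 26),
]

def checklist_screen(bid: Bid) -> list[tuple[str, bool]]:
    """Return a pass/fail mark for every rule, exactly like ticking boxes."""
    return [(description, rule(bid)) for description, rule in COMPLIANCE_RULES]

bid = Bid(has_valid_licence=True, bid_bond_submitted=True, price=480_000, delivery_weeks=30)
for description, passed in checklist_screen(bid):
    print(f"[{'PASS' if passed else 'FAIL'}] {description}")
```

Such a screen is trivially automatable, which is precisely why the formal conformity checks are the least contested place for software to assist.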
However, adherence to standardised templates can only lead one so far.
A tick-the-box exercise
Current AI technology harvests (or is fed) substantial amounts of data, which it can then process to generate the requested output. On paper, this allows any user to obtain a result that is not only optimal but also consistent. And for many proponents within the public sector, this procedural and substantive uniformity is one of the cardinal rules by which all tendering procedures should abide.
However, no public procurement department operates in isolation. Any evaluation process must keep in mind market forces and supply constraints, whether local or regional. This is all the more relevant for small-scale economies.
Of course, an evaluation committee made up of human beings, especially those intimately familiar with these technicalities, has the ability to weigh such issues and decide on the fairest way forward. Any evaluation conducted at the adjudicatory stage of a public procurement process must be in tune with all the metrics necessary for an informed decision. For human evaluators, this information can be brought to their attention either by the bidders themselves or through the research they conduct when analysing the submissions.
For any AI software, gathering this relevant data may (ironically) not be fully automated. Not only must it be submitted as part of the evaluation criteria, it must also be weighted according to its importance in the relevant geo-political context. If not, the whole tendering process would be built, evaluated and submitted for approval in a manner wholly unrepresentative of the scenario outside the boardroom.
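As a purely illustrative sketch of that weighting problem, consider a simple weighted-scoring function: the same two submissions can rank differently depending on how the evaluation weights are set for the local context. The bidders, criteria and weights below are all hypothetical.

```python
# Hypothetical weighted scoring: the same bids, scored under two different
# weighting schemes, can produce different winners. All figures are invented,
# and each criterion is assumed to be pre-normalised to a 0-1 scale.

def score(bid: dict, weights: dict) -> float:
    return sum(weights[criterion] * bid[criterion] for criterion in weights)

bids = {
    "Bidder A": {"price": 0.9, "local_supply_chain": 0.3, "delivery_speed": 0.8},
    "Bidder B": {"price": 0.7, "local_supply_chain": 0.9, "delivery_speed": 0.6},
}

# Weights tuned purely on cost and speed...
generic_weights = {"price": 0.6, "local_supply_chain": 0.1, "delivery_speed": 0.3}
# ...versus weights that prioritise a small economy's supply constraints.
contextual_weights = {"price": 0.3, "local_supply_chain": 0.5, "delivery_speed": 0.2}

for label, weights in [("generic", generic_weights), ("contextual", contextual_weights)]:
    ranking = sorted(bids, key=lambda name: score(bids[name], weights), reverse=True)
    print(label, "->", ranking)
```

Under the generic weights Bidder A wins; under the context-sensitive weights Bidder B does. The software can only apply whatever priorities a human has chosen to encode.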
What this may mean is a lose-lose scenario, with all parties facing a litany of different issues: an ineffective service or product that fails to reach any of its intended targets; a public body facing risky cost overruns, and an internal or external outcry for failing to deliver on what was promised; a bidder who will either refrain from engaging with public bodies in the future, or else grow comfortable enough in its assigned position to demand payments or conditions over and above those agreed and stipulated; and the taxpayer left with a hefty bill for marginal gains, if any.
The Appellate Stage – More than just Ones and Zeros?
The human element currently present in first-instance adjudication is also important if and when we reach the appellate stage.
Appeals can be lodged for many reasons, as discussed in a previous section. To the trained eye, this is par for the course: a legal system working as designed, ensuring as far as possible that the most equitable and just conclusion is reached the second time around.
Another way to look at it is this: an appeal is usually the least direct route to an outcome. Moreover, the possible use of AI at the appellate stage leads to more questions than answers.
Philosophically, the use of AI software can create a political and legal Catch-22 for any entity or evaluator. Suppose such a technology is applied at every evaluation stage, both in the first instance and in subsequent re-evaluations. Should the same AI software used in the first stage of proceedings be used again in the second? Will the parameters used in the initial evaluation remain unchanged? If so, any subsequent request for review becomes inherently worthless: expecting a different outcome from the same set of variables, run any number of times, is not a process of adjudication; it is an exercise in ticking a box. If the AI software in the second stage operates on a different dataset, or is a different system entirely, another question arises: which parameters have changed? And if the evaluation software is entirely different, what justification is there for holding one system at a higher legal standing than the other?
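The first of those questions can be illustrated very simply: a deterministic scoring system, re-run on the same submission with the same parameters, can only reproduce its original verdict. The sketch below is a hypothetical illustration of that point, not a description of any actual evaluation software.

```python
# Hypothetical illustration: a deterministic evaluation re-run with identical
# inputs and identical parameters necessarily returns the identical result,
# so an "appeal" to the same unchanged system cannot alter the outcome.

def evaluate(submission: dict, parameters: dict) -> float:
    return sum(parameters[criterion] * submission[criterion] for criterion in parameters)

submission = {"price": 0.8, "quality": 0.6}
parameters = {"price": 0.5, "quality": 0.5}

first_instance = evaluate(submission, parameters)
appeal = evaluate(submission, parameters)   # same software, same parameters

assert first_instance == appeal  # the "re-evaluation" is a foregone conclusion
print(first_instance, appeal)
```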
One may answer by requiring that such software play only a limited part in adjudicating any tenders or calls issued by a Government entity. If the software is used strictly in the first instance, then the essential human element in the second instance not only remains, it supersedes the technology – the very software initially sought out to remove the issues brought about by human interference.
If the opposite scenario plays out, it is the human element that becomes subservient to the rigidity of the software. This, however, downgrades the human capacities for foresight and contextualisation to those of a program, leaving all decisions at the mercy of an uninvolved, absent and uninterested party: the software programmer.
Concluding comments
All things must change, and in turn so must we. The technostructure affecting our daily lives continues to be transformed at breakneck speed, and Governments, businesses and other associations are struggling to remain in step with the times, let alone ahead of them.
Indeed, while a discussion on the role of AI seemed the stuff of science fiction only a few years ago, it is nowadays a prominent feature, all the more so in an area of public administration so tightly regulated, and a sphere of law so frequently the source of public controversy and media speculation. The influence that this technology has had, and may continue to have, on our daily lives as citizens is rapidly accelerating.
Whatever role AI may play over the next few generations is yet to be fully deciphered. Nevertheless, this technology must never supplant the fundamental rights and principles that have ensured faith, trust and accountability in all public procurement proceedings. If that Rubicon is crossed and that trust disintegrates, the involvement of both private entities and individuals will wither away, paralysing public entities everywhere and removing the stability that is so pivotal to the good of all citizens.