This guest post by Saema Jaffer* explores the policy approaches that can be followed in the UK, and elsewhere, in relation to the use of AI in the writing and evaluation of tender documents.
It was submitted to the ‘a book for a blog’ competition and has won Saema a copy that will soon be in the post.
Please note Saema’s disclaimer that “I am a commercial practitioner and I am writing in a personal capacity. The views expressed in this blog are my own, and do not represent those of my employer”.
PROCUREMENT CONFERENCE DAY
I am at an industry conference day: there are rows upon rows of supplier stands, and tables covered with branded pens and tote bags. I am reading a leaflet for a contract management portal when I overhear a conversation between a buyer and a supplier.
“I suppose at this rate, you could just get AI to write all your bids for you?” says the buyer. This elicits a defensive reply from the supplier: “And you could write the tender documents and evaluate all of the bids using AI without ever reading them!”
AI EVALUATIONS
I am absolutely eavesdropping at this point and think: “now there’s a terrible idea!” AI might be fast, but buyers cannot rely on it to be a subject matter expert. What’s more, quite apart from breaching evaluation requirements, what if the AI tool were to misunderstand a supplier commitment, or miss a crucial element revealing non-compliance with the contract? If a bid is incorrectly evaluated, the buyer can hardly hold the AI response generator accountable.
MUTUAL NIGHTMARE
I suppose the bidder’s response would be that the interests here are mutual: the buyer wants to avoid contracting on a scope which does not meet their requirements, still less with a supplier who cannot deliver that scope; equally, the bidder does not wish to be tied, as part of their bid, to contractual commitments that they cannot deliver. Simply relying on AI without any human involvement on either the client or supplier side, then, presents major problems for both buyer and supplier.
ONE SIZE FITS NO-ONE
But what if the supplier were to use AI to simply write the initial bid, and then review and edit it to make sure there were no commitments that it could not deliver?
After all, the public sector is keen to simplify procurement for small and medium-sized enterprises. An easy way to help those bidders is to use an automated bid writer to help them write their first draft.
Relying on the same AI tools, as I imagine most SME bidders would, poses another risk: the breadth of ideas and sources of creative solutions shrinks. If AI is to be used, we all need to be aware of what is being lost, and be ready to accept the one-size-fits-all offers it drafts.
FEEDING THE AI MACHINE
So, does this mean that bidders should have no restrictions in writing their bids using AI, provided they commit to delivering whatever they end up submitting in their tender?
Not quite. The reason for this is yet another question: what information did the bidder feed the AI response generator to obtain an answer? Was it simply the evaluation question? Possibly.
Was it the full extent of the ITT documents, including the scope and the contract? That seems more likely.
What if that ITT contained commercially sensitive information about the buyer’s organisation? That information would have been pasted into a chatbot and then stored…where, and sent to whom?
All the time and effort the buyer would have spent checking for conflicts of interest in the bidding team, signing non-disclosure agreements, and being careful to release documents only to pre-qualified suppliers would mean nothing if the information can simply be released to an unidentifiable person who can disappear into the ether. It’s for the same reason that a buyer might be wary of using AI to write invitations to tender.
POTENTIAL SOLUTIONS
So what are my solutions?
Option 1: Ban AI outright, requiring bidders to commit not to use it. This seems draconian for the vast majority of tenders. I suspect it would also prove ineffective. It’s not so much using a sledgehammer to crack a nut, but using a cardboard cut-out of a sledgehammer to intimidate the nut into cracking.
Option 2: Use the bidder’s own AI solutions. So far, I’ve considered the use of AI that is owned and managed by a third party outside the buyer and bidder relationship. But what if an AI tool is owned by the bidder, so that it can control where the information goes?
It’s a partial solution but it does not come without risks. For example, where ethical walls need to be set up within the bidding organisation to prevent conflicts of interest, how can I be sure that a shared AI bot won’t take information from one side of the ethical wall and feed it to the other?
If the bidder is eliminated, can commercially sensitive information that has already been inputted into the system really be destroyed? Once it has been fed into an AI system, can anything really be destroyed?
Option 3: Slow down AI adoption by establishing a framework with clear guidelines on where AI can be used, and the risks (and consequences) of improper use.
This option recognises that an all-out ban on AI is unlikely to work. Therefore, we need to work on developing suitable criteria for where it can and cannot be used, making clear the inherent risks.
Buyers and suppliers both need to be aware of the risks of using AI as part of tendering processes, and ensure their organisations develop suitable frameworks and ways of working to incorporate AI in a way that manages risks and protects sensitive data.
SLOWING DOWN AI ADOPTION
What requirements should be established to slow down reckless AI adoption and ensure there are effective safeguards in place?
I’ve come up with a shortlist of six practicable ideas below:
- Contracting authorities should be clear on the scope of AI use. Specific areas or aspects of the tender submission process where AI cannot be used should be marked out, alongside information which cannot be inputted into third-party AI systems.
- AI systems should include mechanisms for human oversight and intervention, ensuring that critical decisions are reviewed and approved by humans. Linked to this, bid writers must have been trained in how to effectively and ethically use AI technologies before participating in a tendering process.
- The bidding organisation must confirm, as part of their offer, that everything that the AI tool has produced has been separately verified by the bidding entity and that the AI-drafted submission constitutes a firm offer from the bidder.
- Bidders must explicitly disclose if and how AI technologies were used in the preparation of their tender submissions. This must be auditable, allowing the contracting authority to verify the integrity and reliability of the AI tools used.
- Bidders should provide clear explanations of AI-generated outputs, making it understandable how conclusions or recommendations were reached.
- Independent third-party audits or certifications must be required to validate the AI technology’s adherence to specified standards in relation to:
  - technical standards, so that AI systems are quality assured;
  - ethical guidelines, ensuring that there is no theft of intellectual property;
  - data protection laws, ensuring that any personal data used or processed is secure and handled in accordance with legal requirements; and
  - non-discrimination, because the AI tools must not introduce bias or discrimination into the tendering process. Bidders should provide evidence of measures taken to prevent such issues.
The adoption of such a framework should give confidence that AI technologies are used responsibly and ethically in the procurement process.
In the words of Alan Turing, a founding figure of modern computing: “We can only see a short distance ahead, but we can see plenty that needs to be done.”