“The Department will select the best value for money proposal having regard for the results of the technical assessment and the price, and after considering the associated risks”.
This is a standard phrase from Request for Tender documents that many of us have wrestled with. But what does this Holy Trinity of Evaluation actually mean and how do Tender Evaluation Panels work it out?
Holy Trinity Element 1: Technical Assessment
Each proposal is evaluated against the assessment criteria to determine its technical ability to satisfy each of the key items listed in the Statement of Requirements (SOR). The scores awarded against each criterion are then summed and divided by the number of criteria to produce an average (unweighted mean) technical score.
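As a minimal sketch of that averaging step, assuming hypothetical per-criterion scores on a 1-to-5 scale (the scale and figures are illustrative, not from the article):

```python
# Hypothetical scores awarded to one bid, one per assessment criterion.
criterion_scores = [4.0, 3.5, 4.5, 3.5]

# Unweighted average: sum of scores divided by number of criteria.
average_score = sum(criterion_scores) / len(criterion_scores)
print(f"Average technical score: {average_score}")  # 3.875
```

A real evaluation might weight some criteria more heavily than others; this sketch assumes the simple unweighted average the article describes.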
Holy Trinity Element 2: The Price
This average score is then divided by the total price of the solution to produce a measure of how much capability the Agency gets for each dollar it spends. The result of this calculation is the Value for Money (VFM) Index.
Value for Money Index Hypothetical
Suppose there are three bids. Bid A had a final average technical score of 3.89 across all the assessment criteria and a cost of $1,000; Bid G scored 3.70 and cost $850; and Bid X scored 3.00 and cost $650. Applying the simple formula described above gives the VFM index for each proposal:
A: 3.89 / 1000 = 0.00389
G: 3.70 / 850 ≈ 0.00435
X: 3.00 / 650 ≈ 0.00462
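The calculation above can be sketched in a few lines, using the figures from the hypothetical:

```python
# Each bid: (average technical score, total price in dollars).
bids = {"A": (3.89, 1000), "G": (3.70, 850), "X": (3.00, 650)}

# VFM index = average technical score / total price.
vfm = {name: score / price for name, (score, price) in bids.items()}

# Rank the bids from highest to lowest VFM index.
for name, index in sorted(vfm.items(), key=lambda kv: kv[1], reverse=True):
    print(f"Bid {name}: VFM index = {index:.5f}")
```

On the raw index alone, Bid X comes out on top, which is exactly the outcome the next section questions.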
And the winner is?
On a strict VFM index basis, Bid X would win the tender because it has a higher VFM index than the other two bids, even though both offered stronger technical solutions. By choosing X, the customer gets more technical capability for each dollar invested: more “bang for their buck”, so to speak.
Holy Trinity Element 3: Risk
What role does risk play in this process? Risk introduces a level of sophistication into the debate over which bid finally emerges with the recommendation of “preferred supplier” and wins the contract. Bid X may well have the highest VFM index, but its technical score is substantially below that of the other two bids, so end users could be very dissatisfied once they actually get to use it. We have all seen headlines decrying solutions that cannot do what the Agency wants, or the string of contract variations that result in major cost blow-outs. No evaluation panel worth its salt wants to be known as the group that “picked that turkey”.
So wait… who wins the Tender?
In my experience, and there are no guarantees, Bid G would likely emerge as the preferred supplier. Why? Because it is 15 per cent cheaper than Bid A and, with a comparable technical score, carries less risk, so it represents the better solution, all things considered.