Q&A Part 1 from 7 secrets from inside government source evaluations and how you can use them to create winning proposals
Do you know how the government really evaluates proposals? Have you ever wondered what they look for when they read through each offer and what they like and dislike when scoring proposals? Not knowing this makes submitting a proposal to the U.S. Government like firing a shot in the dark. What happens on the “other side” is a mystery to most contractors, and debriefs often don’t tell the whole story, or even half of it! This is because those who prepare proposals and those who evaluate them have vastly different perspectives.
In this webinar, we release the results of our 3-year research project on how the government evaluates proposals and what capture and proposal managers need to know in order to create better, higher-scoring proposals and win more highly competitive bids. Watch the webinar replay to hear Wendy Frieman, Lohfeld Principal Consultant, provide lessons from the “other side of the divide” based on 3 years of interviews with acquisition personnel who have read and evaluated proposals with values up to and exceeding $1B. She explains 7 secrets learned from the other side and shows you how you can use these to up your competitive game.
Here is Part 1 of the questions we received during the webinar with answers from Wendy Frieman.
Q: How is information organized and presented in winning bids? Is there a magic formula? Is it all about compliance with the evaluation criteria as they are presented, or is it more important to tell a story for a more emotional buy-in?
A: I love this question! No, there is no magic formula. Winning bids come in all shapes and sizes, and any proposal manager who is honest will tell you that there are always surprises—proposals that seemed dead on arrival turn out to be winners, and vice versa. Stories keep the evaluators reading, which is a good thing. But, stories won’t compensate for an absence of material that maps directly to the evaluation criteria. So, I would start with the foundation and then build the house. Address the evaluation criteria first. Then tell your story.
Q: How does the government assess risk with regard to proposal responses? That is, how does the government determine the “riskiness” of a proposal?
A: Good question. This was not the subject of any of my questions, but it came up repeatedly in answers. It appears that proposal teams regularly underestimate the degree to which source selection teams are risk averse. Of course, what constitutes a risk is highly subjective. But, if the perception is there, the fear will overwhelm the stated evaluation criteria.
Q: Is there a standard to which the source selection board members must adhere, e.g., government contractors must adhere to the FAR?
A: Yes, the people on the board have to follow the FAR. However, the FAR provides guidance that is open to interpretation.
Q: We hear about automated evaluation tools – what tools are used, and when/how are they used?
A: There are basically two categories of tools: 1) those used to help the non-price evaluators, and 2) the pricing tools. Both are used, and there are contracting officers who will discuss the use of the tools. It depends on the particular agency and office involved.
Q: What is the evaluator expecting when the solicitation has 50 pages of “shalls” and you have a response limited to 30 pages?
A: This is a great question, and one we all struggle with. There are many ways to address requirements. First, it depends on what kind of RFP you are responding to. Not all RFPs require or suggest that you write to every single “shall.” That format is more common in product or commodity bids. If you do need to respond to every “shall,” you can aggregate them in a table or refer to them by their RFP paragraph number to save space. If you don’t have to address each individually, I always recommend sorting them in priority order and allocating your page count accordingly. This demonstrates knowledge of the work because it shows you know what is important.
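The "sort by priority, then allocate page count" idea above can be sketched in a few lines. This is only an illustration; the requirement groups, weights, and the 30-page limit are hypothetical values standing in for whatever your own RFP analysis produces:

```python
# Hypothetical sketch: split a 30-page limit across requirement groups
# in priority order, proportional to assigned importance weights.
PAGE_LIMIT = 30

# (requirement group, priority weight) -- illustrative values only
groups = [
    ("Technical approach", 5),
    ("Management plan", 3),
    ("Staffing", 2),
    ("Transition", 1),
    ("Quality assurance", 1),
]

total_weight = sum(weight for _, weight in groups)

# Pages per group, rounded to half-page granularity for a first-cut outline
allocation = {
    name: round(PAGE_LIMIT * weight / total_weight, 1) for name, weight in groups
}

for name, pages in allocation.items():
    print(f"{name}: {pages} pages")
```

A first pass like this gives the proposal team a defensible starting outline; the numbers then get adjusted by hand as the annotated outline matures.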
Q: Who provides oversight to ensure the evaluators are really using the evaluation criteria provided in the RFP?
A: The contracts shop has this responsibility. It is exercised differently in different organizations, but there should be training and well-articulated guidelines in advance of each source selection process.
Q: How much subjectivity is there in what is supposed to be a very objective process? What’s the latest “thing” in government procurement evaluations?
A: There is subjectivity, but it is hard to say how much. The people I interviewed portrayed the process as fair, rational, and objective, perhaps because of a wishful-thinking bias. Although there are regulations, processes, and training, ultimately human beings are performing the evaluation and they are, well, human. This is why it is extremely important to find out as much as possible about who is participating in the source selection process.
My interviews did not focus on trends, but I think everyone in the business has noticed increased sensitivity to the potential of a protest. This drives a lot of the source selection process.
Q: Can you discuss, from your perspective, why companies do not invest in formal capture management as a discrete function in the process of new business acquisition?
A: Although this was not within the scope of my interviews, my impression is that it takes time to show the return on investment for a disciplined capture process. Many companies just don’t have the staying power.
Q: Is it your experience that evaluators are using electronic word search functions to evaluate proposals? Does this mean that we should avoid rephrasing concepts for readability in favor of parroting back the exact phraseology to meet the word search function?
A: In general, it is best to use the terminology from the RFP to the extent possible. However, it is also important not to repeat the RFP requirements word for word, and many RFPs warn against this. So it’s a balancing act.
Q: I would love you to address the situation when the government publishes an RFP riddled with so many errors and, during the Q&A window, refuses to cooperate and it is apparent that the contracting officer (CO) will not be making the needed clarifications by the consistent response of “bid as shown” or “refer to RFP.” How many times and ways can you badger a response out of the CO without ticking them off?
A: Great question! If the RFP ambiguities have to do with contractual or pricing issues, I would stay on it and not worry about too much badgering. If you don’t win, you will have a paper trail showing that you tried to clarify.
If the questions are about the non-price factors, I use a three-part test. First, is the question itself going to reveal anything to my competitors? If yes, don’t go any further. Second, can I submit a compliant proposal without getting the answer to the question? If yes, stop there. Finally, can I live with the worst possible answer to the question? If so, then I would ask the question, but only once. As you have seen, often the answers that come back just make the RFP less clear. The best strategy is to state your understanding of the RFP clearly. That is a better use of time and energy than badgering a CO who probably cannot answer the questions in the first place. I think it is less about annoying the CO and more about how best to use the finite amount of time afforded for the RFP response.
Q: Is there a difference between the evaluation process used by DoD and Federal Government clients? Do artistic cover pages make a difference since they are not part of the evaluation?
A: There are differences among evaluation processes even within the same agency and contracts shop. The FAR allows for this, and often the idiosyncrasies are not documented in any way that contractors can see. I did ask my interview subjects about proposal covers, and they all said that covers make no difference.
Q: Our company has received evaluation notices regarding our sister-subsidiary connections for evaluation of both past performance and management approach. Understanding how best this can be presented to receive higher ratings and to avoid questions would be of interest.
A: This was outside the scope of my interviews. Typically, the evaluation teams consider many sources when they are rating past performance. I suspect that it would depend on how close your organization is to its sister-subsidiary and how much they have in common with you. If you both do the same kind of work for the same customer, this could be a problem. In general, when there is bad past performance, the best way to deal with it is to show what you learned from it and how you improved your processes or technology to avoid a recurrence.