Q&A Part 2 from “7 Secrets from Inside Government Source Evaluations and How You Can Use Them to Create Winning Proposals”

Do you know how the government really evaluates proposals? Have you ever wondered what evaluators look for when they read through each offer and what they like and dislike when scoring proposals? Not knowing makes submitting a proposal to the U.S. Government like taking a shot in the dark. What happens on the “other side” is a mystery to most contractors, and debriefs often don’t tell the whole story, or even half of it. That is because those who prepare proposals and those who evaluate them have vastly different perspectives.

In this webinar, we release the results of our 3-year research project on how the government evaluates proposals and what capture and proposal managers need to know to create better, higher-scoring proposals and win more highly competitive bids. Watch the webinar replay to hear Wendy Frieman, Lohfeld Principal Consultant, share lessons from the “other side of the divide” based on 3 years of interviews with acquisition personnel who have read and evaluated proposals valued at up to, and exceeding, $1B. She explains the 7 secrets learned from the other side and shows how you can use them to up your competitive game.

Click to watch the webinar replay and download the presentation and research brief.

Here is Part 2 of the questions we received during the webinar with answers from Wendy Frieman.

Q: What do the evaluators really do with an executive summary?
A: The executive summary can be a quick reminder of what is in the proposal and a roadmap that makes life easy for the evaluators. That is always a good thing. However, if it is not evaluated, it probably doesn’t make sense to spend too much time on it.

Q: I hear so much about the importance of a good executive summary. Do you have any specific recommendations for government responses?
A: I dealt with this to some extent in the webinar. Typically, the executive summary is not evaluated. Evaluators find it helpful, however, so it might influence the way they see other factors. Also, it can be very helpful for the team because its creation forces a distillation and concise presentation of different concepts. It is not worth spending too much time on, however.

Q: Does the government review executive summaries required by the RFP when the selection criterion is LPTA (especially when the executive summary is not scored)?
A: Executive summaries are rarely evaluated, but they can be helpful to the source selection team if the proposal is long or complicated.

Q: We often hear that “evaluators don’t read proposals, they score them,” or that they don’t have time to read, so they skim. Generally, we take this to mean that they look for compliance with requirements and responsiveness to specific criteria, strengths, risk reducers, key benefits, facts, and proofs: the things that allow them to make a fair and objective decision quickly. This all makes sense. However, our company, and most of the major capture/proposal consulting companies, still offer proposal writing training that touches on writing mechanics. Does good, clear, concise writing (active voice, direct, plain talk, short-to-medium sentences and paragraphs, small words when appropriate, etc.) truly matter to evaluators?
A: I did not ask this specific question. What I learned is that the evaluators appreciate anything that makes their job easier. As they are hunting for material that relates to the evaluation criteria, concise writing makes a big difference. It makes the proposal easier to score. So, I like your approach of active voice, short words, and plain talk very much. It is consistent with what my interview subjects said they appreciated.

Q: Some folks in industry say storyboarding is dead. What are your thoughts on that? And if you don’t use storyboards to bake in themes from the beginning, how do you write a winning proposal tailored to your client?
A: Good question. I stopped using storyboards because instead of helping the writers, the tool took on a life of its own and became an obstacle. I like using an annotated outline based on the evaluation criteria. I have found that the best way to introduce themes and discriminators is to plan for many iterations. Start with a compliant proposal. Then add all the bells and whistles.

Q: Do you think the fear of protests has elevated the amount of scrutiny on evaluations/evaluation boards to make them more unbiased?
A: In a word, yes. I did not ask this question specifically, but the importance of defending the scores assigned to each proposal came up many times in my discussions. Interview subjects cited instances when the team wanted to award a contract to a particular company because there were features they liked, but they couldn’t find a way to assign points against the evaluation criteria.

Q: I have always heard about word-search software being used and how, if your proposal is too graphics-intensive, the software can’t pick up the content. How pervasive is this?
A: I did not ask about this specifically. But, to be safe, I would make sure that important compliance items are also addressed in the text, to the extent that space permits.

Q: How common are checklists among evaluators, compared to more general subjective assessment of responses?
A: Checklists are very common, but for compliance, not for evaluation against subjective criteria.

Q: What would be the average level of specific knowledge of the services being procured by the typical evaluator?
A: There are almost always some generalists on the evaluation team because the SMEs are very busy and often overcommitted.

Q: How much training does the typical evaluator (general and specific) receive prior to performing his/her evaluation?
A: The level of training varies widely, but there is always just-in-time training for the entire team.

Q: How closely does the classical “color” team approach used in proposal preparation mirror the client’s evaluation process?
A: I believe that color teams in industry are quite different from the source selection evaluation team, for reasons I tried to explain in my webinar.

Q: How consistently and closely do evaluation plans follow Sections L and M? When L and M track, is it safe(r) to assume the evaluation will be largely consistent with the instructions and criteria? Who prepares the evaluation plans: the acquisition team, the evaluation team, or a legal team? And when are they prepared: with the RFP, or when someone gets around to it AFTER the RFP is finalized?
A: I believe I addressed some of these questions in my webinar. Evaluation plans are part of the acquisition process and are prepared ahead of time. You can find general guidance on these in FAR Part 15 and in various entries in ACQuipedia on the Defense Acquisition University website.

Q: What advice do you have when Section L and Section M are not aligned? Which should take priority in your proposal outline at the top level when they are in conflict?
A: This question is debated endlessly by those of us in the field who go to proposal conferences and discuss proposal theology. I did not address it directly in my questionnaire. However, I try to organize according to Section M first. Then I address those items in Section L that are not also in Section M; but if there are space and time constraints, I invest much more heavily in the Section M topics. Of course, some RFPs make it difficult to do this. It’s a constant frustration.

Q: How do evaluators rate “desired/preferred” qualifications that the bidder’s proposal might not meet?
A: Good question. I am afraid I don’t have a good answer. This did not come up in my interviews. However, I always treat “desired” or “preferred” as “required,” because I figure that this is how my competitors will interpret that language.

Q: In structuring source evaluation teams, is there general acquisition guidance that contracting officers use to help them decide the size or composition of those teams, or is it just whoever is available, regardless of competency?
A: There is very little in the FAR about how to structure the team. This is up to the contracting officer, and each agency and office has its own process.

Q: How do you write a technical proposal that addresses each task in the PWS but avoids just reiterating the PWS, particularly with fairly non-technical requirements (for example, administrative support services)?
A: This is the art and science of good proposal writing! The answer is different every time, depending on the format of the proposal and the products or services being sold. For non-technical services, I would amplify what is in the PWS by showing exactly how the work will be done, how it will be supervised, where your organization has done this successfully in the past and with what results.

Q: If the evaluation criterion for past performance is “neutral” when you don’t have any, would that be a reason not to bid (assuming that competitors have good past performance)?
A: This was not covered in my interview questions; however, I would not bid in this scenario. It is true that you will be rated neutral, but other companies will have positive ratings, which puts you at a disadvantage.

Q: Can you elaborate on protests of source evaluation board decisions—how to protest-proof your submittal?
A: I did not ask this question. However, it appears that avoiding a protest is really the responsibility of the government, not the bidder. The best the bidder can do is to adhere closely to the proposal instructions and evaluation criteria (which presumably you would be doing anyway).