In earlier blogs I discussed the difficulties of getting funded, the competition for limited funds, and the cost of doing research. This blog is about the process by which a grant proposal requesting funding is judged successful or not (mostly not). The process is called peer review, and it has been a foundation of scientific funding in the United States for over 75 years. As the name suggests, a group of your ‘peers’ reads your proposal and judges it against an ‘objective’ set of criteria. Peers, in this case, are Ph.D.-level scientific colleagues, most often from academia.
Typically, there is a panel meeting hosted by the government agency providing the funding, where the reviewers gather to discuss the proposals. Such a panel might have twenty members, and each proposal will have been read by three of them. In total, the panel might be responsible for between 80 and 120 proposals, five or ten of which will be funded. Each reviewer expresses their opinion of a proposal and gives it a ranking; other panel members may weigh in on the discussion. Discussions usually last 5–10 minutes per proposal. Proposals taken up early in the review session get longer discussions; by the end of the panel meeting, which lasts two to three days, discussions grow shorter.
What the scientist who spent weeks of blood, sweat, and tears on the proposal will see is a summary of the review panel’s opinion, along with the overall favorability score for the proposal. At a number of funding agencies, the proposer will also see the reviews written by the individual reviewers, and perhaps some ad hoc reviews written by outside experts who were asked to weigh in. The reviews can be detailed, pointing out everything from a fundamental flaw in logic to a misspelled word on page 6, third paragraph. Other reviews are short and cryptic, as if they were written in the middle aisle seat of an airliner approaching Washington, DC at 500 mph at 10 o’clock on a Sunday night. Of course, everything is anonymous, as it must be, so as a proposer you do not know who has reviewed your proposal.
Peer review is a cornerstone of the scientific funding process in the US, and in most of the world’s leading scientific countries. Like democracy, it is sometimes decried as a terrible system, but still the best of the alternatives. I’ve served on quite a few review panels over the years, and almost invariably I come away feeling that the process has worked, perhaps not perfectly, but that the good science has risen to the top. On the other hand, I’ve had many proposals rejected over the years, and almost invariably felt that, at best, a mistake was made, and at worst, wondered how this particular set of morons was ever allowed to sit on a review panel. It’s the fallible conundrum of humanity.
In reality, of course, we are blessed with lots of good science. Bad proposals almost always select themselves out of contention: they may be badly written or poorly conceived, their authors may be unfamiliar with important concepts or techniques in their own field, or some combination of these and other failings. But the majority, somewhere between half and two-thirds, of the proposals are good ones, seeking an answer to an interesting question, posing a testable hypothesis, and proposing a reasonable approach. The scientists proposing the work are either seasoned veterans with good track records, or younger scientists who have already proven themselves by landing jobs at prestigious research universities. Given the number of grants that can actually be funded, this leaves plenty of good proposals that will not make the grade. I’m not convinced there’s any formula for writing an outstanding proposal, other than doing your best, knowing your field, being quantitative wherever possible, and not being purely derivative of what’s come before.
The truth is, if you don’t have a champion on the review panel who will really tout your proposal, you’re probably doomed. Interestingly, a champion may be someone who knows little about the work you do but is taken by some aspect of it; perhaps it seems novel to them because it’s not a field they know well. Of course, it’s nicer, and more often the case, when the champion is someone who does know the field and thinks your proposal is breaking new ground. So finding a champion really comes down to a stroke of luck.
Scientists constantly debate how peer review can be improved. I was recently on a review panel where two-thirds of the members were within a few years of finishing their Ph.D.s; I was the old-timer in the room. While it’s really important to have representation of younger scientists, old-timers can provide useful wisdom, experience, and perspective. I asked one of the program managers for this particular panel why there were so many youngsters compared to oldsters, and his response was: ‘Oh, we asked, but most of the old-timers pleaded being too busy, overworked, etc.’ As if this isn’t also the case for a youngster trying to make their way in this extraordinarily competitive game. Another issue is that, in some fields, nearly everyone with leading expertise will have submitted a proposal themselves, and so cannot serve as a reviewer because of the conflict of interest.
It is now reasonable to ask whether machines could do a better job. My guess is that it won’t be long before a pretty strong case could be made that they would, at least theoretically. Apparently, IBM’s famous supercomputer Watson has ingested the data from just about every paper written on cancer, and can formulate logical experimental protocols aimed at solving specific problems relevant to certain types of cancer. Why, then, couldn’t a computer review a research proposal and weigh what is proposed against the state of knowledge; the relative novelty of the project with respect to work in similar areas, and more broadly in other disciplines; the soundness of the methods and statistical approaches; the background and track record of the scientists proposing the work; and the proposed costs? It’s interesting to think about how Watson would grade a proposal written by yours truly against a proposal on a similar topic written by Holmes, a futuristic computer developed by Google. I’m not liking my chances (unless I can pay some Moldavian teenagers to hack the system).
Ahh, the fallible conundrum of humanity.