Notes on Methodology | 2019
The BRIMR rankings are derived each year from data compiled and released by the National Institutes of Health shortly after the federal fiscal year closes. The NIH posts these data on its Research Portfolio Online Reporting Tool (RePORT) website as a master Excel spreadsheet called “Worldwide_XXXX”, where “XXXX” denotes the year. For fiscal 2019, which began on 1 October 2018 and ended on 30 September 2019, this file was posted in late December 2019 and presented tabulated information on more than $31 billion in funding through 60,081 extramural NIH grants and contracts awarded to more than 41,000 Principal Investigators (PIs).
We begin by making a large number of modifications to the NIH master file, most of which are aimed at further standardizing the terminology and nomenclature used. This results in a modified “Worldwide” file that is posted on the BRIMR webpage, and which then serves as the basis for all other BRIMR lists and rankings for that year. For example, various names used for a given grantee organization in the NIH file (under column A) are combined into a single listing where appropriate, so that any grants attributed to ACME UNIVERSITY COLLEGE OF MEDICINE, or ACME UNIVERSITY MEDICAL CENTER, or ACME UNIVERSITY HEALTH CENTER INC, might all be reassigned simply to ACME UNIVERSITY. Similarly, the designations SCHOOL OF MEDICINE & DENTISTRY or OVERALL MEDICAL are each replaced by SCHOOLS OF MEDICINE in column K, and city names are fully capitalized (e.g., BOSTON) in column P.
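The standardization pass described above amounts to applying lookup tables to a few columns of each record. The sketch below illustrates the idea in plain Python; the field names, the alias tables, and the function itself are assumptions for demonstration only, since BRIMR does not publish its actual processing script.

```python
# Illustrative sketch of the BRIMR-style standardization pass.
# Field names and alias entries are hypothetical examples, not
# BRIMR's real mapping tables.

# Variant organization names (column A) mapped to one canonical listing.
ORG_ALIASES = {
    "ACME UNIVERSITY COLLEGE OF MEDICINE": "ACME UNIVERSITY",
    "ACME UNIVERSITY MEDICAL CENTER": "ACME UNIVERSITY",
    "ACME UNIVERSITY HEALTH CENTER INC": "ACME UNIVERSITY",
}

# School designations (column K) mapped to the standard label.
SCHOOL_ALIASES = {
    "SCHOOL OF MEDICINE & DENTISTRY": "SCHOOLS OF MEDICINE",
    "OVERALL MEDICAL": "SCHOOLS OF MEDICINE",
}

def standardize_row(row: dict) -> dict:
    """Return a copy of one grant record with names standardized:
    organization aliases collapsed, school labels unified, and the
    city name fully capitalized (column P)."""
    out = dict(row)
    out["ORG_NAME"] = ORG_ALIASES.get(out["ORG_NAME"], out["ORG_NAME"])
    out["SCHOOL"] = SCHOOL_ALIASES.get(out["SCHOOL"], out["SCHOOL"])
    out["CITY"] = out["CITY"].upper()  # e.g., Boston -> BOSTON
    return out
```

Applied row by row to the master file, a mapping of this kind would produce the modified “Worldwide” file from which the other lists and rankings are built.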
A few revisions are more significant: for example, the Mayo Clinic is not listed as a medical school in the NIH master file, but BRIMR has always designated it as such because it has been training medical students since 1972. Commencing in 2019, the medical schools at Case Western Reserve University and the Cleveland Clinic (which share facilities and a single Dean) have been combined by BRIMR under one listing, as have several teaching or research entities affiliated with Rutgers University that were listed separately in the past. Additionally, we have introduced a standardized format to name the various University of Texas campuses (e.g., UNIVERSITY OF TEXAS HLTH SCI CTR SAN ANTONIO). All such changes are intended to facilitate computerized analysis and to make the BRIMR database as valid, intuitive, and user-friendly as possible.
Important aspects of our rankings are dictated by NIH policy and practice, as reflected in the data posted on NIH RePORT. In particular, we defer to the NIH’s longstanding policy of crediting only one lead PI and one institution for any given grant or contract, even in the case of large subcontracts, program projects, or multi-PI awards. Grants to faculty at some respected medical schools may be credited by NIH to teaching hospitals, which tends to understate the funding and rankings of those schools; examples arise from Harvard Medical School’s affiliations with Massachusetts General Hospital and the Brigham and Women’s Hospital, as well as from those of other medical schools with research-intensive children’s hospitals.
In its “Worldwide” file (under column J), the NIH routinely combines certain disciplines, such as diagnostic and therapeutic radiology, into a single category even if they reside in separate departments at a given institution, and it likewise assigns departments with hybrid names (such as “PHYSIOLOGY AND PHARMACOLOGY”) to one category or the other; BRIMR typically conforms to those assignments made by NIH. We also accept NIH’s practice of crediting a few medical schools with grants to non-traditional departments such as biology, chemistry, psychology, or physics.
Lastly, although contracts constitute about 10% of all NIH extramural funding, they are not considered in many BRIMR rankings because the NIH master file does not generally ascribe contracts to a specific school (such as a School of Medicine) or department within an institution. NIH contracts for 2019 are, however, listed as a stand-alone file and ranked by PI. These and all other aspects of the rankings are transparently accessible for public review by comparing the “Worldwide” file to other files on the BRIMR site and to the “Worldwide_XXXX” file on NIH RePORT for the corresponding year.
We welcome your feedback and suggestions at . For best consideration, any proposed corrections must be received by the January 31 deadline. BRIMR receives inquiries from around 30 institutions in a typical year, each concerning as many as two dozen grants (mean = 6). Most involve grants that may appear to have been credited to an incorrect department, to no department at all, or to the wrong School within an institution. Inaccuracies sometimes occur if a given PI’s name, in column F, is listed differently on different awards (e.g., as “GEORGE, SALLY”, “GEORGE, SALLY H.”, and “SALLY, GEORGE”), or if a PI has grants administered by more than one department or institution (e.g., some by the University of Pennsylvania, and others by the Gordon Research Conferences).
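One way to surface the PI-name inconsistencies described above is to reduce each name to a comparable key before grouping grants by investigator. The function below is a hypothetical sketch of that idea, not BRIMR's actual procedure; its normalization rules (uppercase, drop middle initials and periods) are assumptions.

```python
# Hypothetical sketch: collapse "LAST, FIRST [MI.]" name variants
# to a single key so that grants by the same PI group together.

def name_key(pi_name: str) -> str:
    """Reduce a 'LAST, FIRST [MI.]' string to 'LAST, FIRST':
    uppercase, strip periods, and keep only the first given name."""
    parts = [p.strip().rstrip(".") for p in pi_name.upper().split(",")]
    last = parts[0]
    first = parts[1].split()[0] if len(parts) > 1 else ""
    return f"{last}, {first}"
```

Under this rule, “GEORGE, SALLY” and “GEORGE, SALLY H.” collapse to the same key, while the swapped form “SALLY, GEORGE” does not and would still require manual review — which is why such entries are among the errors most often reported to BRIMR.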
It is important to emphasize that NIH RePORT is not the same as NIH RePORTER, a separate website that often presents data from more than one fiscal year and includes funding from other agencies, such as CDC, FDA, HRSA, and the VA. BRIMR draws all of its data from NIH RePORT – we do not use data from NIH RePORTER unless it is also listed in NIH RePORT.
We regret that not all revisions will be applied retroactively to prior years. If you find what you think may be a discrepancy or error, please check whether it is present in the original Worldwide file on NIH RePORT; if so, you may wish to contact the NIH to have it corrected there.
~ Robert Roskoski Jr. and Tristram G. Parslow
Thinking is highly overrated.
~ Violet Crawley, Dowager Countess of Grantham (played by Maggie Smith), as seen on Downton Abbey