Methodology

The following describes the methodology used to develop the Global Health Research League Table. The development of the methodology was overseen by a student committee and informed by metrics meeting defined criteria and by the advice of colleagues working in global health.

All elements of the evaluation, including selection of universities, selection of metrics, data collection, scoring and grading, were conducted by GHRLT between September 2013 and December 2014.

Click here to download the full methodology: GHRLT full methodology

SELECTION OF UNIVERSITIES

The GHRLT seeks to evaluate the commitment to global health research of the top 25 funded research universities in the UK. For the purposes of this evaluation, the “top 25 funded research universities” are defined as those which receive the highest levels of funding from the primary public funding agency in the UK, the Medical Research Council (MRC). The evaluation list was selected based on the latest figures available from 2010/11, and limited to the 25 universities receiving the highest value of public grants from the MRC.

SELECTION OF EVALUATION METRICS

To provide a comprehensive overview of university commitment to global health research, the League Table measures 13 performance indicators in two general categories.

The 13 specific metrics were selected on the basis of the following criteria:

  • Significance as indicators of global health research
  • Availability of standardized data sources for all evaluated institutions
  • Consistent measurability and comparability across evaluated institutions
  • Ability of evaluated institutions to concretely improve performance on these metrics
  • Diverse range of indicators measuring policy and implementation

INNOVATION:

Q1 – What percentage of the university’s medical research funding is devoted to projects focused on neglected diseases? (data from 2011-2013)

Q2 – What percentage of the university’s medical research funding is devoted to projects focused on health in Low- and Lower-Middle Income countries? (data from 2011-2013)

Q3 – What percentage of the university’s total publications are focused on neglected diseases, including neglected aspects of HIV, TB, and malaria? (data from 2011-2013)

Q4 – What percentage of the university’s total publications are focused on health in Low- and Lower-Middle Income countries? (data from 2011-2013)

ACCESS:

Q1 – Has the university officially and publicly committed to licensing its health-related technologies in ways that promote access and affordability in Low- and Middle-Income countries?

Q2 – Does the website of the university or the university’s technology transfer office make an effort to disclose, explain and promote access licensing commitments and practices?

Q3 – In the past year, what percentage of the university’s licenses for health-related technologies (e.g. medicines, vaccines, diagnostics) were non-exclusive?

Q4a – In the past year, for what percentage of all health-related technologies (e.g. medicines, vaccines, diagnostics) did the university seek patents in Upper-Middle Income Countries (including Brazil, Russia, India, China, and South Africa), where they may restrict access?

Q4b – In the past year, for what percentage of all health-related technologies (e.g. medicines, vaccines, diagnostics) did the university seek patents in Low- and Middle-Income countries, where they may restrict access?

Q5a – In the past year, what percentage of the university’s exclusive licenses of health technologies included provisions to promote access to those technologies in Low- and Middle-Income countries?

Q5b – What percentage of those access provisions included the largest Middle-Income economies (Brazil, Russia, India, China, or South Africa) in their scope?

Q6 – Does the university promote public-access publication of research?

Q7 – Is the university’s research published in a way that is freely accessible to everyone?

NOTE: Specific information on each evaluation metric’s significance, data source, and potential for university improvement can be found in the detailed data “pop-outs” for each institution listed in the left-hand column.

To view this detailed information, simply mouse over the “?” symbol located in the upper-right corner of each question box, as illustrated here:

[tooltip_pic: screenshot of the “?” tooltip]

Because the universities selected for evaluation vary in significant ways (e.g. levels of overall research funding, university size), League Table metrics and scoring systems are designed to minimize the impact of such differences.

Most importantly, almost all quantitative metrics used in the League Table are normalized by each institution’s overall level of research funding, as a proxy for university size. For example, when evaluating a university’s investment in neglected disease (ND) research, the League Table considers funding devoted to ND research projects as a proportion of that institution’s overall medical research funding. This enables meaningful comparison across institutions while minimizing or eliminating the impact of variations in size, budget, or resources.
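
A minimal sketch of this normalization, using hypothetical figures and function names rather than GHRLT data, shows how two institutions of very different size can receive the same normalized value:

```python
# Illustrative normalization of a quantitative metric by total medical
# research funding (hypothetical figures, not actual GHRLT data).

def normalized_share(metric_funding: float, total_medical_funding: float) -> float:
    """Return funding for a metric as a percentage of total medical research funding."""
    if total_medical_funding <= 0:
        raise ValueError("total medical research funding must be positive")
    return 100.0 * metric_funding / total_medical_funding

# A large university devoting 4.2m of a 150m medical research budget to
# neglected-disease projects receives the same value as a smaller one
# devoting 0.7m of a 25m budget: both score 2.8%.
print(normalized_share(4_200_000, 150_000_000))  # 2.8
print(normalized_share(700_000, 25_000_000))     # 2.8
```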

For categorical metrics, the League Table employs pre-defined sets of discrete categories by which all universities can be uniformly evaluated, and for which performance is again likely to be independent of variations in university size, funding, capacity or resources.

DATA SOURCES AND COLLECTION

League Table evaluation data can be separated into two general categories based on source and method of collection:

CATEGORY 1 –

Data obtained by accessing publicly available sources, including university websites, online funding databases, publication databases, and search engines. These data were collected by GHRLT volunteer researchers. All students were provided with an online training package and assessment prior to data collection.

CATEGORY 2 –

Data obtained through self-reporting by university officials in response to standardized survey questionnaires designed by GHRLT and provided to all evaluated institutions, and in response to requests made by GHRLT under the Freedom of Information Act 2000.

Each subsection of the League Table (Innovation and Access) includes a combination of:

  1. metrics based entirely on data from public sources (CATEGORY 1), and
  2. metrics based either entirely on self-reported data (CATEGORY 2), or, when possible, based on self-reported data supplemented/verified through public data (CATEGORY 1 and CATEGORY 2).

This combination of metrics enables evaluation of universities which did not respond or declined to respond to requests for self-reported data.
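
As a rough sketch of how this combination might be operationalized when scoring a single metric (an illustration only; the function and parameter names are assumptions, not a documented GHRLT procedure), self-reported data are preferred where available, with public data as the fallback for non-responding institutions:

```python
from typing import Optional

def resolve_metric_value(self_reported: Optional[float],
                         public_source: Optional[float]) -> Optional[float]:
    """Choose the value used to score one metric for one university.

    Prefer the self-reported figure (CATEGORY 2) when it exists; otherwise
    fall back to the publicly sourced figure (CATEGORY 1), which is what
    allows universities that did not respond to still be evaluated.
    """
    if self_reported is not None:
        return self_reported
    return public_source  # may be None if neither source yielded data

# Example: a non-responding university is scored on public data alone.
print(resolve_metric_value(None, 3.1))  # 3.1
```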

QUALITY AND CONSISTENCY OF DATA

For CATEGORY 1, GHRLT took the following steps to address quality and consistency of data collection:

  • Prospectively developed standard operating procedures (SOPs) and standardized data extraction forms, including uniform search terms to which all investigators were required to adhere;
  • Implemented quality assurance procedures to ensure that investigators were obtaining consistent results from the collection procedures;
  • Where possible, multiple individual investigators independently and concurrently performed the same data collection and search processes to ensure consistency of data;
  • Applied standardized scoring across all institutions (see “Scoring” below).

For CATEGORY 2, data quality and consistency, including concerns about questionnaire non-response, were addressed through the following:

  • Provided identical questionnaires to all institutions;
  • Developed a standardized process for identifying and verifying contacts to receive questionnaires at each institution;
  • Used standardized scripts and communication strategies to deliver the questionnaire to all institutions and conduct consistent follow up via e-mail, phone, and other contact methods;
  • Structured self-reported questions so that the variable in question was dichotomous or categorical rather than continuous, maximizing the consistency and likelihood of responses from institutions;
  • Applied standardized scoring of responses across all institutions;
  • Measured response rates both for the entire questionnaire and for individual questions;
  • In cases where we did not receive a response to the online survey within 12 weeks of first contact (and following multiple email and telephone follow-ups), we sent Freedom of Information (FOI) requests to obtain the information sought in the survey. Where data obtained through an FOI request are used in place of survey responses, this is clearly indicated in the data presented on the website.

SCORING AND GRADING

For each of the 13 metrics, universities were first assigned a raw score from 1 to 5 based on a standardized scoring scale applied to the data gathered for each institution. A standardized weighting multiplier from 0.5 to 5 was then applied to each metric. Weighting multipliers were based on the source of evaluation data (public vs. self-reported) and the relative importance of the metric in question as determined by GHRLT. The League Table displays the weighted score for each metric, which is the product of a university’s raw score and the weighting multiplier.
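
As an illustration of how a displayed score is derived (the numbers below are hypothetical; the actual multipliers were set per metric by GHRLT):

```python
def weighted_score(raw_score: int, weight: float) -> float:
    """Weighted score shown in the League Table: raw score (1-5) times multiplier (0.5-5)."""
    if not 1 <= raw_score <= 5:
        raise ValueError("raw score must be between 1 and 5")
    if not 0.5 <= weight <= 5:
        raise ValueError("weighting multiplier must be between 0.5 and 5")
    return raw_score * weight

# Example: a raw score of 4 on a metric weighted at 2.5 displays as 10.0.
print(weighted_score(4, 2.5))  # 10.0
```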

Grades for each subsection (Innovation and Access) are determined by the total of a university’s weighted scores for all questions in that subsection. For each subsection, a standard grading scale was developed that assigns a letter grade based on the sum of the weighted scores for its questions (questions in the Access section carry varying weightings).
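
A minimal sketch of this grading step follows; the thresholds are purely illustrative and are not the scale GHRLT actually applied to either subsection:

```python
def subsection_grade(total_weighted_score: float,
                     scale: list[tuple[float, str]]) -> str:
    """Map a subsection's total weighted score to a letter grade.

    `scale` lists (minimum score, grade) pairs in descending order; the
    example thresholds below are illustrative only, not the GHRLT scale.
    """
    for minimum, grade in scale:
        if total_weighted_score >= minimum:
            return grade
    return scale[-1][1]  # fall back to the lowest grade

illustrative_scale = [(80, "A"), (60, "B"), (40, "C"), (20, "D"), (0, "E")]
print(subsection_grade(67.5, illustrative_scale))  # B
```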