

Research Design and Implementation (RDI) Checklists

Each study the DGAC reviewed received a quality rating of positive, neutral, or negative, based on a predefined scoring system. Appraisal of study quality is a critical component of the systematic review methodology because it indicates, in a transparent manner, the Committee's judgment regarding the relevance (external validity/generalizability) and validity of each study's results. Ratings were assigned using two versions of the Research Design and Implementation Checklist.
The Research Design and Implementation Checklist: Primary Research includes ten validity questions based on the AHRQ domains for research studies. Sub-questions listed under each validity question identify important aspects of sound study design and execution relevant to that domain; some sub-questions also indicate how the domain applies in specific research designs.

Research Design and Implementation Checklist: Primary Research
 
RELEVANCE QUESTIONS
  1. Would implementing the studied intervention or procedure (if found successful) result in improved outcomes for the patients/clients/population group? (NA for some epidemiological studies)
  2. Did the authors study an outcome (dependent variable) or topic that the patients/clients/population group would care about?
  3. Is the focus of the intervention or procedure (independent variable) or topic of study a common issue of concern to dietetics practice?
  4. Is the intervention or procedure feasible? (NA for some epidemiological studies)
VALIDITY QUESTIONS
  1. Was the research question clearly stated?
1.1 Was the specific intervention(s) or procedure (independent variable(s)) identified?
1.2 Was the outcome(s) (dependent variable(s)) clearly indicated?
1.3 Were the target population and setting specified?
  2. Was the selection of study subjects/patients free from bias?
2.1 Were inclusion/exclusion criteria specified (e.g., risk, point in disease progression, diagnostic or prognosis criteria), and with sufficient detail and without omitting criteria critical to the study?
2.2 Were criteria applied equally to all study groups?
2.3 Were health, demographics, and other characteristics of subjects described?
2.4 Were the subjects/patients a representative sample of the relevant population?
  3. Were study groups comparable?
3.1 Was the method of assigning subjects/patients to groups described and unbiased? (Method of randomization identified if RCT)
3.2 Were distribution of disease status, prognostic factors, and other factors (e.g., demographics) similar across study groups at baseline?
3.3 Were concurrent controls used? (Concurrent preferred over historical controls.)
3.4 If cohort study or cross-sectional study, were groups comparable on important confounding factors and/or were preexisting differences accounted for by using appropriate adjustments in statistical analysis?
3.5 If case control study, were potential confounding factors comparable for cases and controls? (If case series or trial with subjects serving as own control, this criterion is not applicable. Criterion may not be applicable in some cross-sectional studies.)
3.6 If diagnostic test, was there an independent blind comparison with an appropriate reference standard (e.g., “gold standard”)?
  4. Was method of handling withdrawals described?
4.1 Were follow up methods described and the same for all groups?
4.2 Were the number and characteristics of withdrawals (i.e., dropouts, lost to follow-up, attrition rate) and/or response rate (cross-sectional studies) described for each group? (Follow-up goal for a strong study is 80%.)
4.3 Were all enrolled subjects/patients (in the original sample) accounted for?
4.4 Were reasons for withdrawals similar across groups?
4.5 If diagnostic test, was decision to perform reference test not dependent on results of test under study?
  5. Was blinding used to prevent introduction of bias?
5.1 In intervention study, were subjects, clinicians/practitioners, and investigators blinded to treatment group, as appropriate?
5.2 Were data collectors blinded for outcomes assessment? (If outcome is measured using an objective test, such as a lab value, this criterion is assumed to be met.)
5.3 In cohort study or cross-sectional study, were measurements of outcomes and risk factors blinded?
5.4 In case control study, was case definition explicit and case ascertainment not influenced by exposure status?
5.5 In diagnostic study, were test results blinded to patient history and other test results?
  6. Were intervention/therapeutic regimens/exposure factor or procedure and any comparison(s) described in detail? Were intervening factors described?
6.1 In RCT or other intervention trial, were protocols described for all regimens studied?
6.2 In observational study, were interventions, study settings, and clinicians/providers described?
6.3 Was the intensity and duration of the intervention or exposure factor sufficient to produce a meaningful effect?
6.4 Was the amount of exposure and, if relevant, subject/patient compliance measured?
6.5 Were co-interventions (e.g., ancillary treatments, other therapies) described?
6.6 Were extra or unplanned treatments described?
6.7 Was the information for 6.4, 6.5, and 6.6 assessed the same way for all groups?
6.8 In diagnostic study, were details of test administration and replication sufficient?
  7. Were outcomes clearly defined and the measurements valid and reliable?
7.1 Were primary and secondary endpoints described and relevant to the question?
7.2 Were nutrition measures appropriate to question and outcomes of concern?
7.3 Was the period of follow-up long enough for important outcome(s) to occur?
7.4 Were the observations and measurements based on standard, valid, and reliable data collection instruments/tests/procedures?
7.5 Was the measurement of effect at an appropriate level of precision?
7.6 Were other factors accounted for (measured) that could affect outcomes?
7.7 Were the measurements conducted consistently across groups?
  8. Was the statistical analysis appropriate for the study design and type of outcome indicators?
8.1 Were statistical analyses adequately described and the results reported appropriately?
8.2 Were correct statistical tests used and assumptions of test not violated?
8.3 Were statistics reported with levels of significance and/or confidence intervals?
8.4 Was “intent to treat” analysis of outcomes done (and as appropriate, was there an analysis of outcomes for those maximally exposed or a dose-response analysis)?
8.5 Were adequate adjustments made for effects of confounding factors that might have affected the outcomes (e.g., multivariate analyses)?
8.6 Was clinical significance as well as statistical significance reported?
8.7 If negative findings, was a power calculation reported to address type 2 error?
  9. Are conclusions supported by results with biases and limitations taken into consideration?
9.1 Is there a discussion of findings?
9.2 Are biases and study limitations identified and discussed?
  10. Is bias due to study’s funding or sponsorship unlikely?
10.1 Were sources of funding and investigators’ affiliations described?
10.2 Was there no apparent conflict of interest?
The Research Design and Implementation Checklist: Review Articles has ten validity questions that incorporate the AHRQ domains for systematic reviews. These questions identify the systematic process for drawing valid inferences from a body of literature.
Research Design and Implementation Checklist: Review Articles
 
RELEVANCE QUESTIONS
  1. Will the answer, if true, have a direct bearing on the health of patients?
  2. Is the outcome or topic something that patients/clients/population groups would care about?
  3. Is the problem addressed in the review one that is relevant to dietetics practice?
  4. Will the information, if true, require a change in practice?
VALIDITY QUESTIONS
  1. Was the question for the review clearly focused and appropriate?
  2. Was the search strategy used to locate relevant studies comprehensive? Were the databases searched and the search terms used described?
  3. Were explicit methods used to select studies to include in the review? Were inclusion/exclusion criteria specified and appropriate? Were selection methods unbiased?
  4. Was there an appraisal of the quality and validity of studies included in the review? Were appraisal methods specified, appropriate, and reproducible?
  5. Were specific treatments/interventions/exposures described? Were treatments similar enough to be combined?
  6. Was the outcome of interest clearly indicated? Were other potential harms and benefits considered?
  7. Were processes for data abstraction, synthesis, and analysis described? Were they applied consistently across studies and groups? Was there appropriate use of qualitative and/or quantitative synthesis? Was variation in findings among studies analyzed? Were heterogeneity issues considered? If data from studies were aggregated for meta-analysis, was the procedure described?
  8. Are the results clearly presented in narrative and/or quantitative terms? If summary statistics are used, are levels of significance and/or confidence intervals included?
  9. Are conclusions supported by results with biases and limitations taken into consideration? Are limitations of the review identified and discussed?
  10. Was bias due to the review’s funding or sponsorship unlikely?
Last Updated: 02/12/2014