This page answers some of the questions you may have about Physiotherapy Choices:

  1. How are trials, reviews and guidelines located?
  2. How far back in time does the database go?
  3. How often is the database updated?
  4. What can I do if I find a trial, review or guideline that is not on the database?
  5. How are trials rated?

1. How are trials, systematic reviews and clinical practice guidelines located?

Specific criteria are used to define which clinical trials, systematic reviews and evidence-based clinical practice guidelines are archived in PEDro. We have located (and are still locating) trials, reviews and guidelines in a number of ways:

  1. Drs Rob de Bie and Riekie de Vet of the Rehabilitation and Related Therapies Field of the Cochrane Collaboration generously gave us their pre-existing database of randomised trials in rehabilitation. They have also sent us details of trials on physiotherapy identified by handsearches of approximately 200 allied health journals conducted by the Nederlands Paramedisch Instituut. More recently they have sent us copies of Dutch guidelines.
  2. These were combined with personal databases of the Steering Committee of the Centre for Evidence-Based Physiotherapy.
  3. Then we performed optimised searches of four databases of the serials literature (Medline, Embase, Cinahl and PsycInfo). Now we prospectively search these databases using automated optimised searches (SDI, or selective dissemination of information).
  4. We search each new release of the Cochrane Database of Systematic Reviews, the Cochrane CENTRAL Register of Controlled Trials, and the Database of Abstracts of Reviews of Effectiveness (DARE).
  5. We search the internet for practice guidelines. The general strategy is to search databases of clinical practice guidelines (such as the database of the National Guideline Clearinghouse in the USA) and to follow links from there. The National Guideline Clearinghouse provides us with weekly notification of new guidelines.
  6. We track citations in systematic reviews on the PEDro database.
  7. Lastly, “Friends of PEDro” and other users of PEDro contact us about trials that are not yet on the database.

2. How far back in time does the database go?

We will include any trial, review or guideline that satisfies the criteria for inclusion on the database (see above), regardless of how long ago it was published. At the time of writing, the oldest record on the database (a clinical trial) was published in 1929. To find the oldest record in the database, type “0…1929” in the “Published Since” field on the search page. This returns all records published up to and including 1929.

3. How often is the database updated?

PEDro is now updated once per month, usually on the first Monday of the calendar month (except in January).

4. What can I do if I find a trial, systematic review or practice guideline that is not on the database?

If you know of a trial, review or guideline that you think ought to be on PEDro but is not, please let us know. First check that it meets the criteria for inclusion; if it does, contact us. The more detail you can provide, the more likely we are to be able to find it. If you are the author of a paper that you think ought to be on PEDro but is not, please send us a reprint.

5. How are trials rated?

Trials (but not reviews or guidelines) are rated with a checklist (called the “PEDro scale”). The PEDro scale considers two aspects of trial quality, namely the “believability” (or “internal validity”) of the trial and whether the trial contains sufficient statistical information to make it interpretable. It does not rate the “meaningfulness” (or “generalisability” or “external validity”) of the trial, or the size of the treatment effect.

To assess believability we look for unambiguous confirmation of a number of criteria, including random allocation, concealment of allocation, comparability of groups at baseline, blinding of patients, therapists and assessors (three separate items), analysis by intention to treat, and adequacy of follow-up. To assess interpretability we look for between-group statistical comparisons and reports of both point estimates and measures of variability. This gives a total of 10 scale items. Trials are rated on the basis of what they report: if a trial does not report that a particular criterion was met, we score that criterion as not met (“guilty until proven innocent”).

All but two of the PEDro scale items are based on the Delphi list developed by Verhagen and colleagues. The Delphi list is a list of trial characteristics that a group of clinical trial experts considered related to trial “quality” (for details see Verhagen et al, Journal of Clinical Epidemiology 51: 1235-41, 1998). The PEDro scale contains additional items on adequacy of follow-up and between-group statistical comparisons. One item on the Delphi list (the item on eligibility criteria) relates to external validity, so it does not reflect the dimensions of quality assessed by the PEDro scale. This item is not used to calculate the method score displayed in the search results (which is why the 11-item scale gives a score out of 10). It has nevertheless been retained so that all Delphi list items are represented on the PEDro scale.
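The arithmetic above (11 items assessed, the eligibility item excluded, a score out of 10, and unreported criteria scored as not met) can be sketched in code. This is an illustrative sketch only: the item names follow the criteria described in the text, but the example trial ratings are invented.

```python
# Illustrative sketch of the scoring rule described above.
# Item names follow the criteria in the text; the example ratings are invented.

PEDRO_ITEMS = [
    "eligibility criteria",          # Delphi item; recorded but NOT scored
    "random allocation",
    "concealed allocation",
    "baseline comparability",
    "blinding of patients",
    "blinding of therapists",
    "blinding of assessors",
    "adequate follow-up",
    "intention-to-treat analysis",
    "between-group comparisons",
    "point estimates and variability",
]

def pedro_score(ratings: dict) -> int:
    """Count satisfied items, excluding the eligibility item.

    Items missing from `ratings` are treated as not met
    ("guilty until proven innocent"), so the score is out of 10.
    """
    scored = [item for item in PEDRO_ITEMS if item != "eligibility criteria"]
    return sum(1 for item in scored if ratings.get(item, False))

# An invented trial report satisfying six of the ten scored items:
example = {
    "eligibility criteria": True,    # noted, but excluded from the score
    "random allocation": True,
    "concealed allocation": True,
    "baseline comparability": True,
    "intention-to-treat analysis": True,
    "between-group comparisons": True,
    "point estimates and variability": True,
}
print(pedro_score(example))  # 6
```

Note how the eligibility item is carried in the checklist (so every Delphi item is represented) but skipped when counting, which is why a perfect report scores 10, not 11.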

The “PEDro score” is determined simply by counting the number of checklist criteria that are satisfied in the trial report. When the PEDro database is searched, the PEDro score is used to sort clinical trials on the “search results” page. Systematic reviews and clinical practice guidelines are not rated for quality (they get a quality score of “N/A”, meaning “not applicable”). In the search results, guidelines are presented first, sorted by year of publication (most recent guidelines first). These are followed by systematic reviews, also sorted by year.
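The ordering of search results described above could be expressed as a sort key along the following lines. This is a sketch, not PEDro's actual implementation; the record fields (`type`, `year`, `pedro_score`) are hypothetical names chosen for illustration.

```python
# Hypothetical sketch of the result ordering described in the text:
# guidelines first (newest first), then systematic reviews (newest first),
# then clinical trials sorted by PEDro score (highest first).

RECORD_TYPE_ORDER = {"guideline": 0, "review": 1, "trial": 2}

def sort_key(record: dict):
    kind = record["type"]
    if kind == "trial":
        # Trials: higher PEDro score first (score is out of 10).
        return (RECORD_TYPE_ORDER[kind], -record["pedro_score"])
    # Guidelines and reviews have no quality score ("N/A");
    # sort them by year of publication, most recent first.
    return (RECORD_TYPE_ORDER[kind], -record["year"])

results = [
    {"type": "trial", "pedro_score": 8, "year": 2001},
    {"type": "review", "year": 1998},
    {"type": "guideline", "year": 2002},
    {"type": "trial", "pedro_score": 4, "year": 2003},
    {"type": "guideline", "year": 2004},
]
results.sort(key=sort_key)
print([r["type"] for r in results])
# ['guideline', 'guideline', 'review', 'trial', 'trial']
```

Using a tuple key keeps the two-level rule explicit: the first element groups records by type, and the second element orders records within each group.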

Trials are rated by raters who are either casual staff of the Centre for Evidence-Based Physiotherapy or volunteer physiotherapists. All raters undergo training, which involves practice with feedback. Three other mechanisms are used to ensure the quality of ratings. First, we aim to rate every trial twice, with a third rater resolving any disagreements; ratings are “not confirmed” until the trial has been rated twice and any disagreements resolved, after which they are “confirmed”. Second, we perform informal, non-systematic checks of the quality of some (not all) ratings. Lastly, users of PEDro can dispute trial ratings (see “What can I do if I disagree with the quality rating on a particular trial?” below).

A paper describing the reliability of the PEDro scale for rating the quality of randomised controlled trials was published by Maher et al (2003).

A paper comparing the PEDro scale to the Jadad scale for rating the quality of randomised controlled trials was published by Bhogal et al (2005).

A paper evaluating the validity of the PEDro scale was published by de Morton (2009).