What is the difference between results and interpretation?

RRs and RRRs remain crucial because relative effects tend to be substantially more stable across risk groups than absolute effects (see Chapter 10, Section). Review authors can use their own data to study this consistency (Cates, Smeeth et al). Risk differences from studies are the least likely to be consistent across baseline event rates; thus, they are rarely appropriate for computing numbers needed to treat in systematic reviews.

In addition, if there are several different groups of participants with different levels of risk, it is crucial to express absolute benefit for each clinically identifiable risk group, clarifying the time period to which it applies. Studies in patients with differing severity of disease, or studies with different lengths of follow-up, will almost certainly have different comparator group risks. In these cases, different comparator group risks lead to different RDs and NNTs, except when the intervention has no effect.

For example, a review of oral anticoagulants to prevent stroke presented information to users by describing absolute benefits for various baseline risks (Aguilar and Hart, Aguilar et al). This presentation helps users to understand the important impact that typical baseline risks have on the absolute benefit that they can expect. Direct computation of a risk difference (RD) or a number needed to treat (NNT) depends on the summary statistic (odds ratio, risk ratio or risk difference) available from the study or meta-analysis.

When expressing results of meta-analyses, review authors should use, in the computations, whatever statistic they determined to be the most appropriate summary for meta-analysis (see Chapter 10, Section). Here we present calculations to obtain the RD as a reduction in the number of participants per 1000. RDs and NNTs should not be computed from the aggregated total numbers of participants and events across the trials.

This approach ignores the randomization within studies, and may produce seriously misleading results if there is unbalanced randomization in any of the studies. Using the pooled result of a meta-analysis is more appropriate. When computing NNTs, the values obtained are, by convention, always rounded up to the next whole number.
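As a sketch of this convention, the calculation of an NNT from a pooled risk difference might look like the following. This is an illustrative helper under my own naming, not code from the text; the example risk difference is hypothetical.

```python
import math

def nnt_from_rd(rd: float) -> int:
    """Number needed to treat from a pooled risk difference.

    The NNT is the reciprocal of the absolute risk difference,
    rounded UP to the next whole number by convention.
    """
    if rd == 0:
        raise ValueError("No effect: NNT is undefined when RD = 0")
    return math.ceil(1 / abs(rd))

# A hypothetical pooled risk difference of -0.133
# (133 fewer events per 1000) gives 1/0.133 = 7.52,
# rounded up to an NNT of 8.
print(nnt_from_rd(-0.133))  # → 8
```

Rounding up rather than to the nearest integer is deliberately conservative: it never overstates how few patients need to be treated to observe one additional outcome.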

Note that this approach, although feasible, should be used only for the results of a meta-analysis of risk differences. In most cases meta-analyses will be undertaken using a relative measure of effect (RR or OR), and those statistics should be used to calculate the NNT (see Section). To aid interpretation of the results of a meta-analysis of risk ratios, review authors may compute an absolute risk reduction or NNT.

In order to do this, an assumed comparator risk (ACR; otherwise known as a baseline risk, or the risk that the outcome of interest would occur with the comparator intervention) is required. It will usually be appropriate to do this for a range of different ACRs. The computation proceeds as follows: the absolute risk reduction is ACR × (1 − RR), and the NNT is its reciprocal. Then the effect on risk is 24 fewer per 1000. Review authors may wish to compute a risk difference or NNT from the results of a meta-analysis of odds ratios.
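The risk-ratio computation above can be sketched in code. The RR and ACR values below are my own illustrative assumptions, chosen so that the absolute effect matches the "24 fewer per 1000" figure mentioned in the text.

```python
import math

def risk_difference_from_rr(rr: float, acr: float) -> float:
    """Absolute change in risk implied by a risk ratio RR at an
    assumed comparator risk (ACR). Negative = risk is reduced."""
    return acr * (rr - 1)

def nnt_from_rr(rr: float, acr: float) -> int:
    """NNT from a pooled RR and an assumed comparator risk,
    rounded up by convention."""
    rd = risk_difference_from_rr(rr, acr)
    return math.ceil(1 / abs(rd))

# Assumed inputs: RR = 0.92 at an assumed comparator risk of 30%.
# The risk falls by 0.30 * 0.08 = 0.024, i.e. 24 fewer per 1000,
# and the NNT is ceil(1/0.024) = 42.
rd = risk_difference_from_rr(0.92, 0.30)
print(round(1000 * -rd))        # → 24 (fewer per 1000)
print(nnt_from_rr(0.92, 0.30))  # → 42
```

Repeating the call for several plausible ACRs (as the text recommends) shows how strongly the absolute benefit depends on baseline risk even when the RR is constant.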

In order to do this, an ACR is required. Then the effect on risk is 62 fewer per 1000. Because risk ratios are easier to interpret than odds ratios, but odds ratios have favourable mathematical properties, a review author may decide to undertake a meta-analysis based on odds ratios, but to express the result as a summary risk ratio or relative risk reduction.
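These odds-ratio conversions rest on two standard identities: the risk with intervention implied by an OR at a given ACR is OR × ACR / (1 − ACR + OR × ACR), and the corresponding risk ratio is OR / (1 − ACR × (1 − OR)). A minimal sketch follows; the numeric inputs are hypothetical.

```python
import math

def intervention_risk_from_or(or_: float, acr: float) -> float:
    """Risk in the intervention group implied by an odds ratio
    at an assumed comparator risk (ACR)."""
    return (or_ * acr) / (1 - acr + or_ * acr)

def nnt_from_or(or_: float, acr: float) -> int:
    """NNT from a pooled OR and an assumed comparator risk."""
    rd = intervention_risk_from_or(or_, acr) - acr
    return math.ceil(1 / abs(rd))

def rr_from_or(or_: float, acr: float) -> float:
    """Summary risk ratio implied by an odds ratio at a given ACR
    (e.g. the median comparator group risk across studies)."""
    return or_ / (1 - acr * (1 - or_))

# Hypothetical values: OR = 0.7 at an assumed comparator risk of 25%.
print(nnt_from_or(0.7, 0.25))           # → 17
print(round(rr_from_or(0.7, 0.25), 3))  # → 0.757
```

Note that the two formulas agree by construction: the implied RR equals the implied intervention risk divided by the ACR.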

This requires an ACR. It will often be reasonable to perform this transformation using the median comparator group risk from the studies in the meta-analysis. Note that the resulting confidence interval does not incorporate uncertainty around the ACR.

Review authors should describe in the study protocol how they plan to interpret results for continuous outcomes. When outcomes are continuous, review authors have a number of options for presenting summary results.

These options differ according to whether studies report the same measure that is familiar to the target audiences, the same or very similar measures that are less familiar to the target audiences, or different measures. If all studies have used the same familiar units (for instance, results expressed as durations of events, such as symptoms for conditions including diarrhoea, sore throat, otitis media or influenza, or duration of hospitalization), a meta-analysis may generate a summary estimate in those units, as a difference in mean response (see, for instance, the row summarizing results for duration of diarrhoea in Chapter 14, Figure). However, the units of such outcomes may be difficult to interpret, particularly when they relate to rating scales (again, see the oedema row of Chapter 14, Figure). Knowing the minimal important difference (MID) allows review authors and users to place results in context.

For example, the chronic respiratory questionnaire has possible scores in health-related quality of life ranging from 1 to 7, with an MID of approximately 0.5.

When studies have used different instruments to measure the same construct, a standardized mean difference (SMD) may be used in meta-analysis for combining continuous data. Without guidance, clinicians and patients may have little idea how to interpret results presented as SMDs. Review authors should therefore consider issues of interpretability when planning their analysis at the protocol stage, and should consider whether there will be suitable ways to re-express the SMD, or whether alternative effect measures, such as a ratio of means or minimal important difference units (Guyatt et al b), should be used.

Table (adapted from Guyatt et al b) summarizes the options. The SMD is widely used, but its interpretation is challenging: it can be misleading depending on whether the population is very homogeneous or heterogeneous (i.e. on its variability; see Section). Presenting data in natural units may be viewed by users as closer to the primary data. However, few instruments are used widely enough in clinical practice to make many of the presented units easily interpretable.

When the units and measures are familiar to the decision makers, this presentation should be considered. Note: conversion to natural units is also an option for expressing results using the MID approach (below, row 3). Dichotomous outcomes are very familiar to clinical audiences and may facilitate understanding.

However, this approach involves assumptions that may not always be valid. If the minimal important difference for an instrument is known, describing the probability of individuals achieving this difference may be more intuitive.

Review authors should always seriously consider this option. Note: re-expressing SMDs is not the only way of expressing results as dichotomous outcomes. For example, the actual outcomes in the studies can be dichotomized, either directly or using assumptions, prior to meta-analysis. This approach may be easily interpretable to clinical audiences and involves fewer assumptions than some other approaches. However, it cannot be applied when the measure is a change from baseline (and therefore negative values are possible), and its interpretation requires knowledge of the comparator group mean.

Consider it as complementing other approaches, particularly the presentation of relative and absolute effects. This approach may be easily interpretable for audiences, but is applicable only when minimal important differences are known. The SMD expresses the intervention effect in standard units rather than the original units of measurement. The value of an SMD thus depends on both the size of the effect (the difference between means) and the standard deviation of the outcomes (the inherent variability among participants, or an external SD).

However, absolute values of the intervention and comparison groups are typically not useful, because studies have used different measurement instruments with different units. One common rule of thumb is that 0.2 represents a small effect, 0.5 a moderate effect, and 0.8 a large effect; variations exist. However, some methodologists believe that such interpretations are problematic because the patient importance of a finding is context-dependent and not amenable to generic statements. The second possibility for interpreting the SMD is to express it in the units of one or more of the specific measurement instruments used by the included studies (row 1b, Table). The approach is to calculate an absolute difference in means by multiplying the SMD by an estimate of the SD associated with the most familiar instrument.

To obtain this SD, a reasonable option is to calculate a weighted average across all intervention groups of all studies that used the selected instrument (preferably a pre-intervention or post-intervention SD, as discussed in Chapter 10, Section). To better reflect among-person variation in practice, or to use an instrument not represented in the meta-analysis, it may be preferable to use a standard deviation from a representative observational study.

The summary effect is thus re-expressed in the original units of that particular instrument, and the clinical relevance and impact of the intervention effect can be interpreted using that familiar instrument.

The same approach of re-expressing the results for a familiar instrument can also be used for other standardized effect measures, such as when standardizing by MIDs (Guyatt et al b; see Section). Reproduced with permission of Wolters Kluwer. The table lists options for presenting information about the outcome post-operative pain, with a suggested description of the measure.

Table entries include: the average pain score in the dexamethasone groups expressed in SD units relative to placebo; rule-of-thumb thresholds for interpreting the SMD; scores re-calculated from the SMD on the original pain scale; the approximate minimal important difference on the pain scale; and the weighted average of the mean pain score in the dexamethasone group divided by the mean pain score in the placebo group.

An effect less than half the minimal important difference suggests a small or very small effect. A third approach (row 1c, Table) transforms the SMD to a log odds ratio, based on the assumption that an underlying continuous variable has a logistic distribution with equal standard deviations in the two intervention groups, as discussed in Chapter 10, Section. The assumption is unlikely to hold exactly, and the results must be regarded as an approximation.

The log odds ratio is estimated as ln(OR) = (π/√3) × SMD ≈ 1.81 × SMD. The comparator group risk in this case would refer to the proportion of people who have achieved a specific value of the continuous outcome. In randomized trials this can be interpreted as the proportion who have improved by some specified amount ('responders'), for instance by 5 points on the scale used.

The risk differences can then be converted to NNTs, or to people per thousand, using the methods described in Section. Reproduced with permission of Elsevier. The table distinguishes situations in which the event is undesirable (a reduction, or an increase if the intervention is harmful, in adverse events with the intervention) from situations in which the event is desirable (an increase, or a decrease if the intervention is harmful, in positive responses to the intervention).
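Under the logistic-distribution assumption just described, the SMD-to-odds-ratio transformation and the subsequent conversion to a risk difference for 'responders' can be sketched as follows. The comparator response proportion is a hypothetical input, and the helper names are my own.

```python
import math

def odds_ratio_from_smd(smd: float) -> float:
    """Approximate OR from an SMD, assuming the underlying continuous
    outcome is logistic with equal SDs in both groups:
    ln(OR) = (pi / sqrt(3)) * SMD."""
    return math.exp(math.pi / math.sqrt(3) * smd)

def risk_difference_from_smd(smd: float, comparator_risk: float) -> float:
    """Change in the proportion 'responding' (e.g. improving by at
    least some specified amount), given the comparator proportion."""
    or_ = odds_ratio_from_smd(smd)
    odds = comparator_risk / (1 - comparator_risk)
    intervention_risk = (or_ * odds) / (1 + or_ * odds)
    return intervention_risk - comparator_risk

# Hypothetical: SMD = 0.5 with 30% of comparator patients responding.
rd = risk_difference_from_smd(0.5, 0.3)
print(f"{round(1000 * rd)} more responders per 1000")
```

Because the logistic assumption rarely holds exactly, results from this transformation should be presented as approximations, as the text notes.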

A more frequently used approach is based on calculation of a ratio of means between the intervention and comparator groups (Friedrich et al), as discussed in Chapter 6, Section 6. Interpretational advantages of this approach include the ability to pool studies with outcomes expressed in different units directly, the avoidance of the vulnerability to heterogeneous populations that limits approaches relying on SD units, and ease of clinical interpretation (row 2, Table). This method is currently designed for post-intervention scores only.

However, it is possible to calculate a ratio of change scores if both intervention and comparator groups change in the same direction in each relevant study, and this ratio may sometimes be informative.

Limitations of this approach include its limited applicability to change scores (since it is unlikely that both intervention and comparator group changes are in the same direction in all studies), and the possibility of misleading results if the comparator group mean is very small: in that case, even a modest difference from the intervention group will yield a large, and therefore misleading, ratio of means.
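A sketch of the per-study calculation and subsequent generic inverse-variance pooling follows, assuming the delta-method standard error for the log ratio of means described by Friedrich and colleagues. The study data are entirely hypothetical.

```python
import math

def log_rom(m1, sd1, n1, m2, sd2, n2):
    """Log ratio of means and its standard error for one study
    (delta-method SE, following Friedrich et al)."""
    ln_rom = math.log(m1 / m2)
    se = math.sqrt(sd1**2 / (n1 * m1**2) + sd2**2 / (n2 * m2**2))
    return ln_rom, se

def pool_inverse_variance(effects):
    """Fixed-effect generic inverse-variance pooling of
    (estimate, SE) pairs; returns the pooled estimate and its SE."""
    weights = [1 / se**2 for _, se in effects]
    pooled = sum(w * est for (est, _), w in zip(effects, weights)) / sum(weights)
    return pooled, math.sqrt(1 / sum(weights))

# Hypothetical studies: (intervention mean, SD, n, comparator mean, SD, n)
studies = [(4.2, 1.1, 40, 6.0, 1.4, 38), (3.8, 0.9, 55, 5.1, 1.2, 57)]
pooled, se = pool_inverse_variance([log_rom(*s) for s in studies])
print(f"pooled ratio of means = {math.exp(pooled):.2f}")
```

Pooling on the log scale and exponentiating at the end mirrors how ratio measures (OR, RR) are handled in meta-analysis generally.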

It also requires that separate ratios of means be calculated for each included study, and then entered into a generic inverse-variance meta-analysis (see Chapter 10, Section); the ratio of means approach is illustrated in Table. To express results in MID units, review authors have two options. First, the mean differences can be combined across studies in the same way as the SMD, but instead of dividing the mean difference of each study by its SD, review authors divide by the MID associated with that outcome (Johnston et al, Guyatt et al b).

This approach avoids the problem of varying SDs across studies that may distort estimates of effect in approaches that rely on the SMD. The approach, however, relies on having well-established MIDs.

The approach is also risky in that a difference less than the MID may be interpreted as trivial when a substantial proportion of patients may have achieved an important benefit. The other approach makes a simple conversion (not shown in Table). For example, one can rescale the mean and SD of other chronic respiratory disease instruments to the scale of a more familiar instrument. This approach, presenting results in units of the most familiar instrument, may be the most desirable when the target audiences have extensive experience with that instrument, particularly if the MID is well established.
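The MID-units idea reduces to two one-line conversions, sketched below. The numeric inputs are illustrative assumptions (including the 0.5-point MID used for the example), not values from the text.

```python
def mid_units(mean_difference: float, mid: float) -> float:
    """Express a study's mean difference in MID units: analogous
    to the SMD, but dividing by the MID rather than the SD."""
    return mean_difference / mid

def rescale_to_instrument(effect_in_mid_units: float, target_mid: float) -> float:
    """Re-express a pooled effect in MID units on a familiar
    instrument's scale, assuming that instrument's MID is known."""
    return effect_in_mid_units * target_mid

# Hypothetical: a mean difference of 0.25 points on an instrument
# whose assumed MID is 0.5 points.
effect = mid_units(0.25, 0.5)                # 0.5 MID units
print(rescale_to_instrument(effect, 0.5))    # back on that instrument's scale
```

Dividing by the MID rather than the SD sidesteps the between-study SD variation that can distort SMD-based estimates, which is exactly the advantage the text claims for this approach.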

While Cochrane Reviews about interventions can provide meaningful information and guidance for practice, decisions about the desirable and undesirable consequences of healthcare options require evidence and judgements for criteria that most Cochrane Reviews do not provide (Alonso-Coello et al). In describing the implications for practice and the development of recommendations, however, review authors may consider the certainty of the evidence, the balance of benefits and harms, and assumed values and preferences.

Drawing conclusions about the practical usefulness of an intervention entails making trade-offs, either implicitly or explicitly, between the estimated benefits and harms and the values and preferences attached to them. Making such trade-offs, and thus making specific recommendations for an action in a specific context, goes beyond a Cochrane Review and requires additional evidence and informed judgements that most Cochrane Reviews do not provide (Alonso-Coello et al). Thus, authors of Cochrane Reviews should not make recommendations.

If review authors feel compelled to lay out actions that clinicians and patients could take, they should — after describing the certainty of evidence and the balance of benefits and harms — highlight different actions that might be consistent with particular patterns of values and preferences.

Other factors that might influence a decision should also be highlighted, including any known factors that would be expected to modify the effects of the intervention, the baseline risk or status of the patient, costs and who bears those costs, and the availability of resources.

Review authors should ensure they consider all patient-important outcomes, including those for which limited data may be available. In the context of public health reviews, the focus may be on population-important outcomes, as the target may be an entire non-diseased population, and may include outcomes that are not measured in the population receiving the intervention.

This process implies a high level of explicitness in judgements about values or preferences attached to different outcomes and the certainty of the related evidence (Zhang et al b, Zhang et al c); this, and a full cost-effectiveness analysis, is beyond the scope of most Cochrane Reviews, although they might well be used for such analyses (see Chapter). Patients with a high preference for a potential survival prolongation, limited aversion to potential bleeding, and who do not consider heparin (either UFH or LMWH) therapy a burden may opt to use heparin, while those with an aversion to bleeding may not.

It is helpful to consider the population, intervention, comparison and outcomes that could be addressed, or addressed more effectively, in future research, in the context of the certainty of the evidence in the current review (Brown et al):

While Cochrane Review authors will find the PICO domains helpful, the domains of the GRADE certainty framework further support understanding and describing what additional research will improve the certainty of the available evidence. Note that, as the certainty of the evidence is likely to vary by outcome, these implications will be specific to certain outcomes in the review. For example, if all studies suffered from a lack of blinding of outcome assessors, trials without this limitation are required.

The estimates of effect may be biased because of a lack of blinding of the assessors of the outcome. Unexplained inconsistency: need for individual participant data meta-analysis; need for studies in relevant subgroups.

Studies in patients with small cell lung cancer are needed to understand if the effects differ from those in patients with pancreatic cancer. Unexplained inconsistency: consider and interpret overall effect estimates as for the overall certainty of a body of evidence.

Explained inconsistency (if results are not presented in strata): consider and interpret effect estimates by subgroup. Studies in patients with early cancer are needed because the evidence is from studies in patients with advanced cancer.

It is uncertain whether the results apply directly to the patients, or to the way the intervention is applied, in a particular setting. Studies with more events in the experimental intervention group and the comparator intervention group are required. The same uncertainty interpretation applies as for the certainty of a body of evidence.

Need to investigate and identify unpublished data; large studies might help resolve this issue. The same uncertainty interpretation applies as for the certainty of a body of evidence. The effect is large in the populations that were included in the studies, and the true effect is likely to cross important thresholds. Studies controlling for possible confounders, such as smoking and degree of education, are required.

The effect could be even larger or smaller (depending on the direction of the results) than the one observed in the studies presented here.

Further research may be justified to investigate the relative effects of different strengths of stockings or of stockings compared to other preventative strategies.

Further randomised trials to address the remaining uncertainty about the effects of wearing versus not wearing compression stockings on outcomes such as death, pulmonary embolism and symptomatic DVT would need to be large. Future trials need to be rigorous in design and delivery, with subsequent reporting to include high quality descriptions of all aspects of methodology to enable appraisal and interpretation of results.

When the confidence intervals are too wide, review authors should choose their language carefully. For example, when the effect estimate is positive for a beneficial outcome but the confidence intervals are wide, review authors may describe the effect as promising.

Even if it is possible, it makes very dull reading. Why use the past tense when reporting results? Just as the Methods section gives an account of what you did, the Results section provides an account of what you found.

The Results section requires you to narrate the account as if it is history: it took place in the past, and is now being reported as something in the past. This also applies to what your respondents said or reported in any interview or questionnaire responses.

Generally (really, really just my opinion here), I would rather not be too dogmatic when discussing scientific writing style with others. Keep an open mind, but do know your own field's general "rules." Yet, one thing that I probably will never mess with is APA style; APA style fandom is strong and I'll steer away from upsetting them.


What is the difference between an explanation and an interpretation? Asked 4 years, 11 months ago. Viewed 4k times. And if so, can someone please clarify?

I'm tempted to say this might be a better fit for the English Language SE, but it's also sufficiently on-topic here, I think. That being said, this might just be a semantics argument.

I think the question is on-topic, because I interpret it to be less about the particular terms "explanation" and "interpretation" and more about what the colleague meant when suggesting "adding explanations" to the results section. Maybe the OP could edit the title slightly to emphasize this part of the body of the question, because the body tells me that semantics is not the real point at issue here. I have always had the feeling that interpretation is more flexible when it comes to explaining results, so it can be more personal and biased.

Thus, we say "this is your interpretation of what happened". We could use "explanation" instead, but it would seem a little odd; an explanation seems to be, or should be, more objective.


