By William Sutherland
Some patients with heart blockages have additional ventricular beats, and it has been shown that these can be suppressed by various drugs; this treatment thus became common practice (1). It follows the conventional methodology of identifying a problem and finding ways of alleviating it. However, randomized trials showed that patients given these drugs were more likely to die (2). Similarly, there are over thirty documented signs by which doctors can detect whether patients are obtaining insufficient air to survive, but when these are tested for precision and accuracy, only a few both bear a relation to the actual airflow and can be confirmed by repeat measurement (some were not repeatable even by the same clinician) (3). In another study, researchers interviewed American general physicians after each consultation with a patient and asked whether they had any questions they would like answered. The researchers then consulted the literature to answer those questions and showed that, in a typical day, eight clinical decisions would have been altered (4). Finally, the advice of clinical experts regularly differs from the consensus of already published studies (5, 6).
Concerns over this discrepancy between practice and evidence led to the development of evidence-based medicine (e.g., 7), which is revolutionizing clinical practice. Clinicians often base their decisions on intuition and experience, so that it is difficult for others to learn from them except through mimicry. Evidence-based medicine involves unraveling, understanding, and making objective the logic of experts, so that mimicry and tradition can be replaced with understanding. It builds on, rather than replaces, clinical skills, clinical judgement, and clinical experience.
Traditional medical practice is, of course, heavily based on scientific evidence, but that evidence is often derived largely from secondary sources: from textbooks, from discussion with colleagues, or from information obtained during training, with little use of scientific journals or reviews (8). Thus a study of hypertension showed that the main factor determining whether doctors decided to prescribe antihypertension drugs was not the severity of organ damage, as would be expected, but the number of years since the doctor graduated from medical school (9, 3). The real methodological change in evidence-based medicine is for doctors to have the training to interpret studies, and access to information, so that they can review primary studies and continually update their methods.
Do we need a similar revolution to produce evidence-based conservation? At the University of East Anglia, in the U.K., we have been carrying out a series of research projects to determine the consequences of various management practices, and it is striking how often the conventional dogma turns out to be mistaken. For example, it is widely accepted that burning reed beds (a traditional practice disliked by many conservationists) kills many soil invertebrates, but a series of replicated, randomized, and controlled experiments showed that this was not true, whereas flooding (a standard practice approved of by most conservationists) did kill them (10, 11). As a second example, a U.K. government conservation scheme paid farmers to flood fields in winter to enhance populations of breeding waders, but the flooding actually kills the earthworms on which the waders feed (12). The waders are largely dependent upon shallow pools in summer, yet under the government scheme all areas receiving subsidies must be dry by the spring. As a final example, it was thought that the low survival of wader chicks on saline lagoons was largely due to declining fertility, and that predation was a major problem. Research showed that fertility has little effect; the major problem is chicks starving because high salinities kill the invertebrates on which they feed (13).
Conservationists are continually applying methods to species and habitats similar to those that have been used elsewhere, and would surely benefit from being able to review past experience. In practice, information is obtained partly from reviews and books (but rarely from primary sources) and largely by talking to others. Indeed, even wardens managing similar habitats often do not exchange information, especially if working for different organizations or in different regions or countries. Similarly, Wright (14) states that research results are rarely shared between parks in North America.
The following are the main techniques (3) of evidence-based medicine, which I have illustrated with a conservation example:
1. Convert information into answerable questions (e.g., Do motor boats affect waterweed populations?).
2. Efficiently track down the best evidence with which to answer the question. This may include comparisons within parts of the site varying in boat activity, published papers, or evidence from other sites.
3. Critically appraise evidence both for its validity and usefulness (e.g., Is the water depth or plant community so different in the other sites that comparisons are not useful?).
4. Apply results of this appraisal.
5. Evaluate performance. A critical need is to find ways in which conservationists can record the exact problem (e.g., the spread of a particular problem plant), what was done (e.g., exactly how it was treated, when it was treated, and which equipment was used), and whether it was successful (ideally by quantification, but at least by recording whether the species increased, decreased, or remained equally abundant). A further need is for practicing conservationists to have the training to interpret studies in a critical manner. The essence of my point is that, for example, in Britain there must be dozens of wardens trying to control introduced rhododendron (Rhododendron ponticum), and dozens more trying to prevent damage to regenerating woods by introduced muntjac deer (Muntiacus reevesi), to attract wading birds to wet grasslands, to decide when to allow grazing on fens, or to restore damaged raised bogs. If each had an account of everyone else's methods, successes, and failures, and those of wardens in other countries, then surely conservation would progress at a more rapid pace. The Internet provides the ideal medium for this.
One problem is that conservationists already have too much to do, and documenting responses to management may seem yet another tedious administrative task. But just as I am glad that doctors take time to learn whether a treatment works, I believe conservationists need to find ways of assimilating their collective experience. Far from wasting time, the opportunity to learn from others and avoid repeating mistakes should actually free more time for conservation that works.
1. Morganroth, J., J.T. Bigger Jr., and J.L. Anderson. 1990. Treatment of ventricular arrhythmia by United States cardiologists: a survey before the Cardiac Arrhythmia Suppression Trial results were available. American Journal of Cardiology 65:40-8.
2. Echt, D.S., P.R. Liebson, and B. Mitchell. 1991. Mortality and morbidity in patients receiving encainide, flecainide, or placebo: the Cardiac Arrhythmia Suppression Trial. New England Journal of Medicine 324:781-8.
3. Sackett, D.L. et al. 1998. Evidence-based Medicine: How to Practice and Teach EBM. Churchill Livingstone, Edinburgh.
4. Covell, D.G., G.C. Uman, and P.R. Manning. 1985. Information needs in office practice: are they being met? Annals of Internal Medicine 103:596-9.
5. Antman, E.M. et al. 1992. A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. Journal of the American Medical Association 268:240-8.
6. Warren, K.S. and F. Mosteller (eds.) 1993. Doing more harm than good: the evaluation of health care interventions. Annals of the New York Academy of Sciences 703:1-341.
7. Friedland, D.J. (ed.) 1998. Evidence-based Medicine: A Framework for Clinical Practice. Appleton & Lange, Stamford.
8. Kanouse, D., D. Killich, and J.P. Kahan. 1995. Dissemination of effectiveness and outcomes research. Health Policy 34:167-192.
9. Sackett, D.L. et al. 1977. Clinical determinants of the decision to treat primary hypertension. Clinical Research 24:648.
10. Cowie, N.R. et al. 1993. The effects of conservation management of reed beds: II The flora and litter disappearances. Journal of Applied Ecology 29:277-284.
11. Ditlhago, M.K.M. et al. 1993. The effects of conservation management of reed beds: I The invertebrates. Journal of Applied Ecology 29:265-276.
12. Ausden, M. 1996. Invertebrates. In Sutherland, W.J. ed. Ecological Census Techniques. Cambridge University Press, Cambridge.
13. Robertson, P.A. 1993. The management of artificial coastal lagoons in relation to invertebrates and avocets Recurvirostra avosetta (L). PhD thesis, University of East Anglia.
14. Wright, R.G. 1999. Wildlife management in the national parks: questions in search of answers. Ecological Applications 9:30-36.
Excerpted from The Conservation Handbook: Research, Management and Policy by William J. Sutherland. Copyright © 2000 Blackwell Science Ltd.