Thursday, October 23, 2008

"Knife Crime" In The Media

Britain is in the grip of a knife crime epidemic. Isn't it? That is certainly the impression one gets from the media: every week seems to bring new stories of stabbing and murder among city youth. The latest victim is 16-year-old Joseph Lappin, who was set upon by a gang of up to fifteen youths outside a Liverpool youth club on Monday evening and knifed to death in an apparently motiveless attack. One friend was also stabbed twice and seriously wounded, while another received minor injuries.

Then, yesterday, the Home Office published its latest quarterly update of the statistics for police-recorded crime and the British Crime Survey. The figures appear to show a big rise in violent crime compared to the same period last year: a 22% rise in "most serious violence against the person" and a 28% increase in the number of attempted murders with a knife. Some newspapers reported this as a clear rise in crime: the Telegraph gave the story these headlines:
Violent crime increases by a fifth as police fail to keep proper figures
Violent crime has jumped by fifth because police forces have failed to keep proper figures for more than a decade, the Government has admitted.
This is a very misleading headline, and the impression it gives is simply not true. You have to read further to find out what the figures really mean: violent crime has not jumped by a fifth. The number of recorded violent crimes has risen, but largely because many police forces have changed the way they categorise these offences. Similar headlines appeared in the Times ("Police fail to record crime properly, as violence rises 22%"), the Mail ("Violent crime up 22%") and the Express ("Violent crime soars"). The Guardian headlines give a truer picture:
Row over police statistics as recount leads to 22% 'rise' in worst violence
Apparent increase 'due to misinterpretation of rules'
The Independent too emphasises the confusion rather than the apparent rise ("Violent crime underestimated for 10 years"). In other words, your perception of the level of violence in Britain this morning is probably heavily influenced by your choice of newspaper. So what's the truth? More detailed analysis, reported in the Guardian but missing from the other papers, shows that the real rise in serious violence against the person was only around 5%.

Another problem with today's stories is common to all the papers I have seen: emphasis on the percentage change (or "relative risk") rather than the actual numbers (or "natural frequencies"). This has the effect of making the picture seem far worse than it really is, because a rise of 28% in the number of attempted murders by knife sounds far more serious than an increase from 50 incidents to 64, yet both are accurate ways of saying the same thing. We know from research on health statistics that people in general are very bad at understanding relative risk, yet the media persist in using whatever makes the story seem most dramatic.
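
To make the contrast concrete, here is a trivial Python illustration using the attempted-murder figures quoted above. Both framings describe exactly the same change:

```python
# The same change described two ways (counts are those quoted above).
before, after = 50, 64

relative_rise = (after - before) / before
absolute_rise = after - before

print(f"Relative: up {relative_rise:.0%}")                    # "up 28%" -- sounds dramatic
print(f"Absolute: {before} -> {after} incidents (+{absolute_rise})")  # rather less so
```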

There are many other reasons to be careful when trying to understand knife crime. The first point to bear in mind is that it is difficult to define "knife crime". The term encompasses a range of behaviours, from actual use (eg in stabbing) via threatened use (eg in mugging) to knife-carrying. None of these is simple to quantify, however. Some offences recorded as involving a "sharp instrument" might actually refer to a screwdriver, broken bottle or glass, not a knife. Similarly, offences listed as "threatening another person with a weapon" might involve sticks, rocks or other blunt objects as well as knives. Knife-carrying is even harder to define, since certain types of knife may be carried legally if the carrier has a good reason (eg for work, or a hobby such as fishing), adding a major subjective element to the definition.

The best attempt so far to disentangle the problem of knife crime in the UK has been conducted by the Centre for Crime and Justice Studies (CCJS) at King's College London. Their 2007 report ‘Knife Crime’: A Review of Evidence and Policy uses official police statistics as well as data from the British Crime Survey, the Offending, Crime and Justice Survey, and the MORI Youth Surveys carried out for the Youth Justice Board. All these surveys have strengths and weaknesses, but taken together they give little reason to believe that knife carrying has changed much since 2002 or that knife use has increased since 1997. How did the media report these reassuring findings? By now you can probably guess. The Times ran with the headline "Knife crime doubles in 2 years". This claim was justified in the story as follows:
The full extent of Britain’s violent crime epidemic, which yesterday claimed the life of another teenager, is revealed in shocking new figures that show the number of street robberies involving knives has more than doubled in two years. Attacks in which a knife was used in a successful mugging have soared, from 25,500 in 2005 to 64,000 in the year to April 2007. The figures mean that each day last year saw, on average, 175 robberies at knife-point in England and Wales – up from 110 the year before and from 69 in 2004-5.
I have scoured the report but have been unable to find the source of these figures. Perhaps they were found in an early draft or press release, but they certainly do not appear in the final report. Here's what it does say about knife use and mugging:
Mugging figures have been ignored in the analysis because of low sample sizes and recent changes in the definition of mugging.
A footnote explains this in more detail: in 2003-4 there were 19 knife-point muggings reported in the BCS, and in 2006-7 there were 45. Remember that the BCS has in excess of 40,000 participants per year, so these frequencies are a tiny proportion of the total sample. To extrapolate from these figures to the whole population is simply ridiculous, yet that seems to be what the Times did. They cherry-picked the worst figures from a generally positive report and spun them to create the scariest possible headline.
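
A rough back-of-envelope calculation shows just how fragile such small counts are (a sketch only: I am taking the BCS sample size as approximately 40,000, and the counts from the footnote quoted above):

```python
# Why 19 and 45 survey responses are far too few to support national
# extrapolation: the proportions are tiny and the sampling error is huge.
from math import sqrt

SAMPLE = 40_000  # approximate BCS sample size per year

for year, count in [("2003-4", 19), ("2006-7", 45)]:
    rate = count / SAMPLE
    # Poisson standard error on a small count is roughly sqrt(count)
    rel_error = sqrt(count) / count
    print(f"{year}: {rate:.3%} of the sample, "
          f"relative sampling error ~{rel_error:.0%}")
```

A count of 19 carries a relative sampling error of around 23%, so scaling it up to the whole population of England and Wales multiplies that uncertainty into tens of thousands of phantom (or missing) incidents.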

The reason all this matters is that sensationalist reporting is actually making the problem worse. The CCJS report shows that those young people who do carry weapons almost always say they do so for self-defence. If they believe that others are likely to be carrying, they are more likely to do so themselves. When the Mail screams that "Shock figures reveal no part of Britain is safe as knife violence spreads EVERYWHERE" they help to create the problem they purport to abhor. Here's another (even crazier) example: "Britain on alert for deadly new knife with exploding tip that freezes victims' organs". It's about a knife that is sold in America, designed to kill sharks and bears. Is there any reason to believe such knives are being carried in Britain? Any evidence at all? No.

Last year, one of my undergraduate students conducted a research project on weapon-carrying among the youth of Liverpool. It was a small study, which is why I am blogging about it rather than trying to publish it, but it was well conducted and had some very interesting findings. She managed to recruit 57 young people, most of whom were excluded from school, in an area notorious for crime (including the murder of Rhys Jones a few weeks earlier). Because we were concerned that self-report questionnaires may lead to under- or over-reporting, we used a randomising element to make it very clear to the participants that their responses would be anonymous.

Here's how it works. The questions are phrased as YES/NO items, for example "Have you ever carried a knife on the streets?" or "Have you ever used a knife at school?". Before responding to each question, the participants are asked to flip a coin. We tell them: "If the coin comes down heads, say YES to the question. If it comes down tails, tell us the truth." This was explained very carefully, so that they all understood that nobody, not even us, would be able to tell whether they had actually carried out the behaviour or not. We then asked about knife and gun carrying and use, at school and on the streets ("use" in this context could mean use to threaten as well as use to stab or shoot). What we wanted to know was whether we would get more YES responses than the 50% that would be predicted by chance (see footnote for an example). The answer was no: a series of binomial tests revealed no significant deviations from chance.
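
For the statistically minded, here is a minimal sketch in Python of how data from this kind of forced-response design can be scored and tested. The counts used are illustrative only, not our actual data:

```python
# Under "heads -> say YES, tails -> answer truthfully":
#   P(YES) = 0.5 + 0.5 * p_true,  so  p_true = 2 * P(YES) - 1.
from math import comb

def estimate_prevalence(yes_count: int, n: int) -> float:
    """Estimated true rate of the behaviour, floored at zero."""
    return max(0.0, 2 * yes_count / n - 1)

def binomial_p(yes_count: int, n: int) -> float:
    """One-sided binomial test against the 50% YES rate expected
    if nobody had actually carried out the behaviour."""
    return sum(comb(n, k) * 0.5**n for k in range(yes_count, n + 1))

# Illustrative numbers: 32 YES responses from 57 participants.
print(estimate_prevalence(32, 57))  # ~0.12 (point estimate of true rate)
print(binomial_p(32, 57))           # ~0.21 -> not significantly above chance
```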

We also asked for the participants' perceptions of weapon carrying. We gave them a visual analogue scale, labelled from 0% to 100% in 10% steps and anchored at either end with "Nobody" and "Everybody", and asked them to rate what proportion of people their age in their area they believed to have carried or used knives or guns at school and on the street. Here is what we found:

Weapon type   Location                 Mean % (SD)
Guns          Carried at school        17.54% (23.16)
Guns          Carried on the street    29.82% (20.39)
Guns          Used in school            3.68% (7.22)
Guns          Used on the street       19.29% (17.71)
Knives        Carried at school        30.17% (23.33)
Knives        Carried on the street    46.31% (26.56)
Knives        Used in school           12.80% (16.55)
Knives        Used on the street       34.38% (25.35)


In other words, these young people believe that nearly half of their peers have carried knives and that nearly a third have carried guns. They believe that over a third have used a knife on the street and that a fifth have used a gun. These figures are far in excess of the actual percentages, which we have seen were not significantly different from zero. Why do they have such a negative view of the world? I believe the media must take a large part of the blame.

Here is the take-home message. Maybe you believe you need to carry a knife, because you think nowhere is safe. You think that everyone else has a knife, so you'd better have one too. You are wrong. Leave the knife at home.



Footnote: Imagine we had surveyed 100 people and obtained a 60% yes rate for knife carrying. On average, 50 of these would have been directed to say yes by the coin, while 10 of the remaining 50 people were giving us a true yes. This would suggest that 20% of respondents had actually carried a knife. This is significantly higher than chance, according to a binomial test (p=0.0284). In our study, with 57 participants, we would have needed 36 yes responses to obtain a p value below 0.05, which would have meant approximately 30% of the sample had given a real yes. This is close to the population rate of knife carrying according to the MORI Youth Surveys, but because of the nature of our sample we expected higher frequencies than this.
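
Those calculations are easy to verify. Here is a quick Python check (a sketch using the same numbers as above):

```python
from math import comb

def one_sided_p(k: int, n: int) -> float:
    """P(X >= k) for X ~ Binomial(n, 0.5)."""
    return sum(comb(n, i) * 0.5**n for i in range(k, n + 1))

# Verify the worked example: 60 YES responses from 100 people.
print(round(one_sided_p(60, 100), 4))  # 0.0284, as stated above

# Smallest YES count reaching p < 0.05 with our 57 participants.
threshold = next(k for k in range(29, 58) if one_sided_p(k, 57) < 0.05)
print(threshold)  # 36, matching the footnote
```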

Thursday, October 2, 2008

Placebo Needles for Acupuncture Research

The biggest problem with acupuncture research is that it is very difficult to create a convincing placebo treatment for the control group to undergo. If participants can tell whether they have received acupuncture or not, the results of the trial are pretty much worthless as evidence. Similarly, it is important that the practitioner, too, should be unable to tell whether they have given real acupuncture or not. Many different sham treatments have been tried, but none was particularly convincing until, in recent years, dummy needles were developed that do allow double-blinding (Park et al., 2002; White et al., 2003).

I wanted to see these needles for myself, so I contacted Dong Bang Acuprime, who kindly sent me a few samples of their Park Sham Device to try out. They have helped develop and test these fake needles in order to improve the evidence base for acupuncture, and I really admire them for doing so. Few CAM enthusiasts are willing to put their beliefs to the test like this.

So here I am with my colleague Nikola Bridges, who is about to insert two needles into the plastic sleeves you can see attached to my hands with sticky pads. One needle is real, one sham, but at this point neither of us knows which is which. When Nikola presses down on the needles, the real one will penetrate my skin, while the sham one will slide inside its handle, giving the appearance of penetration without actually going in. If the placebo is truly convincing, neither of us will be able to tell which is which until I turn my hands over, at which point the fake will fall out onto the table.

To see what happened next, watch this video before scrolling down to read on...

[Video: inserting the real and sham needles]

All in all, I was a little disappointed. The fake needle telescoped very easily while the real one took a lot of tapping to get down to the same extent, so it was fairly clear to Nikola which one was real. I couldn't feel much difference at first, after the initial insertion, but it was pretty obvious which one was real once it started to go in properly. The real needle also left a tiny but noticeable mark on my skin, and was obviously longer than the fake one when we took them out again. There is also still (a few hours later) a hint of pain in my left hand. Sadly, I have to conclude that the placebo was not 100% convincing.

Let's not be too negative, however. There were many flaws with my little experiment. In a real study, you would not have the same person experiencing both types of needle at the same time, so the contrast between needles would not be so apparent. If the acupoints were out of sight, such as on the patient's back, it would be harder (for the patient, at least) to tell what was going on. I am not sure how important it is that Nikola is NOT an acupuncturist and had not had any chance to practise the insertions (she is a neuroscientist and has plenty of experience with needles, just not this kind), but I suspect it would be even more obvious to an experienced needler which one was which. We were both a bit nervous, though! Perhaps two single, confident thrusts would have been harder to discriminate.

My conclusion is that it is very hard to maintain double-blindness in acupuncture studies, even with the best sham needles. I think it may be possible to do a good double-blind study with these, but both patient and practitioner would have to be very self-disciplined, to resist the temptation to break the blinding. Sadly, my experience suggests that it would be very easy to succumb.

References

Park, J., White, A., Stevinson, C., Ernst, E. & James, M. (2002). Validating a new non-penetrating sham acupuncture device: two randomised controlled trials. Acupuncture in Medicine, 20, 168-174.

White, P., Lewith, G., Hopwood, V. & Prescott, P. (2003). The placebo needle, is it a valid and convincing placebo for use in acupuncture trials? A randomised, single-blind, cross-over pilot trial. Pain, 106, 401–409.

Wednesday, October 1, 2008

Homeopathy Is Antiscience (Part 1)

Homeopathy is simply an elaborate placebo. It is certainly not scientific, but it is worse than mere non-science: it is anti-science, and so should have no place in a University. In this series of posts I will explain why. In the first instalment I will describe the dilution problem, and the bizarre range of substances that homeopaths claim to have medicinal properties. In later posts I will consider the research evidence and evaluate the counterarguments that homeopaths use against their critics. But let's start with the basics...

The Dilution Problem

The probability that a homeopathic substance could have any effect (other than placebo) is vanishingly small. We can say this with some confidence because homeopathy contradicts at least two of the most solidly-established principles in biology and chemistry. The first is that larger amounts of a drug or toxin have larger effects. This is called the dose-response relationship, and it is an iron law of biomedicine. Ten paracetamols are more dangerous than two, just as ten pints of ale will get you drunker than two. There are no exceptions, except in the topsy-turvy world of the homeopath, where lower dilutions such as 3X can be bought over the counter and given to babies (e.g. for teething) whereas extremely high dilutions such as 200C are thought to be far too dangerous for this, and should only be prescribed by a trained practitioner.

For readers unfamiliar with homeopathic notation I will explain these dilutions. The "X" means that the original essence has been diluted one part in ten, and the number tells you how many times the dilution has been repeated. So 3X means that a one in ten dilution has been repeated three times, leaving a final concentration of one part in a thousand (1 in 10^3). A "C" dilution is one in a hundred, so a 200C preparation has a concentration of one part in 10^400 (forgive me for not writing out the full number: a one followed by 400 zeroes). This brings me to the second basic principle that homeopathy flouts: the Avogadro limit. Once the dilution process has passed 24X or 12C we can be pretty sure that no molecules of the original substance remain. According to the standard molecular model of chemistry it is impossible for these dilutions to have any effect. At the lower dilutions, yes, a 3X dilution will still contain a fair dose of the original essence, but most homeopathic preparations are taken way beyond the Avogadro limit: 30X and 30C are probably the most commonly used. To visualise a 30C dilution, imagine one molecule of an active ingredient being added to 10^60 molecules of diluent. What would this look like? We are not talking drop-in-a-swimming-pool or even drop-in-the-ocean here. 10^60 water molecules would make a sphere twenty-eight billion times larger than planet Earth.
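
If you want to check the arithmetic, here is a short Python sketch (it assumes you begin with one mole, roughly 6x10^23 molecules, of the original substance):

```python
# Quick check of homeopathic dilution arithmetic.
AVOGADRO = 6.022e23  # molecules in one mole of starting substance

dilutions = [("3X", 10, 3), ("24X", 10, 24), ("12C", 100, 12), ("30C", 100, 30)]

for label, factor, steps in dilutions:
    concentration = factor ** -steps           # fraction of solute remaining
    molecules_left = AVOGADRO * concentration  # expected molecules from one mole
    print(f"{label}: 1 part in {factor}^{steps}, "
          f"~{molecules_left:.3g} molecules of solute remain")
```

Past 24X or 12C the expected number of surviving molecules drops below one; by 30C it is around 10^-37, which is the Avogadro limit in action.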

Homeopaths are aware of the dilution problem, of course. However, they contend that it is irrelevant because of the way the dilutions are carried out. Between each step in the dilution process, the preparation is vigorously shaken (or "succussed") in order to transfer the "healing energies" of the solute into the diluent. This is obvious nonsense, and we will see later on that there is absolutely no evidence that succussion has any effect. But first, let's look in more detail at the range of substances homeopaths use to create these strange solutions.

Animal, Vegetable, Mineral?

At dilution levels beyond the Avogadro limit it makes no difference what the original essence was, but it is still worth spending a few moments considering the range of ingredients that homeopaths use. This provides another reason to be sceptical about the claims of homeopathy: the jaw-dropping silliness of the so-called remedies.

One common misconception about homeopathy is that it uses only natural substances, such as herbal essences and plant products like coffee or onions. Indeed, there are homeopaths who choose to specialise in such remedies, but for most practitioners the herbals are only a small part of their armoury. Another very important group of remedies is based on minerals, especially salts such as sodium chloride, magnesium phosphate and silicon dioxide, which you can buy in combination as a hayfever remedy. A third group of remedies is based on animal parts or products, such as duck liver, snake venom and even dog excrement. Fourthly, there are remedies known as “nosodes”, which are made from human disease products, such as pus, mucus, blood, faeces and scraps of tissue. Finally, there is a group of remedies known as “imponderables”, made from such things as electricity, thunderstorms or sunlight. There is even a remedy made from fragments of the Berlin Wall, used for those who feel oppressed, or who find themselves having to mediate between warring factions. I am not joking: homeopath Charles Wansbrough reports using Berlin Wall for patients who have “decided that their surrounding environment was hostile and suppressive and chose to create a ‘wall’ of fury that encircled their way of being”. Kees Dam was sceptical but decided to try a proving and was convinced:

“My ‘Berlin Wall’ was broken down when I trusted and believed my eyes seeing the effects of Berlin Wall as a homeopathic remedy”.
Dam admits that the proving was not done blind, but does not consider this a problem:
“I must honestly say that I never saw any difference in the quality of the proving depending on if the prover knew the remedy or not”.
I hope it is clear that such thinking is closer to sympathetic magic or voodoo than it is to science. Some homeopaths agree, and remedies like Berlin Wall have proved divisive. George Vithoulkas launched a stinging attack in a speech in, appropriately, Berlin:
“If we teach our students to do or apply ridiculous things we will only reach the 'ridiculous', if we potentize the Berlin wall, or the National Anthem of France and we encourage our students to follow such nonsensical ideas, homeopathy will be identified with the ridiculous.”
Quite.

Homeopaths select these "remedies" according to a principle known as the “Law of Similars”, or “Like cures like”. In 1790, the German physician Samuel Hahnemann noticed that cinchona bark, which contains quinine and had long been used as a malaria remedy, actually produced some of the symptoms of malaria when taken by a healthy person (namely himself). Unfortunately for homeopaths, it may be that Hahnemann’s reaction to the cinchona was in fact simply the result of an undiagnosed allergy. Nevertheless, it led him to wonder if a general principle of similarity could be used to discover new remedies, and to classify the chaotic muddle of herbal and mineral preparations that constituted the materia medica of the day. He therefore embarked on a series of experiments upon himself and others, to test for the pathological effects of various substances, including mercury, belladonna, tobacco and nux vomica. Family, friends, students and colleagues submitted themselves to these “provings”, and by 1796 he was convinced that homeopathy (“similar suffering”) was indeed the answer: a substance that causes particular symptoms in a healthy person can be used to cure those symptoms in a sick person. A few examples of the current uses of well-known substances should suffice to give the general idea: onions irritate the eyes and nose, and so may be given as a treatment for colds; coffee is a stimulant, and so can be used to treat insomnia; arsenic causes sickness and diarrhoea, and so is used for food poisoning, and so on.

Hahnemann intended his work on similars to be a refutation of the doctrine of signatures, which provided the basis for much of the medicine of the time. For Hahnemann, similarity was solely a matter of the effects a substance had, not (as in the doctrine of signatures) anything to do with its physical appearance or provenance. However, considering the range of substances described above, it is immediately clear that the doctrine of signatures quickly re-asserted itself. A major flaw in the method of Hahnemann’s provings allowed this to happen, and has been perpetuated in almost all subsequent provings by others: his experiments were not done blind. In other words, he always knew exactly what his guinea pigs were taking, and probably they knew it too, and so his and their perceptions of any symptoms would inevitably have been coloured by the nature of the test and their existing knowledge of the substance. It is therefore not surprising that many ancient herbals resurfaced in homeopathy with similar functions based on appearance. Euphrasia, for example, re-appears as a homeopathic remedy for eye problems, just as it did under the doctrine of signatures owing to its supposed resemblance to a bloodshot eye. As we have seen, nowadays virtually anything can be (and is) used as a homeopathic remedy, often on the basis of nothing more than superficial resemblances or associations.

Conclusion

In this post I have explained three reasons to be highly sceptical of homeopathy: it defies the dose-response relationship, it ignores the Avogadro limit, and it uses a bizarre range of ingredients. Despite this, many homeopaths claim that there is good evidence that homeopathy does work. In the next instalment I will consider this evidence in more detail.