#1 |
Anti-homeopathy illuminati member
Join Date: Feb 2004
Posts: 5,363
Homeopathy in the Journals: a question
Could someone give me a quick hand?
Please remind me: aside from the academic papers published in the Lancet and the BMJ, what have those journals published editorially on the efficacy (or otherwise) of homeopathy? That is, have they, as organisations, expressed any editorial opinion?
__________________
"i'm frankly surprised homeopathy does as well as placebo" Anonymous homeopath. "Alas, to wear the mantle of Galileo it is not enough that you be persecuted by an unkind establishment; you must also be right." (Robert Park) Is the pen is mightier than the sword? Its effectiveness as a weapon is certainly enhanced if it is sharpened properly and poked in the eye of your opponent. |
#2 |
Adult human female
Join Date: Sep 2003
Location: NT 150 511
Posts: 50,593
The Lancet had an absolute corker of an editorial to accompany the Shang et al. paper in 2005. You may recall. The text isn't online as far as I know (someone sent me the pdf), but the salient passage was:
Quote:
The BMJ, as far as I recall, is generally more circumspect. Rolfe.
__________________
"The way we vote will depend, ultimately, on whether we are persuaded to hope or to fear." - Aonghas MacNeacail, June 2012. |
#3 |
Graduate Poster
Join Date: Feb 2006
Posts: 1,306
#4 |
Graduate Poster
Join Date: Mar 2006
Posts: 1,433
There's one from Edzard Ernst here, in the British Journal of Clinical Pharmacology. I don't think it's precisely an editorial opinion, but it's a corker of a smackdown.
http://dcscience.net/ernst-bjcp-07.pdf (Thanks again to David Colquhoun) |
#5 |
Illuminator
Join Date: Jul 2003
Posts: 3,789
All I can track down for the BMJ is:
Feder, Katz. Editorial: Randomised controlled trials for homeopathy. BMJ 2002;324:498-499 (2 March).
Thompson, Feder. Editorial: Complementary therapies and the NHS. BMJ 2005;331:856-857 (15 October).

Just to add - The Lancet does not exist as an "organisation" as such, and while the BMJ is the mouthpiece of the BMA it is editorially independent, so nothing it says can be assumed to represent the views of the BMA.
__________________
"Reci bobu bob a popu pop." - Tanja "Everything is physics. This does not mean that physics is everything." - Cuddles "The entire practice of homeopathy can be substituted with the advice to "take two aspirins and call me in the morning." - Linda "Homeopathy: I never knew there was so little in it." - BSM |
#6 |
Anti-homeopathy illuminati member
Join Date: Feb 2004
Posts: 5,363
__________________
"i'm frankly surprised homeopathy does as well as placebo" Anonymous homeopath. "Alas, to wear the mantle of Galileo it is not enough that you be persecuted by an unkind establishment; you must also be right." (Robert Park) Is the pen is mightier than the sword? Its effectiveness as a weapon is certainly enhanced if it is sharpened properly and poked in the eye of your opponent. |
#7 |
Graduate Poster
Join Date: Jul 2004
Posts: 1,150
__________________
No one could make a greater mistake than he who did nothing because he could do only a little. Edmund Burke (1729 - 1797) Blog - Majikthyse |
#8 |
Illuminator
Join Date: Jul 2003
Posts: 3,789
What about the stance of other "organisations"?
Sense About Science has published some comments before, including this one (not so much a statement about homeopathy as a comment on the change in the regulations governing homeopathic remedies).
__________________
"Reci bobu bob a popu pop." - Tanja "Everything is physics. This does not mean that physics is everything." - Cuddles "The entire practice of homeopathy can be substituted with the advice to "take two aspirins and call me in the morning." - Linda "Homeopathy: I never knew there was so little in it." - BSM |
#9 |
Adult human female
Join Date: Sep 2003
Location: NT 150 511
Posts: 50,593
I like the point they make about "attitudes ... that engender alternative-therapy seeking behaviours ...". It's time the BMA, the RCVS and the rest of them realised that their failure to tell the truth about homoeopathy, and their mealy-mouthed reluctance to proscribe it, are doing exactly that. So long as legitimate medical bodies continue to recognise homoeopathic approaches as being in any way legitimate, the woo tendency will use this as advertising material, and will point to this recognition and sanction as "proof" that their methods are valid.

The RCVS may think that all it is doing is allowing a few vets to pander to the existing woo tendencies in their clients, and so prevent them from going to unqualified alternative practitioners (who could in fact be sanctioned quite easily if that happened, as they would be breaking the law). They also seem to think that these vets will in general treat their patients conventionally, and simply apply the woo as a little add-on to keep the clients happy.

What they totally fail to realise is that they are in fact allowing veterinary surgeons to promote and advertise woo as having the legitimate backing of the RCVS, and so seduce clients who may be undecided or even sceptical. They are also giving comfort and succour to the unqualified woos, who seize on the professional sanction as evidence that their methods are legitimate, and say as much, loudly and often. And finally, they seem to be completely oblivious to the fact that these veterinary surgeons are frequently not practising conventionally at all, but are themselves denying their patients conventional treatment in favour of the alternative - exactly the harm that sanctioning professional practice of alternative methods is supposed to prevent.

Ending the professional sanction for homoeopathy in the medical and veterinary professions would do far more than anything else to reduce "alternative-therapy seeking behaviour", which is the root of the problem.

Rolfe.
__________________
"The way we vote will depend, ultimately, on whether we are persuaded to hope or to fear." - Aonghas MacNeacail, June 2012. |
#10 |
New Blood
Join Date: Jun 2005
Posts: 16
Well, interesting. I didn't know much about homeopathy. I've just used it for myself and my family for the common cold or a sore throat when winter strikes yet again, with some success.
Anyway, reading the following somehow confirms my subjective perception that there may be more behind this than most of you here seem to believe:

BMJ (2000) Randomised controlled trial of homoeopathy versus placebo in perennial allergic rhinitis with overview of four trial series. Morag et al.

Objective: To test the hypothesis that homoeopathy is a placebo by examining its effect in patients with allergic rhinitis and so contest the evidence from three previous trials in this series.
Design: Randomised, double blind, placebo controlled, parallel group, multicentre study.
Setting: Four general practices and a hospital ear, nose, and throat outpatient department.
Participants: 51 patients with perennial allergic rhinitis.
Intervention: Random assignment to an oral 30c homoeopathic preparation of principal inhalant allergen or to placebo.
Main outcome measures: Changes from baseline in nasal inspiratory peak flow and symptom visual analogue scale score over third and fourth weeks after randomisation.
Results: Fifty patients completed the study. The homoeopathy group had a significant objective improvement in nasal airflow compared with the placebo group (mean difference 19.8 l/min, 95% confidence interval 10.4 to 29.1, P=0.0001). Both groups reported improvement in symptoms, with patients taking homoeopathy reporting more improvement in all but one of the centres, which had more patients with aggravations. On average no significant difference between the groups was seen on visual analogue scale scores. Initial aggravations of rhinitis symptoms were more common with homoeopathy than placebo (7 (30%) v 2 (7%), P=0.04). Addition of these results to those of three previous trials (n=253) showed a mean symptom reduction on visual analogue scores of 28% (10.9 mm) for homoeopathy compared with 3% (1.1 mm) for placebo (95% confidence interval 4.2 to 15.4, P=0.0007).
Conclusion: The objective results reinforce earlier evidence that homoeopathic dilutions differ from placebo.

So, I don't really understand why there is such a buzz about homeopathy here. Indeed, nobody so far can claim to know exactly how it works, but that does not mean it cannot work. In some studies it apparently does, in some it doesn't (see above). And that is quite common, too, in almost every scientific subject I have seen.
__________________
" Are you Allen Ginsberg ? no, but This is how I am called " Allen Ginsberg |
#11 |
Butterbeans and Breadcrumbs
Join Date: Jan 2007
Location: Emily's shop
Posts: 17,708
If you cherry-pick your studies rather than look at the evidence as a whole (including the methodological quality of the research), then you can show that pretty much anything works. The most recent meta-analysis of homeopathic treatment shows that it doesn't work better than placebo - and the poorest-designed studies show positive effects, while the better-designed ones show no effect.
Should we base our opinions on poorly designed studies, or well designed ones?
__________________
Sponsor me please! http://www.justgiving.com/Catherine-Kiernan http://www.justgiving.com/Catherine-Kiernan1 My blog |
#12 |
Critical Thinker
Join Date: Oct 2001
Posts: 299
"Evidence based medicine" is becoming a popularly supported topic in some areas. Medical librarians are learning to use it as a standard and are refusing to buy books that are not based on "evidence based medicine." This policy may also have trickled down to regular libraries... i don't know. I can only speak for medical libraries in America. |
#13 |
Illuminator
Join Date: Jul 2003
Posts: 3,789
__________________
"Reci bobu bob a popu pop." - Tanja "Everything is physics. This does not mean that physics is everything." - Cuddles "The entire practice of homeopathy can be substituted with the advice to "take two aspirins and call me in the morning." - Linda "Homeopathy: I never knew there was so little in it." - BSM |
#14 |
Penultimate Amazing
Join Date: Jul 2006
Posts: 18,773
p=0.0001 with only 50 people?
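As a rough check of whether that is even internally consistent, here is a minimal sketch. The per-arm sizes (about 25 each) and the symmetric, t-based confidence interval are assumptions, not figures from the paper; it simply back-calculates the test statistic implied by the abstract's reported mean difference of 19.8 l/min with a 95% CI of 10.4 to 29.1:

```python
from scipy import stats

# Back-calculate the test statistic implied by the reported airflow result.
# Assumptions: roughly 25 patients per arm and a symmetric, t-based 95% CI.
mean_diff = 19.8                      # l/min, reported mean difference
ci_low, ci_high = 10.4, 29.1          # reported 95% confidence interval
se = (ci_high - ci_low) / (2 * 1.96)  # standard error implied by the CI
t_stat = mean_diff / se               # comes out around 4.2
df = 50 - 2                           # ~48 degrees of freedom for two arms of 25
p_two_sided = 2 * stats.t.sf(t_stat, df)
print(f"implied t = {t_stat:.2f}, two-sided p = {p_two_sided:.4f}")
```

Under those assumptions the reported CI does imply a p-value in the region of 0.0001, so the figure is at least consistent with the interval quoted in the abstract; whether the measure itself should be believed is a separate question.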
#15 |
Penultimate Amazing
Join Date: Jan 2005
Posts: 10,226
We get this a lot - somehow our reading of the subject is erroneous, usually with at least an implicit (but often explicit) assumption that we are ignorant of the research surrounding the topic. What is interesting is that this criticism often comes from someone who makes it clear that their own knowledge of homeopathy is scant at best. The consistency of this particular pattern is quite amazing.
Quote:
They've also obviously manipulated the characterization of whether or not a response different from placebo was obtained, by purposely defining any result as a response (if the homeopathy group happens to be worse than average, they had an 'aggravation'; if they happen to be better than average, they had an improvement). There were other serious flaws in this paper as well.
Quote:
Linda |
#16 |
Adult human female
Join Date: Sep 2003
Location: NT 150 511
Posts: 50,593
Ha bloody hah. Note the actual author citations for the paper.
Quote:
Thus, the correct citation should be to TAYLOR, M. A., REILLY, D., LLEWELLYN-JONES, R. H., McSHARRY, C. & AITCHISON, T. C. (2000) Randomised controlled trial of homoeopathy versus placebo in perennial allergic rhinitis with overview of four trial series. British Medical Journal 321, 471-476. Abbreviated, if you like, to "Taylor et al., 2000". Which means it's Yuri Nalyssus to blame for this lot, I suppose. Rolfe.
__________________
"The way we vote will depend, ultimately, on whether we are persuaded to hope or to fear." - Aonghas MacNeacail, June 2012. |
#17 |
New Blood
Join Date: Jun 2005
Posts: 16
Hi Linda!
No, I don't think you are ignorant of the research here, but I do question your interpretation of it.

No, they just observe two main variables: one being an objective measure, the amount of airflow, which is (very) highly significant - unusual in such small samples - the other a subjective one, which was not significant. I cannot see any flaw here, because indeed, one of the main features of hayfever is that you don't get enough air because everything is swollen due to the allergic reaction, so this seems reasonable to me, as a lay person.

There is also no problem with more than one variable defining any given construct (in this case the implicit construct of 'getting better'). The problem is more that they don't define their model very well. Also, they don't say how much better this improvement is, and they do not control for other factors or check for interactions between the two dependent variables they are using (apparently they only use simple t-tests; I would have done an ANOVA). Also, certain statistical methods are indeed set up to test for many factors, for example multi-factorial analysis, but they didn't do that here. So this study is rather crude, but alas, it's not a full article, only a small editorial.

Conclusion: the outcomes are supported by the evidence. Clearly, a main symptom of hayfever, reduced airflow due to the allergic reaction, was significantly reduced in the verum group. Well, in the text they had one centre where people did worse, but overall there was a significant difference in airflow. I cannot see manipulation here.

Hm, yes, but statistics are as good as it gets. Of course, significance is always questionable, but that's true for all studies using these methods. Also, you take it as a fact that the bulk of homeopathic research is too poor to draw any conclusions - well, I don't know, but I will investigate further to see if this opinion is justified.

Yes, that's what effectiveness studies are for: to see if a certain method or compound has a positive effect in a certain situation (in this case, hayfever).

Me neither. But again, I happen to use homeopathy for those everyday ailments, and most of the time it works for me. Only a placebo effect? Well then, it seems to be a good one, with no side effects whatsoever. And if it doesn't work, OK, I just take antibiotics or other suitable pharmaceuticals. Easy!

Cheers, Roland
__________________
" Are you Allen Ginsberg ? no, but This is how I am called " Allen Ginsberg |
#18 |
Graduate Poster
Join Date: Jul 2004
Posts: 1,150
We can all argue till the cows come home about individual studies. The point is well made that one study doesn't prove a general principle. My question is: after 200 years, why is the efficacy of homeopathy in any doubt at all? If it really is effective, why doesn't that fact leap out and hit us in the face? And don't give me all the flannel about RCTs not being suitable for testing homeopathy - here we have a claim that one RCT shows efficacy, while most others of decent quality don't. It seems to me that RCTs are fine when they appear to support what homeopaths claim, and not fine when they don't. And don't give me "it works for me" either - that's not science and we all know it.
There is a thing called the advancement of science. The paper by Taylor et al. is dated 2000. The latest meta-analysis (Shang et al., 2005) shows that homeopathy does not work better than placebo. This is the top level of evidence and is worthy of more attention than a single small trial.
__________________
No one could make a greater mistake than he who did nothing because he could do only a little. Edmund Burke (1729 - 1797) Blog - Majikthyse |
#19 |
Illuminator
Join Date: Jul 2003
Posts: 3,789
I've looked at this paper, and I agree with Linda.
Taking the stats, the authors say in the methods
Quote:
This sounded rather dodgy to me, so I checked it. With Fisher's exact test (one-tailed) the p-value is actually 0.047, which should have been rounded up to 0.05 (i.e. borderline significance) and not down to 0.04. They should be using a two-tailed test anyway, in which case the true p-value is 0.066 (not significant). Using Yates' correction (another method of correcting for categorical data when fewer than 5 are expected in a cell) the p value is 0.96. (A short sketch reproducing this check is at the end of this post.) I haven't gone through their other data (I've got proper work to do here!) but this behaviour smacks of opportunistic and lazy stats to me. They say they had an "independent statistician" - I beg to differ.

ETA: The published BMJ trial is actually followed by a commentary by Lancaster and Vickers. They had this to say (extract follows):
Quote:
Quote:
Responses here. |
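For anyone who wants to repeat the check above, here is a minimal sketch. The 7-of-24 versus 2-of-27 split is an assumption reconstructed from the reported percentages (7 (30%) v 2 (7%) aggravations); the exact arm sizes are not taken from the paper itself:

```python
from scipy.stats import fisher_exact

# Assumed 2x2 table of initial aggravations.
# Rows: homeopathy, placebo; columns: aggravation, no aggravation.
# The 7/24 vs 2/27 split is reconstructed from the reported 30% vs 7%.
table = [[7, 24 - 7],
         [2, 27 - 2]]

_, p_one_tailed = fisher_exact(table, alternative="greater")
_, p_two_tailed = fisher_exact(table, alternative="two-sided")

print(f"Fisher's exact, one-tailed: p = {p_one_tailed:.3f}")  # ~0.047
print(f"Fisher's exact, two-tailed: p = {p_two_tailed:.3f}")  # ~0.066
```

With that assumed split the one- and two-tailed Fisher values come out at roughly 0.047 and 0.066, matching the figures quoted above.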
__________________
"Reci bobu bob a popu pop." - Tanja "Everything is physics. This does not mean that physics is everything." - Cuddles "The entire practice of homeopathy can be substituted with the advice to "take two aspirins and call me in the morning." - Linda "Homeopathy: I never knew there was so little in it." - BSM |
#20 |
Critical Thinker
Join Date: Mar 2007
Posts: 494
Many of us here have had academic training, including a good number of courses on statistics and research methods.
While I don't want to make an argument from authority, I do want to know what specifically you question about her interpretation.

Have you actually read what they have done? They have looked at much, much more than the two main variables they later selected to write the article about. Given that your assumption about what they did is wrong, I can understand your criticism of Linda's analysis, but can we please go back to the actual research and not your interpretation of it?

What they have done is equivalent to drawing a random letter and then testing it for statistical significance. (For brevity, let's assume they pulled a 'c'.) Hmmm:
- pulling an 'a': about 4% chance, 0% occurrence, not significant
- pulling a 'b': about 4% chance, 0% occurrence, not significant
- pulling a 'c': about 4% chance, 100% occurrence, very significant!!

This not only shows the error in the methodology, it also shows why your remark "highly significant - unusual in such small samples" is exactly the wrong way around: using this method it is usual to find false "highly significant" results, ESPECIALLY in small samples. This is exactly why we on this forum, and serious medical researchers with us, frown on this methodology. (A small simulation illustrating the point is at the end of this post.)

We would all agree if the research had stated "increased flow of air is the mechanism by which this (homeopathic) medicine eases the hayfever symptoms" as an a priori proposition. The research as it stands is only usable as a starting point for a completely new study set up to specifically test this flow of air.

There is indeed no problem with more than one variable; there is, however, a problem with declaring individual variables relevant AFTER the results are in. In Linda's words: gathering data on all variables is shooting at the side of the barn - with enough variables you are guaranteed that at least one of them shows a significant difference between the two groups. Choosing to look only at the ones that show that difference is drawing the target around a bullet hole.

What you seem to think are plus sides of the research - not looking at dependencies, etc. - are actually weaknesses. "Rather crude" is a nice way to describe it. But this is also totally irrelevant, because that's not what they were specifically testing, so if they were honest researchers they would not have drawn a conclusion like this.

I assume that by "evidence" you mean data, but that's okay. No researcher is bad enough that post hoc outcomes are not supported by the data. So the fact that the data match something they mined out of the data proves exactly... nothing.

The manipulation is that it's the best-known way to double your chances of 'hitting significance'. By counting both up and down as results, you measure the significance of seeing any effect, which they then present as the significance of seeing an improvement. They are basically doing something that, if done properly, ALWAYS lowers the significance, and applying it in such a way that it heightens significance. They are not the first, nor the last, but at least this is a well-known form of fraud that most serious researchers will not be fooled by.

That's the main point you seem to miss. If a study is conducted well, significance is NEVER questionable. When it is, the research is suspect.

When it comes to this, I'm going to trust Linda's and Rolfe's opinion, as I know they have read many times the number of studies I have. But not just because of that - also because, to me personally, they have shown they possess the necessary abilities to make that judgement.
You probably haven't read as many of their posts as I have, so you not taking their word for it is smart. We agree here. And I'm not even saying that the research itself was conducted poorly, but the post hoc statistical fraudulence doesn't really strengthen my confidence that they conducted even that honestly.

Well, that's one thing we all agree on here: no side effects whatsoever. Does that never make you wonder, though? What can I say - my wife takes homeopathic drops against the common cold and is usually better in seven days, whereas I refuse to take them and require a full week to get better.

Cheers, Edwin.
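As an illustration of the "shooting at the side of the barn" point above, here is a minimal sketch (the eight outcome measures and 25 patients per arm are arbitrary assumptions, and the data are pure noise): when two identical groups are compared on several outcomes at once, the chance that at least one comes out "significant" at p < 0.05 is roughly 1 - 0.95^8, about a third, rather than 5%.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_trials, n_outcomes, n_per_arm = 10_000, 8, 25  # arbitrary assumptions
false_positive_trials = 0

for _ in range(n_trials):
    # Both arms drawn from the same distribution: any "effect" is noise.
    verum = rng.normal(size=(n_outcomes, n_per_arm))
    placebo = rng.normal(size=(n_outcomes, n_per_arm))
    p_values = ttest_ind(verum, placebo, axis=1).pvalue
    if (p_values < 0.05).any():
        false_positive_trials += 1

print(f"Trials with at least one 'significant' outcome: "
      f"{false_positive_trials / n_trials:.1%}")   # ~34%, vs 5% for a single test
```

That is why picking out whichever measure happened to cross the line, after the results are in, is not evidence of anything.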
#21 |
Penultimate Amazing
Join Date: Jan 2005
Posts: 10,226
They obviously were successful in their obfuscation.
They were attempting to replicate a particular finding (as revealed by their power analysis) and failed to find a difference that was significant. They then present one of their other measures, in this case change in nasal inspiratory peak flow, in order to salvage their desire to conclude that homeopathy makes a difference. They had a choice of other measures - use of conventional drugs, ratings on each of a number of symptoms (interference with sleep, nasal symptoms, sneezing, eye symptoms, chest symptoms), adverse events - so it is not a surprise that they were able to find one that showed a statistically significant difference.

The other studies included in the analysis show a similar pattern - multiple outcome measures with heterogeneity and inconsistency in the results. When you don't have convergence in the data, this would normally be a cause for concern. Doesn't it make you wonder why there were no changes in perception of symptoms when there was a large difference in an 'objective' measure? Especially since other studies showed the opposite pattern or no relationship.

When you divide people into groups, you can always find differences between the two groups. Statistical analysis is used to discover whether the differences are unexpectedly large, but you have to pay close attention to avoid violating your assumptions about what should be expected (such as multiple comparisons) - a short sketch of how a multiple-comparisons correction changes the picture is at the end of this post. Study design is used to ensure that you can attribute these differences to a particular cause, but the presence of reasonable alternate explanations weakens your ability to conclude that a particular treatment was the cause.
Quote:
More than one variable can define a given construct, but one indication that your variables are providing a valid measure of that construct is whether or not variables that should be measuring the same thing actually show changes that are similar in direction and magnitude. If some measures go up, some go down, and some don't change, you obviously are not measuring that construct in a valid fashion. Simply selecting out those measures that went the direction you wanted and ignoring those which didn't after the fact is most definitely not the way to fashion a valid measure of a construct.
Quote:
Quote:
Quote:
Quote:
Quote:
Quote:
Linda |
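To make the multiple-comparisons point concrete, here is a minimal sketch. The six p-values are invented for illustration and are not taken from the trial; the point is only that a Holm-Bonferroni adjustment across several outcome measures makes a lone raw p = 0.04 stop looking impressive, while a genuinely strong result survives.

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Return which hypotheses survive a Holm-Bonferroni correction."""
    order = sorted(range(len(p_values)), key=lambda i: p_values[i])
    m = len(p_values)
    rejected = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            rejected[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return rejected

# Hypothetical p-values for six outcome measures (illustration only).
p_vals = [0.0001, 0.04, 0.20, 0.35, 0.60, 0.80]
print(holm_bonferroni(p_vals))   # only the p = 0.0001 result survives
```

The same idea applies to any family of outcome measures examined after the fact: the more of them there are, the stronger a single result has to be before it means anything.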