Comments on Clinical Psychology and Psychiatry: A Closer Look: Effects the Size of BOLDERs

I'm looking at the article. On page 604 (Table 2), the standard errors for the change scores are .99 for placebo, .99 for 300 mg Seroquel, and 1.01 for 600 mg Seroquel.

I converted these to standard deviations by multiplying them by the square root of N for each group. The Ns were 161 for placebo, 155 for 300 mg Seroquel, and 151 for 600 mg Seroquel.

The SDs were hence much larger for the change scores than for the baseline scores, which is often the case: a relatively homogeneous group starts the study, but their responses to treatment vary quite a bit.

So the SDs come out to be 12.56, 12.33, and 12.41. Hence the ES: 5 divided by 12.45, or .40.

So their ES was actually more liberal. It appears that MMRM reduced the SD a fair amount, and that's why the ES changed.

My main point is that the ES changed notably depending on whether LOCF or MMRM was used, and that such a difference should have been discussed in the article, in my opinion.

If MMRM is going to become the standard method used in clinical trials, then research should be done on how an ES calculated with LOCF differs from an MMRM-calculated ES.
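The arithmetic above (standard error to standard deviation, then a Cohen's-d-style effect size) can be sketched in a few lines of Python. Taking the simple average of the three group SDs is my assumption here; the comment's 12.45 denominator presumably reflects some such pooling of the group SDs:

```python
import math

# Standard errors and group sizes quoted from Table 2 of the article
# (PL = placebo, then quetiapine 300 mg and 600 mg).
se = {"PL": 0.99, "300mg": 0.99, "600mg": 1.01}
n = {"PL": 161, "300mg": 155, "600mg": 151}

# SE = SD / sqrt(N), so SD = SE * sqrt(N).
sd = {group: se[group] * math.sqrt(n[group]) for group in se}
# -> roughly 12.56, 12.33, 12.41

# Effect size in Cohen's d style: the roughly 5-point mean difference
# in MADRS change scores divided by an averaged change-score SD.
mean_diff = 5.0
avg_sd = sum(sd.values()) / len(sd)
es = mean_diff / avg_sd  # about .40
```

Note that this uses the change-score SDs (about 12.4), not the baseline SDs, which is exactly why it yields .40 rather than the much larger figure one gets by dividing by a baseline SD of about 5.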
If there's much of a difference generally, and MMRM ESs tend to be larger, then do we need to change our general criteria for interpreting effects?

In other words, if trials that would yield a .30 ES using LOCF yield an ES of .50 with MMRM, does that mean we have suddenly increased the effect of treatment by two-thirds, or is this a statistical artifact?

Sorry, I wasn't intending to ramble on quite like that! If you've read this far, thanks. Please let me know if I missed something or if you think I'm way out in left field on this one.

CL Psych, January 5, 2007

Thanks for the comment. I'll look at the article again and then comment after I've checked my figures.

CL Psych, January 4, 2007

Really like your blog; I'm also an academic with a leery eye toward the pharmaceutical industry and its academic denizens. However, I'm pretty sure that your calculation of the effect size in the MMRM is incorrect. I'm not sure where you got the standard errors from, but assuming that the MADRS score standard deviations are similar to those at baseline (about 5), the effect size between groups at week 8, calculated in the traditional way (Cohen's d), would be about a 5-point mean difference divided by a standard deviation of 5, yielding an effect size of 1. MMRM is actually more conservative and less deceptive than LOCF in most instances: in general, people on placebo drop out earlier, and their last observation is carried forward even though their depressive symptoms will commonly lessen later, as is typical of the placebo effect.
MMRM, and the approach that Thase et al. took, didn't seem all that inappropriate to me; if anything, it was fairly conservative. Now, one could argue about excluding people with suicidal ideation from such studies, or about whether a 5-point difference is clinically meaningful relative to the side effects that would occur, but I don't think the stats are the source of concern here. Once again, though, I really enjoy reading your posts.

Anonymous, January 4, 2007
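Since LOCF comes up repeatedly in this thread, here is a toy sketch (my own illustration, with made-up scores) of how carrying the last observation forward freezes an early dropout at their last recorded value, which is the mechanism the last commenter describes for placebo patients whose symptoms would have kept improving:

```python
# Toy illustration of last observation carried forward (LOCF):
# one participant's weekly MADRS scores, with dropout after week 3.
scores = [30, 26, 24, None, None]  # None = missing after dropout

def locf(xs):
    """Fill each missing value with the last observed value."""
    out, last = [], None
    for x in xs:
        if x is not None:
            last = x
        out.append(last)
    return out

locf(scores)  # -> [30, 26, 24, 24, 24]
```

The participant is analyzed as if stuck at 24 for the remaining weeks; if placebo patients drop out early and would otherwise have continued improving, LOCF overstates the drug-placebo difference, which is the sense in which MMRM can be the less deceptive choice.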