Aaron Carroll also did me a favor and saved me a lot of work by compiling mortality rates for various cancers in G8 countries as another way to illustrate what is almost certainly closer to the real situation. The fact that he did it for years much more recent than the time period Philipson et al. studied was rather curious, of course, but maybe not so much so given that more recent statistics show most European countries achieving results much closer to those in the U.S. than they did 20 years ago. Be that as it may, Carroll points out that the U.S. is among the best in the world when it comes to breast cancer but not actually the best. Japan appears to be doing much better than the U.S. As a cancer surgeon, I will point out that breast cancer in Japan might be different, possibly due to lifestyle differences. In terms of other cancers, Carroll concludes that for cervical cancer, we’re in the middle of the pack; for colorectal cancer, we’re unequivocally doing the best; and for prostate cancer, we’re at the high end of the middle of the pack. The most interesting observation is that for lung cancer we are doing abysmally. The obvious excuse for that is tobacco smoking, but it turns out that the U.S. has one of the lowest rates of tobacco use of these countries, so that doesn’t explain it. As Aaron’s last cancer graph shows, when it comes to overall mortality from cancer compared to the other G8 countries, the U.S. is doing well but is not the best. As Aaron sums it up:
Not nearly where you’d like to see us. Because we don’t do as well with some of the more prevalent cancers, we wind up doing much worse overall when it comes to cancer mortality than you’d think. This is why, when some point to us having the “best” health care system, they focus on colon cancer or breast cancer, not on lung cancer. Overall, though, we’re not.
I can’t help but notice, too, that if you really want to compare countries with universal health care systems to the U.S. (and, let’s face it, that’s what this is really all about, trying to show that “socialized medicine” leads to “death panels,” health care rationing, and lower survival rates for deadly diseases like cancer), you really should include Japan in the mix. The problem, of course, is that Japan does a lot better than the U.S. in many areas. My point, however, is not to denigrate the U.S. healthcare system. It does quite well in some areas, not so well in others, and overall it’s very good but not spectacular, at least when we look at cancer mortality. The real problem is not that the U.S. system doesn’t deliver quality cancer care. Rather, the problem is that delivering that care in the U.S. is spectacularly expensive for the results it gets compared to other countries that spend considerably less.
Surprisingly (to me at least), there’s been some really good reporting that punctures the claims of this particular study. First, there’s this one, in which Steven Reinberg interviews Dr. Otis Brawley, the chief medical officer and executive vice president at the American Cancer Society. Dr. Brawley points out that, yes, overdiagnosis is likely the fatal confounder not accounted for by Philipson et al.
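To see what overdiagnosis does to survival statistics, here’s a toy calculation. The numbers are entirely made up for illustration and don’t come from any of the registries or papers discussed here:

```python
# Toy arithmetic showing how overdiagnosis inflates survival statistics
# (all numbers are invented for illustration).

# Country A: little screening. 1,000 clinically meaningful cancers are diagnosed,
# and 400 of those patients are alive at 5 years.
survival_a = 400 / 1000                    # 40% five-year survival

# Country B: aggressive screening finds the same 1,000 meaningful cancers
# plus 500 indolent ones that would never have caused harm. All 500
# overdiagnosed patients are, of course, alive at 5 years.
survival_b = (400 + 500) / (1000 + 500)    # 60% five-year survival

# Deaths are identical in both countries (600), so cancer mortality is unchanged,
# yet Country B's survival statistic looks dramatically better.
print(f"Country A 5-year survival: {survival_a:.0%}")
print(f"Country B 5-year survival: {survival_b:.0%}")
```

Nobody in Country B lives a day longer; the statistic improves only because harmless cancers were added to the denominator.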
More impressive is a Reuters article by Sharon Begley, in which she explains very well why this study doesn’t show what Philipson et al. conclude that it shows. Also, unlike some other posts and articles that I’ve seen dealing with this study, Begley doesn’t suggest that Philipson et al. completely ignored lead time bias. In all fairness, I have to state unequivocally that they didn’t. The problem with their analysis is that they made a highly unconvincing argument, using mortality statistics, for why they don’t think lead time bias was a major confounder of their results. In fact, they went through some rather amazing contortions to try to justify their approach of focusing on survival statistics instead of mortality rates, even to the point of including an online supplement in which they examined mortality rates in the various countries included in their study. Here’s the problem with that argument, which is summed up very well in Begley’s article:
The Philipson team acknowledges that survival data can be misleading. They justify their approach, however, by saying that because deaths from cancer as a percentage of a country’s population fell faster in the United States than in 10 countries in Europe from 1982 to 2005, the higher U.S. survival “suggests that lead-time bias did not confound our results.”
Some experts in cancer statistics were not convinced.
“Why do the authors use the wrong metric – survival – in the analysis and then argue that the right measure – mortality – provides corroborating evidence?” asked Welch. “As long as your calculation is based on survival gains, it is fundamentally misleading.”
Indeed. I found that very curious myself, particularly how the justification was buried in an online supplement, rather than described in the text of the paper itself. It makes me wonder if it was something the authors cooked up to justify themselves after peer reviewers started hammering them on the issue of lead time bias. It wouldn’t surprise me in the least if that were the case, although, again in all fairness, it might not be. Similarly, their lame argument that they chose increases in survival as their metric because it would allow them to compare each country to its baseline made me laugh. Do they seriously believe that following mortality trends over time wouldn’t allow them to compare each country to its baseline? Is it just me? Am I alone in finding such an argument … unconvincing?
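For readers who want to see why survival gains are “fundamentally misleading,” here’s a minimal sketch of lead time bias. It uses a single hypothetical patient with invented ages, not anything from the actual study:

```python
# Hypothetical illustration of lead time bias (numbers are invented, not real data).
# A patient's tumor arises at age 60 and, regardless of when it is found, kills her at age 70.

age_at_death = 70

# Without screening, the cancer is found from symptoms at age 66.
age_dx_symptoms = 66
survival_no_screen = age_at_death - age_dx_symptoms   # 4 years: fails "5-year survival"

# With screening, the same cancer is found at age 63. Nothing about the
# disease course changes; she still dies at 70.
age_dx_screen = 63
survival_screen = age_at_death - age_dx_screen         # 7 years: counts as a "5-year survivor"

print(f"Survival without screening: {survival_no_screen} years")
print(f"Survival with screening:    {survival_screen} years")
print(f"Age at death in both cases: {age_at_death}")
# Five-year survival "improves" from 0% to 100% for this patient,
# yet mortality (death at age 70) is completely unchanged.
```

Earlier diagnosis starts the survival clock sooner, so measured survival rises even when no one lives any longer, which is exactly why mortality, not survival, is the metric that matters for this kind of cross-country comparison.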
In reality, there are two ways to study how well different countries are doing in terms of cancer care. One way is, as mentioned several times in this post, to focus on cancer mortality. The other is much more difficult, in that it involves comparing stage-specific survival rates and, for cancers with screening programs, survival rates for screen-detected cancers versus survival rates for all cancers. The latter analysis is very difficult to carry out, given that not all countries have good registries with cases properly stratified by stage, and it could only compare countries cancer by cancer, not across all cancers overall. Also, stage definitions change over time, and such an analysis would have to take those changes into account, which is no easy adjustment. Yet Philipson et al. chose to do neither of these things; indeed, they picked the very metric for which confounding factors, such as lead time bias and overdiagnosis, tend to be the most problematic.
I wonder why.
More importantly, I wonder how this study ever passed peer review. One would think that Health Affairs would have at its disposal a cancer epidemiologist who understands overdiagnosis, lead time bias, and length bias to tap as a peer reviewer. I guess not.