Does Boston have too many clinical trials?
A biotech exec and I once met with a top academic hospital in Boston to ask if they’d recruit patients for his company’s upcoming cancer drug trial. During our chat, we learned that this center had over a half dozen other trials running or in the queue with virtually the same inclusion and exclusion criteria as ours.
And that got me wondering … would there be the same degree of competition for patients in Baltimore, Baton Rouge, Birmingham, or Buffalo? In other words, does Boston have many more trial seats per capita compared with other cities? And if it does, should we care?
The “should we care?” aspect is a mixed bag. From a drug developer’s perspective, concentrating trial seats in cities already overrun with studies could increase the time and cost of getting a drug to market. Some of that handicap might be offset if centers in the over-trialled cities have more experience and execute more efficiently – but in that case, you’d want to be pretty darned sure that those benefits outweigh the negative effects of the seat/population mismatch.
You could also imagine that factors besides population might affect companies’ choices of clinical study sites. For example, cities with higher health care density or more academic centers might have more than their fair share of trial seats, which would be somewhat logical. Or – far more worrying from an ethical perspective – biopharma trials might tend to be run in cities where people are richer or whiter.
In a new paper in JAMA Network Open, Yevgeniy Feyman, Frank Provenzano, and I explored how population and these other factors are related to the number of trial seats in urban areas. To do that, we annotated and curated information from clinicaltrials.gov, and combined it with data from the U.S. Census Bureau and other sources into a tool we call TrialHunt. For this paper, we focused on industry-supported studies (through Phase 3) active in 2016 across the 171 combined statistical areas (CSAs) in the U.S., each of which comprises at least one city plus the surrounding area that is linked to it socially and economically.
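For readers who like to see the mechanics, here’s a rough sketch in Python of the kind of data wrangling this involves: aggregating curated trial records by CSA and joining them to Census population figures. To be clear, this is not the actual TrialHunt pipeline, and the file names, column names, and phase labels are hypothetical placeholders.

```python
# Minimal sketch (NOT the actual TrialHunt pipeline) of combining curated
# ClinicalTrials.gov records with Census population data by CSA.
# All file names, column names, and phase labels below are hypothetical.
import pandas as pd

# Curated trial records: one row per trial site, already mapped to a CSA
trials = pd.read_csv("curated_trials_2016.csv")
# assumed columns: nct_id, phase, sponsor_type, csa_code, planned_enrollment

# Census CSA populations (e.g., from ACS estimates)
census = pd.read_csv("csa_population_2016.csv")
# assumed columns: csa_code, csa_name, population

# Keep industry-sponsored studies through Phase 3
industry = trials[
    (trials["sponsor_type"] == "Industry")
    & (trials["phase"].isin(["Phase 1", "Phase 2", "Phase 3"]))
]

# Aggregate trial seats (planned enrollment) per CSA and join to population
seats_by_csa = (
    industry.groupby("csa_code", as_index=False)["planned_enrollment"]
    .sum()
    .rename(columns={"planned_enrollment": "trial_seats"})
)
merged = census.merge(seats_by_csa, on="csa_code", how="left").fillna({"trial_seats": 0})

# Seats per 100,000 residents, for eyeballing apparent over- or under-trialling
merged["seats_per_100k"] = merged["trial_seats"] / merged["population"] * 1e5
print(merged.sort_values("seats_per_100k", ascending=False).head())
```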
The bottom line: population size explains 87% of the variation in industry-supported trial seats. Adding race, income, NIH funding, and number of hospital beds to the model didn’t boost its explanatory power.
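To make that “87%” concrete: it’s the R-squared from regressing trial seats on population across CSAs, and the second claim is that adding the other covariates barely moves that number. Here’s a hedged illustration of that kind of comparison using statsmodels; it is not the paper’s exact specification, and the covariate columns (pct_white, median_income, nih_funding, hospital_beds) are assumed to exist in the hypothetical `merged` table from the sketch above.

```python
# Hedged illustration of the regression logic (not the paper's exact model).
# Assumes the hypothetical `merged` DataFrame from the previous sketch,
# plus assumed covariate columns: pct_white, median_income, nih_funding, hospital_beds.
import statsmodels.formula.api as smf

# Population alone
base = smf.ols("trial_seats ~ population", data=merged).fit()
print(f"R-squared, population only: {base.rsquared:.2f}")

# Population plus the other candidate predictors; if R-squared barely changes,
# those factors add little explanatory power beyond population.
full = smf.ols(
    "trial_seats ~ population + pct_white + median_income + nih_funding + hospital_beds",
    data=merged,
).fit()
print(f"R-squared, full model: {full.rsquared:.2f}")
```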
So overall, we didn’t detect a large over- (or under-) trialling problem across U.S. urban areas. Sure, some CSAs (including the one containing Boston) have more seats than one might have expected from the number of residents, but population explained an impressively high fraction of the overall variability in study seats per CSA. In addition, it’s comforting that we didn’t detect any significant role for race, income, hospital beds, or academic funding in explaining the locations of industry-sponsored trial seats in and near urban centers. (Obviously, our study doesn’t address the separate problem of low trial access for the 20 percent of Americans who live in more rural regions.)
On the other hand, these are aggregate data across all diseases and companies, so it’s likely that larger mismatches would emerge when one looks at particular illnesses and/or sponsors, and those cases bear closer study. If a company consistently favors Boston over Baton Rouge for its lung cancer studies, for example, that might reflect a conscious tradeoff between experience, execution capabilities, and other positive factors on the one hand and competition for patients on the other. But maybe the benefits are being overstated, and the downsides under-appreciated. And if a certain urban area is systematically under-trialled by most of the companies developing drugs in a particular indication, it might be worth asking why, and whether it’s a missed opportunity. If you’re interested in those issues, so are we! Please read more about our TrialHunt tool, and drop us a line.
A huge shout-out to Yevgeniy Feyman and Frank Provenzano for their work on TrialHunt and the research discussed in this post. Please see the full article for additional details, including a link to the underlying data.