Red flags in biotech press releases
This post is a sample of Pharmagellan’s free newsletter on biotech clinical trials. So if you like it, join thousands of biopharma pros who get analyses and case studies like this delivered 2-4 times per month. Sign up here.
There's nothing like a tweet from Adam Feuerstein about a small biotech’s PR shenanigans to get me riled up. (And on a Monday morning, no less ... thanks, Adam!)
I'll discuss this example in a moment — but first, I'll review the most common red flags in PR related to clinical study results (from my book on analyzing biotech clinical trials):
Any top-line statement except "met [or failed to meet] primary endpoint" — There is only one acceptable way to headline a clinical trial report. Period. "Superior response", "numerical advantage", "trend toward significance", "clinically significant" — those phrases should all make you suspicious that something shady is afoot.
Misdirection toward non-primary endpoints — When I see a headline touting a positive result that doesn't explicitly say it was the primary endpoint, my immediate next stop is clinicaltrials.gov to figure out what the primary endpoint actually was. In most cases, absence of the word "primary" in the top-line report is highly suggestive that the study whiffed on its main goal.
Over-emphasis on positive results in a subset of patients — A classic biotech spin move is to focus attention on a subset of patients, usually identified after the fact, who managed to eke out a response with P<0.05. If a subgroup is highlighted in the press release's lead sentence, that's often a bad sign.
Non-standard statistical methods — It's impossible to list all the possible shenanigans here, but one-sided P values and non-ITT (intention-to-treat) analyses are highly irregular and usually mean the study failed using conventional analytic approaches. (See the quick sketch right below for how the one-sided-P trick works.)
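To make that one-sided-P trick concrete, here's a minimal sketch in Python with purely hypothetical numbers, assuming a simple z-test on a treatment-vs-placebo difference. Because the test statistic's distribution is symmetric, the one-sided p-value in the sponsor's preferred direction is exactly half the two-sided one, which is how a near-miss quietly becomes a "win":

```python
from scipy.stats import norm

# Hypothetical example: the effect estimate lands 1.75 standard errors above zero.
z = 1.75

p_two_sided = 2 * norm.sf(z)  # ~0.08: misses the conventional 0.05 bar
p_one_sided = norm.sf(z)      # ~0.04: same data, now "significant"

print(f"two-sided p = {p_two_sided:.3f}")
print(f"one-sided p = {p_one_sided:.3f}")
```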
With those pointers in mind, let’s return to the example that got me riled up. Here's the key excerpt from the Q2-2022 update from Clene (CLNN):
Reported topline results from the Phase 2 VISIONARY-MS clinical trial with CNM-Au8 that met the primary and secondary endpoints of LCLA and mMSFC compared to placebo over 48 weeks in a modified intent to treat (mITT) population.
— Primary endpoint: LCLA letter change in the clinically affected eye (least squares [LS] mean difference, 3.13; 95% CI: -0.08 to 6.33, p = 0.056)
— Secondary outcomes:
• mMSFC mean standardized change (LS mean difference, 0.28; 95% CI: 0.04 to 0.52, p = 0.0207)
• mMSFC average rank score (LS mean difference, 13.38; 95% CI: 2.83 to 23.94, p = 0.0138)
• Time to first repeated clinical improvement to Week 48 (45% vs. 29%, log-rank p=0.3991)
• CNM-Au8 treatment was well-tolerated and there were no significant safety findings reported.
— Results provide support to advance CNM-Au8 into Phase 3 clinical development.
As announced in February 2022, the trial was stopped prematurely due to COVID-19 pandemic operational challenges, limiting enrollment to 73 out of the 150 planned participants. Due to the limited enrollment, the threshold for significance was pre-specified at p=0.10 prior to database lock. The primary analysis was conducted in a modified intent to treat (mITT) population, which censored invalid data. The mITT population excluded data from a single site (n=9) with LCLA testing execution errors and the timed 25-foot walk data from one subject with a change in mobility assist device. The ITT results were directionally consistent with the mITT results, although the ITT results were not significant.
Clene has initiated a second cohort of the more severe non-active, progressive MS population in the REPAIR-MS Phase 2 clinical trial to confirm target engagement following the target engagement demonstrated in the first cohort of relapsing MS patients.
At first, everything looks good — "met the primary and secondary endpoints", woo-hoo!
But what's this we see a few sentences later? The P value was actually non-significant at 0.056? What's going on?
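(Quick aside: you don't have to take a reported P value on faith. Under a normal approximation, you can back it out from the point estimate and 95% confidence interval in the release. Here's a minimal sketch using the numbers above; the actual analysis was presumably model-based, so treat this as a rough consistency check, not a re-analysis.)

```python
from scipy.stats import norm

# Numbers from the press release: LS mean difference 3.13, 95% CI -0.08 to 6.33.
diff = 3.13
ci_low, ci_high = -0.08, 6.33

# Under a normal approximation, the 95% CI half-width is 1.96 standard errors.
se = (ci_high - ci_low) / (2 * 1.96)
z = diff / se
p_two_sided = 2 * norm.sf(abs(z))

print(f"SE ~ {se:.2f}, z ~ {z:.2f}, two-sided p ~ {p_two_sided:.3f}")  # roughly 0.056
```

The arithmetic hangs together just fine; the problem isn't the math, it's the framing.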
Aaaaaand there it is: a litany of caveats, excuses, and misdirections. Some of them are "greatest hits," but there are a few less common ones here as well that bear mention:
MOVING THE BAR: Yes, it's unfortunate that the trial ended before recruiting its target sample size, and that means the study was only powered for a much larger difference than originally intended (the sketch after this list gives a rough sense of how much larger). But that's not a free pass to move the goalposts for statistical significance. It’s fine to use a laxer P value cutoff than P<0.05 for decision-making about future R&D, but it’s frankly misleading to tout that to the public as a “positive” result.
MODIFIED ITT ANALYSIS: Excluding some patients who were treated with the study drug or placebo isn't *always* the kiss of death for a trial. But as noted above, it's a definite red flag, and typically indicates that the sponsor is grasping for a significant P value after a conventional analysis didn't deliver one.
OPERATIONAL SNAFU: The additional information about the mITT population isn't exactly comforting. How did the company manage to select a study site that was incapable of rigorously assessing the primary endpoint?! If you're in a less-than-charitable mood, that might call the team's operational competence into question, and cast a shadow over the entire trial (and any subsequent ones).
"DIRECTIONALLY CONSISTENT": This phrase belongs in the same category as "trending toward significance". Its main purpose here is to distract our attention from the main finding: the trial failed.
The disappointing thing here is that if Clene had just reported the truth, clearly and without spin, they might have both preserved their credibility and made an at least somewhat convincing argument for progressing into Phase 3. Here's an example of what the company *could* have said:
"The Phase 2 study failed to meet its primary endpoint of LCLA letter change in the clinically affected eye (least squares [LS] mean difference, 3.13; 95% CI: -0.08 to 6.33, p = 0.056). But despite the non-significant results, we believe several factors support the underlying hypothesis. Due to COVID-19 operational challenges, the trial only recruited 73 out of 150 planned participants, which meant it was only powered to detect an extremely ambitious improvement in LCLA. In addition, we determined after database lock that a trial site that contributed almost 15 percent of the recruited patients had execution errors with LCLA testing that compromised the final results. Taking both of these factors into account, we believe these data support running a Phase 3 study with similar design, with the addition of more stringent study site vetting and training protocols to ensure high-quality data capture."
(Let’s set aside for now the question of whether or not those are actually convincing arguments to advance the program into Phase 3 — although interested readers should check out this great NEJM piece on interpreting “negative” studies (open access).)
In summary, there is no denying that when given the choice between issuing a spin-free report and intentionally obscuring the findings, Clene took the low road — and by doing so, I'd argue that they significantly increased what I call the "sponsor-dependent risk" of the program. (I discussed this concept in a blog post on assessing clinical trial risk.)
I don't know the company's management at all, but I can tell you that in many due diligence analyses I've done on biotechs for investors and potential partners/acquirers, the credibility and abilities of the leadership team have had a big influence on the appetite for an investment or deal. And shenanigans like these often factor into the final assessment.
So if you're evaluating a biotech trial and you see any of the PR red flags I listed above, be concerned — not just about that particular study, but also about future ones, as well as the company's overall judgment and competence.
And if you're a biotech exec or aspire to be one in the future — please, please, please, don't pull this crap. Spinning your PR when a trial fails may give you a short-term stock bump fueled by rubes who don't know any better, but in the long term, it's credibility-destroying and frankly embarrassing.
DID YOU FIND THIS USEFUL? If so, subscribe to Pharmagellan’s free newsletter on biotech clinical trials, and join thousands of biopharma pros who get analyses and case studies like this delivered 2-4 times per month. Sign up here.