Hallmarks and Principles of Translational Research Success: Free Chapter from the Book “Cures vs. Profits”


[This is a chapter from my book, “Cures vs. Profits”, published in 2015 (World Scientific). It is the chapter in which I coin the term “Shamwizardry”. Looking back, it now seems like the most important chapter in the book. Enjoy, I hope it helps someone do better science. – JLW]

In researching the topics for this book and getting into the details of the studies that support its important findings, I noticed a common theme among the studies that can be considered the most successful translational efforts. While the entire scientific endeavor is laudable, the most notable studies tend to share the following characteristics.

1. They asked important questions; more specifically, they addressed a problem important to society. Implicit in this is that they recognized the value of their findings to society, and as a result they had no difficulty communicating the relevance of their effort.

2. The investigators were keen observers of events, patterns and trends around them, often looking to nature for inspiration.

3. They spent time thinking about the problem, and were not hesitant to “search for” correlations, and conduct observational studies to help identify plausible, important new research directions. When faced with barriers, they persisted — sometimes for a decade or more.

4. They stated a clearly translational hypothesis.

5. They clearly spelled out the next translational phases in easy-to-comprehend and logical terms.

6. They did more than confirm existing studies. They paid more than “lip service” to the true translational potential; as a corollary, they were dedicated to the clinical problem at hand from question to answer.

7. They tended to ensure accuracy and precision of measurements.

8. They relied on the proper calibration of measurements (avoiding biases).

9. They took steps to balance variables across clinical groups, avoiding confounded study designs, and took pains to ensure that control groups were appropriate and relevant to the study.

10. They were not afraid to try an alternative approach, counter to mainstream accepted practices.

11. They readily examined their problem at a variety of scales, from the molecular level to clinical symptoms.

12. They were careful to interpret the results of trials properly, and did not over-interpret their results beyond what was supported by the data from their studies.

13. They paid close attention to, and adhered to, complete diagnostic criteria for the patients involved.

14. They refused to engage in activities that could resemble, or lead to, actual conflicts of interest. If they held a financial interest in the outcomes of their science, they worked even harder to protect the validity of their science, as a matter of principle. Thus, the “profit pressure” reinforced their objectivity — they would not imagine jeopardizing their long-term prospects with shortcuts or with leaps of faith in interpretation.

15. They prepared themselves with strong observational data first, allowing their experiments to rule out applications where the treatments or procedures seemed ineffective.

16. They remained ethical, at all costs — they never cheated.

17. They were forthright in reporting the complete set of results, including those that may be counter to their current understanding.

18. As a result, they did not mislead themselves, and others, by presenting only the most positive results.

19. They did not abuse their colleagues by having multiple data analysts analyze their data and then “cherry-picking” the analysis that made their results “look” best.

20. They asked the right types of question(s) at the right stage(s) of translational research and did not confuse, for example, discovery or research and development with commercialization. Compartmentalization of these activities allowed them to conduct properly focused activities leading to the appropriate types of successes at the appropriate stages.

21. They tended to see barriers to translation as opportunities for well-applied effort.

I am hopeful that communicating the details of the pitfalls of research that are preventing a greater number of successes will motivate people from all walks of life to demand better from the regulatory process, from scientists, from corporations and from themselves. If more people sought to enroll in clinical trials of new treatments for whatever afflictions they suffer, sample sizes would be larger. If people got involved and demanded more money for the NIH budget, a greater number of larger studies could be afforded, and critical follow-up studies could be conducted. But the research community, and the regulatory community, must get their act together before confidence in their enterprise can exist. Headlines are filled with contradictory findings from nearly identical studies, and just recently (Feb 2015) the FDA reversed its decades-long position on the ills of dietary cholesterol. What is the American public supposed to think? How are they supposed to be able to sort this out? Headlines also include tales of academic misconduct and research fraud. Institutions must do all they can to weed out the cheaters, and should not tolerate abuses of conflicts of interest. Responsible conduct on the part of individual research scientists is needed.

Hallmarks of Shamwizardry

There are plenty of doctors, nurses, teachers and people who work for drug companies who are in biomedical research for the right reasons: to reduce human pain and suffering. Medicine is still a noble cause. I started this book project open-minded, with the goal of finding the best examples of successes in translational research I could find. I did it for my family, my friends, and for myself. In writing Ebola: An Evolving Story, I discovered that so many things went wrong, and were wrong with the national and international institutions charged with handling epidemics such as Ebola, that I found myself wanting to read about, learn about and share the positive aspects of modern biomedical research. In that search I, inevitably perhaps, found both positives and negatives. Just as I learned about the hallmarks of success, I also learned something critical about the source of the negatives. There is a pattern: there are hallmarks of swindlers and cheats who are out for profit regardless of, or in some cases oblivious to, increasing human pain and suffering. Together, these attributes fall into a category of behaviors and dispositions that I call “Shamwizardry.”

1. You will like them. At first. They tend to be highly charismatic, and tend to make you feel as though they are doing you a favor.

2. Abuse of positions and conflicts of interest. They tend to have serious conflicts of interest, and have no qualms about acting in ways that place themselves in positions where they can personally benefit from those conflicts. Some tend to be so bold as to joke about their conflicts of interest, as if that provides them with a pass. It does not.

3. One-sided arguments. They will be critical of the developments with which their alternative procedure or treatment is in competition, but they will rarely, if ever, discuss or emphasize the limitations of their own options. They will not cite any literature that does not support their one-sided view. This is not how science works.

4. Negative appeals to emotion and other distractions. When there are no data to support their criticisms of their targets, they will ruthlessly resort to negative emotional appeals. Or they will distract with non-sequitur information that draws attention away from the main thrust of the benefits of the new technique or treatment. Or they may create a false dichotomy in situations where better understanding comes from considering a continuum, or the situation-dependence of conditions.

5. Willingness to resort to spite and condescension. When data exist that are counter to their biased view, they will resort to ad hominem (personal) attacks, coming close to, or even going so far as, calling a person’s character into question. Some go so far as to systematically destroy a person’s career. My advice to anyone experiencing this kind of treatment? Meet with them, but never alone; ask if it is OK to take notes; get everything they offer in writing and document everything. Talk with your friends and family about their doings, but do not discuss it with colleagues (don’t gossip).

6. When everything is on the line, they will lie. Actually, shamwizards will lie simply because there is a slight breeze. These individuals are not interested in promoting real understanding, but they will willfully and woefully misrepresent the truth, especially in highly technical areas where they think they can get away with it.

7. If allowed to fester, the Shamwizard will become a Tyrant. Researchers and others in the workplace around the Shamwizard will fail to thrive; they will do only the bare minimum to get through their work week; they will not be inspired to do more than they are asked. Many co-workers will appear to want to keep their heads “below the radar” so as to avoid being singled out. Many will also be stressed if they are asked to participate in unethical research practices, and may remain quiet (and thus complicit) for years. Shamwizardry is a form of tyrannical quackery in which perfectly good and valid options in medicine and medical research are attacked and dismantled by over-exaggeration of their potential limitations. Shamwizards desperately want to control the way you think, so that you believe what they want you to believe. As a result, they lead people to erroneous conclusions. Sooner or later, however, science catches up with them. The physicist Max Planck was an advocate of the truth; he is famously quoted as saying, “Truth never triumphs — its opponents just die out.”

I, for one, am not willing to wait for them to pass on to see progressive change in research back toward pure and applied science. We saw shamwizardry in the promotion of radiation over Coley’s toxins; we saw it in the turfing of Dr. Gretchen LaFever Watson’s community behavioral counseling program as an alternative to off-label amphetamine treatment of ADHD; and we saw it in local ad hominem attacks on an excellent surgeon by a local competitor who accused him of promoting robotic surgery just to take a bigger share of the available market. What is remarkable to me is that I did not expect to see this common thread emerge. All but one chapter of this book was written before I began to see the common themes, but they are undeniable.

I have spent 20 years optimizing what should have been straightforward options for the analysis of large genomic, proteomic and genetic data streams. I have helped many people, and have met a few detractors. Some big mistakes were made by early adopters; these mistakes caught on, and have been passed on (such as the use of ratios (fold-change) to express differences, and the continued use of the log-rank test in survival analysis). Hopefully, the next generation of researchers will benefit from the knowledge that this effort has provided.

Similarly, advances in our knowledge of the effectiveness of clinical trials are being made. A just-so story told long ago, about measuring things twice with the same instrument with a made-up infinite amount of data, has misled most of the field of biomedicine into thinking that no useful information can exist in within-arm comparisons. This has affected the field immensely in terms of acceptable clinical trial designs, and yet nearly everyone seems to have missed that the power lies in the within-arm comparisons due to paired samples, and that covariates caused by the main effect are relevant to the main null hypothesis and should not be factored out with ANCOVA. Some will deny this vehemently and try to claim that I do not quite understand the principles of ANCOVA, nor how it is used to isolate variances. I do understand, quite well; what they do not understand is that the prescription in ANCOVA to use the interaction term, if it is significant, instead of the main term is post-hockery at its finest — it fails to identify the source of the significance, which can exist within one arm or the other. The default should be within-arm analyses, with ANCOVA if necessary, depending on empirical demonstration of incidental covariance. I have had statisticians tell me that they are not interested in the slightest in outcome-to-baseline comparisons. The fairy tale originated by Cronbach and Furby (1970) and Lord and Novick (1968) automatically reduces valid within-arm comparisons to ashes in an utterly unnecessary manner. Also, the mere possibility of a confounding variable does not establish its existence, and if appropriate care has been taken to ensure no baseline differences (by multivariate matching, or by combining baseline groups), studies will also be more powerful for the cross-group comparison, and such studies should be considered more appropriately designed.
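To make the power argument concrete, here is a minimal simulation sketch, not taken from the book, in which each subject’s baseline and outcome share a stable subject-level component; the sample size, effect size, and variance components are assumptions chosen only for illustration. Under these assumptions, pairing baseline with outcome within the treated arm removes between-subject variation and detects the effect more often than an unpaired comparison of outcomes across arms.

```python
# Illustrative sketch (assumed parameters): within-arm paired comparison
# vs. between-arm comparison of outcomes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, n_sims, alpha = 30, 2000, 0.05
treatment_effect = 0.5            # true change from baseline in the treated arm
subject_sd, noise_sd = 1.0, 0.5   # between-subject vs. within-subject variation

paired_hits = between_hits = 0
for _ in range(n_sims):
    # Each subject has a stable baseline; repeated measures share that baseline.
    base_t = rng.normal(0, subject_sd, n)
    base_c = rng.normal(0, subject_sd, n)
    pre_t  = base_t + rng.normal(0, noise_sd, n)
    post_t = base_t + treatment_effect + rng.normal(0, noise_sd, n)
    post_c = base_c + rng.normal(0, noise_sd, n)

    # Within-arm: paired t-test of outcome vs. baseline in the treated arm.
    if stats.ttest_rel(post_t, pre_t).pvalue < alpha:
        paired_hits += 1
    # Between-arm: two-sample t-test on outcomes only.
    if stats.ttest_ind(post_t, post_c).pvalue < alpha:
        between_hits += 1

print(f"power, within-arm (paired):   {paired_hits / n_sims:.2f}")
print(f"power, between-arm (unpaired): {between_hits / n_sims:.2f}")
```

This sketch illustrates only the variance argument for pairing; it does not, by itself, address whether a within-arm change can be attributed to treatment without a concurrent control, which is the separate concern raised by the Cronbach and Furby line of work.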

In spite of the numerous negatives revealed in my research for this book, I found a number of extremely exciting findings and ongoing studies that are not quite at the “translational stage” of development. The progress being made by researchers at all levels — basic, translational, and clinical — is impressive. We can only expect to “get there” with such studies if we cast off the binds that tie us to old practices that are known to be flawed. Even promising results from studies are ignored by the FDA unless they come from “double-blinded, placebo-controlled, randomized, prospective” clinical trials. The FDA weighed in on the need for placebo control arms in tests of drugs to help fight Ebola during the 2014 epidemic before any trials were even conducted (Cox et al., 2014), on the grounds that patients in Guinea, Sierra Leone, and Liberia enrolled in the trials would receive best available care, so the results of any drug trial without a proper control arm would be confounded with changes in care for patients who received the treatment. Doctors Without Borders (MSF) proceeded to evaluate drugs without a placebo control arm, on the grounds that the drugs might help turn the tide of the epidemic, and that plenty of people were already dying without the new drugs. The need for a placebo may be evident when best care is, in fact, available, but it was not realistic of the FDA to think that best available care for Ebola would be available to the larger clinical population in which the drugs would be used; even providing best available care during the trials could therefore make their outcomes irrelevant to the general population.

There are some fairly simple fixes to numerous stages of biomedical research that cost nothing. I’m an expert in complex data analysis, and on my journey through over a hundred research studies, I discovered a few years ago that an extremely commonly used measure of differences between two groups (say, treatment vs. control) is so biased that its use is the equivalent of throwing away a large proportion of the data collected. This particular bias has hindered success in translational research in a most insidious way: those involved find it hard to believe that using a ratio of two measurements (treatment/control) is in any way misleading, because the ratio is used so widely that the claim cannot be true. Here we see a failure in the understanding of how science works. A philosopher of science once portrayed truth as the consensus belief of scientists, and held that change in science occurs via scientific revolutions. Under this Kuhnian model of science, changes in general understanding occur as paradigm shifts. This social aspect of science is widely appreciated as fact; however, consensus cannot replace objective, accurate assessment of specific knowledge claims, which is determined by our ability to measure what we hope to understand and our ability to construct critical tests of hypotheses. We often fail in the area of constructing the right test, which requires (among other things) an understanding of the impact of our decisions on how to represent the data. There are numerous serious biases that result from the use of some methods of analysis, such as so-called “fold-change” ratios instead of differences. If we represent the data in a biased manner (say, ignore all numeric measurements that start with “6” or “8”), how can we expect to see a full representation of the data and understand that which we have selected to study?
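As a toy illustration of the kind of representational bias described above (the numbers are made up and are not from any of the studies discussed), consider how ranking by fold-change and ranking by difference can disagree:

```python
# Hypothetical values only: fold-change vs. difference for three measurements.
control   = [0.2, 10.0, 50.0]   # made-up control-group values
treatment = [0.6, 15.0, 55.0]   # made-up treatment-group values

for c, t in zip(control, treatment):
    fold_change = t / c
    difference  = t - c
    print(f"control={c:5.1f}  treatment={t:5.1f}  "
          f"fold-change={fold_change:4.1f}x  difference={difference:5.1f}")

# The first pair has the largest fold-change (3.0x) but the smallest absolute
# difference (0.4): ranking by ratio emphasizes near-zero baselines, one way
# the ratio representation can distort or discard information in the data.
```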

References

Cox E, Borio L, Temple R. (2014) Evaluating Ebola therapies — the case for RCTs. N Engl J Med 371(25): 2350–51. doi: 10.1056/NEJMp1414145

Lyons-Weiler, J. (2015) Cures vs. Profits: Successes in Translational Research. World Scientific Publishing Co. Kindle Edition.


Original source: https://jameslyonsweiler.com/2018/04/10/hallmarks-and-principles-of-translational-research-success-free-chapter-from-the-book-cures-vs-profits/
