Listen to the Clinical Chemistry Podcast
M. Kellogg, C. Ellervik, D. Morrow, A. Hsing, E. Stein, and A. Sethi. Preanalytical Considerations in the Design of Clinical Trials and Epidemiological Studies. Clin Chem 2015;61:797-803.
Dr. Mark Kellogg is the Associate Director of Chemistry and Director of Quality Systems in the Department of Laboratory Medicine at Boston Children’s Hospital and an Assistant Professor of Pathology at Harvard Medical School.
This is a podcast from Clinical Chemistry, sponsored by the Department of Laboratory Medicine at Boston Children’s Hospital. I am Bob Barrett.
Clinical trials, epidemiologic studies, and most types of clinical research include the acquisition of biological samples to be analyzed either immediately after collection, or in the future, for biomarkers related to the study hypotheses. Tested biomarkers include those used to monitor subjects’ health, to detect harmful side effects, or to follow the impact of therapeutic interventions. These samples may also be used for biomarker discovery. Whatever the study goals may be, the quality of study outcomes will depend heavily on the quality of the samples obtained and their subsequent analysis.
Since the largest component of total error in the clinical laboratory is often associated with the preanalytical phase, it is probably safe to assume that the same conclusion applies to clinical trials and epidemiologic studies. Standardization of preanalytical variables is not a trivial matter; it is essential to ensure successful outcomes of these studies. Samples must be handled in an identical fashion at all times and in all locations, and procedures must be in place to avoid sample mix-ups. These are only a few of the challenges encountered during the preanalytical phase.
In the June 2015 issue of Clinical Chemistry, five experts with vast experience in national and international biomarker studies provided their insights into the issue of preanalytical variables and how these can impact the design of clinical and epidemiologic studies. That “Q&A” section was moderated by Dr. Mark Kellogg, from the Departments of Laboratory Medicine and Pathology, Boston Children’s Hospital and Harvard Medical School, and he joins us in this podcast.
Doctor, we know that how a sample of blood, urine, or any tissue was collected, handled, or stored can have a significant impact on the results reported out of a clinical laboratory. Is this also an issue with samples used in clinical trials or epidemiologic studies?
Yes, it definitely is, and it’s probably a bigger issue because it’s more difficult for us to detect compromised sample quality in a study. When we are dealing with patients, we can compare the data we have against that actual patient’s status, and if it’s inconsistent, then we can do investigations to determine whether it was a sample quality issue or whether the sample was collected incorrectly. When we are looking at data from a clinical trial or an epidemiologic study, we usually don’t have that patient information to compare against, so we are sort of shooting in the dark trying to know whether this was the right number or the wrong number.
And then on top of that, another factor impacting clinical trials is the number of sites. Within a hospital we may have a few hundred people collecting samples, but usually with one SOP that’s consistent for the institution. When you have a clinical trial, you may have multiple sites across the world, and making sure the processes and training of all of those individuals are consistent becomes difficult. The more sites you have, the more variation you will have in each of those processes, and so that adds even more to the potential issues with sample quality.
And then on top of that, a lot of studies will have a central lab perform the processing, in which case samples may be shipped from all over the world, and so they will have different exposures in terms of time and temperature, or even things like atmospheric pressure when they are shipped on planes across different continents.
So the variables in clinical trials and epi-studies related to sample quality are huge compared to what we deal with within the hospital. We know sample quality has an impact within the hospital, and it definitely has an impact on the outcomes of trials and studies too.
Well, oftentimes we will see that results from one study disagree with results from another, creating confusion in the public, even among health care professionals. Do you think the quality of samples plays a role in these disagreements?
A good example from the Q&A, which Dr. Morrow discusses, is the soluble CD40 ligand marker, where lots of studies saw disparate results. The differences actually turned out to be differences in how the samples were centrifuged, nothing to do with differences in the disease states being studied.
And there are a few other examples in the literature, from analytes like leptin or ghrelin, and lots of the markers of oxidation also fall into this category. Unfortunately, there hasn’t been a lot of research or literature investigating this in detail, just more isolated results where investigators found these things and then published them.
And I think another issue there is that lots of times when these things happen, those studies simply don’t get published because it’s negative data. So when there are disagreements, we don’t know as much about those issues: are they related to the samples, or was it a true difference because of something else in the study design? But there is definitely enough data to support the fact that sample quality plays a big role in those disagreements between studies.
And then, on top of that, in many studies they don’t know the quality of their samples; they don’t know the history of their samples. So if that’s unknown and there are any disagreements, you have to believe sample quality might be part of the reason.
Seems like a pretty big issue to tackle; are there tools that investigators can use to help address preanalytical variables and create improved plans for their studies?
I think the “too big to tackle” concept is why investigators will ignore this component of study design. But like any project, if you simply break it into small steps and tackle one thing at a time, then that hurdle is not as big. Several of the contributors to the Q&A point out that simple advance planning really is the key tool, nothing fancy. You need to pull in the laboratory professionals who have the expertise to see the potential pitfalls in sample collection, handling, and storage. Having a solid standard operating procedure and a good plan is probably the key tool.
A couple of other resources can help with that planning process. The Biospecimen Reporting for Improved Study Quality (BRISQ) report, in my opinion, should be required reading for investigators who conduct clinical trials or epidemiologic studies. It is a very well written report, it is easy to understand, and it has several good examples and lots of tips to help investigators plan. So between just having a plan and using the BRISQ report, those two tools will go a long way toward improving sample quality.
And then, as I described in a little more detail in the Q&A, the National Cancer Institute’s Biorepository & Biospecimen Branch has a best practices document and also several standard operating procedures that you can use as templates for studies. But again, they all go back to that simple concept: plan in advance, be ready.
Well doctor, there are hundreds of biorepositories and companies that offer to provide clinical samples for research. What are some key questions an investigator should ask before purchasing and using samples from these resources?
There is really just one key question: can that provider supply the complete history of the sample, all the way down to the date and time of collection, where it was collected, how it was collected, and how it was processed? Was it aliquoted immediately after collection and then frozen in aliquots, or was one sample opened several times and frozen and thawed again prior to being shipped off to the investigator for use?
Also things like how it was centrifuged, or whether it was refrigerated or frozen; all of those factors are important, and the BRISQ report points out that those are the things that are missing in most studies. So asking those vendors and companies whether they have that data is the key question.
And then also, depending on the type of study you are doing, knowing the patient information. Actually, that’s the question most investigators ask first, like “I am looking for patients who have had a heart attack or have had hypertension,” but knowing just that is also insufficient. You need to know: were they smokers, were they taking other medications, did they have other diseases that might impact what you are trying to look for?
So the key question really is: what is the history of that sample? If they just tell you it was collected somewhere in the US from a patient who had a heart attack, that’s probably not enough information to really understand the quality.
Well finally, doctor, let’s put you on the spot here. Would you care to share an example where your own study design was less than optimal and impacted study outcomes?
Lots of them. Probably the best one is related not to the clinical trial or epi-study design itself, but to developing an assay for a clinical trial. We spent almost a year developing an assay for oxysterols for a study that investigated whether those markers were able to predict atherosclerotic plaque instability. We got that assay up and running and we started collecting samples, and we rarely found evidence of any of those analytes. It turned out that the samples needed to be frozen within minutes of collection. Had we thought about that up front, we would not have wasted an entire year developing an assay.
So I think one of the lessons we learned is that early in the planning process we have to think about that sample issue and not just assume it’s going to be easy when we get to it. When I work with investigators trying to plan studies, I always start with the “garbage in, garbage out” concept: I may have the best assay in the world and a whole bunch of PhDs doing the work, but if I get a garbage sample, I am going to give them garbage data out the other side.
And then a lot of investigators are also in the “let’s do things quickly” mindset, and I point out that the wrong answer fast is still the wrong answer. So we can’t assume that sample quality is going to wash out in the statistics or isn’t something we have to worry about; we really have to consider that variability just like any other variable in the study.
Dr. Mark Kellogg is the Associate Director of Chemistry and Director of Quality Systems in the Department of Laboratory Medicine at Boston Children’s Hospital and an Assistant Professor of Pathology at Harvard Medical School. He has been our guest in this podcast from Clinical Chemistry. I am Bob Barrett.