Working with the Bioanalytical Method Validation Guidance (BMV) in 2019


Here at the beginning of 2019 it seems fitting to update my personal thoughts on the status of regulated bioanalysis, including how we are adapting to the finalized FDA 2018 BMV Guidance, and to anticipate what may lie ahead. As expected, the FDA Guidance has generated healthy debate on the practices of bioanalytical method validation (BMV), primarily around compliance with guidance language. While consistency with health authority expectations is fundamental, most modern bioanalytical laboratories need to address a variety of assays that we can’t expect to fit under a single prescriptive doctrine. We are now experiencing the introduction of techniques that supplement the traditional chromatographic or ligand binding technologies in the bioanalytical laboratory. These can present formidable challenges when correlated against BMV Guidance language. We’ve heard this practical reality referenced in the discussions around the 2018 FDA BMV Guidance both online and at workshops and conferences. To me, this encapsulates the future and the challenge facing today’s bioanalysts. We have a foundation for regulated bioanalysis in BMV Guidance, but new and rapidly evolving challenges await us. We can expect further BMV guidance soon from other global initiatives (e.g. ICH M10), but I expect the need to generate reliable, decision-enabling data will increasingly rely on the validation design rather than conformance to the prescriptive direction of any single document.

So where does this put us today in terms of compliance with the BMV expectations of regulators, auditors and sponsors? The root of this question may imply that all the expectations are the same, but experience suggests otherwise. Frequently we hear of an inspection of one bioanalytical laboratory generating findings and observations that don’t correlate with another laboratory inspection. Of course, we don’t always know the context of such events, but this doesn’t prevent the industry from responding with efforts to avoid a reported ‘FDA 483’ or equivalent citation. We have also experienced observations arising after years of compliance with a given interpretation of regulatory language that can only be attributed to inspector viewpoint. Then there are the nuances of regulatory expectations that differ across the globe. These events combine to create uncertainty within the bioanalytical community and often translate into complexity in procedures and practice that is not value-added. While remaining current with regulatory developments and interpretations is crucial, it’s not a substitute for understanding the specific bioanalytical variables of a given method and the samples that will be assayed. Truly understanding a bioanalytical method starts with development of the assay, in effect making this phase of establishing an assay a necessary foundation on which to build the method validation that will make the assay ‘suitable for the intended use.’

Method Development

In my previous blog article, I noted that the FDA 2018 BMV Guidance lays out a compelling case for thorough method development. From personal discussions, this is consistent with the practices of experienced bioanalysts. While not considered a regulated phase, method development experiments should establish and proof-test the bioanalytical strategy intended for validation. Here is where the bioanalyst should draw on existing and supporting information regarding the analyte, matrix and criteria that will impact the actual study samples. It makes little sense to validate desired sensitivity with a 500 µL sample aliquot when the assay is intended to support a pediatric study. Likewise, in-matrix stability or non-specific binding (NSB) characteristics should be investigated before assay validation, and indeed before study sample collection. Thus begins another key aspect of optimized assay establishment: open communication across all parties. There should be effective and comprehensive dialogue between the sponsor and bioanalyst from method development onwards. I note this with particular reference to structures, known metabolites, existing stability information and other metadata associated with an analyte, which may not always be provided to the bioanalyst in the interest of confidentiality. A recent paper by GSK authors (H. Licea-Perez, C. Evans and S. Summerfield, Bioanalysis 11 (2) 85-101 (2019)) lays out a compelling case for using analyte and major metabolite structures to drive method development. The conclusions are consistent with the FDA Guidance recommendation of providing the bioanalyst with all available or known information to enable method development that will support a successful method validation.

I will continue to maintain that how the bioanalyst documents, reviews and archives method development data should be at the discretion of the bioanalytical laboratory, but should be sufficient to support the approach taken. In our own experience, regulatory inspectors can and do ask to see supporting method development data on occasion. I will also reason that the method development strategy itself should be adapted to the objectives of the assay. We hear and use the term ‘fit-for-purpose’ (FFP) when discussing the bioanalysis of biomarkers and endogenous compounds, as well as with reference to leveraging new technologies, but shouldn’t all regulated bioanalysis be FFP? We need to think about the purpose of a proposed urine or tissue assay, or even of early-phase drug investigations, rather than assume full BMV as a default. Appropriate assay customization that is still FFP offers tremendous opportunities for efficiency gains and resulting cost savings. And it all starts with, and is supported by, appropriate method development.

BMV Does Not Equate to GLP

As the assay moves into the method validation phase we trigger additional aspects of documentation, data review (including QC and QA), reporting and archiving. That is, we are conducting the work in accordance with the standards of Good Laboratory Practice (GLP). Note that I am not saying a validation is a GLP study; however, the GLP standards can extend to how we validate the equipment, instruments or software used and how we document the activities. It’s appropriate to note that we can do this independent of the actual validation experiments performed. That is, a bioanalytical method validation, whether full, partial or FFP, does not equate to a “GLP validation”. Since we can perform any bioanalytical method validation strategy under GLP standards, latitude to customize the overall establishment of the assay need not circumvent GLP regulatory standards or expectations. The FDA 2018 BMV Guidance supports FFP method validations and I see this as a welcome addition to the regulatory language. I expect FFP to significantly influence how we approach modern and future bioanalytical challenges. Fully validated assays (i.e. consistent with BMV) may be appropriate, but this does not mean that all assays established in a regulated bioanalytical laboratory need to meet such standards. When combined with FFP strategies, the efficiency and cost gains can continue through validation while still maintaining GLP standards of operation.

Don’t Fall for the Precision Trap

Any validated bioanalytical assay is predicated on establishing the key attributes of accuracy and precision. Recent discussions around biomarkers (particularly endogenous protein biomarkers) have raised the debate about relative accuracy as opposed to absolute accuracy. Of course, our bioanalytical assays are always relative…relative to the reference standard. Our ability to demonstrate absolute accuracy of an assay is dependent on the characterization and purity assessment of the reference standard. We typically have a good handle on drug reference material (either small molecule white powders or large molecule therapeutics), but we should still be cognizant of the purity assessments that support the standard and of other factors (e.g. water uptake by hygroscopic compounds) that may impact the accuracy of the bioanalytical data.
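To make the reference standard point concrete, here is a minimal Python sketch of one common convention for correcting a stock solution concentration for certificate-of-analysis (CoA) purity, water content and salt form. Every number, and the exact correction convention, is hypothetical; laboratories differ in how the CoA purity already accounts for salt and water, so your own SOP governs.

```python
# Hedged sketch: purity/salt/water correction of a stock concentration.
# All values below are invented for illustration only.
weighed_mg = 10.00       # reference material weighed out (as-is, salt form)
purity = 0.985           # CoA purity (fraction), hypothetical
water_content = 0.021    # fractional water by Karl Fischer, hypothetical
salt_factor = 1.117      # MW(salt) / MW(free base), hypothetical
volume_ml = 10.0         # final stock volume

# One common convention: correct the as-weighed mass to free-base content.
free_base_mg = weighed_mg * purity * (1 - water_content) / salt_factor
stock_mg_per_ml = free_base_mg / volume_ml
print(f"stock = {stock_mg_per_ml:.4f} mg/mL")  # vs 1.0 mg/mL uncorrected
```

Small individual corrections compound: here the "1 mg/mL" nominal stock is actually about 14% lower once purity, water and salt form are accounted for, which is exactly the kind of silent accuracy impact the paragraph above warns about.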

As for the samples themselves, renewed attention is being given to pre-analytical activities that may impact overall assay accuracy. The voiced concern here is “how can the bioanalyst be accountable for factors that could impact accuracy before a sample reaches the bioanalytical laboratory?” Well, in some respects we can. The earlier reference to stability and NSB potential tested in control matrix at the method development stage is one such example. Other pre-analytical variables present confounding challenges to the bioanalyst, including demonstrating conformance to collection protocols, stability in disease-state and patient/subject-specific matrices, and comprehensive NSB assessments that cover all sample exposures to surfaces from the point of sample draw or collection.

None of this is to suggest that bioanalytical data is inherently inaccurate, but rather that it’s prudent to be aware of potential impacts to the true accuracy of our data, which is not guaranteed simply by following BMV. Precision assessments are the bastion of the bioanalyst because we have full control over the variables that impact reproducibility. We can go to great lengths (and we do) to drive down precision statistics, and it’s comforting to stand behind single-digit %CVs. The warning here is to think as a good analytical scientist and not fall into the trap of having an assay with impressive precision statistics that is nonetheless inaccurate.
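The precision trap is easy to show numerically. In the sketch below the replicate values and nominal concentration are invented: the replicates are tightly clustered (sub-1% CV) yet the mean sits about 12% below nominal, so the assay is precise but inaccurate.

```python
# Precision (%CV) vs accuracy (%bias) for a set of QC replicates.
# Nominal and replicate values are illustrative only.
from statistics import mean, stdev

def pct_cv(values):
    """Percent coefficient of variation: 100 * SD / mean."""
    return 100.0 * stdev(values) / mean(values)

def pct_bias(values, nominal):
    """Percent bias of the mean against the nominal concentration."""
    return 100.0 * (mean(values) - nominal) / nominal

nominal = 100.0  # ng/mL, hypothetical QC level
replicates = [88.1, 87.6, 88.4, 87.9, 88.2, 87.8]  # tight, but all low

print(f"%CV   = {pct_cv(replicates):.1f}")            # ~0.3: superb precision
print(f"%bias = {pct_bias(replicates, nominal):.1f}") # ~ -12: inaccurate
```

Comforting single-digit %CVs say nothing about where the mean actually sits relative to truth; both statistics have to be examined together.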

Elephant in the Room: A&P Testing

If there is one resounding concern with the FDA 2018 BMV Guidance shared among the majority of bioanalysts, it is the recommendation to use fresh calibrators (calibration standards) and fresh quality control (QC) samples for all accuracy and precision (A&P) assessments. I’m not going to re-hash the whole debate, but I’ll maintain a position of demonstrating that QC samples are accurately prepared and then using this bulk QC preparation to test the method’s intra- and inter-run precision and accuracy across multiple runs (i.e. the A&P runs). Fresh calibrator preparations are justifiable, but to include QC preparation variance in the method accuracy and precision assessment confounds the very intent of that assessment, i.e. to determine the A&P of the assay itself. Of course, we want to know how accurate and precise an assay is in practical use, but that will come from the overall statistics of the pre-study and in-study validation runs.
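As a simple illustration (not a prescription from the Guidance), intra- and inter-run precision for one QC level across three A&P runs might be tabulated as below. The values are invented, and a real assessment would follow the laboratory’s statistical SOP (e.g. ANOVA-based variance components rather than the naive pooling shown here).

```python
# Sketch: intra- and inter-run %CV for one QC level across three A&P runs.
# Concentration values (ng/mL) are invented for illustration.
from statistics import mean, stdev

runs = {
    "run1": [98.2, 101.4, 99.7, 100.8, 97.9, 100.3],
    "run2": [102.1, 99.5, 101.0, 98.8, 100.6, 101.7],
    "run3": [96.5, 98.0, 97.2, 99.1, 96.8, 98.4],
}

# Intra-run precision: %CV within each individual run.
for name, values in runs.items():
    cv = 100.0 * stdev(values) / mean(values)
    print(f"{name}: intra-run %CV = {cv:.1f}")

# Inter-run precision: naive %CV over all replicates pooled together.
pooled = [v for values in runs.values() for v in values]
inter_cv = 100.0 * stdev(pooled) / mean(pooled)
print(f"inter-run %CV (naive pooled) = {inter_cv:.1f}")
```

Because all replicates come from one demonstrably accurate bulk QC preparation, statistics like these reflect the assay itself rather than run-to-run QC preparation variance, which is precisely the distinction argued above.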

More relevant to A&P assessments of small molecule and large molecule assays, but arguably less represented in BMV Guidance, are the practical implications of inter-lot matrix effects. Selectivity testing an assay with different lots of control matrix goes some way toward investigating potential impact, but depending on the design of the assay there may be concerns beyond simple interference checks. This is particularly true when using an analogue internal standard (IS). From the experience of our laboratory, the bioanalyst needs to be particularly attuned to the potential impact of inter-lot variables, hemolysis, lipid levels and disease state on any analyte A&P assessments whenever a stable-isotope labeled (SIL) internal standard is not available. So much so that I believe we cannot have the same confidence in bioanalytical LC-MS assays that rely on an analogue IS as we have when using a SIL IS. This is independent of any practical BMV experimental strategy. Again, this does not mean that we should never use an analogue IS. It just means such assays cannot guarantee the same level of A&P performance as a SIL IS based assay. That may be practically acceptable, and the degree of risk associated with using an analogue IS may be perfectly appropriate for the intended use of the assay. However, appropriate acceptance criteria should correlate with the use of an analogue IS, and we should avoid trying to ‘fit the BMV square plug into the FFP round hole’.

A Batch Equates to a Batch…Until it Does Not

When we first saw the request for batch performance assessments in the FDA draft BMV Guidance it induced some confusion and questions around its purpose. Then we heard the case of multiple solid-phase extraction (SPE) manifolds being used where one or more were defective but the overall acceptance criteria of a run were met. The potential for reporting inaccurate data from such a specific scenario is obvious, and maybe a valid reason not to employ conventional 12-24 port SPE manifolds for regulated bioanalysis. Arguments were made for and against the need for batch assessments and continue at meetings and conferences. However, extension to 96-well plate assays is clarified in the finalized FDA 2018 BMV Guidance and therefore warrants appropriate adoption, since such assay formats are quite common. The potential for differences in handling each 96-well plate, or batch, in a 2-plate run is a practical reality. Differences in volumetric liquid handling, contamination potential, vacuum draws, centrifugation and many other sample preparation steps could introduce plate-specific variance. By including a full complement of QCs on each plate, the ability to assess such inter-plate effects is relatively straightforward. The bigger concern is how this may extend to new technology developments where batch effects may be more difficult to discern. With the need for ultrasensitive bioanalytical assays of potent biologics, some complicated extraction-based methods are being used. Such approaches may present fundamental challenges to the batch criteria concept. Even considering the SPE manifold example, I’m not sure how you would practically batch test such an assay set-up. Ultimately, the validation of any bioanalytical method should support how the assay will be used to measure study samples. As we explore and employ new technologies and bioanalytical strategies, we should consider the batch test expectations now in effect with the FDA 2018 BMV Guidance and be able to defend the resulting data.
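For the 2-plate scenario, a per-plate QC check is straightforward to sketch. The nominal levels, acceptance threshold and QC data below are hypothetical, and the two-thirds rule shown is just one common 4-6-15-style interpretation; actual criteria belong in the validation plan and SOPs.

```python
# Hedged sketch: evaluate each plate (batch) of a 2-plate run on its own
# QCs, rather than relying only on whole-run acceptance. All data invented.

NOMINALS = {"low": 3.0, "mid": 50.0, "high": 400.0}  # ng/mL, hypothetical

def plate_passes(qc_results, tolerance=0.15):
    """True if >= 2/3 of the plate's QCs fall within +/-15% of nominal,
    with at least one passing QC at each level (one 4-6-15-style reading;
    adjust to your own SOP)."""
    passed = {level: 0 for level in NOMINALS}
    total = 0
    for level, value in qc_results:
        total += 1
        if abs(value - NOMINALS[level]) / NOMINALS[level] <= tolerance:
            passed[level] += 1
    ok_fraction = sum(passed.values()) / total >= 2 / 3
    ok_levels = all(count >= 1 for count in passed.values())
    return ok_fraction and ok_levels

plate_1 = [("low", 3.1), ("low", 2.8), ("mid", 52.0),
           ("mid", 47.5), ("high", 410.0), ("high", 392.0)]
plate_2 = [("low", 4.2), ("low", 4.5), ("mid", 68.0),
           ("mid", 51.0), ("high", 405.0), ("high", 398.0)]  # biased plate

for name, plate in [("plate 1", plate_1), ("plate 2", plate_2)]:
    print(name, "passes" if plate_passes(plate) else "fails")
```

In this invented example the pooled run might still meet overall criteria, but plate 2 fails on its own QCs, which is the inter-plate effect the full-complement-of-QCs approach is designed to expose.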

Co-medication Stability Concerns 

Beyond the limitations of fixed-dose combination studies and co-formulated compounds, I am still struggling with the FDA BMV rationale of “consider the stability of the analyte in the presence of other co-medications.” When we consider the complexity of endogenous compounds in a biological matrix (which we do test as a matter of course with matrix-based stability QCs), a unique co-medication impact upon the stability of the drug of interest is not apparent. Remember, this is not an interference selectivity/specificity test but rather a test of the potential impact on stability. Presumably the intent is to more accurately simulate a study sample. However, without accurate knowledge of the co-medication matrix concentration levels, metabolites and the potentially complicated permutations of medications, it is impractical to implement beyond fixed-dose combination studies. This proposal is unique to the FDA BMV Guidance, and it remains to be seen how the bioanalytical community will respond with practice that is scientifically justifiable and defensible.


The studious bioanalytical reader of this blog-style post may note that I have not addressed some aspects of the FDA Guidance which have already garnered significant discussion. This is somewhat on purpose since, for example, I don’t have anything to add to the vigorous debate on reference standard retest dates impacting solution preparation or expiration dates (really?). Also, the recommendations for reporting formats have some elements that raise questions and confusion that we are still working through with our own sponsors.

The draft ICH M10 guideline is expected to become public in the first half of 2019. Bioanalysts’ attention may then turn to that document as it purports to replace other global BMV guidance language. That said, areas that are not covered in the ICH language (e.g. biomarkers) may still defer to local regulatory guidance. It will be interesting to see how this all plays out and the impact it may have on bringing consensus and harmonization.

Throughout this article I have tried to stay close to the practicalities of responding to evolving regulatory guidance and to give some perspectives on where I believe we are headed. Since our technology, the drug modalities and the scope of bioanalysis are also evolving, I think we all need to remember the basics of what we are tasked with but simultaneously adapt to new needs. We are in the job of producing documented and scientifically defensible quantitative data that are suitable for the intended use. This may be supported by prescriptive BMV Guidance, or at times we will need to defend the appropriate path taken based upon a customized approach. We may acknowledge that a customized assay validation design is needed, but we often struggle with our convictions because of BMV precedence. I maintain that it is our responsibility to obtain the right data for the right need and that we can still do so under the regulated bioanalytical umbrella. I encourage the bioanalytical community to continue the dialogue in this respect and look forward to any corresponding feedback on this article.

It is an exciting time to be a bioanalyst, with the pace only increasing in terms of the influence of our discipline on disease treatment and understanding of the associated biology. As we see out another decade, I have no doubt that the challenges and opportunities will be met with the innovation and experience of the competent bioanalyst.

Welcome to 2019 and what lies ahead in regulated bioanalysis.