Date: Fri, 30 Jul 1999 15:55:10 -0500
From: S Thomas Forgue <FORGUE_S_THOMAS@Lilly.com>
Subject: BQL values, version 3

NONMEM Users:

Reference is made to NONMEM UsersNet Archive; Subject 116: "Concentration values below assay limits" [May 1997].

We are modeling (ADVAN2) plasma concentrations of a polypeptide drug (1461 observations from 358 patients in a Phase 3 clinical study). Fully 33% of the observations are BQL (< 50 pg/mL). Peak concentrations are typically 200 - 300 pg/mL. The proportion of BQL values increases greatly with increasing time since the last dose. How we treat these BQL values has a *marked* effect on estimates of drug exposure -- with clinically important consequences for safety assessment.

We are aware of some approaches to estimate BQL values directly from detector response data; however, the current reality is that we accept databases from our analytical laboratories with assigned "BQL" values. Please, let's table that issue for now.

The trivial solution of simply omitting BQL values has little support within our group, unless these data represent a "negligible" fraction. Dr. Sheiner's suggestion was (in part) to use an additive plus proportional residual error structure, fix the variance of the additive epsilon to (QL/2)**2 [one possibility], and enter the BQL observations as QL/2. [Please see the archive reference for his actual comments.] This is easily implemented and works "well" in our polypeptide problem; a sketch of one possible implementation is given below. We do not have consensus among our PK'ists and statisticians that this is necessarily the best solution for the polypeptide study. What are the alternatives? Dr. Aarons alluded to an iterative numerical integration routine based on likelihood theory. We are considering an iterative Monte Carlo strategy of imputing BQL values randomly drawn from the left tail of a log-normal distribution of plasma concentrations.
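
To make that concrete, here is a minimal NM-TRAN sketch of the simple fix (a sketch only; it assumes QL = 50 pg/mL as in our study, so the additive variance is fixed at (QL/2)**2 = 25**2 = 625 (pg/mL)**2, and the proportional-variance initial estimate of 0.04 is purely illustrative). BQL records would be entered in the dataset with DV = 25 (= QL/2).

   $ERROR
   Y = F + F*EPS(1) + EPS(2)   ; combined proportional + additive residual error

   $SIGMA
   0.04      ; proportional error variance (illustrative initial estimate)
   625 FIX   ; additive error variance fixed at (QL/2)**2 = (50/2)**2 = 625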

***THE REQUEST*** We would appreciate knowing of any *statistically sound and pragmatic* approach to the BQL issue that we can implement within the next month and submit with confidence to worldwide regulatory agencies. Our thanks in advance for your time and expertise.

Ben Cerimele, Tom Forgue, Mike Heathman and Julie Satterwhite

*****

Date: Fri, 30 Jul 1999 15:14:13 -0700
From: LSheiner <lewis@c255.ucsf.edu>
Subject: Re: BQL values, version 3

Let me amend and amplify my (simple fix) suggestion. I did suggest substituting QL/2 for BQL values as a simple strategy that tries to extract some information from the BQLs without working too hard to do so.

It turns out (I can't recall who pointed this out to me) that if the concentrations are all BQL after, say, 6 hours, and there are therefore BQLs at 6, 10, 12, 24, and 48 hours, then it's a bad idea to put them all in at QL/2 because, even with a relatively large additive error variance, such data imply a long flat tail to the C vs. t curve (potentially wreaking havoc with AUC extrapolation, for example).

So, my modified suggestion for a simple fix is to put ONLY the first BQL in (= QL/2) and delete all subsequent ones in each time series (occasion) for an individual.

Now, on to the theory. The "right" thing to do is indeed to set the likelihood contribution of a BQL observation equal to the integral over the support of the observation below QL, conditional upon the current values of the pop params. This is tedious and involves some fancy programming. We (JM Gries, Davide Verotta and I) did this in a problem where we couldn't avoid it, and JM (Jean-Michel.Gries@hmrag.com) may be able to supply you with some useful code fragments (but then again, our problem was different, and it may not be easy for him to extract the relevant code).
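
In symbols (a sketch, assuming a normal residual about the model prediction f(t_i; theta) with standard deviation sigma_i under the current population parameters), the contribution of a BQL observation would be

\[
L_i \;=\; \Pr(y_i < QL \mid \theta) \;=\; \int_{-\infty}^{QL} p(y \mid \theta)\, dy \;=\; \Phi\!\left(\frac{QL - f(t_i;\theta)}{\sigma_i}\right),
\]

where \(\Phi\) is the standard normal cumulative distribution function.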

The imputation idea is a good one, but tell us more about just what you have in mind. I imagine you would be imputing essentially a single likelihood contribution for each BQL at each function evaluation? If so, the problem is that the "likelihood" conditional on a fixed set of parameters would then be stochastic. Would this "jitter" mess up convergence? Of course, if you imputed, say, 100 BQL values from the current model for each "observed" BQL, instead of just one, as I assume you intend, and used the average likelihood contribution of these 100, then this would constitute a particular (Monte Carlo) implementation of the "right" method, i.e., integration over the support for the observation below QL.
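
One simple way to make that Monte Carlo averaging concrete (a sketch only, not necessarily exactly the scheme intended above) is to draw K values from the current model and count the fraction falling below QL:

\[
L_i \;\approx\; \frac{1}{K} \sum_{k=1}^{K} \mathbf{1}\{\, y_i^{(k)} < QL \,\}, \qquad y_i^{(k)} \sim p(y \mid \theta),
\]

which converges to the integral above as K grows.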

LBS.
--
Lewis B Sheiner, MD Professor: Lab. Med., Biopharm. Sci., Med.
Box 0626 voice: 415 476 1965
UCSF, SF, CA fax: 415 476 2796
94143-0626 email: lewis@c255.ucsf.edu

*****

Date: Sat, 31 Jul 1999 00:08:58 +0100 (GMT)
From: "J.G. Wright" <J.G.Wright@newcastle.ac.uk>
Subject: Re: BQL values, version 3

There are two things to bear in mind when dealing with BQL observations.

1) A BQL observation does not necessarily mean that the "true" value is below BQL. Sometimes observations above BQL will be recorded as below BQL because of assay variability etc. Thus the support for a BQL observation on the likelihood should actually extend above BQL. Exactly how much is hard to determine.

2) Some account of the uncertainty induced by these censored observations has to be taken. One approach to this is multiple imputation, where you create numerous datasets with different random values (generated from a sensible model) and analyse each dataset separately. You then combine across datasets to get an average value and confidence intervals which acknowledge this uncertainty (definitely superior to a single imputation); the standard combining rules are sketched below. This is debatably a discrete analogue of an EM-type algorithm, with the advantage that it can be easily implemented in NONMEM.
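
For reference, the standard combining rules (Rubin's) for m imputed datasets, with estimate \(\hat\theta_j\) and estimated variance \(\hat U_j\) from the fit to dataset j, are

\[
\bar\theta = \frac{1}{m}\sum_{j=1}^{m}\hat\theta_j, \qquad
T = \frac{1}{m}\sum_{j=1}^{m}\hat U_j \;+\; \Big(1+\frac{1}{m}\Big)\,\frac{1}{m-1}\sum_{j=1}^{m}\big(\hat\theta_j-\bar\theta\big)^2,
\]

where the first term of T is the within-imputation variance, the second the between-imputation variance, and confidence intervals are based on the total variance T.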

Of course, the choice of imputation model is crucial. However, for regulatory submission, a conservative method (with large variability) is probably the best option.

James Wright

*****

Date: Fri, 30 Jul 1999 16:25:32 -0700
From: LSheiner <lewis@c255.ucsf.edu>
Subject: Re: BQL values, version 3

"J.G. Wright" wrote:
>
> There are two things to bear in mind when dealing with BQL observations.
>
> 1) A BQL observation does not necessarily mean that the "true" value is
> below BQL. Sometimes observations above BQL will be recorded as below BQL
> because of assay variability etc. Thus the support for a BQL observation
> on the likelihood should actually extend above BQL. Exactly how much is
> hard to determine.

I don't think this is right. The likelihood conditions on the OBSERVED data. What was observed is BQL. It has a certain probability under the model (the model might indeed have its expectation > QL, but that doesn't change the observation, only its likelihood), and that is given by the integral I defined.

>
> 2) Some account of the uncertainty induced by these censored observations
> has to be taken. One approach to this is multiple imputation, where you
> create numerous datasets with different random values (generated from a
> sensible model) and analyse each dataset separately. You then combine
> across datasets to get an average value and confidence intervals which
> acknowledge this uncertainty (definitely superior to a single imputation).
> This is debatably a discrete analogue of an EM-type algorithm, with the
> advantage that it can be easily implemented in NONMEM.
>
> Of course, the choice of imputation model is crucial. However, for
> regulatory submission, a conservative method (with large variability) is
> probably the best option.
>

Again, I don't believe so: the uncertainty in the BQL observation is captured by the probability model for it, just as is the uncertainty of a >QL observation.

BUT,
1. In my simplified approach I am indeed not properly acknowledging uncertainty, since I am imputing the missing observation as QL/2.
2. Multiple imputation may indeed be a way to deal with the BQL problem if you don't want to do the integration: as Jim says, create say 10 data sets, filling in the BQL observations with reasonable imputed values including (large) uncertainty. Then proceed as appropriate for multiple imputation. This amounts to treating the BQL observations as "missing data", which, within the constraint that they are BQL, they are.

LBS.
--
Lewis B Sheiner, MD Professor: Lab. Med., Biopharm. Sci., Med.
Box 0626 voice: 415 476 1965
UCSF, SF, CA fax: 415 476 2796
94143-0626 email: lewis@c255.ucsf.edu

*****

Date: Mon, 2 Aug 1999 16:57:06 +0100 (GMT)
From: "J.G. Wright" <J.G.Wright@newcastle.ac.uk>
Subject: Re: BQL values, version 3

Dear Lew,

Of course, you are right about the definition of the likelihood. My phrasing was unfortunate; what I meant to emphasize was that BQL observations can come from true values above the limit, and fixing BQL observations to QL/2 with symmetric error doesn't seem appropriate to me. I misused the term "support" to mean the weighted true values consistent with the observation, which is indeed what you have to consider when imputing values. I believe the word "support" in your explanation means the relative weighting of possible observations consistent with the observed value.

My point, that substituting data values provides narrower confidence intervals, is, I believe, correct, and I don't think "inflating" the residual error (downweighting the BQL observations) correctly compensates for this. Indeed, it may introduce some degree of bias.

Of course, if you are going to integrate over the possible observed values below QL, this is not an issue, as you are not imputing a fixed value. In implementing the integration approach, however, what distribution (i.e., support) do you assume for the observations below QL (e.g., uniform, lognormal, etc.)? This assumption remains critical to the analysis, whatever method you use.

James

*****

From: "Stephen Duffull" <sduffull@fs1.pa.man.ac.uk>
Subject: Do we need BQL?
Date: Tue, 3 Aug 1999 09:51:06 +0100

Dear NM Users

I read with continued interest the discussion about BLQ (BQL, LOQ, BLT ...). I thought that I might add something for more general discussion. BLQ to me is an assay artefact. It has no specific clinical or modelling value and, as far as I can tell, is arbitrary (although most publications seem to recommend the same cut-off value). If BLQ were never reported, and the "observed" concentration were reported instead, this would make sense from a modelling perspective. We could then let the likelihood compute the appropriate contribution of each concentration (assuming an appropriate error model has been chosen). The important assay parameter "limit of detection", which does have specific meaning in a modelling context (although its value is still arbitrary), would continue to be of interest.

I realise that this is of no help to the discussion about what to do with BLQs, but perhaps this problem is somewhat self-induced.

Regards
Steve
=====================
Stephen Duffull
School of Pharmacy
University of Manchester
Manchester, M13 9PL, UK
Ph +44 161 275 2355
Fax +44 161 275 2396

*****

Date: Tue, 03 Aug 1999 08:09:44 -0700
From: LSheiner <lewis@c255.ucsf.edu>
Subject: Re: Do we need BQL?

Ah, a breath of science instead of simply speculation.

Indeed, the censoring that BQL represents is self-induced, and probably stems from the fact that labs are reluctant to state a number when the (%) uncertainty in that number is very high. We now know how to deal with varying degrees of uncertainty, and are, in fact, losing information by censoring our very low observations.

A key point here is that we use the model for the "signal" to determine the magnitude of the noise, so we are not left with only the laboratorian's calibration curves and daily controls; we examine internal evidence and construct a variance model that is compatible with our notion of the underlying process. In so doing, we would be aided by getting an honest report from the lab on what its machines said, rather than an arbitrarily censored one.

Good point.

LBS.

PS. Why is "limit of detection" meaningful? Is it not simply the lowest value reliably distinguishable from zero? Doesn't the error model cover this as part of its continuum?
--
Lewis B Sheiner, MD Professor: Lab. Med., Biopharm. Sci., Med.
Box 0626 voice: 415 476 1965
UCSF, SF, CA fax: 415 476 2796
94143-0626 email: lewis@c255.ucsf.edu

*****

Date: Tue, 3 Aug 1999 18:23:22 +0100 (GMT)
From: "J.G. Wright" <J.G.Wright@newcastle.ac.uk>
Subject: Re: Do we need BQL?

BQLs returned from laboratories are unnecessary in pharmacokinetics, but may have value in toxicokinetics. I am sure the lab people would argue that we really don't know (and can't determine) the behaviour of the assay in this range. Fortunately, most of us don't believe that the assay is all that important compared to other sources of variation in the data, and so we couldn't care less. The problem is that once we have BQLs, we have to deal with them.

The best solution is the sophisticated coding suggested by Lew Sheiner. However, if I were in a regulatory body I might look on this sceptically, not because of any theoretical flaws, but because it is complicated: one could argue that the assumed support is chosen to be overly consistent with the other data, and there are difficulties in evaluating the integrals, propagating the error through to the estimates, etc. Lew may wish to crushingly refute these comments, but I think he would agree it is difficult to implement. Is there a publication somewhere with details of how to do this?

All of the single-imputation methods with inflated variance have a more sinister problem, in that the variance function then depends on the individual observed values rather than on the predicted values. This is akin to weighting observations according to their observed values, and is a good way to get inconsistent estimators.

Multiple imputation is better than single imputation because you do not have to inflate the BQL variance to approximate the uncertainty in your analysis. Although, if I remember my undergraduate days correctly, ten repetitions give excellent approximations to confidence intervals and mean values for fixed effects, I do not know whether the theory extends to mixed-effects models, especially if you are interested in your variance components. Also, if you are doing some other kind of sensitivity analysis alongside this and now have to do all of these runs x number of times, this will rapidly become impractical.

BQLs are bad, but once you are stuck with them, you have to do something. Some labs are even reluctant to tell you what their QL is for a given assay...

James

*****

From: "Stephen Duffull" <sduffull@fs1.pa.man.ac.uk>
Subject: Do we need LOD?
Date: Wed, 4 Aug 1999 09:07:40 +0100

Lewis wrote:

>PS. Why is "limit of detection" meaningful? Is it not simply the
>lowest value reliably distinguishable from zero? Doesn't
>the error model cover this as part of its continuum?

Limit of detection is important from a clinical (& medico-legal) viewpoint as an arbitrary qualitative measure. Its derivation is irrelevant as long as everyone understands what it is and accepts it for what it is. From a modelling viewpoint I agree that limit of detection, as it is usually defined, is not particularly meaningful or useful. But it would seem to me that we do need to know the value at which signal and noise become indistinguishable (thus redefining "limit of detection").

Regards
Steve
=====================
Stephen Duffull
School of Pharmacy
University of Manchester
Manchester, M13 9PL, UK
Ph +44 161 275 2355
Fax +44 161 275 2396

*****

From: "Stephen Duffull" <sduffull@fs1.pa.man.ac.uk>
Subject: Re: Do we need BQL?
Date: Wed, 4 Aug 1999 09:20:01 +0100

James wrote:

>BQLs returned from laboratories are unnecessary in pharmacokinetics, but
>may have value in toxicokinetics.

I agree that it is unnecessary in PK, but I am not sure why TK would be any different?

>I am sure the lab people would argue
>that we really don't know (and can't determine) the behaviour of the assay
>in this range.

This may be a lab paradigm. I do not believe that the assay behaviour cannot be determined in this range. From my understanding, BQL is often arbitrarily set to the concentration at which the CV% exceeds 20% (or sometimes some other value). I do not perceive the difference between a concentration measured with a CV of 19% and one with a CV of 21%. What I perceive is needed is the best possible assay that the "lab people" can produce. Then all concentrations would be reported, irrespective of where they fall on the regression line, and the "modelling people" left to decide how to use them. Indeed, Mats has suggested that the "modelling people" could work from peak areas and do not require the transformation to concentrations [I hope I have quoted you correctly here, Mats.]

> Fortunately most of us don't believe that the assay is all
>that important compared to other sources of variation in the data and so
>we couldn't care less.

I don't think that this is the point. The contribution of assay error to the residual error variance can be easily handled without BQL.

> The problem is that once we have BQLs we have
>to deal with them

But this shouldn't stop us from thinking about where we might like to be in the future?

Regards
Steve
=====================
Stephen Duffull
School of Pharmacy
University of Manchester
Manchester, M13 9PL, UK
Ph +44 161 275 2355
Fax +44 161 275 2396

*****

Date: Wed, 4 Aug 1999 15:39:27 +0100 (GMT)
From: "J.G. Wright" <J.G.Wright@newcastle.ac.uk>
Subject: Re: Do we need BQL?

Dear Steve,

I agree with everything you say below, but merely wished to represent the view taken by the people we actually have to convince (unfortunately not each other; perhaps we should start some kind of campaign). I am not sure that the definition of QL etc. will necessarily be consistent across labs. The difference between toxicokinetics and pharmacokinetics is that in pharmacokinetics we know the drug is there. To an extent you can argue that good modelling could be applied in toxicokinetics too, but I think the notion of a sample which is indistinguishable from a blank is simple and useful in this context (and doesn't need extensive assumptions). Of course, this is not the same definition of QL as you suggest below; it corresponds more closely to your LOD. My comments about variability at very low concentrations were perhaps somewhat callous to modelling principles, but what I was getting at is that in some circumstances crude approximations will work fine (few BQLs, consistent with other data); the trick is to see when they won't, preferably in advance, so that you can convince the people doing the assay. However, I am not for a second implying there is anything good about BQLs; they are a damned nuisance.

The methodologies discussed are applicable to situations where you have bounded intervals for observations, not necessarily in BQL format. This could be some pharmacodynamic endpoint or a missing covariate (which happens all the time in every field), so the methods suggested have quite broad applicability. I think Tom Forgue mentioned in his original email that they knew you shouldn't have BQLs. What we need is a surefire method to convince the lab folks. I know that some labs do not even return samples with the label BQL but omit them entirely from the report, so perhaps it is a long way until we get to the BQL-less future.

James