From: Paul Hutson
Subject: [NMusers] Using BLQ data for SIGMA
Date: Mon, 07 Aug 2006 09:58:49 -0500

Good day.

I am looking at some old dose-escalation PK data in which escalating doses of drug were
injected on subsequent days.  A short half-life and the dosing history allow me to reset the
system before each dose.  Interestingly, the concentrations for the highest (and last) IV bolus
dose for one subject are below assay quantitation levels (BLQ) at all time points, but were
measurable (albeit at the low end of assay validation) at the prior, lower doses.

I have read prior postings about using BLQ data as the concentration falls below this
limit.  That does not appear appropriate in this case, where there is no measurable drug
at any time point after the dose.  Can anyone offer suggestions on how to incorporate this
absence of measurable data?  Excluding this dose from the pop fit is the easiest thing to
do, but I don't like to leave these data behind.  The evidence that the highest dose was
associated with lower concentrations on the third day might suggest enzyme induction,
but based on what we know of this drug's metabolism, that is mechanistically unlikely.

I am inclined to think that this "lack of data" with BLQ on the highest dose may be more
useful for fitting SIGMA.  That is, does it seem reasonable to include this dose event
with the BLQ results set to "0" (or to the limit of detection?) in order to show the model
that the assay results have low reliability in this concentration range?

I look forward to your counsel.



Paul R. Hutson, Pharm.D.
Associate Professor
UW School of Pharmacy
777 Highland Avenue
Madison WI 53705-2222
Tel  608.263.2496
Fax 608.265.5421
Pager 608.265.7000, p7856

From: Mark Sale - Next Level Solutions
Subject: RE: [NMusers] Using BLQ data for SIGMA
Date: Mon, 07 Aug 2006 09:08:53 -0700

   You are correct that deleting the data will result in bias; see
Stuart's really, really good paper on this subject from a few years ago.
My suggestions are:
   1.  Use WinBUGS.  WinBUGS is (IMHO) the only really correct way to
do this - assuming your model is available in the PKBugs library.  Your
concern about possible enzyme induction may mean it isn't a standard
library model.  WinBUGS does have an ODE solver for complex, nonlinear
models, but my experience is that it is pretty hard to make work.
Chuanpu Hu implemented this in a very elegant solution a few years ago;
I think he had at least a poster somewhere about it.
   2.  Use the method described in the archives by Lewis et al.  It is a
numerical approximation to the normal cumulative distribution that, in
theory, correctly calculates the likelihood contribution of the
left-censored data.  I can send you this code if you're interested; it
isn't too bad.
   3.  Wait for NONMEM VI, which has suggestion #2 built in.
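The left-censored likelihood idea behind suggestion #2 can be sketched in Python. This is my own illustration, not Lewis's actual code: `norm_cdf` built on `math.erf` stands in for the numerical CDF approximation, and representing a BLQ record as `None` is purely a convention of this sketch.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function (no SciPy needed)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def loglik_contribution(obs, pred, sd, loq):
    """Log-likelihood contribution of one concentration record.

    A measurable value contributes the normal density; a value
    reported as BLQ (obs is None) contributes P(Y < LOQ), i.e. the
    normal CDF evaluated at the LOQ - the left-censored term.
    """
    if obs is None:  # reported as BLQ
        return math.log(norm_cdf((loq - pred) / sd))
    z = (obs - pred) / sd
    return -0.5 * z * z - math.log(sd * math.sqrt(2.0 * math.pi))
```

When the model prediction sits exactly at the LOQ, the BLQ record contributes log(0.5), i.e. a 50:50 chance of censoring, which is the intuitively right answer.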

Setting the value to 0 was one of the methods in Stuart's paper.  I'm
sure it wasn't the "winner", but I don't recall how well or badly it performed.

WRT the parametric methods (2 and 3), I am concerned that the results may
be very, very sensitive to the assumptions (BUGS makes much more limited
assumptions).  So, if possible, I would recommend PKBugs.

Mark Sale MD
Next Level Solutions, LLC

From: "Serge Guzy"
Subject: RE: [NMusers] Using BLQ data for SIGMA 
Date: Mon, 7 Aug 2006 10:25:31 -0700

I would also recommend trying the MC-PEM methodology. Both PDx-MC-PEM
and S-ADAPT allow BQL handling. The contribution to the likelihood for
BQL data is estimated by computing the integral from -infinity to the LOQ
(Stuart Beal's method). Both programs allow you to write your own
differential equations, so your problem should be tractable with the
MC-PEM program(s).

Serge Guzy
President POP-PHARM

Subject: RE: [NMusers] Using BLQ data for SIGMA 
Date: Mon, 7 Aug 2006 14:23:40 -0400

Paul - It sounds from your posting that you believe the subject actually got the
high dose as was intended.  What is the likelihood of a dosing error, precipitation
of drug, instability, or some other explanatory problem?  

Assuming that the correct dose was given when it was in fact not will also cause
bias in your estimation and subsequent inferences.  However, if you feel the dosing
was more reliable than the assay results you could in principle include an error
term for values that are BLQ but were still measurable.  Just treating them as censored
data ignores the knowledge contained in the assay results that happen to fall
below the lowest standard.  How that would translate in practice with this particular
dataset in which the structural model is being challenged by this profile from one
subject on one occasion...I think we can only speculate.

Good luck, Jeff 

From: "Stephen Duffull"
Subject: RE: [NMusers] Using BLQ data for SIGMA
Date: Tue, 8 Aug 2006 09:41:22 +1200

Hi all

It is my understanding that Stuart's paper and Lewis's comments on handling
BLQ data were based on not all the data in a dose interval being censored.
I'm not sure that leaving out an entire dose would result in bias - indeed
if there were errors in the dose taken (compliance or otherwise) then
incorporating the data at the nominal dose would lead to bias in itself.  So
- I don't think that BUGS, NONMEM VI, Monolix or even MCPEM will be a
panacea for your problem (which to me is probably much more fundamental).

We have code implementing the various methods in NONMEM (poster at PAGE) and
doing it correctly in BUGS (presentation at PAGE several years ago) - but I
don't think this is the issue.


Professor Stephen Duffull
Chair of Clinical Pharmacy
School of Pharmacy
University of Otago
PO Box 913 Dunedin
New Zealand
P: +64 3 479 5044
F: +64 3 479 7034

From: Mark Sale - Next Level Solutions
Subject: RE: [NMusers] Using BLQ data for SIGMA 
Date: Mon, 07 Aug 2006 14:56:11 -0700

  Stuart's paper does discuss only data missing from within a dosing
interval.  But I am pretty sure that an entire dose interval's data
being missing will bias the result in a similar way.  Consider the case
where you have IOV (although I'm pretty sure it applies in the absence
of IOV as well).  If the mean CL is 1.0 with an IOV SD of 0.5, but any
value of CL greater than 1.6 will result in all data being BQL, you will
get an estimate of CL less than 1.0 (because you deleted some of the data
with CL > mean, but retained all the data with CL < mean).  It occurs
to me that the most likely cause(s) of Paul's observation are:

1. clinical error (placebo, or nothing, given rather than the correct
dose), or someone left the samples on the benchtop overnight and the
drug all degraded.
2. IOV

Being in an academic setting, you of course would never see problems
like #1 - or any other problems with clinical trial conduct  ;-) - but I
assure you they occur in industry settings.
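Mark's truncation argument can be checked with a short Monte Carlo sketch. The numbers 1.0, 0.5, and 1.6 are taken from his example; the sample size, seed, and everything else are illustrative assumptions of mine.

```python
import random

random.seed(1)

# Occasion-level clearances: mean 1.0 with between-occasion SD 0.5.
cl = [random.gauss(1.0, 0.5) for _ in range(100_000)]

# Suppose any occasion with CL > 1.6 yields an all-BQL profile,
# which is then dropped from the dataset.
kept = [c for c in cl if c <= 1.6]

mean_all = sum(cl) / len(cl)
mean_kept = sum(kept) / len(kept)

# Dropping the high-CL occasions shifts the estimated mean CL below 1.0.
print(f"all occasions: {mean_all:.3f}, after dropping all-BQL: {mean_kept:.3f}")
```

The retained-data mean lands noticeably below the true mean of 1.0, which is exactly the selection bias Mark describes: the deleted occasions all came from the high-CL tail.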
Chuanpu - please comment; you know this area better than I do (unless I'm
wrong about all this, in which case please keep your opinions to yourself).

Mark Sale MD
Next Level Solutions, LLC

From: "Stephen Duffull"
Subject: RE: [NMusers] Using BLQ data for SIGMA 
Date: Tue, 8 Aug 2006 10:41:40 +1200


I hadn't thought of IOV as a contributor here.  But taking your example a
little further: if you had IOV greater than some arbitrary value that
resulted in censoring of data as BLQ, then you would presumably also get the
other end of the spectrum too - with concs that are much higher than
expected.  So you would probably see some signal from IOV to support this
phenomenon - and indeed simulations from your model would predict BLQ
observations for some dose levels.  In this case you would have reason to
believe that there is some need to account for BLQ data.  In the absence of
this signal, it seems to me that execution error (which happens even in
academia, but of course at a much slower rate) is a much more probable cause.


Professor Stephen Duffull
Chair of Clinical Pharmacy
School of Pharmacy
University of Otago
PO Box 913 Dunedin
New Zealand
P: +64 3 479 5044
F: +64 3 479 7034

Subject: RE: [NMusers] Using BLQ data for SIGMA
Date: Tue, 08 Aug 2006 03:55:24 +0200

Dear Dr Hutson,

As proposed before, I would also rate an error in the clinical part or sample storage / sample
transport most likely from my experience as a clinical monitor. 
What is written in the final study report or clinical report about the dose of the profile which
was completely below the quantification limit (BQL)? By how much was the dose increased in this subject? 

You might try to test the "high between-occasion variability (BOV)" hypothesis:
a) Estimate the PopPK model with BOV while ignoring the profile which is completely BQL.
b) Estimate the PopPK model with BOV while setting all concentrations of the profile which is
completely BQL to zero or to a very small concentration.**

Both a) and b) will probably yield biased estimates. However, a) and b) might give you an idea
about the range of "possibly true" models. If these two models yield similar answers to your
ultimate modeling objectives, this procedure might be sufficient and you might stop here.

In case a) and b) yield substantially different answers, I would study the distribution of etas
(outliers?) and the variance of the BOV terms in cases a) and b). If case b) yields an extremely large
BOV for clearance (e.g. a BOV larger than the population parameter variability reported in literature
for your drug and group of patients), you might argue that this is objective evidence for a clinical
error and that case a) is the most appropriate model choice. (This assumes that there is no relevant
auto-induction occurring in this patient.)

**As a modification of case b), you might impute a more realistic profile for the BQL samples of the
highest dose profile. You could e.g. use the average profile in this subject at the lower doses and
multiply each concentration by the same factor so that the peak concentration of the imputed profile
for the highest dose is e.g. 75% of the limit of quantification. I realize that this approach may
lack statistical consistency; however, it might be a practical approach before applying more
sophisticated analysis techniques.

Hope this helps.
Best regards

Juergen Bulitta, MSc
Pharmacometrician, IBMP - Institute for Biomedical and Pharmaceutical Research
Paul-Ehrlich-Str. 19, D-90562 Nurnberg-Heroldsberg, Germany

Subject: RE: [NMusers] Using BLQ data for SIGMA 
Date: Tue, 8 Aug 2006 10:40:29 +0200

Dear All,

Without entering into the debate of which method is a "panacea" for this issue (to quote Steve),
I just want to mention a very elegant example of Stuart's method, with differential equations, implemented
by Samson, Mentré and Lavielle with SAEM in the Monolix software. It was presented at the last PAGE meeting
in the Lewis Sheiner session:



Dr Pascal Girard
EA 3738, Ciblage Thérapeutique en Oncologie
Fac Médecine Lyon-Sud, BP12
69921 OULLINS Cedex
Tel  +33 (0)4 26 23 59 54 / Fax +33 (0)4 26 23 59 76
Master Recherche aMIV, parcours Bio-Mathématiques et Pharmacologie 

Subject: RE: [NMusers] Using BLQ data for SIGMA
Date: Tue, 8 Aug 2006 11:09:26 +0200

Dear all,

The debate is very interesting. However, this particular case seems to be about
ONE patient on ONE occasion. What about the others at this last dose level?

From my experience in such a case, I keep the data of the patient in mind but analyse the
data omitting this patient. Omitting this entire occasion cannot lead to bias in the population
analysis, unless the model is not stable (not enough patients, misspecification, ...). I agree
with Steve when he says that execution errors (problems in storage, transfer to the laboratory,
drawing samples, ...) may happen in academia, but of course at a much slower rate. Were the
samples assayed in the same analytical run?



Dr Brigitte Tranchand
EA3738 CTO
Fac Médecine Lyon-Sud
69921 OULLINS Cedex
Tel 33 4 26 23 59 53
e-mail :



Subject: RE: [NMusers] Using BLQ data for SIGMA 
Date: Tue, 8 Aug 2006 10:03:58 -0400

Dear Mark, Steve, and all,

(By commenting here, I am implying that I don't think Mark is wrong about
all this.  ;-)  )
It looks clear that everyone agrees that, if we trust the dosing info, then
the correct way is to model BQL data. (For this purpose I like WinBUGS, but
that may just mean that I don't know other software well.)

The issue is what if we don't trust the nominal dose. It is of course
interesting to investigate whether the dose/assay is wrong, however my
guess is we may never know for sure. Similarly, I suspect it would be
difficult to find conclusive evidence that IOV caused the BQL. In
principle, dosing uncertainty can be modeled as well, e.g., in the spirit
of (Mu and Ludden, "Estimation of Population Pharmacokinetic Parameters in
the Presence of Non-compliance", JPP 2003); however, this is more complex.

In Paul's case, I would first look at the results of modeling BQL vs.
deleting them, and see how much difference there is. This will give a sense
of the robustness of the conclusions. Attempts to account for dosing error,
or IOV, should give in-between results. Ultimately, I think the issue is
robustness of the conclusions, as we likely will not be able to quantify
how uncertain we are about the dose.

Chuanpu Hu, Ph.D.
9 Great Valley Parkway, Room 242
Malvern, PA 19355-1304
Tel: (610) 889-6774
Fax: (610) 889-6932

From: Leonid Gibiansky
Subject: RE: [NMusers] Using BLQ data for SIGMA
Date: Tue, 08 Aug 2006 10:57:31 -0400

It is interesting that the BQL topic is so popular. In 1999 I asked how to
treat BQL data, and below is Lewis Sheiner's reply (even at that time it was a very old topic).

It looks like setting the BQL values to BQL/2 and using an additive error with an SD of at
least BQL/2 approximates the problem well enough, without the need to wait for NONMEM VI, WinBUGS,
or any other software. I am not sure whether you need to model BQL in this particular case,
but if yes, NONMEM V is capable of delivering a reasonable answer if you have a reasonable
model (more than one point should be retained if you need to use the entire BQL profile).



From: LSheiner
Subject: RE: [NMusers] Using BLQ data for SIGMA
Date: Tue, 05 Oct 1999 08:15:17 -0700

All -

The BQL thing just doesn't go away ... I have a feeling we've been through this before.

The BQL observations are left-censored. They could be any value between 0 and
the QL. The likelihood contribution for such an observation is therefore the integral
of the distribution of observations, centered at the prediction, from 0 to QL. This
distribution, unfortunately, cannot be normal, since a normal distribution implies that
the "observation" BQL might be negative, so one might use log(y) vs log(f), or
approximate the distribution of epsilon near 0 by a half-normal, or some such.
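The integral Lewis describes can be checked numerically. This sketch is my own illustration (the prediction, SD, and QL values are made up); for a normal error model it confirms that the censored-likelihood integral from 0 to QL equals a difference of two normal CDFs.

```python
import math

def phi(x, mu, sd):
    """Normal density centered at the prediction mu."""
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def Phi(x, mu, sd):
    """Normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

mu, sd, ql = 0.3, 0.2, 0.25  # illustrative prediction, SD, and QL

# Trapezoidal integration of the density from 0 to QL ...
n = 10_000
h = ql / n
integral = h * (0.5 * phi(0.0, mu, sd) + 0.5 * phi(ql, mu, sd)
                + sum(phi(i * h, mu, sd) for i in range(1, n)))

# ... matches the closed-form CDF difference Phi(QL) - Phi(0).
closed_form = Phi(ql, mu, sd) - Phi(0.0, mu, sd)
```

In practice one would use the closed form directly; the quadrature here is only to make the equivalence visible.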

Unfortunately, this "fix" involves modifying the objective function so that it can
include integrals like the ones I described above. That is not easy, which is why
I suggested, as a simple expedient:

1. Delete all but the first in each continuous series of BQL observations
2. Set the remaining (first) one DV = QL/2
3. Use an additive plus proportional error model with the SD of the additive part >= QL/2.

This should preserve whatever "information" the BQL possesses, and does not require
modifying the likelihood. I admit I haven't studied this, but before resorting to
elaborate schemes, I would like to see some evidence that this simple one has problems.
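The three-step expedient above can be sketched as follows; the QL value and the proportional CV are illustrative assumptions, not numbers from the thread.

```python
import math

ql = 0.5  # assumed quantitation limit (illustrative)

def residual_sd(pred, sd_add, cv_prop):
    """Combined additive + proportional residual SD:
    Var = sd_add**2 + (cv_prop * pred)**2."""
    return math.sqrt(sd_add ** 2 + (cv_prop * pred) ** 2)

# Step 2: the first retained BQL record in a run gets DV = QL/2.
dv_imputed = ql / 2.0

# Step 3: additive SD of at least QL/2, plus a proportional component.
sd_add = ql / 2.0   # the QL/2 floor on the additive part
cv_prop = 0.15      # illustrative proportional CV

# Near the QL the additive term dominates, so the imputed point is
# treated as highly uncertain rather than as a precise measurement.
sd_at_imputed = residual_sd(dv_imputed, sd_add, cv_prop)
```

The point of the QL/2 floor is visible here: the residual SD at the imputed value is at least as large as the imputed value's maximum possible error, so the point carries little weight in the fit.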

I agree with Jim's issue with Leonid's solution; introducing discontinuities into the objective
function is, it seems to me, more dangerous than approximating an integral with a point on
the integrand, as I have suggested.

Lewis B Sheiner, MD
Professor: Lab. Med., Biopharm. Sci., Med.
UCSF, Box 0626, SF, CA 94143-0626
voice: 415 476 1965
fax: 415 476 2796
email: