From: "Gibiansky, Leonid" <GibiansL@globomax.com>
Subject: treatment of BQL
Date: Mon, 4 Oct 1999 11:37:58 -0400

Dear NONMEM users,

I'd like to experiment with the way NONMEM treats observations below the quantification limit (BQL). The idea is the following: if the observation (DV) is BQL and the model prediction (F) is also BQL, then the program should not penalize the fit for the difference (if any) F-DV. I've tried the following:

IPRED=F
IF(DV.LT.BQL .AND. F.LE.BQL) IPRED=DV  ; both observation and prediction BQL: no penalty for the difference
Y=IPRED*(1+ERR(1))+ERR(2)

This works and gives reasonable results. However, I am concerned that it may alter the distributions of ERR(1) and ERR(2) if there are many BQL measurements.

Could you share your thoughts on this way of treating BQL measurements?

Thanks!
Leonid

 

 

*****

 

 

Date: Tue, 05 Oct 1999 09:31:42 +0100
From: James <J.G.Wright@ncl.ac.uk>
Subject: Re: treatment of BQL

Dear Leonid,

I have a couple of comments, and everyone already knows I have some controversial opinions on this issue. Firstly, setting a prediction equal to the observation does incur a penalty, dependent on the estimate of the variance of that observation. However, it's the smallest penalty you can get, all other things being equal.

It is not entirely clear to me from your email what you have done. I am guessing that you have effectively fixed DV to the BQL value in your data file/code, but then used your NONMEM control file to treat predictions below this value as equivalent to a prediction of QL. This means that when the model makes a prediction above QL, you are carrying out an analysis equivalent to fixing BQL levels to QL. If you have done something different, I find it hard to see how you have avoided discontinuities in your likelihood surface. If my understanding of your method is correct, you have still introduced discontinuities in the second derivative, which may have consequences for your covariance estimates.

On the other hand, you gain by flattening the likelihood below BQL, in that you acknowledge uncertainty in an approximate way. However, we can always impute values from some flattened distribution anyway, acknowledging that with time the values are likely to be at the bottom end of BQL whereas early BQLs may correspond to true values above QL.

I think you are right to be concerned about the distribution of the errors, as you are effectively chucking in a series of terms whose value the model can decrease by decreasing the estimated variance of a QL prediction. That is, you are throwing in a load of observations that are exactly equal to their predictions, tempting the maximum-likelihood fit to decrease sigma so that it pays less for these observations.

In short, if your model predicts BQL when you get BQL then you have downward pressure on sigma from these terms, at best. If it predicts a value above QL then you are treating the BQL observation as if it is equal to QL.

Essentially, any method will work with just a couple of BQLs (throw them away, single imputation, mutilate your objective function). It's an interesting idea, but I would need to be convinced by some simulations with a high proportion of BQLs. You are altering the objective function depending on the observation, which makes inconsistency a real danger (or at least makes it difficult to demonstrate consistency).

James

 

*****

 

 

From: "Gibiansky, Leonid" <GibiansL@globomax.com>
Subject: RE: treatment of BQL
Date: Tue, 5 Oct 1999 09:13:57 -0400

Dear James,
My idea is to fix DV to, say, BQL/2 if DV < BQL; then set IPRED=F if F > BQL, and IPRED=DV (=BQL/2) if F < BQL and DV < BQL. This way the minimization procedure will "push" the model toward the BQL/2 value, but will not penalize for the difference once F reaches BQL. Yes, this creates a discontinuous objective-function surface. But fixing DV=BQL (for DV < BQL) is not too good either: my experiments show that in that case there is no "force" to push F below BQL, and it takes a lot of iterations to get past the IPRED=BQL level. I used this trick in a minimization algorithm based on least-squares and iteratively weighted (1/y) least-squares objective functions, and I have a pretty good feeling for what was going on there. Now I'd like to use a similar trick in NONMEM. I do not think it is worth the trouble to create a smooth objective function via smooth penalization, at least not before I face any serious convergence problems (oscillations near the jump in the objective function).
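For concreteness, a minimal $ERROR sketch of the scheme just described might look as follows. This is only a sketch under assumptions: BQL records carry DV = BQL/2 in the data file, the quantification limit is written as a constant with an illustrative value, and the combined error model from the first message is retained.

$ERROR
  BQL = 0.1                                  ; quantification limit (illustrative value)
  IPRED = F                                  ; default: penalize F - DV as usual
  IF (DV.LT.BQL .AND. F.LE.BQL) IPRED = DV   ; BQL record with a BQL prediction: set the
                                             ; prediction equal to DV so the difference is not penalized
  Y = IPRED*(1+ERR(1)) + ERR(2)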

I have some problems with the idea
>impute values from some flattened distribution anyway, acknowledging that
>with time the values are likely to be at the bottom end of BQL whereas
>early BQLs may correspond to true values above QL.

since in this method you would need to impose your prior knowledge of the long-term behavior of the system. I tried it and ended up with an additional compartment that served only to fit those imputed values. The idea of fixing DV=BQL/2 and then fixing a high variance for the error term associated with the BQL observations worked fine, but I'd like to create something that corresponds directly to our knowledge: the observation is somewhere in the interval [0, BQL]. It is like using the inequality DV < BQL instead of an equality DV = (some fixed value) in the minimization procedure. The Lagrange multiplier corresponding to the inequality is equal to zero (no penalization) if IPRED < BQL. I do not know how to do this in NONMEM (in my own procedure, I simply excluded the penalty for the difference by multiplying it by zero when DV < BQL and F < BQL), so I am trying to modify the prediction and observation to achieve the same result.
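In least-squares notation, the modified objective described above could be written as follows (notation assumed here: y_i observations, f_i(theta) predictions, w_i weights such as 1 or 1/y_i, BQL the quantification limit, with BQL records replaced by the surrogate value BQL/2):

\[
O(\theta) \;=\; \sum_{i:\,y_i \ge \mathrm{BQL}} \frac{\bigl(y_i - f_i(\theta)\bigr)^2}{w_i}
\;+\; \sum_{i:\,y_i < \mathrm{BQL}} \mathbf{1}\bigl\{f_i(\theta) > \mathrm{BQL}\bigr\}\,
\frac{\bigl(\mathrm{BQL}/2 - f_i(\theta)\bigr)^2}{w_i} ,
\]

so a BQL record contributes nothing once its prediction falls below the limit, which is the zero-Lagrange-multiplier behavior described above.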

I agree that we need simulations to settle this. My question was whether there are any obvious deficiencies in this use of NONMEM, or whether there are other ways to implement this idea in NONMEM.

Thanks for the comments!

Leonid

 

*****

 

 

Date: Tue, 05 Oct 1999 15:17:40 +0100
From: James <J.G.Wright@ncl.ac.uk>
Subject: RE: treatment of BQL

Dear Leonid,

How many iterations it takes your model to converge is the least of your worries in this situation. The model you have described is almost certainly inconsistent, because you have created an objective function where predicting BQL when you have a BQL observation is far more desirable than predicting a value only just above QL (the discontinuity matters. A lot.). Essentially, you have created a situation where the BQL observations are extremely highly weighted. If the model does manage to predict above QL, then you are simply carrying out single imputation. The BQL observations are the least well determined in the sample, and this uncertainty needs to be acknowledged.

Imputing values (or a distribution) influenced by the other data is a sensible way to approach this and to avoid heavy-tail phenomena. Chopping terms out of your objective function in an ad hoc manner does not correspond to "using the inequality in the objective function". The approach you describe still suffers from the other problems I described in my first mail. You can never accommodate uncertainty when the model would like to predict above QL with this approach, no matter what methodology you are using. The problems I have raised are not NONMEM-specific.

James

 

 

*****

 

 

Date: Tue, 5 Oct 1999 07:50:13 -0700 (PDT)
From: ABoeckmann <alison@c255.ucsf.edu>
Subject: Re: NONMEM

Veronique,

The undefined references are not to NONMEM routines. They look to me like FORTRAN library routines. For example,

ZXMIN1.o(.text+0x15aa):ZXMIN1.for: undefined reference to `d_lg10'

This is probably the double precision LOG10 function, which is used in ZXMIN1. Similarly, do_fio might be an I/O subroutine.

On the command line
ld *.o -o nonmem.lib
you probably need to list one or more libraries of FORTRAN functions. The documentation for g77 might explain where to obtain these libraries. Or, it may be that there is an environment variable that you can set that tells the compiler where the libraries are.
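[A hedged suggestion, assuming the g77 toolchain: d_lg10 and do_fio are routines from the f2c-compatible FORTRAN run-time library that g77 supplies (libg2c), and the easiest way to pull that library in is usually to let g77 itself drive the link step rather than calling ld directly, e.g.

g77 *.o -o nonmem.exe

The output name here is only illustrative; adjust it to whatever the installation scripts expect. Adding -lg2c to the ld command may also work, but ld alone will not supply the compiler's start-up files, so using the compiler driver is the safer route.]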

Alison Boeckmann

 

*****

 

 

Date: Tue, 05 Oct 1999 08:15:17 -0700
From: LSheiner <lewis@c255.ucsf.edu>
Subject: Re: treatment of BQL

All -

The BQL thing just doesn't go away ... I have a feeling we've been through this before.

The BQL observations are left-censored. They could be any value between 0 and QL. The likelihood contribution for such an observation is therefore the integral of the distribution of observations, centered at the prediction, from 0 to QL. This distribution, unfortunately, cannot be normal, since a normal implies that the BQL "observation" might be negative, so one might use log(y) vs log(f), or approximate the distribution of epsilon near 0 by a half-normal, or the like.
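In symbols, and purely for illustration, if p(y | f_i) denotes the assumed residual density around the prediction f_i, the contribution of a BQL observation is

\[
L_i \;=\; \int_0^{QL} p\bigl(y \mid f_i\bigr)\, dy ,
\]

which for a normal residual with SD sigma_i (inappropriate here, as just noted) would reduce to \Phi\bigl((QL - f_i)/\sigma_i\bigr) - \Phi\bigl((0 - f_i)/\sigma_i\bigr); a log-transformed or half-normal formulation changes the density and the integration limits but not the basic form.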

Unfortunately, this "fix" involves modifying the objective function so that it can include integrals like the one described above. That is not easy, and it is why I suggested, as a simple expedient:

1. Delete all but the first in each continuous series of BQL observations.
2. Set the remaining (first) one to DV = QL/2.
3. Use an additive plus proportional error model with the SD of the additive part >= QL/2.

This should preserve whatever "information" the BQL possesses, and it does not require modifying the likelihood. I admit I haven't studied this, but before resorting to elaborate schemes, I would like to see some evidence that this simple approach has problems.
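[For concreteness, one way to set up the error model in step 3 in NM-TRAN, assuming for illustration that QL = 0.1 (so the additive SD floor is QL/2 = 0.05, i.e. a variance of 0.0025) and fixing the additive variance at that floor; the proportional variance shown is just an arbitrary initial estimate:

$ERROR
  IPRED = F
  Y = F*(1+ERR(1)) + ERR(2)

$SIGMA
  0.04             ; proportional error variance (arbitrary initial estimate)
  0.0025 FIXED     ; additive error variance = (QL/2)**2 with QL = 0.1 (assumed)

Fixing the additive variance is the simplest way to respect the "SD >= QL/2" floor; one could instead estimate it and check afterwards that it stays at or above that value.]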

I agree with Jim's issue with Leonid's solution; introducing discontinuities into the objective function is, it seems to me, more dangerous than approximating an integral by a point value of the integrand, as I have suggested.

LBS.
--
Lewis B Sheiner, MD Professor: Lab. Med., Biopharm. Sci., Med.
Box 0626 voice: 415 476 1965
UCSF, SF, CA fax: 415 476 2796
94143-0626 email: lewis@c255.ucsf.edu

 

 

*****

 

 

Date: Tue, 05 Oct 1999 16:51:26 +0100
From: James <J.G.Wright@ncl.ac.uk>
Subject: RE: treatment of BQL

Dear Leonid,

This is not a variational problem. BQL observations are not constraints on your predictions. They are observed values and poorly determined ones at that. Acknowledge uncertainty or suffer the consequences.

The discontinuities in your likelihood treat BQL observations almost as if they were constraints. I suggested setting BQLs to QL to limit this effect, but I don't think this is a good idea either. In my opinion, Lew's suggestion will work in many practical situations where BQLs represent a limited proportion of the data. What this proportion is I don't know, but it's probably pretty high, especially if the BQLs aren't inconsistent with the other data.

And finally, you cannot resolve a theoretical problem with simulations. They are useful to help you gain insight, but they are assumption- and case-specific.

James

 

 

*****

 

 

Date: Tue, 05 Oct 1999 10:09:29 -0700
From: LSheiner <lewis@c255.ucsf.edu>
Subject: Re: treatment of BQL

Leonid,

I can see the problems with extensive data sets. As I think I mentioned before, the reason for deleting all but the key BQLs (i.e., those where the observations first dip below detection, or those just before the observations first rise above detection) was pointed out to me, I think, by Dennis Fisher: Imagine fitting a biexponential to single-dose PK observations that are only slightly off from being mono-exponential, and that observations dip below detection at, say, 12 hours, followed by BQLs at 18, 24, 30, 36, 42, and 48 hours. If you set all the BQLs to QL/2 and leave them in, the second exponential will not be the one you seek but will be estimated as very long, since the set of BQLs is operating to "tell" the fit that there is a long, flat tail. But if you delete all but the value at 12 hours, then the BQL there simply helps the fit understand that the levels are low at 12 hours and beyond ...

LBS.
--
Lewis B Sheiner, MD Professor: Lab. Med., Biopharm. Sci., Med.
Box 0626 voice: 415 476 1965
UCSF, SF, CA fax: 415 476 2796
94143-0626 email: lewis@c255.ucsf.edu