From: Daniel Corrado
Subject: [NMusers] Log-transformation
Date: Mon, 31 Mar 2003 13:26:32 -0800 (PST)

Hello!

I am trying to fit data to a 2-compartment model after
log transformation. My initial runs were without log
transformation. The data fit OK, but I was advised to
refit after log transformation. Transforming DV to
log(DV) gives negative values for concentrations below
1.0. Is this a problem?

Though I got successful minimization prior to log
transformation, when I try to run NONMEM (using
PDx-POP) after log transformation, nothing happens. I
wanted to find out whether the negative log values or
something in my control stream is causing the problem.

Thanks,

Dan


My control stream is as follows. 



;Model Desc: base model run 1 
;Project Name: population run
;Project ID: EN-001

$PROB RUN# 801 (TWO COMP PK MODEL)
$INPUT C ID TIME DV AMT
$DATA  801.csv IGNORE C

$SUBROUTINES ADVAN4 TRANS1

$PK
TVKA = THETA(1)
TVK = THETA(2)
TVK23 = THETA(3)  
TVK32 = THETA(4)
TVV2 = THETA(5)


KA=EXP(TVKA)*EXP(ETA(1))
K=EXP(TVK)*EXP(ETA(2))
K23=EXP(TVK23)*EXP(ETA(3))
K32=EXP(TVK32)*EXP(ETA(3))
V2=EXP(TVV2)*EXP(ETA(4))

S2=V2


$ERROR
Y=LOG(F)+ERR(1)

$THETA
(0, 0.39); KA
(0, 0.3); K
(0, 0.048); K23
(0, 0.023); K32
(0, 1250); V2

$OMEGA
0.1 ;[P] INTERIND VAR IN KA
0.1 ;[P] INTERIND VAR IN K
0.001 ;[P] INTERIND VAR IN K23
0.1 ;[P] INTERIND VAR IN V2

$SIGMA
0.01 ;[P] PROPORTIONAL COMPONENT

$ESTIMATION MAXEVAL=9999 PRINT=5 METHOD=1 INTERACTION
NOABORT POSTHOC
$COVARIANCE
$TABLE ID TIME DV ETA(1) ETA(2) TVKA TVK TVK23 TVK32
TVV2 KA K K23 K32 V2 
FILE=901.TAB NOPRINT
_______________________________________________________

From:"Bachman, William" 
Subject:RE: [NMusers] Log-transformation
Date: Mon, 31 Mar 2003 16:51:55 -0500

Daniel,

The following code can be used to provide individual predictions (and to avoid
taking LOG(F) = log(0), which would occur on the dosing records):

$ERROR
IPRED=0
IF(F.GT.0) IPRED=LOG(F)
Y=IPRED+ERR(1) 
_______________________________________________________

From: Leonid Gibiansky 
Subject: Re: [NMusers] Log-transformation
Date: Mon, 31 Mar 2003 17:01:36 -0500

Daniel,
You should not have any problems with negative values for log(DV). Try
starting the model from the values obtained for the untransformed model.
Also, check whether you actually ran the control stream: your run number
and output file have different names (the run number is 801 but the output
file is 901). PDx-POP would make them equal (if the "check run numbers" box
is checked). Just to make sure: you should change not only the control
stream (Y=LOG(F)+..) but also the data file (place LOG(DV) in the DV column).
Leonid
_______________________________________________________

From: "Sam Liao" 
Subject: RE: [NMusers] Log-transformation
Date: Mon, 31 Mar 2003 19:01:16 -0500

Hi Daniel:

Your control file needs some changes in KA, K, K23, K32 and V2:
----------------------------
$PK
TVKA = THETA(1)
TVK = THETA(2)
TVK23 = THETA(3)  
TVK32 = THETA(4)
TVV2 = THETA(5)

KA=TVKA*EXP(ETA(1))
K=TVK*EXP(ETA(2))
K23=TVK23*EXP(ETA(3))
K32=TVK32*EXP(ETA(4))
V2=TVV2*EXP(ETA(5))
----------------------
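
With ETA(5) now in use, the $OMEGA block also needs five diagonal elements.
A minimal sketch, reusing the original initial estimates (the starting value
for the new K32 term is only a placeholder):

$OMEGA
0.1   ;[P] INTERIND VAR IN KA
0.1   ;[P] INTERIND VAR IN K
0.001 ;[P] INTERIND VAR IN K23
0.001 ;[P] INTERIND VAR IN K32 (placeholder starting value)
0.1   ;[P] INTERIND VAR IN V2
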
Best regards,

Sam Liao, Ph.D.
PharMax Research
20 Second Street,
PO Box 1809,
Jersey City, NJ 07302
phone: 201-7983202
efax: 1-720-2946783
_______________________________________________________

From: Daniel Corrado
Subject: RE: [NMusers] Log-transformation
Date: Tue, 1 Apr 2003 09:12:49 -0800 (PST)

Leonid, Bill, Sam,

Thanks a lot for your suggestions. I used all of them,
and it worked beautifully. I did have difficulty getting a
covariance matrix and had to reduce the number of
ETAs from 5 to 4 to obtain it.

The assumption was ETA(K23) = ETA(K32).

The objective function and the thetas did not change,
and I got the covariance matrix.

Thanks,

Dan
_______________________________________________________

From: Luann Phillips 
Subject: Re: [NMusers] Log-transformation
Date: Fri, 04 Apr 2003 10:52:30 -0500

NM Users,

Sorry about the delay in responding to this posting. Below is a note that
I posted to the users group sometime in Feb. (99feb072003.html) about a
similar topic. This code differs from Bill Bachman's reply to Daniel's
question: in Bill's code, predicted concentrations of zero are not
transformed. Because the DV is log(Cp), keeping IPRED=0 in essence
changes the predicted Cp from 0 to 1 (log(1)=0) and will impact the MVOF.
The code below will prevent your run from terminating because of a predicted
Cp=0 on dosing records (a frequent occurrence). Below the code is a list
of things that can be done when a predicted Cp of zero occurs for a
concentration record.

Regards,

Luann

/*****Previous note**************/

$ERROR

FLAG=0
IF(AMT.NE.0)FLAG=1  ;dosing records only

IPRED=LOG(F+FLAG)   ;transform the prediction to the log of the
                    ;prediction
                    ; IPRED=log(f) for concentration records and
                    ; IPRED=log(f+1) for dose records
                   
W=1                 ;additive error model

Y= IPRED + W*EPS(1)

This will allow NONMEM to continue running when a predicted
concentration of 0 occurs on any dosing record.  Since predictions for
dose records do not contribute to the minimum value of the objective
function, this change to F (or IPRED) does not influence the outcome
of the analyses.  However, if code is used to alter the predicted
concentration on a PK sample record, the minimum value of the objective
function is changed, and its value can be highly dependent upon what
value of IPRED is chosen as the 'new' predicted concentration.

Using the above code, if NONMEM predicts a concentration of 0 on a PK
sample record, the run will still terminate (on some systems) with errors
because LOG(0) is negative infinity. In this case, the patient ID and
the observation within that patient for which the error occurred will be
provided.

If this occurs, you may want to consider the following options:

(1) Check the dosing and sampling times and the dose amounts preceding
the observation for errors. Is it reasonable that a patient would have
an observable concentration, given the time since last dose for the
sample?

(2) Is NONMEM predicting a zero concentration because of a modeled
absorption lag time? Consider removing the absorption lag time or using
a MIXTURE model to allow some subjects to have a lag time and others to
have a lag time of zero. 

(3) Test a combined additive + constant CV error model (Y = F + F*EPS(1)
+ EPS(2)) using DV = original concentration instead of
DV = log(concentration); a control-stream sketch is given after this list.

(4) Consider temporarily excluding measured concentrations with a
predicted value of zero. Work out the key components of the model and
then re-introduce the concentrations. The concentrations may no longer
have a predicted value of zero.

(5) If none of the above works, you could switch back to the code that
Vladimir suggested (see below). Because the minimum value of the
objective function will depend upon the 'new' value of log(F) (or
log(IPRED)), I would test smaller values (-3, -5, -7, -9, etc.) until the
change in the minimum value of the OBJ is not statistically significant for
2 successive choices (alpha less than the values used for covariate
analyses). If this is not done, then any change to the model that would
allow it to predict a small non-zero value for the observation could
result in a statistically significant change in the minimum value of the
objective function. This type of model behavior could lead one to think
that a covariate is statistically significant because the covariate changes
the predicted value for 1 observation, rather than because its inclusion
improves the predictions for the population in general.
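
For option (3), a minimal $ERROR sketch on the untransformed scale (the
SIGMA initial estimates below are placeholders, not values taken from this
thread):

$ERROR
IPRED = F
Y     = IPRED + IPRED*EPS(1) + EPS(2)   ; proportional + additive residual error

$SIGMA
0.04   ; proportional component (placeholder)
0.1    ; additive component, in concentration^2 units (placeholder)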

Regards,

Luann Phillips

/******Vladimir's original note***************/

> VPIOTROV@PRDBE.jnj.com wrote:
> 
> Luciane,
> 
> Bill is right in saying that the error structure should somehow reflect
> your data. All PK parameters are positive, and by coding
> interindividual variability like CL=THETA(.)*EXP(ETA(.)) and by using
> the FOCE method, we constrain CL to be positive. Similarly, concentration
> is positive, and the way to constrain it could be Y=F*EXP(EPS(1)).
> However, due to model linearization, NONMEM will treat this as
> Y=F*(1+EPS(1)). In order to properly constrain the model prediction,
> you have to apply a so-called transform-both-sides approach by taking
> the logarithm of the measured concentrations (the DV variable in your data
> set) and of the model prediction. In the log domain the exponential
> residual error becomes additive. The $ERROR block may look as follows:
> 
> $ERROR
>  IPRE = -5 ; arbitrary value; prevents the run from stopping due to a
> log-domain error
>  IF (F.GT.0) IPRE = LOG(F) ; note: in FORTRAN, LOG() means the natural
> logarithm, not the base-10 logarithm!
>  Y = IPRE + EPS(1)
> 
> BTW, the magnitude of SIGMA depends not only on the assay error.
> Nevertheless, if you know that the precision of the bioanalytical method
> decreases as the concentration drops below a certain level, you may
> consider a model with 2 EPS terms.
> 
> Best regards,
> Vladimir
_______________________________________________________

From:"Kowalski, Ken" 
Subject:RE: [NMusers] Log-transformation
Date:Fri, 4 Apr 2003 12:10:09 -0500

Hi Luann,

Another option is the following log-transformed model, which introduces an
additional theta to account for systematic bias at very low concentrations
and thereby resolves the log(0) problem.  This approach is suggested by Beal,
JPP 2001;28:481-504.

M = THETA(n)
Y = LOG(F+M) + (F/(F+M))*EPS(1) + (M/(F+M))*EPS(2)

When F>>M, the model collapses to the standard log-transformed model, with
EPS(1) the additive residual error on the log scale.  When M>>F (i.e., as F
goes to zero), the prediction goes to log(M) (i.e., the bias) and EPS(2)
becomes dominant, representing the residual variation at very low
concentrations.  A reasonable estimate of M should be around the QL
(quantification limit) or lower.
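
A minimal control-stream sketch of this error model (the THETA index and all
initial estimates below are placeholders; the lower bound of 0 on the $THETA
record keeps M non-negative, with the initial estimate set near the QL):

$ERROR
M     = THETA(6)                      ; bias term (placeholder index)
IPRED = LOG(F + M)
Y     = IPRED + (F/(F+M))*EPS(1) + (M/(F+M))*EPS(2)

$THETA
(0, 0.05)   ; M, initial estimate near the quantification limit (placeholder)

$SIGMA
0.04   ; EPS(1): residual error where F >> M (placeholder)
0.25   ; EPS(2): residual error at very low concentrations (placeholder)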

Ken
_______________________________________________________

From: VPIOTROV@PRDBE.jnj.com
Subject: RE: [NMusers] Log-transformation
Date: Wed, 9 Apr 2003 11:32:32 +0200

I don't think this can be considered a solution to the log(0) problem. I have
an example where data simulated with a 2-compartment disposition model and

$ERROR 
 IPRE = -5 
  IF(F.GT.0) IPRE = LOG(F) 
 Y = IPRE + ERR(1) 
 
were perfectly fitted with a 1-compartment model having

$ERROR 
 M = THETA(n) 
 Y = LOG(F+M) + (F/(F+M))*ERR(1) + (M/(F+M))*ERR(2) 

Best regards, 
Vladimir
_______________________________________________________

From:"Kowalski, Ken" 
Subject:RE: [NMusers] Log-transformation
Date:Wed, 9 Apr 2003 08:04:41 -0400

Vladimir,
 
I'm not sure what you are trying to say regarding the log(0) problem and the 2-comp simulation
vs. 1-comp model fitting.  These are two separate issues.

For the model that Beal proposed below, if F is a valid PK model prediction (F>=0) and M=THETA(n)
is constrained to be greater than 0 on the $THETA record, then it should avoid the log(0) problem.
I must confess that I don't have a lot of experience with this model; I was merely suggesting it to
Luann as another option, based on my reading of Beal's paper, for the situation where one wants to use
the log-transformed-both-sides approach but F=0 is a valid prediction, such as when one incorporates
a Tlag parameter and, for some t, t<Tlag.  It may be that in some cases NONMEM will estimate
THETA(n) at its lower bound of 0, in which case the log(0) problem would still exist.
 
Ken
_______________________________________________________

From: VPIOTROV@PRDBE.jnj.com
Subject: RE: [NMusers] Log-transformation
Date: Wed, 9 Apr 2003 14:17:27 +0200

Ken,
 
I only wanted to say that using that error model may result in structural model misspecification/confusion.
 
Best regards, 
Vladimir
_______________________________________________________

From: "Kowalski, Ken" 
Subject: RE: [NMusers] Log-transformation
Date: Wed, 9 Apr 2003 08:35:57 -0400

Vladimir,
 
I agree, which is why I commented that, when using such a model, a reasonable estimate of M should be
around the QL or lower.  I suspect that in your case, if you can observe a 2nd elimination phase (beta) above
the QL but only fit a one-compartment model, then M will be estimated to a value perhaps considerably
higher than the QL.  In this case I would be very much concerned about using this error model.  However,
if you have rich enough information to fit a 2-compartment model (i.e., dense sampling), it seems to me that,
although M may pick up some of the lack of fit when fitting a 1-compartment model, we would probably
still see some lack of fit in our diagnostic plots that would suggest fitting a 2-compartment model.
 
Regards,
 
Ken
_______________________________________________________