From: "Luciane Velasque" 
Subject: [NMusers] model for OMEGA and SIGMA
Date: Fri, 7 Feb 2003 14:45:31 -0200

Dear Users,


Can I use an additive model for interindividual error, CL = TVCL + ETA(n), together with a proportional
structure for intraindividual error, Y = F*(1+EPS(1))?


What are the implications of doing that?  And how do I calculate the CV for OMEGA and SIGMA?


Thanks in advance

From: "Bachman, William" 
Subject: RE: [NMusers] model for OMEGA and SIGMA
Date: Fri, 7 Feb 2003 13:25:38 -0500


Basically you can code anything you want (you're not limited to additive,
proportional, exponential, etc.).

But the idea is that your error structure should reflect your data!

So, typically we use a proportional or exponential inter-individual error
model because PK parameters like V and CL are often log-normally distributed.
By the same token, if you know something about the residual error
distribution, e.g. from the characteristics of your assay, you can make
some assumptions about what the residual error model should be.  As an
example, you might have an assay where the error is proportional over most
of the range of concentrations but constant near the limit of detection.
In that case, an additive plus proportional residual error model might be
an appropriate choice:  Y = F + F*ERR(1) + ERR(2)

Finally, fit your model to your data and test your assumptions.
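To answer the CV part of the original question: under the usual first-order approximation, a proportional or exponential model gives CV ~ sqrt(variance), an additive ETA gives a CV that depends on the typical value, and an exactly log-normal parameter has CV = sqrt(exp(OMEGA)-1). A minimal Python sketch (the numeric values are illustrative only, not from this thread):

```python
import math

def cv_additive(omega2, typical_value):
    """CV when the parameter model is additive: P = TV + ETA(n)."""
    return math.sqrt(omega2) / typical_value

def cv_proportional(omega2):
    """First-order approximation for proportional/exponential models."""
    return math.sqrt(omega2)

def cv_lognormal(omega2):
    """Exact CV when the parameter is log-normal: P = TV*EXP(ETA)."""
    return math.sqrt(math.exp(omega2) - 1.0)

# hypothetical estimates, for illustration only
tvcl = 5.0      # typical clearance (L/h)
omega2 = 0.09   # OMEGA estimate (variance of ETA)

print(cv_additive(omega2, tvcl))   # ~0.06, i.e. 6% CV
print(cv_proportional(omega2))     # ~0.30, i.e. 30% CV
print(cv_lognormal(omega2))        # ~0.307, close to 30% for small OMEGA
```

Note that for small variances the proportional approximation and the exact log-normal CV nearly coincide, which is why sqrt(OMEGA) is the figure usually reported.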

Subject: RE: [NMusers] model for OMEGA and SIGMA
Date: Fri, 7 Feb 2003 21:59:08 +0100


Bill is right in saying that the error structure should somehow reflect your data. All PK
parameters are positive, and by coding interindividual variability as CL=THETA(.)*EXP(ETA(.))
and by using the FOCE method we constrain CL to be positive. Similarly, concentration is positive,
and the way to constrain it could be Y=F*EXP(EPS(1)). However, due to model linearization, NONMEM
will treat this as Y=F*(1+EPS(1)). To properly constrain the model prediction you have to
apply the so-called transform-both-sides approach: take the logarithm of the measured concentrations
(the DV variable in your data set) and of the model prediction. In the log domain the exponential residual
error becomes additive. The $ERROR block may look as follows:

 IPRE = -5 ; arbitrary placeholder; prevents the run from stopping on a log-domain error
 IF (F.GT.0) IPRE = LOG(F) ; note: in FORTRAN, LOG() is the natural logarithm, not base 10!
 Y = IPRE + EPS(1)

BTW, the magnitude of SIGMA depends not only on the assay error. Nevertheless, if you know
that the precision of the bioanalytical method decreases as the concentration drops below a certain
level, you may consider a model with 2 EPS terms.
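The point that an exponential residual error becomes additive in the log domain can be checked numerically, since log(F*exp(EPS)) = log(F) + EPS holds exactly. A small Python sketch with hypothetical values:

```python
import math
import random

random.seed(1)
f = 12.0      # hypothetical model prediction (concentration)
sigma = 0.2   # hypothetical SD of EPS

# exponential residual error in the concentration domain...
eps = random.gauss(0.0, sigma)
y = f * math.exp(eps)

# ...is exactly additive in the log domain:
assert abs(math.log(y) - (math.log(f) + eps)) < 1e-12
```

This identity is why taking the log of both DV and F turns the exponential model into the simple additive form Y = IPRE + EPS(1) shown above.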

Best regards, 

From: Luann Phillips 
Subject: Re: [NMusers] model for OMEGA and SIGMA
Date: Tue, 11 Feb 2003 10:13:10 -0500

NM Users,

I would like to offer an alternative method for coding the
Y=F*EXP(EPS(1)) error model using the 'transform-both-sides' approach.


FLAG=0
IF(AMT.NE.0)FLAG=1  ;FLAG=1 on dosing records only

IPRED=LOG(F+FLAG)   ;transform the prediction to the log domain:
                    ; IPRED=LOG(F) for concentration records and
                    ; LOG(F+1) for dose records
W=1                 ;additive error model
Y=IPRED+W*EPS(1)


This will allow NONMEM to continue running when a predicted
concentration of 0 occurs on any dosing record.  Since predictions for
dose records do not contribute to the minimum value of the objective
function this change to the F (or IPRED) does not influence the outcome
of the analyses.  However, if code is used to alter the predicted
concentration on a PK sample record the minimum value of the objective
function is changed and its value can be highly dependent upon what
value of IPRED is chosen as the 'new' predicted concentration.

Using the above code, if NONMEM predicts a concentration of 0 on a PK
sample record the run will still terminate (on some systems) with errors
because LOG(0) is negative infinity. In this case, the patient ID and
the observation within that patient for which the error occurred will be
reported in the NONMEM output.
If this occurs, you may want to consider the following options:

(1) Check the dosing and sampling times and the dose amounts preceding
the observation for errors. Is it reasonable that the patient would have
an observable concentration, given the time elapsed since the last dose?
(2) Is NONMEM predicting a zero concentration because of a modeled
absorption lag time? Consider removing the absorption lag time or using
a MIXTURE model to allow some subjects to have a lag time and others to
have a lag time of zero. 

(3) Test a combined additive + constant CV error model (Y= F + F*EPS(1)
+ EPS(2)) using DV=original concentration instead of its logarithm.

(4) Consider temporarily excluding measured concentrations with a
predicted value of zero. Work out the key components of the model and
then re-introduce the concentrations. The concentrations may no longer
have a predicted value of zero.

(5) If none of the above works, you could switch back to the code that
Vladimir suggested. Because the minimum value of the objective function
will depend on the 'new' value of log(F) (or log(IPRED)), I would test
successively smaller values (-3, -5, -7, -9, etc.) until the change in the
minimum value of the objective function is not statistically significant
for 2 successive choices (using an alpha smaller than the values used for
covariate analyses).  If this is not done, any change to the model that
allows it to predict a small non-zero value for the observation could
produce a statistically significant change in the minimum value of the
objective function. This type of model behavior could lead one to think
that a covariate is statistically significant because it changes the
predicted value for 1 observation, rather than because its inclusion
improves the predictions for the population in general.


Luann Phillips