From email@example.com Thu Jul 20 10:22:38 1995
Subject: Fitting a Log-Normal 1st-Stage Error Model using PREDPP or PRED

To anyone in the know,
I have a population dataset which I want to analyse using a log-normal 1st-stage error model. So I have to log the data and log the kinetic model, which means writing a PREDPP or PRED subroutine. I would much prefer to use a PREDPP routine.
The data is multiple infusions, with columns id,dose,rate,time,conc. i.e. for the first individual
id D rate conc time
1 15.0 15.0 0.00 .
1 . . 7.75 9.60
1 15.0 15.0 8.00 .
1 15.0 15.0 15.00 .
1 15.0 15.0 23.50 .
1 . . 33.20 8.00
1 15.0 15.0 33.40 .
1 . . 35.00 14.80
I then log the concentrations 9.6,8.00,14.8 to create my new log-data file. I want to use a simple 1-compartment model.
I so far have:
K = CL/VD
E1 = EXP(-K*DOSE/RATE)
E2 = EXP(-K*TIME+K*DOSE/RATE)
EF = RATE*(1-E1)*E2/CL
F = LOG(EF)
Y = F+ERR(1)
but this doesn't even vaguely work.
Do I need to put zeros in the dose and rate columns for the concentration records? Will it assume that there are concentration values of zero for the dose records? Will contributions from all previous doses automatically be summed up for a particular concentration? Any suggestions or help would be very much appreciated.
Thank you very much in advance,
From alison Thu Jul 20 13:13:42 1995
James Bennett (e-mail: firstname.lastname@example.org) sent mail to nmusers today.
> I have a population dataset which I want to analyse using a log-normal
> 1st-stage error model. So I have to log the data and log the kinetic
> model which means writing a PREDPP or PRED subroutine. I would much
> prefer to use a PREDPP routine.
You can do this with PREDPP. Use the log of the concentrations in the data, and the usual control stream for PREDPP with ADVAN1 (or whatever model you choose). In the $ERROR block:
Y = LOG(F) + ERR(1)
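(Not part of the original exchange, but the equivalence being relied on here may be worth spelling out: an additive ERR(1) on the log scale is the same thing as a multiplicative, log-normal error on the concentration scale. A small Python sketch; the function name is mine, purely illustrative:)

```python
import math

def obs_from_eps(pred, eps):
    """Log-normal 1st-stage error model:
    log(conc) = log(pred) + eps, equivalently conc = pred * exp(eps)."""
    return pred * math.exp(eps)

# The residual on the log scale recovers the additive eps exactly:
pred, eps = 10.0, 0.3
obs = obs_from_eps(pred, eps)
log_resid = math.log(obs) - math.log(pred)   # equals eps
```

So fitting log(DV) with Y = LOG(F) + ERR(1) is exactly a log-normal error model on the original concentrations.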
> The data is multiple infusions, with columns id,dose,rate,time,conc.
> i.e. for the first individual
> id D rate conc time
> 1 15.0 15.0 0.00 .
> 1 . . 7.75 9.60
> 1 15.0 15.0 8.00 .
> 1 15.0 15.0 15.00 .
> 1 15.0 15.0 23.50 .
> 1 . . 33.20 8.00
> 1 15.0 15.0 33.40 .
> 1 . . 35.00 14.80
Clearly you meant to say:
> id D rate time conc
> I then log the concentrations 9.6,8.00,14.8 to create my new log-data file.
Ok so far.
......... his $PRED .......
> Do I need to put zeros in the dose and rate columns for the
> concentration records? Will it assume that there are concentration
> values of zero for the dose records? Will contributions from all
> previous doses automatically be summed up for a particular
> concentration?
These questions can be answered "yes" when PREDPP is used. Values of concentration (DV) on dose records are ignored. PREDPP computes the system at every point in time in a recursive manner, by advancing from the previous state of the system; hence, previous doses are "summed up".
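(Since the model is linear, PREDPP's recursive advance gives the same answer as explicit superposition over all prior infusions. A Python sketch of that superposition, using the dose records from the data above; the helper names and the CL/V values are mine, purely illustrative:)

```python
import math

def conc_one_infusion(t, t0, amt, rate, cl, v):
    """Concentration at time t from one zero-order infusion starting at t0
    (one-compartment model, linear elimination)."""
    k = cl / v
    dur = amt / rate                       # infusion duration
    if t <= t0:
        return 0.0
    te = min(t - t0, dur)                  # time infused so far
    c_end = rate / cl * (1.0 - math.exp(-k * te))
    # exponential decay after the infusion stops
    return c_end * math.exp(-k * max(0.0, t - t0 - dur))

def conc_total(t, doses, cl, v):
    """Superposition: contributions from all prior doses are summed,
    which is what PREDPP's recursive advance achieves for a linear model."""
    return sum(conc_one_infusion(t, t0, amt, rate, cl, v)
               for (t0, amt, rate) in doses)

# Dose records from the example data: (start time, AMT, RATE)
doses = [(0.0, 15.0, 15.0), (8.0, 15.0, 15.0), (15.0, 15.0, 15.0),
         (23.5, 15.0, 15.0), (33.4, 15.0, 15.0)]
c_at_first_obs = conc_total(7.75, doses, cl=1.0, v=10.0)
```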
When a $PRED block is used, the answers are "no." What you code is what you get. There is no way to code a "memory" of prior states or prior doses. (Stuart Beal sent a recent memo on effect-site concentrations in which he added such code to the $ERROR block, but it took many lines of verbatim code and expert knowledge of PREDPP -- don't try this at home, kids!)
With $PRED, each record must stand on its own. There is no such thing as separate dose vs. observation records, unless you yourself include:
a) MDV data item (set MDV=1 for doses)
b) Some code to remember a previous dose, which is too complicated to do in this example.
$PRED is really only suitable with one dose per subject. If you code your own PRED subroutine, you can code in memory of prior doses, but it gets complicated.
I don't know how to fix your $PRED block, or how to fix your data for use with $PRED. My advice is to use PREDPP, and tell it that D is the AMT, e.g.:
$INPUT ID AMT RATE TIME DV
I presume that your dose records have the usual PREDPP meaning; e.g., the record
1 15.0 15.0 23.50 .
indicates that a dose with amount 15 and rate 15 (hence duration 1) starts entering the system at time 23.50.
From alison Fri Jul 21 09:47:30 1995
TO: email@example.com, nmusers
Yesterday, James Bennett sent email to nmusers about fitting the log of the obs. to the log of the prediction. I sent a response to the whole group. Today he sent this response to me:
Many thanks for your reply. I assumed that you wouldn't be able to
implement the log model using the ADVAN subroutines. I'm using y =
log(f) + eps(1) with advan1 as suggested and the minimisation is
successful. However, there are several thousand "log: sing error"'s
appearing in the output file before the normal output results. Is
there anything you can suggest which might be causing this, or
any way in which I can stop these being written to the output file?
I guess I could have anticipated this question, having seen the data. The dose records are infusions; hence, for the initial dose record, the compartments are all zero and F is zero in $ERROR. The computation of LOG(0) is causing the error messages. They are benign, in that the value computed for Y on dose records is ignored by NONMEM. To eliminate the messages, here are two suggestions. I'm sending this message to all nmusers because the problem is of general interest, and this sort of "coding trick" is useful in many situations.
1) Tell PREDPP not to call the ERROR routine (i.e., not to evaluate the $ERROR statements) for dose records, using the (OBSERVATIONS ONLY) option:
$ERROR (OBSERVATIONS ONLY)
Y = LOG(F)+ERR(1)
This gets rid of the error messages, but has the disadvantage that, for dose event records, the value of PRED shown in table and scatters is F, rather than LOG(F). (This does not affect the fit for the reason stated above, but is a bit disconcerting.)
2) A "coding trick" can be employed:
QF=0
IF (F.EQ.0) QF=1
Y = (1-QF)*LOG(F+QF)+ERR(1)
Only with the first dose record is F=0. With this record, the calculation is
Y = 0 * LOG(1) = 0
With all other records, F > 0, and the calculation is
Y = 1 * LOG(F)
This will give you identical output to your original run, but without the error messages associated with LOG(0).
(Since LOG(1)=0, the term (1-QF) is not needed. I included it for general reasons: sometimes one adds a small "fudge factor" to avoid an arithmetic difficulty, and must then include a term to set the results back to what they should have been.)
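(The trick can also be checked numerically. A Python transcription of the $ERROR logic; the function name is mine:)

```python
import math

def safe_log_pred(f):
    """The (1-QF)*LOG(F+QF) trick from the $ERROR block:
    QF=1 when F==0, so the log term vanishes and no LOG(0) is attempted;
    QF=0 otherwise, giving the ordinary LOG(F)."""
    qf = 1.0 if f == 0.0 else 0.0
    return (1.0 - qf) * math.log(f + qf)

safe_log_pred(0.0)       # dose record: no LOG(0) error, result is 0
safe_log_pred(9.6)       # observation record: ordinary log of the prediction
```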