From: "Tsai, Max" max.tsai@spcorp.com
Subject: [NMusers] Problems with an apparent compiler-sensitive model
Date: Sat, 29 Jul 2006 18:08:18 -0400

I have been working on a popPK model: a basic, run-of-the-mill two-compartment model with
first-order absorption.  When I run the model in NONMEM on one machine, it converges, the
estimates appear reasonable, and the GOF plots look good.  When I attempt to run the same
model on a second machine, the run blows up and terminates.  Same control stream and same
dataset.  The processor and Windows operating system are different, but based on some of the
previous postings I have read in the NM archives, that does not appear to be important.  So
the only important difference is that one computer is running Digital Compaq Fortran v6.0 and
the other is running Digital Compaq Fortran v6.5.  Also, the compiler options for the two
systems are identical in NONMEM.  Has anybody had experience with a model that was sensitive
to the version of the Fortran compiler being used?  If so, what would you do in this
situation?  Obviously a model that is not reproducible under different conditions is not very
robust.  How would you handle a model like this?  Attempts to simplify or reparameterize the
model have not been successful in obtaining consistent results.  I hope that the collective
minds of this experienced group can shed some light on this issue.  Thanks.

-Max

Max Tsai, Ph.D.
Associate Principal Scientist (DMPK)
Schering-Plough Corporation
2015 Galloping Hill Road
K-15-2-2650
Kenilworth, NJ 07033-0530
phone: (908) 740-3911
fax:   (908) 740-2916
email: max.tsai@spcorp.com
_______________________________________________________

From: Nick Holford n.holford@auckland.ac.nz
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Sun, 30 Jul 2006 12:36:48 +1200

Max,

You are a bit vague about the difference in results. In my limited experience of comparing
df6 with df6.6 some years ago, I recall that the actual OBJ would be very similar, if not
identical, but there might be some small differences in the parameter estimates.

Indeed, in some cases NONMEM appears to toss a coin and decide the run was successful, and in
other cases it decides to terminate (usually with rounding errors). I and others have
investigated the termination criteria used by NONMEM and have not found any consistent
difference in parameter estimates between runs which NONMEM describes as 'successful' and
those which are described as 'terminated'.

So I suggest you look at the parameter estimates and model performance criteria (e.g. a
predictive check) to decide if there are really any important differences between the results
from the two compilers. Don't rely on the NONMEM message about minimization status.

On a somewhat different note: all the df compilers are now obsolete, i.e. it is impossible to
buy a license for them from Compaq/HP. You may wish to consider switching to an actively
supported compiler, e.g. Intel Visual Fortran or the GNU g95 compiler.

Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
_______________________________________________________

From: "Tsai, Max" max.tsai@spcorp.com
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Sun, 30 Jul 2006 15:23:43 -0400

Nick,
Let me provide some additional details for clarity.  I often encounter floating point error
messages for which no final parameter estimates are provided in the output for comparison.
Comparing the gradients of the two separate runs using the same initial estimates, both seem
to follow the same path initially.  After 10-20 iterations the paths diverge: one run appears
to reach a minimum and the other encounters problems with numerical integration.  In an
attempt to "guide" the model in the right direction, I used the final estimates of the
successful run as initial estimates for the run on the troublesome compiler, without any luck
(same floating point error).  If the parameter estimates are reasonable and model performance
(predictive check) seems adequate for this model, would you still use this model as a basis
for simulations, even though it does not run consistently?

-Max

Max Tsai, Ph.D.
Associate Principal Scientist (DMPK)
Schering-Plough Corporation
2015 Galloping Hill Road
K-15-2-2650
Kenilworth, NJ 07033-0530
phone: (908) 740-3911
fax:   (908) 740-2916
email: max.tsai@spcorp.com
_______________________________________________________

From: Leonid Gibiansky leonidg@metrumrg.com
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Sun, 30 Jul 2006 16:18:33 -0400

Max,
A floating point error message should be cured somehow. The two most frequent causes are huge
values in exponents and division by zero. The code should include a defense against each of
these causes (where you face the problem). For example, if your error model involves a term
like 1/F, then do something like

IPRED=F
IF(F.LT.0.0001) IPRED=0.0001

and use 1/IPRED in place of 1/F. This is just an example, but it often helps to look at each
division and make sure that the denominator cannot be zero or negative. The same applies to
the power operator: only positive values should be raised to a power. Even zero raised to
some power can give an error because of rounding (when zero is represented as a very small
negative number).
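
A minimal sketch of the same kind of guard for a power term (PWR, BASE and EFF are
hypothetical names, and the bound is arbitrary):

PWR=THETA(5)
BASE=A(2)/V                     ; quantity to be raised to a power
IF(BASE.LT.1E-10) BASE=1E-10    ; keep the base strictly positive
EFF=BASE**PWR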

Another common cause is an expression like

CL=TCL*EXP(ETA(1))

When ETA(1) is huge, it can result in an error. You can use

MYETA=ETA(1)
IF(ETA(1).GT.20) MYETA=20
CL=TCL*EXP(MYETA)


or

IF(ETA(1).GT.20) EXIT
CL=TCL*EXP(ETA(1))

I would go through the code with the aim of identifying exactly where the floating point
error can occur.

Leonid

_______________________________________________________

From: Nick Holford n.holford@auckland.ac.nz
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Mon, 31 Jul 2006 08:54:49 +1200

Max,

I agree with Leonid. If your runs crash because of floating point errors, then either your
compiler is not using suitable options (e.g. it should round underflows to zero) or your
model code is inappropriate.

What compiler options are you using?

Leonid suggests a workaround for avoiding divide by zero errors. One should also be aware of
the advice given by Stuart Beal on this issue:
http://www.cognigencorp.com/nonmem/nm/98feb112004.html

Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
_______________________________________________________

From: "Tsai, Max" max.tsai@spcorp.com
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Sun, 30 Jul 2006 23:03:56 -0400

Thanks for the suggestions for guarding against floating point errors.  I will re-examine the
code and make sure that large values cannot occur in the exponent.

For compiler options, I was originally using the following:
set f=df
set op=/optimize:1 /fpe:0

When different results were observed, I changed the compiler options to match those of the
successful run:
set f=fl32
set op=/Ox /Op
_______________________________________________________

From: "Bonate, Peter" Peter.Bonate@genzyme.com
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Mon, 31 Jul 2006 10:00:29 -0400

Max,
A few years ago I organized a study with about 30 different people.  I gave
them all 5 different models to run in NONMEM.  Some were overparameterized,
others were not.  I then compared their results.  The results were presented
at AAPS in 2002 or 2003, I don't remember exactly which year.  There was a
definite compiler-computer interaction.  The results were really
interesting.  People got different error messages.  Some models crashed,
others did not.  Even when the models minimized successfully, people got the
same parameter estimates but different standard errors.

I can send the poster as a PDF to anyone who wants it.

Pete Bonate


Peter L. Bonate, PhD
Genzyme Corporation
Senior Director, Pharmacokinetics
4545 Horizon Hill Blvd
San Antonio, TX  78229   USA
peter.bonate@genzyme.com
phone: 210-949-8662
fax: 210-949-8219
_______________________________________________________

From: Mark Sale - Next Level Solutions mark@nextlevelsolns.com
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Tue, 01 Aug 2006 14:57:59 -0700

Nick,
  Toss a coin is an interesting analogy.  Tom Ludden and I presented
some results at the ECPAG meeting where we randomly re-sequenced the
subjects in the data set - same data, same compiler, etc.  About half
the runs converged successfully, about half didn't.  Basically, a very
slow random number generator.

Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com

_______________________________________________________

From: Nick Holford n.holford@auckland.ac.nz
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Wed, 02 Aug 2006 10:46:12 +1200

Mark,

Do you have a URL pointing to these results you obtained with Tom?

It would be useful to add to the evidence that one should not rely on NONMEM to diagnose itself for successful minimization.

Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
_______________________________________________________

From: Leonid Gibiansky leonidg@metrumrg.com
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Tue, 01 Aug 2006 22:54:39 -0400

Nick,

That is pretty slim evidence given the importance of the claim. As far as I understood, the example was
carefully chosen to be unstable. Non-convergence in this (or any other) example could be an indication
of poor study design (too small a sample size, incorrect sampling times, insufficient number of samples,
unbalanced data, etc.) or of poor data quality, rather than proof that convergence is unimportant. Most
often, non-convergence indicates problems with the model or with the data, and it is not recommended to
dismiss it as a minor technicality.

Leonid 
_______________________________________________________

From: Nick Holford n.holford@auckland.ac.nz
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Wed, 02 Aug 2006 15:25:26 +1200

Leonid,

The not so slim evidence comes from three sources:

1. An investigation reported by Mark Gastonguay and Ahmed El-Tahtawy
"Minimization status had minimal impact on the resulting BS [bootstrap] parameter distributions"
http://metrumrg.com/publications/Gastonguay.BSMin.ASCPT2005.pdf

2. An investigation of 13 data sets reported by myself with Carl Kirkpatrick and Steve Duffull
"NONMEM Termination Status is Not an Important Indicator of the Quality of Bootstrap Parameter Estimates"
http://www.page-meeting.org/default.asp?abstract=992

3. The work described in this thread by Mark Sale and Tom Ludden in which NONMEM converged about 50% of the time with
identical data (but randomly re-ordered) and identical model. The parameter estimates were essentially identical 
whether or not NONMEM claimed to converge.

All of these experimental investigations have found that NONMEM's own diagnosis of successful minimization
is not a reliable indicator of the quality of the parameter estimates.

Contrary evidence that NONMEM is good at diagnosing the quality of the fit is not known to me. It seems to
me that support for NONMEM doing a good job here is itself based on "pretty slim evidence". If you wish to
make claims such as "non-convergence indicates problems with the model or with the data" then I ask you to
provide some concrete experimental evidence for this assertion  :-)

Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
_______________________________________________________

From: Mark Sale - Next Level Solutions mark@nextlevelsolns.com
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Wed, 02 Aug 2006 06:52:44 -0700

For those interested, I've posted a short paragraph and (more importantly) the Excel workbook
that randomly re-sequenced the subjects, together with the results.  The web site is:

http://www.nextlevelsolns.com/downloads.html

Click the link "Excel spreadsheet/macro examining effect of sequence of subjects on
convergence" near the bottom.


Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
_______________________________________________________

From: Leonid Gibiansky leonidg@metrumrg.com
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Wed, 02 Aug 2006 10:24:01 -0400

Nick,
The root of the non-convergence is the instability of the model-data combination.
For example, there was a PAGE poster by Lindbom et al.,

http://www.page-meeting.org/default.asp?abstract=997

which concluded: "The condition number of the covariance matrix of the original model is a strong predictor
of NONMEM stability in the bootstrap and case-deletion diagnostics."

You may choose to ignore this instability and get away with a reasonably good model, but this is not a
reason to dismiss a perfectly useful and very important diagnostic like the convergence status. Note also
that in your examples the authors started with models that had been studied to death to ensure that they
were the best possible models (for the data in hand). It was not clear whether the final models in those
examples converged, and the discussion was centered only on the bootstrap samples. Bootstrap samples, by
the nature of the problem (there are too many of them), cannot receive as much attention as the final
model. In any case, it is premature to conclude from these examples that convergence is not important.

If you would like confirmation of the statement "non-convergence indicates problems with the model or
with the data", try to estimate bioavailability, CL and V at the same time in the absence of the
reference formulation. You may end up with a model that passes even the most stringent scrutiny using
the predictive check procedure but is still deficient, and this deficiency is easily revealed by either
non-convergence or failure of the covariance step.
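
To make that concrete, here is a minimal sketch of the situation (a hypothetical one-compartment
oral model; the names and code are illustrative, not from Max's model):

$SUBROUTINES ADVAN2 TRANS2
$PK
KA = THETA(1)
CL = THETA(2)*EXP(ETA(1))   ; only CL/F1 is estimable from oral data alone
V  = THETA(3)*EXP(ETA(2))   ; only V/F1 is estimable
F1 = THETA(4)               ; absolute bioavailability - not separable

With oral data only, any value of F1 fits equally well once CL and V are rescaled, so the usual
remedy is to fix F1 to 1 and report the apparent parameters CL/F and V/F.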

Leonid 
_______________________________________________________

From: Mark Sale - Next Level Solutions mark@nextlevelsolns.com
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Wed, 02 Aug 2006 08:52:44 -0700

Leonid,
  I, for one, am not ready to discard convergence as a measure of model
"goodness".  I'm not even prepared to discard covariance success as a
measure of model "goodness" - everything else being equal, I will always
prefer a model that converges|does a covariance step over one that
doesn't.  But at the same time, I'd suggest that the covariance step or
convergence isn't required to deem a model useful (as we all know, they
are never correct), or even final.  The hard choices are when a model
that makes biological sense refuses to converge and simple, empirical
models do converge.
  Next, there are, I believe, three factors that contribute to "model
instability" (meaning that the variance-covariance matrix cannot be
inverted and/or the model fails the internal criteria for NONMEM to
declare it converged).  These three factors overlap greatly and are
very rarely black and white.  They are:
1.  Model-dependent non-identifiability - your example: you cannot,
regardless of the amount/quality of the data, identify CL, V and F with
only oral data.  (Although I had an example where NONMEM converged
successfully in such a case - supporting Nick's position.)  Essentially,
any value of F is consistent with the data (with corresponding values
for CL and V).  In this case, I believe that the condition number/rank
of the covariance matrix would indicate this (see the sketch after this
list).
2.  Data-dependent non-identifiability.  Imagine that you want to
estimate KA, but all of your data are in the terminal phase.  Basically,
any value of KA is consistent with the data (therefore the likelihood
of the data isn't affected by the value of KA; the objective function
surface is flat in that dimension).  This will be true regardless of
the quality of the data.  In this case as well, I believe that the
condition number/rank of the covariance matrix would indicate the problem.
Note the same root cause as model-dependent non-identifiability - any
value of a parameter is consistent with the data.
3.  Numerical problems.  A much more vague concept.  Partly related to
the "quality" of the data (model misspecification, residual error,
autocorrelation).  But it also includes true rounding errors, which are
most likely to be seen if we have a wide range of likelihoods between
subjects (e.g., some individuals have a lot of data, some have little
data).  This source of rounding error is probably small compared to
model misspecification, large residual error and autocorrelation.
Autocorrelation, BTW, is known to be very, very bad in linear
regression - resulting in bias in both parameter estimates and
estimates of SE.  I'm not aware that it has been studied much in
non-linear regression, but I suspect it is a significant problem, only
partly addressed by the L2 variable.
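
For what it's worth, a quick way to look at that condition number in NONMEM (assuming the
covariance step completes at all) is to request the eigenvalues of the correlation matrix
and take the ratio of the largest to the smallest; a very large ratio (above roughly 1000
is a common, if informal, warning sign) suggests near non-identifiability.  A minimal sketch:

$COV PRINT=E   ; also prints the eigenvalues of the correlation matrix

This is a rough diagnostic, not a pass/fail test.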



Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
_______________________________________________________

From: "Elassaiss - Schaap, J. (Jeroen)" jeroen.elassaiss@organon.com
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Thu, 3 Aug 2006 09:22:25 +0200

Mark,

Your notion about autocorrelation is interesting. I have applied some aspects of autocorrelation
- in linear cases - as part of my master's work in the field of (neuro)electrophysiology. It is a
useful concept in different ways but hardly ever applied in the (pop)PK-PD world. At PAGE 2006
one poster was presented that dealt with it: http://www.page-meeting.org/?abstract=933
Davidian & Giltinan furthermore discuss several examples treating autocorrelation problems. The
latter also express a number of cautions regarding estimation of autocorrelation parameters (e.g.
p. 133 of the CRC reprint, 1998; the indomethacin PK example). I am not aware of any other
non-linear analyses incorporating autocorrelation.

One obviously might take signs of autocorrelation as a cue to investigate its root cause rather
than try to model it. In any case, specific graphical analysis is needed to start with before it
might be picked up at all. And I confess that I have never attempted to go there; I also would not
know what kind of plot to make: observations or residuals, separate plots or a phase plot, lagged
or time-shifted, smoothed or model-based, or which combinations thereof.

Another note: autocorrelation can also be used to correct for oversampling, essentially decreasing
the degrees of freedom.

Regarding the point you make about autocorrelation in the context of rounding errors, can you
provide some more details or examples? I would have assumed (without any first-hand experience)
that severe problems arise with indirect PD behaviour in combination with second- or higher-order
autocorrelation, i.e. irregular oscillations in the same frequency domain.

Excuse me for drifting way off topic.

Best regards,
Jeroen
_______________________________________________________

From: Nick Holford n.holford@auckland.ac.nz
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Thu, 03 Aug 2006 20:41:05 +1200

Mark,

Thanks for a good summary of the problem. Let me make it clear that I, too, prefer to see that
a model converges and the covariance step completes. But the absence of these features does not
make the model results bad.

I agree with you that it is not uncommon to find a model that makes biological sense failing
to converge while some empirical function allows NONMEM to converge.

I prefer biology over empiricism. So today I more or less ignore NONMEM's diagnosis of "success"
and prefer to judge a model on its performance, e.g. with a predictive check.

Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
_______________________________________________________

From: "James G Wright" james@wright-dose.com
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Thu, 3 Aug 2006 12:19:46 +0100

I think there are two questions in this discussion:

1)  Is an estimated covariance matrix a good way to look at the behaviour of
maximum likelihood estimates, i.e. to calculate confidence intervals?

Estimated covariance matrices are quick and useful descriptors of local
behaviour.  Likelihood profiling, bootstrapping and MCMC are some
(computationally expensive) alternatives, but could provide richer insight.
Since I caught a nasty dose of Bayesian-ism, I am not just interested in the
local behaviour around my current estimates, but in the entire likelihood
surface.

2)  Is NONMEM's covariance step good at calculating covariance matrices
and/or diagnosing problems?

I don't believe NONMEM is good at this particular task, partly because
NONMEM works with the likelihood surface defined in terms of all thetas,
etas and epsilons.  This gives huge (and unsimplifiable) n x n matrices
that are difficult for computers (or people) to invert, particularly if any
of the n(n-1)/2 correlations strays close to 1.  In particular, etas are
often poorly determined and their impact is "linearized" in the NONMEM
likelihood surface.  Please note the use of the word "believe" at the start
of this paragraph - it implies I have no actual "proof".
  In my experience, inconsistent error messaging is commonplace in more
complex NONMEM models, and the NONMEM user requires a degree of cynicism to
proceed effectively.  I have also experienced some moderate
compiler/platform sensitivity with NONMEM - the existence of this
implementation variation may suggest that NONMEM's algorithms are not
well insulated from rounding errors.  However, the truth is that platform
variation is typical of computationally sophisticated applications.
Matrix inversion involves lots of division by (very) small numbers, and this
amplifies errors in those small numbers.
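(As a toy illustration, not tied to NONMEM's actual arithmetic: if a pivot
x = 1e-6 carries an absolute rounding error of only 1e-8, then 1/x = 1e6
carries an absolute error of roughly 1e4, because the error in 1/x scales
as the input error divided by x squared.)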

There are many tricks that can improve model stability, such as
reparameterization and judiciously removing etas, but I have certainly
encountered a few models that just won't be persuaded to "converge" by
NONMEM's definition without mortally wounding their intellectual basis.
This can be the case despite the "non-convergent" models being excellent
descriptions of the data and well characterized in terms of the available
data, as demonstrated by likelihood profiling or by using alternative software.
The risk of "pseudo" error messages increases as model complexity increases,
so it tends to be the most realistic and biologically insightful models that
are selected against by NONMEM pseudo-error messages.

James G. Wright PhD,
Scientist,
Wright Dose Ltd,
www.wright-dose.com
Tel: UK (0) 772 5636914
_______________________________________________________

From: MANOJ KHURANA manoj2570@yahoo.com
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Thu, 3 Aug 2006 06:45:56 -0700 (PDT)

Hi Nick,

Could you elaborate on the last part of your e-mail ("So today I more or less ignore NONMEM's
diagnosis of 'success' and prefer to judge a model on its performance e.g. with a predictive
check")?  Specifically, how can we perform a predictive check in cases where the model fails
to converge and the covariance step also fails?
 
Thanks in advance
Manoj Khurana 
_______________________________________________________

From: Mark Sale - Next Level Solutions mark@nextlevelsolns.com
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Thu, 03 Aug 2006 06:57:53 -0700

At the risk of annoying the majority of people on this user group, a few
more comments.  I think we need to step back even farther and ask the
critical question: we are looking for usefulness, not correctness.  So,
what is the model to be used for?  Increasingly - in fact almost
exclusively in my recent experience - we want to simulate from these
models, and in fact extrapolate the models.  We extrapolate across
dose (higher doses), duration (longer), populations (older, younger)
and even diseases - sometimes even species.  If we want to
extrapolate/simulate, why would we care about statistical properties of
a model (like the condition number/rank of the variance matrix)?
What we should care (mostly) about is:

1. Is the model biologically plausible?
2. Are simulations from the model consistent with observed data?
(predictive check|posterior predictive check)

If you want to test/generate hypotheses, then this doesn't apply, but I
actually haven't done that in some time.  For hypothesis testing, we do
need to live in the world of statistics.

But, in fact, statistics do matter if you want to extrapolate/simulate.
It is easy to show that an overparameterized model is dangerous to
extrapolate from, even if it is entirely consistent with the data from
which it was derived.

So, back to my point: for extrapolation/simulation (and the two go
together - why would you bother simulating data that you already have
real data for?) my first priorities are biological plausibility and
predictive checks, as well as reality checks for extrapolations.  But
I'd also really, really like the model to converge, and I'd like it to
pass a covariance step as well.  However, with enough testing, I can
live without convergence.  I can't live with a biologically implausible
model (you need to either change the model or rethink the biology), or
a model that cannot reproduce the data from which it was derived.



Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
_______________________________________________________

From: Nick Holford n.holford@auckland.ac.nz
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Fri, 04 Aug 2006 09:30:41 +1200

Manoj,

I was careful in my comments to say 'predictive check' (PC), NOT 'posterior predictive check' (PPC).
The PPC, as described by Gelman et al. (1996), involves simulation using the posterior distribution
of the parameter estimates.

The PPC is tricky to do using NONMEM (Yano et al. 2001) even if one has an estimate of the
variance-covariance of the estimate (from $COV or bootstrap). Yano et al. demonstrated that a
degenerate PC (DPC) could be equivalent to a PPC (with their specific example). The DPC involves
simulation using the point estimates of the model without consideration of uncertainty.

My (limited) experience of using a DPC for visual evaluation of model performance (the visual
predictive check or VPC) has shown that a relatively quick and simple check can be done which can
reveal major problems with a model (Holford 2005). More sophisticated methods have been described
by Mentre et al. (2006).

These simulation based methods of evaluating model performance do not necessarily require the 
availability of the variance-covariance matrix of the estimate. They can be performed with any model
regardless of NONMEM's termination status.
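
As a minimal sketch of such a simulation run (the seed, estimates and number of replicates
below are arbitrary placeholders): replace $ESTIMATION with a $SIMULATION record and set the
parameters to the final point estimates, whatever the termination status was:

$THETA 0.5 10 50                 ; hypothetical final point estimates
$SIMULATION (20060804) ONLYSIMULATION SUBPROBLEMS=100

The simulated observations can then be summarized (e.g. the median and a 90% interval at each
sampling time) and overlaid on the raw data for a visual predictive check.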

I consider standard errors to be almost worthless for evaluating model performance. They can give
some crude clue for parameters that are not well identified by the design but do not help diagnose
model deficiencies for predictions. Indeed having a few poorly identified parameters may not harm the
predictive performance. It is only a desire for parsimony that is affected in this situation.

If you are really and truly interested in the parameter estimate itself (rather than the model predictions)
then bootstrap estimates of parameter uncertainty are probably more reliable than predictions of confidence
intervals using asymptotic estimates of standard errors and the often invalid assumption of normality (see
Matthews et al 2004 Table 5 for an example of the discrepancies).

Nick


Gelman A, Meng X-L, Stern H.
Posterior predictive assessment of model fitness via realized discrepancies.
Statist Sinica. 1996;6:733-807.

Holford NHG. The Visual Predictive Check - Superiority to Standard Diagnostic (Rorschach) Plots.
http://www.page-meeting.org/default.asp?abstract=738  PAGE 2005; Pamplona, 2005.

Matthews I, Kirkpatrick C, Holford NHG. Quantitative justification for target concentration
intervention - Parameter variability and predictive performance using population pharmacokinetic
models for aminoglycosides. British Journal of Clinical Pharmacology. 2004;58(1):8-19.

Mentre F, Escolano S. Prediction discrepancies for the evaluation of nonlinear mixed-effects
models. J Pharmacokinet Pharmacodyn. 2006 Jun;33(3):345-67.

Yano Y, Beal SL, Sheiner LB. Evaluating pharmacokinetic/pharmacodynamic models using the posterior
predictive check. J Pharmacokinet Pharmacodyn. 2001 Apr;28(2):171-92.
_______________________________________________________
From: Peter L. Bonate peter.bonate@genzyme.com
Subject: Re: [NMusers] Problems with an apparent compiler-sensitive model
Date: Mon, 31 Jul 2006 11:41 am


For those who are interested in the poster on NONMEM compilers, you can find it at
 
http://www.aapspharmaceutica.com/inside/focus_groups/ModelSim/index.asp
 
Thanks,
 
pete
 
 
Peter L. Bonate, PhD
Genzyme Corporation
Senior Director, Pharmacokinetics
4545 Horizon Hill Blvd
San Antonio, TX  78229   USA
peter.bonate@genzyme.com
phone: 210-949-8662
fax: 210-949-8219


_______________________________________________________