From: Leonid Gibiansky
Subject: [NMusers] model diagnostics
Date: Thu, 01 May 2003 13:07:17 -0400

Dear All,
It has been unusually quiet on the list recently, so let me suggest an idea
for discussion:

There were recent messages that mentioned model diagnostics other than the
usual estimates and standard errors of the estimates. It would be good to
have expert advice on what to do with
COVARIANCE MATRIX OF ESTIMATE
CORRELATION MATRIX OF ESTIMATE
INVERSE COVARIANCE MATRIX OF ESTIMATE

What do they offer on top of what we recover from the estimates and their
standard errors? Do you always check them? Are there any criteria for what
is good and what is bad? Any practical advice on what to do if something
looks bad?

Thanks,
Leonid
_______________________________________________________

From:"Bachman, William" 
Subject: [NMusers] from Ken Kowalski re: model diagnostics
Date:Fri, 9 May 2003 08:32:40 -0400

Leonid,

Attached please find a 4-page discussion on the $COV step that I excerpted
from a best practices document that my company is preparing.  Hopefully
you'll find this useful.

Ken
COV Step Estimation
The $COV statement is used to estimate the asymptotic variance-covariance
matrix of the estimates of θ, Ω, and Σ.  The default matrix used in NONMEM
is the R⁻¹SR⁻¹ matrix, where R denotes the Hessian matrix (i.e., the matrix
of second derivatives with respect to the parameters evaluated at the final
estimates) and S denotes the cross-product gradient matrix (i.e., the matrix
obtained from the cross product of the gradient vector and its transpose,
where the gradient is the vector of first derivatives evaluated at the final
estimates).  When the random effects (η's and ε's) are normally distributed,
the inverses of both the R and S matrices are consistent estimators of the
covariance matrix of the parameter estimates.  However, in the presence of
non-normal random effects, the R⁻¹SR⁻¹ matrix is a more robust estimator of
the covariance matrix (1).  For categorical and other non-continuous data
(i.e., when using the LIKELIHOOD option with the $EST statement), consistency
(asymptotic unbiasedness) and other optimal properties of the estimates are
highly dependent on the assumption of normality of the random effects.  For
these types of data, the MATRIX=R option should be used with the $COV
statement.
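
As a small illustration of the arithmetic behind the R⁻¹SR⁻¹ ("sandwich")
estimator described above, here is a rough numpy sketch; the R and S
matrices below are invented 2x2 examples, not output from any NONMEM run:

import numpy as np

# Invented 2x2 R (Hessian) and S (cross-product gradient) matrices,
# standing in for what NONMEM evaluates at the final estimates.
R = np.array([[10.0, 2.0],
              [ 2.0, 8.0]])
S = np.array([[ 9.0, 1.5],
              [ 1.5, 7.0]])

R_inv = np.linalg.inv(R)

# Default NONMEM covariance estimate: the sandwich R^-1 * S * R^-1
cov_sandwich = R_inv @ S @ R_inv

# Covariance estimate based on the Hessian alone (cf. the MATRIX=R option)
cov_R = R_inv

# Standard errors are the square roots of the diagonal elements
print(np.sqrt(np.diag(cov_sandwich)))
print(np.sqrt(np.diag(cov_R)))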

It is good practice to only report out models for which the COV step runs
successfully.  COV step failures should not be ignored as they usually imply
ill conditioning of some aspect of the model.  Two common warning messages
that NONMEM reports out when the COV step fails are:  1) the R or S matrix
is singular, and 2) the R matrix is non-positive semi-definite.  The
singularity condition implies that an infinite set of parameter solutions
(estimates) can result in the same OFV, suggesting that the likelihood
surface is very flat near the final estimates.  When the likelihood surface
is extremely flat, rounding errors may occur and the estimation may not
converge to a final solution.  The non-positive semi-definite condition
suggests that the stationary point is not a global minimum but rather a
saddle point, in which case the OFV can go to ±∞ in some direction of the
parameter space.  Both of these conditions usually imply that the model is
over-parameterized in some aspect related to θ, Ω, or Σ.  Regardless of
which warning message is reported, the solution to the ill conditioning or
over-parameterization is the same:  simplification of the structural and/or
statistical aspects of the model is usually necessary.
Although the reported OFV may not be a global minimum when the COV step
fails, in practice it is reasonable to identify a more parsimonious (fewer
parameters) model with a successful COV step that has an OFV close to that
reported for the over-parameterized model.  In some cases, the more
parsimonious model may lead to a lower OFV than the over-parameterized model
even when the two models are hierarchical.  This can happen when the
over-parameterized model converges to a local optimum rather than a global
minimum.
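
To make the two failure modes concrete, the following numpy sketch, using an
invented and deliberately rank-deficient R matrix, shows how singularity and
loss of positive semi-definiteness can be recognized from the eigenvalues of
a symmetric matrix:

import numpy as np

# Invented Hessian-like matrix, purely for illustration; the second row is
# half the first, so the matrix is singular.
R = np.array([[4.0, 2.0, 0.0],
              [2.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

eigvals = np.linalg.eigvalsh(R)   # eigenvalues of a symmetric matrix

if np.any(np.isclose(eigvals, 0.0)):
    print("R is (numerically) singular: flat likelihood direction(s)")
if np.any(eigvals < 0.0):
    print("R is not positive semi-definite: possible saddle point")
print("eigenvalues:", eigvals)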

When the COV step runs successfully, it is also good practice to inspect the
correlation matrix of the estimates of θ, Ω, and Σ reported in the NONMEM
output to ensure that the fitted model is stable.  When one or more
correlations are near ±1, the model may still be over-parameterized and
unstable.  A model may be unstable if the COV step fails for one set of
'reasonable' starting values but runs successfully with another set of
starting values.  In this setting, one or more correlations may be near ±1
(a near-singular condition) but numerically nonsingular, so that the COV
step runs successfully.  When this occurs, the standard errors for one or
more of the parameters are usually quite large relative to their parameter
estimates.
For these reasons, simply changing starting values to obtain a successful
COV step usually does not resolve the ill conditioning of the model.  Bates
and Watts (2) suggest that when one or more correlations exceed 0.99 in
absolute value, the model is over-parameterized.  Given the large number of
parameters often involved in fitting population models, this guidance
suggests that further evaluation, and possible simplification of the model
to guard against over-parameterization, should be considered when one or
more correlations exceed 0.95 in absolute value.  It should also be noted
that ill conditioning can exist without any single correlation exceeding
0.95.  An
alternative approach for assessing ill conditioning is to inspect the
eigenvalues of the covariance matrix.  This can be performed by using the
PRINT=E option on the $COV statement.  Specifically, the ratio of the
largest eigenvalue to the smallest eigenvalue, referred to as the condition
number, is a measure of ill conditioning.  A condition number exceeding 1000
is indicative of severe ill conditioning (3).  This document recommends
routine use of the PRINT=E option and calculation of the condition number.
When the condition number is high (>1000), there is often a cluster of
relatively high correlations even if no single correlation exceeds 0.95.
The condition number provides a simple statistic for assessing the
degree of ill conditioning, while inspection of the correlation matrix can
provide insight into the source of the ill conditioning.
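
As a concrete illustration of the condition-number check, here is a numpy
sketch using an invented covariance matrix of the estimates; in practice the
eigenvalues reported via the PRINT=E option can be used directly:

import numpy as np

# Invented covariance matrix of the estimates (stand-in for the
# COVARIANCE MATRIX OF ESTIMATE block of the NONMEM output).
cov = np.array([[0.040, 0.019, 0.001],
                [0.019, 0.010, 0.000],
                [0.001, 0.000, 0.250]])

eigvals = np.linalg.eigvalsh(cov)   # eigenvalues of a symmetric matrix
condition_number = eigvals.max() / eigvals.min()

print("eigenvalues:", eigvals)
print("condition number: %.0f" % condition_number)
if condition_number > 1000:
    print("condition number > 1000: severe ill conditioning")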

When over-parameterization exists, it may be that the over-parameterized
model is scientifically plausible but the data do not support estimating all
of the parameters.  For example, an Emax model may be postulated but the
concentration-response relationship may not exhibit sufficient curvature
(plateauing) to accurately estimate Emax and EC50.  In this situation, the
fixed-effect estimates of Emax and EC50 may be highly correlated.  A
simplification of the model assuming a linear concentration-response
relationship may adequately describe the data, leading to a more stable
model.  Another example, involving ill conditioning due to the specification
of random effects, is the situation where an Emax dose-response model is fit
to data arising from a parallel-group dose-ranging trial.  In this setting,
it may be unrealistic to expect stable estimation of interindividual random
effects on both Emax and ED50, as there is little information on
dose-response within an individual (i.e., each individual receives only one
dose level).
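
The Emax/EC50 example can also be seen numerically.  The short sketch below,
with made-up parameter values, shows that when all observed concentrations
are well below EC50, the Emax model is nearly indistinguishable from a
straight line with slope Emax/EC50, so the data inform only that ratio and
the two estimates become highly correlated:

import numpy as np

# Made-up parameters, purely for illustration
emax, ec50 = 100.0, 50.0
conc = np.linspace(0.0, 5.0, 6)           # concentrations far below EC50

full_emax = emax * conc / (ec50 + conc)   # Emax model predictions
linear    = (emax / ec50) * conc          # linear approximation, slope Emax/EC50

print(np.round(full_emax, 2))   # [0.   1.96 3.85 5.66 7.41 9.09]
print(np.round(linear, 2))      # [ 0.  2.  4.  6.  8. 10.]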

To guard against over-parameterization, inspection of the correlation matrix
of the estimates (θ, Ω, and Σ) from the NONMEM output should be performed at
all stages of model development.  Specifically, the stability of the base
model should be investigated prior to covariate model building.  If the COV
step runs successfully but one or more pairwise correlations are near one,
say, between Emax and EC50, then the introduction of covariates on Emax and
EC50 may exacerbate the instability of the model, perhaps leading to
convergence problems (rounding errors) or COV step failures.  One may
falsely conclude that the instability is due to the inclusion of covariate
effects when in fact the instability is associated with the structural model
and imprecision in the estimates of the structural parameters (i.e.,
base-model θ's).
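
One way to make this check routine is to screen the CORRELATION MATRIX OF
ESTIMATE for large pairwise correlations at each model-building step.  A
rough sketch, using hypothetical parameter names and an invented correlation
matrix, might look like:

import numpy as np

# Hypothetical parameter names and an invented correlation matrix of the
# estimates (the CORRELATION MATRIX OF ESTIMATE block of the output).
names = ["EMAX", "EC50", "OMEGA(1,1)"]
corr = np.array([[1.00, 0.97, 0.10],
                 [0.97, 1.00, 0.12],
                 [0.10, 0.12, 1.00]])

# Report every pair whose correlation exceeds 0.95 in absolute value.
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if abs(corr[i, j]) > 0.95:
            print("|corr(%s, %s)| = %.2f exceeds 0.95"
                  % (names[i], names[j], abs(corr[i, j])))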

In general, model development should not be guided solely by the models
that converged and had successful COV step estimations, without attempting
to understand the underlying cause of the over-parameterization associated
with the models that had convergence and/or COV step failures.
Understanding the nature of the over-parameterization when it occurs can be
helpful in postulating a more parsimonious model.

References
1       Beal, S.L., and Sheiner, L.B.  NONMEM Users Guide - Part II: Users
Supplemental Guide.  NONMEM Project Group: University of California, San
Francisco, 1988, p 21.

2       Bates, D.M., and Watts, D.G.  Nonlinear Regression Analysis and its
Applications.  Wiley, NY, 1988, pp. 90-91.

3       Montgomery, D.C., and Peck, E.A.  Introduction to Linear Regression
Analysis.  Wiley, NY, 1982, pp. 301-302.



_______________________________________________________

From: "Kowalski, Ken" 
Subject: RE: [NMusers] from Ken Kowalski re: model diagnostics
Date: Fri, 9 May 2003 09:12:41 -0400

Bill,

Thanks for converting my Word document to email text so that it could be
distributed to NMusers.

Ken
_______________________________________________________