From: stuart@c255.ucsf.EDU (S.Beal)
Subject: Computation of CV's from OMEGA
Date: 26 Sep 1997 17:28:44 -0400

Often, one writes e.g. CL=THETA(1)*EXP(ETA(1)) and then computes an estimated CV by sqrt(omega), where omega is the estimate of the variance of ETA(1). Nick Holford has asked that I comment on this procedure.

The procedure works fine when omega is sufficiently small, say 0.15 or less. This assertion rests in part on the fact that when ETA(1) is normally distributed (when, therefore, CL is log normally distributed), (true) omega is precisely related to (true) CV by CV=sqrt(exp(omega)-1). So when omega=0.15, CV=.402, while sqrt(omega)=.387.
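The two formulas in the paragraph above can be compared numerically. The following Python sketch (the values of omega are just the ones mentioned in the text) computes the "apparent CV" sqrt(omega) next to the exact lognormal CV sqrt(exp(omega)-1):

```python
import math

def apparent_cv(omega):
    # the common shortcut: treat sqrt(omega) itself as the CV
    return math.sqrt(omega)

def lognormal_cv(omega):
    # exact CV of CL = THETA*exp(ETA) when ETA ~ N(0, omega)
    return math.sqrt(math.exp(omega) - 1.0)

for omega in (0.05, 0.15, 0.36):
    print(omega, apparent_cv(omega), lognormal_cv(omega))
```

For omega = 0.15 this reproduces the 0.387 vs 0.402 comparison above; for omega = 0.36 the gap has grown to roughly 0.60 vs 0.66.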

Often the estimated CV is no more than about 40%, but often it is somewhat larger. In that case, when the estimate is being interpreted in a somewhat qualitative manner, it may not matter much whether it is actually, e.g., 60% or 65% (omega=0.36); and when, as is likely, the statistical uncertainty in omega itself is large, this discrepancy, due simply to the difference between the two formulas, is relatively unimportant. If, though, the statistical uncertainty is very small and one wants to be very accurate about the CV, one might take the view that the eta distribution is normal and use the CV given by the lognormal formula (65%).

But the question becomes: should we assume that eta is normally distributed?

With NONMEM, use of CL=THETA(1)*EXP(ETA(1)) does not mean that the normality assumption is being made. Often, we really are only expressing ETA on the log scale, but not assuming it to be normally distributed. (Many discussions state that ETA is assumed to be normal, but these are often misleading. While there are sometimes good reasons for making this assumption, the NONMEM methodology largely avoids the assumption.) Since we do not need to make the normality assumption, it does not follow that the "extra accuracy" given by the lognormal formula really represents extra accuracy; it can just as well be garbage. Suppose we want to really do the right thing, and CV is large (perhaps as a pragmatic matter, we will judge the CV to be large when the results from the two formulas differ substantially). Then we should probably avoid reporting the CV as a "CV", and report it instead as an "apparent CV". That is, the square root of omega is a number that is on the CV scale and is mildly related to the actual (but unknown) CV; its square is an *accurate* computation of the estimate of the variance.

****

From: stuart@c255.ucsf.EDU (S.Beal)
Subject: Computation of CV's from OMEGA
Date: 26 Sep 1997 20:49:37 -0400

Regarding the comments I made earlier today on this topic, I guess I should remind people that with CL=THETA(1)*EXP(ETA(1)), and when using the FO method, CV=sqrt(omega) is *exactly* the estimated CV, and there is no issue. This is because use of the FO method doesn't allow one to distinguish between this CL model and the model CL=THETA(1)+THETA(1)*ETA(1). This latter model is the one resulting from taking the FO model approximation. (This is one reason why it is sometimes misleading to mention log normality.) The issue only arises with conditional estimation.
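A quick numerical sketch of the point above (the values of THETA and omega here are hypothetical, chosen only for illustration): the FO method replaces EXP(ETA) by its first-order Taylor expansion 1 + ETA, and under that linearized model the CV is sqrt(omega) exactly, since E[CL] = THETA and Var(CL) = THETA**2 * omega. The linearization error grows with the magnitude of eta:

```python
import math

theta, omega = 10.0, 0.36   # hypothetical illustrative values

# Under the FO linearization CL ~= THETA + THETA*ETA:
#   E[CL] = THETA,  Var(CL) = THETA**2 * omega,
# so the CV is sqrt(omega) with no approximation beyond the linearization.
cv_fo = math.sqrt(theta**2 * omega) / theta   # equals sqrt(omega)

# Compare the exponential model with its linearization
# for a small and a large value of eta.
for eta in (0.1, 0.6):
    exact = theta * math.exp(eta)       # CL = THETA*exp(eta)
    fo = theta * (1.0 + eta)            # FO approximation
    print(eta, exact, fo)
```

For eta = 0.1 the two models agree to within about half a percent; for eta = 0.6 they differ by more than 10%, which is why the distinction only matters when etas (and hence omega) are large and conditional estimation is used.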

****

From: Mats Karlsson <mats.karlsson@biof.uu.se>
Subject: Uncertainty in CV's
Date: 26 Sep 1997 19:16:25 -0400

As part of his comment to Nick, Stuart said that if omega is large it is likely to be associated with a large statistical uncertainty. This is probably true in general. I would like to add that the confidence interval around the estimate of omega is likely to be asymmetric. If one estimates the confidence interval using the likelihood profile method, one usually finds that, compared to intervals based on the SE's provided by NONMEM, both the lower and upper confidence limits are higher. Thus, if one really cares about the uncertainty in omega estimates, the likelihood profile method may be required.
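The asymmetry described above is easy to see even in a toy setting. The Python sketch below (all numbers hypothetical; this is a plain one-parameter variance estimation problem, not a NONMEM run) profiles -2 log-likelihood for a normal variance and compares the resulting 95% interval with the symmetric Wald interval built from the asymptotic SE:

```python
import math

# toy data: n observations with sum of squared deviations SS (hypothetical)
n, SS = 30, 30.0
s2_hat = SS / n                     # maximum likelihood estimate of sigma^2

def m2ll(s2):
    # -2 log-likelihood up to an additive constant
    return n * math.log(s2) + SS / s2

target = m2ll(s2_hat) + 3.84        # 95% cutoff, chi-square with 1 df

def solve(lo, hi):
    # bisection for m2ll(s2) == target on a monotone stretch
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (m2ll(mid) - target) * (m2ll(lo) - target) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

lower = solve(1e-6, s2_hat)         # m2ll decreases toward s2_hat
upper = solve(s2_hat, 100.0)        # m2ll increases past s2_hat
se_wald = s2_hat * math.sqrt(2.0 / n)   # asymptotic SE of a variance estimate
print("Wald:   ", s2_hat - 1.96 * se_wald, s2_hat + 1.96 * se_wald)
print("Profile:", lower, upper)
```

Both profile limits come out higher than the corresponding Wald limits, mirroring what the likelihood profile method tends to show for omega.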

Also, Stuart said that there is no normality assumption made with respect to the distribution of etas (nor, I believe, for epsilons). However, many of the suggested validation procedures use the normality assumption (prediction intervals, for example). Unless the normality assumption is crucial for the clinical application of the model, such validation procedures seem inappropriate to me. A perfectly valid model may fail a validation test because of an additional assumption never made in the modelling.

Another situation where one needs to be concerned with the normality assumption is in clinical trial simulation. Oftentimes it is assumed that the final population model, when turned into simulation mode, can produce realistic, real-life-like predictions. This may not at all be true if etas and/or epsilons are non-normally distributed.
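To illustrate the point above numerically (all values hypothetical): two eta distributions can share the same variance omega, so that they are indistinguishable in terms of the reported omega, yet produce quite different simulated CL distributions, especially in the tails. The Python sketch below compares normal etas with uniform etas of the same variance:

```python
import numpy as np

rng = np.random.default_rng(0)
theta, omega = 10.0, 0.36       # hypothetical illustrative values
n = 200_000

# two eta distributions with the SAME variance omega
eta_norm = rng.normal(0.0, np.sqrt(omega), n)
half_width = np.sqrt(3.0 * omega)               # uniform with variance omega
eta_unif = rng.uniform(-half_width, half_width, n)

# simulated CL under each assumption
cl_norm = theta * np.exp(eta_norm)
cl_unif = theta * np.exp(eta_unif)

# the upper tail of simulated CL differs noticeably between the two
print(np.percentile(cl_norm, 97.5), np.percentile(cl_unif, 97.5))
```

A simulation based on the wrong shape of the eta distribution can therefore misstate the extremes of the predicted response even when omega itself is estimated perfectly.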

Mats Karlsson