From: Joern Loetsch <j.loetsch@em.uni-frankfurt.de>
Subject: 95% CI of parameter estimate
Date: Mon, 13 Nov 2000 22:42:06 +0100

Can anyone explain to me how one obtains the 95% CI for parameter estimates? I have been told that a CI based on the standard error of the estimate reported by NONMEM is unreliable, and I now don't know how to obtain it.

Regards
Jorn Lotsch
_______________________________________________________
Joern Loetsch, MD
pharmazentrum frankfurt
Department of Clinical Pharmacology
J.W.Goethe-University Frankfurt/Main
Theodor-Stern-Kai 7
D-60590 Frankfurt/Main, Germany
Phone: +49-69-6301-4589
Fax: +49-69-6301-7617


*****


From: "Jogarao Gobburu 301-594-5354 FAX 301-480-3212" <GOBBURUJ@cder.fda.gov>
Subject: Re: 95% CI of parameter estimate
Date: Mon, 13 Nov 2000 17:26:50 -0500 (EST)

Hello,
The bootstrap is another technique that is used to obtain the SE of the point estimates.

A few references:

1. Parke J, Holford NH, Charles BG. A procedure for generating bootstrap samples for the validation of nonlinear mixed-effects population models. Comput Methods Programs Biomed. 1999 Apr;59(1):19-29.

2. Jonsson EN, Karlsson MO. Xpose--an S-PLUS based population pharmacokinetic/pharmacodynamic model building aid for NONMEM. Comput Methods Programs Biomed. 1999 Jan;58(1):51-64.

Regards,
Joga
Joga Gobburu
Pharmacometrics,
CDER, FDA.


*****


From: Nick Holford <n.holford@auckland.ac.nz>
Subject: Re: 95% CI of parameter estimate
Date: Tue, 14 Nov 2000 12:05:03 +1300

Joern,

I think there are four methods for creating 95% confidence intervals for parameter estimates.

1. Simulate, say, 1000+ sets of data using your model and a design similar to your original data, and fit each simulated set with your model. Determine empirically the range of parameter estimates that covers 95% of the values you get from these 1000+ runs (see the first sketch after this list). This is the gold standard method.

2. Bootstrap 1000+ data sets from your original data and fit these with your model. Determine the CI as in method 1. This method is conditional on your original data, and its properties relative to method 1 when used with NONMEM are, as far as I know, not described in the literature.

3. Compute a log-likelihood profile for each parameter you are interested in. You do this by fixing the parameter of interest to values close to the final estimate from your model and refitting your original data. Empirically determine (e.g. by interpolation; see the second sketch after this list) the parameter values on either side of the final estimate that produce a 3.84 change in objective function. This relies on the assumption that the chi-square distribution is an appropriate way to describe the change in objective function. That might be OK for FOCE but almost certainly not for FO.

4. Use the asymptotic standard errors reported by NONMEM (if the covariance step runs). The CI obtained in this way will necessarily be symmetrical. CIs determined using the other methods above are often asymmetrical. You may be lucky and the asymptotic SEs may agree with the more reliable computationally intensive methods.
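
For concreteness, here is a minimal sketch in Python (NumPy) of the CI extraction common to methods 1 and 2. The replicate estimates are faked from a skewed distribution, since in practice each value would come from a NONMEM re-fit of a simulated (method 1) or resampled (method 2) data set; the symmetric asymptotic CI of method 4 is shown for comparison with made-up numbers.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in for 1000 replicate estimates of CL, one per re-fitted data set.
# A skewed distribution is used deliberately so the asymmetry of the
# empirical interval is visible.
cl_estimates = rng.lognormal(mean=np.log(10.0), sigma=0.25, size=1000)

# Empirical 95% CI: the 2.5th and 97.5th percentiles of the replicates.
lo, hi = np.percentile(cl_estimates, [2.5, 97.5])
print(f"empirical 95% CI: ({lo:.2f}, {hi:.2f})")  # typically asymmetric

# Method 4 for comparison: symmetric CI from the reported asymptotic SE.
cl_hat, se = 10.0, 1.3  # hypothetical point estimate and SE from NONMEM
print(f"asymptotic 95% CI: ({cl_hat - 1.96*se:.2f}, {cl_hat + 1.96*se:.2f})")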
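
And a sketch of the interpolation step in method 3, assuming the parameter of interest (here CL) has already been fixed at a grid of values, the data refitted at each, and the objective function value (OFV) recorded; all numbers are invented for illustration.

import numpy as np

# OFVs from re-fits with CL fixed at each grid value (hypothetical);
# ofv_min is the OFV of the unconstrained fit.
cl_grid = np.array([7.0, 8.0, 9.0, 10.0, 11.0, 12.0, 13.0])
ofv = np.array([1510.2, 1505.1, 1501.8, 1500.0, 1501.5, 1504.6, 1509.3])
ofv_min = 1500.0

# The 95% CI bounds lie where the OFV rises 3.84 above the minimum
# (chi-square with 1 df, alpha = 0.05). Interpolate on each side.
delta = ofv - ofv_min
i_min = int(np.argmin(ofv))
lower = np.interp(3.84, delta[:i_min + 1][::-1], cl_grid[:i_min + 1][::-1])
upper = np.interp(3.84, delta[i_min:], cl_grid[i_min:])
print(f"likelihood-profile 95% CI: ({lower:.2f}, {upper:.2f})")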

--
Nick Holford, Divn Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x6730 fax:373-7556
http://www.phm.auckland.ac.nz/Staff/NHolford/nholford.htm


*****


From: Mats Karlsson <Mats.Karlsson@biof.uu.se>
Subject: Re: 95% CI of parameter estimate
Date: Tue, 14 Nov 2000 08:20:22 +0100

Nick,

We used method 1 below in J Pharmacokinet Biopharm 26:207-46 (1998). (Actually, method 4 compared well.) However, I would not call method 1 the gold standard, at least not in our area. As I see it, there are at least two problems:

1. The methods we use (FO, FOCE, etc.) provide biased parameter estimates (not much, but some). This means that method 1 can provide confidence intervals that do not include the point estimate itself. This will happen more often the richer your data set is and the more approximate the estimation method is for your problem.

2. With method 1, the calculation of CIs relies more heavily on distributional assumptions than the point estimates do. Parameter estimates are not particularly sensitive to deviations from normality as long as the distributions are symmetrical (according to Stuart), whereas in simulation you assume strict normality of the etas and epsilons. So you may in fact be simulating quite different data sets than the original.

Best regards,
Mats

--
Mats Karlsson, PhD
Professor of Biopharmaceutics and Pharmacokinetics
Div. of Biopharmaceutics and Pharmacokinetics
Dept of Pharmacy
Faculty of Pharmacy
Uppsala University
Box 580
SE-751 23 Uppsala
Sweden
phone +46 18 471 4105
fax +46 18 471 4003
mats.karlsson@biof.uu.se


*****


From: "Gibiansky, Leonid" <gibianskyl@globomax.com>
Subject: RE: 95% CI of parameter estimate
Date: Tue, 14 Nov 2000 08:31:30 -0500

My concern with method 1 is that it does not use the original data at all, except for the model building. If the model describes the data poorly, it can still have a very small method 1 parameter CI. This method may be good in simulations for study design, when one varies the design and studies how well the PK model is recovered from the simulated data. Another goal can be to understand how confident you can be in the model derived from the real data. I think that in this situation it is better to use CI approaches that use the original data in some form. Otherwise, the CIs are conditional on the quality of the model: they may be relevant if the model is good, and can be misleading if the model does not reflect the data.

As to the other methods, we have just finished work (joint with Katya Gibiansky) in which we compared four methods for obtaining CIs:

1. NONMEM asymptotic SEs (FO, FOCE, FOCE with interaction)
2. Bootstrap (method 2 below)
3. Profiling (method 3 below)
4. Jackknife (compute partial estimates for data subsets, and use the partial estimates to obtain parameter estimates and confidence intervals; see the sketch below)
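
(A minimal sketch of the delete-one-subject jackknife in Python, with a toy fit_model function standing in for the NONMEM run on each reduced data set; the per-subject values are invented.)

import numpy as np

rng = np.random.default_rng(1)
n = 30
# Toy data: one invented value per subject; a real application would
# carry each subject's full concentration-time records.
data = {i: rng.lognormal(np.log(10.0), 0.3) for i in range(n)}

def fit_model(subjects):
    # Hypothetical stand-in for re-running NONMEM on the reduced data
    # set; here the "estimate" is just the mean over the kept subjects.
    return float(np.mean([data[s] for s in subjects]))

full = fit_model(range(n))

# Delete-one jackknife: refit n times, each time leaving one subject out.
partials = np.array([fit_model([s for s in range(n) if s != i])
                     for i in range(n)])

# Tukey pseudo-values give the jackknife estimate and its SE.
pseudo = n * full - (n - 1) * partials
jk_est = pseudo.mean()
jk_se = pseudo.std(ddof=1) / np.sqrt(n)
print(f"jackknife 95% CI: ({jk_est - 1.96*jk_se:.2f}, {jk_est + 1.96*jk_se:.2f})")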

These methods were compared on three real data sets: one was analyzed with FO, another with FOCE, and the third with FOCE with interaction. NONMEM was remarkably successful: in most cases, the parameter estimates and CIs obtained via NONMEM were in good to perfect agreement with the CIs given by the other methods. With method 3, FOCE or FOCE with interaction was needed for the 3.84 change in objective function to be valid for the CI (as Nick mentioned). We found only two or three parameters (out of 30+ parameters across the three models) with non-symmetric CIs where the NONMEM CI differed from the profiling, bootstrap, or jackknife CIs. One was a correlation coefficient (an off-diagonal term of the variance-covariance matrix): it is bounded above by 1, but the upper bound of the NONMEM CI was larger. The other was the variance of a random effect, bounded below by 0, with the NONMEM CI extending below 0.

Jackknife estimates and CIs were biased on several occasions, but in those situations the NONMEM results were more relevant and in agreement with bootstrap and profiling.

The overall conclusion FROM THE EXAMPLES THAT WE STUDIED was that it was sufficient to use the NONMEM CIs; the more CPU-intensive methods just confirmed the NONMEM findings.

Regards
Leonid


*****


From: "Piotrovskij, Vladimir [JanBe]" <VPIOTROV@janbe.jnj.com>
Subject: RE: 95% CI of parameter estimate
Date: Tue, 14 Nov 2000 15:39:08 +0100

Jorn,
The bootstrap seems to be the method of choice if you need CIs for parameter estimates; however, it is almost impossible to carry out even for models of moderate complexity. BTW, CIs based on the SEs provided by NONMEM are quite good for THETAs. In the case of OMEGAs the SEs are less robust, but still reasonable. Note that most mixed-effects programs do not provide SEs for random effects.

Best regards,
Vladimir
----------------------------------------------------------------------
Vladimir Piotrovsky, Ph.D.
Janssen Research Foundation
Clinical Pharmacokinetics (ext. 5463)
B-2340 Beerse
Belgium
Email: vpiotrov@janbe.jnj.com


*****


From: Mats Karlsson <Mats.Karlsson@biof.uu.se>
Subject: Re: 95% CI of parameter estimate
Date: Tue, 14 Nov 2000 21:27:17 +0100

Dear Leonid,

Thanks for much useful information. I think what you say makes sense, but I would comment on the statement that CIs by "method 1" may be relevant if the model is good, but may be misleading if the model does not reflect the data. "...does not reflect the data..." I interpret as model misspecification being present. But is it not true that no method in the world can provide accurate CIs for a misspecified model? If you've decided to fit a one-compartment model to data showing bi-exponential decline, there is no way you can be sure that your CIs include the true value of, for example, CL. Thus not only method 1, but all methods suffer if the model "...does not reflect the data...".

However, method 1 is also sensitive to the fact that the point estimates are used for the simulation of the new data sets. Thus the SEs are conditioned on the assumption that the point estimates are the true parameter values. The method seems to suffer from a catch-22. The more important use of method 1 may be to show that the chosen estimation method (FO, FOCE, etc.) provides sufficiently accurate parameter estimates, provided that the model is adequate. A really serious shortcoming of the estimation method would probably show up in the results from fitting the model to a dozen or so simulated data sets.

Best regards,
Mats
--
Mats Karlsson, PhD
Professor of Biopharmaceutics and Pharmacokinetics
Div. of Biopharmaceutics and Pharmacokinetics
Dept of Pharmacy
Faculty of Pharmacy
Uppsala University
Box 580
SE-751 23 Uppsala
Sweden
phone +46 18 471 4105
fax +46 18 471 4003
mats.karlsson@biof.uu.se


*****


From: "Gibiansky, Leonid" <gibianskyl@globomax.com>
Subject: RE: 95% CI of parameter estimate
Date: Tue, 14 Nov 2000 15:55:02 -0500

Dear Mats,

I agree with your comment. In fact, I had the following example in mind when I mentioned "...does not reflect the data...". Imagine that you did not include a sufficient number of random effects in the model, or significantly underestimated the OMEGA or SIGMA values; the extreme is the case where they are fixed to almost zero. My guess would be that the NONMEM run would produce rather wide CIs for the fixed-effect parameters of this model, reflecting the uncertainty. In contrast, method 1 will simulate 1000 nearly identical data sets (there being little or no variability in the model) and then estimate the model parameters with near-perfect precision (essentially the same parameter estimates for each of the 1000 similar data sets). The method 1 CIs will then be extremely narrow, regardless of the actual uncertainty.

Regards,
Leonid


*****


From: michael_smith@sandwich.pfizer.com
Subject: RE: 95% CI of parameter estimate
Date: Wed, 15 Nov 2000 09:46:17 -0000

Dear Mats,

Can I check my/your/our understanding of "confidence intervals"? If one could hypothetically re-run the experiment a number of times, repeating the sampling with the same design, analysing the data with the same model, and constructing intervals using the same method, then 95% of the intervals would contain the *true* value of the parameter of interest. A rather tortuous explanation, but that's what frequentists would have you believe. The bottom line is that you cannot make any probabilistic statement about whether any given interval does or does not contain the true value of the parameter.
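
(A toy Python simulation of that definition - repeatedly estimating a normal mean, with arbitrary invented numbers - makes the point that the 95% is a property of the procedure, not of any single interval.)

import numpy as np

rng = np.random.default_rng(2)
true_mu, sigma, n, n_experiments = 10.0, 2.0, 25, 1000

covered = 0
for _ in range(n_experiments):
    sample = rng.normal(true_mu, sigma, size=n)
    m = sample.mean()
    se = sample.std(ddof=1) / np.sqrt(n)
    # Does this experiment's 95% interval contain the true value?
    if m - 1.96 * se <= true_mu <= m + 1.96 * se:
        covered += 1

print(f"coverage over repeated experiments: {covered / n_experiments:.3f}")  # ~0.95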

It seems to me that what the recently discussed methods describe (apart perhaps from method 3, and I'm still thinking about that one...) is a way of constructing "credible intervals". Sampling from the "posterior" distribution of your point estimate and its variability gives you an interval which describes your uncertainty around the point estimate, but when you simulate, what distribution are you using for your estimate? Normal? Surely that leads you back to a normal approximation to the interval, which you could have calculated directly from the NONMEM output? Hence, perhaps, the reason why all of these methods appear to reach similar conclusions?

I may be missing something fundamental... If I am, please excuse me.

By the way, the Bayesian approach allows you to express uncertainty (in the form of an interval, if you like) for all variances and covariances. It also allows one to attach probabilistic statements to intervals. Which is nice.

Best wishes,
Mike

Michael K. Smith (Senior Statistician) BSc MSc CStat
E-mail: Michael_Smith@Sandwich.Pfizer.Com
Tel.: (+44) 1304 643561


*****


From: "J.G. Wright" <J.G.Wright@newcastle.ac.uk>
Subject: RE: 95% CI of parameter estimate
Date: Wed, 15 Nov 2000 17:52:14 +0000 (GMT)

Dear nmusers,

A few comments:-

In the message below I think "true" is intended to mean "estimated". CIs are often calculated employing a normal approximation for the distribution of the estimate; this is a weaker assumption than assuming the error itself is normally distributed.

CIs are almost always predicated upon the model being accurately specified (in its fitted form, including linearization) in every component. If one is in serious doubt among a family of models, it is possible to set up Bayesian models which make inference across models - but these are still predicated upon the models considered.

CIs are also predicated on the experimental design, whether one takes a Bayesian or frequentist approach. Hence, if the design doesn't allow you to estimate a variance component, in all likelihood neither will repeating the experiment (frequentists), nor will the posterior density move away from the prior (Bayesians). Of course, this assumes your experiment wasn't so fragile that your sample was totally unrepresentative. I would suggest that if you are going to simulate CIs, don't just use point estimates but allow for some variation - which sounds like MCMC again... This thread is very similar to one concerning SEs which took place recently.

GLS is more robust to variance function misspecification than joint normal theory maximum likelihood, incidentally.

Good luck to the valiant quantifiers of uncertainty,

James Wright


*****


From: "HUTMACHER, MATTHEW" <MATTHEW.HUTMACHER@chi.monsanto.com>
Subject: RE: 95% CI of parameter estimate
Date: Fri, 17 Nov 2000 12:12:07 -0600

Ken Kowalski and I call method 1 for CIs (see Nick Holford's message below) the parametric bootstrap. This method requires the analyst to specify the probabilistic mechanisms by which all the data are generated. The parametric bootstrap is heavily assumption-dependent, can provide the most knowledge from exhaustively looking at the data, and requires the greatest amount of modeling effort. Some issues that complicate this method are: i) does one parametrically model the covariates? ii) how does one handle censored data due to assay sensitivity? iii) how does one verify that the model (structural and stochastic) is adequate?

The nonparametric bootstrap is appealing since the probabilistic mechanisms that generate the data (and covariates) manifest themselves within the observed data; i.e., re-sampling the actual data maintains the correlations and relationships in the observed data. Essentially, the nonparametric bootstrap puts less of a burden on the analyst, at some expense, perhaps, in knowledge of the data compared to method 1.
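
(To make the re-sampling unit concrete, a minimal Python sketch with invented data: whole subjects are drawn with replacement, so that each subject's observations and covariates stay together, which is the usual practice in population PK.)

import numpy as np

rng = np.random.default_rng(3)

# Toy data set: each subject carries its own observations and covariates.
subjects = {i: {"conc": rng.lognormal(np.log(5.0), 0.3, size=4),
                "weight": rng.normal(70.0, 10.0)}
            for i in range(20)}

def bootstrap_sample(subjects, rng):
    # Resample whole subjects with replacement: within-subject correlations
    # and covariate-observation relationships are preserved.
    ids = rng.choice(list(subjects), size=len(subjects), replace=True)
    return [subjects[i] for i in ids]

# Each replicate would then be assembled into a data file and fitted
# (e.g. with NONMEM); the fitted parameters feed a percentile CI as above.
replicate = bootstrap_sample(subjects, rng)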

I would also like to comment on Mats' statement about misspecified models (I do not mean to take his statement too literally here). Model misspecification is not a black/white issue. Models are never specified correctly - do we ever feel that we have modeled the true OMEGA matrix? Some models are just more misspecified than others. In general, the greater the number of key features the model adequately describes (structure, variability, etc.), the more confidence the analyst has in making inference using it. Even when a model is largely misspecified, it may still be useful for specific purposes.

Ken Kowalski and I have a paper (Statistics in Medicine - tentatively scheduled for the January 2001 issue) on this concept. We show that a 1-compartment model can approximate a 2-compartment model for the estimation of CL/F in a certain sparse-sampling setting (i.e., at steady state, sampling within the dosing interval). The paper deals with evaluating a design and powering the study to detect a pre-specified difference in CL/F in an arbitrary sub-population. We acknowledge that the fixed-effects estimates of ka and Vc/F will be biased (in our example, V/F from the 1-compartment model provides a relatively unbiased estimate of Vss/F of the 2-compartment model), but the estimates of CL/F and the difference parameter (delta CL/F) were unbiased (median bias <= 5%). Using the "wrong" model does have a type I error implication, but it can be corrected by changing the difference in likelihood needed for significance. Since hypothesis testing and CIs are related, it seems that reasonable confidence intervals could be constructed for these parameters (CL/F and delta CL/F). Verification of the coverage, if one were to use the nonparametric bootstrap for the CIs, would take considerable computing resources, however.


*****


From: Nick Holford <n.holford@auckland.ac.nz>
Subject: Re: 95% CI of parameter estimate
Date: Sat, 18 Nov 2000 09:43:09 +1300

Matthew & Ken,

Thanks for clarifying the terminology. I much prefer classifications based on names rather than numbers. What I called "Simulation" is indeed what you call a parametric bootstrap. I hope we can drop the method 1-4 terminology from this and subsequent discussions of this topic. To recap, I think it would be helpful to use these terms for the four methods of evaluating "confidence intervals" I outlined earlier:

Parametric Bootstrap
Non-parametric Bootstrap
Likelihood Profile
Asymptotic Standard Error

You list several disadvantages of the parametric bootstrap, but I am puzzled by the only disadvantage you mention for the non-parametric bootstrap: "some expense, perhaps, in knowledge of the data compared to [the parametric bootstrap]". What does this mean?

Nick
--
Nick Holford, Divn Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x6730 fax:373-7556
http://www.phm.auckland.ac.nz/Staff/NHolford/nholford.htm


*****


From: "HUTMACHER, MATTHEW" <MATTHEW.HUTMACHER@chi.monsanto.com>
Subject: RE: 95% CI of parameter estimate
Date: Mon, 20 Nov 2000 11:04:31 -0600

Perhaps I could have been more explicit. I meant two things by "knowledge". First, information about the data (and hence the modeled population) is revealed through the process of verifying the probabilistic mechanism by which the data were generated (i.e., checking the assumptions used in the parametric bootstrap). Secondly, I believe the parametric bootstrap will be more efficient; i.e., it will result in tighter CIs compared to those of the non-parametric bootstrap. I have not verified this statement in population PK work, but heuristically, parametric methods gain efficiency through assumptions about the distributional form of the data. These assumptions can be exploited to yield more powerful hypothesis tests, estimators with smaller variances, or, in this case, tighter confidence intervals.

Matt


*****


From: "Piotrovskij, Vladimir [JanBe]" <VPIOTROV@janbe.jnj.com>
Subject: RE: 95% CI of parameter estimate
Date: Tue, 21 Nov 2000 09:05:54 +0100

Dear NM-users,

A couple of relevant references on bootstrapping in mixed-effects PK modeling:


1. Yafune A, Ishiguro M. Bootstrap approach for constructing confidence intervals for population pharmacokinetic parameters. I: A use of bootstrap standard error. Statistics in Medicine. 18(5):581-599, 1999.

2. Yafune A, Ishiguro M. Bootstrap approach for constructing confidence intervals for population pharmacokinetic parameters. II: A bootstrap modification of standard two-stage (STS) method for phase I trial. Statistics in Medicine. 18(5):601-612, 1999.

Best regards,
Vladimir