Date: Fri, 12 Nov 1999 14:39:55 +0100
Subject: Computing std for secondary parms
From: Lars Erichsen <L.Erichsen@biostat.ku.dk>

Is there any 'easy' way of computing standard errors for secondary parameters in NONMEM?

Thanks,
Lars.

 

 

*****

 

 

Date: Fri, 12 Nov 1999 09:40:31 -0800
From: LSheiner <lewis@c255.ucsf.edu>
Subject: Re: Computing std for secondary parms

Yes, although it still takes some algebra (which is all the usual approximation using first derivatives takes), and it can be tricky.

You must reparameterize the model in the "derived parameter" of interest and re-run.

In general, if you want the SE of s = g(theta), then you must reparameterize the model in s, deleting one of the current thetas. For example, you used
Cl = theta(1),
V = theta(2)
...

but now you want SE(t1/2). Recognizing that s = t1/2 = .693*V/Cl, so that V = s*Cl/.693, one rewrites the PK code as:

Cl = theta(1)
s = theta(2)
V = s*Cl/.693
...

The SE of theta(2) is the SE of t1/2.
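For comparison, the first-derivative (delta method) approximation mentioned above can also be done by hand from the covariance step output. A minimal sketch in Python, with purely hypothetical estimates and covariance matrix standing in for NONMEM output:

import numpy as np

# Hypothetical final estimates and covariance matrix for (Cl, V),
# as they might be read off a NONMEM covariance step.
cl, v = 5.0, 50.0                     # L/h, L
cov = np.array([[0.25, 0.10],         # Var(Cl), Cov(Cl,V)
                [0.10, 4.00]])        # Cov(Cl,V), Var(V)

# Secondary parameter: t1/2 = 0.693*V/Cl
thalf = 0.693 * v / cl

# Gradient of t1/2 with respect to (Cl, V)
grad = np.array([-0.693 * v / cl**2,  # d(t1/2)/dCl
                  0.693 / cl])        # d(t1/2)/dV

# Delta method: Var(t1/2) ~ grad' * Cov * grad
se_thalf = float(np.sqrt(grad @ cov @ grad))
print("t1/2 = %.2f h, SE(t1/2) = %.2f h" % (thalf, se_thalf))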

CAUTION: if, in the above example, Cl or V in the original code have etas attached, then this gets very, very tricky if you really believe in the original model. That is, the following 2 models are NOT identical:

Cl = theta(1)+eta(1)
V = theta(2)+eta(2)

and

Cl = theta(1)+eta(1)
s = theta(2)+eta(2)
V = s*Cl/.693

ESPECIALLY if a diagonal OMEGA is used. Hence they may yield different obj fun values, parameter estimates (even for Cl), and predictions of y.
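One way to see that they are not identical is to simulate both parameterizations under a diagonal OMEGA; a rough sketch in Python, with arbitrary made-up values:

import numpy as np

rng = np.random.default_rng(0)
n = 100000

th_cl, th_v = 5.0, 50.0            # typical Cl (L/h) and V (L)
sd_cl, sd_v = 1.0, 10.0            # SDs of the additive etas (diagonal OMEGA)
th_s = 0.693 * th_v / th_cl        # typical t1/2
sd_s = sd_v * 0.693 / th_cl        # roughly matched SD for the eta on s

eta1 = rng.normal(0.0, sd_cl, n)

# Original model: independent additive etas on Cl and V
cl_a = th_cl + eta1
v_a  = th_v + rng.normal(0.0, sd_v, n)

# Re-parameterized model: additive etas on Cl and s, V derived
cl_b = th_cl + eta1
s_b  = th_s + rng.normal(0.0, sd_s, n)
v_b  = s_b * cl_b / 0.693

print("Var(V):     original %.1f   re-parameterized %.1f" % (v_a.var(), v_b.var()))
print("Corr(Cl,V): original %.3f   re-parameterized %.3f"
      % (np.corrcoef(cl_a, v_a)[0, 1], np.corrcoef(cl_b, v_b)[0, 1]))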

LBS.
--
Lewis B Sheiner, MD Professor: Lab. Med., Biopharm. Sci., Med.
Box 0626 voice: 415 476 1965
UCSF, SF, CA fax: 415 476 2796
94143-0626 email: lewis@c255.ucsf.edu

 

 

*****

 

 

Date: Sat, 13 Nov 1999 07:15:30 +1300
From: Nick Holford <n.holford@auckland.ac.nz>
Subject: Re: Standard error of 'secondary' parameters

LSheiner wrote:
> CAUTION: if, in the above example, Cl or V in the original code have
> etas attached, then this gets very, very tricky if you really believe in the
> original model. That is, the following 2 models are NOT identical:
>
> Cl = theta(1)+eta(1)
> V = theta(2)+eta(2)
>
> and
>
> Cl = theta(1)+eta(1)
> s = theta(2)+eta(2)
> V = s*Cl/.693
>
> ESPECIALLY if a diagonal OMEGA is used. Hence they may yield different
> obj fun values, parameter estimates (even for Cl), and predictions of y.

Why not do this to make the models identical in the eta structure?

Cl = theta(1)+eta(1)
s = theta(2)
V = s*Cl/.693+eta(2)
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x6730 fax:373-7556
http://www.phm.auckland.ac.nz/Staff/NHolford/nholford.html

 

 

*****

 

 

Date: Fri, 12 Nov 1999 10:47:15 -0800
From: LSheiner <lewis@c255.ucsf.edu>
Subject: Re: Standard error of 'secondary' parameters

Not quite. Note that in
(1) Cl = theta(1)+eta(1)
(2) s = theta(2)
(3) V = s*Cl/.693+eta(2)

V now involves 2 etas: one explicit (eta(2)) in line (3), and the other entering via Cl from line (1) in the numerator of the RHS of (3). The following might be identical to the original (I'm not sure; as I said, this stuff is tricky):

Cl = theta(1)+eta(1)
s = theta(2)
V = s*theta(1)/.693+eta(2)

LBS.
--
Lewis B Sheiner, MD Professor: Lab. Med., Biopharm. Sci., Med.
Box 0626 voice: 415 476 1965
UCSF, SF, CA fax: 415 476 2796
94143-0626 email: lewis@c255.ucsf.edu

 

 

*****

 

 

Date: Sat, 13 Nov 1999 08:23:18 +1300
From: Nick Holford <n.holford@auckland.ac.nz>
Subject: Re: Standard error of 'secondary' parameters

Thanks. That's pretty obvious now that you point it out :-)

But to come back to the original thread, I wonder why anyone would bother trying to estimate SEs for secondary parameters using the usual NONMEM asymptotic method. These estimates are barely worth the electrons used to display them on the screen, except perhaps as some kind of rough diagnostic.

If you really want to know about the confidence of a 'secondary' parameter estimate then I would suggest either the log likelihood profile method (but that requires re-parameterization with the possible change in the model that you allude to above) or bootstrap (but that needs at least 1000 NONMEM runs to get reasonable values for a confidence interval).
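For the log likelihood profile, the idea is to fix the re-parameterized secondary parameter at a grid of values, re-estimate everything else, and take as confidence limits the points where the objective function rises by 3.84 above its minimum (95%, chi-square with 1 df). A minimal sketch, assuming a hypothetical profile_objfn() helper that runs NONMEM with theta(2) fixed at the given value and returns the minimum objective function value:

import numpy as np

def profile_ci(profile_objfn, obj_min, lo, hi, n_grid=41, cutoff=3.84):
    """Profile-likelihood confidence interval for one parameter.

    profile_objfn(value): objective function value with the parameter
    fixed at `value` and all other parameters re-estimated
    (in practice, one NONMEM run per grid point).
    """
    grid = np.linspace(lo, hi, n_grid)
    dobj = np.array([profile_objfn(v) - obj_min for v in grid])
    inside = grid[dobj <= cutoff]     # grid values not rejected at ~95%
    return inside.min(), inside.max()

# Hypothetical usage: lower, upper = profile_ci(run_fixed_thalf, 1234.5, 4.0, 10.0)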

--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x6730 fax:373-7556
http://www.phm.auckland.ac.nz/Staff/NHolford/nholford.html

 

 

*****

 

 

Date: Fri, 12 Nov 1999 12:12:42 -0800
From: LSheiner <lewis@c255.ucsf.edu>
Subject: Re: Standard error of 'secondary' parameters

Nick Holford wrote:
>
> But to come back to the original thread, I wonder why anyone would bother
> trying to estimate SEs for secondary parameters using the usual NONMEM
> asymptotic method. These estimates are barely worth the electrons used to
> display them on the screen, except perhaps as some kind of rough diagnostic.

I agree.

>
> If you really want to know about the confidence of a 'secondary' parameter estimate then I would suggest either the log likelihood profile method (but that requires re-parameterization with the possible change in the model that you allude to above) or bootstrap (but that needs at least 1000 NONMEM runs to get reasonable values for a confidence interval).

Again, I basically agree, although even the likelihood profile depends on the asymptotic chi-square distribution of the approximate likelihood, which is often questionable. If you want an honest answer, indeed, you have to simulate. I say simulate, rather than "bootstrap", since the latter may mean parametric bootstrap to some and non-parametric to others, and may also mean sampling from the data to some and sampling from the (post hoc) parameters to others ... "simulation" leaves it deliberately general, so that in any instance the purveyor knows he/she must specify/justify what he/she does.
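"Simulation" in that general sense might look like the following sketch, where simulate_dataset() and fit() are hypothetical helpers (e.g. a $SIMULATION run followed by re-estimation); exactly how the data sets are generated (parametrically, by resampling subjects, etc.) is what has to be specified and justified:

import numpy as np

def simulation_se(simulate_dataset, fit, final_estimates, n_rep=1000):
    """Empirical SE and 95% interval of a (secondary) parameter.

    simulate_dataset(estimates, i): one simulated/resampled data set
    fit(dataset): the parameter of interest (e.g. t1/2) from a re-fit
    """
    values = np.array([fit(simulate_dataset(final_estimates, i))
                       for i in range(n_rep)])
    return values.std(ddof=1), np.percentile(values, [2.5, 97.5])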

--
Lewis B Sheiner, MD Professor: Lab. Med., Biopharm. Sci., Med.
Box 0626 voice: 415 476 1965
UCSF, SF, CA fax: 415 476 2796
94143-0626 email: lewis@c255.ucsf.edu

 

 

*****

 

 

From: "Sale, Mark" <ms93267@glaxowellcome.com>
Subject: RE: Standard error of 'secondary' parameters
Date: Fri, 12 Nov 1999 14:57:16 -0500

Nick,
Some time back we did some bootstrap simulation/analysis of NONMEM's ability to estimate the SEs of parameters. NONMEM didn't do disastrously with THETA (although there was some bias of about 20% as I recall, and a good bit of variability in the mis-estimation), but did remarkably poorly on the SEs of OMEGA and SIGMA, sometimes off by several orders of magnitude. I agree that the log likelihood approach is much more robust.

Mark

 

 

*****

 

 

Date: Sat, 13 Nov 1999 04:55:27 +0100
From: Mats Karlsson <Mats.Karlsson@biof.uu.se>
Subject: Re: Standard error of 'secondary' parameters

Mark,
We have some experience where the SEs from NONMEM were quite OK (JPB 26:207-46). We also have results indicating that the log likelihood approach may be severely biased, so I wouldn't agree with the statement that "the log likelihood approach is much more robust" on theoretical considerations alone.

Best regards,
Mats

 

 

*****

 

 

Date: Mon, 15 Nov 1999 13:14:35 -0500 (EST)
From: "Chuanpu Hu 301-827-3210 FAX 301-480-2825" <HUC@cder.fda.gov>
Subject: Re: Standard error of 'secondary' parameters

Looking at the SEs also has a theoretical basis, and is termed the "Wald test" in statistics. In the case of nonlinear mixed effects modeling, both the Wald test and the likelihood ratio test are asymptotic (approximate) tests. That is, theoretically the p-values become more accurate as sample sizes increase. The practical matter is which asymptotics kick in earlier. The author of the S-Plus nonlinear mixed effects routine nlme claimed that, based on simulation results, the likelihood ratio test performs better for the fixed effect parameters (THETAs), and the Wald test performs better for the random effect parameters (OMEGAs). It would be interesting if someone did a similar study in NONMEM.
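As a concrete illustration of the two tests, a small sketch with made-up numbers (a parameter estimate and its SE for the Wald test, and two objective function values for the likelihood ratio test):

from scipy import stats

# Wald test: estimate/SE referred to the standard normal
# (hypothetical values, as read off a NONMEM output)
est, se = 0.25, 0.10
z = est / se
p_wald = 2.0 * stats.norm.sf(abs(z))

# Likelihood ratio test: difference in objective function values
# (-2 log likelihood) between reduced and full model, 1 df here
obj_reduced, obj_full = 1240.3, 1234.5
dobj = obj_reduced - obj_full
p_lrt = stats.chi2.sf(dobj, df=1)

print("Wald: z = %.2f, p = %.4f" % (z, p_wald))
print("LRT:  dOBJ = %.1f, p = %.4f" % (dobj, p_lrt))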

Chuanpu

 

 

*****

 

 

From: "Sale, Mark" <ms93267@glaxowellcome.com>
Subject: RE: Standard error of 'secondary' parameters
Date: Mon, 15 Nov 1999 13:17:59 -0500

As I recall, the Wald test is the only choice for hypothesis tests on OMEGA, since the distribution of the log likelihood ratio statistic is not known for variance terms.

Mark

 

 

*****

 

 

Date: Wed, 17 Nov 1999 12:03:58 -0500 (EST)
From: "Chuanpu Hu 301-827-3210 FAX 301-480-2825" <HUC@cder.fda.gov>
Subject: Re: Standard error of 'secondary' parameters

Mark,
I guess I do not understand your point. Don't people routinely look at -2 times the difference in log likelihood, i.e. the difference in NONMEM OBJ values?

Chuanpu