From: "James Bailey" <James_Bailey@Emory.org>

Subject: Akaike information criterion

Date: Thu, 12 Jul 2001 16:58:56 -0500

 

To all:

 

In selecting an optimal model using the Akaike information criterion, should one take the number of parameters to be the sum of the structural parameters (clearances, volumes) and the error parameters (etas), or should one count only the structural parameters?

 

Jim Bailey

 

*****

 

From: "Sale, Mark" <ms93267@GlaxoWellcome.com>

Subject: RE: Akaike information criterion

Date: Fri, 13 Jul 2001 08:31:00 -0400

 

Jim,

 

Something I've wondered about as well. My view is that you can always convert an OMEGA to a THETA, as in

 

 

$PK
S1 = THETA(1) + ETA(1)
$THETA
(0,1)
$OMEGA
0.3          ; ETA(1) variance = 0.3

is the same as

$PK
S1 = THETA(1) + THETA(2)*ETA(1)
$THETA
(0,1)
(0,0.548)    ; SQRT(0.3), since VAR(THETA(2)*ETA(1)) = THETA(2)**2 * OMEGA(1,1)
$OMEGA
1 FIX        ; ETA(1) variance fixed to 1
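
As a quick numerical check of that equivalence (a sketch in Python rather than NM-TRAN; the 0.3 variance and the SQRT(0.3) scale are taken from the control streams above):

import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Parameterization 1: ETA(1) drawn directly with variance OMEGA = 0.3
eta_direct = rng.normal(loc=0.0, scale=np.sqrt(0.3), size=n)

# Parameterization 2: OMEGA fixed to 1, scale moved into THETA(2) = SQRT(0.3)
theta2 = np.sqrt(0.3)
eta_scaled = theta2 * rng.normal(loc=0.0, scale=1.0, size=n)

# Both sample variances come out at ~0.3: the two models describe the same fit
print(eta_direct.var(), eta_scaled.var())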

 

So, why not treat them the same?

Mark

 

********

 

From: "Bachman, William" <bachmanw@globomax.com>

Subject: RE: Akaike information criterion

Date: Fri, 13 Jul 2001 08:38:23 -0400

 

You count all parameters in calculating AIC: the fixed effects (thetas) plus the random-effect variance parameters (the estimated elements of OMEGA and SIGMA).

 

AIC = OFV + 2p, where p is the total number of estimated parameters.
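
As a minimal sketch (NONMEM's OFV is already -2 x log-likelihood, so no extra factor is needed; the numbers below are hypothetical):

def aic(ofv: float, n_params: int) -> float:
    """Akaike information criterion from a NONMEM objective function value."""
    return ofv + 2 * n_params

# compare candidate models: the one with the lower AIC is preferred
print(aic(ofv=1234.5, n_params=7))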

 

Bill

 

******

 

From: cng@imap.unc.edu

Subject: Re: RE: Akaike information criterion

Date: Fri, 13 Jul 2001 10:45:14 -0400 (Eastern Daylight Time)

 

If I understand correctly, single-sample statistics (for linear models) like AIC, SBC, MDL, FPE, Mallows' Cp, etc. can only be used as crude estimates of generalization error in nonlinear models when you have a "large" training set. Why use AIC? Has anyone tried SBC or MDL (the Minimum Description Length principle)? Among the simple generalization estimators that do not require the noise variance to be known, SBC often works well (at least for neural networks). Shao (1995) showed that in linear models (at least), SBC provides consistent subset selection, while AIC does not. That is, SBC will choose the "best" subset with probability approaching one as the size of the training set goes to infinity. AIC has an asymptotic probability of one of choosing a good subset, but less than one of choosing the best subset (Stone 1979). Many simulation studies have also found that AIC overfits badly in small samples, and that SBC works well. MDL has been shown to be closely related to SBC.
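
For reference, the Schwarz criterion computed from a NONMEM OFV looks like this (a sketch; it takes n as the number of observations, the common convention, though what n should be in mixed-effects models is itself debated):

import math

def sbc(ofv: float, p: int, n_obs: int) -> float:
    """Schwarz criterion (BIC) from a NONMEM objective function value.

    The penalty is p*ln(n_obs) versus AIC's flat 2*p, so once
    ln(n_obs) > 2 each added parameter must buy a bigger OFV drop.
    """
    return ofv + p * math.log(n_obs)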

 

Does anyone know of a study that compares model selection criteria (i.e., SBC, AIC) in NONMEM model selection? Thanks.

 

Chee Ng

 

******

 

From: "Bachman, William" <bachmanw@globomax.com>

Subject: RE: RE: Akaike information criterion

Date: Fri, 13 Jul 2001 10:51:55 -0400

 

See:

 

Sheiner, Beal & Ludden, "Comparison of the Akaike Information Criterion, the Schwarz Criterion and the F Test as Guides to Model Selection", J. Pharmacokin. Biopharm. 1994;22:431-445.

 

Bill

 

******

 

From: "Gibiansky, Ekaterina" <gibianskye@globomax.com>

Subject: RE: Akaike information criterion

Date: Fri, 13 Jul 2001 10:58:21 -0400

 

I used SBC for model selection in NONMEM and actually compared it with AIC, not in simulation studies but with actual data. With large data sets, AIC tends to choose overestimated models, keeping many more covariates than SBC does. SBC seemed to perform well.
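
(For scale: SBC charges ln(n) OFV points per extra parameter, so with, say, 1,000 observations a covariate must drop the OFV by about 6.9 points to survive under SBC, versus a flat 2 points under AIC; the gap only widens as the data set grows.)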

 

Katya

 

Ekaterina Gibiansky, PhD

Senior Scientist

 

GloboMax LLC

7250 Parkway Drive, Suite 430

Hanover, MD 21076

Voice (410) 782-2234

FAX (410) 712-0737

E-mail: gibianskye@globomax.com

 

*****

 

From: "Gibiansky, Ekaterina" <gibianskye@globomax.com>

Subject: RE: Akaike information criterion

Date: Mon, 16 Jul 2001 09:06:39 -0400

 

Sorry, Bill, overparameterized, of course.

 

Katya

 

-----Original Message-----

From: Bachman, William

Sent: Friday, July 13, 2001 11:06 AM

To: Gibiansky, Ekaterina

Subject: RE: Akaike information criterion

 

 

Katya

 

overestimated or overparameterized?

 

Bill