From: "James Bailey" <James_Bailey@Emory.org>

Subject: Akaike information criterion

Date: Thu, 12 Jul 2001 16:58:56 -0500

In selecting an optimal model using the Akaike information criterion,

should one equate the number of parameters to the sum of the number of

structural (clearances, volumes) and error (etas) parameters, or should

one simply use the number of structural parameters?

From: "Sale, Mark" <ms93267@GlaxoWellcome.com>

Subject: RE: Akaike information criterion

Date: Fri, 13 Jul 2001 08:31:00 -0400

Something I've wondered about as well. My view is that you can always

convert an OMEGA to a THETA, as in

S1 = THETA(1) + THETA(2)*ETA(1)

So, why not treat them the same?

From: "Bachman, William" <bachmanw@globomax.com>

Subject: RE: Akaike information criterion

Date: Fri, 13 Jul 2001 08:38:23 -0400

You count all parameters - fixed and random effect parameters (thetas, etas

and epsilons) in calculating AIC.

AIC = OFV + 2p, where p is total number of parameters.
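Since NONMEM's objective function value (OFV) is proportional to -2 log-likelihood, the criterion above is a one-line calculation. A minimal sketch in Python; the OFVs and parameter counts below are made-up illustrative numbers, not results from any actual run:

```python
def aic(ofv, p):
    """Akaike information criterion: OFV (-2 log-likelihood) plus 2 per parameter."""
    return ofv + 2 * p

# Hypothetical comparison of two nested models:
# base model: 4 thetas + 2 omegas + 1 sigma = 7 parameters
# full model: 6 thetas + 3 omegas + 1 sigma = 10 parameters
aic_base = aic(1523.4, 7)    # 1537.4
aic_full = aic(1519.8, 10)   # 1539.8
# The full model lowers the OFV by 3.6 but pays 2 * 3 = 6 in penalty,
# so AIC favors the simpler base model in this hypothetical case.
```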

Subject: Re: RE: Akaike information criterion

Date: Fri, 13 Jul 2001 10:45:14 -0400 (Eastern Daylight Time)

If I understand correctly, the single-sample statistics (for linear models)

like AIC, SBC, MDL, FPE, Mallows' Cp, etc. can only be used as crude estimates

of generalization error in nonlinear models when you have a "large" training

set. Why use AIC? Has anyone tried SBC or MDL (the Minimum Description Length

principle)? Among the simple generalization estimators that do not require the

noise variance to be known, SBC often works well (at least in neural networks).

Shao (1995) showed that in linear models (at least), SBC provides consistent

subset selection, while AIC does not. That is, SBC will choose the "best"

subset with probability approaching one as the size of the training set goes to

infinity. AIC has an asymptotic probability of one of choosing a good subset,

but less than one of choosing the best subset (Stone 1979). Many

simulation studies have also found that AIC overfits badly in small samples,

and that SBC works well. MDL has been shown to be closely related to SBC.

Does anyone know of a study that compares these model selection criteria (e.g. SBC,

AIC) in NONMEM model selection? Thanks.
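The difference between the two criteria comes down to the penalty term: AIC charges 2 per parameter, while the Schwarz criterion (SBC) charges ln(n) per parameter, so SBC penalizes extra parameters more heavily once n > e^2 ≈ 7.4. A sketch under assumed, illustrative numbers (and note that what to use for n in a mixed-effects model, observations versus subjects, is itself a judgment call):

```python
import math

def aic(ofv, p):
    """AIC: OFV (-2 log-likelihood) + 2p."""
    return ofv + 2 * p

def sbc(ofv, p, n):
    """Schwarz (Bayesian) criterion: OFV + p * ln(n)."""
    return ofv + p * math.log(n)

# Illustrative numbers: an extra covariate drops the OFV by 5 points.
ofv_reduced, p_reduced = 1520.0, 8
ofv_full, p_full = 1515.0, 9
n = 400  # ln(400) is about 6, so SBC's per-parameter penalty is ~3x AIC's

aic_keeps_covariate = aic(ofv_full, p_full) < aic(ofv_reduced, p_reduced)  # True
sbc_keeps_covariate = sbc(ofv_full, p_full, n) < sbc(ofv_reduced, p_reduced, n)  # False
```

This is the mechanism behind the observation in the thread that AIC retains more covariates than SBC on large data sets.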

From: "Bachman, William" <bachmanw@globomax.com>

Subject: RE: RE: Akaike information criterion

Date: Fri, 13 Jul 2001 10:51:55 -0400

Comparison of the Akaike Information Criterion, the Schwarz Criterion and

the F Test as Guides to Model Selection

J. Pharmacokinet. Biopharm., 1994, 22, 431-445

From: "Gibiansky, Ekaterina" <gibianskye@globomax.com>

Subject: RE: Akaike information criterion

Date: Fri, 13 Jul 2001 10:58:21 -0400

I used SBC for model selection in NONMEM, and actually compared it with AIC,

not in simulation studies, though, but with actual data. With large data

sets AIC tends to choose overestimated models, keeping many more covariates

than SBC. SBC seemed to perform well.

From: "Gibiansky, Ekaterina" <gibianskye@globomax.com>

Subject: RE: Akaike information criterion

Date: Mon, 16 Jul 2001 09:06:39 -0400

Sorry, Bill, overparameterized, of course.