From: "Sarapee Hirankarn" <email@example.com>
Subject: Predictive Performance
Date: Tue, 25 Apr 2000 08:42:17 -0500
I wonder whether the procedure I describe below is appropriate.
To assess the PREDICTIVE PERFORMANCE of NONMEM, I used the POSTHOC Bayesian estimation method (code below), by which NONMEM obtains individual parameter estimates. I also deleted some known concentrations, i.e., defined them as missing data. I then calculated the mean error, ME (predicted concentration - actual concentration), the RMSE, and the relative prediction error (%) from the set of missing data only.
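For concreteness, the summaries described above could be computed along these lines once NONMEM's predictions at the deleted time points are paired with the withheld observations (a sketch added for illustration, not part of the original post; the function name and numbers are hypothetical):

```python
import math

def prediction_errors(predicted, observed):
    """Return ME (bias), RMSE (precision), and mean relative
    prediction error (%) for paired predicted/observed values
    at the time points that were withheld during estimation."""
    errors = [p - o for p, o in zip(predicted, observed)]
    n = len(errors)
    me = sum(errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    rel_pe = 100.0 * sum(e / o for e, o in zip(errors, observed)) / n
    return me, rmse, rel_pe
```

The actual left-out concentrations are not shown in the data excerpt below, so any numbers fed to this function would come from the analyst's own records.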
My understanding is that "population predictions or feedback predictions," as described by Stu Beal (NONMEM Topic 6), are methods of assessing FITTING PERFORMANCE.
Please let me know if I have misunderstood something; if I am wrong, please also let me know how to assess the PREDICTIVE PERFORMANCE of NONMEM appropriately.
College of Pharmacy
University of Iowa
$PROB THEOPHYLLINE POPULATION DATA
$INPUT ID DOSE=AMT TIME CP=DV WT
;THETA(1)=MEAN ABSORPTION RATE CONSTANT (1/HR)
;THETA(2)=MEAN ELIMINATION RATE CONSTANT (1/HR)
;THETA(3)=SLOPE OF CLEARANCE VS WEIGHT RELATIONSHIP (LITERS/HR/KG)
;SCALING PARAMETER=VOLUME/WT SINCE DOSE IS WEIGHT-ADJUSTED
$THETA (.1,3,5) (.008,.08,.5) (.004,.04,.9)
$OMEGA BLOCK(3) 6 .005 .0002 .3 .006 .4
$EST POSTHOC MAXEVAL=450 PRINT=5 NOABORT
$TABLE ID DOSE WT TIME IPRED
$SCAT (RES WRES) VS TIME BY ID
1 4.02 0. . 79.6
1 . 0. .74 .
1 . 0.25 2.84 .
1 . 0.57 6.57 .
1 . 1.12 10.5 .
1 . 2.02 9.66 .
1 . 3.82 . . <- defined as missing data.
1 . 5.1 8.36 .
1 . 7.03 7.47 .
1 . 9.05 6.89 .
1 . 12.12 5.94 .
1 . 24.37 3.28 .
2 4.4 0. . 72.4
2 . 0. 0. .
2 . .27 1.72 .
2 . .52 7.91 .
2 . 1. 8.31 .
2 . 1.92 8.33 .
2 . 3.5 6.85 .
2 . 5.02 6.08 .
2 . 7.03 5.4 .
2 . 9. 4.55 .
2 . 12. . . <- defined as missing data
2 . 24.3 .90 .
From: "Paul Williams" <firstname.lastname@example.org>
Subject: Re: Predictive Performance
Date: Tue, 25 Apr 2000 09:55:42 -0700
Predictive performance assessment is, in general, part of model validation, which is often necessary depending on the intended use of the model. There are two approaches to assessing predictive performance: (1) where there is an external validation data set, and (2) where there is not. External validation is the most methodologically pure approach but may not always be practical. Internal validation is most often complex and computationally intense.
For external validation you will need another population of subjects, separate from the index population in which you developed the model. You will then want to follow Sheiner and Beal's suggestions for measuring predictive performance [J Pharmacokinet Biopharm 9:503-511]. However, if your test population includes subjects with repeated measures, those observations are not independent and you will need to account for this; a method for doing so has been proposed in an additional paper in Pharmacotherapy 16:1085-1092.
If you do not have an external validation set, you should consider internal validation. This can be done by cross-validation or the bootstrap. These methods are computationally intense; the bootstrap, at least, was nicely applied in a paper by Ene Ette [J Clin Pharmacol 37:486-495]. Cross-validation has been presented by Bill Gillespie, who is now at Pharsight Corporation in Cary, North Carolina.
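As a rough illustration of the data-splitting step in subject-level cross-validation (a sketch added here, not Gillespie's actual procedure): whole subjects are withheld in turn, the model is refit on the remaining subjects, and predictions for the withheld subjects are compared with their observations.

```python
import random

def subject_folds(ids, n_folds=5, seed=0):
    """Partition subject IDs into n_folds groups for cross-validation.
    Each fold is withheld in turn while the model is refit on the rest;
    withholding whole subjects preserves the within-subject correlation
    structure of the withheld data."""
    unique_ids = list(dict.fromkeys(ids))   # unique IDs, order preserved
    random.Random(seed).shuffle(unique_ids) # reproducible random split
    return [unique_ids[i::n_folds] for i in range(n_folds)]
```

The fitting itself would still be done with NONMEM on each training subset; only the bookkeeping is shown here.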
As far as model validation is concerned, you should also consider the posterior predictive check as proposed by Andrew Gelman [Gelman A, Carlin JB, Stern HS, et al. Bayesian Data Analysis. New York: Chapman and Hall; 1995].
Caveat: I have tried on several occasions to use the fractional prediction error [FPE = PredError/PredConcentration] and have yet to see it normally distributed [it is skewed far to the right]. This greatly inflates the variance estimate for the FPE; when that happens, the 95% CI for the FPE is always very wide and contains 0, so good predictability is almost always "demonstrated" by the FPE.
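To make the caveat concrete, here is a naive normal-theory CI for the FPE (a sketch added for illustration; PredError is taken as predicted minus observed, which is one common convention). An inflated sample SD from a skewed FPE distribution widens this interval and pulls 0 inside it.

```python
import math

def fpe_ci(predicted, observed):
    """Mean fractional prediction error and its naive normal-theory
    95% CI. FPE = (predicted - observed) / predicted per observation."""
    fpe = [(p - o) / p for p, o in zip(predicted, observed)]
    n = len(fpe)
    mean = sum(fpe) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in fpe) / (n - 1))
    half = 1.96 * sd / math.sqrt(n)   # wide when sd is inflated by skew
    return mean - half, mean + half
```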
Paul Williams