From: "Ritu Karwal" Ritu.Karwal@ranbaxy.com
Subject: [NMusers] NONMEM
Date: Thu, 14 Sep 2006 09:48:21 +0530

Dear nmusers:

Can anybody suggest how to perform optimization of
sampling time points in NONMEM, or using any other software?

I have read somewhere that there is a command $SUPER in NONMEM
to perform this.

Also, I have a control stream for this (taken from a previous nmusers
message posted by Ludger Banken), but it is not working. The control
stream is given below.

Also, can anybody tell me what RATD, COVA, EVID and SS stand for in this
control stream? I have recently started working on population
pharmacokinetics and NONMEM, so I don't know these terms.

With regards,

Ritu Karwal, M.Sc (Statistics)
Research Associate,
Metabolism and Pharmacokinetics,
Ranbaxy Research Laboratories Limited.

$PROB Popkin Dummy Simulation for first starting value of the seed
;This problem is only here to allow the use of the seed (-1) later on
$THETA (.1,1.4 ,10) ;Ka 1
$THETA (.1,3.5 ,50) ;CL 2
$THETA (1,20 ,1000) ; V 3
$THETA (1 ,1 ,1 ) ;F 4
$THETA (-2,0.2 ,2) ; CL (Covar) 5

$OMEGA 0.223144 ;kA
$OMEGA 0.039221 0.039221 ;CL V
$OMEGA .0001 FIXED;F
$SIGMA 0.039221 ;ERR Mult
$SIGMA 0.005 ;Err Add
$SIGMA 1 FIXED; Random time

$DATA SLB.nm
$INPUT ID TIME AMT DV MDV EVID SS II ADDL RATD COVA
$SIMULATION (35436 ) ONLY SUB=1

$SUBROUTINE
ADVAN2 TRANS=2

$PK
;This PK block is used in all problems below
;Therefore it must cover all of the models
KA = THETA(1) * EXP(ETA(1))
CL = THETA(2) * EXP(ETA(2))*COVA**THETA(5 )
V = THETA(3) * EXP(ETA(3))
F1 = THETA(4) * EXP(ETA(4))
S2=V

$ERROR
CP=F
Y = F * EXP( EPS(1) ) + EPS(2)
IF (AMT.GT.0) THEN
ATIME=TIME
RTIME=TIME
LTIME=TIME
ENDIF

MDV2=MDV
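; (Annotation added for clarity: the statements below replace each nominal
;  sampling time, coded as TIME = 101, 102, 201, 202, 203, with a random
;  time RTIME equal to the preceding dose time ATIME plus a log-normally
;  distributed offset driven by EPS(3); a candidate time that would not
;  increase on the previous one is nudged forward and flagged as missing
;  via MDV2=1.)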
IF (TIME.EQ. 101) RTIME=ATIME+1.7320508076 *EXP(0.3339544233 *EPS(3))
IF (TIME.EQ. 102) RTIME=ATIME+23.916521486 *EXP(0.050780836 *EPS(3))
IF (TIME.EQ. 201) RTIME=ATIME+3.4641016151 *EXP(0.0874491407 *EPS(3))
IF (TIME.EQ. 202) RTIME=ATIME+5.4772255751 *EXP(0.055421818 *EPS(3))
IF (TIME.EQ. 203) RTIME=ATIME+7.4833147735 *EXP(0.0405906612 *EPS(3))
IF (RTIME.LE.LTIME) THEN
RTIME=LTIME+.01
MDV2=1
ENDIF
LTIME=RTIME


$SUPER SCOPE=4 ITERATIONS=100
;This is the big DO Loop around the following Models

$PROB Popkin Simulation 1 to generate random Time
$THETA (.1,1.4 ,10) ;Ka 1
$THETA (.1,3.5 ,50) ;CL 2
$THETA (1,20 ,1000) ; V 3
$THETA (1 ,1 ,1 ) ;F 4
$THETA (-2,0.2 ,2) ; CL (Covar) 5

$OMEGA 0.223144 ;kA
$OMEGA 0.039221 0.039221 ;CL V
$OMEGA .0001 FIXED;F
$SIGMA 0.039221 ;ERR Mult
$SIGMA 0.005 ;Err Add
$SIGMA 1 FIXED; Random time

$INPUT ID TIME AMT DV MDV EVID SS II ADDL RATD COVA
$SIMULATION (-1) ONLY SUB=1
$TABLE ID RTIME AMT DV MDV2 EVID SS II ADDL RATD COVA
FILE=SIMUL1 NOPRINT NOHEADER NOFORWARD

$PROB Popkin Simulation 2 to generate random Concentrations
$THETA (.1,1.4 ,10) ;Ka 1
$THETA (.1,3.5 ,50) ;CL 2
$THETA (1,20 ,1000) ; V 3
$THETA (1 ,1 ,1 ) ;F 4
$THETA (-2,0.2 ,2) ; CL (Covar) 5

$OMEGA 0.223144 ;kA
$OMEGA 0.039221 0.039221 ;CL V
$OMEGA .0001 FIXED;F
$SIGMA 0.039221 ;ERR Mult
$SIGMA 0.005 ;Err Add
$SIGMA 1 FIXED; Random time

$DATA SIMUL1 (12E12.0) NRECS=700 NOOPEN
$INPUT ID TIME AMT DV MDV EVID SS II ADDL RATD COVA

$SIMULATION (-1) ONLY SUB=1


$PROB Popkin Model
$THETA (.1,1.4 ,10) ;Ka 1
$THETA (.1,3.5 ,50) ;CL 2
$THETA (1,20 ,1000) ; V 3
$THETA (1 ,1 ,1 ) ;F 4
$THETA (-2,0.2 ,2) ; CL (Covar) 5

$OMEGA 0.223144 ;kA
$OMEGA 0.039221 0.039221 ;CL V
$OMEGA .0001 FIXED;F
$SIGMA 0.039221 ;ERR Mult
$SIGMA 0.005 ;Err Add
$SIGMA 0.01 FIXED; Random time

;Without data statement the data from the last step are used
$INPUT ID TIME AMT DV MDV EVID SS II ADDL RATD COVA
$ESTIM MAXEVALS=2000 SIGDIGITS=3 PRINT=0 METHOD=0


$PROB Popkin H0
$THETA (.1,1.4 ,10) ;Ka 1
$THETA (.1,3.5 ,50) ;CL 2
$THETA (1,20 ,1000) ; V 3
$THETA (1 ,1 ,1 ) ;F 4
$THETA 0.0001 FIXED; CL (Covar) 5

$OMEGA 0.223144 ;kA
$OMEGA 0.039221 0.039221 ;CL V
$OMEGA .0001 FIXED;F
$SIGMA 0.039221 ;ERR Mult
$SIGMA 0.005 ;Err Add
$SIGMA 0.01 FIXED; Random time

;Without data statement the data from the last step are used
$INPUT ID TIME AMT DV MDV EVID SS II ADDL RATD COVA
$ESTIM MAXEVALS=2000 SIGDIGITS=3 PRINT=0 METHOD=0
_______________________________________________________

From: Justin Wilkins justin.wilkins@farmbio.uu.se
Subject: Re: [NMusers] NONMEM
Date: Thu, 14 Sep 2006 13:40:08 +0200
Dear Ritu,

The example you quote seems to be more suitable for producing a limited
series of Monte Carlo simulations than for producing a truly optimal
design. If you want to do it in the most appropriate way, you will need
to use a tool designed specifically for the purpose.

You will have to decide what your needs are (whether you need exact
sampling times or sampling windows, which will inform your choice of
approach - D- or ED-optimality, for example), and then select one of the
several pieces of software available based on these requirements. Two
that are commonly used for generating population designs are

PopED - http://depts.washington.edu/rfpk/rd/software_popED.html
PFIM  - http://www.bichat.inserm.fr/equipes/Emi0357/download.html

Each (there are others as well) has its own strengths and weaknesses but
I have used PopED with some success in the past, and PFIM has a good
reputation. Both packages require other software to run - PopED needs
O-Matrix or Matlab, and PFIM needs R or S-PLUS. Both work by optimizing
a scalar function of the (population) Fisher information matrix -
equivalently, minimizing the predicted uncertainty of the parameter
estimates - with respect to the design variables (the sampling times, in
your case), but others on this group will probably be able to give you
more detail about this than I can.
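Purely to illustrate the idea (this is not how PopED or PFIM are
implemented - they work with the population FIM - and the model,
parameter values and error SD below are invented; the typical values
happen to match the initial estimates in the posted control stream), a
minimal fixed-effects D-optimality calculation in Python might look
like this:

import numpy as np

DOSE = 100.0           # assumed single oral dose
SIGMA = 0.5            # assumed additive residual SD

def conc(t, theta):
    # one-compartment, first-order absorption; theta = (ka, CL, V)
    ka, cl, v = theta
    ke = cl / v
    return DOSE * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def fim(times, theta, h=1e-5):
    # expected Fisher information for additive error: sum of g g' / sigma^2,
    # with g the sensitivity of the prediction to theta (central differences)
    t = np.asarray(times, dtype=float)
    sens = np.column_stack([
        (conc(t, np.add(theta, h * e)) - conc(t, np.subtract(theta, h * e))) / (2 * h)
        for e in np.eye(len(theta))])
    return sens.T @ sens / SIGMA ** 2

def d_criterion(times, theta):
    # log det(FIM): larger means better expected overall parameter precision
    sign, logdet = np.linalg.slogdet(fim(times, theta))
    return logdet if sign > 0 else -np.inf

theta0 = (1.4, 3.5, 20.0)                           # prior guesses for (ka, CL, V)
print(d_criterion([0.5, 1, 2, 4, 8, 24], theta0))   # early + late sampling
print(d_criterion([1, 2, 3, 4, 5, 6], theta0))      # samples bunched together

The design with the larger criterion value is the one expected to give the
more precise parameter estimates; a design tool then searches over the
candidate times to maximize that number.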

$SUPER is a NONMEM control record used to program a series of problems
for NONMEM to perform within a single file.

As for the other NONMEM data items you mention, EVID is the PREDPP event
identification variable, is usually required, and has 5 possible values
- 0 for observation events, 1 for dose events, 2 for 'other-type' events
(used, for example, to turn compartments on or off), 3 for reset events
(which re-initialize the kinetic system) and 4 for reset-and-dose events
(a combination of 1 and 3).

SS is the steady-state data item for PREDPP, and can have 4 possible
values - 0, indicating that the dose is not a steady-state dose, 1,
indicating a steady-state dose with reset, 2, indicating steady-state
dose without reset, and 3, which is very similar to 1.

The above is a (very) basic summary from the NONMEM online help, which
is accessed by the commands


>> nmhelp SUPER
>> nmhelp EVID
>> nmhelp SS


from the command prompt. Have a look for more detail.

COVA and RATD have no special meaning in NONMEM and appear to be
user-defined data items supplied by the author of the quoted code. In the
code shown, COVA is used as a covariate on CL in the $PK block, while
RATD is not used at all.
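To put the data items together, here is a purely hypothetical fragment of
a data file laid out to match the $INPUT record above (these are not
records from the poster's SLB.nm file; every value, including the RATD
flag and the COVA covariate, is invented). The first record is a dose of
100 followed by 5 additional doses every 12 hours (EVID=1, ADDL=5, II=12),
the third record is a dose given at steady state (EVID=1, SS=1, II=12),
and the other records are observations (EVID=0, MDV=0):

ID  TIME  AMT  DV    MDV  EVID  SS  II  ADDL  RATD  COVA
1   0.0   100  0     1    1     0   12  5     0     70
1   1.5   0    4.21  0    0     0   0   0     0     70
1   72.0  100  0     1    1     1   12  0     0     70
1   73.5  0    6.02  0    0     0   0   0     0     70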

Justin
_______________________________________________________

From: Mark Sale - Next Level Solutions mark@nextlevelsolns.com
Subject: Re: [NMusers] NONMEM 
Date: Thu, 14 Sep 2006 08:21:14 -0700

My 2 cents:
  First, you need to define what you mean by optimal.  Traditional
statistical optimal design (e.g. for ANOVA) is aimed at getting parameter
estimates that are as precise as possible - a laudable goal in
traditional statistics, and one that is nicely implemented in a number of
applications. I'm most familiar with Steve Duffull's
(http://www.bichat.inserm.fr/equipes/Emi0357/download.html), but I'll
let him tell you how wonderful it is.
 The three central problems, IMHO, with existing methods are:
1.  Limited to D (or ED, or perhaps C|S) optimal for model parameters. 
What if we want to get a model with the smallest mean absolute error? 
What if we want a study design that estimates some other statistic
(survival time, AUC, difference between treatment A and placebo, etc)
as precisely as possible?
2.  No/little flexibility in sample number - they address sample times
only; the sample number is (usually) fixed.  What if our question is the
more realistic one: what is the optimal study design (which I think means
getting an "adequate" answer at the lowest cost) to answer this question,
if samples cost $200 each, subjects cost $5000 to enroll, and it costs
$1000/week to keep a subject in the study?
3.  They don't tell you if a study is adequate - only which design is
best, for a given number of subjects/samples.

We have done a small demonstration project optimizing a BE study.
Essentially: find the optimal number of subjects, number of samples, and
sample times that result in a "successful" BE study (e.g., 1-beta = 90%,
1-alpha = 90%), given a specified cost of samples and subjects.  NCA PK
was done.  The model also included uncertainty about the model
parameters.  The algorithm used for the optimization was (my favorite
algorithm) a genetic algorithm.  Interestingly, the resulting design was
quite close to what we usually do - except that the GA answer had fewer
sample times than the traditionally designed study (and so was a little
cheaper).  It was also clear that the D-optimal sampling times were very
different from the NCA-optimal sampling times.
 I haven't had time to continue pursuing this - if any grad student is
interested, I'm happy to share the code that I have.
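In the meantime, here is a toy Python sketch of the general approach (this
is not the project code; the PK model, variability, cost weighting and GA
settings are all invented for illustration):

import numpy as np

# Score each candidate set of sampling times by Monte Carlo simulation of a
# one-compartment model plus NCA (trapezoidal AUC to the last sample), and
# add a cost term per sample; a tiny genetic algorithm searches the designs.
rng = np.random.default_rng(1)
KA, CL, V, DOSE = 1.4, 3.5, 20.0, 100.0   # typical values as in the posted example
OMEGA_CL, SIGMA = 0.2, 0.1                # assumed BSV on CL and residual CV
COST_WEIGHT = 0.2                         # arbitrary trade-off: error vs. sample count

def nca_error(times, n_subjects=24, n_trials=100):
    # MC estimate of the mean absolute error of trapezoidal AUC vs. true AUC
    t = np.sort(np.asarray(times, dtype=float))
    errs = []
    for _ in range(n_trials):
        cl_i = CL * np.exp(rng.normal(0, OMEGA_CL, n_subjects))
        ke = cl_i[:, None] / V
        c = DOSE * KA / (V * (KA - ke)) * (np.exp(-ke * t) - np.exp(-KA * t))
        c *= np.exp(rng.normal(0, SIGMA, c.shape))
        auc = np.sum((c[:, 1:] + c[:, :-1]) / 2 * np.diff(t), axis=1)
        errs.append(np.mean(np.abs(auc - DOSE / cl_i)))
    return float(np.mean(errs))

def fitness(times):
    # lower is better: NCA error plus a penalty per sample taken
    return nca_error(times) + COST_WEIGHT * len(times)

def ga_search(n_gen=15, pop_size=16):
    # very small GA: keep the best half, create children by blending two
    # parents and adding Gaussian mutation to the sampling times
    pop = [np.sort(rng.uniform(0.25, 24, rng.integers(4, 10))) for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.choice(len(parents), 2, replace=False)
            n = min(len(parents[a]), len(parents[b]))
            child = (parents[a][:n] + parents[b][:n]) / 2.0
            children.append(np.sort(np.clip(child + rng.normal(0, 0.5, n), 0.25, 24)))
        pop = parents + children
    return min(pop, key=fitness)

print(np.round(ga_search(), 2))

The per-sample cost term is what pushes the search toward designs with
fewer sampling times, which is the trade-off described above.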


Mark


Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
_______________________________________________________

From: "Andrew Hooker" andrew.hooker@farmbio.uu.se
Subject: Re: [NMusers] NONMEM
Date: Thu, 14 Sep 2006 18:28:51 +0200

Hi Mark,

I agree with most of your assessment of the current state of optimal
design.  The designs are generally based on getting the best possible
parameter estimates for your model, and it is important to develop methods
for looking at other test statistics.

Two points:

1. In PopED (www.rfpk.washington.edu) we have the ability to optimize over
the number of samples per individual in a study, the number of individuals
in a study, and other design variables besides sample times.  See:

M. Foracchia, A. Hooker, P. Vicini and A. Ruggeri.  PopED, a software for
optimal experimental design in population kinetics.  Comput Methods Programs
Biomed,  74: 29-46, 2004.

2. To me it is not so surprising that the optimal design you attempted
using NCA resulted in sample times similar to the designs people normally
come up with (without any fancy optimal design) for these studies, because
most people design their studies based on NCA-type thought processes anyway.
The question that comes to mind is: why shouldn't we use the information we
gain from population mixed-effects models in our design calculations?  It
would be interesting to compare the performance of your NCA-based design and
the D-optimal design you calculated.

-Andy

Andrew Hooker, Ph.D.
Assistant Professor of Pharmacometrics
Div. of Pharmacokinetics and Drug Therapy
Dept. of Pharmaceutical Biosciences
Uppsala University
Box 591 
751 24 Uppsala 
Sweden
Tel: +46 18 471 4355
www.farmbio.uu.se/research.php?avd=5
_______________________________________________________

From: Mark Sale - Next Level Solutions mark@nextlevelsolns.com
Subject: Re: [NMusers] NONMEM
Date: Thu, 14 Sep 2006 09:50:09 -0700

Andy,
  There are several practical obstacles to this.  The first reason that
no one uses a formal optimization (based on a pop PK model) to optimize
NCA or other study endpoints is that it is pretty hard.  We estimated
saving about 10% of sample assay costs (maybe 100 samples per study,
~$10,000 in a $500,000 study), and the sample assay budget came from a
different group than the people designing the study, so the study
designers didn't lose a lot of sleep over assay costs, preferring the CYA
approach.  It took me several weeks of work to do the optimization,
another downside.  The second study optimization would obviously go a
lot faster, but it isn't clear that there is a business case for it
until someone writes a general application to do it.  Hence my offer of
any code I have to anyone who wants to pursue it.  It is also very
computationally intensive, running Monte Carlo simulation on 1000's of
designs and doing the PK and statistics for each design (ANOVA for NCA),
etc.  Probably the payoff for BE studies is marginal.  The payoff for
large, expensive, difficult-to-recruit studies may be significant, and
they wouldn't be much harder to optimize.  Another practical issue is
that the stats groups were skeptical - because we basically would
control the SE of the AUC - finding an "optimal" SE, not a minimal
value - by controlling sample number and times.  They told us that stats
was responsible for estimating the SE of the parameters, not clin
pharm. They preferred to use historical values for the SE of AUC (and a
worst-case scenario at that), and so the formal power analysis, which was
done by stats, didn't reflect the optimization, only the SE of the NCA
quantities from an historical study.
  These are all reasons why I gave up on this a while ago.  But, I think
in theory it is a very practical way to formally optimize study designs
- much more powerful than just doing some simulations in Trial
Simulator and manually tweaking some study parameters.



Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
_______________________________________________________

From: "Stephen Duffull" stephen.duffull@stonebow.otago.ac.nz
Subject: Re: [NMusers] NONMEM
Date: Fri, 15 Sep 2006 08:00:34 +1200

Hi all


>> The example you quote seems to be more suitable for producing 
>> a limited series of Monte Carlo simulations than for 
>> producing a truly optimal design. If you want to do it in the 
>> most appropriate way, you will need to use a tool designed 
>> specifically for the purpose.


I think it is worth a note here that, in my experience of optimizing designs
for both industry and academia, I have never been asked to find an optimal
design.  What I do get asked is to provide a sufficient design that requires
minimum effort, for example: fewest patients, reduced number of doses,
fewest samples per patient etc.  The term sufficient means a design that
meets the needs for which the model is intended to be used.

It is also my experience that it is almost impossible to do this within any
acceptable time-frame using NONMEM or some other package that has MC sim +
estimation - and that an information theoretic approach is both practical
and very quick at doing this.


>> PopED - http://depts.washington.edu/rfpk/rd/software_popED.html
>> PFIM  - http://www.bichat.inserm.fr/equipes/Emi0357/download.html


Of course, I have to plug:
POPT - www.winpopt.com
WinPOPT - www.winpopt.com

POPT requires MATLAB and WinPOPT runs independently of any other software.

Also I think that France Mentre has released PFIM_OPT for S (or R).

Steve
--
Professor Stephen Duffull
Chair of Clinical Pharmacy
School of Pharmacy
University of Otago
PO Box 913 Dunedin
New Zealand
E: stephen.duffull@otago.ac.nz
P: +64 3 479 5044
F: +64 3 479 7034

Design software: www.winpopt.com

_______________________________________________________

From: Nick Holford n.holford@auckland.ac.nz
Subject: Re: [NMusers] NONMEM
Date: Fri, 15 Sep 2006 08:31:05 +1200

Steve,

If I may split the academic hair a little more ...

I suspect that in fact your collaborators would have initially asked you for an
'optimal design' i.e. these are the words they would have used when asking you to
help them. But you would have offered a 'sufficient design' as being good enough.

I accept that methods based on the Fisher information matrix (FIM) are much faster
than brute force Monte Carlo (MC) methods but the FIM methods are limited to minimizing
parameter precision as the objective. There are other objectives for trial design e.g.
power, which can be (tediously) explored using MC methods but which are only indirectly
optimized using FIM methods.
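(As a crude illustration of the MC side: the power of a simple two-group
comparison can be estimated by brute force as below - the effect size,
variability and group sizes are all invented.)

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def mc_power(n_per_arm, effect=0.25, sd=0.4, n_trials=2000, alpha=0.05):
    # fraction of simulated trials in which the log(AUC) difference is detected
    hits = 0
    for _ in range(n_trials):
        ref = rng.normal(0.0, sd, n_per_arm)        # log AUC, reference arm
        test = rng.normal(effect, sd, n_per_arm)    # log AUC, test arm
        hits += ttest_ind(test, ref).pvalue < alpha
    return hits / n_trials

for n in (8, 16, 32, 64):
    print(n, round(mc_power(n), 3))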

Nick

--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
_______________________________________________________

From: Mark Sale - Next Level Solutions mark@nextlevelsolns.com
Subject: Re: [NMusers] NONMEM
Date: Thu, 14 Sep 2006 13:49:20 -0700

I agree with Steve, optimal design for studies is not generally
available.  Instead, we use simulation, to plagiarize Steve's term, to
find a sufficient design by tweaking parameters and doing MC
simulation.  I would, however, suggest that given sufficient
computational power, formal optimization is possible on a reasonable
time line - a couple of days.   For most optimizations (i.e., BE
studies, NCA PK studies, dose-response, survival, etc.), you won't use
NONMEM anyway.  You use SAS or S-PLUS with some non-iterative algorithm
(like ANOVA) that is very fast. So, for most trial optimizations, you
don't need the (NONMEM) estimation, only the simulation.  But we
currently don't have the tools to do this.

Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
_______________________________________________________

From: "Stephen Duffull" stephen.duffull@stonebow.otago.ac.nz
Subject: Re: [NMusers] NONMEM

Nick

>> If I may split the academic hair a little more ...

Of course  :-) 

>> I suspect that in fact your collaborators would have 
>> initially asked you for an 'optimal design' i.e. these are 
>> the words they would have used when asking you to help them. 
>> But you would have offered a 'sufficient design' as being good enough.


Sometimes - but not always.  Many 'sponsors' really do want the 'minimally
effective' design - and don't ask for an 'optimal' design.  And of course
some sponsors know what they want but inadvertently use the term 'optimal'
anyway.  So, I don't agree with your assertion here.


>> I accept that methods based on the Fisher information matrix 
>> (FIM) are much faster than brute force Monte Carlo (MC) 
>> methods but the FIM methods are limited to minimizing 
>> parameter precision as the objective. 


(Hair split - you mean maximize precision.)

Not true - although I accept that this is their most common use in practice.

In addition to maximizing precision, FIM based designs can be used to:
1) determine designs for model discrimination
2) determine designs with minimum bias
3) determine designs for power to reject the null hypothesis (for model
building decisions only)
4) determine designs that carry the highest probability of success (GLMs)

And of course any combination of the above (including with the standard
maximizing parameter precision).  I'm not of course advocating that FIM
based methods should replace all MC methods - but I think both have a
complementary role.

Steve
--
Professor Stephen Duffull
Chair of Clinical Pharmacy
School of Pharmacy
University of Otago
PO Box 913 Dunedin
New Zealand
E: stephen.duffull@otago.ac.nz
P: +64 3 479 5044
F: +64 3 479 7034

Design software: www.winpopt.com
 
_______________________________________________________

From: "Stephen Duffull" stephen.duffull@stonebow.otago.ac.nz
Subject: Re: [NMusers] NONMEM
Date: Fri, 15 Sep 2006 09:25:45 +1200

Mark

>> I agree with Steve, optimal design for studies is not 
>> generally available.  Instead, we use simulation, to 
>> plagiarize Steve's term, to find a sufficient design by 
>> tweaking parameters and doing MC simulation.  I would 
>> however, suggest that given sufficient computational power, 
>> formal optimization is possible on a reasonable
>> time line - a couple of days.   

I would suggest that using an information-theoretic technique you would get
rid of the simulation component completely and do this more efficiently
using an FIM approach.  I do not advocate simulation where quicker methods
are available.

If you use simulation then:
1) how do you know what designs to choose?
2) how do you determine when you have a minimally effective design - could
there be another design around the corner that you didn't think of which is
more efficient?

Why not just let an FIM search do its business, get the answer for the most
efficient-sufficient design.  In a sense the most efficient design is how
you would determine 'optimal' (or 'best') - just that the design is cast
from finding the minimum experimental effort rather than some other goal.
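As an illustration only (this is not how POPT/WinPOPT, PopED or PFIM
actually search - a real tool uses the population FIM and far better
algorithms, and the model, grid and values below are invented), a naive
exchange-type search over sampling times might look like this in Python:

import numpy as np

rng = np.random.default_rng(0)
DOSE, SIGMA = 100.0, 0.5
GRID = np.arange(0.25, 24.25, 0.25)     # allowed sampling times (h)

def conc(t, theta):
    # one-compartment, first-order absorption; theta = (ka, CL, V)
    ka, cl, v = theta
    ke = cl / v
    return DOSE * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def logdet_fim(times, theta, h=1e-5):
    # log det of the fixed-effects FIM (additive error, finite differences)
    t = np.asarray(times, dtype=float)
    sens = np.column_stack([
        (conc(t, np.add(theta, h * e)) - conc(t, np.subtract(theta, h * e))) / (2 * h)
        for e in np.eye(len(theta))])
    sign, val = np.linalg.slogdet(sens.T @ sens / SIGMA ** 2)
    return val if sign > 0 else -np.inf

def exchange_search(theta, n_samples=4, n_iter=2000):
    # repeatedly swap one time for a random grid point; keep improvements
    design = np.sort(rng.choice(GRID, n_samples, replace=False))
    best = logdet_fim(design, theta)
    for _ in range(n_iter):
        cand = design.copy()
        cand[rng.integers(n_samples)] = rng.choice(GRID)
        cand = np.unique(cand)
        if len(cand) < n_samples:
            continue
        score = logdet_fim(cand, theta)
        if score > best:
            design, best = cand, score
    return design, best

print(exchange_search((1.4, 3.5, 20.0)))

The search needs no simulation or estimation at all, which is why this
kind of approach runs so quickly.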

Steve
--
Professor Stephen Duffull
Chair of Clinical Pharmacy
School of Pharmacy
University of Otago
PO Box 913 Dunedin
New Zealand
E: stephen.duffull@otago.ac.nz
P: +64 3 479 5044
F: +64 3 479 7034

Design software: www.winpopt.com

_______________________________________________________

From: Mark Sale - Next Level Solutions mark@nextlevelsolns.com
Subject: Re: [NMusers] NONMEM
Date: Thu, 14 Sep 2006 14:55:07 -0700

>> I would suggest that using an information-theoretic technique you would get
>> rid of the simulation component completely and do this more efficiently
>> using an FIM approach.  I do not advocate simulation where quicker methods
>> are available.


Absolutely - the only problem is that the quicker methods often don't
answer the question you want answered (they tell you how to get the
smallest SE of the model parameters, not how to do the cheapest, most
powerful, or fastest study).


>> If you use simulation then:
>> 1) how do you know what designs to choose?


You optimize, based on user-defined criteria (1-alpha >= 0.9, 1-beta >= 0.9,
minimal cost, shortest duration).


>> 2) how do you determine when you have a minimally effective design - could
>> there be another design around the corner that you didn't think of which is
>> more efficient?


That is what optimization does - I can refer you to a text.  GA (unlike
FIM) does NOT guarantee the "best" answer; the textbooks always say
"near-optimal solution".  But some analysis more sophisticated than I
understand, done mostly at the University of Illinois, has examined
"GA-deceptive searches", and you can be pretty sure that a properly done
GA is truly optimal - but it is not guaranteed.  Also, as an aside, I have
combined search algorithms and greatly increased the "robustness" of
the search, specifically addressing the problem you mention.



>> Why not just let an FIM search do its business, get the answer for the most
>> efficient-sufficient design.  In a sense the most efficient design is how
>> you would determine 'optimal' (or 'best') - just that the design is cast
>> from finding the minimum experimental effort rather than some other goal.


Absolutely (again), if your goal is the minimal SE of the model parameters.
If your goal is the cheapest, most powerful, or fastest study, I'm not sure
FIM will work.


Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
_______________________________________________________

From: "Gastonguay, Marc" marcg@metrumrg.com
Subject: Re: [NMusers] NONMEM
Date: Thu, 14 Sep 2006 19:12:58 -0400

Steve,
I'm curious to learn of examples where FIM-based optimization methods have been useful for
achieving designs that minimize bias (#2 in your post below) of parameter estimates. It was
my understanding that in order to assess bias of parameter estimates, it is necessary to run
batches of MC simulation-estimation cycles and compare estimated values back to a reference
"true" value (e.g. simulation model parameter values).
Thanks in advance for your insight.
Marc

Marc R. Gastonguay, Ph.D., President & CEO, Metrum Research Group LLC
2 Tunxis Rd. Suite 112, Tariffville, CT 06081
Direct:860-670-0744, Main:860-735-7043, Fax:860-760-6014
Email:marcg@metrumrg.com, Web:www.metrumrg.com

_______________________________________________________

From: "Stephen Duffull" stephen.duffull@stonebow.otago.ac.nz
Subject: Re: [NMusers] NONMEM
Date: Fri, 15 Sep 2006 11:58:49 +1200

Marc

Thought that I might escape comment on this  :-)  


>> I'm curious to learn of examples where FIM-based optimization 
>> methods have been useful for achieving designs that minimize 
>> bias (#2 in your post below) of parameter estimates. 


I know of no examples for NL MEM.  That doesn't mean it can't be done of
course...


>> It was 
>> my understanding that in order to assess bias of parameter 
>> estimates, it is necessary to run batches of MC 
>> simulation-estimation cycles and compare estimated values 
>> back to a reference "true" value (e.g. simulation model 
>> parameter values).


As a gross simplification, I might suggest that bias in parameter estimates
can arise from 2 main sources (I am sure someone will correct me here):
1) due to not finding the true maximum of the likelihood
2) due to model misspecification

The latter case is a special construct, since the bias doesn't truly exist;
it just appears to exist because you are thinking about one model while
fitting another, and the parameter estimate you obtain is then applied to
the wrong model, hence it appears to be biased.

I don't think there's much you can do about case 1 (other than use a better
search algorithm).  Case 2 is a real problem and can be implicit (e.g. due
to linearisation) or explicit (e.g. your data do not support estimation of
a slow distribution phase, hence your estimate of half-life, while perhaps
accurate for your model, is inaccurate as a description of the actual time
course of disposition).

For case 2, you can optimize designs (to minimize the effects of this bias)
by using information-theoretic techniques without the need for MC.  I hasten
to add here that this may not necessarily be based solely on the FIM, but
may involve other analytic computation.  So I should correct my earlier
statement:


>> In addition to maximizing precision, FIM based designs can be used to:

To: "In addition to maximizing precision, information theoretic designs (not
necessarily based solely on FIM but under the overall umbrella of "Optimum
Design of Experiments") can be used to:"

Regards

Steve
--
Professor Stephen Duffull
Chair of Clinical Pharmacy
School of Pharmacy
University of Otago
PO Box 913 Dunedin
New Zealand
E: stephen.duffull@otago.ac.nz
P: +64 3 479 5044
F: +64 3 479 7034

Design software: www.winpopt.com
_______________________________________________________

From: Timothy H Waterhouse WATERHOUSE_TIMOTHY_H@Lilly.com
Subject: Re: [NMusers] NONMEM
Date: Fri, 15 Sep 2006 12:04:57 -0400

Hi all,

While it's true, as far as I know, that "existing methods" (meaning
software such as POPT, PFIM etc) don't allow for optimal design for precise
estimation of other statistics, I think this is a relatively simple problem
to address.  If the feature of interest (such as AUC or tmax) can be
written as a function of your PK parameters, a c-optimal design will give
you precise estimates of this feature, and can be obtained using a function
of the FIM.
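To illustrate (using the same invented fixed-effects toy model as the
earlier sketch in this thread, and not the computation any of the named
tools performs), the c-optimality criterion for AUC = Dose/CL is
c' FIM^-1 c, where c is the gradient of AUC with respect to the parameters:

import numpy as np

DOSE, SIGMA = 100.0, 0.5

def conc(t, theta):
    # one-compartment, first-order absorption; theta = (ka, CL, V)
    ka, cl, v = theta
    ke = cl / v
    return DOSE * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def fim(times, theta, h=1e-5):
    t = np.asarray(times, dtype=float)
    sens = np.column_stack([
        (conc(t, np.add(theta, h * e)) - conc(t, np.subtract(theta, h * e))) / (2 * h)
        for e in np.eye(len(theta))])
    return sens.T @ sens / SIGMA ** 2

def c_criterion(times, theta):
    # approximate variance of the AUC estimate under this design; smaller is better
    ka, cl, v = theta
    c = np.array([0.0, -DOSE / cl ** 2, 0.0])   # gradient of AUC = Dose/CL
    return float(c @ np.linalg.solve(fim(times, theta), c))

theta0 = (1.4, 3.5, 20.0)
print(c_criterion([0.5, 1, 2, 4, 8, 24], theta0))
print(c_criterion([1, 2, 3, 4, 5, 6], theta0))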

The problem with c-optimal designs is that they often give you a singular
FIM, meaning they have zero efficiency for parameter estimation.  This was
actually addressed in a talk by Anthony Atkinson at the DEMA conference in
Southampton last weekend.  He used a combination of the "c" and "D"
criteria to find designs which are efficient for both, using a simple PK
model as an example.  His methods were for fixed effects models, but I
think they will extend to mixed effects models using the usual approximate
information matrix.

If you want to design for maximum power, minimum bias, etc, the information
matrix may not be quite as helpful...

Tim
_______________________________________________________

From: Mark Sale - Next Level Solutions mark@nextlevelsolns.com
Subject: Re: [NMusers] NONMEM
Date: Fri, 15 Sep 2006 09:19:59 -0700

The key phrase is:
"If the feature of interest (such as AUC or tmax) can be written as a
function of your PK parameters"
- I'm assuming a closed-form solution.
But we are still talking about precision in estimating parameters. What
about the vast majority of phase II and III studies that we do using
hypothesis testing - ANOVA, Fisher's exact test, etc.?  Not
infrequently we can develop models for the endpoint.  Can we optimize
the study?  (Obviously, I believe we can.)  We have not had a lot of
success convincing people that we should move from a hypothesis-testing
mode to an estimation mode.
BTW, the reason I'm obsessed with this - and I admit that I am - is that
I think this is a real opportunity for clin pharm/modeling to impact
later-phase development.  Companies like Pharsight (and others) have
really done an excellent job of raising awareness that modeling can
contribute to phase III design.  Part of the reason that this role
hasn't been fully embraced everywhere (IMHO) is that they are trying
to wrest control of study design away from stats (and others).  Methods
to optimize phase III trials are new turf - we wouldn't have to fight
anyone for it, and it really is very complementary to stats, since it
requires their methods to implement.  If we could actually demonstrate
saving money (optimizing for the lowest cost for an adequate study), I
think we can seriously impact late-phase development. Of course, in
academics, money is never an issue ....

 But, tools need to be developed, and we need more computers.

Mark Sale MD
Next Level Solutions, LLC
www.NextLevelSolns.com
_______________________________________________________

From: "A.J. Rossini" blindglobe@gmail.com
Subject: Re: [NMusers] NONMEM
Date: Fri, 15 Sep 2006 21:13:44 +0200

On 9/15/06, Mark Sale - Next Level Solutions mark@nextlevelsolns.com wrote:

 <-- lots of comments deleted -->

> Part of the reason that this role
> hasn't been fully embraced everywhere, (IMHO) is that they are trying
> to wrest control of study design away from stats (and others).  Methods
> to optimize phase III trials are new turf - we wouldn't have to fight
> anyone for it, and it really is very complementary to stats, since it
> requires their methods to implement.


Why do we need to wrest anything from stats?  Any decent M&S group ought
to have a few statisticians around...

(speaking from the Novartis M&S viewpoint, where I'm in the statistics
subgroup...).

More seriously, this IS a critical problem -- there are general design
principles for PK sampling design on the programmatic level, i.e. akin
to the "adaptive designs" work that more traditional stats groups have
been working on.

Part of that will be thinking through the range of "critical path"
questions that PK sampling might answer, and being Bayesian (and a
PopKin scientist) about it, leveraging compound-family and
indication/pathway knowledge about the challenges that could be faced.

Analysis is one thing, but design, now that is completely different.
And especially working the patterns, i.e.   "given models X through Z
at Ph I with dense sampling, which resulted in sparse sampling design
W for Ph II and PhIb trials which were fit to models X1-- Z1, what
does the range of reasonable models suggest that we should use for
sampling at Ph III?"  (i.e. design using the previous work, or even
more interesting, the trajectory of previous work).

After all, as I used to say when I was a biostat/stat prof, "nothing
like a bad design or sloppy operations to generate another PhD
thesis...".   But getting the design right, sigh, that's contextual
and hard.

best,
-tony

blindglobe@gmail.com
Muttenz, Switzerland.
"Commit early,commit often, and commit in a repository from which we can easily
roll-back your mistakes" (AJR, 4Jan05). 
_______________________________________________________