From: "kai wu" kaiwu77@yahoo.com
Subject: [NMusers] Model building question 
Date: Mon, February 28, 2005 7:28 am
 
Dear users,
  In a recent study we were comparing the PK of a compound after a
  specific baseline change, so the same subjects were entered on two
  occasions. Realizing that between-occasion variability could contribute
  to the PK change, what would be a proper and efficient way to carry out
  this model building process? Thanks! 


Kai Wu
Department of Pharmaceutics
University of Florida
Gainesville, Fl 
Office phone #: 352-846-2730 
_______________________________________________________

From: "Bhattaram, Atul" BhattaramA@cder.fda.gov
Subject: RE: [NMusers] Model building question 
Date:  Mon, February 28, 2005 8:13 am 

Hello Kai
 
You could do the following:
1. Look at the concentration-time profile for the mean data 
   and a couple of individuals in the 2 occasions.
2. Develop a base model without occasion variability. Again take 
   a look at the data and the predicted values.
3. Add occasion variability if you think it is important and then 
   compare the decrease in variance before and after.
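A minimal NM-TRAN sketch of steps 2 and 3 might look something like this
(illustrative only - it assumes the dataset carries an occasion counter,
here called OCC, and a simple CL/V parameterisation):

   ; step 2: base model, between-subject variability (BSV) only
   $PK
     CL = THETA(1) * EXP(ETA(1))
     V  = THETA(2) * EXP(ETA(2))
   $OMEGA 0.1 0.1           ; BSV on CL and V
   $SIGMA 0.04              ; proportional residual error

   ; step 3: add BOV on CL - one extra ETA per occasion,
   ; constrained to a common variance
   $PK
     OC1 = 0
     OC2 = 0
     IF (OCC.EQ.1) OC1 = 1
     IF (OCC.EQ.2) OC2 = 1
     CL  = THETA(1) * EXP(ETA(1) + OC1*ETA(3) + OC2*ETA(4))
     V   = THETA(2) * EXP(ETA(2))
   $OMEGA 0.1 0.1           ; BSV on CL and V
   $OMEGA BLOCK(1) 0.05     ; BOV on CL, occasion 1
   $OMEGA BLOCK(1) SAME     ; occasion 2, same variance

The 'decrease in variance' in step 3 is then the change in the estimated
OMEGA and SIGMA elements between the two runs.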
 
During the model building process you would have to identify whether you want to
explain your data using a 'statistical' model or a 'mechanistic' model. The
former would only describe the data, but would not be of much help for
predictive purposes. For the latter you will need more data to 'qualify' your model. 
 
Venkatesh Atul Bhattaram
Pharmacometrics
CDER/OCPB/FDA 
 
 
 

_______________________________________________________

From: "kai wu" kaiwu77@yahoo.com
Subject: RE: [NMusers] Model building question
Date: Mon, February 28, 2005 9:04 am

Atul,
What I did was: first model the two occasions as two different populations,
compare their Bayesian estimates, and regress them on the baseline change to get
some idea. Then I pooled the two occasions as one population. As you suggested,
testing BOV would be the first step. If no single BOV, and no combination of BOVs
on the parameters, was significant, they would be disregarded in the subsequent
model building process. However, I found that if I went the other way around (building
the model first, then introducing BOV into the final model) the results were quite
different in terms of both parameter estimates and objective function value. Or
should I consider the BOVs and the baseline change as potential covariates at the
same level, and use either a forward or backward procedure to build the model? 

Kai Wu
Department of Pharmaceutics
University of Florida
Gainesville, Fl 
Office phone #: 352-846-2730
_______________________________________________________

From: "Nick Holford" n.holford@auckland.ac.nz
Subject: Re: [NMusers] Model building question 
Date: Mon, February 28, 2005 1:40 pm

Kai,

As a general modelling philosophy you should always consider estimating BOV if you
have repeated occasions. The alternative that BOV is 0 is a very unlikely
assumption. 

In your specific case you may wish to consider both a systematic and a random
change in parameters. You could try estimating both the mean change in each
parameter relative to the first occasion and on top of that estimate the BOV as the
true random differences between the occasions.
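
For what it's worth, a sketch of how that parameterisation might be coded
(illustrative only, not Kai's actual model; OCC, OC1/OC2 and the parameter
numbering are assumptions):

   ; OC1/OC2 are 0/1 indicators built from an occasion counter OCC
   ; systematic (mean) change in CL on the second occasion
   TVCL = THETA(1) * (1 + THETA(3)*OC2)
   ; BOV as the true random occasion-to-occasion differences,
   ; on top of the usual between-subject ETA
   CL   = TVCL * EXP(ETA(1) + OC1*ETA(3) + OC2*ETA(4))

THETA(3) then estimates the mean change on the second occasion relative to
the first, and the (common) OMEGA for ETA(3) and ETA(4) estimates the BOV.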

Nick
 
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
_______________________________________________________

From:  "kai wu" kaiwu77@yahoo.com
Subject: Re: [NMusers] Model building question
Date: Mon, February 28, 2005 3:59 pm

Dr. Holford,

I surely understand the importance of BOV.  However, my doubt is about how
I should incorporate it into my model building process.  
As you suggested, 

"You could try estimating both the mean change in each parameter relative to
the first occasion and on top of that estimate the BOV as the true random
differences between the occasions", 

if I understand correctly, I should decide on the proper covariate model for the thetas
first, then add BOV to account for the random differences between occasions. That is
exactly what I did. The reason I had doubts is that I remember coming
across a paper where BOV was incorporated into the base model first and, as the
paper stated, since BOV was determined not to be significant, it was disregarded in
the further covariate model building for the thetas. 

I also tried this approach, and my data showed that none of the BOVs was significant.
However, the first approach (adding BOV last) indicated that the BOV on Vd was
significant, and the coefficients of the covariate model for the thetas were
different between the two approaches too. 

The second question is: what are the criteria for judging whether BOV is significant?
I was only comparing OBJ values and diagnostic plots. As I understand it, NONMEM
puts BOV into the residual error when the model is fitted without BOV. In my case, I
only found that the residual error decreased from 31% to 29% after adding BOV. 

Kai 
_______________________________________________________

From: "Nick Holford" n.holford@auckland.ac.nz
Subject: Re: [NMusers] Model building question 
Date: Mon, February 28, 2005 4:23 pm 

Kai,

IMHO there are no 'correct' answers to your questions. The sequence of model
building should not affect the results but sometimes it does. This is in part due to
lack of adequate information in the design and in part due to NONMEM's limitations.

I would prefer to estimate BOV as part of the base model first. I would then test
for a systematic change in parameters that you have some a priori reason to think
may have changed from occasion to occasion. The best test criterion is something
based on the predictive performance of the model rather than rejection of the null
hypothesis using some approximation to the distribution of delta OBJ.
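
Purely as an illustration of what I mean by predictive performance (a sketch,
not a recipe): fix THETA, OMEGA and SIGMA at the final estimates, replace
$ESTIMATION with a simulation step, and compare the simulated concentrations
with the observations on each occasion (a visual predictive check), e.g.

   $SIMULATION (20050301) ONLYSIMULATION SUBPROBLEMS=100
   $TABLE ID TIME OCC DV NOPRINT ONEHEADER FILE=vpc.tab   ; file name illustrative

A model whose simulations reproduce the observed profiles on both occasions is
doing its job, whatever the delta OBJ says.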

Nick

--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
_______________________________________________________

From: mark.e.sale@gsk.com
Subject: Re: [NMusers] Model building question 
Date: Tue, March 1, 2005 8:30 am

Nick, 

I have to take exception to your comment 

"The sequence of model building should not affect the results but sometimes it does "

I think it should be expected that the sequence of model building will affect the result.
The only condition in which the sequence does not affect the outcome is when all of the
various effects are independent, which is probably essentially never true in biology (or
life in general).  A trivial and/or contrived example: you have a PK data set with two
covariates, kg and lb (unknown to you, both measuring body weight).  If you put kg in first,
you'll find no effect of lb, and vice versa, because the same information is contained in both.
Janet Wade demonstrated the same to be true for structural effects, and our local experience
with more robust search methods suggests that the same is true of residual error and
interindividual error terms.  The outcome is always very sensitive to the sequence of
"hypothesis tests". I've been told by those who formally study combinatorial optimization
(which is what we are doing in our model building) that our algorithm is really, really naive. 
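
To make the kg/lb example concrete (purely illustrative - WTKG and WTLB are
hypothetical data items), a covariate model that includes both might look like:

   $PK
     ; WTLB is just 2.2*WTKG, so only the sum of the two exponents is
     ; identifiable - whichever covariate is tested first appears to
     ; explain the whole weight effect, and the other adds nothing.
     TVCL = THETA(1) * (WTKG/70)**THETA(2) * (WTLB/154)**THETA(3)
     CL   = TVCL * EXP(ETA(1))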


Mark Sale M.D.
Global Director, Research Modeling and Simulation
GlaxoSmithKline
919-483-1808
Mobile 
919-522-6668 
_______________________________________________________

From: jeffrey.a.wald@gsk.com 
Subject: Re: [NMusers] Model building question
Date: Tue, March 1, 2005 10:29 am

From a formalistic perspective I'd have to agree with Mark.  From
a pragmatic perspective I would agree with a reworded version of Nick's statement. 

  "The sequence of model building should not affect substantial inferences"

However, this thread raises a question in my mind.  Is the coda of Nick's
statement "but sometimes it does" really true when we limit ourselves to
consideration of substantial inferences (i.e., drug label changes, dose
adjustments, etc...)?  I would be curious to learn of real-life examples in
which different model building sequences have led to "equivalent" models
with substantially different clinical manifestations. 

I think the field of combinatorial optimization offers the possibility for
increased automation of model building which in and of itself might yield
great benefits.  However, in my somewhat intentionally provocative opinion
(IMSIPO), I am not convinced that, when we do what we already do (with adequate
expertise), we are somehow failing to identify clinically meaningful
(actionable) conclusions. 

Jeff 

Jeff Wald, PhD
jeffrey.a.wald@gsk.com
Clinical Pharmacokinetics/Modeling and Simulation 
Neurology and GI
RTP, NC 
_______________________________________________________

From: Harry Mager harry.mager@bayerhealthcare.com
Subject: Re: [NMusers] Model building question
Date: 01-Mar-2005 10:22 

Mark,

Of course I totally agree with you; one minor remark, however. Even if the
information content of 2 covariates is nearly the same, it may well happen
that both are retained in a relationship. In this case, you may end up with a
very high measure of association between DV and the model predictions, but
the regression coefficients tend to be very large (suggesting a strong
relationship) with opposite signs (a very strong, opposite influence of the
covariates on the dependent variable). There seems to be no way to avoid a
careful examination of the covariate structure and its potential implications
for the variability of the regression coefficients.

Harry

Dr. Harry Mager
Head Global Pharmacometrics

Bayer Healthcare AG
BHC-PH-PD-GMD-GB Biometry & Pharmacometry
D-42096 Wuppertal / Bldg. 470
Telefon:  +49 (0) 202-36-8891
Telefax:  +49 (0) 202-36-4788
eMail: Harry.Mager.HM@Bayer-AG.de
_______________________________________________________

From: mark.e.sale@gsk.com 
Subject: Re: [NMusers] Model building question
Date: Tue, March 1, 2005 10:37 am 

Harry, 

Absolutely - careful, and thorough, and thoughtful (i.e., what
makes biological sense: does BSA make more sense than body weight,
does a lag time make more sense than sequential absorption
compartments? BTW, lag times make no biological sense at all, so
far as I can see).  What is remarkable is that we don't seem to
realize the folly of our model building strategy. 


Mark Sale M.D.
Global Director, Research Modeling and Simulation
GlaxoSmithKline
919-483-1808
Mobile 
919-522-6668 
_______________________________________________________

From: "Janet R. Wade" janet.wade@exprimo.com
Subject: Re: [NMusers] Model building question
Date:  Tue, March 1, 2005 12:41 pm 

Hi Jeff and Mark

I agree with the idea that we should consider if the different models we
arrive at (depending upon the route we take to that final model) would
result in different inferences.  

In the work Mark referred to I could indeed end up with two different
models, one a one compartment model with three covariates and one a two
compartment model with one covariate (simulated data).  I found the same
issue when I analysed two real data sets: different structural models but
with the same total number of parameters, due to the different number of
covariates in the two 'final' models for each compound.  The paper in
question (Wade et al., Interaction between the choice of structural,
statistical and covariate models in population pharmacokinetic analysis. J.
Pharmacokin. Biopharm. 1994, 22, 165-177) did not address whether the predictions of
the two models would differ, but some unpublished work I did after writing
the paper did look at the predictions that they gave.  The results were
similar and would not have resulted in different dosing instructions (my
opinion only).  Obviously peaks and troughs were slightly different, and that
could be important for drugs with a narrow therapeutic index.

Kind regards

Janet
_______________________________________________________

From: "Nick Holford" n.holford@auckland.ac.nz
Subject: Re: [NMusers] Model building question
Date: Tue, March 1, 2005 4:24 pm 

Mark,

I think you should have read the rest of the paragraph that I wrote before throwing
an exception. I was not advocating that all models should be built without thought
to sequence.

In the particular case at hand I proposed a strategy for building a model based on
my prior beliefs about what is important, i.e. BOV needs to be sorted out first, then a
fixed effect of occasion. This is the same strategy I would use for exploring
other covariates, i.e. fit the random effect first, then the fixed effect; e.g. fit the
total population parameter variability (PPV) first, then add covariate fixed effects
in some biologically sensible sequence in order to see if they can reduce PPV.
Minimal or no reduction in PPV is a simple performance criterion that can be used to
reject inclusion of a covariate despite a moderate fall in OBJ. 
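
As a made-up numerical illustration of that criterion: if the base model gives an
OMEGA for CL of 0.16 (a PPV of about 40%) and adding a weight effect, e.g.

   TVCL = THETA(1) * (WT/70)**THETA(2)   ; WT column and allometric form illustrative
   CL   = TVCL * EXP(ETA(1))

drops it to 0.09 (about 30% PPV), the covariate has explained a worthwhile part of
the unexplained variability. If OMEGA hardly moves, the covariate is not doing much
useful work even if OBJ falls.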

For clearance I think weight and renal function are primary while race, age and sex
are secondary. I use biology to avoid the collinearity trap (e.g. weight and age in
children). After building a model with a sensible a priori structure I might then, if
I had time, do some empirical exploratory analysis, but I am not a fan of automated
blind searches (e.g. including weight in both kg and lb!) :-)

Finally, model evaluation should depend on some performance check other than a
change in OBJ,  covariance step success, etc. Janet's comments on the lack of any
performance difference ("The results were similar and would not have resulted in
different dosing instructions") despite building different models based on OBJ
criteria support this recommendation.

Nick
--
Nick Holford, Dept Pharmacology & Clinical Pharmacology
University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand
email:n.holford@auckland.ac.nz tel:+64(9)373-7599x86730 fax:373-7556
http://www.health.auckland.ac.nz/pharmacology/staff/nholford/
_______________________________________________________

From: "Steve Duffull" sduffull@pharmacy.uq.edu.au
Subject: RE: [NMusers] Model building question 
Date: Tue, March 1, 2005 6:30 pm

Hi Janet 


I think that you raise a key issue about the predictive performance of
the model.  It would seem to me (IMHO) that models developed from a
different model building order, but ultimately ending up at a
similar global level of complexity, and therefore flexibility, would
probably describe the data that were used to generate the model
equivalently well.  The prediction question (at least in my mind) is
really about how well the inference from the model translates to new
data that have arisen under different experimental conditions, with most
likely a different underlying distribution of covariates.

In this case, it is possible that different models may predict quite
differently - and therefore the order of model building may play a
significant role for future inference.

Regards

Steve
========================================
Stephen Duffull
School of Pharmacy
University of Queensland
Brisbane 4072
Australia
Tel +61 7 3365 8808
Fax +61 7 3365 1688
University Provider Number: 00025B
Email: sduffull@pharmacy.uq.edu.au
www: http://www.uq.edu.au/pharmacy/sduffull/duffull.htm
PFIM: http://www.uq.edu.au/pharmacy/sduffull/pfim.htm
MCMC PK example: http://www.uq.edu.au/pharmacy/sduffull/MCMC_eg.htm
=======================================
_______________________________________________________

From: mark.e.sale@gsk.com 
Subject: Re: [NMusers] Model building question 
Date: Wed, March 2, 2005 9:51 am

Thanks Nick, I did read your entire comment - I always study your
comments carefully and keep a cross-referenced database of them
for whenever I need some inspiration ; - ).

 I think we can agree on this: 

Prior knowledge should form the basis of the model 
Model validation/qualification should be based at least partly on predictive performance. 

But, two other issues: 
One, we still do hypothesis tests occasionally.  In the
case of hypothesis tests, sequence may be very important. 
Second, we also do simulation.  The "domain" of interest across
which we simulate frequently includes specific covariates, e.g.,
age, race, gender, weight.  If your model doesn't include that covariate,
obviously you'll find that it has no influence on the
outcome.  So, in that regard, which covariates end up in the final
model does matter - not just whether the line goes through the points.
(Of course, if you want to simulate across a range of a covariate, you
should include that covariate in the model regardless of whether it
passes some arbitrary hypothesis test P value.) 

What we might disagree on is how readily one should abandon a prior
belief based on new data.  From my personal experience, I've found my
prior beliefs to be frequently, perhaps usually, wrong.  Others may have
different experience.  Because I am usually wrong about things (ask my
wife or kids), I am always ready to at least refine, and frequently
ready to completely discard, my prior beliefs.   

Mark Sale M.D.
Global Director, Research Modeling and Simulation
GlaxoSmithKline
919-483-1808
Mobile 
919-522-6668 
_______________________________________________________