From: "Sreenivasa Rao Vanapalli" <svanapal@blue.weeg.uiowa.edu>

Date: Wed, 9 May 2001 17:40:55 -0500

Are there any standards for deciding which parameters we should use in model building? My direct question: is there any difference between ADVAN4 TRANS4 and ADVAN4 TRANS1 (for a two-compartment model)? Why can't we use distribution rate constants in the model? Why clearance only, and not the elimination rate constant?

Sreenivasa Rao Vanapalli, Ph.D,

Janssen Postdoctoral Research Scholar,

S411 PHAR, College of Pharmacy,

University of Iowa, Iowa City, IA-52242

From: Nick Holford <n.holford@auckland.ac.nz>

Subject: Re: Parameterization!!!

Date: Thu, 10 May 2001 12:06:28 +1200

The elimination rate constant has no independent physiological correlate in the body if you are trying to describe drug concentrations. Volume of distribution and clearance have clear physical analogues. There are also well understood relationships which affect V and CL in different ways, e.g. body size and renal function, so that the ratio CL/V (which defines the elimination rate constant for a 1 cpt model) is not a constant if one compares different people. A major application of population PK analysis is to discover and describe these differences between people.

Parameterizing your model in terms of an elimination rate constant rather than CL and V will only make things harder to understand if you introduce covariates to predict differences in CL and V. It gets much more difficult if you want to use a 2 compartment model. So my advice is to keep things simple and learn to think in terms of clearance and volume and not in terms of rate constants (which are not constant!).
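Nick's point that CL/V is not constant across people can be sketched numerically. The following Python snippet (an illustration with made-up typical values and conventional allometric exponents, not anything from the original post) scales CL and V differently with body weight and shows how K = CL/V drifts with size:

```python
# Illustrative sketch: allometric scaling with conventional exponents
# (0.75 for clearance, 1.0 for volume); all numbers are hypothetical.
def cl_v_k(wt, cl_std=10.0, v_std=100.0, wt_std=70.0):
    """Return CL (L/h), V (L) and K = CL/V (1/h) for a subject of weight wt (kg)."""
    cl = cl_std * (wt / wt_std) ** 0.75
    v = v_std * (wt / wt_std) ** 1.0
    return cl, v, cl / v

for wt in (10.0, 70.0, 140.0):
    cl, v, k = cl_v_k(wt)
    print(f"WT={wt:5.1f} kg  CL={cl:6.2f} L/h  V={v:6.1f} L  K={k:6.4f} 1/h")
```

Small subjects end up with a larger K (shorter half-life) even though CL and V each follow simple, predictable size relationships.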

Nick Holford, Divn Pharmacology & Clinical Pharmacology

University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand

email:n.holford@auckland.ac.nz tel:+64(9)373-7599x6730 fax:373-7556

http://www.phm.auckland.ac.nz/Staff/NHolford/nholford.htm

From: "Jean-Xavier.Mazoit@kb.u-psud.fr" <jean-xavier.mazoit@kb.u-psud.fr>

Subject: Re: Parameterization!!!

Date: Thu, 10 May 2001 08:25:33 +0200

I would have a less definite opinion.

Indeed, if you think in terms of interindividual variability, it seems better to parameterize your model with more physiological parameters such as volume and clearance, and then use an error structure (the ETAs) directed at these parameters. However, 1) models are just a pale description of reality, and 2) it is important to remember that the TRANS utilities (other than TRANS1) transform CL and V back into rate constants!

Linear models (which are approximations of Michaelis-Menten kinetics and diffusion processes) are built with rate constants, not clearances. I think that it might be dangerous to always apply your principles when doing resampling, especially when one bootstraps the errors, not the data. If you prefer to parameterize your models in terms of CL and Vss, why not use the "non compartmental" approach, which is based on the same principles of linearity?

Faculté de Médecine du Kremlin-Bicêtre

(33) (0)1 45 21 34 41 (Hopital)

e-mail Jean-Xavier.Mazoit@kb.u-psud.fr

From: "Stephen Duffull" <sduffull@pharmacy.uq.edu.au>

Subject: RE: Parameterization!!!

Date: Thu, 10 May 2001 17:14:35 +1000

It is irrelevant that the model may use transformed parameter values (i.e. CLs and Vs into rate constants), since it is the CLs and Vs that are specifically being estimated (not the rate constants). Therefore it matters which parameters you are trying to estimate. For population models it is important to note that the information in an experiment (from the information matrix) is not invariant to parameterisation (this is in contrast to single subject experiments). It can be shown that the parameterisations for a 1 compartment iv-bolus model CL&K, V&K, CL&V (with respective ETAs) do not provide exactly the same information matrix (indeed they can be very different). It is possible for this reason that some parameterisations may appear to work better with some experimental designs than others...
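One way to see that these parameterisations (with their respective ETAs) really are different statistical models is a small simulation, sketched below in Python with assumed omega values: independent ETAs on CL and V force log K = log CL - log V to be more variable than either, and negatively correlated with log V, a structure that a V&K parameterisation with independent ETAs cannot express.

```python
import math
import random

random.seed(1)
omega_cl, omega_v = 0.3, 0.3   # SDs of ETA on log CL and log V (assumed values)

# Under a CL&V parameterisation with independent ETAs, K = CL/V inherits
# variability from both parameters and becomes correlated with V.
log_k, log_v = [], []
for _ in range(200_000):
    eta_cl = random.gauss(0.0, omega_cl)
    eta_v = random.gauss(0.0, omega_v)
    log_k.append(eta_cl - eta_v)   # deviation of log K (typical values cancel)
    log_v.append(eta_v)

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cov(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

var_k = var(log_k)
corr = cov(log_k, log_v) / math.sqrt(var_k * var(log_v))
print(f"Var(log K) = {var_k:.3f}   (theory: {omega_cl**2 + omega_v**2:.3f})")
print(f"Corr(log K, log V) = {corr:.3f}   "
      f"(theory: {-omega_v / math.sqrt(omega_cl**2 + omega_v**2):.3f})")
```

Since the two parameterisations imply different random-effect structures, it is not surprising that their information matrices (and hence the apparent quality of a design) can differ.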

However since CL & V do have some biological plausibility I agree with Nick that they seem preferable.

http://www.uq.edu.au/pharmacy/duffull.htm

From: Nick Holford <n.holford@auckland.ac.nz>

Subject: Re: Parameterization!!!

Date: Thu, 10 May 2001 22:25:53 +1200

> Indeed, if you think in term of interindividual variability, it seems

> better to reparameterize your model with more physiologic parameters such

> as volume and clearance, and then use an error structure (the ETAs)

> directed to these parameters.

> However, 1) models are just a pale description of reality,

> and 2) it is important to remember that the TRANS

> utilities (other than TRANS1) transform CL AND V back into rate constants !

Here we disagree. It is quite irrelevant what TRANS is used or what ADVAN, or even if one uses $PRED, which does not use any ADVAN or TRANS. The key issue about parameterization for population models is which parameters have random effects attached to them (as you say above), NOT the code that is used with those parameters to compute the model prediction.

> Linear models (which are approximations of Michaelis-Menten kinetics and

> diffusion processes) are built with rate constant, not clearance.

Wrong again. The following is a linear model and perfectly legal mixed effects model for use in NONMEM. Look! No rate constant!

Y=DOSE/V*EXP(-CL/V*TIME) + ERR(1)
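As a quick check (a Python sketch with arbitrary illustrative numbers), the expression can be evaluated directly and its linearity in dose verified; still no rate constant in sight:

```python
import math

def conc(dose, cl, v, t):
    """One-compartment bolus prediction written with CL and V only."""
    return dose / v * math.exp(-cl / v * t)

# Linearity (superposition): doubling the dose doubles the prediction.
c1 = conc(100.0, 5.0, 50.0, 4.0)
c2 = conc(200.0, 5.0, 50.0, 4.0)
print(c2 / c1)  # 2.0
```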

> that it might be dangerous to always use your principles when doing

> resampling, especially when one bootstraps the errors, not the data.

Would you like to be more specific about the dangers? And define what kind of bootstrap (parametric? non-parametric?) you have in mind? And why would the model parameterization make any difference for any kind of bootstrap?

> If you prefer to reparameterize your models to CL and Vss, why do not use

> the "non compartmental" approach which is based on the same principles of

> linearity?

I leave SHAM analysis methods to those who are not really interested in pharmacokinetic analysis. Non-linear pharmacokinetic models are much more fun. It's a shame so many drugs are so potent and do not stress elimination pathways. We need more drugs like ethanol to make pharmacokinetics interesting.

Nick Holford, Divn Pharmacology & Clinical Pharmacology

University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand

email:n.holford@auckland.ac.nz tel:+64(9)373-7599x6730 fax:373-7556

http://www.phm.auckland.ac.nz/Staff/NHolford/nholford.htm

From: "Jean-Xavier.Mazoit@kb.u-psud.fr" <jean-xavier.mazoit@kb.u-psud.fr>

Subject: Re: Parameterization!!!

Date: Thu, 10 May 2001 17:33:28 +0200

My English is not good enough to let me respond adequately to Nick. Indeed, I use compartmental models and parameterization with CL and V. However, I do not think that kinetic analysis is only for fun. It is primarily to make dosing better for patients. In their examples, both Nick and Stephen use a one-compartment model, which is easily made linear by a logarithmic transformation. With a two-compartment model, things are not so simple and, as Stephen pointed out, the Fisher matrix becomes very difficult to analyse in case of reparameterization with the first-order approximation made by NONMEM (I imagine that it is the same with second-order methods). It was in that sense that I addressed the problem of bootstrapping errors (non-parametric bootstrap, as opposed to bootstrapping the original data).

Also, I prefer to hold a more moderate opinion on modelling. I am never fully confident in models, and I often check their reliability by comparing their results with those of a simple non-compartmental analysis.

Thanks to both of you for this exciting discussion.

Faculté de Médecine du Kremlin-Bicêtre

(33) (0)1 45 21 34 41 (Hopital)

e-mail Jean-Xavier.Mazoit@kb.u-psud.fr

From: "Gibiansky, Leonid" <gibianskyl@globomax.com>

Subject: RE: Parameterization!!!

Date: Thu, 10 May 2001 08:02:12 -0400

As Nick mentioned, CL and V are often correlated. In the model development process, we start with the base model, look for the best base model, and then add covariates. For the base model, we have a choice between a correlated OMEGA and an uncorrelated one. A correlated OMEGA accounts for correlation that is hidden (due to covariates). Later, at the covariate step, this correlation can be partly or fully explained by covariates common to CL and V. So the question is which is better:

1. To use uncorrelated OMEGA for the base model, add all the covariates, and then try correlated OMEGA to explain remaining correlation;

2. To use full OMEGA for the base model, and then take correlation out if it is not needed on the base model step or for the final covariate model.

The concern is that with variant (1), correlation due to, say, WT could be more difficult to recover at the covariate step because the correlation would already be "explained" by the OMEGA structure. On the other hand, the common practice is to get the best base model, and this includes the best structure of the random effects.

The other related problem is that with a correlated OMEGA, adding a covariate to CL will affect V as well (through the correlation). I have tried adding the covariate to all correlated parameters in this situation; is there a better way to handle it?

From: Nick Holford [mailto:n.holford@auckland.ac.nz]

Sent: Wednesday, May 09, 2001 5:08 PM

Subject: Re: Parameterization!!!

I agree with Leonid's suggestion. I would also point out that if you do not use a BLOCK to allow correlation between your parameters your model is almost certainly more wrong than usual. I cannot imagine a realistic circumstance where CL and V would not be correlated (e.g. both will increase with increasing body size, or both will increase if F increases if the dose is oral) so always start with a BLOCK and only take it off if your data or other priors inform you that the assumption of no correlation is reasonable.
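The body-size argument can be sketched as a simulation (Python, with assumed exponents and variances, not anything estimated from real data): when WT drives both CL and V, their log-scale correlation is substantial until WT is taken out as a covariate.

```python
import math
import random

random.seed(2)

def corr(xs, ys):
    """Sample Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

log_cl, log_v, log_wt = [], [], []
for _ in range(50_000):
    wt = random.lognormvariate(math.log(70.0), 0.25)    # body weight, kg
    lw = math.log(wt / 70.0)
    log_cl.append(0.75 * lw + random.gauss(0.0, 0.2))   # CL ~ WT**0.75 * exp(eta1)
    log_v.append(1.0 * lw + random.gauss(0.0, 0.2))     # V  ~ WT**1.0  * exp(eta2)
    log_wt.append(lw)

print(f"corr(log CL, log V) ignoring WT : {corr(log_cl, log_v):.2f}")

# Residuals after taking WT out of each parameter (the 'covariate step'):
res_cl = [c - 0.75 * w for c, w in zip(log_cl, log_wt)]
res_v = [v - 1.0 * w for v, w in zip(log_v, log_wt)]
print(f"corr after adjusting for WT     : {corr(res_cl, res_v):.2f}")
```

A base model with a diagonal OMEGA cannot represent the first situation, which is why starting with a BLOCK is the safer default.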

> I would guess that the difference is due to the correlation of ETAs (you may

> check it by plotting individual estimates of ETA1 vs ETA2 or CL vs V2). Try

> block structure of the OMEGA matrix. Example:

> If you use correlated structure with TRANS1 and TRANS4 (correlating pairs of

> alternative parameters) you may get closer results.

> From: Bachman, William [mailto:bachmanw@globomax.com]

> Sent: Wednesday, May 09, 2001 4:26 PM

> To: 'Sreenivasa Rao Vanapalli'

> Subject: RE: Parameterization!!!

> Sometimes one parameterization can be more stable for a given set of data

> than another parameterization. It may be related to how the random errors

> enter into the model rather than the fixed effect parameters. You could try

> reparameterizing within TRANS1 to get the more important parameters like CL.

> (You already have KA AND V2). It's less likely the peripheral parameters

> will be important in your model. They are typically poorly defined. Also,

> once you begin to explain more of the variability in your data through

> addition of covariates, it may be possible to go back and try the TRANS4

> parameterization (this time incorporating the covariates you've discovered)

> and obtain a successful minimization comparable to the TRANS1 fit.

> From: Sreenivasa Rao Vanapalli [mailto:svanapal@blue.weeg.uiowa.edu]

> Sent: Wednesday, May 09, 2001 3:51 PM

> Subject: RE: Parameterization!!!

> Yes, I did try as you said. But the result is the same. I'm really wondering what

> is going on behind the scenes. The TRANS1 fit gives better estimates. With

> TRANS4 I tried fixing the VD value, but the V3 estimate became astronomical and

> so did KA, and the corresponding predicted values were more than 100 times the

> observed values!!! I'm really not sure what to do. I need to do some covariate

> effect studies once this model issue is settled.

> From: Bachman, William [mailto:bachmanw@globomax.com]

> Sent: Wednesday, May 09, 2001 2:47 PM

> To: 'Sreenivasa Rao Vanapalli'

> Subject: RE: Parameterization!!!

> How did the objective function values and the goodness of fit plots compare

> between TRANS1 and TRANS4? Are you sure you have a global minimum in both

> fits? (try different initial estimates to verify). If TRANS1 fit is

> better, try calculating new initial estimates for TRANS4 based on the TRANS1

> final estimates and the relationships between the two parameterizations.

> Also be aware of potential for flip-flop with your model.
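The conversion between the two parameterizations suggested above is simple algebra for ADVAN4 (K = CL/V2, K23 = Q/V2, K32 = Q/V3). A small Python helper (illustrative only, not part of NONMEM) makes the bookkeeping explicit:

```python
def trans4_to_trans1(cl, v2, q, v3, ka):
    """ADVAN4: convert TRANS4 (CL, V2, Q, V3, KA) to TRANS1 micro-constants."""
    return {"KA": ka, "K": cl / v2, "K23": q / v2, "K32": q / v3}

def trans1_to_trans4(k, k23, k32, ka, v2):
    """Inverse conversion; V2 must be supplied since under TRANS1 the volume
    enters only through the scale parameter (S2)."""
    q = k23 * v2
    return {"CL": k * v2, "V2": v2, "Q": q, "V3": q / k32, "KA": ka}

t1 = trans4_to_trans1(cl=10.0, v2=50.0, q=20.0, v3=100.0, ka=1.5)
print(t1)
print(trans1_to_trans4(t1["K"], t1["K23"], t1["K32"], t1["KA"], v2=50.0))
```

Final TRANS1 estimates converted this way make natural initial estimates for a TRANS4 run.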

> From: Sreenivasa Rao Vanapalli [mailto:svanapal@blue.weeg.uiowa.edu]

> Sent: Tuesday, May 08, 2001 5:51 PM

> Subject: Parameterization!!!

> I have data obtained after oral administration and trying to fit to a two

> compartment model with NONMEM. I know that the data fit to two comaprtment.

> Intitially I fitted the data with WinNonlin and now trying population

> compartmental model. When I tried with microconstants (KA, K12, K21, K, Vd)

> I could compare these estimates with WinNonlin values. But with clearance

> parameters, the estimates are quite different. The estimate for central

> compartment with clearance parameter (ADVAN4 TRANS4) was only 0.5 liters.

> Where as VD estimate with microconstant parameters (ADVAN4)was 13 liters.

> Same case with KA also. Can some one explain why this is happening?

> Sreenivasa Rao Vanapalli, Ph.D,

> Janssen Postdoctoral Research Scholar,

> S411 PHAR, College of Pharmacy,

> University of Iowa, Iowa City, IA-52242

Nick Holford, Divn Pharmacology & Clinical Pharmacology

University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand

email:n.holford@auckland.ac.nz tel:+64(9)373-7599x6730 fax:373-7556

http://www.phm.auckland.ac.nz/Staff/NHolford/nholford.htm

From: "KOWALSKI, KENNETH G. [PHR/1825]" <kenneth.g.kowalski@pharmacia.com>

Subject: RE: Parameterization!!!

Date: Thu, 10 May 2001 09:43:54 -0500

Good question. This topic came up at the Mufpada meeting last week. I take the following steps when I build population models:

1) Determine the structural model (e.g., 1 vs 2 comp)

2) Determine the variance structure for Omega and Sigma

3) Develop the covariate models

In step 3 I like to develop a full model with all the covariates in the model simultaneously. There is a lot of information that can be obtained from a full model fit, and the issue of whether one should use a diagonal or block Omega during the covariate model building step goes away if you start with the full model and work backwards. Of course there can be issues with convergence of a full model when the covariates are highly collinear. For example, I have a researcher who always wants to investigate BSA as well as WT on V and CL. Because BSA and WT are so highly correlated, it doesn't make sense to build a full model that includes both of these covariates. I usually include WT and then verify that any trends in the etas vs BSA are accounted for once WT is in the model. If we judiciously consider our covariates I think we can have more success in fitting full models.

In my experience where the correlation between CL and V is estimated to be quite high from a base model fit, I have yet to encounter a covariate or set of covariates included in the full model or final model that have strong enough signals to drive the correlation between CL and V to zero. Thus, I always work with the fullest block Omega that I can estimate from my base model and use that in fitting the full model and subsequent covariate models run in search of the most parsimonious final model. Because of this, even if I employ forward or forward/backward stepwise algorithms to build the covariate model, I tend to use the fullest Omega that I can estimate from my base model.

I will take this opportunity to make a plug for some research that my colleague, Matt Hutmacher, and I have been working on. Although stepwise procedures often find good-fitting models, they do have their deficiencies (particularly when there is high collinearity among the covariates), and there is no guarantee that they will find the best model among the 2^k possible models (all combinations of presence or absence of k covariate parameters). We have developed an algorithm that uses the estimates of the thetas and the corresponding covariance matrix of the estimates from a full model fit to approximate the likelihood ratio test statistic (the difference in the objective function between the restricted model and the full model) for all 2^k - 1 restricted models without having to run each of the restricted models in NONMEM. We use the algorithm (called WAM, for Wald's Approximation Method) to rank all 2^k possible models and then fit the top 10-15 models in NONMEM before deciding on a final model. Whereas stepwise procedures will find a single good-fitting model, the WAM algorithm gives a sense of the competing models that also have good fits. I gave a presentation at Mufpada last week comparing the WAM algorithm with the stepwise procedures on several data sets. In most cases the WAM algorithm performed as well as, if not better than, the stepwise procedures in finding a parsimonious model, and with substantially fewer NONMEM runs. Moreover, the NONMEM runs for the models that the WAM algorithm identifies as the "best models" are considerably more informative than the totality of the NONMEM runs determined by the stepwise procedures. The key to using this algorithm is the fitting of a full model. We have a paper describing this methodology that will appear in the June 2001 issue of JPP. We have SAS and S+ implementations that we will make available to anyone who is interested in trying it out.
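The core of the Wald approximation idea can be sketched in a few lines of Python (hypothetical theta estimates and covariance matrix; the authors' actual implementations are in SAS and S+): for each restricted model, the Wald statistic b'C^-1 b computed from the full-model fit approximates the likelihood ratio test statistic, so all restricted models can be ranked without refitting any of them.

```python
from itertools import combinations

def wald_stat(idx, theta, cov):
    """Wald statistic for setting theta[i] = 0 for i in idx: b' * inv(C) * b,
    where b and C are the corresponding sub-vector and sub-matrix."""
    b = [theta[i] for i in idx]
    c = [[cov[i][j] for j in idx] for i in idx]
    # Solve c x = b by Gauss-Jordan elimination (no numpy needed).
    n = len(b)
    a = [row[:] + [b[i]] for i, row in enumerate(c)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(n):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    x = [a[i][n] / a[i][i] for i in range(n)]
    return sum(bi * xi for bi, xi in zip(b, x))

# Hypothetical full-model covariate effects and their covariance matrix
# (e.g. WT-on-CL, SEX-on-CL, WT-on-V; numbers invented for illustration).
theta = [0.8, 0.1, 0.4]
cov = [[0.04, 0.00, 0.01],
       [0.00, 0.09, 0.00],
       [0.01, 0.00, 0.04]]

# Rank all non-empty restricted models by the approximate LRT statistic:
subsets = [s for r in (1, 2, 3) for s in combinations(range(3), r)]
for s in sorted(subsets, key=lambda s: wald_stat(s, theta, cov)):
    print(s, round(wald_stat(s, theta, cov), 2))
```

Restricted models with a small Wald statistic drop parameters the data do not support; only the top-ranked handful then need to be refit in NONMEM.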

From: Mats Karlsson <Mats.Karlsson@farmbio.uu.se>

Subject: Re: Parameterization!!!

Date: Thu, 10 May 2001 17:14:36 +0200

Just another possibility regarding parametrisation. If you look for covariate-parameter relations using post hoc etas (or parameters), it might be useful to look not only at the eta for CL and that for V (if those are the parameters we are interested in) but also to try to isolate the covariance and see if that correlates with any covariate (that is, can any covariate explain the correlation per se). One way of doing that is to use models like

CL=THETA(1)*EXP(ETA(1)+ETA(3))

V=THETA(2)*EXP(ETA(2)+ETA(3))

and then look at covariate models for ETA(3) (representing the covariance) as well as ETA(1) and ETA(2) (the uncorrelated variability in CL and V) and the sums ETA(1)+ETA(3) and ETA(2)+ETA(3). (I know this is not an entirely general model, but it is one that would most often be biologically reasonable for CL and V.)
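The shared ETA(3) induces a specific covariance structure: on the log scale, cov(log CL, log V) = omega3^2, which is necessarily non-negative. A quick numerical check (Python, with assumed omega values):

```python
import random

random.seed(3)
w1, w2, w3 = 0.2, 0.3, 0.25     # SDs of ETA(1), ETA(2), ETA(3) (assumed)

log_cl, log_v = [], []
for _ in range(200_000):
    e1 = random.gauss(0.0, w1)
    e2 = random.gauss(0.0, w2)
    e3 = random.gauss(0.0, w3)
    log_cl.append(e1 + e3)      # log-scale deviation of CL
    log_v.append(e2 + e3)       # log-scale deviation of V

n = len(log_cl)
mc, mv = sum(log_cl) / n, sum(log_v) / n
cov = sum((c - mc) * (v - mv) for c, v in zip(log_cl, log_v)) / n
print(f"cov(log CL, log V) = {cov:.4f}   (theory: omega3^2 = {w3**2:.4f})")
```

Because the shared variance enters both parameters, this construction can only produce non-negative correlations, which is exactly the restriction discussed in the follow-up posts.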

Div. of Pharmacokinetics and Drug Therapy

Dept. of Pharmaceutical Biosciences

From: "Gibiansky, Leonid" <gibianskyl@globomax.com>

Subject: RE: Parameterization!!!

Date: Fri, 11 May 2001 08:29:58 -0400

The parameterization

CL=THETA(1)*EXP(ETA(1)+ETA(3))

V=THETA(2)*EXP(ETA(2)+ETA(3))

imposes restrictions on the covariance matrix. For example, models with negative correlations, such as

CL=THETA(1)*EXP(ETA(1)+ETA(3))

V=THETA(2)*EXP(ETA(2)-ETA(3))

are not covered. A more general (and the most general) one would be

CL=THETA(1)*EXP(ETA(1)+ETA(3))

V=THETA(2)*EXP(ETA(2)+THETA(3)*ETA(3))

However, with this model one has an extra parameter, and it may cause problems at the covariance and/or POSTHOC steps (have you tried it? Is it really a problem?). Alternatively, one can try

V=THETA(2)*EXP(ETA(2)+THETA(3)*ETA(1))

In this case, ETA(1) is mainly responsible for the covariates present in CL, THETA(3) shows the correlation, and ETA(2) is mainly responsible for the covariates that are present only in V. Plots of ETA(1) and ETA(2) vs. covariates will then show what to include in the covariate model for each of the parameters. Do you have any experience with similar parameterizations?

From: Mats Karlsson <Mats.Karlsson@farmbio.uu.se>

Subject: Re: Parameterization!!!

Date: Sun, 13 May 2001 12:29:30 +0200

> CL=THETA(1)*EXP(ETA(1)+ETA(3))

> V=THETA(2)*EXP(ETA(2)+ETA(3))

> imposes restrictions on the covariance matrix. For example, the models with

> CL=THETA(1)*EXP(ETA(1)+ETA(3))

> V=THETA(2)*EXP(ETA(2)-ETA(3))

> are not covered.

Yes. One could certainly try this if the correlation is negative. I just presented the positive correlation because that is the most commonly expected one for CL and V.

> More general (and the most general) one would be

> CL=THETA(1)*EXP(ETA(1)+ETA(3))

> V=THETA(2)*EXP(ETA(2)+THETA(3)*ETA(3))

> However, with this model one has an extra parameter, and it may cause

> problems on the covariance and/or POSTHOC steps (have you tried it ? is it

I would use the following instead (which I think is as general but with only three parameters):

CL = THETA(1)*EXP(THETA(3) * (ETA(1) + THETA(5)*ETA(3)))

V = THETA(2)*EXP(THETA(4) * (ETA(2) + SQRT(THETA(5)*THETA(5))*ETA(3)))

where the omegas for ETA(1), ETA(2) and ETA(3) are fixed to 1. If THETA(5) is negative, the correlation is negative, whereas if it is positive the correlation is positive.
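A numerical check of this claim (Python, with assumed THETA values and the ETA variances fixed to 1 as stated): under this parameterization the correlation between log CL and log V works out to sign(THETA(5)) * THETA(5)^2 / (1 + THETA(5)^2), so its sign is the sign of THETA(5).

```python
import math
import random

random.seed(4)
th3, th4, th5 = 0.2, 0.3, -0.8   # assumed values; th5 < 0 -> negative correlation

log_cl, log_v = [], []
for _ in range(200_000):
    e1, e2, e3 = (random.gauss(0.0, 1.0) for _ in range(3))   # omegas fixed to 1
    log_cl.append(th3 * (e1 + th5 * e3))
    log_v.append(th4 * (e2 + abs(th5) * e3))   # SQRT(THETA(5)*THETA(5)) = |THETA(5)|

n = len(log_cl)
mc, mv = sum(log_cl) / n, sum(log_v) / n
num = sum((c - mc) * (v - mv) for c, v in zip(log_cl, log_v))
den = math.sqrt(sum((c - mc) ** 2 for c in log_cl)
                * sum((v - mv) ** 2 for v in log_v))
r = num / den
print(f"corr = {r:.3f}   (theory: {math.copysign(th5**2 / (1 + th5**2), th5):.3f})")
```

Note one consequence of this form: the correlation magnitude is bounded by THETA(5)^2 / (1 + THETA(5)^2) < 1, which is usually harmless in practice.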

> V=THETA(2)*EXP(ETA(2)+THETA(3)*ETA(1))

> In this case, ETA(1) is mainly responsible for the covariates present in CL,

> THETA(3) shows the correlation, and ETA(2) is mainly responsible for the

> covariates that are present only in V. Plots of ETA(1) and ETA(2) vs.

> covariates then will show what to include into covariate model for each of

> the parameters. Do you have any experience with similar parameterizations ?

Only with the one I mention above which seems to work fine.

Div. of Pharmacokinetics and Drug Therapy