From: Mats Karlsson <Mats.Karlsson@biof.uu.se>

Subject: Rate constants

Date: Mon, 27 Nov 2000 22:28:05 +0100

Those interested in real problems can stop reading here!

Usually we model rate constants (ka, keo, kout, ...) with exponential models (ka=THETA(.)*EXP(ETA(.))). Is there any reason to believe that this should be better than modelling the corresponding half-lives or transit/residence times instead (e.g. MAT=THETA(.)*EXP(ETA(.)))? I usually find it easier to keep track of how reasonable parameters are on the time scale rather than the inverse-time scale, so in that respect the time scale would be preferable.
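The reparameterization itself is trivial: for a first-order process the rate constant, mean time and half-life are simple transforms of one another. A quick sanity check in Python (the value of ka is hypothetical):

```python
import math

# Illustration only (not part of any NONMEM run): the alternative
# parameterizations are reciprocal/log(2) transforms of each other.
ka = 1.4                     # absorption rate constant, 1/h (hypothetical)
mat = 1.0 / ka               # mean absorption time, h
t_half = math.log(2) / ka    # absorption half-life, h

print(f"ka = {ka:.3f} 1/h -> MAT = {mat:.3f} h, t1/2 = {t_half:.3f} h")
```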

Best regards,

Mats

--

Mats Karlsson, PhD

Professor of Biopharmaceutics and Pharmacokinetics

Div. of Biopharmaceutics and Pharmacokinetics

Dept of Pharmacy

Faculty of Pharmacy

Uppsala University

Box 580

SE-751 23 Uppsala

Sweden

phone +46 18 471 4105

fax +46 18 471 4003

mats.karlsson@biof.uu.se

*****

From: Nick Holford <n.holford@auckland.ac.nz>

Subject: Re: Rate constants

Date: Tue, 28 Nov 2000 12:05:02 +1300

I have not been using rate constants in population analyses for a long time now. I have been using Tabs (absorption half-life) and Teq (equilibration, aka "effect compartment", half-life) instead of KA and Keq (aka Keo), for the same reason Mats mentions. It is easier for my (limited) brain capacity to understand parameters which have units of time rather than 1/time.

The only downside is that somewhere in the code I have to add this:

IF (NEWIND.LE.1) LN2=LOG(2.0) ; for computational efficiency

and then, for instance, to keep PREDPP ADVAN2 happy:

TABS=THETA(PopTabs)*EXP(etaTabs)

KA=LN2/TABS

As for the suggestion that we consider the population model for Mats:

MATS=THETA(PopMats)*EXP(etaMats)

I think that we should be grateful that Mats is an individual and not a population :-)

Nick

--

Nick Holford, Divn Pharmacology & Clinical Pharmacology

University of Auckland, 85 Park Rd, Private Bag 92019, Auckland, New Zealand

email:n.holford@auckland.ac.nz tel:+64(9)373-7599x6730 fax:373-7556

http://www.phm.auckland.ac.nz/Staff/NHolford/nholford.htm

*****

From: "Piotrovskij, Vladimir [JanBe]" <VPIOTROV@janbe.jnj.com>

Subject: RE: Rate constants

Date: Tue, 28 Nov 2000 09:07:33 +0100

I think the advantage of converting rate constants into the corresponding time parameters is not only that it helps to keep track of how reasonable the parameters are. My experience is that rate constants usually have a highly skewed distribution in a population, which cannot be adequately described by a log-normal distribution. I would guess that time parameters should be less skewed. With the FO approximation this may have minimal impact; FOCE estimates, however, will be significantly affected and perhaps less biased.
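As a rough numerical illustration of how a skewed rate-constant distribution can arise (Python, standard library only, with arbitrary population values): if the half-lives are roughly symmetric in the population, their reciprocals are right-skewed.

```python
import random
import statistics

random.seed(1)

def skewness(xs):
    # moment coefficient of skewness: m3 / m2**1.5
    m = statistics.fmean(xs)
    m2 = statistics.fmean([(x - m) ** 2 for x in xs])
    m3 = statistics.fmean([(x - m) ** 3 for x in xs])
    return m3 / m2 ** 1.5

# Suppose half-lives are roughly symmetric in the population
# (normal, truncated away from zero; mean and SD are arbitrary):
thalf = [t for t in (random.gauss(8.0, 1.6) for _ in range(20000)) if t > 0.5]
ke = [0.693 / t for t in thalf]

print("skewness of t1/2:", round(skewness(thalf), 2))
print("skewness of ke:  ", round(skewness(ke), 2))
# The reciprocal transform turns a symmetric distribution into a
# right-skewed one, which is one way skewed rate constants can arise.
```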

Best regards,

Vladimir

----------------------------------------------------------------------

Vladimir Piotrovsky, Ph.D.

Janssen Research Foundation

Clinical Pharmacokinetics (ext. 5463)

B-2340 Beerse

Belgium

Email: vpiotrov@janbe.jnj.com

*****

From: "James Wright" <damage128@hotmail.com>

Subject: Re: Rate constants..are dead?

Date: Tue, 28 Nov 2000 13:06:41 -0000

Dear nmusers,

I am very pro-half-life (for linear models) because to me it is interpretable, and not just to pharmacokineticists.

In the population context, the distributional assumptions are of some interest however.

Physiologically, the half-life is a ratio of volume and clearance. Positively constrained ratios will tend to be lognormal. As half-life and ke are reciprocally related, if one is lognormally distributed then so is the other (taking the reciprocal corresponds to reflection on the log scale), but with a different mean and variance. I do not intend to contradict Vladimir's practical advice, as lognormality may be a better approximation on the time scale; I have never tried it, so I do not know. The lognormality of such parameters is discussed in detail in

Julious SA and Debarnot CAM. Why are pharmacokinetic data summarized by arithmetic means? Journal of Biopharmaceutical Statistics 2000; 10: 55-71.
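The reflection argument is easy to check numerically: if log X ~ N(mu, sigma^2), then log(1/X) = -log X ~ N(-mu, sigma^2). A quick sketch (mu and sigma are arbitrary):

```python
import math
import random
import statistics

random.seed(0)
mu, sigma = 0.3, 0.25          # arbitrary log-scale mean and SD
x = [random.lognormvariate(mu, sigma) for _ in range(50000)]

log_inv = [math.log(1.0 / xi) for xi in x]   # equals -log(xi) exactly

# The reciprocal is again lognormal: its log has mean ~ -mu and SD ~ sigma.
print("mean of log(1/X):", round(statistics.fmean(log_inv), 3))
print("sd of log(1/X):  ", round(statistics.stdev(log_inv), 3))
```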

However, if you come to model a covariate on half-life (for example, if half-life is the parameter of clinical interest), then you should bear in mind that your distributional assumptions now describe perturbations about the model predictions. If you have chosen a model that is linear on the log scale, then the model will be equivalent for rate constants. Otherwise it won't be, and neither will your distributional assumptions.
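For example, a power (log-linear) covariate model on half-life translates exactly into a power model on the rate constant with the exponent negated, whereas an additive covariate model would not carry over. A small check with hypothetical values:

```python
import math

theta1, theta2 = 2.0, 0.75          # hypothetical population value and exponent
for wt in (50.0, 70.0, 90.0):
    thalf = theta1 * (wt / 70.0) ** theta2        # log-linear model on t1/2
    ke = math.log(2) / thalf
    # the same power law reappears on ke, with the exponent negated:
    ke_direct = (math.log(2) / theta1) * (wt / 70.0) ** (-theta2)
    assert abs(ke - ke_direct) < 1e-12
print("a power model on t1/2 is a power model on ke (exponent negated)")
```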

Regards, James

*****

From: "Wixley, Dick" <Dick.Wixley@solvay.com>

Subject: RE: Rate constants..are dead?

Date: Wed, 29 Nov 2000 10:11:48 +0100

Dear nmusers

I have found the discussion very interesting. I think there are two issues

*the chosen parameterization and distributional assumptions in estimation

*the chosen parameterization for practical scientific inference about the model and data, etc.

The former choice is important, since the closer the assumption of normality for the parameter(s) is to the truth, the better the performance of the estimation method (e.g. FO), and the closer the individual predictions. (The inverse Box-Cox power transformation provides a useful and flexible parameter transformation that is also bounded above zero.)

The sensible scale for thinking about the real-life situation is often not the rate constant. I have made a habit of presenting half-lives for all rate constants, and I agree with all the comments on this point.

It is interesting to speculate about the true distribution of rate constants. Some years ago I did some investigation of the distribution of plasma concentration measurements in pre-clinical PK and toxicokinetics. I found that the distribution usually lay somewhere between the lognormal distribution and the gamma distribution.

After intravenous dosing this gives a model:

E(Y) = exp(beta - alpha*time), or log[E(Y)] = beta - alpha*time, and

VAR(Y) = V*E(Y)**2, approximately.

The data could be analysed in the generalized linear model framework with a log link and gamma errors.

Alternatively, the transformation Z = log(Y + const) gave a model with mean linear in time and with constant variance, i.e.

E(Z) = beta - alpha*time, VAR(Z) = V', to a first-order approximation.

Either way, the linear model assumptions seemed to apply both within and between subjects in all the pre-clinical data sets I investigated. A normal distribution of rate constants therefore seems natural from these basic considerations.
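The constant-CV structure VAR(Y) = V*E(Y)**2 is exactly what a gamma GLM with a log link assumes. A small simulation sketch (Python, with arbitrary values for alpha, beta and V):

```python
import math
import random
import statistics

random.seed(2)
beta, alpha = 2.0, 0.3      # hypothetical intercept and elimination rate
V = 0.04                    # squared CV; the gamma shape is k = 1/V

for t in (1.0, 4.0, 8.0):
    mean = math.exp(beta - alpha * t)
    # a gamma with shape k and scale mean/k has E(Y) = mean, VAR(Y) = mean**2/k,
    # i.e. VAR(Y)/E(Y)**2 = V at every time point (constant CV)
    k = 1.0 / V
    y = [random.gammavariate(k, mean / k) for _ in range(40000)]
    cv2 = statistics.variance(y) / statistics.fmean(y) ** 2
    print(f"t={t}: E(Y)~{statistics.fmean(y):.3f}, VAR/E**2~{cv2:.4f}")
```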

But what about the constraint alpha > 0? This messes up the normality assumption.

Subsequently in population PK in humans I have gained the impression that the "lognormal" assumption is not bad for elimination constants. Also, it is convenient and virtually essential since it naturally bounds alpha away from zero.

Also, when eta becomes small (CV < 10%), the log-normal distribution tends to normality. (For the lognormal, the larger eta is, the more skewed the distribution.)

For absorption rate constants after oral administration the situation is different: the between-subject distributions are arbitrary and highly variable.

Finally if a variable is lognormal, its inverse is lognormal.

An alternative to the lognormal that is more extreme is the (generalised) inverse Gaussian distribution. The reciprocal inverse Gaussian, or Wald, distribution is potentially interesting. Time permitting, it would perhaps be fruitful to explore this whole area more fully.

regards

Dick

*****

From: "Piotrovskij, Vladimir [JanBe]" <VPIOTROV@janbe.jnj.com>

Subject: RE: Rate constants..are dead?

Date: Mon, 11 Dec 2000 13:57:47 +0100

Dick,

Perhaps you also know how to implement, e.g., a gamma distribution in NONMEM? Or does somebody else know?

Best regards,

Vladimir

----------------------------------------------------------------------

Vladimir Piotrovsky, Ph.D.

Janssen Research Foundation

Human Pharmacokinetics (ext. 5463)

B-2340 Beerse

Belgium

Email: vpiotrov@janbe.jnj.com