From:"Luciane Velasque" 
Subject:[NMusers] large CV
Date:Tue, 5 Mar 2002 10:25:56 -0300

Hi !
 

I am modeling the population pharmacokinetics of didanosine. I have 72 volunteers
who took one oral dose of 200 mg. Thirteen blood samples were obtained
at times 0, 0.25, 0.5, 0.75, 1.0, 1.5, 2, 2.5, 3, 4, 6, 8, and 10.

 

The control statements are as follows:

 

$SUBROUTINES ADVAN4 TRANS4

$PK
TVCL=THETA(1)
CL=TVCL*EXP(ETA(1))

TV2=THETA(2)
V2=TV2+ETA(2)

TVQ=THETA(3)
Q=TVQ*EXP(ETA(3))

TVV3=THETA(4)
V3=TVV3*EXP(ETA(4))

TVKA=THETA(5)
KA=TVKA*EXP(ETA(5))

S2=V2
S3=V3

K=CL/V2
K23=Q/V2
K32=Q/V3

$ERROR
Y = F + F*ERR(1)

$THETA (0,120) (0,5) (0,30) (0,50)

$OMEGA 1 1 1 1
$SIGMA 1

$ESTIMATION MAXEVAL=9999 SIGDIGITS=2 POSTHOC

$COVARIANCE

 

 

The estimated OMEGA(V2) is very large and the CV = SQRT(OMEGA(V2))*100 = 1600%.

Why is this happening?

 

Thanks in advance. 

 

Luciane
_______________________________________________________

From:"Bachman, William" 
Subject:RE: [NMusers] large CV
Date:Wed, 5 Mar 2003 08:50:21 -0500

A couple of possibilities:

1. Interindividual variability in V2 may actually be large.
2. You might not have the information in your data to determine the interindividual variability
in V2 (interindividual variability in the peripheral parameters is often poorly
determined; you may want to omit the ETAs on V2 and Q, as sketched after this list).
3. With dense data and high variability, use of the FOCE method is recommended (METHOD=1).
4. Also, use INTERACTION with METHOD=1 if possible.
5. Unless you have observations in the peripheral compartment, you don't need S3=V3 (but it won't
have an effect on your model if you leave it in).
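As an illustration of point 2 (the renumbering below is mine, not from the original control stream), a reduced $PK without ETAs on V2 and Q might look like:

$PK
CL=THETA(1)*EXP(ETA(1))  ; keep IIV on CL
V2=THETA(2)              ; IIV on V2 omitted
Q=THETA(3)               ; IIV on Q omitted
V3=THETA(4)*EXP(ETA(2))  ; remaining ETAs renumbered
KA=THETA(5)*EXP(ETA(3))
S2=V2
K=CL/V2
K23=Q/V2
K32=Q/V3

$OMEGA 1 1 1             ; one diagonal element per remaining ETA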
 

William J. Bachman, Ph.D. 
GloboMax LLC 
7250 Parkway Drive, Suite 430 
Hanover, MD 21076 
410-782-2212 
bachmanw@globomax.com
_______________________________________________________

From:"Farrell, Colm" 
Subject:RE: [NMusers] large CV
Date:Wed, 5 Mar 2003 14:10:22 -0000

The control stream shows an additive structure for the IIV on V2, whereas the
calculation of the associated CV assumes a proportional/exponential structure.
 

Colm Farrell 
GloboMax LLC
_______________________________________________________

From:"Kowalski, Ken" 
Subject:RE: [NMusers] large CV
Date:Wed, 5 Mar 2003 09:12:46 -0500

Luciane,
 
FO often estimates large CVs with dense data. You should try FOCE, and since you are using
a constant CV residual error model you should also use the interaction option
(i.e., use METHOD=1 INTERACTION on the $EST statement).
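For example, keeping your MAXEVAL and SIGDIGITS settings, the record would become:

$ESTIMATION METHOD=1 INTERACTION MAXEVAL=9999 SIGDIGITS=2

(POSTHOC is not needed here; conditional estimation already produces the individual ETA estimates.)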
 
Good luck.
 
Ken
_______________________________________________________

From:"atul" 
Subject:Re: [NMusers] large CV
Date: Wed, 5 Mar 2003 09:26:09 -0800

Hello Luciane,

Try FOCE. It will result in much better estimates. Also look at the correlations between
the different parameter estimates and include them in the model.
 
Venkatesh Atul Bhattaram
Post-doctoral Fellow
University of Florida
Gainesville-32610
_______________________________________________________

From:"Howard Lee" 
Subject: RE: [NMusers] large CV
Date: Wed, 5 Mar 2003 09:23:15 -0500

Dear Luciane,

 

You modeled 'additive' interindividual variability for V2 as follows:

 

TV2=THETA(2)
V2=TV2+ETA(2)

 

Therefore, SQRT(OMEGA(V2)) is an SD, and your CV (%) is [SD/THETA(2)]*100 = [SQRT(OMEGA(V2))/THETA(2)]*100,
which I think will give you a smaller CV for V2.
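Writing OMEGA(V2) as $\omega^2$ and THETA(2) as $\theta_2$, the two parameterizations give (standard results, not specific to your fit):

$$\mathrm{CV}_{\text{exponential}}(\%) = \sqrt{e^{\omega^2}-1}\times 100 \approx \sqrt{\omega^2}\times 100, \qquad \mathrm{CV}_{\text{additive}}(\%) = \frac{\sqrt{\omega^2}}{\theta_2}\times 100,$$

so SQRT(OMEGA(V2))*100 is only a CV when the ETA enters exponentially.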

 

Hope this helps.

Thank you.

 

Howard

 

 

Howard Lee, MD, PhD
Assistant Professor
Center for Drug Development Science
Department of Pharmacology, Georgetown University
School of Medicine, Box 571441
Washington, DC 20057-1441
USA
Tel: 202-687-8198
Fax: 202-687-0193
_______________________________________________________

From:Iñaki Fernández de Trocóniz 
Subject:Re: [NMusers] large CV
Date:  Wed, 05 Mar 2003 15:30:27 +0100

Dear Luciane,

V2 has been coded as TV2+ETA(2) instead of TV2*EXP(ETA(2)), so the expression you are
using to compute the CV is not appropriate; in your case the units of the variance depend
on the units of the parameter.

In addition, you have non-sparse data, so you might consider using the FOCE with
INTERACTION estimation method. It is possible that the estimates of inter-subject
variability will then be more realistic.

Best regards,

Iñaki.
_______________________________________________________

From:"Sam Liao" 
Subject: RE: [NMusers] large CV
Date:Wed, 5 Mar 2003 09:25:42 -0500

Dear Luciane:
 
Two more possibilities:
1) You don't have initial estimates in $THETA and $OMEGA for KA (see the sketch below).
2) It may well be that the run converged to a local minimum. What do the diagnostic plots look like?
In my experience, the initial estimates for a two-compartment model are very critical.
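For point 1, the records would need a fifth element for KA, for example (the 1.5 and the final 1 are only placeholder initial estimates, not values from this data set):

$THETA (0,120) (0,5) (0,30) (0,50) (0,1.5)
$OMEGA 1 1 1 1 1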

Best regards,

Sam Liao, Ph.D.
PharMax Research
PO Box 1809,
20 Second Street,
Jersey City, NJ 07302
phone: 201-7983202
efax: 1-720-2946783
_______________________________________________________

From:Chuanpu Hu
Subject: RE: [NMusers] large CV
Date:Wed, 5 Mar 2003 09:44:25 -0500

Luciane, 

You used a lognormal distribution for every parameter except V2, where you used an additive normal. In that case,
SQRT(OMEGA(V2))*100
depends on the unit of V2 and is not the CV of V2. You may want to rethink that model.

Chuanpu 
--------------------------------------------------------------------------
Chuanpu Hu, Ph.D.
Research Modeling and Simulation
Clinical Pharmacology Discovery Medicine
GlaxoSmithKline
P.O. Box 13398
Five Moore Drive
Research Triangle Park, NC 27709
Tel: 919-483-8205  
Fax: 919-483-6380
--------------------------------------------------------------------------

_______________________________________________________

From:"Alice I Nichols" 
Subject: RE: [NMusers] large CV
Date:Wed, 5 Mar 2003 10:34:26 -0800

Dear Lucianne,
 
Your model may be overspecified given the available data. I wanted to add that
an essential step in this process is to check the standard error for this parameter
(ETA(2)) to see if the estimate you obtain is meaningful. Please look over the SEs you get for
all your parameters. If a parameter has a very large SE and its estimate is having minimal
impact on the model, you may need to drop that parameter from your model.
 
Alice
 
Alice Nichols, PhD
Hawthorne Research and Consulting, INC
132 Hawthorne Rd
King of Prussia, PA 19406
PH:610-878-9112 / FX:610-878-9113
nichols@bellatlantic.net
_______________________________________________________

From:"Luciane Velasque" 
Subject: Re: [NMusers] large CV
Date: Tue, 5 Mar 2002 12:21:02 -0300

When I use FOCE I obtain the message:
 
0MINIMIZATION SUCCESSFUL
 NO. OF FUNCTION EVALUATIONS USED:  360
 NO. OF SIG. DIGITS IN FINAL EST.:  2.3
 
 ETABAR IS THE ARITHMETIC MEAN OF THE ETA-ESTIMATES,
 AND THE P-VALUE IS GIVEN FOR THE NULL HYPOTHESIS THAT THE TRUE MEAN IS 0.
 
 ETABAR:   -.18E-03   .14E-05  -.26E-07  -.46E-06   .87E-03
 
 P VAL.:    .10E+01   .94E+00   .72E+00   .80E+00   .92E+00
0R MATRIX ALGORITHMICALLY NON-POSITIVE-SEMIDEFINITE
 BUT NONSINGULAR
0COVARIANCE STEP ABORTED
 
What should I do?

Thanks.

P.S. In the control statements:

V2=TV2*EXP(ETA(2))
_______________________________________________________

From:"Bachman, William" 
Subject:  RE: [NMusers] large CV
Date: Wed, 5 Mar 2003 10:37:02 -0500

Your covariance step simply aborted for the reason stated (see Manual V, p. 145). What would I do?
The first thing I would do is look at the omega estimates (the interindividual variance estimates); if any
are very small (approaching zero, e.g. E-05) or very large, I would remove them from the model
because they are poorly estimated.

William J. Bachman, Ph.D. 
GloboMax LLC 
7250 Parkway Drive, Suite 430 
Hanover, MD 21076 
410-782-2212 
bachmanw@globomax.com 

_______________________________________________________

From:"Venkatesh Atul Bhattaram" 
Subject: Re: [NMusers] large CV
Date:Wed, 5 Mar 2003 11:08:40 -0500

Hello Luciane,

Clearly some of the ETA estimates are poor. You might want to fix them to zero, or to a previously
reported value if the subjects belong to the same population and not to any special population. I would
also explore plots of the different ETAs and try to reduce the "dimensionality" of the model. You will
find that when you plot the ETAs against each other and project onto the axes, most of the variability will
be explained by one of the two parameters, and you will also notice the scale differences. So simplification
of the model looks like a good solution, either by fixing ETAs or by introducing correlations, for which you
might need some support in terms of covariate information.
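For example, to estimate a covariance between ETA(1) and ETA(2) (placeholder initial values; ETAs in a block must be consecutively numbered), the $OMEGA records could be written as:

$OMEGA BLOCK(2)
 1
 0.1 1
$OMEGA 1 1 1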
 
Venkatesh Atul Bhattaram
Post-doctoral Fellow
University of Florida
Gainesville-32610
_______________________________________________________

From:VPIOTROV@PRDBE.jnj.com
Subject:RE: [NMusers] large CV
Date:Thu, 6 Mar 2003 11:20:59 +0100

When selecting ETAs to be excluded, I would not recommend relying on the omega estimates. That way you
could exclude a random effect that is essential. A better way (although not ideal either) is to sequentially
run a series of reduced models with each ETA excluded one at a time and then compare the MOF
values. ETAs associated with an insignificant increase in MOF can be safely excluded.
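Assuming the usual likelihood-ratio approximation for nested models, dropping one ETA (one degree of freedom) would count as insignificant at the 5% level when

$$\Delta\mathrm{MOF} = \mathrm{MOF}_{\text{reduced}} - \mathrm{MOF}_{\text{full}} < \chi^2_{1,\,0.95} \approx 3.84.$$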
 

Best regards, 
Vladimir
_______________________________________________________