From: "Stephen Duffull" <sduffull@fs1.pa.man.ac.uk>

Date: Wed, 7 Oct 1998 11:50:20 +0100

I was wondering if anyone could help me with a problem that I seem to run into when using ADVAN6.

Briefly, I have 4302 data records. The model I am using has oral/iv dosing with parent and metabolite (both 2 compartment). So with a gut compartment there are a total of 5 compartments. There are 11 parameters (some fixed), 9 etas & 4 epsilons. The problem takes about 2+ days to run (TOL=3). The error that almost always occurs is:

NUMERICAL DIFFICULTIES WITH INTEGRATION ROUTINE.

MAXIMUM NO. OF EVALUATIONS OF DIFFERENTIAL EQUATIONS, 100000,

Sometimes it claims to have successfully converged - although when looking at the fits this does not appear to be the case. Any ideas how to get around this one?

Email sduffull@fs1.pa.man.ac.uk

From: alison@c255.ucsf.EDU (ABoeckmann)

Date: Wed, 7 Oct 1998 15:24:17 -0700 (PDT)

Here are some general guidelines.

The number 100000 was chosen arbitrarily to prevent endless loops when the step size for an integration is driven to very low values, effectively 0.

It can stop a run when the convergence is slow.

It is possible that convergence is slow because the model is not very good. If you cannot improve the model, and it appears that the search was indeed making reasonable progress towards a minimum, here are three suggestions.

You can try ADVAN8 and/or ADVAN9 instead of ADVAN6; either of these is better able to handle stiff systems of differential equations.
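As a sketch, switching to the stiff integrator is a one-record change in the control stream (TOL=3 retained from the original run; all other records unchanged):

  $SUBROUTINES ADVAN8 TOL=3

ADVAN9 can be substituted in the same way.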

You can divide the long run into several smaller runs, using the MAXEVALS option of the $ESTIMATION record. Use Model Specification files so that each run is continued by the subsequent run. This will keep each run under the limitation of 100000 evaluations of the differential equations.
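A minimal sketch of such a chain (the file names and the MAXEVALS value are illustrative only; with $MSFI, the $THETA, $OMEGA and $SIGMA records are omitted because the search state comes from the file):

  ; first run
  $ESTIMATION METHOD=1 MAXEVALS=2000 MSFO=run1.msf

  ; continuation run: restarts the search where the first run stopped
  $MSFI run1.msf
  $ESTIMATION METHOD=1 MAXEVALS=2000 MSFO=run2.msf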

Alternately, you can increase this limit. In your $PK block, include the following verbatim code:

" COMMON /PRCOMG/ IDUM1,IDUM2,IMAX,IDUM4,IDUM5

" INTEGER IDUM1,IDUM2,IMAX,IDUM4,IDUM5

The constant 200000 gives you twice the default; you may use a larger or smaller value.

> Sometimes it claims to have successfully converged - although when looking

> at the fits this does not appear to be the case. Any ideas how to get

I think this is another question, right? The error message never appeared, the run appears to be ok, but the fit is biased. It is a question that is not easy to answer, but here are a few brief ideas.

Biased fits are discussed somewhat in Guide VII. It's possible that METHOD=1 (FOCE) may help, although run times and numerical difficulties will be much worse. If the variance of most of the etas is small and only one or two have large variances (and considerable variability in the posthoc etas), consider the HYBRID method, in which the well-behaved etas are held to 0 (using the ZERO option) and only the badly behaved etas are estimated by the FOCE method.

For information about HYBRID and ZERO, see Guide VIII ($ESTIM). See also Guide VII.
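A minimal sketch of such an estimation record (the eta numbering here is hypothetical; list in ZERO the etas to be held at 0):

  $ESTIMATION METHOD=HYBRID ZERO=(1,2,4,5,6,7,8) ; etas 3 and 9 estimated by FOCE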

From: imeineke@WisLAN1.med.uni-goettingen.de (Dr. Ingolf Meineke, Uni Goettingen, Klin. Pharm.)

Date: Thu, 8 Oct 1998 11:44:12 +0100

In my experience the ominous error message occurs during the optimization step when the algorithm is using some inappropriate THETAs. Therefore I simply use the NOABORT option in the $ESTIMATION statement and sooner or later things return to normal. Under these conditions the error message can appear more than once, so I think the number 100000 is not the cumulative maximum of allowed evaluations of differential equations during a run.
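In control-stream terms this is just one more option on the estimation record (METHOD and MAXEVALS shown only for context; use whatever options your own run already has):

  $ESTIMATION METHOD=1 MAXEVALS=9000 NOABORT ; continue the search despite recoverable errors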

see also http://regulus.PharBP.Med.Uni-Goettingen.DE/

From: alison@c255.ucsf.EDU (ABoeckmann)

Date: Thu, 8 Oct 1998 08:58:21 -0700 (PDT)

In response to Ingolf Meineke's note:

He is correct. The maximum of 100000 evaluations of the differential equations applies to a single call to the integrating routine (a single advance of the state vector). My suggestion to use MSF's to divide the run into smaller runs probably will not help.

> Sometimes it claims to have successfully converged - although when

> at the fits this does not appear to be the case. Any ideas how to get

I responded as if he was asking what to do about a poor fit after apparently successful minimization, but he wrote back to me the following:

> No, actually it was all part of the same problem (the error message occurred

> just after a claim of a successful minimisation).

After the Estimation Step, there are additional steps such as the Posthoc, Covariance and Table steps, and also additional passes through the data to compute values for the output report. Evidently it was during one of these steps that the limit was reached. This would be an ideal time to have created an MSFO in the estimation step, because it could be used to complete the final steps in a later run without repeating the estimation step.

Or, it may be that the posthoc step will always fail, because NONMEM is trying etas that are so far from 0 that the PK parameters are far from their population values, and the differential equations have become very ill-conditioned. In that case, it may help to use the EXIT statement to avoid totally unreasonable eta values.
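As a sketch (the parameter name CLM and the bound of 1000 are purely illustrative), an EXIT statement in $PK rejects a set of parameter values before it reaches the integrator; used together with NOABORT, error code 1 lets the search recover:

  $PK
   CLM = THETA(1)*EXP(ETA(1))
   IF (CLM.GT.1000) EXIT 1 100 ; reject implausible clearance values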

From: Pierre Maitre <maitre@cdg.ch>

Date: Fri, 09 Oct 1998 07:20:14 +0100

Your model is pretty complex. Your problem may come from bad (because they are unknown) initial estimates, and the program gets lost trying to find a minimum on a pretty flat surface.

I suggest that you split the problem and initially only fit the iv data to a simple model, in order to get clearance and volume of distribution. In a second step, you may add the oral data, but limit the fit to the parent drug only, and then in a third step you may use the full model that takes into account the metabolites. By doing so, in each step you will get parameter values that you can use as reliable initial estimates for the next step.
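For instance, once an earlier step has produced estimates, they can be carried forward as fixed values in the later control streams (all values below are hypothetical):

  $THETA 12.3 FIX  ; CL, from the iv-only fit
  $THETA 45.6 FIX  ; V, from the iv-only fit
  $THETA (0, 0.8)  ; KA, estimated in this step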

From: "Stephen Duffull" <sduffull@fs1.pa.man.ac.uk>

Date: Fri, 9 Oct 1998 09:06:43 +0100

Thanks for your comments. Indeed thanks to all who responded to my Email. I have a few things to work with and I will let the group know which worked.

This is a good point and is essential with datasets like this or for those that are more complex. Needless to say I have already spent some time doing this (although I split it up into a few more steps). Indeed about half of the thetas (and also two of the etas) in the run that I described previously were fixed at values obtained from these previous runs.