I have formulated a non-convex NLP model with around 2,500 variables. When I solve it with IPOPT, I get the error “Intermediate Infeasible, terminated by solver”; with CONOPT I get the same error. The listing/iteration log shows that the solver labelled the solution infeasible because convergence was too slow, as indicated by the message at the end of the log: “Convergence is slow and a derivative is discontinuous”. Please note that sometimes the same model reaches a “local optimal” solution with sensible values for the variables. Following the helpful information in the AIMMS documentation, forum, and webinars, I have tried the scaling option, good initial points (which are not possible in every case), and some reformulation by removing variables that only acted as intermediates between two other variables. None of these changed the results. Any idea or pointer is highly appreciated. Thanks for your time. Cheers
Best answer by Marcel Hunting
@Zeb, you could try the AIMMS Presolver by switching on the Solvers General option 'Nonlinear presolve'. I also wonder whether it is possible to reformulate the model so that it does not have any discontinuous derivatives. Would it be possible to attach your project here (preferably as a zip file), or send it to our support by email if it contains sensitive information, so that we can look at the model formulation?
Just wanted to check back - did you find the help you needed in this thread, or did you find another solution you could share with us? Thank you so much!
Thank you for your comment. The model had variables computed from formulas whose denominators were other very small variables (on the order of 10^-4 or 10^-5). This was causing the instability. Additionally, I scaled the variables up a little, and the model then converged to a sensible solution. So, lessons learnt: avoid sqrt, abs, log, division by very small numbers, and bad scaling as much as possible. Thanks
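To illustrate the two lessons above, here is a minimal Python sketch (not AIMMS code; `safe_ratio`, `smooth_abs`, and `EPS` are hypothetical names chosen for illustration) of two common smoothing reformulations: replacing a/x, where x can become very small, with a*x/(x^2 + eps), and replacing the non-smooth abs(x) with sqrt(x^2 + eps). Both keep derivatives bounded and continuous, which is exactly what slow-convergence/discontinuous-derivative messages usually point at.

```python
import math

EPS = 1e-6  # smoothing constant; choose it well below the typical scale of x


def safe_ratio(a, x, eps=EPS):
    """Smooth surrogate for a / x.

    For |x| much larger than sqrt(eps) this is close to a/x, but its
    derivative stays bounded as x -> 0 instead of blowing up like -a/x**2.
    """
    return a * x / (x * x + eps)


def smooth_abs(x, eps=EPS):
    """Smooth surrogate for abs(x): differentiable everywhere,
    with maximum error sqrt(eps), attained at x = 0."""
    return math.sqrt(x * x + eps)


if __name__ == "__main__":
    # Close to the exact values when x is not tiny...
    print(safe_ratio(1.0, 0.1))   # close to the exact 10.0
    print(smooth_abs(-3.0))       # close to the exact 3.0
    # ...but well-behaved where the exact forms are not:
    print(safe_ratio(1.0, 1e-8))  # finite and small, instead of 1e8
    print(smooth_abs(0.0))        # sqrt(eps); the derivative here is 0
```

In an AIMMS model the same idea applies at the constraint level: for instance, writing y * x = a instead of y = a / x, or giving the denominator variable a lower bound that keeps it away from zero.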
Thank you so much for your response! This is so helpful, and we’re so glad you figured it out! :D