AI Assistants for Every AIMMS Application
A new library lets you transform any AIMMS application into something your users can interrogate with AI: ask questions, run scenarios, and get answers in plain language. It has already been tested on four complex use cases and models. You can get involved too.
Why this matters
Business users can now talk to the optimization model in plain language and get answers in seconds, with no new tools, no Python, and no bolt-on chatbot.
The end-user experience in more detail
Your planners open a conversation inside the AIMMS application and ask questions in plain language: "What happens if we lose this warehouse for a week?" or "Can we still meet demand if we cut the third shift?" The AI assistant orchestrates the interaction, the optimization model runs the math, and the planner gets an answer in seconds, all without leaving the tool they already use.
Here is what it looks like in practice:
- "Which customers in the Nordics had more than 5% unmet demand last week?" The agent queries live optimization results and returns a filtered answer with the relevant numbers.
- "Our Düsseldorf warehouse just lost a shift. Cut its daily throughput capacity by 30% and re-optimize." The agent modifies the input, re-runs the solver, and reports the cost and service-level impact across the affected network.
- "Compare our current plan with a version where we consolidate the Berlin and Prague DCs into one. Show me the cost delta and any customers that drop below 95% fill rate." The agent runs both scenarios, compares the results side by side, and highlights the trade-offs.
How you will add these AI assistants
The same library works across any AIMMS application. You import it into your project, provide configuration that teaches it your domain, and your application becomes AI-ready. From there, you control everything the AI can touch.
As a developer, you decide exactly what the AI can see, what it can modify, and what requires user confirmation. You set this per identifier. That level of governance is what separates a production-grade AI assistant from a chatbot bolted onto a UI.
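To make the per-identifier governance concrete, here is a sketch in AIMMS-style pseudocode. It is purely illustrative: the library's actual configuration syntax has not been published yet, and every name in it (AccessPolicy, ReadOnly, RequiresConfirmation, and the identifiers themselves) is a hypothetical placeholder.

```
! Hypothetical per-identifier access policy. Illustrative pseudocode only;
! the real library's configuration syntax is not yet published.
AccessPolicy := {
    DemandForecast     : ReadOnly,              ! AI may query, never modify
    WarehouseCapacity  : ReadWrite,             ! AI may change this input before a re-run
    ConsolidateDCs     : RequiresConfirmation,  ! user must approve before this procedure runs
    EmployeeSalaries   : Hidden                 ! never exposed to the AI
};
```

Whatever the final syntax looks like, the pattern is the same: every set, parameter, and procedure gets an explicit visibility and modification level, with sensitive actions gated behind user confirmation.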
We will publish detailed documentation on each of these controls as the capability moves into early access.
Built on real operations, not in a lab
We are building this in workshops with customers, on real, large-scale optimization applications. Teams bring their actual models and their actual user pains, and we work together on development builds.
Four workshops have been completed so far, across industries and geographies. Each one has surfaced problems that only show up at production scale, such as edge cases in large indexed data, mismatches with planner expectations, and the level of domain context the agent needs to give useful answers, and each has improved the product directly. The problems that matter most are the ones you only find in real operations, so that is where we are spending our effort.
Start preparing now
If you build and maintain AIMMS applications, you can start preparing now:
- Understand your users' pain points. The best AI experiences start from tasks that are multi-step, time-consuming, or error-prone for your business users. The AI assistant is most valuable where it replaces work that is manual, repetitive, or spread across multiple screens.
- Think about what to expose. Walk through your model and identify the sets, parameters, and procedures that matter for the workflows your users care about. You stay in control of what the agent can access.
- Plan your application prompt. The application-specific context you provide determines how well the agent understands your domain. Include what the optimization does, key terminology, and any constraints or assumptions your planners should be aware of.
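As an illustration of that last point, an application prompt for a distribution-network model like the one in the examples above might read as follows. The wording is invented for this post; only the structure (purpose, terminology, assumptions) is the point.

```
This application optimizes a European distribution network: it assigns
customer demand to warehouses to minimize total transport and handling
cost, subject to warehouse throughput capacity and a 95% minimum fill
rate per customer.

Key terminology:
- "Fill rate": the fraction of a customer's weekly demand that is served.
- "Throughput capacity": the maximum daily outbound volume of a warehouse.
- "DC": distribution center; used interchangeably with "warehouse".

Assumptions planners should know about:
- Demand is a weekly forecast, not confirmed orders.
- Re-optimizing after an input change typically completes in seconds.
```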
Get involved
We are running hands-on workshops with customers to add a working AI assistant directly into their AIMMS applications. If you want to explore this on your own application, or help shape what comes next, talk to your AIMMS contact or reply to this post.