Replies posted by MarcelRoelofs
hi @gdiepen First of all, you know that you can also use dex::schema::ParseJSONSchema to generate a library for creating a mapping and identifiers for just a JSON schema, and that you don't have to do trickery to wrap it in an OpenAPI spec?

I've added an option dex::schema::IterativeResetArrays in a recent DEX version that steers whether iterative-reset attributes are added to array mappings when generating mapping files from a JSON schema. Notice that with this option set to 0, you should call dex::ResetMappingData prior to calling dex::ReadFromFile to reset the iterative counters manually.

Finally, you can already embed special extensions (x-aimms-...) in a JSON schema to steer the generator (e.g. to set the identifier names). I've played with the idea of also allowing x-aimms extensions in a schema that would map schema elements to existing indices and identifiers in the model. That would relieve you of having to map the generated data to your model identifiers afterwards.
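A minimal sketch of that workflow (file names and the mapping name are placeholders, and the exact argument lists may differ slightly between DEX versions, so check the DEX documentation):

```aimms
! Generate a mapping (and identifier declarations) directly from a JSON schema,
! without wrapping it in an OpenAPI spec first.
dex::schema::ParseJSONSchema( "schemas/orders-schema.json", "OrdersSchema" );

! With dex::schema::IterativeResetArrays set to 0, reset the iterative
! counters manually before every read.
dex::ResetMappingData( "OrdersSchema" );
dex::ReadFromFile(
    dataFile    : "data/orders.json",
    mappingName : "OrdersSchema" );
```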
@sandervlot This is a consequence of how pro::DelegateToServer is handled. When you use pro::DelegateToServer in a procedure, that procedure will be re-run in a solver session (with all the current arguments), using the current data in the current session. Within the solver session, the procedure is run from within the context of pro::OptimizeCurrentCase inside a block-on-error, by handling an incoming message that encodes the original procedure call with arguments on the client side. Handling that message means re-running the procedure call encoded in it.

The block-on-error is necessary to allow pro::OptimizeCurrentCase not to be interrupted, such that it can communicate the result back to the client, even in case of an error. If the error were to fall through completely to the global error handler, the call to pro::OptimizeCurrentCase would not be able to complete.

So, if you use a global error handler, e.g. to record any error in a logging table, then you will have to call this procedure explicitly, because inside the solver session the error is already caught by the block-on-error and never reaches the global handler.
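The usual client-side pattern looks like this (the procedure and mathematical program names are placeholders):

```aimms
Procedure pr_SolveModel {
    Body: {
        ! In the client session, this call schedules the same procedure
        ! to be re-run in a solver session and returns 1 immediately.
        if pro::DelegateToServer( waitForCompletion  : 0,
                                  completionCallback : 'pr_LoadResults' )
        then
            return 1;
        endif;

        ! From here on we are inside the solver session, wrapped by
        ! pro::OptimizeCurrentCase in a block-on-error.
        solve mp_MyMathProgram;
    }
}
```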
@salman18 My guess is that you are loading a lot of data during data initialization during project startup. The verification session during startup has a time limit of 30 seconds. Thus, if your data initialization (or whatever you do during project startup) takes longer than 30 seconds, the verification session will fail. Because this is one of the most common problems when publishing AIMMSpacks to AIMMS PRO, we have recently decided to skip this verification run when publishing an AIMMSpack, see https://documentation.aimms.com/pro-release-notes.html#aimms-pro-2-51-1-release
@gdiepen Maybe even better, try to avoid the use of passwords altogether, and go SSO. I don't know what exactly you need the password for, but many services now support OAuth, and DEX will allow you to perform an Authorization Code flow from within your model, both locally, on-prem and in the cloud.
@Noob9000 You can also put e.g. dataset and tablename annotations on sections or named declaration sections. They are then inherited by all identifiers underneath them. That would allow you to define a 'table' as a (named declaration) section containing identifiers with the same indices. You can define the column name for indices either by separating the index declaration from the set declaration and then setting the dex::columnname annotation for the index, or by pre-filling the dex::ColumnName string parameter with the column you want for the index.
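As a sketch in model source form (all identifier and annotation values are illustrative; check the DEX documentation for the exact annotation names and where they may be placed):

```aimms
Section S_Orders {
    ! Annotations on a (named declaration) section are inherited
    ! by all identifiers declared underneath it.
    dex::Dataset   : OrdersDataset;
    dex::TableName : orders;

    DeclarationSection Order_Data {
        Set s_Orders {
            Index: i_order;
        }
        Parameter p_Quantity {
            IndexDomain: i_order;
            ! Column name for this parameter within the 'orders' table.
            dex::ColumnName: quantity;
        }
    }
}
```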
@Noob9000 Maybe you're trying to load the newest DEX version in an AIMMS version <= 4.87? The latest DEX versions only work with AIMMS >= 4.88. We've moved on to a new build system, and it became too cumbersome to port all new components back to the old build system used by AIMMS <= 4.87.
@rmateus When DEX reads a 0 in any supported file format into a numerical parameter that has a 0 default, no value is stored by AIMMS. This makes a 0 encountered in a file read via DEX indistinguishable from an empty field. There are several ways around that:

- Specify another default for the parameter. In this solution you may have to use the NonDefault function to prevent the non-0 default from being used when you don't want it.
- Specify the force-dense attribute in the DEX mapping. The identifier you specify for this attribute will be set to 1 for any non-empty value encountered in a file read via DEX. You can then use this parameter to distinguish between a 0 value and an empty value. When writing a file via DEX, 0 values will also be written back if the force-dense parameter holds a non-zero value.
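A sketch of the first workaround (the identifier names and the -999 default are made up for illustration):

```aimms
Parameter p_Input {
    IndexDomain: i;
    ! A default that cannot occur in the data, so that a 0 read from
    ! file is stored as a real (non-default) value.
    Default: -999;
}

Parameter p_Present {
    IndexDomain: i;
    ! 1 for every tuple for which the file actually contained a value,
    ! including explicit 0 values; tuples still at -999 were empty.
    Definition: NonDefault( p_Input(i) );
}
```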
If you have an example of the crashing parquet file, then please send a reproducing example to email@example.com and we can take a look at it. Might be you're using some Parquet feature we don't support, but we would need the parquet file to be able to investigate.
Would it be possible to read a multilevel column data from a csv file using the DEX Library mapping ?
Hi @premkrishnan612 I've no idea what you are referring to regarding the limitation on the number of indices that you can use in a CSV mapping. The only limitation is the maximum dimension of an AIMMS identifier, which is 32, and DEX is fine handling these.

I've attached a very simple example of a 5-dimensional identifier being mapped to a CSV file, both with all dimensions in the row header, and with the last dimension mapped to the column headers using the name-binds-to attribute. This would work the same for identifiers of any dimension up to 32. With DEX you can't have multiple column headers; these pivot-like tables are the realm of the axll library. For exchanging data with other applications, such formats suck imo.

BTW, if you have massive data I would suggest using Parquet files instead of CSV files. This will lead to increased performance as well as decreased file size, while many applications are able to work with Parquet, and delta-lake based data warehouses like Databricks are based on it.
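As a sketch, a CSV mapping for a 3-dimensional parameter with the last dimension bound to the column headers could look roughly as follows (element and identifier names are illustrative; the attached example and the DEX documentation show the exact mapping format):

```xml
<AimmsCSVMapping>
    <RowMapping>
        <!-- regular columns bind the first two dimensions -->
        <ColumnMapping name="region"  binds-to="i_region"/>
        <ColumnMapping name="product" binds-to="i_product"/>
        <!-- the remaining column headers themselves bind the last index -->
        <ColumnMapping name-binds-to="i_period"
                       maps-to="p_Demand(i_region, i_product, i_period)"/>
    </RowMapping>
</AimmsCSVMapping>
```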
Hi @MattC The same redirect URL is also supported for WebUI sessions running on PRO on prem servers since PRO version 2.41.1, see https://documentation.aimms.com/pro-release-notes.html#aimms-pro-2-41-1-release. If you want to use OAuth2 Authorization Code flow from within WebUI sessions, then you need this redirect URL, as it is a necessary part of the Authorization Code flow.
Hi @Diego Perez-Urruti Well, the first thing I noticed is some confusion about how pro: paths are written. In your comment you use forward slashes, but your regular expressions use backward slashes. The documentation mentions forward slashes, so this may be the cause of your problems. Alternatively, you could just use FindNthString to find the last forward/backward slash, and take the substring starting one position to the right to find the case name.
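A sketch of that alternative (the string and numerical parameter names are made up):

```aimms
! Position of the last forward slash in the pro: path; a negative
! occurrence number makes FindNthString search from the end.
p_pos := FindNthString( sp_CasePath, "/", -1 );

! Everything after the last slash is the case name.
sp_CaseName := SubString( sp_CasePath, p_pos + 1, StringLength( sp_CasePath ) );
```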
Hi @HHHermit If I understand you correctly, you're looking to host the database in the cloud, while still using the AIMMS fat client application on the user's workstation. That raises a number of questions:

- How does the database get filled? Solely from the AIMMS application? Or are there different processes that also load data into the database?
- Are you also planning to move the AIMMS application to the cloud?
- Is it a multi-user app that actually requires a database for sharing the data among multiple users?

If there are other applications loading data directly into the database, I don't see how you can avoid a VPN or a service like Azure Data Factory to get data into the database. However, with Azure VPN Gateway and the Azure VPN Client, setting up a VPN connection to your database from the user's computer is a breeze.

In case you move the application to the AIMMS cloud, you can still run it as a desktop application (although in end-user mode). If you also have an application database, you can host it in the AIMMS cloud alongside the application.
@oomse, I'm wondering whether those continuous model changes are mostly wrt data that you store as part of the model, or whether you actually change the model formulation that often (which I find much harder to imagine).

In the first case, I would argue that in most cases non-static data should not be part of the model, but should be stored separately (file, database, CDM). This would give you a way to store model state without having to save the model on every run.

In the second case, if you have to change the model formulation that often, isn't it possible, and much more natural, to make the model formulation more data driven? In that case, changes in model behavior are driven by changes in data (potentially by following different branches in your code through if-then-else constructs if need be). Also, you can have a superset of variables and constraints, only a subset of which you actually include in your optimization model. This allows for another way to make an optimization model data driven.
@waisheng If that's the constraint that is giving you trouble, I would make certain that you carefully exclude t = 0 in the inventory balance constraint, or you will end up with two incompatible constraints for the inventory at t = 0.

Also, if you base your model on the formulation in the referenced article, then notice that the formulated constraint about the bottleneck rate does not match its textual description. In the formulated constraint all production in a period must be equal to the bottleneck rate, while the article mentions that it should be at most the bottleneck rate. The charts in the article match the textual description.
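In formulas, a balance that avoids the clash at t = 0 fixes the initial inventory separately and only imposes the recursion for t >= 1, and the bottleneck restriction should be an inequality (the symbols here are illustrative: $x_t$ production, $d_t$ demand, $r^{\max}$ the bottleneck rate):

```latex
I_0 = I^{\text{init}}, \qquad
I_t = I_{t-1} + x_t - d_t \quad (t \ge 1), \qquad
x_t \le r^{\max} \quad (t \ge 1)
```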
Hi @Fenris Wulf Well, technically, even outside our cloud platform you can run your own Docker cluster using our AIMMS EO Docker image (https://github.com/aimms/aimms-eo), directly running dex::api::RESTServiceHandler to handle incoming requests one at a time.

However, on our cloud platform you can just publish an app and then directly start using the Task REST service to fire tasks from your app, effectively allowing modelers to instantly expose their model via a REST API. We then take care of all the infrastructure on our cloud platform necessary to run them at scale.

Setting this up yourself leaves it completely up to you to build such infrastructure, with things such as load-balancing task requests between multiple nodes, or implementing more advanced task-queueing approaches across all nodes in the cluster. This requires additional expertise in your team outside of the realm of optimization modeling and mathematical optimization, and can potentially substantially slow down your development.
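On the cloud platform, exposing a procedure as a task is essentially a matter of annotating it (the names below are placeholders; see the DEX documentation for the exact service-related annotations):

```aimms
Procedure pr_SolveTask {
    ! This annotation makes the procedure callable as a task
    ! via the Task REST service after publishing the app.
    dex::ServiceName: solve;
    Body: {
        ! read the request data, solve, write the response
        solve mp_MyModel;
    }
}
```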
Hi @Sree The DEX library will guess the delimiter based on the first line, and is tested to work with comma, semicolon and tab separators. When writing, it will always generate comma-separated files.

The DEX library is currently in full development to become a kind of Swiss army knife for data exchange in AIMMS, supporting:

- reading and writing multiple data formats (JSON, XML, CSV, Excel, Parquet, DB),
- auto-generated mappings from annotated identifiers,
- auto-generating application databases capable of storing datasets for multiple scenarios from annotated identifiers (currently under development),
- generating REST API clients from OpenAPI specifications,
- exposing model procedures as REST services on our cloud platform,
- exposing generated application databases through a REST service in our cloud platform (next on roadmap), …

So, if anything, my bet would be on DEX as the future-proof library for data exchange.