Solved

Would it be possible to read multi-level column data from a CSV file using the DEX Library mapping?

  • 13 July 2023


The reason for my question is that DEX mapping for CSV appears to be limited in the number of dimensions possible in a single file, which is what pushes me towards multi-level columns; this limitation currently forces me to spread my problem over multiple CSV files. The binds-to and maps-to mapping elements work well with single-line headers, and can even be extended with the help of regexes, but I was wondering whether multi-line headers could also be handled by the data exchange mapping elements for CSV. The sample table below shows the problem I am facing.

            Set4_val1            Set4_val1            Set4_val2            Set4_val2
Set1  Set2  Set5_val1_Set4_val1  Set5_val2_Set4_val1  Set5_val1_Set4_val2  Set5_val2_Set4_val2
a     aa    5                    6                    7                    8
a     bb    9                    10                   11                   12
b     aa    13                   14                   15                   16
b     bb    17                   18                   19                   20

Let's call the value in the body of the table parameter X; it is indexed as X(set1, set2, set4, set5).

Thank you

With Best Regards - Prem

 


Best answer by MarcelRoelofs 15 July 2023, 11:08


4 replies


Hi @premkrishnan612 

 

I've no idea what you're referring to with a limitation on the number of indices you can use in a CSV mapping. The only limitation is the maximum dimension of an AIMMS identifier, which is 32, and DEX handles that just fine.

I've attached a very simple example of a 5-dimensional identifier being mapped to a CSV file, both with all dimensions in the row header, and with the last dimension mapped to the column headers using the name-binds-to attribute. This works the same for identifiers of any dimension up to 32.
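In outline, such a mapping looks something like the sketch below. Note this is only a sketch, not the attached example: the identifier X, the indices s1 through s5, and the CSV column names are placeholders, and you should check the DEX documentation for the exact element and attribute spelling:

```xml
<AimmsCSVMapping>
  <!-- the first columns each bind one index from the row header -->
  <ColumnMapping name="set1" binds-to="s1" />
  <ColumnMapping name="set2" binds-to="s2" />
  <ColumnMapping name="set3" binds-to="s3" />
  <ColumnMapping name="set4" binds-to="s4" />
  <!-- for the remaining columns, the column *name* binds the last
       index and the cell value maps to the identifier -->
  <ColumnMapping name-binds-to="s5" maps-to="X(s1, s2, s3, s4, s5)" />
</AimmsCSVMapping>
```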

With DEX you can't have multiple column header rows; these pivot-like tables are the realm of the axll library. For exchanging data with other applications, such formats suck imo.
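That said, if you're handed such a file anyway, another route is to pre-flatten it outside AIMMS before mapping it. This is just a rough pandas sketch on my part, assuming the file is laid out exactly like your sample table (the file names and column labels are made up):

```python
import pandas as pd

# Read the two header rows as a column MultiIndex and the first
# two columns (Set1, Set2) as the row index.
df = pd.read_csv("multi_header.csv", header=[0, 1], index_col=[0, 1])

# Stack both column levels into rows: one value column indexed by
# (set1, set2, set4, set5), i.e. a single-header CSV that DEX can map.
long = df.stack([0, 1]).rename("X").reset_index()
long.columns = ["set1", "set2", "set4", "set5", "X"]
long.to_csv("flat.csv", index=False)
```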

BTW, if you have massive amounts of data I would suggest using Parquet files instead of CSV files. This gives better performance as well as smaller files, many applications are able to work with Parquet, and delta-lake based data warehouses like Databricks are built on top of the Parquet format.
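Writing such a long-format table to Parquet from Python is a one-liner with pandas, for instance (again a sketch; the column names are placeholders, and pyarrow or fastparquet needs to be installed):

```python
import pandas as pd

# Long-format data: one row per (set1, set2, set4, set5) combination,
# matching the first value column of the sample table above.
long = pd.DataFrame({
    "set1": ["a", "a", "b", "b"],
    "set2": ["aa", "bb", "aa", "bb"],
    "set4": ["val1"] * 4,
    "set5": ["val1"] * 4,
    "X":   [5, 9, 13, 17],
})
long.to_parquet("data.parquet", index=False)  # requires pyarrow or fastparquet
```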


Thank you for the response!


Just one more follow-up question. I tried to test out the Parquet mapping, but AIMMS keeps crashing and closing. I originally created the Parquet files in Python, both with and without encoding, but in both cases it starts to run the code and then AIMMS closes. Is there any reason why this is happening?


If you have an example of a crashing Parquet file, please send a reproducing example to support@aimms.com and we can take a look at it. It might be that you're using some Parquet feature we don't support, but we would need the Parquet file to be able to investigate.


