One way to do this is to maintain a case file with the initial/test data of your identifiers, and load that case file after each test to reset the state to the initial one.
You will, however, need to update this case file whenever you add new tests or new identifiers used in tests.
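As a minimal sketch of this approach, using the built-in AIMMS procedures CaseFileSave and CaseFileLoad (note that s_testIdentifiers, the procedure names, and the file path are hypothetical names you would define yourself):

```aimms
! s_testIdentifiers is a hypothetical subset of AllIdentifiers holding
! the identifiers your tests touch; extend it when you add new tests.
Procedure pr_saveBaseline {
    Body: {
        ! Save a baseline case containing only the test identifiers.
        CaseFileSave("tests/baseline.data", s_testIdentifiers);
    }
}
Procedure pr_resetState {
    Body: {
        ! Reload the baseline after each test to restore the initial state.
        CaseFileLoad("tests/baseline.data");
    }
}
```

Calling pr_resetState in each test's teardown keeps the tests independent of one another.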
Hi @Sandra
I think you're looking for the concept of cloned data sets in the AIMMSUnitTest library.
Given a set of identifiers (e.g. all identifiers in a section of your model), you can create a cloned data set: essentially a copy of the data of all identifiers in that set, stored in a runtime library created by the AIMMSUnitTest library. From such a cloned data set you can restore the original data, compare the current contents of the actual identifiers with their clones (to verify the results of some action in your model), and so on.
The unit test model of the DataExchange library uses this concept extensively and contains various examples of how we use cloned data sets for exactly the scenarios you asked about.
With respect to just applying empty to all identifiers in your model, I would advise being more selective and emptying only a given set of identifiers. Blindly calling empty on everything may also destroy the internal state of generic libraries like CDM and DEX, and of the runtime libraries they create, causing problems later during the execution of your model.
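To illustrate the selective approach: in AIMMS the empty statement accepts a section name, which empties all identifiers declared in that section while leaving library state untouched (MyDataSection below is a hypothetical section name in your own model):

```aimms
! Empty only the identifiers declared in one section of your own model.
! Library internals (CDM, DEX, runtime libraries) are not affected.
empty MyDataSection;
```

Grouping your test-relevant data in a dedicated section makes this one-liner a safe reset between tests.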
Thank you @mohansx and @MarcelRoelofs for your answers!
It indeed seems that cloned data sets are what I was looking for; I will look into this feature, it seems promising. We only empty the identifiers in the main model (so in our own code) and not the parameters within the libraries, so hopefully there should not be a problem.