Common Pitfalls of Modeling and Simulation: Discussing the Most Commonly Seen Simulation Challenges

There are many potential pitfalls that face those who embark on a process simulation development effort. This article discusses some of those most commonly seen.

1. Model only what you understand

It can be said that the utility of a given model is only as good as the degree to which it represents the actual system being modeled. Indeed, a system, whether a complete process unit or just a section of one, can only be modeled once it is sufficiently understood. One may ask why modeling and simulation designers develop invalid models. There are many reasons, the first of which is that high-fidelity model development requires a significant investment of time and effort. The fact is that many designers are under time constraints to deliver results. Consequently, a careful understanding of the underlying system and rigorous validation of the model are not always an option.
While understandable, this is nonetheless unacceptable. It is highly unlikely that a simulation developer can provide a meaningful result without understanding the system they intended to model. The timeline might have been met, but the result was likely meaningless. Worse yet, the result may have been wrong and may have adversely affected larger design or business decisions. Model only what you understand!

If you don’t have a fundamental understanding of a technology, there is no way you can effectively model or simulate that technology.

This step cannot be skipped in a successful modeling and simulation effort. If this step cannot be completed, it is better to not proceed down the path of modeling and simulation development.

2. Understand your model

It is imperative that the simulation engineer fully understand the tools being used. Most simulations are likely to contain errors, even those built with commercial tools. This is especially the case for new simulation implementations. Simulation implementations can also embed assumptions that do not accurately reflect the actual process performance, so one must be careful in defining the basic simulation assumptions.

If the simulation developer utilizes commercial simulation tools for the implementations, it is imperative to allocate the proper amount of time to closely examine and fully understand what that code is doing and what it is not doing.

There is no better way to lose credibility than to not be able to answer questions about one’s own results.

Understand what you have modeled! There are resources available to help with this, including technical support for commercial tools and online groups and user forums for open-source tools.

3. Make your results independently repeatable

The first rule of thumb is to be able to answer this question: is my model performing as I expected it to?
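
One concrete way to answer that question is a basic sanity check on conserved quantities. The sketch below uses a steady-state mass balance with hypothetical stream flows (the numbers are illustrative only, not real plant data):

```python
# Hypothetical stream flows (kg/h) across the modeled boundary; at
# steady state, mass in should equal mass out within a small residual.
feeds_kg_h = [1200.0, 300.0]
products_kg_h = [1450.0, 49.5]

residual = abs(sum(feeds_kg_h) - sum(products_kg_h)) / sum(feeds_kg_h)
assert residual < 0.01, f"mass balance off by {residual:.2%}"
print(f"mass balance closes to within {residual:.2%}")
```

A model that cannot close its own balances is not behaving as expected, no matter how plausible its other outputs look.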

If the answer is "Yes", then new simulation results can also be compared with results in the existing literature obtained under the same underlying assumptions and parameter conditions. Another good check is to run a different simulation tool with the same process data and assumptions. Obtaining results that closely agree is good confirmation of your model and evidence that its results are independently repeatable!
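
As a minimal sketch of such a cross-check, assuming two tools report the same named steady-state outputs (all names and values below are hypothetical), one can flag any outputs that disagree beyond a chosen relative tolerance:

```python
import math

# Hypothetical outputs from two independent simulation tools run with
# the same process data and assumptions; values are illustrative only.
results_tool_a = {"outlet_temp_K": 354.2, "conversion": 0.872, "duty_kW": 1210.0}
results_tool_b = {"outlet_temp_K": 354.6, "conversion": 0.869, "duty_kW": 1195.0}

def cross_check(a: dict, b: dict, rel_tol: float = 0.02) -> list:
    """Return the names of outputs that disagree beyond rel_tol."""
    return [key for key in a
            if not math.isclose(a[key], b[key], rel_tol=rel_tol)]

mismatches = cross_check(results_tool_a, results_tool_b)
print("outputs disagreeing beyond 2%:", mismatches or "none")
```

The tolerance should reflect the fidelity target of the effort; a 2% relative tolerance is only an illustrative choice here.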

4. Carefully define modeling and simulation requirements

This is an activity that is too often ignored or given superficial treatment. The authors would argue that simulation engineers all too often rush into a modeling and simulation effort without a clear idea of what they are hoping to accomplish. This is a surefire recipe for failure.

The first step is to clearly understand the results of interest that will be generated by the simulation. Not all simulation tools lend themselves to the same types of output results, so it is important to clearly define expectations so that tool selection is an informed decision.

The next step is to clearly define the required performance of the simulation to be developed. We will focus on three primary dimensions of performance:

  • Cost: The overall investment in resources towards the development and maintenance of the modeling and simulation activity. This includes not only original platform costs, but also development time, upgrade and maintenance costs, and troubleshooting.
  • Execution Speed: For a given simulation scenario, how quickly can that simulation complete and provide the desired output results? This is generally governed by software complexity.
  • Fidelity: For a given simulation task, how accurately do the simulation’s results reflect the performance of the real system?

Note that these dimensions of performance often conflict; not all of them can be maximized simultaneously. Do you want high fidelity? Then the cost will likely be very high. In general, you should prioritize these three dimensions of performance.
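
One lightweight way to make that prioritization explicit is a weighted ranking of candidate approaches. The sketch below is illustrative only: the candidate names, 1-to-5 scores, and weights are hypothetical, and the weights encode a project that prioritizes fidelity over speed and cost:

```python
# Hypothetical priority weights (must be chosen per project).
weights = {"cost": 0.2, "speed": 0.3, "fidelity": 0.5}

# Hypothetical candidate approaches scored 1 (worst) to 5 (best)
# on each performance dimension.
candidates = {
    "detailed rigorous model":   {"cost": 1, "speed": 2, "fidelity": 5},
    "reduced-order model":       {"cost": 4, "speed": 5, "fidelity": 3},
    "spreadsheet approximation": {"cost": 5, "speed": 5, "fidelity": 1},
}

def score(dims: dict) -> float:
    """Weighted score of one candidate across the three dimensions."""
    return sum(weights[d] * dims[d] for d in weights)

for name, dims in sorted(candidates.items(), key=lambda kv: -score(kv[1])):
    print(f"{score(dims):.1f}  {name}")
```

The point is not the arithmetic but the discipline: writing the weights down forces the team to agree on priorities before tool selection, rather than after results disappoint.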

A common pitfall is to begin a modeling and simulation effort with unrealistic expectations. Is it really feasible to model all the process components to every little process detail with high fidelity? Probably not. Is it possible to model the entire process to every little detail with many simplifying assumptions? Probably, but it is unlikely to be useful.

When defining requirements and expectations for a modeling and simulation effort, it is recommended to begin by choosing the required fidelity. How accurate a result is required? A successful effort always begins with this question because, without an adequate degree of fidelity, any modeling and simulation activity is meaningless.

Once the required fidelity is established, one can then begin placing limitations on simulation capabilities accordingly.
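
For example, a stated fidelity target can be translated directly into a resolution limit by refining until two successive runs agree. In the sketch below, the `run_model(n)` interface and the toy decay model are illustrative assumptions, not a real tool's API:

```python
def resolution_for_tolerance(run_model, rel_tol, n0=16, n_max=1_000_000):
    """Double the resolution until two successive runs agree to rel_tol.

    `run_model(n)` is any scalar-valued simulation run at resolution n;
    the interface is an assumption made for this sketch.
    """
    n, prev = n0, run_model(n0)
    while n < n_max:
        n *= 2
        cur = run_model(n)
        if abs(cur - prev) <= rel_tol * abs(cur):
            return n
        prev = cur
    raise RuntimeError("tolerance not met within n_max steps")

# Toy stand-in model: explicit-Euler estimate of c(4) for dc/dt = -0.5*c,
# starting from c(0) = 1.
def toy_model(n_steps: int) -> float:
    dt, c = 4.0 / n_steps, 1.0
    for _ in range(n_steps):
        c -= dt * 0.5 * c
    return c

print("steps needed for 0.1% fidelity:",
      resolution_for_tolerance(toy_model, rel_tol=1e-3))
```

A looser fidelity target immediately buys a coarser resolution, and with it lower cost and faster execution, which is exactly the limitation-setting exercise described above.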

Cost is generally bounded by an allocation of resources. So given a known cost constraint and a known fidelity requirement, we can then begin building a conceptual model for the simulation. The target fidelity will mandate that particular system characteristics be included in great detail and that particular inputs be specified to particular degrees of accuracy, while allowing other system details and input accuracies to be relaxed.

Note that this exercise requires a strong understanding of the system being modeled and of the underlying concepts.

Remember, model only what you understand! 

5. Model what you need and no more

One of the first decisions that the simulation developer must face is to determine what he or she is attempting to demonstrate through simulation and what is the simplest model that captures all necessary components. The engineering tradeoff is that increased detail can provide higher-fidelity output from the model, but at the cost of complexity, potentially introducing error and certainly increasing debugging time and execution time.

The designer must also realize that a model is always an abstraction from the real world. 

Regardless of the level of detail included, a simulation will always be an approximation of the real system; an arbitrarily high degree of fidelity is generally not possible. Also, the cost of increased fidelity at some point becomes greater than the marginal utility of the additional fidelity.
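
A toy illustration of this diminishing return, using a Monte Carlo estimate of pi as a stand-in for any stochastic simulation: the error shrinks only with the square root of the sample count, so each additional digit of fidelity costs roughly a hundred times more computation.

```python
import math
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def mc_pi(n: int) -> float:
    """Monte Carlo estimate of pi from n samples; error ~ 1/sqrt(n)."""
    hits = sum(1 for _ in range(n)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / n

for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} samples -> error {abs(mc_pi(n) - math.pi):.4f}")
```

At some point on that curve, the marginal fidelity gained per unit of computation is no longer worth buying, which is precisely when adding more detail or samples stops being rational.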

How much detail is sufficient in a simulation to capture the essence of the real world process being modeled? Unfortunately, the answer to this question is that it depends on the particular simulation scenario. The simulation engineer should first decide exactly what is the problem that he or she seeks to address through simulation. What are the inputs and the outputs of the model? Some outputs may be independent of specific details in the model, while others may be correlated and therefore seriously affected if those components are abstracted.

Simulation is always an abstraction of a system, built to let the designer gain insight by investigating various operating scenarios of that system. In other cases, the researcher desires to investigate a process's reaction to a single condition that may be unlikely to occur in real life; perhaps testing the actual system under this condition could be harmful, and simulation is the only way to examine the problem. The next step is to decide how much of the system must be implemented for the simulation results to be valid. Ultimately, the simulation engineer is going to have to decide the level of detail required in his or her simulation.

First, the developer must consider the engineering tradeoffs between adding more detail to a model and increased computational time, increased complexity, and increased debugging time.

A more abstract approach that focuses only on the basic behavior of a process is generally very flexible, easier to debug, and has a shorter execution time. But it may not capture the behavior of interest.