14.11.2016.

Making the Most of Life-Cycle Dynamic Simulation, from Concept to OTS and Beyond
Safety, reliability, efficiency, lower costs - what's not to like?

John Hinsley, Independent consultant in dynamic simulation, flow assurance and process control

Steady state process simulation is universally accepted as an essential tool for project development across the process industries. Every process engineer expects to have easy access to Hysys™ or one of its derivatives or competitors for developing process heat and mass balances. Dynamic process simulation, however, is still seen as a “nice to have” by many projects despite the clear benefits it can bring, even though the technology to perform high-fidelity dynamic simulation of process facilities and multiphase pipelines has been available for many years, with tremendous improvements in computing speed and ease of use over the last couple of decades.

So what are the benefits of using dynamic simulation, and why is it not being used more routinely?

This article aims to answer these questions, with specific reference to the Upstream Oil & Gas Industry, but close parallels exist in most sectors of the Process Industries.
Dynamic process simulation can provide valuable contributions throughout the project life cycle:

  • Coarse assessment models for concept selection – early consideration of operability and process interactions can increase reliability of concept screening and can help avoid problems which would otherwise become increasingly expensive to correct as the development progresses.
  • Engineering study models for design development and verification – perhaps the most widespread use of dynamic simulation during the design phases is to focus on “hot spots” such as HIPPS and compression, but identifying and understanding dynamic interactions and operability issues (e.g. rate changes, start-up, responses to failures etc.) can add value throughout the process. It can help to ensure system-wide data coherence, can significantly enhance process control philosophy development, and is unmatched for resolving potential HAZOP actions.
  • Well-to-export integrated model for proof of seamless operation of full system – given the necessity to segregate engineering effort, this is by far the best way to reveal and address any integration problems when all sections are connected and thus have realistic dynamic boundary conditions. It is the ideal tool for developing operating philosophies and procedures, and for demonstrating the full system consequences of design decisions.
  • Interfaced models for control system development and testing – a fully dynamic model allows the control system to be tested under very realistic conditions at a tiny fraction of the cost of testing on the real plant (no safety risks, no loss of production). Sufficiently detailed models can even allow pre-commissioning tuning of controllers – see the sketch following this list.
  • Operator training simulators – long accepted as essential throughout the Process Industries, although too often limited in scope such that operators only learn of interactions with connected systems later, during production – sometimes too late. More effective and less expensive when based on models developed during earlier activities.
  • Real-time operator guidance systems (e.g. Pipeline Management Systems) – extremely valuable for predicting conditions where they are not measured, providing “look-ahead” and “what-if” functionality, and helping operators with infrequent operations (e.g. pipeline pigging, re-start after maintenance shut-downs etc.).
  • Virtual flow metering – especially valuable for subsea wells, providing back-up for physical meters which are expensive to maintain or replace. Should be considered part of the process instrumentation, with appropriate security and robustness (as opposed to simply an operator guidance tool).
  • Engineering model(s) for production support - continued use of dynamic simulation during operation for process modifications, debottlenecking, troubleshooting etc.

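To make that control-system-testing point concrete, here is a minimal sketch of pre-commissioning controller tuning against a dynamic model. It is purely illustrative and everything in it is assumed: a crude single-vessel level model, a linear outlet valve and invented PI settings. A real study would use a high-fidelity model of the actual plant and the actual control system configuration.

```python
# Minimal sketch: comparing PI level-controller tunings against a crude dynamic
# model before commissioning. All numbers are invented for illustration only.

def simulate(kp, ti, t_end=3600.0, dt=1.0):
    """Simulate a separator level loop; return the worst level deviation [m]."""
    area = 5.0            # vessel cross-section [m2] (assumed)
    level = 2.0           # initial liquid level [m]
    setpoint = 2.0        # level setpoint [m]
    q_in = 0.05           # inlet liquid flow [m3/s]
    valve_gain = 0.10     # outlet flow at full valve opening [m3/s] (assumed linear valve)
    integral = 0.0
    worst = 0.0
    t = 0.0
    while t < t_end:
        if t >= 600.0:    # step disturbance: inlet flow increases by 50%
            q_in = 0.075
        error = level - setpoint          # level above setpoint -> open the outlet valve
        integral += error * dt
        opening = min(max(0.5 + kp * error + (kp / ti) * integral, 0.0), 1.0)
        q_out = valve_gain * opening
        level += (q_in - q_out) * dt / area
        worst = max(worst, abs(level - setpoint))
        t += dt
    return worst

# Compare two candidate tunings offline, before any hardware exists
for kp, ti in [(2.0, 600.0), (8.0, 200.0)]:
    print(f"Kp={kp}, Ti={ti}s -> max level deviation {simulate(kp, ti):.3f} m")
```

Even at this toy level, the comparison answers the kind of question such testing exists to answer – how far the level swings for a given disturbance under each candidate tuning – long before hydrocarbons are flowing.
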
To maximize the return on investment in dynamic simulation, the project needs to gain as many of these potential benefits as possible from a reasonable amount of effort.

Note that this does not mean minimum effort – a judgement is required on the benefits gained from each level of expenditure. This judgement needs to be made in the widest project context, with a firm understanding of the value such work can generate. The benefits of early work are often realised in later design phases (when the contractor may have changed), in training, commissioning, and particularly in production, long after the design contractors have lost interest! This strongly suggests that the Operating Company should take ownership of the dynamic simulation effort throughout the project development.

Starting the dynamic simulation work as early as possible in the project will magnify the value significantly – largely because the less a design has progressed the less it costs to change it.

There is a misconception that it is impossible to start until there is enough firm data available. However, it is possible, and very informative, to build models of early concepts using estimated data – often the act of making such estimates is enough to highlight sensitivities within a design, or to indicate which factors will make the biggest difference to operation or production efficiency. Again it is apparent that OpCo ownership of the dynamic simulation is key – guiding the design using its results is likely to improve the project life-cycle economics, whereas the design contractor may see little or no benefit to their bottom line.

Once the dynamic simulation effort has started, it makes sense to continue using the same tools (software) and even the same team if possible – providing both cost savings and valuable continuity, carrying the project’s history forward through the design phases and into production.

Too often project decisions are made for reasons which are later forgotten, making them either difficult to challenge or too easy to overturn, depending on the prevalent culture in the organization. Achieving continuity relies on early selection of personnel and software tools, which itself requires a deep understanding of the life-cycle opportunities and the market offerings that can support each stage.

From the outset, the selection of simulation tools should be driven by a vision of the final scope of dynamic modelling, usually set by the need for operators to be trained to understand the process as a whole and its interactions with external systems. The choice should include consideration of the capability of the core process simulation software to integrate models running in other packages – e.g. multiphase pipeline models, or “black-box” models of proprietary technology. Its ability to emulate and interface to a wide range of control systems also needs to be understood in the context of training simulator plans.

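To illustrate what “integrating models running in other packages” means in practice, the sketch below shows the simplest form of co-simulation coupling: two models stepped in lockstep, exchanging boundary conditions every time step. Both model classes are hypothetical stand-ins invented for this example – real pipeline and process simulators would run in their own packages and expose this exchange through whatever interfacing mechanisms they provide – and only the coupling pattern is the point.

```python
# Conceptual sketch of co-simulation coupling: a pipeline model and a process
# model stepped in lockstep, exchanging boundary conditions each time step.
# Both classes are hypothetical stand-ins with placeholder dynamics.

class PipelineModel:
    """Stand-in for a multiphase pipeline simulator running in another package."""
    def __init__(self):
        self.outlet_pressure = 50.0e5        # Pa, assumed initial condition

    def step(self, dt, offtake_flow):
        # Placeholder dynamics: pressure packs up if the plant takes less than
        # the (assumed) 10 kg/s arriving from the wells, and falls if it takes more.
        self.outlet_pressure += 2.0e3 * (10.0 - offtake_flow) * dt
        return self.outlet_pressure

class ProcessModel:
    """Stand-in for the topsides process simulator."""
    def __init__(self):
        self.demand = 10.0                   # kg/s, plant throughput target

    def step(self, dt, inlet_pressure):
        # Placeholder dynamics: achievable flow depends on available inlet pressure
        return self.demand * min(inlet_pressure / 50.0e5, 1.2)

pipeline, process = PipelineModel(), ProcessModel()
dt, flow = 1.0, 10.0
for t in range(3600):                        # one simulated hour, 1 s steps
    if t == 600:
        process.demand = 12.0                # plant tries to ramp up after 10 minutes
    pressure = pipeline.step(dt, flow)       # pipeline sees the flow the plant takes
    flow = process.step(dt, pressure)        # plant sees the pipeline delivery pressure
print(f"Final delivery pressure: {pressure/1e5:.1f} bara, flow: {flow:.1f} kg/s")
```

Real couplings must also deal with differing time steps, initialisation and error handling, which is exactly why this integration capability deserves scrutiny when the tools are first selected.
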
The team of engineers chosen to build, run and interpret the simulations is the other vital element which should be given serious attention early in the project. They need experience of the simulation tools, process design, process control and operation, so that they are able to correctly and efficiently identify issues and solutions and to work effectively with other engineering disciplines.

Dynamic simulation is often perceived as a specialist discipline – not without some justification – which is a further argument for personnel continuity, since the same engineers can then also develop into valuable specialists in the project’s process system. Continuous involvement of Operations representatives from the earliest stages also helps maximise the value added by the dynamic simulation effort.

When managing dynamic simulation as part of a project development, it is vital to understand the investigative nature of the activity. It is possible to have a rough idea in advance of the most likely issues that need modelling, but it will never be possible to predict all the problems that may be identified during building, commissioning and using a model to support the design effort. If the use of the model is limited to a very strict, predefined scope and schedule, then many opportunities for improving the process design will be missed.

Dynamic simulation is by necessity detailed and comprehensive, and as such is an ideal way to find the “unknown unknowns” during design phases. But this can only happen if the project management and the contract structure allow sufficient freedom to appropriately experienced simulation engineers.

The above scenario requires a rethink of the typical contract strategy for dynamic simulation. Most projects include it in the scope of the FEED and EPCIC contractors, who insist on fixed (minimal) scopes of work and then treat it as a cost (and a nuisance – particularly if it finds anything wrong with their design!). The first complete system model is often left to the Operator Training Simulator, which is bundled into the Main Automation Contractor’s scope, for whom the sale of extra hardware is more interesting than spending time building a high-quality dynamic model. This approach provides no motivation for the simulation work to be used as a vehicle for value improvement throughout the project, nor is there any opportunity for Operations to influence its use or to use its results to influence the design. Together with segmentation of effort between numerous contractors and suppliers, this tends to diminish the value gained from dynamic simulation, giving a false impression that it is an expensive luxury.

An alternative contracting strategy that addresses these issues would be for the Operating Company to commission the dynamic simulation effort directly, integrating it into their own project technical team. This should begin in Concept Selection, with the model(s) evolving and increasing in scope throughout the design phases. Typical contract practice requires data and drawings to be provided to the Operating Company as they become available, which allows estimated data to be gradually replaced and received data to be verified. If this is brought under an "integration management" umbrella it becomes a tool for focusing the design effort on overall system performance and operability, as well as helping check each element of the system as the design crystallizes. The detailed design contract would stipulate that models of any proprietary process units should be provided as "black boxes" suitable for integration into the full model. The Operating Company’s model can be used by their Technical Assurance team to verify the contractor’s design or to highlight areas where it can be improved. Engendering a spirit of cooperation between the OpCo and Contractor is the key to maximizing value from this strategy – carefully designed production performance incentives can help.

By the end of detailed design the model should have been thoroughly tested and widely used, making it an ideal basis for control system testing and for the Operator Training Simulator.

For continuity, it makes sense for the OpCo’s team to complete the scope of the model – it needs to include the full system for which the operators will be responsible, including any external influence on how the process is operated (e.g. wells, pipelines, utilities). The full model would then be free-issued to the OTS vendor (usually the control system vendor) for integration into their system. This approach reduces cost, schedule and risk for the project, and has the advantage that the Operations representatives will already have had a chance to influence the scope of the training tools.

As alluded to above, to get the best value from dynamic simulation in early phases requires upfront planning. It is important to gain an early understanding of how the choice of software tools for each phase will affect costs and benefits later.

In general it is not necessary to use the same software vendor/package for dynamic simulation as that being used for steady state (H&MB) design work. This is for two reasons: firstly there is significant effort involved in building a good dynamic model, even when starting from a steady state model in nominally the same package, due to the orders of magnitude increase in input data and process detail required. Secondly, using the same software for both activities could actually hide systematic or common errors (e.g. in the fluid properties models).

I saw a particularly severe example on a project where the H&MB simulation had been performed without correcting the water density from the standard equation of state (resulting in a 20% error!). Since this was a simple mistake in the setup of the fluids model, it was common to the dynamic version. Such a large error was easy to see when checking the model output (which raises the question “why didn’t the contractor notice?”) but more subtle errors can go unnoticed without using different software as an independent verification tool.

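To give a feel for the size of that kind of error, the short sketch below estimates liquid water density at ambient conditions from a standard Peng-Robinson equation of state with no volume correction. The choice of Peng-Robinson here is my assumption for illustration (the article does not say which package or equation of state was involved); the point is simply that an uncorrected cubic EOS predicts liquid water density of the order of 15-20% low, depending on conditions.

```python
# Sketch: liquid water density from an uncorrected Peng-Robinson EOS at 25 degC, 1 atm.
# Illustrates why aqueous-phase densities need correcting; values are approximate.
import numpy as np

R = 8.314                                 # J/(mol K)
Tc, Pc, omega = 647.1, 22.064e6, 0.344    # critical constants for water
M = 0.018015                              # kg/mol
T, P = 298.15, 101325.0                   # conditions: 25 degC, 1 atm

a = 0.45724 * R**2 * Tc**2 / Pc
b = 0.07780 * R * Tc / Pc
kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc))) ** 2

A = a * alpha * P / (R * T) ** 2
B = b * P / (R * T)

# Cubic in compressibility factor Z; the smallest real positive root is the liquid phase
coeffs = [1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B, -(A * B - B**2 - B**3)]
roots = np.roots(coeffs)
z_liq = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)

rho_pr = M * P / (z_liq * R * T)          # kg/m3 predicted by PR
print(f"PR liquid water density: {rho_pr:.0f} kg/m3 (measured: ~997 kg/m3)")
```

At these conditions the sketch gives roughly 850 kg/m3 against a measured value of about 997 kg/m3 – an error of around 15% that is obvious the moment an independent check is made.
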
Once a dynamic model of significant size has been built for the project, the advantages of continuing with the same software will become apparent. At each stage the savings increase as the model becomes larger and more detailed, to the extent that it can be possible to justify a detailed engineering study model almost entirely by the cost savings from passing on the resulting model as the basis for control system testing and the training simulator. A project I worked on some years ago in the North Sea tried to save money by allowing compressor vendors to provide their own controllers. The model we developed for engineering studies was later used to test these controllers – both by emulation and by stimulation (hardwiring to the actual controllers) – finding and fixing enough faults to pay for the entire simulation effort many times over. This example also indicates the difficulties with estimating the benefits of dynamic simulation.

When the problems are solved before commissioning, no one notices as there is no fault condition with which to compare the smooth operation - so no one says “that finding saved us $Xm”.

Likewise, many unsolved problems cause “mysterious” trips or maybe just reduce production efficiency – which, during production, may be impossible to connect with a cause that could have been avoided by better use of dynamic simulation. Of course, having access to the full engineering model during production can help troubleshoot such problems.

Usually, real-time simulation tools will also benefit from the “software continuity” rule, with the possible exception of Virtual Flow Metering. For use with remote wells (especially subsea) the scope of the model for each VFM is relatively small (usually the well bore and Xmas tree, including the choke valve) and may need to be modelled in more detail than would usually be required for study and training purposes. Hence the other requirements for VFM may override the savings from using an existing model – VFM offerings vary widely in their approaches and capabilities, so it is vital to make the right choice for the specific characteristics of the project.
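
As a very rough illustration of the virtual flow metering principle, the sketch below back-calculates a liquid rate from measured pressures either side of a choke and the choke opening, using the standard liquid valve sizing relation. Every number and the linear valve characteristic are assumed for illustration; a real VFM models the whole well bore, handles multiphase flow and is tuned against well-test data, but the underlying idea – inferring an unmeasured rate from pressures, temperatures and choke position – is the same.

```python
# Grossly simplified VFM sketch: infer liquid rate through a choke from measured
# upstream/downstream pressures and choke opening. All parameters are invented.

def estimated_rate(p_up_bar, p_down_bar, opening, kv_full=120.0, sg=0.85):
    """Liquid rate [m3/h] from the valve sizing relation Q = Kv * sqrt(dP / SG).

    kv_full : flow coefficient Kv at 100% opening [m3/h per sqrt(bar)] (assumed)
    opening : choke opening 0..1, assumed linear characteristic
    sg      : liquid specific gravity relative to water (assumed)
    """
    kv = kv_full * opening
    dp = max(p_up_bar - p_down_bar, 0.0)
    return kv * (dp / sg) ** 0.5

# Example: transmitters read 95 bara upstream and 32 bara downstream, choke 40% open
print(f"Estimated rate: {estimated_rate(95.0, 32.0, 0.40):.0f} m3/h")
```
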

Much of the above advice for extracting value from established process engineering tools may seem straightforward, but it is still not the norm.

So what are the reasons why dynamic simulation is still not routinely used on all projects? I have already mentioned the perceptions that it is too expensive (which relates to the difficulty in quantifying the benefits), that it needs specialists (who may be difficult to find), that it takes longer and is difficult to predict (and hence to schedule) and, falsely, that it needs too much data to be useful early in a project.

The current typical contract strategy reinforces and compounds these attitudes – often making dynamic simulation an irritation for the contractor, rather than an integral part of their team. Delegating the simulation effort to the design contractor usually reduces the life-cycle benefits, with the same model being less likely to be used during training or production. Without the OpCo taking control of the dynamic simulation, segmented supply leads to segmented modelling efforts – increasing costs but reducing benefits.

The knowledge gap, particularly within Project Management, creates a Catch-22: until dynamic simulation is used more, the understanding of the potential benefits will not be widespread enough to realise the full value, but without seeing the benefits delivered, it won’t get used enough…

Breaking this cycle will need OpCo managers with vision and courage to seek expert advice at project inception to make a life-cycle simulation plan and take key decisions, based on sound understanding of the technology and its optimal use.

With the OpCo taking ownership from the start, using the right contract and management strategies, the full potential should be achievable – with all the cost savings that can bring. Then the snowball should really start to gather momentum!

Author: John Hinsley

With over 30 years’ experience applying dynamic simulation in a range of process industries, John is currently a Director at Integrated Process Analysis Ltd, providing engineering services, consultancy, training and technical assurance in Flow Assurance, Dynamic Process Simulation and Process Control.