As computer technology evolves, numerical modelling has become an increasingly popular tool for obtaining solutions to static and dynamic structural problems in all engineering fields. Tunnelling, on the other hand, is one of the most empirical engineering disciplines, so the applicability and reliability of such numerical solutions may often be challenged. Yet, in the famous words of G.E.P. Box, "all models are wrong, some are useful" [1]. Numerical methods have become standard practice and are indeed a very useful tool for the analysis of complex tunnel structures, as many recent design and consulting projects have shown. In the authors' view, a well-prepared simulation of an underground structure can give a good and communicable description of the structural behaviour, indicate risks in design projects, and allow some rationalisation of a modelling campaign in terms of cost (or effort) efficiency.

THE PURPOSE OF MODELLING

The primary objective of tunnelling engineers is arguably to facilitate the design and construction of tunnels as efficiently and safely as possible. In this endeavour, we are often required by clients to produce reports quantifying, or at least identifying, the potential risks to existing surface and subsurface infrastructure in an attempt to provide some measure of assurance against adverse consequences during construction. Inevitably, modelling appears to have become an inherent part of this assurance process.

The purpose of engineering models – whether physical or analytical – is essentially to anticipate or simulate the expected response of the ground to excavation, as well as the loads and deformations induced in the structural support elements of the new and existing infrastructure. The reality is, however, that no matter how thorough we believe our ground investigation and interpretation to be, or how numerically sophisticated our models are, every model is only an abstraction of reality.

This can be attributed to lack of knowledge, to randomness in the nature of materials and phenomena, to mathematical indetermination, or even to decisions on budget allocation. Particularly in geotechnical engineering, the inherent variability of the ground means that the predictions resulting from the modelling process contain uncertainties.

For centuries, prior to the advent of numerical models, tunnels were successfully designed and constructed using empirical and simple analytical methods. The empirical approach makes use of past experience in similar conditions to define temporary support and lining thicknesses. Of course experience must be relied upon (perhaps most of all in SCL tunnelling), and this aspect can hardly be matched by numerical calculations. However, the authors are of the opinion that such an approach alone cannot adequately quantify the risk and uncertainty during the design phase, which are then to be managed during construction. The closed-form analytical solutions often employ grossly simplifying assumptions, one of which is that the ground is CHILE [5]: continuous, homogeneous, isotropic, linearly-elastic. This is rarely the case. In fact, the ground is more likely to be DIANE [5]: discontinuous, inhomogeneous, anisotropic, non-elastic. Besides, these methods are typically valid only for very simple planar geometries. Numerical models can be used to help quantify the risks during the design phase, as well as to capture the DIANE nature of the ground and complex tunnel geometries.
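As an illustration of what such closed-form solutions entail, the sketch below evaluates the classical Kirsch solution for the hoop stress at the wall of a deep circular opening; the CHILE assumptions are built directly into the formula, and the stress values used are purely illustrative.

```python
import math

def kirsch_hoop_stress(sigma_v, k0, theta_deg):
    """Hoop stress at the wall of a deep circular opening (Kirsch).

    Valid only under CHILE assumptions: continuous, homogeneous,
    isotropic, linearly-elastic ground, with a circular opening and
    no support pressure. theta is measured from the horizontal
    springline axis.
    """
    sigma_h = k0 * sigma_v                    # horizontal in-situ stress
    two_theta = 2.0 * math.radians(theta_deg)
    # Superposition of the two uniaxial Kirsch solutions at r = a:
    return (sigma_h * (1.0 - 2.0 * math.cos(two_theta))
            + sigma_v * (1.0 + 2.0 * math.cos(two_theta)))

# Illustrative values: sigma_v = 400 kPa, K0 = 0.5
print(kirsch_hoop_stress(400.0, 0.5, 0.0))   # springline: 3*sigma_v - sigma_h = 1000 kPa
print(kirsch_hoop_stress(400.0, 0.5, 90.0))  # crown:      3*sigma_h - sigma_v =  200 kPa
```

The very features this formula ignores (jointing, anisotropy, plasticity, complex geometry) are precisely what a numerical model can add.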

The requirement to quantify uncertainty and manage risk, coupled with the inherent simplifications in closed-form solutions, has perhaps been a driving factor in the increasingly frequent use of numerical models for tunnel design. This is ever more apparent as urban development requires engineers to push the boundaries of experience and, in some instances, propose new construction schemes that are without precedent. The numerous station upgrade schemes currently underway in London are typical examples.

The debate for and against numerical models

We are all too familiar with the situation seen in many consultancy offices where the boss walks in and says: "Team, here are the details of the new tunnel project that we've won. What I want you to do is design the lining and support elements and give me a prediction of imposed deformations". At which point, someone is charged with 'doing the numbers'. This, usually young, engineer fires up the numerical modelling software, dives in to designing the tunnel and comes back with results. Yet often the modeller's results may not fit with a design that experience has shown to be adequate for similar conditions. For instance, the predicted lining loads may fall outside the theoretical capacity of the shotcrete lining calculated in accordance with the Eurocodes; the code checks and the results of the model indicate a lining thickness far greater than what more experienced practitioners consider 'reasonable' for the problem in question. Thus, we are faced with a dilemma: on the one hand, we have a modeller who has perhaps received formal tuition in numerical methods and has some experience with their implementation in the office; on the other hand, we have the practical engineer who has spent years on site building tunnels, each holding a differing opinion. The question is whose advice we should take: whether the design should be driven by the model, or the model adapted to suit what we believe to be the correct design.

It is perhaps these situations that prompted Professor David Potts to address the debate for and against numerical modelling in his 2003 Rankine Lecture [12]. In it, Potts raises concern over the frequent examples of poor practice which have led others [6] to question the validity of design using numerical models. In the authors' opinion, it is perhaps not the numerical model that is to blame; the model is but a mathematical calculation based on fundamental laws of physics and engineering theories, the latter of which are prescribed as inputs by the user.

Indeed, the input of the model dictates the output. Victoria Station's original 1960s design (created without numerical models), compared with the modern vision (Figure 1), demonstrates the variability that can result entirely from the user's approach to modelling. It is perhaps such variability that has led to the disparity between the two schools, for and against modelling. From the authors' experience, it appears that the generation of practically trained engineers tends to doubt numerical models because the new-age engineers-cum-modellers often lack practical experience, or do not understand the expected behaviour, which leads to blind faith in the results of the model.

But then again, in any solution (numerical, empirical, analytical), the quality of the input defines the quality of the output. Regardless of whether the approach is empirical, analytical or numerical, if the result stems from good engineering it will provide a correct solution. Numerical modelling carries certain benefits, but solutions from all three approaches should be consistent with each other. And of course, modelling engineers should have a broad practical understanding of their subject, in order to deliver a practical product with their work.

IMPLEMENTATION

There is a thin line between detailed, accurate structuring of a model and result-oriented manipulation of it, and this very ambiguity may compromise the credibility of numerical methods. It is therefore of high importance for the modelling engineer to make the necessary assumptions and simplifications, discarding superfluous details, to produce the best possible product. To achieve that, a process of refining the model so that all relevant features are simulated as accurately as possible should take place before any analysis is run.

Type and scope of numerical models

Various model types can be used depending on the nature of the structure.

In terms of the scope of the model, three cases can be distinguished: check, design and back analysis. An example of the latter is the back analysis of the initial model prepared by [7]: the back-analysed model exhibited a close match to both the inclinometer measurements and the observed mode of failure.

Simulating the soil

For the simulation of the soil, the engineer needs to derive appropriate soil parameters for the selected constitutive model, based mainly on the geotechnical investigation. As tunnelling deals with unloading, it is of high importance to consider the dependency of bulk and shear stiffness on the stress and strain level, and to be aware that soil exhibits a stiffer response on unloading than in primary loading. The uncertainty inherent in soil mechanics is reflected in the derivation of most parameters (such as K0), and the final outcome should reflect a reasonable treatment of factors such as anisotropy, heterogeneity or the presence of geological faults. In any case, simplifications can be applied to arrive at a reasonable model, making use of previous experience in similar projects or ground conditions.
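As a minimal sketch of the stiffness dependency described above, the snippet below uses a power-law relation of the kind found in hardening-soil-type constitutive models; the reference stiffness, exponent and unloading ratio are illustrative assumptions, not recommended parameters.

```python
def stress_dependent_stiffness(sigma3_eff, e_ref=30e3, p_ref=100.0, m=0.8,
                               unloading=False, unload_ratio=3.0):
    """Power-law stress dependency: E = E_ref * (sigma3' / p_ref)^m [kPa].

    unload_ratio reflects the commonly cited observation that the
    unloading/reloading stiffness is roughly 3-5 times the primary
    loading stiffness. All numbers here are illustrative only.
    """
    e = e_ref * (sigma3_eff / p_ref) ** m
    return unload_ratio * e if unloading else e

print(stress_dependent_stiffness(200.0))                  # primary loading
print(stress_dependent_stiffness(200.0, unloading=True))  # stiffer unloading branch
```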

Structural components

The most commonly simulated structural components in tunnelling are the linings. Once the loads acting on the supports have been calculated, the simulation can be performed using either solid (continuum) elements or shell (beam) elements. A great advantage of solid elements is that they allow complex shapes and excavation sequences to be simulated. However, the structural section forces (bending moment M, shear force V, hoop force N) cannot be calculated directly, and the small thickness of the lining compared to the size of the model requires detailed meshing and usually a large number of elements. With shell elements, the structural forces can be obtained directly, mesh quality is not compromised, and they can be used together with joint elements to simulate construction joints (common in SCL tunnelling).
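Where solid elements are chosen nonetheless, the section forces can still be recovered by integrating the stresses through the lining thickness, as in the sketch below; the sampling points, stresses and sign convention are illustrative, and real post-processing must follow the element's own integration points.

```python
def section_forces(z, sigma):
    """Recover hoop force N and bending moment M (per unit length of
    tunnel) from through-thickness stresses in a solid-element lining.

    z     : offsets of the stress sampling points from the lining
            mid-surface [m], ordered from intrados to extrados
    sigma : hoop stresses at those points [kPa]

    N = integral(sigma dz), M = integral(sigma * z dz), evaluated by
    the trapezoidal rule.
    """
    n = m = 0.0
    for i in range(len(z) - 1):
        dz = z[i + 1] - z[i]
        n += 0.5 * (sigma[i] + sigma[i + 1]) * dz
        m += 0.5 * (sigma[i] * z[i] + sigma[i + 1] * z[i + 1]) * dz
    return n, m  # [kN/m], [kNm/m]

# Illustrative 300 mm lining, linear stress distribution 2000 -> 4000 kPa:
z = [-0.15, 0.0, 0.15]
sigma = [2000.0, 3000.0, 4000.0]
# N = 900 kN/m (exact for a linear distribution); M = 22.5 kNm/m with
# this coarse 3-point sampling, vs 15 kNm/m exact - refine the sampling.
print(section_forces(z, sigma))
```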

Excavation and support sequences

A great feature of numerical methods is that they allow the simulation of the various construction phases and, especially in tunnelling, the implementation of so-called "soil relaxation". The simulation of the excavation and support sequence is a straightforward procedure that should be carried out as accurately as the nature of each individual structure requires. Focusing on soil relaxation, the main principle is the following: when a simplified numerical model is used instead of a full 3D one, a relaxation factor can be applied to the soil prior to its excavation in order to simulate the deformation that takes place before the installation of the support.

There are several ways to estimate the relaxation factor (analytical methods, simplified numerical models) and to implement it in the model (internal pressure reduction, stiffness reduction, volume loss control). Where possible, the most direct method should be used in order to avoid additional time-consuming calibrations of the model: for example, internal pressure reduction, which is directly related to the relaxation factor, rather than stiffness reduction, which must first be calibrated to find the stiffness reduction factor that yields similar results.
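A minimal sketch of the internal pressure reduction approach, assuming a deep circular tunnel in elastic ground under hydrostatic in-situ stress, is given below; the stress, radius and stiffness values are illustrative.

```python
def internal_pressure(sigma_0, relaxation):
    """Fictitious support pressure for a relaxation factor lambda:
    p_i = (1 - lambda) * sigma_0. relaxation = 0 means the cavity wall
    is fully supported; relaxation = 1 means fully unsupported."""
    return (1.0 - relaxation) * sigma_0

def elastic_convergence(sigma_0, relaxation, radius, shear_modulus):
    """Radial wall displacement under the reduced internal pressure:
    u = lambda * sigma_0 * R / (2G), the elastic ground reaction curve
    for a deep circular tunnel. Real ground deviates once it yields."""
    return relaxation * sigma_0 * radius / (2.0 * shear_modulus)

sigma_0 = 500.0  # kPa, in-situ stress (illustrative)
G = 40e3         # kPa, ground shear modulus (illustrative)
for lam in (0.3, 0.5, 0.7):
    u_mm = 1000.0 * elastic_convergence(sigma_0, lam, 3.0, G)
    print(f"lambda = {lam}: p_i = {internal_pressure(sigma_0, lam):.0f} kPa, "
          f"u = {u_mm:.1f} mm")
```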

Trials and calibration

The calibration of a model refers to the execution of trial analyses in order to validate it against analytical or well-established numerical solutions, to match monitoring data (similar to back analysis), or to investigate the effect of various parameters (sensitivity analysis). It is up to the judgement of the engineer whether the model should be calibrated in order to optimise its performance or to increase confidence in its output.
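A sensitivity analysis can be as simple as a one-at-a-time parameter sweep; in the sketch below the elastic convergence formula stands in for a full model run, and the stiffness range is an illustrative assumption.

```python
def convergence_mm(shear_modulus, sigma_0=500.0, radius=3.0, relaxation=0.5):
    """Elastic radial convergence u = lambda * sigma_0 * R / (2G), in mm.
    A closed-form stand-in for the actual model run in this sketch."""
    return 1000.0 * relaxation * sigma_0 * radius / (2.0 * shear_modulus)

# One-at-a-time sweep over a plausible stiffness range (kPa, illustrative)
results = {g: convergence_mm(g) for g in (20e3, 40e3, 80e3)}
for g, u in results.items():
    print(f"G = {g / 1e3:.0f} MPa -> u = {u:.1f} mm")
print(f"Sensitivity band: {min(results.values()):.1f} "
      f"to {max(results.values()):.1f} mm")
```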

Checks and verification of results

As already discussed, the quality of numerical models – especially in geotechnics – depends strongly on the uncertainties in their input and, consequently, in their output. It then falls to the modelling engineer to prepare a tool that fits the needs of the problem with adequate accuracy, precision and reliability. To achieve that, one should fully understand the functions of the model, eliminate all errors, and verify the results in the post-processing phase. The bottom line is that a model must make sense, and its creator must be able to defend it.

In the pre-processing phase, a thorough check should be performed to identify any flaws in the geometric formulation and the mesh (e.g. wrongly assigned boundary conditions, poor meshing, or incompatible types and orders of finite elements) or in the input parameters; a very common mistake is an oversight in the unit system, and sometimes the apparently trivial matters most. It is also important to escalate the complexity of the model as it is being built: from the authors' experience, approximately 50 per cent of troubleshooting in a model is associated with material non-linearity, so it seems logical to simply test-run an elastic model before assigning any soil plasticity. Why not allow for an easier breakdown of the error-hunting?

But even before all that, it is of great importance that the user understands the model's functions, i.e. the way it receives the input and the way it produces the output. For that, one could create a simplified trial model to identify the potential and the limitations of the program. What can prove quite helpful is a cross-check with analytical solutions (for example arch statics [10], [2], [3], [11]; see also Jones, 2013), or with empirical solutions and previous experience – this can be a review of previous projects, available monitoring data or a discussion with a senior colleague. Such solutions can also be used for the verification of the output. Moreover, agreement between the outputs of two independently prepared models can substantially increase confidence in the results and support the justification of decisions to be made based on them.
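Such a cross-check can be automated in a few lines; the sketch below compares a model's lining thrust with the closed-form ring thrust N = p * R for a circular ring under uniform radial pressure. The tolerance and input values are illustrative, and only order-of-magnitude agreement should be expected from so coarse a reference.

```python
def check_against_ring_thrust(n_model, pressure, radius, tol=0.15):
    """Compare a model's hoop thrust [kN/m] with the closed-form
    thrust N = p * R of a circular ring under uniform radial pressure
    [kPa, m]. A coarse sanity check, not a validation: real load
    distributions are not uniform (tolerance is illustrative)."""
    n_ref = pressure * radius
    deviation = abs(n_model - n_ref) / n_ref
    return n_ref, deviation, deviation <= tol

n_ref, dev, ok = check_against_ring_thrust(n_model=1400.0,
                                           pressure=450.0, radius=3.0)
print(f"analytical N = {n_ref:.0f} kN/m, deviation = {dev:.1%}, ok = {ok}")
```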

DESIGN ASPECTS

Once the required results are extracted and verified, they can be fed into the design process, which may include the structural design as well as the design of excavation sequences, settlement mitigation measures, or monitoring layouts. Nonetheless, this process is not straightforward, and it should not be forgotten that it typically includes several iterations between the modelling and design teams in order to optimise the design outcome, with the construction team often also involved in such decision making. This calls for thorough reporting of the modelling procedure and archiving of the respective data, and certainly for perceptive budget allocation and time scheduling. This becomes all the more critical if the modelling engineer is absent at a future stage – e.g. having departed from the project team after delivery of the first outputs – as revisiting a sophisticated model can become particularly laborious.

The transfer of information between the various teams deserves additional attention in order to avoid misinterpretation, especially when it comes to large volumes of data in raw formats and spreadsheets, or when a third party inspects the output to extract information. Essentially, it should be a modelling engineer's pronounced responsibility to build the model in both an editable and a retraceable way.

Structural design through capacity limit curves

The development of advanced numerical methods in tunnel analysis and design has characteristically led to a large amount of output information. When it comes to tunnel linings, which in most cases are elements with uniform thickness and reinforcement (or unreinforced / fibre-reinforced), the design can be expeditiously performed using so-called Capacity Limit Curves (Sauer et al., 1994).

These curves present all design combinations of axial forces and bending moments (potentially shear forces too) juxtaposed with the envelope of the cross-section's design capacity, providing a transparent and comprehensive graphical and numerical structural verification, as well as the design's safety factor (Hoek et al., 2008).
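As a hedged illustration of the concept, the sketch below generates a simplified M-N envelope for an unreinforced rectangular section using the plain-concrete stress-block result N(e) = f_cd * b * h * (1 - 2e/h) with e = M/N; actual capacity limit curves additionally account for reinforcement, code safety formats and second-order effects, and the section and strength values used are illustrative.

```python
def plain_section_envelope(b, h, f_cd, n_points=50):
    """Simplified M-N capacity envelope for an unreinforced rectangular
    section (stress block, no tensile strength): N(e) = f_cd*b*h*(1 - 2e/h),
    M = N*e, for eccentricities 0 <= e < h/2. A sketch of the capacity
    limit curve concept only."""
    envelope = []
    for i in range(n_points):
        e = 0.5 * h * i / n_points               # eccentricity [m]
        n = f_cd * b * h * (1.0 - 2.0 * e / h)   # axial capacity [kN]
        envelope.append((n * e, n))              # (M [kNm], N [kN])
    return envelope

def inside_envelope(m_ed, n_ed, b, h, f_cd):
    """Check a design pair (M_Ed, N_Ed) against the simplified envelope
    by comparing the design eccentricity with the section capacity."""
    e = m_ed / n_ed
    if e >= 0.5 * h:
        return False                             # resultant leaves the section
    return n_ed <= f_cd * b * h * (1.0 - 2.0 * e / h)

# Illustrative: 1 m strip of a 300 mm lining, f_cd = 20 MPa (= 20e3 kPa)
print(inside_envelope(m_ed=40.0, n_ed=900.0, b=1.0, h=0.3, f_cd=20e3))
```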

EFFORT EFFICIENCY

Recent years have seen an accelerated advance in computing technologies. Until the 1970s, when the first pocket calculators became available, the most commonly used calculation tool among engineers was the slide rule; by the 1980s the desktop PC had become standard in design consultancies, and the relevant technology has advanced constantly since. Yet, although computing technology never stops advancing, a remarkable shift is taking place in the analyses carried out by civil engineers: not long ago, the time spent on numerical analysis tasks was perceived mainly as the time needed to run a calculation, whereas nowadays far more time is allocated to building, troubleshooting and post-processing a model than to running it. Increased computing capacity allows for much more capable models, e.g. in terms of geometric precision (3D) or material constitutive laws (non-linearities). Consequently, the efficiency of a modelling exercise has now become clearly associated with the abilities of the modelling engineer rather than those of the machine.

At the same time, the effort and money put into the modelling campaign need to be aligned with the requirements of the project, i.e. the technical questions that need to be answered by the analysis within a feasible budget. A model that gives less than requested is of course inefficient. But then again, a model providing superfluous information is not necessarily a good solution either.

An obvious example: if the sole concern of a design project is checking the dimensions of the tunnel linings, why pursue a model that accurately estimates surface settlements?

A common waste of computational time is the creation of a very dense mesh, refined beyond the point at which the results are sensitive to the element size. A well-engineered numerical model should strike an appropriate balance between the output it provides, the effort it demands, and the project-specific added value it offers. This is, after all, often considered to be the very core of what engineering is about.
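One practical guard against over-dense meshing is a short convergence study, sketched below; run_model is a hypothetical placeholder for whatever drives the actual analysis, and the tolerance is illustrative.

```python
def mesh_convergence(run_model, sizes, rel_tol=0.02):
    """Refine the mesh until the monitored result changes by less than
    rel_tol between successive runs. run_model is a hypothetical
    interface: it takes a target element size and returns the key
    result, e.g. the crown settlement in mm."""
    previous = None
    for size in sizes:  # ordered coarse -> fine
        result = run_model(size)
        if previous is not None and abs(result - previous) <= rel_tol * abs(previous):
            return size, result  # further refinement is wasted effort
        previous = result
    return sizes[-1], previous   # no convergence within the given sizes

# Illustrative stand-in: the result approaches 25 mm as the mesh is refined
demo = lambda h: 25.0 * (1.0 + 0.1 * h)
print(mesh_convergence(demo, [1.0, 0.5, 0.25, 0.125]))
```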

SYNOPSIS

The intention of this paper has been to present numerical modelling for tunnelling from a less academic, more practical point of view and to provide some useful concepts and insights for engineers.

As a summary, the following points are recommended:

  • Even the most sophisticated numerical model is incapable of giving an exact answer. Numerical models do have limitations, and peril to the project lurks when the results of a model are trusted blindly. However, when the limits of the model are understood, numerical solutions can yield very useful information. As with most tasks, we need to know what we know, but we must also know what we don't.
  • Models are but mathematical calculations based on fundamental laws of physics and engineering theories. Engineers should always question and challenge their models up to the point they are able to explain and defend the results based on theory and/or experience. Confidence in the analysis results should arise from agreement with relevant past experience, cross checking with analytical or simplified numerical solutions, and above all common engineering sense.
  • Modelling engineers should know the 'habits' of the software they are using, its pros and cons, and should develop a thorough checklist that simplifies the modelling and, moreover, the debugging process. Note also that in a poorly constructed model, debugging may take up to 90 per cent of the overall effort.
  • Decisions on the analysis approach need to be aligned with the characteristics of the project team. It is sometimes preferable, depending on the capacities of the parties involved, to use simpler models (e.g. simple constitutive laws) that are better understood and communicated. Numerical models are built in order to support specific engineering decisions, and they should be understood in this context. This, when balanced against the project budget, can lead to an efficient analysis campaign.

FINAL THOUGHT

There are no hard and fast rules to follow when it comes to modelling. In the end, the aim is always to achieve reliable results with minimum effort, a procedure that is always subject to optimisation as engineers' skill and experience grow and as computational capacities increase.