Much of the finite element analysis work in modern geotechnics is carried out on desktop PCs costing little more than £1000. But the story was very different 15 years ago, when a typical machine for small finite element (FE) problems would have been a DEC VAX 11/750 with 5MB of RAM and a 400MB hard disk, costing over £100,000.
What were considered large analyses at the time would often have to be sent away to a site with a 'large' computer such as a Cray. Work that could not even be considered 10 years ago can now be done on a desktop machine.
Today's software is also more user friendly. Recent developments allow the user to draw the problem on screen and to assign constitutive models and boundary conditions with mouse-driven commands in a Windows environment, so users no longer need to be familiar with less intuitive operating systems such as Unix.
However, while hardware and software have both become cheaper and more user friendly, this does not mean that a serious FE study is cheap to commission. Any properly performed study requires a large amount of engineering input to check the sensitivity of the results to the range of input parameters and assumptions, and to confirm that calibration runs give results within limiting closed form solutions. Ease of access to powerful FE packages should not lead to the belief that performing such analyses is straightforward and foolproof.
It may seem obvious, but the first consideration when starting a numerical analysis project is to define what results are required - for example, displacements, stresses or failure loads. Defining this resolves which geometry and features of the problem need to be included in the analysis, and which can be disregarded. Even with current computing power, it is rarely possible or even desirable to model an entire problem in one analysis without making simplifying assumptions of some kind.
Where failure mechanisms and loads are being investigated, another factor to resolve is the definition of failure, as in many problems there is no sudden, clear, catastrophic failure. A simple analogy is that of pile design. The design capacity of a pile must be defined at a certain deflection, since most piles will carry additional load as they penetrate further, but the large deflections associated with such loads will generally be unacceptable for the structure being supported.
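The pile analogy can be sketched numerically: given a load-settlement curve from a load test or FE run, the 'capacity' is simply the load mobilised at a chosen settlement criterion. The curve and the 10%-of-diameter criterion below are illustrative assumptions, not values from the text:

```python
# Hypothetical load-settlement pairs from a pile load test or FE run
# (settlement in mm, load in kN) -- illustrative numbers only.
curve = [(0.0, 0.0), (2.0, 400.0), (5.0, 700.0), (10.0, 900.0),
         (20.0, 1000.0), (40.0, 1060.0), (80.0, 1100.0)]

def capacity_at_settlement(curve, s_limit):
    """Linearly interpolate the load mobilised at a settlement criterion."""
    for (s0, q0), (s1, q1) in zip(curve, curve[1:]):
        if s0 <= s_limit <= s1:
            return q0 + (q1 - q0) * (s_limit - s0) / (s1 - s0)
    raise ValueError("criterion outside the range of the curve")

# e.g. define 'failure' as settlement of 10% of an assumed 600mm pile diameter
q_design = capacity_at_settlement(curve, 0.10 * 600.0)
```

Changing the settlement criterion changes the design capacity, which is exactly why the definition of failure must be agreed before the analysis starts.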
While the definition of failure can be open to interpretation, one of the major advantages of FE analysis is that the precise failure mechanism need not be assumed. This is not the case in many limit state approaches, such as in slip circle analyses of slope stability. Indeed, one of the major advantages of numerical methods is that the solutions are valuable in indicating the deformation or failure mechanisms which occur under given changes in loading conditions.
Only after the requirements of an analysis have been established can consideration be given to matters such as the selection of appropriate element types, material models and mesh densities.
When selecting material models and input parameters, it is important to ensure that the aspects of material behaviour relevant to the problem are correctly modelled. In many ground capacity problems, for example, the final solution is very sensitive to the predicted displacements. Results may therefore depend not only on the nominal 'strength' of the material, but also on the variation of soil stiffness with strain used in the analysis. This variation can be measured reliably, but requires in situ techniques (such as geophysics or seismic cone testing), high quality sampling and laboratory testing. Incorrect assessment of stiffness can also lead to gross overpredictions of ground movements, for example of surface settlements caused by tunnelling.
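As a sketch of what 'variation of stiffness with strain' means, a hyperbolic degradation curve of the Hardin-Drnevich type shows the secant shear modulus falling from its small-strain value as strain grows. The small-strain modulus and reference strain below are assumed, illustrative values, not data from the text:

```python
def secant_stiffness(gamma, g0, gamma_ref):
    """Hyperbolic (Hardin-Drnevich type) stiffness degradation:
    the secant shear modulus falls from the small-strain value g0
    as shear strain gamma grows past the reference strain gamma_ref."""
    return g0 / (1.0 + gamma / gamma_ref)

# Illustrative values: g0 as might come from seismic cone testing,
# gamma_ref assumed for this sketch.
g0 = 80e6          # Pa, small-strain shear modulus
gamma_ref = 1e-3   # reference shear strain

for gamma in (1e-5, 1e-4, 1e-3, 1e-2):
    ratio = secant_stiffness(gamma, g0, gamma_ref) / g0
    print(f"gamma = {gamma:.0e}  G/G0 = {ratio:.2f}")
```

An analysis run with only the small-strain modulus, or only a large-strain secant value, can misjudge displacements by a wide margin, which is the source of the overpredictions mentioned above.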
Some of the more complex material models incorporate many parameters that are difficult to define with confidence. While the material model must include all aspects of soil behaviour relevant to the problem, unnecessary complexity can cloud interpretation of results, which can sometimes appear to depend on rather obscure parameters. An example of the use of more complex models is the prediction of pore pressure generation in sands under cyclic or seismic loading.
Fugro uses models with pore pressure generation parameters derived from the results of cyclic triaxial tests or cyclic simple shear tests. However, in some cases the calculated pore pressure response can be very sensitive to small changes in the parameters measured in the cyclic tests, with a change of 10% in the design value of one parameter leading to a doubling of predicted liquefaction depth in a seismic event.
To compound the problem there are often only a few data points available for calibrating these parameters on any given project. The results of such analyses must therefore be used with caution and checked as carefully as possible against more simply derived solutions and/or empirical correlations. Where the project timescale and budget allows, a model test programme can help improve the confidence in FE predictions.
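A minimal one-at-a-time sensitivity sweep of the kind recommended here can be sketched as follows. The `toy_model` function is a hypothetical stand-in for an analysis, not Fugro's pore pressure model, and the 10% perturbation mirrors the change discussed above:

```python
def one_at_a_time_sensitivity(model, base, rel_step=0.10):
    """Perturb each parameter by +/- rel_step (here 10%) in turn,
    holding the others at their base values, and record the range
    of the model output -- a minimal sensitivity check."""
    base_out = model(**base)
    ranges = {}
    for name, value in base.items():
        outs = []
        for factor in (1.0 - rel_step, 1.0 + rel_step):
            trial = dict(base, **{name: value * factor})
            outs.append(model(**trial))
        ranges[name] = (min(outs), max(outs), base_out)
    return ranges

# Hypothetical stand-in for an analysis: strongly nonlinear in 'a',
# mildly nonlinear in 'b', so a 10% change in 'a' moves the output far more.
def toy_model(a, b):
    return a ** 4 / b

print(one_at_a_time_sensitivity(toy_model, {"a": 2.0, "b": 1.0}))
```

A sweep like this quickly identifies which measured parameters dominate the answer and therefore where the few available calibration data points matter most.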
Geotechnical problems, which often require non-linear stiffness and material plasticity (both of which can be stress or strain dependent), frequently result in large analytical models that are numerically inefficient. Different techniques for solving the system of equations in FE analyses are becoming more widely used, allowing problems to be analysed in much less computer core space, and faster, than with a 'traditional' complete solver.
For very large linear problems, such as elastic analyses, iterative solvers can reduce the core space required to as little as 10% and deliver the solution an order of magnitude faster. Fugro recently performed flow net analyses involving complex 3D geometry that required over 50,000 3D brick elements to define the problem adequately. A traditional complete solver would have needed far more than the 2GB of core space available on the computers being used; an iterative solver enabled the analyses to be performed in just over 600MB of core space in only a few hours.
Geotechnical problems can involve large displacements and formulations of large strain elements are becoming increasingly sophisticated. However, some soil-structure interaction problems involve distortion of the soil such that even large strain elements cannot cope. Examples include the installation of spudcan footings for offshore mobile jack-up drilling platforms and the installation of pipelines.
In these instances Fugro uses FE software that automatically regenerates the mesh when the element distortion becomes greater than a specified amount. Another benefit of this remeshing technique is that the distribution of elements in the new mesh is automatically weighted to areas where distortions or strain gradients are greatest. This makes the resulting mesh more efficient, with elements concentrated only in the regions where they are most needed.
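The weighting of elements towards regions of high strain gradient can be sketched in 1D as equidistribution of an error indicator: new nodes are placed so that each element carries roughly equal 'error'. The Gaussian indicator below is a hypothetical example, not Fugro's remeshing criterion:

```python
import numpy as np

def equidistribute(x, indicator, n_new):
    """Place n_new nodes so each new element carries roughly equal
    integrated 'error' -- a 1D sketch of how remeshing concentrates
    elements where the indicator (e.g. strain gradient) is largest."""
    w = indicator + 1e-12  # small floor so quiet regions still get nodes
    # cumulative integral of the indicator via the trapezium rule
    cum = np.concatenate([[0.0],
                          np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(x))])
    targets = np.linspace(0.0, cum[-1], n_new)
    return np.interp(targets, cum, x)

# Hypothetical indicator peaking sharply near x = 0.5
x = np.linspace(0.0, 1.0, 201)
indicator = np.exp(-((x - 0.5) / 0.05) ** 2)
new_nodes = equidistribute(x, indicator, 21)

spacing = np.diff(new_nodes)
print(spacing.min() < spacing.max())  # prints True: fine near the peak
```

The new node spacing shrinks where the indicator is large and grows in the quiet regions, which is the same economy of elements described above, achieved automatically rather than by hand.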
While the increasing availability of affordable sophisticated hardware and software - opening the way to hitherto impossible numerical modelling - is excellent, it is also making the finite element method accessible to many people who do not have wide experience of using such techniques.
The dangers of using such software are as great as ever, and unless full sensitivity studies are performed on all input and analysis parameters that can affect the results, the old adage about such analyses will still be true: 'rubbish in - rubbish out'.
Tim Carrington is senior geotechnical engineer at Fugro.