Nearly every blast decision involves compromise. Closing the pattern to increase fragmentation or heave may cause undesirable backbreak. Firing a blast more quickly for cast and movement may increase vibration. Using a smaller hole diameter may reduce oversize but increase costs. It is the role of the blast designer to understand and balance these compromises to achieve the blast objectives. Unfortunately, operating mines and quarries are difficult places in which to run controlled experiments, and it can be hard to measure the effects of subtle changes in blast design. Computer-based blasting models help guide our blast design choices, but it is important that users understand the limitations and constraints of such models.

###### One model to rule them all …

The perfect blasting model would accurately predict the non-ideal detonation behaviour of explosives in all rock types and down-the-hole conditions.

It would predict damage, fracture, fragmentation and rock movement in three dimensions based on known geological conditions, including rock structure, water saturation and in-situ stresses. It would include the fluid dynamics effects of gas ingress into and around cracks and its influence on the dynamics of bulk motion associated with muckpile heave. It would use an as-drilled blast pattern, actual initiation timing (including the effects of delay scatter) and real face profiles. It would account for the effects of stress wave reflections and refractions from free faces and joints. It would predict fragment size distribution explicitly over the full range of resultant sizes (eg with rock sizes ranging from two metres down to 200 microns), as well as ground vibration, air overpressure, flyrock risk and possible fume and dust propagation at all points around the blast, out to a range of hundreds of metres.

Unfortunately, the perfect blasting model does not exist. The models available today can all do some of these things, but each has limitations and provides only part of the picture.

###### Types of blast models

Models can be classified as empirical, analytical, numerical, mechanistic or combinations of these.

Empirical computer models are based on fitting mathematical and/or computational expressions to information that has been obtained by observation and/or measurement. Examples of commonly used empirical models include the Kuz-Ram and Swebrec fragmentation prediction models and the scaled distance formula for predicting vibration. These models can often be represented in a spreadsheet and run on a laptop in a few seconds.
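As a concrete illustration, the Kuznetsov/Cunningham mean fragment size relation at the heart of Kuz-Ram can be evaluated in a few lines. The rock factor, powder factor and charge mass below are purely illustrative, and published versions of the formula vary in their constants:

```python
# Illustrative Kuz-Ram mean fragment size (Kuznetsov/Cunningham form):
#   x50 = A * K**-0.8 * Q**(1/6) * (115/RWS)**(19/30)
# A: rock factor, K: powder factor (kg/m^3), Q: charge per hole (kg),
# RWS: relative weight strength of the explosive vs ANFO (ANFO = 100).

def kuz_ram_x50(A, K, Q, RWS=100.0):
    """Mean fragment size in cm, as the formula is usually quoted."""
    return A * K**-0.8 * Q**(1.0 / 6.0) * (115.0 / RWS)**(19.0 / 30.0)

# Hypothetical inputs: medium rock factor, 0.6 kg/m^3 powder factor,
# 250 kg of ANFO-equivalent explosive per hole.
x50 = kuz_ram_x50(A=7.0, K=0.6, Q=250.0, RWS=100.0)
print(f"predicted mean fragment size x50 = {x50:.0f} cm")
```

Note how the negative exponent on powder factor captures the basic compromise discussed above: a tighter pattern (higher K) predicts a finer mean size.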

Analytical models attempt to make predictions from first principles, based on an understanding of the underlying physics of the system or process, transcribed into mathematical relationships (formulae) that (a) are usually few in number, (b) are amenable to computation and (c) are considered good approximations to the theory. Although analytical models can be used for dynamic (time-varying) calculations, by and large they are not; they do, however, often involve iterative procedures to achieve convergence or optimisation, which can make them difficult to create in a spreadsheet. These models can generally be run on a laptop but may still take a few minutes or even hours, depending on the fidelity and resolution.

Numerical models are similar to analytical models in that they utilise mathematical expressions that are, when combined, deemed to produce good approximations to the theory. The difference is that the combinations of equations cannot always be solved separately (ie analytically) or together without the use of numerical procedures. Special mathematical procedures are needed to differentiate or integrate over time or space or to minimise functions or to interpolate or extrapolate solutions into areas where results are required. Fourier transforms, which can be used to convert between time and frequency domains to aid in finding solutions, provide a good example of such procedures.
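To make the Fourier transform example concrete, a sampled vibration record can be moved into the frequency domain in a few lines. This is a minimal sketch using NumPy; the two-sinusoid "signal" is synthetic:

```python
import numpy as np

# Synthetic "vibration" record: two sinusoids sampled at 1 kHz.
fs = 1000.0                        # sample rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)    # 1 s of signal
x = np.sin(2 * np.pi * 15 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

# The real FFT gives the amplitude spectrum; the dominant bins recover
# the 15 Hz and 40 Hz components hidden in the time-domain trace.
spectrum = np.abs(np.fft.rfft(x)) * 2.0 / len(x)
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
peak = freqs[np.argmax(spectrum)]
print(f"dominant frequency = {peak:.0f} Hz")
```

The same transform underpins frequency-domain vibration criteria, where allowable PPV depends on the dominant frequency of the waveform.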

Orica’s HelFire and wildFire models are examples of semi-analytical models: they are inherently analytical but also use numerical procedures (eg for differentiation). HelFire is used to predict damage radiation patterns in the rock mass due to one or more blast holes. The model assumes that damage is related to vibration (particle velocity) and uses standard wave superposition principles to accumulate the vibration from all relevant blast holes. wildFire computes the well-known ballistic trajectories of moving objects (rocks) to make probabilistic flyrock predictions, based on assumptions relating to initial velocity, initial angle, particle size, shape and rotation, and the influence of air resistance at specified altitudes.
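The ballistic part of such a calculation can be sketched generically. This is not wildFire itself; the quadratic drag coefficient, launch conditions and simple Euler time-stepping below are illustrative assumptions:

```python
import math

def flyrock_range(v0, angle_deg, k_drag=0.01, dt=0.001, g=9.81):
    """Horizontal distance travelled by a point mass launched from the
    ground, with quadratic drag acceleration k_drag * speed * velocity.
    Illustrative only; not Orica's wildFire model."""
    theta = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    while True:
        speed = math.hypot(vx, vy)
        ax = -k_drag * speed * vx          # drag opposes motion
        ay = -g - k_drag * speed * vy
        x += vx * dt; y += vy * dt         # simple Euler step
        vx += ax * dt; vy += ay * dt
        if y < 0.0:                        # back at launch elevation
            return x

# Air resistance shortens the range relative to the vacuum parabola
# v0^2 * sin(2*theta) / g, which is why it must be modelled.
r_drag = flyrock_range(80.0, 45.0)
r_vac = 80.0**2 * math.sin(math.radians(90.0)) / 9.81
print(f"with drag: {r_drag:.0f} m; vacuum parabola: {r_vac:.0f} m")
```

A probabilistic prediction would wrap this in a Monte Carlo loop, sampling launch velocity, angle and drag (ie fragment size and shape) from assumed distributions.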

Mechanistic models are really at the pinnacle of the modelling hierarchy in that they attempt to replicate the inherent physics involved, generally using a multitude of numerical procedures, and thereby attempt to make predictions by explicitly simulating changes in conditions through space and time. These are the most complex models, often seeking to solve for combinations of multiple phenomena (hence being known as multi-physics codes), and some can take days or weeks to run even on high-powered multi-core or GPU-based computers.

Orica’s best-known mechanistic model is the Mechanistic Blast Model. It uses data from a non-ideal detonation code, combined with an internal computational fluid dynamics gas flow code, to model the detonation process and the gas flow through the fracture domain.

A finite/discrete element mesh represents the rock mass and models wave propagation and the resulting stresses and strains. Fracturing of the rock mass is calculated based on a strain rate dependent tensile failure model. Multiple rock types, explosive types and geological discontinuities can be applied in the same model.

###### Good models = Good data

Common inputs required for blasting models are the elastic/plastic and structural properties for one or more rock domains and the detonation performance of the explosive.

The important rock properties include stiffness (modulus), strength (UCS), density, fracture toughness and Poisson’s ratio. These values are not always readily available or easily measured, and in fact most mines and quarries do not have them to hand; measuring the sonic velocity of the rock, however, can help in estimating them. Joint persistence, spacing and condition are also important.
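For example, the standard elastic-wave relations for an isotropic medium recover a dynamic Poisson's ratio and Young's modulus from measured P-wave and S-wave velocities plus density. The velocities and density below are illustrative values for a competent rock:

```python
def dynamic_moduli(vp, vs, rho):
    """Dynamic Poisson's ratio and Young's modulus (Pa) from P-wave and
    S-wave velocities (m/s) and density (kg/m^3), using the standard
    elastic wave relations for an isotropic medium."""
    nu = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))
    E = 2.0 * rho * vs**2 * (1.0 + nu)   # E = 2*G*(1+nu), with G = rho*vs^2
    return nu, E

# Illustrative sonic measurements: Vp = 5000 m/s, Vs = 2900 m/s,
# density = 2650 kg/m^3.
nu, E = dynamic_moduli(5000.0, 2900.0, 2650.0)
print(f"Poisson's ratio = {nu:.2f}, dynamic E = {E / 1e9:.0f} GPa")
```

Note these are dynamic (small-strain) values; static laboratory moduli are typically lower, so any conversion to model inputs should be done with care.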

Many of the more complex vibration models require one or more “seed waves” that represent the waveform of a single blast hole recorded at the point of interest, and vibration attenuation and scaling constants that can be measured only by taking many vibration readings at different scaled distances. This can be time-consuming and expensive.
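A sketch of how such site constants are typically derived: the square-root scaled-distance law PPV = K * SD^-b plots as a straight line in log-log space, so the constants fall out of a linear fit. The monitoring data below is synthetic:

```python
import numpy as np

# Synthetic monitoring data: PPV (mm/s) recorded at several distances (m)
# for a charge mass Q (kg) per delay. In practice these come from many
# field measurements at different scaled distances.
Q = 50.0
distance = np.array([30.0, 60.0, 120.0, 240.0, 480.0])
ppv = np.array([120.0, 42.0, 15.0, 5.2, 1.8])

# Square-root scaled distance SD = D / sqrt(Q); the site law
# PPV = K * SD**-b is linear in log-log space, so fit a degree-1
# polynomial to the logs.
sd = distance / np.sqrt(Q)
slope, intercept = np.polyfit(np.log10(sd), np.log10(ppv), 1)
K, b = 10.0**intercept, -slope
print(f"site constants: K = {K:.0f}, b = {b:.2f}")
```

The fitted K and b then feed the same scaled distance formula in reverse, to predict PPV at any distance and charge mass for that site.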

The physics of explosive detonation (detonics) is itself a vast and complex field of modelling. Explosive performance models can be described as ideal or non-ideal. Ideal detonation involves a theoretical thermodynamics-based calculation of the performance of the explosive at infinite diameter, under conditions of perfect confinement and complete reaction of all the ingredients. Of course, this never happens in practice.

For more realism, it is desirable to use non-ideal detonation models that include the effects of finite blast hole diameter, finite explosive reaction times and radial blast hole expansion, dependent on visco-elastic rock properties. Explosive formulations are characterised for non-ideal detonation modelling by measuring the unconfined velocity of detonation of the explosive in cardboard tubes at different diameters, and using this information to calculate the confined behaviour at different diameters.
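A common first-order treatment of such characterisation data is to fit measured VoD against inverse charge diameter and extrapolate to 1/d = 0, the infinite-diameter limit. The measurements below are invented for illustration; real characterisation uses shots of the actual formulation:

```python
import numpy as np

# Illustrative unconfined VoD measurements (m/s) in cardboard tubes of
# different diameters (mm). Smaller charges run slower because a larger
# fraction of the reaction zone energy is lost radially.
diameter = np.array([75.0, 100.0, 150.0, 200.0, 250.0])
vod = np.array([3900.0, 4300.0, 4700.0, 4900.0, 5020.0])

# First-order treatment: VoD is roughly linear in 1/d, so a straight-line
# fit extrapolated to 1/d = 0 estimates the infinite-diameter
# (near-ideal) detonation velocity.
slope, vod_ideal = np.polyfit(1.0 / diameter, vod, 1)
print(f"extrapolated infinite-diameter VoD = {vod_ideal:.0f} m/s")
```

The gap between measured VoD at the actual blast hole diameter and this extrapolated limit is one simple indicator of how non-ideally the product is detonating.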

###### Predicting damage patterns

Traditional blasting teachings are often based on overly simple “thought experiments” that have no real experimental basis.

One such teaching is the idea that the booster should be placed at grade level to improve fragmentation and damage in that zone. This idea is based on the assumption that the booster adds energy or VoD (velocity of detonation) to the explosive around it.

In fact, mechanistic computer models used to predict damage radiation patterns around blast holes show that this idea could be incorrect. For example, the HelFire model predicts damage around a blast hole in an infinite, homogeneous and isotropic rock mass, under the assumption that damage is directly related to the peak particle velocity generated by the stress waves from the detonating blast hole. The accumulated damage from these stress waves shows that the area immediately surrounding the booster is actually not extensively damaged, and indicates that placing the primer at grade level is good for fragmentation in the stemming horizon, but not for damage at grade (Figure 2).
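The superposition step described above can be sketched generically. This is not HelFire; the damped-sinusoid seed wave and the delay times are illustrative assumptions:

```python
import numpy as np

# A damped sinusoid standing in for a measured single-hole "seed wave"
# (purely illustrative).
fs = 2000.0                            # sample rate, Hz
t = np.arange(0, 0.5, 1.0 / fs)
seed = np.exp(-10.0 * t) * np.sin(2.0 * np.pi * 25.0 * t)

# Linear superposition: shift one copy of the seed wave per blast hole
# by its firing time, sum, then read off the peak particle velocity.
delays_ms = [0.0, 8.0, 16.0, 24.0]     # hypothetical firing times
total = np.zeros_like(seed)
for d in delays_ms:
    shift = int(round(d / 1000.0 * fs))
    total[shift:] += seed[:len(seed) - shift]

ppv_single = np.abs(seed).max()
ppv_total = np.abs(total).max()
print(f"single-hole PPV {ppv_single:.2f}, four-hole PPV {ppv_total:.2f}")
```

Changing the delays changes how the shifted waves interfere, which is why initiation timing matters so much to both vibration and the accumulated "damage" in such a model.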

While this is a relatively simple model, it does provide one quick way of looking at the question that is, nevertheless, backed up by more complex mechanistic modelling.

In reality, the geometry of free faces, rock structure and in-situ stresses play a huge role in determining the pattern of damage around a blast hole.

As shown in Figure 3, each of the two blast hole patterns has the same rock, structure, blast hole diameter, pattern, explosive type, powder factor and initiation sequence. The only difference is that the pattern on the right has a free face. Modelling using the Mechanistic Blast Model helps us visualise the effect of this on the overall pattern of damage through the rock mass. It is clear that the model predicts much more damage in the case with the free face. The extra damage is caused by stress waves reflecting from the free face and the subsequent strain induced in the rock mass through the pattern.

Turning to the vertical plane, the Mechanistic Blast Model can help us visualise the same effects due to the bench surface and crest, and combine them with the effects of primer position (as predicted by HelFire).

In Figure 4, the model shows that the scenario with the primer at the bottom of the blast hole produces the most consistent fragmentation through the muckpile. Although damage at grade around the booster is less, fragmentation in the stemming horizon is greater. The “top-primed” scenario does the opposite – it promotes damage at grade level at the expense of the stemming horizon, and therefore produces less consistent fragmentation throughout the whole muckpile.

Of course, apart from the fragmentation effects, there is a practical reason that the primer should be placed at the bottom of the hole when single-priming.

If the hole collapses during loading, a primer at the bottom of the hole can still be used to initiate the partial column that is already charged. If the primer were suspended in the top half of the hole, a collapse during loading would probably leave a partial column that cannot be fired.

Our ability to model explosive, blast hole and whole-of-blast performance is advancing with improvements in our understanding of explosive performance, rock breakage and computing power. Although colourful animated blast models are attractive to the eye and tempt the user to jump to conclusions when predicting blast outcomes, it is important to understand the principles behind and limitations of each model. There is no model that can accurately predict everything that happens in a blast.

As a general rule, the models that produce more accurate predictions demand more time-consuming and expensive input data collection and computing resources.

Many of the models available to us are more useful for comparative rather than absolute predictions.

*The authors work for Orica Ltd. Martin Adam is the Manager - Global Technical Excellence. Alan Minchinton is Senior Technology Manager - Blast Modelling. Peter Dare-Bryan is Senior Mining Engineer - Blast Modelling.*