2 - Prediction of Rupture in Complex Systems

2.1 - Nature of the Problem

The damage and fracture of materials are technologically of outstanding interest because of their economic and human cost. They cover a wide range of phenomena such as the cracking of glass, the aging of concrete, the failure of fiber networks in the formation of paper, and the breaking of a metal bar subject to an external load. Failures of composite systems are of utmost importance in the naval, aeronautics and space industries. By the term composite, we include both materials with contrasted microscopic structures and assemblages of macroscopic elements forming a super-structure. Chemical and nuclear plants suffer from cracking due to corrosion, either of chemical or radioactive origin, aided by thermal and/or mechanical stress. More exotic but no less interesting phenomena include the fracture of old paintings, the pattern formation of cracks in drying mud in deserts, and rupture propagation in earthquake faults.

Despite the large amount of experimental data and the considerable effort undertaken by material scientists, many questions about fracture and fatigue remain unanswered. There is no comprehensive understanding of rupture phenomena, only a partial classification in restricted and relatively simple situations. This lack of fundamental understanding is reflected in the absence of reliable prediction methods for rupture and fatigue based on a suitable monitoring of the stressed system.

Many material ruptures occur by a "one crack'' mechanism and much effort is being devoted to the understanding, detection and prevention of the nucleation of this crack. Systems that do not fail by the "one crack'' rupture mechanism include fiber composites, rocks, concrete under compression and materials with large distributed residual stresses. The common property shared by these systems is the existence of large inhomogeneities, which often limit the use of homogenization theories for the elastic and, more generally, the mechanical properties. In these systems, failure may occur as the culmination of a progressive damage involving complex interactions between multiple defects and growing micro-cracks. In addition, other relaxation, creep, ductile or plastic behaviors, possibly coupled with corrosion effects, come into play. Many important practical applications involve the coupling between mechanical and chemical effects, with the competition between several characteristic time scales. The application of stress may act as a catalyst of chemical reactions or, reciprocally, chemical reactions may lead to bond weakening and thus promote failure. A dramatic example is the aging of present-day aircraft due to repeated loading in a corrosive environment [Committee on Aging of U.S. Air Force Aircraft, 1997]. The interaction between multiple defects and the existence of several characteristic scales present a considerable challenge to the modelling and prediction of rupture.

2.2 - The Role of Heterogeneity

In the early sixties, the Japanese seismologist K. Mogi noticed that the fracture process strongly depends on the degree of heterogeneity of materials: the more heterogeneous, the more warnings one gets; the more perfect, the more treacherous is the rupture. The failure of perfect crystals thus appears to be unpredictable, while the fracture of dirty and deteriorated materials may be forecast. For once, complex systems could be simpler to apprehend! However, since its inception, this idea has not been much developed, because it is hard to quantify the degree of "useful'' heterogeneity, which probably depends on other factors such as the nature of the stress field, the presence of water, etc. In our work on the failure of mechanical systems, we have made this insight quantitative using concepts inspired from statistical physics, a domain where complexity has long been studied as resulting from collective behavior. The idea is that, upon loading a heterogeneous material, single isolated microcracks first appear and then, as the load or the time of loading increases, they both grow and multiply. As a consequence, microcracks begin to merge until a "critical density'' of cracks is reached, at which point the main fracture is formed. Various physical quantities (acoustic emission, elastic, transport and electric properties, etc.) are then expected to vary as the damage accumulates. However, the nature of this variation depends on the heterogeneity. The new result is that there is a threshold that can be calculated: if the disorder is too small, the precursory signals are essentially absent and prediction is impossible; if the heterogeneity is large, the rupture is more continuous and is announced by measurable precursors.

To obtain this insight, we used simple mechanical models of masses and springs with local stress transfer [Andersen et al., 1997]. This class of models does not claim precise realism but rather attempts to identify the different regimes of behavior. The scientific enterprise is paved with such reductionist approaches, which have worked surprisingly well. We were thus able to quantify how heterogeneity plays the role of a relevant field: systems with limited stress amplification exhibit a so-called tri-critical transition, from a Griffith-type abrupt (first-order) rupture regime to a progressive damage (critical) regime as the disorder increases. This effect was also demonstrated on a simple mean-field model of rupture, known as the democratic fiber bundle model. It is remarkable that the disorder is so relevant as to change the nature of rupture. In systems with long-range elasticity, the nature of the rupture process may not change qualitatively as above, but quantitatively: any amount of disorder may be relevant in this case and make the rupture similar to a critical point; however, we have recently shown that the disorder controls the width of the critical region [Sornette and Andersen, 1998]. The smaller the disorder, the smaller the critical region, which may become too small to play any role in practice. For realistic systems, long-range correlations transported by the stress field around defects and cracks make the problem more subtle. Time dependence is expected to be a crucial aspect of the building of correlations in these processes. As the damage increases, a new "phase'' appears, in which micro-cracks begin to merge, leading to screening and other cooperative effects. Finally, the main fracture is formed, leading to global failure. In simple intuitive terms, the failure of composite systems may often be viewed as the result of a correlated percolation process. The challenge is to describe the transition from the damage and corrosion processes at the microscopic level to the macroscopic rupture.
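To make the role of the disorder concrete, here is a minimal numerical sketch of the democratic (equal-load-sharing) fiber bundle model mentioned above. It is illustrative only: the uniform threshold distribution, whose width plays the role of the disorder, and all parameter values are assumptions chosen for simplicity, not the settings of the cited studies.

```python
import numpy as np

def precursor_fraction(disorder, n_fibers=100_000, seed=0):
    """Democratic (equal-load-sharing) fiber bundle under quasi-static loading.

    Fiber failure thresholds are drawn uniformly in [1 - disorder/2, 1 + disorder/2],
    so `disorder` plays the role of the heterogeneity of the material. Returns the
    fraction of fibers that break *before* the peak load, i.e. the amount of
    precursory damage announcing global failure.
    """
    rng = np.random.default_rng(seed)
    x = np.sort(rng.uniform(1.0 - disorder / 2, 1.0 + disorder / 2, n_fibers))
    # After the k weakest fibers have failed (k = 0, 1, ...), the surviving
    # n - k fibers share the load equally, so the external load needed to
    # break the next fiber (threshold x[k]) is F_k = (n - k) * x[k].
    load = (n_fibers - np.arange(n_fibers)) * x
    return int(np.argmax(load)) / n_fibers

for w in (0.2, 0.5, 0.8, 1.2, 1.8):
    print(f"disorder = {w:.1f} -> precursory damage fraction = {precursor_fraction(w):.3f}")
```

For this particular threshold distribution, the precursory fraction is essentially zero below a finite disorder (about 2/3) and grows continuously above it: small disorder gives an abrupt, unannounced rupture, large disorder a progressive and therefore forecastable one. Real materials and the models of [Andersen et al., 1997] are of course richer than this one-line caricature.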

2.3 - Scaling, Critical Point and Rupture Prediction

Motivated by the multi-scale nature of this second class of ruptures and by analogies with the percolation model, physicists working in statistical physics started to suggest in the mid-eighties that the rupture of sufficiently heterogeneous media would exhibit some universal properties, in a way perhaps similar to critical phase transitions. The idea was to build on the knowledge accumulated in statistical physics on the so-called N-body problem and on cooperative effects in order to describe the multiple interactions between defects. However, most of the models were extremely naive and essentially all of them were quasi-static, with rather unrealistic loading rules. Some suggestive scaling laws were found to describe size effects and damage properties, but, with some exceptions, their relevance to real materials was not convincingly demonstrated. The interest of physicists in the modelling of rupture in heterogeneous media seems to have decreased since then, except for a few groups.

In 1992, we proposed the first model of rupture with a realistic dynamical law for the evolution of damage, modelled as a space-dependent damage variable, with a realistic loading and with many growing, interacting micro-cracks [Sornette and Vanneste, 1992]. We found that the total rate of damage, as measured for instance by the elastic energy released per unit time, increases as a power law of the time-to-failure on the approach to global failure. In this model, rupture was indeed found to occur as the culmination of the progressive nucleation, growth and fusion of microcracks, leading to a fractal network, but the exponents were found to be non-universal and a function of the damage law. This model has since been found to correctly describe experiments on the electric breakdown of insulator-conducting composites [Lamaignere et al., 1996]. Another application is damage by electromigration of polycrystalline metal films [Bradley and Wu, 1993].
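To make this statement concrete (the exponent and the precise observable are model-dependent, so the following is only an illustrative form), write the energy release rate as dE/dt \sim A\,(t_c - t)^{-\alpha} with, for definiteness, 0 < \alpha < 1. Integrating up to a time t < t_c gives the cumulative released energy E(t) \simeq E(t_c) - {A \over 1-\alpha}\,(t_c - t)^{1-\alpha}: the cumulative energy remains finite at failure while its rate diverges at t_c, and it is this characteristic acceleration that one attempts to extrapolate, from measurements recorded before failure, in order to estimate t_c.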

In 1993, we extended these results by testing on real engineering composite structures the concept that failure in fiber composites may be described by a critical state, thus predicting that the rate of damage would exhibit a power law behavior [Anifrani et al., 1995]. This critical behavior may correspond to an acceleration of the rate of energy release or to a deceleration, depending on the nature and range of the stress transfer mechanism and on the loading procedure. We based our approach on a theory of many interacting elements called the renormalization group. The renormalization group can be thought of as a construction scheme or "bottom-up'' approach to the design of large scale structures. Since then, other numerical simulations of statistical rupture models and controlled experiments have confirmed that, near the global failure point, the cumulative elastic energy released during fracturing of heterogeneous solids follows a power law behavior.

To get a qualitative understanding of the renormalization group theory, let us consider the usual way in which composite or mechanical systems are designed for engineering and industrial applications. This may be called the component approach, or bottom-up design. First, it is necessary to thoroughly understand the properties and limitations of the materials to be used, and experimental tests are begun. With this knowledge, larger component parts are designed and tested individually. As deficiencies and design errors are noted, they are corrected and verified with further testing. Since one tests only parts at a time, these tests and modifications are not overly expensive. Finally, one works up to the final design of the entire system, to the necessary specifications. There is a good chance, by this time, that the global structure will succeed, or that any failures are easily isolated and analyzed, because the failure modes, the limitations of the materials, etc., are well understood. There is a very good chance that the modifications needed to get around the final difficulties are not very hard to make, for most of the serious problems have already been discovered and dealt with in the earlier, less expensive, stages of the process. The reliability and failure properties of such a system are the result of a bottom-up composition of the reliability and failure properties of the constitutive elements; in other words, they call for a hierarchical modelling. The renormalization group offers a general framework to formalize and calculate how a property or a failure at a given scale may or may not cascade to higher levels.

Based on an extension of the usual solutions of the renormalization group and on explicit numerical and theoretical calculations, we were thus led to propose that the power law behavior of the time-to-failure analysis should be corrected for the presence of log-periodic modulations [Anifrani et al., 1995]. Since then, this method has been tested extensively during our continuing collaboration with the French aerospace company Aerospatiale on pressure tanks made of Kevlar-matrix and carbon-matrix composites carried on board the European Ariane 4 and 5 rockets. In a nutshell, the method consists, in this application, of recording acoustic emissions under a constant stress rate; the acoustic emission energy as a function of stress is then fitted by the above log-periodic critical theory. One of the parameters is the time of failure, and the fit thus provides a "prediction'' when the sample is not brought to failure in the first test. Improvements of the theory and of the fitting formula were applied to about 50 pressure tanks. The results indicate that a precision of a few percent in the determination of the stress at rupture is obtained using acoustic emission recorded up to 20% below the stress at rupture. These successes have warranted an international patent and the selection of this non-destructive evaluation technique as a routine qualifying procedure in the industrial fabrication process.
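As a purely illustrative sketch of this type of time-to-failure fit (it is not the proprietary procedure used for the Ariane tanks), the following Python fragment fits synthetic acoustic-emission data with a power law of the distance to an unknown failure stress, decorated by the log-periodic modulation introduced in the next section; the function names, parameter values and synthetic data are all assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_periodic_energy(s, s_c, alpha, A, C, lam, phi):
    """Acoustic-emission energy versus applied stress s, modelled as a power law
    of the distance to the (unknown) failure stress s_c, decorated by a
    log-periodic oscillation with scaling ratio lam (> 1) and phase phi."""
    lam = np.clip(lam, 1.0 + 1e-9, None)   # guard against unphysical ratios during the fit
    x = np.clip(s_c - s, 1e-12, None)      # distance to failure, kept positive
    return A * x ** (-alpha) * (1.0 + C * np.cos(2.0 * np.pi * np.log(x) / np.log(lam) + phi))

# Synthetic data standing in for a tank test stopped well below rupture.
rng = np.random.default_rng(1)
true = dict(s_c=1.0, alpha=0.4, A=1.0, C=0.1, lam=2.0, phi=0.3)
stress = np.linspace(0.2, 0.85, 200)
energy = log_periodic_energy(stress, **true) * (1.0 + 0.02 * rng.standard_normal(stress.size))

# The failure stress s_c is one of the fitted parameters: the fitted value is the "prediction".
p0 = (0.95, 0.5, 1.0, 0.05, 2.0, 0.0)
popt, _ = curve_fit(log_periodic_energy, stress, energy, p0=p0, maxfev=20000)
print(f"predicted failure stress s_c = {popt[0]:.3f} (true value {true['s_c']:.2f})")
```

The essential design choice is that the failure point enters the fitting formula as a free parameter, so that data recorded well below rupture can yield an estimate of the rupture stress (or, under a constant stress rate, of the time of failure).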

This constitutes a remarkable case in which rather abstract theoretical concepts, borrowed from the esoteric field of statistical and nonlinear physics, have been applied directly to a concrete industrial problem. The example is remarkable for another reason, which we now relate.

2.4 - Discrete Scale Invariance, Complex Exponents and Log-Periodicity

During our research on the acoustic emissions of the industrial pressure tanks of the European Ariane rocket, we discovered the existence of log-periodic scaling in non-hierarchical systems. To fix ideas, consider the acoustic energy E \sim (t_c - t)^{-\alpha}, following a power law as a function of the time to failure. Suppose that there is, in addition, a log-periodic modulation of the signal, E \sim (t_c - t)^{-\alpha}\,[1 + C\cos(2\pi {\log (t_c - t) \over \log \lambda})]. The local maxima of the signal occur at times t_n such that the argument of the cosine is a multiple of 2\pi, leading to the geometrical time series t_c - t_n \sim \lambda^{-n}, where n is an integer. The oscillations are thus modulated in frequency, with a geometric increase of the frequency on the approach to the critical point t_c. This apparently esoteric property turns out to be surprisingly general, both experimentally and theoretically, and we are probably only at the beginning of our understanding of it. From a formal point of view, log-periodicity can be shown to be nothing but the concrete expression of the fact that exponents, or more generally dimensions, can be "complex'', i.e. possess an imaginary part (the imaginary unit being the number whose square equals -1).
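The link between a complex exponent and the log-periodic modulation written above is a one-line identity (recalled here with \log denoting the natural logarithm): \mathrm{Re}\,[(t_c - t)^{-(\alpha + i\omega)}] = (t_c - t)^{-\alpha}\cos[\omega \log(t_c - t)], since x^{-i\omega} = e^{-i\omega\log x}. Identifying the argument of the cosine with 2\pi\log(t_c - t)/\log\lambda gives \omega = 2\pi/\log\lambda, i.e. a preferred scaling ratio \lambda = e^{2\pi/\omega}; the condition that this argument be a multiple of 2\pi then reproduces the geometric series t_c - t_n \sim \lambda^{-n} quoted above.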

During the third century BC, Euclid and his students introduced the concept of the dimension of space, which can take positive integer values equal to the number of independent directions. For instance, a line has dimension one, and we live in a space of dimension three and a spacetime of dimension four. During the second half of the nineteenth century and the twentieth century, the notion of dimension was generalized to fractional values. The word "fractal'' was coined by Mandelbrot to describe sets consisting of parts similar to the whole, which can be described by a fractional dimension. This generalization of the notion of a dimension from integers to real numbers reflects the conceptual jump from translational invariance to continuous scale invariance. Science progresses by analogies and generalizations, which allow an increasingly broad phenomenology to be embodied in simpler concepts. Here, there is a further generalization of the notion of dimension, according to which dimensions or exponents are taken from the set of complex numbers. This generalization captures the interesting and rich phenomenology of systems exhibiting discrete scale invariance, a weaker form of the scale invariance symmetry, associated with log-periodic corrections to scaling. Under discrete scale invariance, the system or the observable obeys scale invariance only for specific choices of magnifications, which in general form an infinite but countable set of values. This property can also be seen to encode the concept of lacunarity of a fractal structure.
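Schematically, and only as an illustration of this statement, discrete scale invariance may be expressed by requiring that an observable F(x) reproduces itself only under magnification by a preferred factor \lambda: F(\lambda x) = \mu\,F(x) for one specific \lambda rather than for all \lambda. The general solution is F(x) = x^{\alpha}\,P({\log x \over \log\lambda}), where P is a periodic function of period one and \alpha = \log\mu/\log\lambda. Expanding P in a Fourier series generates terms of the form x^{\alpha + 2\pi i n/\log\lambda}, i.e. a family of complex exponents \alpha_n = \alpha + i\,{2\pi n \over \log\lambda}, whose first harmonic (n = 1) is precisely the log-periodic modulation discussed above.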

Encouraged by our observation of log-periodicity in rupture phenomena, we started to investigate whether similar signatures could be observed in other systems. Looking more closely, we were led to find them in many systems in which they had previously been unsuspected. Such structures have been known since the seventies as possible formal solutions of renormalization group equations, but they were rejected as physically irrelevant. They were studied in the eighties in the rather academic context of special, artificially hierarchical geometrical systems. Our work led us to realize that discrete scale invariance and its associated complex exponents and log-periodicity may appear "spontaneously'' in natural systems, i.e. without the need for a pre-existing hierarchy. Examples that we have documented [Sornette, 1998] are diffusion-limited-aggregation clusters, rupture in heterogeneous systems, earthquakes and animals (a generalization of percolation), among many other systems. Complex scaling could also be relevant to turbulence, to the physics of disordered systems, as well as to the description of out-of-equilibrium dynamical systems. Some of the physical mechanisms at the origin of these structures are now better understood. General considerations using the framework of field theories, the framework used to describe fundamental particle physics and condensed matter systems, show that they should constitute the rule rather than the exception, similarly to the realization that chaotic (non-integrable) dynamical systems are more general than regular (integrable) ones. In addition to the fascinating physical relevance of this abstract notion of complex dimensions, the even more important aspect, in our view, is that discrete scale invariance and its signatures may provide new insights into the underlying mechanisms of scale invariance and be very useful for prediction purposes.
