3 - Towards a Prediction of Earthquakes?

An important effort is carried out world-wide in the hope that, maybe sometime in the future, the grail of useful earthquake prediction will be attained. Among others, the research comprises continuous observations of crustal movement and geodetic surveys, seismic observations, geoelectric and geomagnetic observations, and geochemical and groundwater measurements. The seismological community has been criticized in the past for promising results from various prediction techniques (e.g. anomalous seismic wave propagation, dilatancy diffusion, Mogi donuts, pattern recognition algorithms) that have not delivered at the expected level. The need for a reassessment of the physical processes has been recognized, and more fundamental studies are pursued on crustal structures in seismogenic zones, historical earthquakes, active faults, laboratory fracture experiments, earthquake source processes, etc.

There is even now an opinion gaining momentum that earthquakes could be inherently unpredictable [Geller et al., 1997]. The argument is that past failures and recent theories suggest fundamental obstacles to prediction. It is then proposed that the emphasis be placed on basic research in earthquake science, real-time seismic warning systems, and long-term probabilistic earthquake hazard studies. It is true that useful predictions are not available at present and seem hard to obtain in the near future, but would it not be a little presumptuous to claim that prediction is impossible? Many past examples in the development of science have taught us that unexpected discoveries can completely modify what was previously considered possible. In the context of earthquakes, the problem is made more complex by the societal implications of prediction, with, in particular, the question of how to direct the limited available resources in an optimal way.

We here focus on the scientific problem and describe a new direction that suggests reasons for optimism. Recall that an earthquake is triggered when a mechanical instability occurs and a fracture (the sudden slip of a fault) appears in a part of the earth's crust. The crust is in general complex (in composition, strength, and faulting), and groundwater may play an important role. How then can one expect to unravel this complexity and achieve a useful degree of prediction? We need to understand the nature of the organization of the crust, then the characteristic properties of large earthquakes, and finally the nature of the signatures that could be used for prediction.

3.1 - The Large Scale Self-Organization of the Crust

Seismicity is characterized by an extraordinarily rich phenomenology and variability, which makes the development of a coherent explanatory and predictive framework very difficult. In the late eighties, we and other groups independently proposed that the concept of self-organized criticality (SOC) could provide a plausible framework. Apart from the rationalization that it provides for the Gutenberg-Richter law for earthquakes, the power law distribution of fault lengths, and the fractal geometry of sets of earthquake epicenters and fault patterns, it had not been exploited until recently to advance our understanding of the organization of the crust and of the very rich and subtle properties found in tectonics and seismology.
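The Gutenberg-Richter law states that the number of earthquakes with magnitude above M decays as 10^(-bM), with b typically close to 1. As a concrete illustration, here is a minimal Python sketch estimating b with Aki's classical maximum-likelihood formula; the synthetic catalog and the completeness cut at magnitude 3 are assumptions made for the example only.

```python
import numpy as np

def b_value_aki(magnitudes, m_c):
    # Maximum-likelihood b-value estimate (Aki, 1965) for a catalog complete
    # above magnitude m_c; the small correction for magnitude binning is omitted.
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# Synthetic catalog drawn from the Gutenberg-Richter law with b = 1:
# magnitudes above m_c are exponentially distributed with rate b*ln(10).
rng = np.random.default_rng(0)
mags = 3.0 + rng.exponential(scale=1.0 / np.log(10.0), size=20000)
print(b_value_aki(mags, m_c=3.0))   # prints a value close to 1.0
```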

Roughly speaking, SOC refers to the spontaneous organization of a system driven from the outside into a statistically stationary dynamical state characterized by self-similar distributions of event sizes and fractal geometrical properties. SOC refers to the class of phenomena occurring in slowly driven out-of-equilibrium systems made of many interacting components, which possess the following fundamental properties:

  • a highly non-linear behavior, essentially a threshold response,
  • a very slow driving rate,
  • a globally stationary regime, characterized by stationary statistical properties, and
  • power law distributions of event sizes and fractal geometrical properties.

The crust obeys these four conditions:

  • The threshold response is associated with the stick-slip instability of solid friction or with a rupture threshold thought to characterize the behavior of a fault under increasing applied stress.
  • The slow driving rate is that of the slow tectonic deformations thought to be exerted at the borders of a given tectonic plate by the neighboring plates and at its base by the underlying lower crust and mantle.
  • The stationarity condition ensures that the system is not in a transient phase, and distinguishes the long-term organization of faulting in the crust from, for instance, the irreversible rupture of a sample in the laboratory.
  • The power laws and fractal properties reflect the notion of scale invariance, namely measurements at one scale are related to measurements at another scale by a normalization involving a power of the ratio of the two scales. These properties are important and interesting because they characterize systems with many relevant scales and long-range interactions as probably exist in the crust.
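To make these abstract conditions concrete, the following toy sketch implements the classic Bak-Tang-Wiesenfeld sandpile, the simplest model exhibiting all four properties: slow driving (one grain at a time), threshold response (toppling), a statistically stationary state, and power-law avalanche sizes. It is the standard textbook illustration of SOC, not a model of the crust; grid size and grain count are arbitrary choices.

```python
import numpy as np

def sandpile(n=40, grains=50000, seed=0):
    # Bak-Tang-Wiesenfeld sandpile: add one grain at a time (slow driving);
    # any site holding >= 4 grains topples (threshold response), sending one
    # grain to each neighbor; grains crossing the open boundary are lost.
    rng = np.random.default_rng(seed)
    z = np.zeros((n, n), dtype=int)
    sizes = []
    for _ in range(grains):
        z[rng.integers(n), rng.integers(n)] += 1
        size = 0
        while True:
            unstable = np.argwhere(z >= 4)
            if unstable.size == 0:
                break
            for i, j in unstable:
                z[i, j] -= 4
                size += 1
                for a, b in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
                    if 0 <= a < n and 0 <= b < n:
                        z[a, b] += 1
        if size:
            sizes.append(size)
    return sizes  # a log-log histogram of sizes is close to a straight line

sizes = sandpile()
```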

We have recently tested the usefulness of the SOC hypothesis by measuring its predictive and explanatory power outside the range of observations that helped define it. We thus explored how the SOC concept can help understand the observed clustering of earthquakes on relatively narrow fault domains and the phenomenon of seismicity induced by human activity, such as water impoundment in artificial lakes and gas and ore extraction. We found that both pore pressure changes and mass transfers leading to incremental deviatoric stresses of less than 10 atmospheres (about 1 MPa) are sufficient to trigger seismic instabilities in the uppermost crust, with magnitudes ranging up to 7, in otherwise historically aseismic areas. We argued that these observations are in accord with the SOC hypothesis, as they show that a significant fraction of the crust is not far from instability and can thus be made unstable by minute perturbations. The properties of induced seismicity and their rationalization in terms of the SOC concept provide further evidence that potential seismic hazards extend over a much larger area than that where earthquakes are frequent.
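A back-of-the-envelope estimate shows why water impoundment reaches this stress scale; the 100 m depth used below is an assumed, illustrative figure for a large artificial lake.

```python
# Order-of-magnitude check of the stress scale quoted above: the hydrostatic
# pressure increment at the base of a water column of height h is rho * g * h.
rho = 1000.0      # density of water, kg/m^3
g = 9.8           # gravitational acceleration, m/s^2
h = 100.0         # water column height, m (assumed, illustrative)
dp = rho * g * h  # ~9.8e5 Pa, i.e. about 1 MPa
print(dp / 101325.0)  # ~9.7 -> just under 10 atmospheres
```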

3.2 - Large Earthquakes

There is a series of surprising and somewhat controversial studies showing that many large earthquakes have been preceded by an increase in the number of intermediate-sized events. The relation between these intermediate-sized events and the subsequent main event has only recently been recognized, because the precursory events occur over such a large area that they do not fit prior definitions of foreshocks [Jones and Molnar, 1979]. In particular, the 11 earthquakes in California with magnitudes greater than 6.8 in the last century are associated with an increase of precursory intermediate-magnitude earthquakes measured in a running time window of five years [Knopoff et al., 1996]. What is strange about this result is that the precursory pattern occurred at distances of the order of 300 to 500 km from the future epicenter, i.e. at distances up to ten times larger than the size of the future earthquake rupture. Furthermore, the increased intermediate-magnitude activity switched off rapidly after a big earthquake in about half of the cases. This implies that stress changes due to an earthquake with a rupture dimension as small as 35 km can influence the stress distribution at distances more than ten times its size. This result defies the usual models.

This observation is not isolated. There is mounting evidence that the rate of occurrence of intermediate earthquakes increases in the tens of years preceding a major event. Sykes and Jaume [1990] present evidence that the occurrence of events in the magnitude range 5.0-5.9 accelerated in the tens of years preceding the large San Francisco Bay area earthquakes of 1989, 1906, and 1868 and the Desert Hot Springs earthquake of 1948. Lindh [1990] points out references to similar increases in intermediate seismicity before the large 1857 earthquake in southern California and before the 1707 Kwanto and the 1923 Tokyo earthquakes in Japan. More recently, Jones [1992, 1994] has documented a similar increase in intermediate activity over the past 8 years in southern California. This increase in activity is limited to events in excess of M = 5.0; no increase is apparent when all events with M > 4.0 are considered. Ellsworth et al. [1981] also reported that the increase in activity was limited to events larger than M = 5 prior to the 1989 Loma Prieta earthquake in the San Francisco Bay area. Bufe and Varnes [1993] analyze the increase in activity which preceded the Loma Prieta earthquake, while Bufe et al. [1994] document a current increase in seismicity in several segments of the Aleutian arc.

Recently, we have investigated these observations more quantitatively and asked what law, if any, controls the increase of the precursory activity [Bowman et al., 1998]. Inspired by our previous consideration of the critical nature of rupture, and extending it to seismicity, we devised a systematic procedure to test for the existence of critical behavior and to identify the region approaching criticality, based on a comparison of the observed cumulative energy (Benioff strain) release with the accelerating seismicity predicted by theory. This method has been used to find the critical region before all earthquakes with M > 6.5 along the San Andreas system in California since 1950. The statistical significance of our results was assessed by applying the same procedure to a large number of randomly generated synthetic catalogs. The null hypothesis, that the observed acceleration in all these earthquakes could result from spurious patterns generated by our procedure in purely random catalogs, was rejected with 99.5% confidence [Bowman et al., 1998]. An empirical relation between the logarithm of the radius R of the critical region and the magnitude M of the final event was found, such that log R is proportional to 0.5 M, suggesting that the largest probable event in a given region scales with the size of the regional fault network.
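To give a flavor of the fitting procedure, here is a minimal sketch of the time-to-failure law used in Bowman et al. [1998], in which the cumulative Benioff strain follows eps(t) = A + B(tc - t)^m with B < 0 and 0 < m < 1. The data below are a synthetic stand-in with assumed parameters (tc = 2000.0, m = 0.3); in a real application, t would be event times from a regional catalog and the ordinate the running sum of the square roots of the event energies.

```python
import numpy as np
from scipy.optimize import curve_fit

def time_to_failure(t, A, B, tc, m):
    # Accelerating-seismicity law: with B < 0 and 0 < m < 1, the cumulative
    # Benioff strain eps(t) accelerates toward A as t approaches tc.
    return A + B * (tc - t) ** m

t = np.linspace(1950.0, 1999.5, 120)
eps = time_to_failure(t, 10.0, -2.0, 2000.0, 0.3)
eps += np.random.default_rng(1).normal(0.0, 0.05, t.size)  # observational noise

# Bounds keep tc beyond the last observation and m in (0, 1) during the fit.
popt, _ = curve_fit(
    time_to_failure, t, eps, p0=(eps[-1], -1.0, t[-1] + 1.0, 0.5),
    bounds=([0.0, -np.inf, t[-1] + 1e-3, 0.01],
            [np.inf, 0.0, t[-1] + 50.0, 1.0]),
)
print("fitted tc = %.2f, m = %.2f" % (popt[2], popt[3]))
```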

To rationalize these observations, it is natural to invoke again the importance of heterogeneity. Recall that Mogi showed experimentally on a variety of materials that the larger the disorder, the stronger and more useful are the precursors to rupture. For a long time, the Japanese research effort in earthquake prediction and risk assessment was based on this very idea [Mogi, 1974]. Can our previous results obtained for engineering rupture phenomena be applied to earthquakes?

While the rupture of a laboratory sample is the well-defined conclusion of the loading history, the same cannot be said for earthquakes. The problem is that it is not clear how to reconcile this idea with both the nature of dynamical rupture propagation and the large-scale spatial and temporal organization of the crust discussed previously, which rather suggest a succession of complex coupled irregular cycles, apparently quite different from the critical point picture in which a large earthquake is the culmination of a preparatory stage. We have recently found a way out of this conundrum by studying a simple numerical model of earthquakes on a hierarchical fault structure driven at a slow average uniform rate, taking into account the heterogeneity of the crust and the existence of relaxation processes [Huang et al., 1998]. We observe that, while the system self-organizes at large time scales according to the expected statistical characteristics, such as the Gutenberg-Richter law for the earthquake magnitude-frequency distribution, most of the large earthquakes have precursors occurring over time scales of tens of years and over distances of hundreds of kilometers. This type of behavior is documented in earthquake catalogs, as we have shown, but its interpretation leads to considerable difficulty, as it is hard to understand how 1-10 km ruptures can be related over distances of 100 km. Within the critical point of view, these intermediate earthquakes are both "witnesses" and "actors" of the build-up of correlations. These precursors produce an energy release which, when measured as a time-to-failure process, is quite consistent with a power law behavior. In addition, the statistical average (over many large earthquakes) of the correlation length, measured as the maximum size of the precursors, also increases as a power law of the time to the large earthquake. These two properties qualify the behavior as critical. From the point of view of self-organized criticality, this is surprising news: the individual large earthquakes do not lose their "identity", because they belong to the large-scale and long-time collective behavior of the tectonic plate.
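The second measurement mentioned above, the growth of the correlation length, amounts to a superposed-epoch analysis: pool, over many mainshocks, the maximum precursor size observed at each time-to-failure, and read off the growth exponent as a log-log slope. The sketch below does this on synthetic stand-in data with an assumed exponent of -0.3 (precursor size grows as the mainshock approaches, i.e. as the time-to-failure decreases); the exponent value and the scatter model are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
ttf = rng.uniform(0.1, 30.0, 2000)   # times-to-failure, years before mainshocks
# Maximum precursor size xi growing as a power law of ttf, with lognormal scatter:
xi = 5.0 * ttf ** -0.3 * rng.lognormal(0.0, 0.3, ttf.size)

# The growth exponent is the slope of log(xi) against log(ttf).
slope, intercept = np.polyfit(np.log(ttf), np.log(xi), 1)
print("estimated growth exponent: %.2f" % slope)   # close to -0.3
```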

3.3 - Log-Periodicity?

We must add a third and last touch to the picture, which uses the concept of discrete scale invariance, its associated complex exponents, and log-periodicity, as discussed above. In the presence of frozen disorder together with stress amplification effects, we showed that the critical behavior of rupture is described by complex exponents; in other words, the measurable physical quantities can exhibit a power law behavior (real part of the exponents) with superimposed log-periodic oscillations (due to the imaginary part of the exponents). Physically, this stems from a spontaneous organization on a fractal fault system with "discrete scale invariance". The practical upshot is that the log-periodic undulations may help in "synchronizing" a better fit to the data. In the numerical model above, most of the large earthquakes, whose period is of the order of a century, can be predicted in this way 4 years in advance with a precision better than a year. For the real earth, we do not know yet, as several difficulties hinder a practical implementation, such as the definition of the relevant space-time domain. A few encouraging results have been obtained, but much remains to be done to test these ideas systematically, especially using the methodology presented above to detect the regional domain of critical maturation before a large earthquake [Sornette and Sammis, 1995].
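For concreteness, the functional form involved is the time-to-failure power law of the previous section decorated with a log-periodic correction. The sketch below evaluates it with illustrative parameter values, all assumed for the example and not fitted to any real catalog.

```python
import numpy as np

def log_periodic(t, A, B, tc, m, C, lam, psi):
    # Power law decorated with log-periodic oscillations: the cosine is
    # periodic in log(tc - t), so successive oscillations shrink
    # geometrically by the preferred scaling ratio lam -- the signature
    # of discrete scale invariance.
    dt = tc - t
    return A + B * dt ** m * (1.0 + C * np.cos(
        2.0 * np.pi * np.log(dt) / np.log(lam) + psi))

t = np.linspace(1950.0, 1999.5, 500)
eps = log_periodic(t, A=10.0, B=-2.0, tc=2000.0, m=0.3, C=0.2, lam=2.5, psi=0.0)
# Plotted against t, eps accelerates toward tc while oscillating with an
# apparent period that shrinks as tc is approached; matching these
# oscillations is what can "synchronize" the fit and sharpen the estimate of tc.
```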

While these results are encouraging and suggestive, extreme caution should be exercised before even proposing that this method is useful for prediction purposes; but the theory is beautiful in its self-consistency and, even if probably inaccurate in its details, it may provide a useful guideline for the future.
