Projects for Master theses

GUIDELINES FOR MASTER PROJECTS & THESES AT THE CHAIR OF ENTREPRENEURIAL RISKS (PDF, 153 KB)

Prof. Dr. Didier Sornette   Version: August 2014

INCREDIBLY USEFUL ADVICE FOR STUDENTS WRITING A THESIS (PDF, 72 KB)

I, Professor D. Sornette, and my chair will no longer take on master students for master theses, as I am retiring from ETH Zurich on July 31, 2022, due to the Swiss federal age limit of 65. I do not have specific recommendations for professors you could contact as a replacement. January 2022
 

ETH Zürich is using SiROP to publish and search scientific projects. With your university login you get free access to internships, scientific projects, Bachelor’s and Master’s theses. For more information please visit http://www.siropg.org.

Projects for Master theses cont.

  • Project #1 rePick
    One of the fundamental inputs for inferring the properties of earthquakes (location, magnitude, focal mechanisms and so on) is the exact onset time of particular seismic waves. This work is often done by duty seismologists, but it can be a daunting task. Thus, there is an increasing trend to automate this task by training AI systems on already available arrival-time datasets of different seismic phases. However, a case can be made for a platform that enables users to pick different phases of seismic waves (primarily the first P and S arrivals) in publicly available continuous waveform data. Seismic phase picking, although currently done almost exclusively by specialists (and increasingly by neural-network-enabled automatic pickers), is a relatively simple task that any human could be trained in a short time to perform efficiently. Human-enabled picking of seismic phases trumps any automatic picker for two important reasons:
    1) Humans are much better in recognising patterns.
    2) Most of the arrival times available for existing data have no uncertainty estimates associated with them, making the AI systems trained on them only as good or bad as the few individuals who did the picking in the first place.
    The rePick platform would be designed to leverage the pattern-recognition ability of humans to aid and support the training of AI-enabled pickers, not only to mark arrival times for newly recorded datasets but also to aid further human-enabled picking of seismic phases. For a specific set of users who opt in, this could also be used as a CAPTCHA system that allows web hosts to differentiate between human and automated website access.
    Alternatively, a reward system could also be envisioned, based on the accuracy of individuals relative to the emerging crowd consensus. To design systems for automatically differentiating between good and bad picks, we can rely on strategies such as discriminating users' performance based on the distance to “ground truth” picks on realistic synthetic seismograms as well as on publicly available expert-picked datasets.
    The goal of this project would be to design such a platform and to test strategies for discriminating good and bad picks.
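
    As an illustration of how pick quality could be scored against “ground truth” picks on synthetic seismograms, here is a minimal sketch; the tolerance value, the scoring rule and the data layout are assumptions for illustration, not part of the project specification.

        import numpy as np

        def score_picks(user_picks, true_picks, tolerance=0.5):
            # Score one user's picks (in seconds) against ground-truth onsets.
            # user_picks, true_picks: onset times for the same set of synthetic
            # seismograms, one pick per trace.
            # tolerance: residual (seconds) beyond which a pick counts as "bad"
            # (assumed value, for illustration only).
            residuals = np.abs(np.asarray(user_picks) - np.asarray(true_picks))
            good = residuals <= tolerance
            return {"mean_abs_error": float(residuals.mean()),
                    "fraction_good": float(good.mean())}

        # Hypothetical example: 5 synthetic traces with known P onsets.
        true_onsets = np.array([12.30, 8.75, 20.10, 5.40, 15.00])
        user_onsets = np.array([12.35, 8.60, 20.90, 5.42, 15.10])
        print(score_picks(user_onsets, true_onsets))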


  • Project #2 Generalised model ensembling for improved earthquake forecasting

Our team at ETH Zurich is one of the world leaders in statistical earthquake forecasting. We want to build on the suite of earthquake forecasting models that we have built in the last few years and construct an earthquake forecasting model that combines individual forecasts from different models.

Since a forecast by a given model M_i is the specification of a probability density function (pdf) in space, time and magnitude, the simplest proposed combined forecast is in the form of a (local, pixel-dependent) linear combination of the m pdf's. In the language of statistical physics, this corresponds to a so-called "annealed" weighted averaging over the models.

In the master thesis, you will explore the relative performance of this benchmark against other methods, such as using the weighted sum of the logarithms of the pdf's, which corresponds to a "quenched" weighted averaging over the models (this uses the language of the statistical physics of disordered media). In the quenched version, one really considers that each model gives an independent interpretation of the data. In the annealed version, one assumes that each model is a realisation of an unknown true model.
Both interpretations have value, with their own pros and cons. You will explore other generalised solutions for combining the m pdf's.
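
A minimal numerical sketch of the two combinations discussed above is given below: the annealed combination is a weighted arithmetic mean of the per-pixel forecasts, while the quenched combination is a weighted sum of their logarithms, i.e. a weighted geometric mean. The toy rates and weights are assumptions for illustration only.

    import numpy as np

    # Toy forecasts: m = 3 models, each giving an expected earthquake rate
    # per spatial pixel for one time-magnitude bin (assumed numbers).
    forecasts = np.array([
        [0.10, 0.02, 0.30, 0.05],   # model 1
        [0.08, 0.05, 0.20, 0.10],   # model 2
        [0.20, 0.01, 0.40, 0.02],   # model 3
    ])
    weights = np.array([0.5, 0.3, 0.2])   # assumed model weights, summing to 1

    # "Annealed" combination: weighted arithmetic mean of the rates.
    annealed = weights @ forecasts

    # "Quenched" combination: weighted sum of log-rates, i.e. a weighted
    # geometric mean of the individual forecasts.
    quenched = np.exp(weights @ np.log(forecasts))

    print("annealed:", annealed)
    print("quenched:", quenched)
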
In particular, you will investigate a general and elegant answer to this question, which is to use copulas. A copula will allow you to construct the m-dimensional multivariate distribution of earthquake numbers in each pixel and time period across the m models. This can be seen as the general tool to combine m pdf's. From the copula, you can for instance derive things like: what is the probability that all m models, or a subset of n < m models, predict that more than p earthquakes will occur? In other words, it can provide very interesting information on when the models agree on large bursts of events. Constructing the copula function provides the diagnostic of how the models converge or differ in their predictions.

This thesis on model ensembling will also use the concepts of bagging, boosting and stacking in the language of machine learning. Ensembling models is justified when the different models M_i are designed to capture complementary patterns and structures in the data.
In other words, this also calls for investigating the design of the models themselves, which should be "optimised" so that they are optimally complementary and thus perform best when combined. An individual model can be relatively bad when used on its own. It is like teamwork in soccer or other sports: you combine specialised experts and the team is great.
If all members are very good at one and the same thing, the team will be of inferior quality.

Contacts: Prof. D. Sornette and Dr. Shyam Nandan


        

  • Project #3 The hidden perils of (over-)control in complex systems

    The goals of this master thesis are manifold. The first goal is to dissect the fundamental origin of the results of Eurich and Pawelzik (2005) and Patzelt et al. (2007; 2011; 2013), who show that optimal control yields power-law distributions of the remaining fluctuations, with for instance an empirical demonstration in the stick-balancing problem. The guiding idea is to study the nature of the control methodology, and the dependence of the results on the length of the time series on which the control is learned. In particular, the emerging power laws can be seen to be structurally similar to the effect of the denominator of the random variable at the origin of the introduction of the Student t distribution. We want to test the conjecture that the effects found by Eurich and Pawelzik (2005) and Patzelt et al. (2007; 2011; 2013) may in fact disappear in certain very relevant conditions that will be explained to the student.
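
    To make the analogy with the Student t construction concrete, the sketch below divides Gaussian fluctuations by a standard deviation estimated from only n past observations; the normalised residuals then follow a Student t distribution with n-1 degrees of freedom and thus develop power-law tails that fade as n grows. This is an illustration only, not part of the project statement.

        import numpy as np

        rng = np.random.default_rng(0)

        def normalised_residuals(n, trials=100_000):
            # Divide a fresh Gaussian by a std estimated from n past samples.
            # The ratio is Student-t distributed with n-1 degrees of freedom
            # (up to a scale factor), hence power-law tails for small n.
            past = rng.standard_normal((trials, n))
            sigma_hat = past.std(axis=1, ddof=1)
            return rng.standard_normal(trials) / sigma_hat

        for n in (3, 5, 30):
            r = normalised_residuals(n)
            # Fraction of "extreme" residuals beyond 5 (assumed threshold):
            print(f"n={n:2d}  P(|r|>5) = {np.mean(np.abs(r) > 5):.4f}")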

    The second goal is to work on a d-dimensional generalization of this problem, in particular using pseudo-inverse operators when the look-back period n is less than the dimension of the control problem. One further goal will be to study a non-stationary matrix of evolution or noise such that, by construction, only the most recent observations matter. This non-stationary situation could justify the initial results of Eurich and Pawelzik (2005) and Patzelt et al. (2007; 2011; 2013).

    Eurich, C. W. and Pawelzik, K. (2005). Optimal control yields power law behavior. In International Conference on Artificial Neural Networks, pages 365–370. Springer.

    Felix Patzelt, Markus Riegel, Udo Ernst and Klaus Pawelzik, Self-organized critical noise amplification in human closed loop control, Frontiers in Computational Neuroscience (2007) 1:4, doi: 10.3389/neuro.10.004.2007.

    Felix Patzelt and Klaus Pawelzik, Criticality of Adaptive Control Dynamics, Phys. Rev. Lett. 107, 238103 (2011)

    Felix Patzelt and Klaus Pawelzik, An Inherent Instability of Efficient Markets, Scientific Reports 3 : 2784 | DOI: 10.1038/srep02784 (2013)

    Supervision: Prof. D. Sornette (ETH Zurich) and Prof. S. Lera (SUSTech)

  • Project #4 Jump-diffusion model of bubbles and crashes with non-local behavioral self-referencing

The study of financial bubbles is an extremely important topic in the modern financial world. The dramatic behavior of bubbles is considered by economists, bankers and policy makers to have, without doubt, massive impacts on society. Among the literature on bubbles and crashes, the rational expectation bubble theory has provided the framework on which the Johansen-Ledoit-Sornette (1999) model has been established. It posits that bubbles are due to regimes of positive feedback loops, which create transient super-exponential price growth. Such hyperbolic price growth is not sustainable and ends, with a finite probability, in a crash or, more generally, a change of regime. The problem with this class of models based on rational expectation bubble theory is that the return-risk duality is translated into an instantaneous relationship between the excess return and the crash probability. When the price plateaus, even with a very large overvaluation, rational expectation bubble theory imposes that the crash hazard rate vanishes. This is not realistic.

Our group has developed another class of financial bubble models in which the crash hazard rate is a non-local function of the return path, and not just of the instantaneous return. This allows one to account for a possible market instability when a large mispricing has developed. Specifically, we use a general jump-diffusion framework, and a bubble is quantified as the growth in asset prices that exceeds what can be expected over a certain period of time. Then, the hazard rate of correcting jumps is determined by a non-linear S-shaped function of a non-local estimation of the mispricing. In one specification, the mispricing is estimated as an exponential moving average of the difference between the present and the past log-prices over a long time scale, typically one year or more. This specification is rooted in behavioral finance, exploiting in particular the traits of anchoring on past price levels and of probability judgment about the likelihood of a correction as a function of the amplitude of the self-referential mispricing. An increase in the mispricing in turn increases the crash probability and, by the condition of no-arbitrage, leads to a larger conditional expected return. This process introduces a natural positive feedback mechanism in the price dynamics: the higher the price above a long-term trend, the larger the probability of a correction and the larger the instantaneous rate of return, possibly further fuelling the bubble development until the correcting jumps / crashes occur.
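
A minimal simulation sketch of this class of models is given below. The drift, volatility, EMA time scale, jump size and the plain logistic stand-in for the S-shaped hazard are all assumed toy values, not the actual specification studied in the thesis.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy parameters (assumed for illustration only)
    T, dt = 2000, 1/252          # about 8 years of daily steps
    mu, sigma = 0.08, 0.2        # drift and diffusion volatility
    tau = 1.0                    # EMA time scale (~1 year) for the mispricing
    kappa = 0.4                  # log-price drop at a crash (assumed)
    a, b = 8.0, 0.3              # steepness and midpoint of the S-shaped hazard

    def hazard(m):
        # Logistic (S-shaped) crash hazard rate as a function of mispricing m.
        return 2.0 / (1.0 + np.exp(-a * (m - b))) * (m > 0)

    logp = np.zeros(T)
    ema = 0.0                    # exponential moving average of past log-price
    for t in range(1, T):
        ema += dt / tau * (logp[t-1] - ema)
        m = logp[t-1] - ema      # non-local mispricing proxy
        h = hazard(m)
        # No-arbitrage: jump risk is compensated by an extra drift kappa * h
        drift = (mu + kappa * h) * dt
        diffusion = sigma * np.sqrt(dt) * rng.standard_normal()
        jump = -kappa if rng.random() < h * dt else 0.0
        logp[t] = logp[t-1] + drift + diffusion + jump

    price = np.exp(logp)
    print("final price:", round(float(price[-1]), 2),
          " number of crashes:", int(np.sum(np.diff(logp) < -kappa / 2)))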

The master thesis consists in joining the team of Prof. D. Sornette and, in particular, working with his PhD student Ran Wei to extend, parameterise and calibrate the model, first on synthetic data and then on real data. One particular interest of this model is that it offers the potential for a more stable and better estimation of the true hidden risk premium, which is still the major unsolved problem in financial economics.

The master student will in particular work on exploring extensions of the model:
1. Model with additional parameters:
Based on what we observed in the synthetic data, we propose to introduce parameters representing amplifiers, constraints, and so on, that control the jump intensity so that the synthetic price trajectories possess more features of real price behavior.

2. Model with bifurcated jump intensity:
Explore feasible choices for the “non-linear S-shaped function” mentioned above that describes the crash hazard rate. The original definition uses a log-logistic function, which is likely too simple to represent the real world. One of the options on the list is a function describing a bifurcation.

Both topics aim at empowering the model with the ability to better reproduce some non-trivial behaviors of financial markets and to obtain better calibration results. The master student may choose one of the two topics or, given that the framework is well established, both. We also encourage the master student to develop his/her own ideas. Solid mathematical skills and knowledge of C++ and Matlab/Python are required.

Please contact Ms. Ran Wei, PhD student at
the Chair of Entrepreneurial Risks, D-MTEC
Scheuchzerstrasse 7, SEC, E 5
8092 Zürich
 

  • Project #5 Project External Analytics

in collaboration with CREDIT SUISSE AG
Internal Audit Data Analytics, Switzerland 
see link (PDF, 81 KB)
 

  • Project #6 Project Voice Analytics

in collaboration with CREDIT SUISSE AG
Internal Audit Data Analytics, Switzerland
  see link (PDF, 80 KB)
 

  • Project #7 Social Media Bubble

The social media bubble is a hypothesis stating that there was a speculative boom and bust phenomenon in the field of social media in the 2010s, particularly in the United States. The standard definition of a bubble in a stock is when prices are above a level that can be justified by economic fundamentals.
Other definitions have been proposed in particular in our group (see http://www.er.ethz.ch/financial-crisis-observatory.html and references
in http://www.er.ethz.ch/media/publications/social-systems-finance/bubbles_and_crashes_theory.html).
Here, the master thesis will investigate the bubble in social media, in which social networking services (SNS)
have seen huge growth since 2006. Some investors believed around 2014-2015 that the "bubble" was similar to the dot-com bubble of the late 1990s and early 2000s.
See https://en.wikipedia.org/wiki/Social_media_bubble for a brief account and some references.

The goal of the master thesis is first to review the literature and then to study quantitatively the
financial stock performance of social media companies during this period, using tools developed
in our group, in particular the LPPLS (log-periodic power law singularity) model for bubble diagnostics.
The master thesis will also study the social media bubble in its sociological components.
Furthermore, there is a hypothesis that this social media bubble coincided with the growing adoption of
a new technological platform, namely smartphones, which enabled the development of social networks.
The master thesis will thus investigate this hypothesis and perform relevant regression and causality tests.
 

  • Project#8 Financial Crash Prediction for Trading Strategies and Risk Management

The Log-Periodic Power Law Singularity (LPPLS) model is one of the main workhorses for financial bubble identification and crash forecasting, which was developed in the Chair of Entrepreneurial Risks at ETH Zurich in the last decade. During a bubble, the price of an asset temporarily deviates from its fundamental path of exponential price growth and enters a phase of accelerating, so-called super-exponential growth. Due to the acceleration of the price growth rate, this behaviour is unsustainable and must end at some finite time in the future, when the formed bubble explodes and the dynamics of the underlying asset enter a new regime.

The described process of bubble growth up to the crash is mathematically embodied by the LPPLS model. When calibrating this seven-parameter model to market data using statistical procedures, we can extrapolate the most likely time of the bubble burst, the so-called critical time, as it is one of the model parameters. Successful predictions of the critical time have been published by our group for plenty of cases. These predictions can on the one hand be useful signals for trading strategies and, on the other hand, may be used as warning precursors for risk management.
Around the LPPLS model, which has been studied and applied for about 25 years, there are many interesting, challenging and fascinating topics, such as model extension, model calibration and predictive performance evaluation, that remain open for investigation and are closely connected to statistics, machine learning and econometrics. A crucial task amongst these is the development of a rigorous backtesting procedure for the evaluation of the predictive performance of the model. Therefore, the goal of this thesis will be to run a large TP/TN/FP/FN analysis (https://en.wikipedia.org/wiki/Sensitivity_and_specificity) of the predictions of the LPPLS model. As a result, we will gain insight into the predictive performance of the model in its current state of the art by disentangling false and trustworthy predictions from one another. This will clarify the situations in which the critical time and other quantities derived from the model may be utilized as warning or trading signals.

The dataset that we will use to assess the predictions of the LPPLS model originates from a giant research database that comprises a massive amount of price and LPPLS prediction data for 25k+ assets, developed at the Financial Crisis Observatory founded by Prof. D. Sornette in 2008. The project will involve the use of automated peak detection methods, calculations on ETH Zurich’s supercomputer, as well as result analysis using advanced statistical and machine learning tools. Depending on how fast the student progresses, the analysis will be extended to further metrics that give a richer perspective on the analyzed data. Ideas suggested by the student in the course of the project are very welcome and we are open to adjusting the direction of the project accordingly.
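
As an illustration of the kind of backtest envisaged, the sketch below matches predicted critical times against realised crash dates within an assumed tolerance window and tabulates true positives, false positives and false negatives (true negatives would require defining non-bubble periods, which is left out of this toy example); the matching rule and window length are assumptions, not the procedure to be developed in the thesis.

    import numpy as np

    def confusion_counts(predicted_times, crash_times, window=30):
        # Count TP, FP, FN for predicted critical times vs realised crashes.
        # predicted_times, crash_times: dates expressed as day numbers.
        # window: a prediction within +/- `window` days of a crash counts as
        # a hit (assumed tolerance for illustration).
        predicted = np.asarray(predicted_times, dtype=float)
        crashes = np.asarray(crash_times, dtype=float)
        tp = sum(np.any(np.abs(crashes - p) <= window) for p in predicted)
        fp = len(predicted) - tp
        fn = sum(not np.any(np.abs(predicted - c) <= window) for c in crashes)
        return {"TP": tp, "FP": fp, "FN": fn,
                "precision": tp / max(tp + fp, 1),
                "recall": tp / max(tp + fn, 1)}

    # Hypothetical example: day numbers of predicted critical times and crashes.
    print(confusion_counts(predicted_times=[100, 250, 400, 560],
                           crash_times=[110, 410, 800]))
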
You are invited to participate in and contribute to our interdisciplinary research group. Your skillset should comprise solid mathematical skills, knowledge of statistics, and (advanced) knowledge of Python programming and data analysis.
Please contact PhD student Jean-Christian Gerlach at

  • Project #9 Page view prediction for job advertisements using machine learning techniques

Description:
Machine learning techniques are an essential approach for predicting and improving the click-through rate of ads in search engines. Similar methods could be applied to predict the number of page views a job advertisement will generate on a job platform. Predicting the number of potential applicants that a job advertisement will reach can help companies optimize their recruiting efforts. The project consists of defining features of the job advertisements to be used for the prediction, defining the classifier, selecting models for the prediction, training them on empirical data, and evaluating and testing the models. The goal of the project is to find a predictor for the number of page views that a job advertisement generates; a sketch of one possible pipeline is given after the skills list below.
Data:
• Empirical data from a Swiss job platform consisting of 8000 job advertisements posted between 2009 and 2017, including the content of the advertisement, time information, and job information such as the sector, type and location.
• Google Analytics results such as the page views mapped to the job advertisements.
References:
• Harper, Ray. "The collection and analysis of job advertisements: A review of research
methodology." Library and Information Research 36.112 (2012): 29-54.
• Richardson, Matthew, Ewa Dominowska, and Robert Ragno. "Predicting clicks: estimating the click-through rate for new ads." Proceedings of the 16th international conference on World Wide Web. ACM, 2007.
Required skills:
• Experience in machine learning; experience in natural language processing is a plus.
• Profound knowledge of a programming language, preferably Python.
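
The sketch below outlines one possible pipeline of the kind described above, using TF-IDF features of the advertisement text plus simple categorical metadata and a random forest regressor; the file name, column names, model choice and train/test split are assumptions, not prescriptions of the project.

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder

    # Hypothetical data layout: one row per job ad with its text, metadata
    # and the Google Analytics page-view count as the target.
    ads = pd.read_csv("job_ads.csv")   # assumed columns: text, sector, location, page_views

    features = ColumnTransformer([
        ("text", TfidfVectorizer(max_features=5000), "text"),
        ("cat", OneHotEncoder(handle_unknown="ignore"), ["sector", "location"]),
    ])
    model = Pipeline([
        ("features", features),
        ("regressor", RandomForestRegressor(n_estimators=300, random_state=0)),
    ])

    X_train, X_test, y_train, y_test = train_test_split(
        ads[["text", "sector", "location"]], ads["page_views"],
        test_size=0.2, random_state=0)
    model.fit(X_train, y_train)
    print("MAE on held-out ads:", mean_absolute_error(y_test, model.predict(X_test)))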

  • Project #11 The Reddit online Space game: study of the collective behaviors, cooperation and competition between one million gamers with scarce spatial resources and limited individual actions

    On 1st of April 2017, the Reddit online community launched a massive multiplayer online game, dubbed Place. Starting with a 1000x1000 pixel white canvas, a registered Reddit user could color a single pixel every few (five to twenty) minutes. The game lasted 72 hours, and involved a total of over one million users [1].

    The first hours of the game involved random placement of coloured pixels. Soon after, communities emerged spontaneously to form large visual patterns on the canvas, such as flags of countries, music band logos, cartoon characters, etc. [2]. In addition to collaborating, users also competed, attempting to enlarge their paintings by taking over the territory of other groups. A large, structured dataset describing the actions of each individual over the course of the entire game is publicly available.

    You will:
    • perform basic exploratory data analysis on the dataset;
    • detect and quantify the spatiotemporal patterns underlying collaboration and competition amongst groups of users;
    • depending on the results of the first steps, address scientific questions on the nature of human collaboration and competition over a limited resource.

    In particular, interesting questions include the distribution of sizes, quality and complexity of the patterns within Place, the distribution of contributions over the million people who participated, their spatio-temporal dynamics, the self-excitation and cross-excitations between players, and much more. This study is a template for understanding coordination and cooperation in the presence of heterogeneous motivations in an unstructured environment, which can be argued to be a simplified but realistic representation of real social systems.
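
    A hypothetical first pass over the dataset could look like the sketch below, assuming the tile placements are available as a table with timestamp, user, x, y and color columns (the file name and column names are assumptions).

        import pandas as pd

        # Assumed layout of the public Place dataset: one row per pixel placement.
        placements = pd.read_csv("place_tiles.csv",
                                 names=["ts", "user", "x", "y", "color"],
                                 parse_dates=["ts"])

        # How is activity distributed across the ~1 million participants?
        per_user = placements.groupby("user").size().sort_values(ascending=False)
        print("participants:", per_user.size)
        print("share of pixels placed by the top 1% of users:",
              round(per_user.head(per_user.size // 100).sum() / len(placements), 3))

        # Activity over the 72 hours of the game, in hourly bins.
        hourly = placements.set_index("ts").resample("1h").size()
        print(hourly.describe())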

    The project is appropriate for an individual aspiring to master a broad array of scalable data analytics tools for social networks, either for academic or industrial purposes.

    Citations:
    [1]https://redditblog.com/2017/04/13/how-we-built-rplace/
    [2]https://draemm.li/various/place-atlas/

    Required skills:
    • Expertise in languages well suited for data analytics: Python, MATLAB, or R.
    • Expertise in time series analysis, machine learning, complex networks, and social dynamics is desirable.
    • Interest in social dynamics and data analytics.

    Starting date:
    • As soon as possible

    Supervision:
    • The project will be supervised by professor Didier Sornette (in Zurich) and PhD student Dionysios Georgiadis (in Singapore).

  • Project #12 e-Voting validation in Switzerland and beyond
    (applying robust statistical methods to detect manipulation of votes, based on 152 years of historical vote results in Switzerland)
    Context
    E-voting is currently a hot topic in Switzerland, with two e-voting systems (one from Swiss Post, one from the canton of Geneva) being used in several cantons alongside traditional votes. In the current e-voting testing phase, at most 30% of the population in 11 cantons is eligible to vote using e-votes. The security of e-voting is hotly debated, especially the possibility of manipulation of results by an attacker. As 100% protection is fundamentally impossible, there is a need for robust and objective methods to validate results and detect manipulation with statistical methods. Currently, some cantons validate their results on an ad-hoc basis with unknown methods. A parliamentary initiative introduced on 11 June 2018 demands (A) systematic validation of e-voting results and (B) at least two different e-voting systems deployed to ensure that results from independent channels are available for comparison and detection of manipulation [1].

    Observation
    In spite of the importance of voting in Switzerland, cantons do not yet have a mandate to validate the results. Some do it with unknown (ad-hoc) methods to detect errors in the results delivered by the communes.

    From canton SG, we know that e-vote results typically differ by less than 10% from traditional vote results.

    E-voting is a hot topic; robust detection of potential manipulation is key to ever getting e-voting from testing to full deployment.

    There is a clear need for a robust and objective method to validate voting results.

    Research Question
    Given the voting results of each canton and each channel (traditional results, e-voting results from system A or B), develop statistical methods to identify potential manipulation and to validate the results; a toy illustration of one possible check follows the data list below.

    Data:
    • Historical data of all votes from all cantons for the past 152 years is available.
    • Historical results of the past 20 years (180 votes x 26 cantons) can be analyzed online [2].
    • There is significant correlation between the results of different cantons, which can be used for the statistical analysis.
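
    As a toy illustration of one possible check, the sketch below compares, commune by commune, the "yes" share obtained through e-voting with that of the traditional channel and flags communes whose discrepancy is an outlier under a robust z-score; the data layout and the threshold of 3.5 are assumptions for illustration.

        import numpy as np

        def flag_suspicious(evote_yes_share, paper_yes_share, threshold=3.5):
            # Flag communes whose e-voting vs paper discrepancy is an outlier,
            # using a robust z-score based on the median and the median
            # absolute deviation (MAD); the threshold of 3.5 is an assumed
            # convention.
            diff = np.asarray(evote_yes_share) - np.asarray(paper_yes_share)
            med = np.median(diff)
            mad = np.median(np.abs(diff - med))
            robust_z = 0.6745 * (diff - med) / mad
            return np.abs(robust_z) > threshold

        # Hypothetical "yes" shares for 8 communes on one ballot question.
        paper = np.array([0.52, 0.47, 0.61, 0.55, 0.49, 0.58, 0.44, 0.50])
        evote = np.array([0.54, 0.46, 0.60, 0.57, 0.48, 0.59, 0.63, 0.51])
        print("suspicious communes:", np.where(flag_suspicious(evote, paper))[0])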

    [1] https://www.admin.ch/gov/fr/accueil/documentation/manifestations.event-id-7003.html
    [2] https://www.evotesim.ch


  • Project #13 Comparative analysis between the Panic of 1907 and the Financial Crisis of 2007
    The financial crisis of 2007, which is widely believed to have been triggered by a decline in the US housing market, caught many speculators, investors and financial professionals off guard. However, had they studied the economic conditions and the state of global markets that preceded the Panic of 1907, they might have found some remarkable parallels with the years before 2007. Most strikingly, the 1907 Panic propagated throughout a "shadow banking system" of Trust companies that provided liquidity to the financial markets outside the regulated circuit. When the trust in these Trusts evaporated, because of their opaque structures and governance, panic could spread fast. These Trust companies had low cash reserves relative to deposits and were extremely sensitive to bank runs. Additionally, the 1907 Panic was triggered by a climate of excessive speculation, which led to the collapse of the Knickerbocker Trust. It marked the end of the first wave of globalization, which had started around 1870.

    The master thesis is planned as follows:

    a) An independent and concise description of the conditions preceding both crashes;
    b) A comparative analysis of the parallels between both periods;
    c) An analysis of the differences between both periods.

    The approach will be as much as possible data driven, with a keen eye for underlying processes and dynamical systems.

    The different topics that may be relevant in this research are the following:

    - The structure and status of financial markets and the shadow banking system of 1907 and 2007;
    - The different waves of globalization of 1870-1907 and 1980-2007: international trade, transportation costs, new communication methods, migration patterns ...
    - The gold standard and British imperialism in the late 19th and early 20th century;
    - The regulatory response to the 1907 Panic and the establishment of the Federal Reserve;
    - The different patterns of recovery after 1907 and 2007;
    - Inequality;
    - Policy ...

  • Project #14 Tests of stock return predictability of 100+ indicators
    There is a vast literature in academia, proposing many variables that are shown to predict cross-sectional stock returns. They range from fundamental value indicators to technical variables based on various price and volatility patterns.

    The goal of this master thesis is to revisit and enrich the seminal study of R.D. McLean and J. Pontiff [1] by (i) analysing complementary measures of success that are more relevant to professional investors and (ii) combining these 100+ indicators using decision trees and possibly other machine learning methods to improve the results.

    [1] R.D. McLean and J. Pontiff,  Does Academic Research Destroy Stock Return Predictability? The Journal of Finance 71 (1), 5-31 (2016)

  • Project #15 Structural model of the dynamics of corruption and how to get rid of it
    Corruption is a form of dishonesty undertaken by a person entrusted with a position of authority, often to acquire personal benefit. Corruption may include many activities including bribery and embezzlement, though it may also involve practices that are legal in many countries. Political corruption occurs when an office-holder or other governmental employee acts in an official capacity for personal gain.
    Corruption is most commonplace in kleptocracies, oligarchies, narco-states and mafia states.
    See https://en.wikipedia.org/wiki/Corruption
    Arguably, corruption is one of the main, if not the main, obstacles against healthy economic growth and harmonious development with a relatively fair sharing of resources to ensure a broadly distributed welfare. There have been many studies purporting to find the causes of corruption. Reciprocally, corruption itself leads to many responses and consequences. The causality is often not clear, and attempts to crack down on corruption have often had only temporary effects.

    This project is based on the idea that what is missing is an integrative dynamical-system view of corruption coupled with all the other relevant variables. Rather than constructing static models of causality, the goal of this master thesis is to integrate all relevant variables into a dynamical nonlinear coupled mathematical framework. This model will make it possible to predict changes of regime, and to identify the key actions that can tilt an organisation or a nation towards more or less corruption. The guidelines for the modelling are found in the book of Scheffer [1].
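
    To make the "changes of regime" idea concrete, the sketch below integrates the canonical bistable system dx/dt = c + x - x^3, in the spirit of the critical-transitions framework of Scheffer [1]: sweeping the control parameter c up and then down makes the state jump between regimes at different thresholds (hysteresis). The specific equation and parameter values are illustrative assumptions only, not the model to be built in the thesis.

        import numpy as np

        def equilibrium(c, x0, steps=2000, dt=0.01):
            # Relax dx/dt = c + x - x**3 to its attractor from initial state x0.
            x = x0
            for _ in range(steps):
                x += dt * (c + x - x**3)
            return x

        # Sweep the control parameter up and then back down, starting each time
        # from the previously reached state: the jumps occur at different values
        # of c, i.e. the system shows hysteresis (a signature of regime shifts).
        cs = np.linspace(-0.6, 0.6, 25)
        x = -1.0
        up = []
        for c in cs:
            x = equilibrium(c, x)
            up.append(round(x, 2))
        down = []
        for c in cs[::-1]:
            x = equilibrium(c, x)
            down.append(round(x, 2))
        print("sweep up:  ", up)
        print("sweep down:", down)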

    [1] Marten Scheffer, Critical Transitions in Nature and Society (Princeton Studies in Complexity), Princeton University Press, 2009.

  • Project #16 Cyber Risks and Data Breaches
    The 3rd and 4th industrial revolutions have brought a rapid coupling of the physical and digital (i.e., cyber) worlds. Cyber criminals are increasingly exploiting opportunities in the cyber domain, often with catastrophic consequences for firms and individuals. This risk is extreme and rapidly evolving; there is even debate about whether insurance companies will be able to cover the risk, and how to quantify it. Data breaches of private information in particular have been identified as one of the costliest risks that firms are exposed to [1]. Yahoo, Equifax, Uber, JP Morgan, Sony Pictures, etc. have all been affected, and the quantity of private information exposed (number of names, addresses, credit card numbers, etc.) has eclipsed the size of the human population; see our previous study [2].

    Existing studies and open data breach data (e.g., [3]) have been overly simple. We propose a master thesis, in collaboration with the chair, on:

    1) Improving data breach event information: Measurement of event severity, victim firm characteristics, attack characteristics, etc. E.g., one can compare the amount of information breached claimed by the company with the actual breached information (available on the Darknet).

    2) Statistical analysis for a more detailed risk quantification. E.g., characterizing risk as a function of victim firm size, sector, etc.

    3) And, expanding the scope of data breaches to include an important new class: cryptocurrency – where there is now substantial intentional and accidental exposure of private keys, and loss of crypto assets.

    The thesis should be well balanced between event/subject-matter research, working with data, and statistical analysis. Supervision by experts in statistical risk assessment and cyber risk will be available, and productive collaboration with other members of the chair is foreseen.
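
    One of the statistical analyses under point 2 could start from the heavy-tailed nature of breach sizes documented in [2]. The sketch below estimates the tail exponent of breach sizes with a simple Hill estimator on synthetic data; the generated sample and the 10% tail fraction are assumptions for illustration.

        import numpy as np

        def hill_estimator(sizes, tail_fraction=0.10):
            # Hill estimator of the tail index alpha for P(X > x) ~ x^(-alpha),
            # using the largest `tail_fraction` of the sample.
            x = np.sort(np.asarray(sizes, dtype=float))[::-1]   # descending
            k = max(int(len(x) * tail_fraction), 2)
            tail = x[:k]
            return (k - 1) / np.sum(np.log(tail[:-1] / tail[-1]))

        # Synthetic breach sizes (number of records) drawn from a Pareto law
        # with alpha = 0.7, i.e. an extremely heavy tail.
        rng = np.random.default_rng(2)
        sizes = (rng.pareto(0.7, size=5000) + 1.0) * 1e3
        print("estimated tail exponent alpha ~", round(hill_estimator(sizes), 2))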

    [1] IBM/Ponemon, Cost of Data Breach Study, https://www.ibm.com/security/data-breach
    [2] Wheatley, Spencer, Thomas Maillart and Didier Sornette. "The extreme risk of personal data breaches and the erosion of privacy." The European Physical Journal B 89.1 (2016): 7.
    [3] https://www.privacyrights.org/data-breaches

 

  • Project #17 Back-testing of trading strategies based on the Financial Crisis Observatory (FCO) output (http://tasmania.ethz.ch/pubfco/fco.html)
    For the FCO, our group has developed a number of indicators, based on the models and methods on financial bubbles and crashes largely published by the group of Prof. D. Sornette, which can also be found on this site (Bubbles and crashes: theory, and Bubbles and crashes: theory - empirical analyses, http://www.er.ethz.ch/media/publications/social-systems-finance/bubbles_and_crashes_theory_empirical_analyses.html). The goal of this master thesis is to learn how to develop trading strategies and risk control methods using these indicators.

  • Project #18 Bitcoin Bubbles: Epidemic-Diffusion Analyses and Models
    Since 2012, bitcoin has undergone a series of mind-blowing bubble and crash events, which seem historically unequalled in their amplitude (transient price appreciations by factors of tens or even hundreds, followed by very fast and large crashes organised in a hierarchy of amplitudes and time scales).

    The previously known mechanisms of imitation, herding, attention bias, irrational exuberance and the irresistible greed for quick and easy gains can be recognised to be at play in the bitcoin bubbles. However the sheer amplitude and recurrence of the bitcoin bubbles (as well as many other crypto-currencies) raise the question whether these above mechanisms constitute the whole story. In other words, is there something genuinely new ("this time is different?") in the bitcoin bubble phenomena?

    The goal of this master project is to develop empirical analyses, followed by theoretical formalisations, testing the hypothesis that a novel ingredient could be the existence of geo-political groups distributed over the Earth whose activities can be described by an epidemic or diffusion process in space (not just in terms of triggered trading activity in a mean-field, fully connected world). At the risk of being simplistic, but for illustrative purposes, bitcoin trading may be dominated by Chinese traders at some time, then Japanese traders take over, then Koreans, etc. This diffusion can go back and forth depending on a variety of events that unfold. In other words, we hypothesise that geography (in the complex network of bitcoin exchanges and traders distributed over the Earth) matters. This is a completely novel ingredient in our wholly connected world, where geographical distances were thought to be erased, at least apart from ultra-high-frequency trading. Historically, diffusion of prices occurred in the Middle Ages for wheat prices, with a typical diffusion speed of no more than tens of kilometres per day [1-3]. But the bitcoin phenomena are likely to be much richer in their diffusion properties, occurring at the scale of the whole Earth and organised through social groups. This master thesis will start from and extend models of epidemic spreading such as [4-6].
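
    To fix ideas about what an epidemic or diffusion process in space could mean here, the sketch below runs a toy metapopulation SIR-type model in which trading frenzy spreads between geographic communities through an assumed coupling matrix; all rates, couplings and region labels are illustrative assumptions, and the thesis would instead start from models such as [4-6].

        import numpy as np

        # Toy metapopulation SIR-type model: each node is a geographic trading
        # community; "infection" stands for active bitcoin trading frenzy that
        # spreads along the (assumed) coupling matrix between regions.
        regions = ["CN", "JP", "KR", "US", "EU"]
        coupling = np.array([              # assumed contact rates between regions
            [0.00, 0.15, 0.10, 0.05, 0.05],
            [0.15, 0.00, 0.20, 0.05, 0.05],
            [0.10, 0.20, 0.00, 0.05, 0.05],
            [0.05, 0.05, 0.05, 0.00, 0.15],
            [0.05, 0.05, 0.05, 0.15, 0.00],
        ])
        beta, gamma, dt = 0.8, 0.1, 0.1    # internal spreading and recovery rates

        S = np.full(len(regions), 0.99)    # susceptible fraction per region
        I = np.array([0.01, 0.0, 0.0, 0.0, 0.0])   # frenzy starts in one region

        for step in range(600):
            force = beta * I + coupling @ I        # local + imported "infection"
            new_inf = force * S * dt
            recov = gamma * I * dt
            S, I = S - new_inf, I + new_inf - recov
            if step % 150 == 0:
                print(f"t={step * dt:5.1f}", dict(zip(regions, np.round(I, 3))))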

    The master thesis will be a joint collaboration between Prof. D. Sornette (chair of Entrepreneurial Risks at ETH Zurich) and Prof. Alex Lipton (Visiting Professor of Financial Engineering, École Polytechnique Fédérale de Lausanne and Connection Science Fellow, Massachusetts Institute of Technology).

    [1] Roehner (B.M.) 1999: The space-time pattern of price waves. The European Physical Journal B 8, 151-159.
    [2] Roehner (B.M.) 2000: The correlation length of commodity markets: 1. Empirical evidence. The European Physical Journal B 13,175-187.
    [3] Roehner (B.M.) 2000: The correlation length of commodity markets: 2. Theoretical framework. The European Physical Journal B 13, 189-200 (2000)
    [4] Yun Feng, Li Ding, Yun-Han Huang and Li Zhang, Epidemic spreading on weighted networks with adaptive topology based on infective information, Physica A 463, 493-502 (2016)
    [5] Bing Wang, Yuexing Han and Gouhei Tanaka, Interplay between epidemic spread and information propagation on meta-population networks, Journal of Theoretical Biology 420, 18-25 (2017)
    [6] She Han, Mei Sun, Benjamin Chris Ampimah and Dun Han, Epidemic spread in bipartite network by considering risk awareness Physica A 492, 1909-1916 (2018)

  • Project #19 Emergent Utility Functions
    A fundamental axiom of economic theory and most of social sciences is that people have well-defined preferences that can be captured by a (sometimes stochastic and difficult to define) utility function specific to each individual. The present project aims at developing a new framework to test the validity of this axiom and investigate to what degree it should be challenged.

    Consider the agro-industry that supplies billions of people with junk and sugared food, leading to the modern diseases of the western world, such as the so-called epidemics of obesity and many chronic diseases including allergy. A standard economic approach would be that people maximise their utility (pleasure) and the industry responds rationally by providing such addictive, health-adverse products. Economic theory would then respond by invoking a kind of externality at work here, or a partial breakdown of the information available to consumers on the long-term effects of their so-called optimising behaviors. Consider different human cultures: while there are fundamentals such as biological needs for food, protection and breeding, other preferences may vary widely across a spectrum of cultures.

    The goal of the project is thus to develop an agent-based model in the spirit of [1,2], exploring how preferences and utility functions can emerge bottom-up from an evolutionary and cultural perspective, through survival, competition and cooperation, depending on the nature and richness of the environment. In other words, the conjecture to be tested is that environment, supply and culture shape in large part what we call preferences and utility functions.

    [1] M. Hetzer and D. Sornette,
    The co-evolution of fairness preferences and costly punishment, PLoS ONE 8 (3), e54308, pp.1-18 (2013)
    (http://arxiv.org/abs/0908.3072) and (http://ssrn.com/abstract=1468517)

    [2] M. Hetzer and D. Sornette,
    An Evolutionary Model of Cooperation, Fairness and Altruistic Punishment in Public Good Games,
    PLoS ONE 8(11): e77041, pp. 1-13. doi:10.1371/journal.pone.0077041  (2013)
    (http://ssrn.com/abstract=1927919)

  • Project #20 Agent-based model of in-group preference of human cooperation and strong reciprocity
    In [1,2], we showed that experiments performed in the group of E. Fehr on human cooperation and altruistic punishment could be quantitatively accounted for in agent-based models, with agents endowed with disadvantageous inequity aversion. The set-up in [1,2] considers global preferences. A more psychologically realistic model suggests that we have in-group preferences (e.g., I care more about inequality faced by members of my social group, than I care about members of other groups).
    This idea is hotly debated in countries with multiple ethnicities, such as the US, because of recent race-related tensions. A recent and popular book, "The New Jim Crow: Mass Incarceration in the Age of Colorblindness" (Michelle Alexander, 2010), makes the case that anti-black racism was introduced in the US as a means to stifle global preferences against inequality, prompting people to accept a less just system than they would have otherwise.
    The core question of the present proposed research project for a master thesis is twofold. First, how much of an in-group preference do individual agents need to have before collective norms for altruistic punishment fail to evolve? And, second, does the exogenous introduction of an in-group bias into a mature system (that previously did not have any such in-group bias) destabilize the existing norm? Our current study [1,2] showed that the system converged toward a stable level of "disadvantageous inequity aversion", and the goal now is to explore whether this stability is threatened by in-group bias. In addition to the exogenous introduction of an in-group bias, we can also surmise that, if we allow "mutations" to occur that favour in-group biases, then they will tend to be evolutionarily stable strategies, so that it is probably unnecessary to introduce the in-group bias exogenously. These ideas are to be tested in simulations of an agent-based model that generalises [1,2].

    One can consider the problem as an analogy to Schelling's 1971 model of housing segregation, which showed how small individual preferences for the in-group can influence collective outcomes. We want to find out, in a simple model, how much in-group preference needs to exist at the agent level before major effects are visible at the collective level.

    [1]  M. Hetzer and D. Sornette,
    The co-evolution of fairness preferences and costly punishment, PLoS ONE 8 (3), e54308, pp.1-18 (2013)
    (http://arxiv.org/abs/0908.3072) and (http://ssrn.com/abstract=1468517)

    [2] M. Hetzer and D. Sornette,
    An Evolutionary Model of Cooperation, Fairness and Altruistic Punishment in Public Good Games, PLoS ONE 8(11): e77041, pp. 1-13. doi:10.1371/journal.pone.0077041 (2013) (http://ssrn.com/abstract=1927919)

    Contact person: Prof. D. Sornette (ETH Zurich) and Dr. Ravi S. Kudesia (Behavioral Scientist, Future Resilient Systems at the National University of Singapore)

  • Project #21 Predictive Analytics in Social Media; link to the description (PDF, 239 KB)

  • Project #22 Real-time Event Detection with Twitter using unsupervised learning techniques; link to the description (PDF, 121 KB)

  • Project #23 Horse-races between models of decision making (cumulative prospect theory, transfer attention theory, quantum decision theory…) 
    During the last 25 years, prospect theory and its successor, cumulative prospect theory (CPT), have replaced expected utility as the dominant descriptive theories of risky decision making. Although these models account for the original Allais paradoxes, 11 new paradoxes show where prospect theories lead to self-contradiction or systematic false predictions. The purpose of this master thesis is to investigate the relative value of competing theories of decision making: cumulative prospect theory, "configural weight" models (in which probability-consequence branches are weighted by a function that depends on branch probability), quantum decision theory (QDT) (which accounts for the entanglement of prospects), and so on. The student will first use an empirical data set comprising 91 choices between two lotteries (two "prospects") offered in random pairs, made by 142 subjects at two separate times. Other data sets will also be investigated. Particular attention will be paid to the probabilistic component of choices. This master thesis aims at building on and considerably extending a previous one that compared only CPT and QDT on a restricted data set.

  • Project #24 Momentum and acceleration based strategies using optimal trend and curvature estimators on sparse data
    This master thesis will elaborate on a recent set of optimal estimators of trend and curvature obtained over very few data points. These estimators will then be used to devise novel trading strategies for investment in various financial assets. The master thesis is thus a combination of (i) analytical investigation of optimal estimators on sparse data sets and (ii) practical implementation to develop strong knowledge of trading strategies in finance.
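
    As a baseline against which such optimal estimators could be compared, the sketch below extracts trend (slope) and curvature from a handful of noisy points with an ordinary least-squares quadratic fit; the window length and noise level are assumptions for illustration.

        import numpy as np

        def trend_and_curvature(y, dt=1.0):
            # Least-squares quadratic fit over a short window of observations y.
            # Returns (trend, curvature) evaluated at the centre of the window,
            # i.e. the first and second derivatives of the fitted parabola.
            y = np.asarray(y, dtype=float)
            t = (np.arange(len(y)) - (len(y) - 1) / 2) * dt   # centred times
            c2, c1, c0 = np.polyfit(t, y, deg=2)              # y ~ c2 t^2 + c1 t + c0
            return c1, 2.0 * c2

        # Hypothetical example: 7 noisy observations of an accelerating price.
        rng = np.random.default_rng(3)
        true = 100 + 0.5 * np.arange(7) + 0.1 * np.arange(7) ** 2
        obs = true + rng.normal(scale=0.2, size=7)
        print("estimated (trend, curvature):", np.round(trend_and_curvature(obs), 3))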

  • Project #25 Extension of Markowitz portfolio optimisation to include constraints of diversification with the Herfindahl index and other means; analytical solutions and implementation tests. It is well known that the standard Markowitz portfolio optimisation leads to over-concentration in a few assets, in contradiction with the need for diversification, especially in the presence of return uncertainties. The project includes testing novel investment strategies that combine return-risk optimisation with diversification constraints to obtain robust portfolios.

    The master thesis will be co-supervised by Prof. D. Sornette and a PhD student, and will benefit from a well-honed data access process and testing methodology.
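
    A minimal sketch of the kind of constrained optimisation involved is given below: it maximises a mean-variance objective with a penalty on the Herfindahl index (the sum of squared weights) of the portfolio, under long-only, fully-invested constraints. The toy returns, covariance, risk aversion and penalty strength are assumptions for illustration.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(4)
        n_assets = 6
        mu = rng.normal(0.06, 0.03, n_assets)            # toy expected returns
        A = rng.normal(size=(n_assets, n_assets))
        cov = A @ A.T / n_assets * 0.04                   # toy covariance matrix
        risk_aversion, herfindahl_penalty = 3.0, 0.5      # assumed strengths

        def objective(w):
            # Negative of: expected return - risk penalty - concentration penalty,
            # where the Herfindahl index sum(w**2) penalises concentration.
            return -(w @ mu - risk_aversion * w @ cov @ w
                     - herfindahl_penalty * np.sum(w ** 2))

        constraints = [{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}]
        bounds = [(0.0, 1.0)] * n_assets
        w0 = np.full(n_assets, 1.0 / n_assets)
        res = minimize(objective, w0, bounds=bounds, constraints=constraints)

        print("weights:", np.round(res.x, 3))
        print("Herfindahl index:", round(float(np.sum(res.x ** 2)), 3))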

  • Project #26 Data assimilation with particle filters of the predictions of critical times of financial bubbles and crashes obtained at the Financial Crisis Observatory using the log-periodic power law singularity method. The goal is to combine past predictions with present observations of financial data to obtain the best estimate of the present state of the world, which is then input into the LPPLS machinery. The full integration of a data assimilation method into the LPPLS methodology is expected to significantly advance the existing diagnostics of bubbles and crashes and to serve as the new engine powering a future generation of the Financial Crisis Observatory at ETH Zurich (http://tasmania.ethz.ch/pubfco/fco.html).

    The master thesis will be co-supervised by Prof. D. Sornette and a PhD student, and will benefit from a well-honed data access process and testing methodology.
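
    For orientation, the sketch below implements a generic bootstrap particle filter on a toy random-walk state observed with noise; the model, noise levels and particle count are assumptions with nothing LPPLS-specific, but the propagate / weight / resample loop is the data-assimilation core referred to above.

        import numpy as np

        rng = np.random.default_rng(5)

        # Toy state-space model (assumed): a latent state x_t follows a random
        # walk and is observed with noise.
        T, n_particles = 50, 2000
        process_std, obs_std = 0.1, 0.5

        true_x = np.cumsum(process_std * rng.standard_normal(T))
        obs = true_x + obs_std * rng.standard_normal(T)

        particles = rng.normal(0.0, 1.0, n_particles)
        estimates = []
        for y in obs:
            # 1. Propagate particles through the process model.
            particles = particles + process_std * rng.standard_normal(n_particles)
            # 2. Weight particles by the likelihood of the new observation.
            weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
            weights /= weights.sum()
            # 3. Resample (bootstrap) and store the posterior mean.
            particles = rng.choice(particles, size=n_particles, p=weights)
            estimates.append(np.mean(particles))

        rmse = np.sqrt(np.mean((np.array(estimates) - true_x) ** 2))
        print("filter RMSE:", round(float(rmse), 3), " observation noise std:", obs_std)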

  • Project #27 The Acceleration Effect in International Stock Markets
    Recent work conducted by the Chair of Entrepreneurial Risks has reported strong evidence that changes of momentum, i.e. acceleration, provide better performance and higher explanatory power than momentum itself (Ardila et al., 2015). The acceleration effect not only challenges the predominant role of momentum, arguably the most studied financial anomaly, but also provides supporting evidence for the ubiquity of transient positive feedbacks influencing the price formation process. Examples of these procyclical mechanisms include the market impact of option hedging, market makers' bid-ask spreads in response to past volatility, learning of business networks, algorithmic trading, asymmetric information on hedging strategies, stop-loss orders, deregulation, central banks' easy monetary policies, imitation and herding, among others.

    The goal of this master thesis is to further investigate the existence of acceleration patterns in international stock markets, following the approach conducted by Ardila et al. (2015) and along the lines of the analysis performed by Fama and French (2012) for size, value, and momentum. The project is well bounded and may lead to an academic publication. The student will leverage the research conducted by members of the Chair, and will be closely guided throughout the project.
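
    The sketch below shows, on an assumed toy price panel, one simple way to construct the two competing signals: the standard 12-1 month momentum and an acceleration defined as the change between recent and older momentum. The exact definitions used by Ardila et al. (2015) may differ, so these formulas are illustrative assumptions only.

        import numpy as np
        import pandas as pd

        # Toy monthly price panel (assumed): rows are month-ends, columns are stocks.
        rng = np.random.default_rng(6)
        dates = pd.date_range("2010-01-31", periods=60, freq=pd.offsets.MonthEnd())
        prices = pd.DataFrame(
            100 * np.exp(np.cumsum(rng.normal(0.005, 0.05, (60, 4)), axis=0)),
            index=dates, columns=["A", "B", "C", "D"])

        # Standard momentum: past 12-month return, skipping the most recent month.
        momentum = prices.shift(1) / prices.shift(12) - 1.0

        # Acceleration: change of momentum, here the difference between the
        # momentum of the most recent 6 months and that of the 6 months before.
        mom_recent = prices.shift(1) / prices.shift(7) - 1.0
        mom_older = prices.shift(7) / prices.shift(13) - 1.0
        acceleration = mom_recent - mom_older

        print(momentum.tail(1).round(3))
        print(acceleration.tail(1).round(3))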

    The candidate should have sound knowledge of econometrics, be familiar with asset pricing factor models (e.g. the CAPM and arbitrage pricing theory), and be comfortable with Python, ideally with the Pandas library as well.

    If you are interested, please contact Diego Ardila,

    References:
    Fama, Eugene F., and Kenneth R. French. "Size, value, and momentum in international stock returns." Journal of financial economics 105, no. 3 (2012): 457-472.

    Ardila, Diego, Zalán Forró, and Didier Sornette. "The Acceleration Effect and Gamma Factor in Asset Pricing." Swiss Finance Institute
    Research Paper 15-30 (2015).

    Asness, Clifford S., Tobias J. Moskowitz, and Lasse Heje Pedersen. "Value and momentum everywhere."
    The Journal of Finance 68.3 (2013): 929-985.

  • Project #28 Bubble analysis of the Swiss real estate market using a hedonic index
    We have access to a unique database of both asking and transaction prices of real estate in Switzerland. This is an excellent opportunity for a master’s student to explore these data and to create hedonic indices at the national and district levels using the price information as well as property-specific characteristics. These indices will then be used as a basis for further analysis of the Swiss real estate market for signs of bubbles. An ideal candidate would be familiar with econometrics, have good knowledge of statistics, and be comfortable programming in Matlab, Python or R. If you are interested in this project, please contact Dr. Dorsa Sanadgol ()
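
    To illustrate what a hedonic index boils down to, the sketch below regresses log transaction prices on property characteristics plus time dummies, on simulated data; the time-dummy coefficients (relative to the base quarter) form the log index. The column names, coefficients and quarterly granularity are assumptions, not the real schema.

        import numpy as np
        import pandas as pd

        # Hypothetical transaction records: living area, rooms and quarter of sale.
        rng = np.random.default_rng(7)
        n = 400
        df = pd.DataFrame({
            "area": rng.uniform(50, 200, n),
            "rooms": rng.integers(2, 7, n),
            "quarter": rng.choice(["2012Q1", "2012Q2", "2012Q3", "2012Q4"], n),
        })
        quarter_effect = {"2012Q1": 0.00, "2012Q2": 0.02, "2012Q3": 0.05, "2012Q4": 0.08}
        df["log_price"] = (11.0 + 0.007 * df["area"] + 0.05 * df["rooms"]
                           + df["quarter"].map(quarter_effect)
                           + rng.normal(0, 0.1, n))

        # Hedonic regression: log price on characteristics plus quarter dummies.
        X = pd.get_dummies(df[["area", "rooms", "quarter"]], columns=["quarter"],
                           drop_first=True).astype(float)
        X.insert(0, "const", 1.0)
        beta, *_ = np.linalg.lstsq(X.to_numpy(), df["log_price"].to_numpy(), rcond=None)
        coef = dict(zip(X.columns, np.round(beta, 3)))

        # The quarter coefficients are the (log) hedonic index relative to 2012Q1.
        print({k: v for k, v in coef.items() if k.startswith("quarter_")})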

    The proposed master’s thesis will extend our past research illustrated in these publications:

    Diego Ardila, Peter Cauwels, Dorsa Sanadgol, and Didier Sornette, Is There A Real Estate Bubble in Switzerland? Diagnostic of Q4/2012, Swiss Real Estate Journal 6, 38-47 (2013) (http://ssrn.com/abstract=2237561)

    Diego Ardila, Dorsa Sanadgol, Peter Cauwels and Didier Sornette, Identification and critical time forecasting of real estate bubbles in the U.S.A. and Switzerland, working paper (http://ssrn.com/abstract=2465000)

  • Project #29 Designing and backtesting a factor model for stocks combining bubble, value and growth scores
    At the Financial Crisis Observatory, an FCO Cockpit report is published, synthesizing the global bubble status of financial markets on a monthly basis (the reports can be found here:  or 
  • Project #30 Market impact of financial bubbles dragon-hunters
    The goal of this master thesis is to extend the agent-based model proposed by T. Kaizoji, M. Leiss, A. Saichev and D. Sornette, Super-exponential endogenous bubbles in an equilibrium model of rational and noise traders, Journal of Economic Behavior and Organization (in press) (http://ssrn.com/abstract=2561719). This model was designed to investigate bubbles in financial markets. The model assumes two types of agents: rational traders who invest a constant fraction of their wealth in stocks, and noise traders who base their investment decisions on social imitation and past price performance. Using this model, it is possible to simulate markets that exhibit important statistical regularities resembling those of real markets, namely volatility clustering and a slowly decaying autocorrelation function of absolute returns. Furthermore, the emergence of bubbles and their connection to the parameters that control noise trader behavior have been established and verified in simulations. The model is able to generate bubbles with a realistic duration, i.e. a length of roughly one year.

    Building upon this model, the proposed master thesis will investigate the impact of another trader type whose agents use log-periodic power laws (LPPL) to predict bubbles. A bubble detection algorithm will be developed, together with a corresponding investment strategy balancing full information and computation time. The main goal of this work is to quantify the impact of an LPPL-powered strategy employed by some market participants on the whole market and in particular on bubble development. To this end, markets will be simulated with LPPL investors of different wealth levels and different implementations of their strategy to minimise market impact, in order to test the hypothesis that bubble hunters will aggravate the size of bubbles and the pain of the following crashes.

  • Project #31 Drawdowns and financial risks
    A major concern for financial risk management is the existence of "outliers" (also called "kings" or "black swans") in the distribution of financial risks measured at variable time scales, such as with drawdowns. These outliers are identified only with metrics adapted to take into account transient increases of the time dependence in the time series of returns of individual financial assets. They seem to belong to a statistical population different from the bulk of the distribution and thus require specific treatment. The Chair of Entrepreneurial Risks proposes a master thesis whose main purpose is the study of the challenging question of identifying and quantifying the rallies and drawdowns observed in the time series of financial returns. The work will also include the study of the statistical distribution of the rallies and drawdowns and the development of asset allocation methods adapted to account for these phenomena. Applicants must have good computational skills (knowledge of Matlab® is mandatory) as well as a good level in finance, mathematics and statistics.
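
    For concreteness, the sketch below computes the running drawdowns of a toy price series (peak-to-current declines) and summarises them; the definition of drawdowns used in the group's work on outliers is more refined (based on runs of returns), so this is only an assumed baseline.

        import numpy as np

        rng = np.random.default_rng(8)
        prices = 100 * np.exp(np.cumsum(rng.normal(0.0003, 0.01, 2500)))  # toy prices

        running_max = np.maximum.accumulate(prices)
        drawdown = prices / running_max - 1.0          # peak-to-current decline

        print("maximum drawdown: {:.1%}".format(drawdown.min()))
        print("time spent more than 10% below the last peak: {:.1%}"
              .format(np.mean(drawdown < -0.10)))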

  • Project #32 Study of a nonlinear model of the price of an asset: Kalman filter calibration to real data to develop a trading program of financial stocks

  • Project #33 Endogenous versus exogenous dynamics and scaling laws in YouTube dynamics of the fame of video
    - Finding correlations between the most viewed videos each day and relevant news stories in order to gauge whether or not YouTube can act as a sort of social barometer. 
    - Looking at the microdynamics of the ranks of videos. Analyzing on the basis of Rank Clocks (Nature: http://www.nature.com/nature/journal/v444/n7119/full/nature05302.html)
    - Developing a measure of the relative importance or impact of the different "featuring" mechanisms on YouTube; in other words, what is the effect of being featured in the "Most Viewed" part of the website as compared to the "Most Discussed" section of the site?

  • Project #34 Economics of Global Earthquake Monitoring
    This project has the goal of studying the economic costs and operational management components of a global earthquake forecasting (GEF) system, which could be developed based on a wide variety of pre-earthquake (pre-EQ) signals that the Earth starts to produce about 1-2 weeks before major seismic events. GEF will combine data from satellites with data from ground stations and other information available on the ground. The idea is to monitor the evolution of these signals in near-real time, send the data to a data center, and correlate the information in order to arrive at a decision 1-2 days before major earthquakes and issue a warning for a given region and time window. The master thesis will be performed as an interdisciplinary project supervised by Prof. D. Sornette together with Prof. F. Freund from NASA Ames in California.

  • Project #35 Precursors and decay of happiness
    The goal is to test, on the dynamics of happiness, the endo-exo theory that we have developed elsewhere for commercial successes, and to understand the power-law distribution of waiting times between cause and action.

    see http://www.uvm.edu/~cdanfort/research/dodds-danforth-johs-2009.pdf (PDF, 695 KB)
    How Happy Is the Internet? http://news.sciencemag.org/social-sciences/2009/08/how-happy-internet

  • Project #36 Empirical tests of punctuated evolution due to delayed carrying capacity in economics and biology
    For instance, use data on the five known developments and mass extinctions of biological species, data on successions of S-curves for innovations, and so on.
    This project is based on the paper: V.I. Yukalov, E.P. Yukalova and D. Sornette, Punctuated Evolution due to delayed Carrying Capacity, Physica D 238, 1752-1767 (2009) (http://arxiv.org/abs/0901.4714)
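
    For intuition on what a delayed carrying capacity produces, the sketch below integrates a Hutchinson-type delayed logistic equation, a simplified stand-in for the model of the paper above (the functional form, delay and parameters are assumptions, not the equations of Yukalov, Yukalova and Sornette, 2009): for a large enough product of growth rate and delay, the smooth S-curve turns into a punctuated, oscillatory approach to the carrying capacity.

        import numpy as np

        # Hutchinson-type delayed logistic: dN/dt = r * N(t) * (1 - N(t - tau) / K).
        r, K, tau, dt, T = 1.0, 1.0, 2.0, 0.01, 60.0

        steps = int(T / dt)
        lag = int(tau / dt)
        N = np.empty(steps)
        N[: lag + 1] = 0.1                      # constant history before t = 0

        for t in range(lag, steps - 1):
            N[t + 1] = N[t] + dt * r * N[t] * (1.0 - N[t - lag] / K)

        # Sample the trajectory at a few times to show the oscillatory approach
        # instead of a smooth S-curve.
        for t_mark in (5, 15, 30, 45, 60):
            print(f"t = {t_mark:4.1f}   N = {N[min(int(t_mark / dt), steps - 1)]:.3f}")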

  • Project #37 Real-time "paper-money" implementation of trading strategies using reverse engineering of financial markets with Agent-Based Models (ABM)
    Goals: Understanding, Testing and Ideally Extending / Evolving a Trading Strategy based on Reverse Engineering Financial Markets with Agent-Based Models (ABM)
    Tasks:
    - Getting to know the concept of ABM and understanding their potential when used for reverse engineering financial markets.
    - Learning the fundamentals of genetic algorithms (and similar evolutionary strategies).
    - Getting familiar with the online trading platform "Interactive Brokers" for performing (automated) trading according to the strategy (paper trading and optionally real trading).
    - Developing an understanding of strategy performance analysis and applying it to the developed strategy.
    Requirements:
    - Understanding C++; knowing the boost library is a plus.
    - A plus: having experience in using Interactive Brokers or similar trading platforms.

  • Project #38 In macro-economics, modellers essentially use conditions of monotonic preferences, and there are essentially four fundamental variables that macro-economists consider:
    1) more wealth is better
    2) less inequality is better
    3) less inflation is better
    4) less unemployment is better.

    RESEARCH QUESTION: at the micro-economic level, we have shown that people are "disadvantageous inequity averse" [1,2]. How does this renormalize into the adverse effect of INEQUALITY at the macro level? The research will consist in developing agent-based models with economic agents endowed with "disadvantageous inequity aversion" who compete for scarce resources in an economic set-up of production and consumption. As the agents see their wealth evolve and form a broad distribution of emergent inequalities, the question is to investigate what controls the growth of such inequalities and the macro-welfare associated with the level of inequality. Measures of macro-success will be used and novel ones will be developed if necessary.

    References:
    [1] M. Hetzer and D. Sornette, The co-evolution of fairness preferences and costly punishment, PLoS ONE 8 (3), e54308, pp.1-18 (2013)
    (http://ssrn.com/abstract=1468517)

    [2] M. Hetzer and D. Sornette, An Evolutionary Model of Cooperation, Fairness and Altruistic Punishment in Public Good Games, PLoS ONE 8(11): e77041, pp. 1-13. doi:10.1371/journal.pone.0077041 
    (http://ssrn.com/abstract=1927919)
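    As a purely illustrative starting point for the agent-based modelling, here is a toy sketch: pairs of agents repeatedly split a unit resource, with the poorer agent of each pair claiming a larger share controlled by a Fehr-Schmidt-style disadvantageous-inequity-aversion parameter, and the resulting wealth distribution is summarised by its Gini coefficient. The splitting rule, the parameter alpha and the absence of production/consumption dynamics are assumptions for illustration; this is not the model of refs. [1,2].

```python
import numpy as np

rng = np.random.default_rng(1)

def gini(w):
    """Gini coefficient of a wealth vector (0 = perfect equality)."""
    w = np.sort(np.asarray(w, dtype=float))
    n = w.size
    cum = np.cumsum(w)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def simulate(n_agents=200, steps=500, alpha=1.5):
    """Toy ABM: each step, a random pair of agents splits a unit resource.
    With a disadvantageous-inequity-aversion parameter alpha, the poorer
    agent of the pair demands (and here receives) a larger share."""
    wealth = np.ones(n_agents)
    for _ in range(steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        gap = wealth[j] - wealth[i]                    # > 0 when agent i is poorer
        share_i = 0.5 + 0.5 * np.tanh(alpha * gap)     # poorer agent claims more
        wealth[i] += share_i
        wealth[j] += 1.0 - share_i
    return wealth

wealth = simulate()
print("Gini:", round(gini(wealth), 3))   # macro-inequality summary
```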

  • Project #39 Comparison with theory and insights from the empirical size distribution of banks in the USA: small versus large banks and the economic meaning of deviations from Zipf's law (a rank-size sketch is given after the references below)
    References:
    [1] A. Saichev, Y. Malevergne and D. Sornette, Theory of Zipf's Law and beyond, Lecture Notes in Economics and Mathematical Systems, Volume 632, Springer (November 2009), ISBN: 978-3-642-02945-5
    http://www.springer.com/economics/financial+economicsbook/978-3-642-02945-5
    (http://arxiv.org/abs/0808.1828)

    [2] Y. Malevergne, A. Saichev and D. Sornette, Zipf's law and maximum sustainable growth, Journal of Economic Dynamics and Control 37 (6), 1195-1212 (2013) (http://ssrn.com/abstract=1083962 and http://arxiv.org/abs/1012.0199)

    [3] John Goddard (Bangor University), Hong Liu (University of Glasgow, Glasgow Business School), Donal G. McKillop (Queen's University Management School) and John O.S. Wilson (University of St. Andrews), "On the Size Distribution of Financial Institutions".
    This study examines the firm size distribution of US financial institutions. A truncated lognormal distribution describes the size distribution, measured using assets data, of a large population of small, community-based commercial banks. The size distribution of a smaller but increasingly dominant cohort of large banks, which operate a high-volume, low-cost retail banking model, exhibits power-law behavior. There is a progressive increase in skewness over time, and Zipf's Law is rejected as a descriptor of the size distribution in the upper tail. By contrast, the asset size distribution of the population of credit unions conforms closely to the lognormal distribution.

    [4] T. Maillart, D. Sornette, S. Spaeth and G. von Krogh, Empirical Tests of Zipf's Law Mechanism In Open Source Linux Distribution, Physical Review Letters 101, 218701 (2008) (http://arxiv.org/abs/0807.0014)

    [5] Qunzhi Zhang and Didier Sornette, Empirical Test of the Origin of Zipf's Law in Growing Social Networks, Physica A 390, 4124-4130 (2011)

    [6] Ryohei Hisano and Didier Sornette, Predicted and verified deviations from Zipf's law in ecology of competing products, Phys. Rev. E 84, 026117 (2011) (http://arxiv.org/abs/1101.5888)
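    A minimal sketch of the kind of diagnostic this project would start from: a log-log rank-size regression on the largest entities, whose slope is close to -1 under Zipf's law, so that a flatter or steeper slope flags a deviation to be explained. The synthetic "assets" data and the choice of the top 100 entities are assumptions for illustration.

```python
import numpy as np

def rank_size_slope(sizes, top_k=100):
    """Log-log rank-size regression on the top_k largest entries.
    Under Zipf's law the slope is close to -1 (tail exponent mu ~ 1)."""
    s = np.sort(np.asarray(sizes, dtype=float))[::-1][:top_k]
    ranks = np.arange(1, s.size + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(s), 1)
    return slope

# Illustrative use on synthetic Pareto "bank assets" with tail exponent mu = 1
rng = np.random.default_rng(3)
assets = rng.pareto(1.0, size=10000) + 1.0
print(rank_size_slope(assets))   # should be close to -1 for a Zipf tail
```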

  • Project #40 Are crises good for entrepreneurship?
    Steve Jobs and Steve Wozniak started Apple in 1976. America was then at a low point, following the 1973-74 recession, the Arab oil embargo, the Watergate scandal, and the fall of Saigon. Other US companies that were started in bad times include General Electric, IBM, Hewlett-Packard, and Microsoft. Question: Do tough times beget a disproportionate number of great companies? If so, why?

    "The crisis is a catalyst for change in the technological environment. Things that we only gave half a thought to in the past are suddenly being addressed very quickly." Many innovations are geared to optimizing processes and reducing non-personnel costs. Fundamental issues are also being broached: "Without innovating", said one  manager "it won't be possible to prosper over the next few years". One characteristic shared by all the companies surveyed is that none reduced spending on research and development. On the contrary, some even increased it sharply because, as another executive emphasized, "The capital market is looking longer term – at least for now. The winners are making the most of the opportunities arising from modified investor perspectives." [John Mauldin's report, 1 Dec 2011]. 

    Study databases of firm creation rates and compare them with financial-market and economic conditions, in order to test the hypothesis that crisis and recession times are indeed disruptive and create opportunities to found the best and most enduring companies.

    In other words, is it true that an abnormal number of great companies are created in stressed times?

  • Project #41 Bubbles in social networks
    There has recently been a lot of enthusiasm for social networks. In the past year, companies like LinkedIn, Groupon, Zynga and Pandora went through their IPOs, reaching valuations of billions of USD. This is only the beginning, since the social-network giant Facebook goes through its IPO in mid-May 2012 for a valuation, estimated by many as of 15 May 2012, of the order of 100 billion USD.
    In a recent paper, Cauwels and Sornette (2012) developed a methodology to value Facebook and Groupon. They showed that both companies are vastly overvalued, perhaps by more than a factor of three. Forró, Cauwels and Sornette (2012) extended the valuation methodology to Zynga, a social-gaming company using Facebook as a platform. They arrived at the conclusion that Zynga, just like Facebook, was significantly overvalued.
    These papers open the larger question of whether there is a bubble in the whole social-network sector. The aim of this project is to adapt/extend the methodology developed by Cauwels and Sornette (2012) and Forró, Cauwels and Sornette (2012) to value other social-networking companies and to offer a valuation perspective on the social-networking sector in general (a toy valuation sketch is given after the references below).
    References:
    - Cauwels P. and Sornette D., Quis Pendit Ipsa Pretia: Facebook Valuation and Diagnostic of a Bubble Based on Nonlinear Demographic Dynamics, The Journal of Portfolio Management 38 (2), 56-66 (2012) (http://arxiv.org/pdf/1110.1319.pdf) (PDF, 675 KB)
    - Forró Z., Cauwels P. and Sornette D., When games meet reality: is Zynga overvalued?, Journal of Investment Strategies 1 (3), 119-145 (2012); first version of 26 December 2011: Valuation of Zynga (http://arxiv.org/abs/1112.6024); final version of 3 April 2012: http://arxiv.org/pdf/1204.0350.pdf (PDF, 660 KB)
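    To illustrate the flavour of such a valuation exercise (not the actual model of the papers above), here is a hedged sketch: fit a logistic growth curve to a company's user base, project it forward, and discount an assumed constant profit per user. The synthetic user numbers, the profit per user, the discount rate and the horizon are all illustrative assumptions; the sketch also assumes scipy is available.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic user-growth curve saturating at carrying capacity K."""
    return K / (1.0 + np.exp(-r * (t - t0)))

def value(users_t, users, profit_per_user, discount_rate, horizon_years=50):
    """Back-of-the-envelope valuation in the spirit of the demographic approach:
    fit logistic growth to the observed user base, project it forward, and
    discount an assumed constant profit per user. profit_per_user and
    discount_rate are illustrative inputs, not estimates from the papers."""
    (K, r, t0), _ = curve_fit(logistic, users_t, users,
                              p0=[2 * users.max(), 0.5, np.median(users_t)],
                              maxfev=10000)
    future = np.arange(users_t.max() + 1, users_t.max() + 1 + horizon_years)
    projected_users = logistic(future, K, r, t0)
    cash_flows = profit_per_user * projected_users
    discounts = (1.0 + discount_rate) ** np.arange(1, horizon_years + 1)
    return np.sum(cash_flows / discounts)

# Synthetic example: noisy logistic user counts (in millions) over 8 years
t = np.arange(2004, 2012, dtype=float)
true = logistic(t, K=1000.0, r=0.8, t0=2008.0)
u = true * (1 + 0.05 * np.random.default_rng(7).normal(size=t.size))
print(value(t, u, profit_per_user=4.0, discount_rate=0.05))  # toy valuation, millions USD
```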

  • Project #42 Trust and incentives in the dark web marketplaces (September 2016)
    Description
    As the dark web becomes increasingly prevalent, consumers have the opportunity to purchase any number of illicit substances from these largely anonymous marketplaces. How do they decide which products to purchase? How do sellers develop reputations as trustworthy? In this master's thesis, you will analyze a large dataset from the dark web Silk Road marketplace [1]. After extracting the relevant network of transactions, you will uncover the key variables that drive purchase decisions, and how they relate to trust. Specifically, the analysis will compare the predictions of preferential attachment versus fitness [2] under multiple evolving conditions [3] (a minimal sketch of this comparison is given at the end of this project description). The project is flexible and, depending on our collective interest, can support other avenues of research. While not necessary, the project also offers the opportunity to work part-time from Singapore at the Singapore-ETH Centre.
    References
    [1] Christin, N. (2013, May). Traveling the Silk Road: A measurement analysis of a large anonymous online marketplace. In Proceedings of the 22nd International Conference on World Wide Web (pp. 213-224). ACM.
    [2] Bianconi, G., & Barabasi, A. L. (2001). Competition and multiscaling in evolving networks. EPL (Europhysics Letters), 54(4), 436.
    [3] Saichev, A. I., Malevergne, Y., & Sornette, D. (2009). Theory of Zipf's law and beyond (Vol. 632). Springer Science & Business Media.
    Required skills
    - Strong data analysis skills (high level of proficiency in Python, Matlab or R)
    - Strong quantitative skills and an interest in complex networks
    Starting date
    As soon as possible 
    Supervision
    The project will be supervised by Prof. Didier Sornette (in Zurich), Dionysios Georgiadis (in Singapore), Dr. Ravi Kudesia (in Singapore) and Sandro Lera (in Singapore and Zurich).
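    A minimal sketch of the fitness-versus-preferential-attachment comparison mentioned above: a Bianconi-Barabasi-style growing network in which each new node attaches to existing nodes with probability proportional to degree times fitness; with use_fitness=False it reduces to plain preferential attachment. The network size, the number of edges per new node and the uniform fitness distribution are illustrative assumptions; the actual analysis would of course use the Silk Road transaction network rather than synthetic growth.

```python
import numpy as np

rng = np.random.default_rng(5)

def grow_network(n_nodes=2000, m=2, use_fitness=True):
    """Grow a network a la Bianconi-Barabasi: each new node attaches m edges
    to existing nodes with probability proportional to fitness_i * degree_i.
    With use_fitness=False this reduces to plain preferential attachment."""
    degree = np.zeros(n_nodes)
    fitness = rng.uniform(0, 1, n_nodes) if use_fitness else np.ones(n_nodes)
    degree[:m + 1] = m                      # small fully connected seed of m+1 nodes
    for new in range(m + 1, n_nodes):
        weights = fitness[:new] * degree[:new]
        probs = weights / weights.sum()
        targets = rng.choice(new, size=m, replace=False, p=probs)
        degree[targets] += 1
        degree[new] = m
    return degree, fitness

deg, fit = grow_network()
print("max degree:", deg.max(), "fitness of hub:", round(fit[np.argmax(deg)], 2))
```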

If interested in a project, please contact Prof. Dr. Didier Sornette
Chair of Entrepreneurial Risks, ETH Zurich
SEC F 7
Scheuchzerstrasse 7, 8092 Zürich

Phone: +41 44 632 89 17
E-Mail: 
