Econophysics of Markets and Business Networks (New Economic Windows)


As b gets smaller, the density becomes heavier-tailed and more sharply peaked. Fig.: power exponential fit of price returns, NordPool (top). All in all, one can conclude that the price return distributions, once the linear autocorrelation structure is removed, display heavy-tailed shapes.
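As a rough illustration of how the shape parameter controls peakedness and tail weight, the sketch below evaluates a power exponential (generalized normal) density with SciPy for a few shape values. The parameter values and variable names are assumptions made here for demonstration, not the fits reported in the chapter.

```python
# Sketch: power exponential (generalized normal) density for a few shape values.
# Smaller beta -> heavier tails and a sharper peak, as described in the text.
import numpy as np
from scipy.stats import gennorm

x = np.linspace(-5, 5, 1001)
for beta in (2.0, 1.0, 0.7):   # beta = 2 is Gaussian; smaller beta is heavier-tailed
    pdf = gennorm.pdf(x, beta)
    print(f"beta={beta}: pdf(0)={pdf.max():.3f}, P(|X|>3)~{2 * gennorm.sf(3, beta):.4f}")

# Fitting the shape parameter to a vector of (rescaled) log-returns could look like:
# beta_hat, loc_hat, scale_hat = gennorm.fit(log_returns)
```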

However, before drawing conclusions, remember that, as shown in the previous section, the variance of log-returns is not independent of price levels. Results are reported in Table 3. As can be seen, the rescaling procedure exerts only a minor impact on the density of log-returns. The heavy-tailed nature of electricity log-returns can be appreciated in Fig.

On days when demand is very high, and so is the market-clearing price, the inverse variance-price relationship breaks down. The APX market departs from the observed scaling pattern.

Ahalpara, Prasanta K. Panigrahi, and Jitendra C. Parikh

We further study the nature of the temporal variations in the returns.

At short time scales one observes predominantly stochastic behavior, as well as sharp transients originating from certain physical phenomena [4-6]. At intermediate time scales, the random variations are averaged out and structured variations present in the data can manifest themselves.

The wavelet transform, because of its multi-resolution analysis capability, is well suited for this purpose [7-10]. The Daubechies-4 (Db4) wavelet transform has been used here. This daily index is shown in Fig. One observes structured variations at higher levels, apart from random and transient phenomena at lower scales. The corresponding high-pass and low-pass normalized power is shown in Figs.

One clearly sees that at higher scales a substantial amount of variation is present, since the low-pass power decreases and the high-pass power increases at higher levels. The variations at higher scales carry more power, as is seen above. One sees that the power in the average part gradually decreases. The resulting data set is roughly four-fold bigger and is quite smooth, and therefore easy to model. Moreover, the piecewise polynomial form of the interpolated data matches well the similar structure used for the map equation, which is searched for by Genetic Programming.
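A minimal sketch of such a multi-resolution decomposition, using PyWavelets with the Db4 wavelet, is given below. The index series here is synthetic and the level count is an illustrative assumption; the actual daily index values would be substituted for `series`.

```python
# Sketch: Db4 multi-resolution decomposition of a daily index series with PyWavelets,
# and the fraction of power carried by the detail (high-pass) coefficients at each level.
import numpy as np
import pywt

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(size=2048))            # stand-in for the daily index

level = 6
coeffs = pywt.wavedec(series, "db4", level=level)     # [cA_level, cD_level, ..., cD_1]
total = sum(np.sum(c ** 2) for c in coeffs)

print("approximation (low-pass) power fraction:", np.sum(coeffs[0] ** 2) / total)
for lev, cD in zip(range(level, 0, -1), coeffs[1:]):
    print(f"detail level {lev} (high-pass) power fraction: {np.sum(cD ** 2) / total:.4f}")
```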

Under the umbrella of Evolutionary Algorithms [13-15], various approaches like Genetic Algorithms [16], Genetic Programming [17], etc., have been used to help solve a variety of optimization problems based on a complex search. In the present note we have incorporated Genetic Programming (GP), which employs a non-linear structure for the chromosomes representing the candidate solutions. In Genetic Programming we start with an ensemble of chromosomes called a population and then create its successive generations stochastically using a set of genetic operators, namely copy, crossover and mutation.

At the end of a reasonable number of generations, the top chromosome in the population is used as the solution of the optimization problem. The sum of squared errors (Eq. 2) is minimized within the GP framework. It may be noted that for R² close to 0, r can be negative. The 1-step predictions for both halves of the data are found to be very good.
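A full GP implementation is too long to reproduce here; the toy stand-in below shows only the fitness being minimized, the sum of squared one-step errors of a candidate map x_{t+1} = f(x_t), and uses random mutation of polynomial coefficients instead of tree-based crossover and mutation. All names and values are illustrative assumptions, not the authors' code.

```python
# Toy stand-in for the GP search: minimize the sum of squared one-step errors of a
# candidate map x_{t+1} = f(x_t), restricted here to a cubic polynomial and optimized
# by random-mutation hill climbing rather than genuine tree-based GP.
import numpy as np

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=500))        # stand-in for the smoothed, interpolated series

def sse(coeffs, x):
    pred = np.polyval(coeffs, x[:-1])      # one-step prediction from x_t
    return np.sum((x[1:] - pred) ** 2)     # sum of squared errors (the fitness)

best = np.array([0.0, 0.0, 1.0, 0.0])      # start from the identity map x_{t+1} = x_t
best_fit = sse(best, x)
for _ in range(5000):                      # crude "generations" of a single mutating candidate
    cand = best + rng.normal(scale=1e-3, size=4)
    fit = sse(cand, x)
    if fit < best_fit:
        best, best_fit = cand, fit

r2 = 1 - best_fit / np.sum((x[1:] - x[1:].mean()) ** 2)
print("coefficients:", best, "R^2:", round(r2, 4))
```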

Interestingly, the map equation is primarily linear with a small non-linear component. The map equation is then used to make out-of-sample predictions. The 1-step out-of-sample predictions using the GP map equation are shown in Fig.

Fig.: NSE data smoothed using the Db4 2nd-level wavelet transform. One observes transients and cyclic behavior at various levels. The linear trend in the cyclic variations of the returns is clearly visible. In this light, it may be interesting to study the predictability of the higher-level cyclic behavior.

The same has been attempted through the GP framework with substantial success [19]. The smooth part of the data, or the trend, has been modelled and reported in [20], and has been described in detail in [21].

Financial time-series analysis is of great interest to practitioners as well as to theoreticians for making inferences and predictions. The main theoretical tool to describe the evolution of such systems is the theory of stochastic processes, which can be formulated in various ways. Some systems can present unpredictable chaotic behavior due to dynamically generated internal noise.

Either truly stochastic or chaotic in nature, noisy processes represent the rule rather than the exception, not only in condensed matter physics. It would not be possible to predict future price movements using past price movements or trends. Some statistical features of daily log-returns are illustrated in Fig.

Fig.: price in USD (top), log-price (center) and log-return (bottom) plotted versus time for General Electric. If the price changes in each sub-interval are independent and of finite variance, the central limit theorem suggests that the aggregate change follows the Gaussian (Normal) distribution. This result suggested that short-term price changes were not well-behaved, since most statistical properties of the empirical data deviate from it. The results of their empirical studies on asset price series show that the apparently random variations of asset prices share some statistical properties which are interesting, non-trivial, and common for various assets, markets, and time periods.

Hence, distinctive characteristics of the individual assets are not taken into account. Below we consider a few of them from [6]. There have been various suggestions for the form of the distribution. As one increases the time scale over which the returns are calculated, their distribution approaches the Normal form. The volatility measure of absolute returns shows a positive auto-correlation over a long period of time and decays roughly as a power law with an exponent between 0 and 1. Therefore high-volatility events tend to cluster in time: large changes tend to be followed by large changes, and analogously for small changes.
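The two stylized facts just quoted can be checked with a few lines of code. The sketch below does this on a synthetic return series with built-in volatility clustering (an assumption made purely so the effect is visible); the actual analysis would use the empirical returns instead.

```python
# Sketch: raw returns are nearly uncorrelated, while |returns| (a volatility proxy)
# show a positive, slowly decaying autocorrelation, i.e. volatility clustering.
import numpy as np

def autocorr(x, lag):
    x = (x - x.mean()) / x.std()
    return np.mean(x[:-lag] * x[lag:])

rng = np.random.default_rng(2)
n = 20000
log_vol = np.zeros(n)
for t in range(1, n):                               # slowly varying log-volatility (AR(1))
    log_vol[t] = 0.98 * log_vol[t - 1] + 0.2 * rng.normal()
r = 0.01 * np.exp(log_vol) * rng.normal(size=n)     # synthetic returns with clustering

for lag in (1, 5, 20, 100):
    print(f"lag {lag:>3}: acf(r) = {autocorr(r, lag):+.3f}, "
          f"acf(|r|) = {autocorr(np.abs(r), lag):+.3f}")
```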

The three widely accepted forms of the EMH are the weak, semi-strong, and strong forms. Several later studies also looked at the reaction of the stock market to the announcement of various events such as takeovers, stock splits, etc. In general, results from event studies typically showed that prices seemed to adjust to new information within a day of the announcement of the particular event, an inference that is consistent with the EMH. The accumulating evidence, however, suggested that stock prices could be predicted with a fair degree of reliability.

In fact, many such anomalies that have been discovered via back-testing have subsequently disappeared or proved impossible to exploit because of high transaction costs. They assume that future price changes depend only on past price changes. Their main characteristic is that the returns are uncorrelated. This question has been studied especially since it may lead to deeper insights about the underlying processes that generate the time-series [12]. Next we discuss two measures to quantify the long-time correlations and study the strength of trends. In rescaled-range analysis, one studies the rate of change of the rescaled range with the change of the length of time over which measurements are made.

Results from the power spectrum analysis and the DFA analysis were found to be consistent. To provide an illustrative numerical comparison, the Hurst and DFA exponents can be computed for the same data. Since stock-market data are essentially multivariate time-series data, it is worth constructing a correlation matrix to study its spectrum and to contrast it with random multivariate data from a coupled map lattice.
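A compact sketch of detrended fluctuation analysis is given below. The window sizes are illustrative assumptions; the function is a generic first-order DFA, not the authors' exact implementation.

```python
# Sketch of DFA: integrate the demeaned series, detrend it in windows of size s with a
# linear fit, and read the exponent from the slope of log F(s) versus log s.
# An exponent near 0.5 indicates an uncorrelated (white-noise-like) series.
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64, 128, 256)):
    y = np.cumsum(x - np.mean(x))                   # integrated profile
    fluct = []
    for s in scales:
        n_win = len(y) // s
        rms = []
        for w in range(n_win):
            seg = y[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrending
            rms.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(rms)))
    slope, _ = np.polyfit(np.log(scales), np.log(fluct), 1)
    return slope

rng = np.random.default_rng(3)
print("white noise DFA exponent:", round(dfa_exponent(rng.normal(size=10000)), 2))  # ~0.5
```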

Empirical spectra of correlation matrices drawn from time-series data are known to follow mostly random matrix theory (RMT) [19]. A sequence of such values for a given period of time forms the return vector r_i. In addition, a large class of processes is translationally invariant, and the correlation matrix will possess the corresponding symmetry. We use this property for our correlation models in the context of coupled map lattices. We use the maximum likelihood estimates.

Thus, by applying an approach based on RMT, we try to identify non-random components of the correlation matrix spectra as deviations from RMT predictions [19]. We now consider the eigenvalue density, studied in applications of RMT methods to time-series correlations. If T is not very large compared to N , then generally the determination of the covariances is noisy, and therefore the empirical correlation matrix is to a large extent random. Deviations from the random matrix case might then suggest the presence of true information.
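A minimal sketch of this comparison is shown below: the eigenvalues of an empirical correlation matrix are checked against the Marchenko-Pastur bulk predicted by RMT for purely random data. The dimensions N and T and the random input are assumptions for illustration.

```python
# Sketch: eigenvalues of an empirical correlation matrix of N series of length T,
# compared with the Marchenko-Pastur bounds (1 +/- sqrt(N/T))^2 that RMT predicts
# for uncorrelated data of unit variance. Eigenvalues outside this bulk may carry
# true information.
import numpy as np

rng = np.random.default_rng(4)
N, T = 100, 500
returns = rng.normal(size=(T, N))                   # stand-in for standardised return vectors
C = np.corrcoef(returns, rowvar=False)              # N x N correlation matrix
eig = np.linalg.eigvalsh(C)

q = N / T
lam_min, lam_max = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
outside = eig[(eig < lam_min) | (eig > lam_max)]
print(f"RMT bulk edges: [{lam_min:.2f}, {lam_max:.2f}]")
print(f"{len(outside)} eigenvalues lie outside the bulk")
```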

The main result of the study was a remarkable agreement between theoretical predictions, based on the assumption that the correlation matrix is random, and empirical data concerning the density of eigenvalues. This is shown in Fig.: the eigenvalue spectrum of the correlation matrices. Using two large databases, they calculated cross-correlation matrices of returns. The approximate entropy, introduced by Pincus [24], is formally based on the evaluation of joint probabilities, in a way similar to the entropy of Eckmann and Ruelle.

Santhanam has shown a drastic increase in the two-week period preceding the stock market crash.

We study the returns of stock prices and show that, in the context of data from the Bombay stock exchange, there are groups of stocks that remain moderately correlated for up to 3 days.

We use the delay correlations to identify these groups of stocks.


In contrast to the results of same-time correlation matrix analysis, the groups in this case do not appear to come from any industry segments. Much of the work is focussed on studying the time series from the stock markets and modelling them based primarily on the ideas of statistical physics. For instance, it is now broadly agreed that the distributions of returns from the stock markets are scale invariant [1].

In general, it is assumed that the time series of returns from the stocks are more or less like white noise. This implies that the successive return values are nearly uncorrelated. For instance, the auto-correlation of the returns would throw up a delta function at zero delay. In this article, we ask whether the time series of returns can remain correlated with delays. In fact, it is known from several studies [2] that there are certain stocks which tend to display similar evolution.

Often, they are stocks from similar industry segments, such as the automobile or technology companies. In this article, we study the correlation between the returns of stock prices as a function of delays. We start from the observation that certain stocks remain correlated for several years together, as illustrated in Fig. For any two standardised quantities, i.e., with zero mean and unit variance, the correlation is simply the average of their product. However, notice that even though these stocks are strongly correlated, we will not be able to use this to any advantage, since the returns of these stocks are not as strongly correlated.

We calculate the cross-correlation between them and the result is displayed in Fig. This implies that in spite of delays of the order of 2-3 days, there exists some residual correlation among certain stocks. If the time series of returns were purely white noise, even a delay of one day would destroy all the correlations present in the data. The DFA is a technique to study the long-range correlations present in non-stationary time series.
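The delayed cross-correlation itself is a one-line quantity; the sketch below evaluates it for a pair of synthetic standardised return series with a built-in lead-lag relation. The construction of the two series is an assumption made only so that the effect shows up; in practice the empirical returns of the two stocks would be used.

```python
# Sketch: delayed cross-correlation C(tau) = <r_A(t) r_B(t + tau)> between two
# standardised return series. A nonzero value at tau = 1, 2, 3 days is the kind of
# residual delayed correlation discussed in the text.
import numpy as np

def delayed_corr(a, b, tau):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    if tau == 0:
        return np.mean(a * b)
    return np.mean(a[:-tau] * b[tau:])

rng = np.random.default_rng(5)
base = rng.normal(size=2010)
r_A = base[5:2005] + rng.normal(size=2000)                              # stock A reacts first
r_B = 0.5 * base[4:2004] + 0.3 * base[2:2002] + rng.normal(size=2000)   # B echoes A after 1 and 3 days

for tau in range(0, 5):
    print(f"tau = {tau} days: C = {delayed_corr(r_A, r_B, tau):+.3f}")
```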

For white noise, DFA gives an exponent of 0.5. This preliminary analysis reveals that there could be groups of stocks that remain correlated with delays. Fig.: the daily closing stock prices of Infosys and Wipro; note that they are strongly correlated. Fig.: the daily returns from the stocks of Infosys and Wipro, which, as compared to the prices, are much less strongly correlated.

This provides the impetus to study the delay correlations among stocks. This can be elegantly done in the framework of the delay correlation matrix. The study of delay correlation matrices and their random counterparts has become important for several applications where lead-lag relationships matter. For instance, the U.S. stock market was reported to show lead-lag relations between returns of large and small stocks [5].

It has been shown, based on the analysis of the zero-delay correlation matrix, that group structure is not strong in Indian stock markets [4].

In this work, we will explore the group structure in the context of delay correlations and show that the groups obtained from the analysis of delay correlation matrices do not belong to the same industry segments. Fig.: the eigenvalue density of the delayed correlation matrix from the daily closing data of the Bombay stock exchange; the dashed curve is the theoretical curve. Note that the three largest eigenvalues deviate strongly from the eigenvalue density for a random delay correlation matrix. The case of the delay correlation matrix without symmetrisation has been considered by Biely and Thurner [7], who have obtained the eigenvalue density in the general case of a non-symmetric correlation matrix.

Hence, in the Indian stock market there are certain groups of stocks whose returns remain moderately correlated for about 2—3 days. In order to identify the groups of stocks that contribute to the deviating eigenvalues, we study the eigenvectors corresponding to these eigenvalues.

Discussion: The time series of returns for certain groups of stocks show non-negligible correlations for short delays. In the case of daily closing data obtained from the Bombay stock exchange, the moderate correlations among the returns extend for about 2-3 days. The results here imply that modelling of returns should incorporate mechanisms for groups of stocks that display such mild correlations in empirical data. In this context, it would be interesting to do a thorough analysis of the delay correlation matrix with data from developed and emerging markets.

Sinha S, Pan RK in this volume.


Finally we consider a portfolio composed of an asset and an option on that asset. These results were not known before. However, evidence has been found [2] that the returns exhibit power-law fat tails at the high end of the distribution. The cumulative probability distribution of the log-price diminishes as a mixture of power laws; thus the higher-order moments of the distribution may not exist and the characteristic function of the distribution may not be analytic.

Due to the constraints on the size of this paper, we only include new results, leaving proofs to further publications. In the following we recall certain known properties of operator stable distributions, and we derive Fourier transforms of real powers of projections of operator stable vectors onto a non-random vector. The random vector Lt is strictly operator stable, meaning that it is an operator-normalized limit of a sum of independent, identically distributed (iid) random vectors Xi.

The class of distributions of the former vectors related to a given matrix E is termed an attraction domain of an operator stable law. Members of such a class are usually unknown. In the following we assume that the function is even. In general this is not known. We will obtain closed-form results for these moments in Section 2. Here we only recall that in the non-Gaussian case, due to (5), the stated relation holds. To the best of our knowledge, this has not yet been achieved.

The generic properties of operator stable probability distributions and their marginals are described in [7]. Here we recall some known facts and analyse two particular cases of the stable index. Using the scaling relation (4) we obtain the stated result. We stress that, contrary to [7], we aim at computing the characteristic functions and the fractional moments in closed form rather than only showing their existence.

From (11) we obtain the stated relation. From the properties of the Gamma function we easily obtain the fractional moment of the scalar product. The Jurek coordinates and the corresponding spectral decomposition are then written down, from which the fractional moment follows. Here we consider European-style options that can be exercised only at maturity. The value of the portfolio is then written accordingly. The portfolio is a stochastic process that is required to grow exponentially with time in terms of its expectation value. Thus we require that the deviations have no drift. We waive that unrealistic assumption and instead require the portfolio to increase exponentially with time.

From equation (1) we obtain the stated expression. We note that (33) is merely a transformation of equation (1) and not a solution to that equation. We will therefore construct a zero-expectation-value stochastic process (31) as a linear combination (30) of two stochastic processes St and C(St, t) that both have non-zero expectation values. For this purpose we will analyze the probability distribution of the deviation variable Dt and work out conditions for the option price such that the conditional expectation value of the deviation, given St, is equal to zero. In that we have assumed that the price of the option is a perfectly smooth function of the price of the stock.

This may limit the class of solutions. We will seek these solutions in future work. We could compute its expectation value directly using (36) and re-sum the series. However, we will instead calculate the characteristic function of the process Dt conditioned on the value of the process St at time t. We will extend the model along these lines in future investigations. The expectation value of the portfolio deviation conditioned on the value of the price of the stock St then follows. Since the Lévy distribution has been truncated as in (35), and due to (5), the result in (49) is real.

Indeed the log-characteristic function can be expanded in a Taylor series in even powers of the argument only, and thus its value at the negative imaginary unit is real. If we did not truncate, we would have obtained an unrealistic complex result, as seen from the preceding expression. We do not investigate here the mathematical subtleties concerned with the existence of the stochastic integral. Indeed, in the Cox-Ross-Rubinstein binomial tree model in discrete time, one considers a portfolio composed of a stock and a bond, and one derives the number of stocks by requiring contingent claim replication, meaning an equality of the portfolio and the claim with probability one (see [28] for example).

The latter result is essentially the same as that obtained above. Inserting (59) into the second equality in (52), we obtain the stated result. This is what we do now, and we assume hereafter the whole positive real axis in the integration. We end this section by stating the price of the portfolio. Therefore the solution (72) is only an approximation.

The factors in (73) are complex, which is of course unrealistic. The reason for that is the following. Thus we have proven that the risk-neutral option pricing method holds in the generic setting of operator stable processes. This method is described in the Appendix. We have developed a technique to ensure that the expectation value of the portfolio grows exponentially with time. In doing this we have not, unlike other authors, made any assumptions about the analytic properties of the log-characteristic function of the stock price process. The Value at Risk will be expressed as an integral equation involving the conditional characteristic function of the portfolio deviation. The resulting integrals will be carried out by means of the Cauchy complex integration theorem.

The results of these calculations will be reported in a future publication. Note that in this case equation (72) can be written in the stated form. We note that the factors in (76) have the following integral representation, which lends itself to numerical computations in a straightforward manner.

References: Bachelier L, Theory of Speculation, Ann.; Gopikrishnan P et al., Inverse cubic law for the distribution of stock price variations, Eur. Phys. J.; Mandelbrot B, The variation of certain speculative prices, J. Business; Heavy Tails in Theory and Practice; Martingale Methods in Financial Modelling, Springer.


Recently, many econophysicists have trended towards the latter by using multi-agent models of trader populations. One particularly popular example is the so-called Minority Game [1], a conceptually simple multi-player game which can show non-trivial behavior reminiscent of real markets. Subsequent work has shown that, at least in principle, it is possible to train such multi-agent games on real market data in order to make useful predictions [2-5].

This paper addresses the question of how to infer the multi-trader heterogeneity in a market. Our focus is on the uncertainty in our parameter estimates. As such, this paper represents an extension of our preliminary study in [6]. In addition, the use of such a measure removes the necessity to scale the time-series, thereby reducing possible further errors. We propose a mechanism for making this problem more tractable, by employing many runs with small subsets chosen from the full space of agents.

As a result of choosing subsets of the full agent space, an individual run can exhibit a bias in its predictions. In order to estimate and remove this bias, we propose a technique that has been widely used with Kalman Filtering in other application domains. Agents compete with each other for a limited resource. At the end of each time-step, one of the actions is denoted as the winning action. This winning action then becomes part of the information set for the future.

As an illustration of the tracking scheme, we will use the Minority Game; however, we encourage the reader to choose their own preferred multi-agent game. The game need not be a binary-decision game, but for the purposes of demonstration we will assume that it is. We select a time horizon window of length T over which we score strategies for each agent.

The agent chooses its highest-scoring strategy as its winning strategy, and then plays it. Assume we have N such agents. At each time-step they each play their winning strategy, resulting in a global outcome for the game at that time-step. Their aggregate actions result in an outcome which we expect to be indicative of the next price movement in the real price-series. If one knew the population of strategies in use, then one could predict the future price-series with certainty, apart from a few occasions where ties in strategies might be broken randomly.
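A minimal sketch of one such game step is given below. It uses cumulative strategy scores rather than the finite scoring window of length T described above, and the parameter values (N, memory length m, strategies per agent S) are illustrative assumptions, not the ones used in the chapter.

```python
# Minimal Minority Game step: each agent holds a few fixed lookup-table strategies over
# the last m winning actions, plays its currently best strategy, and the minority action
# wins; winning strategies are rewarded.
import numpy as np

rng = np.random.default_rng(6)
N, m, S, steps = 101, 3, 2, 200
strategies = rng.integers(0, 2, size=(N, S, 2 ** m))   # an action for each of 2^m histories
scores = np.zeros((N, S))
history = list(rng.integers(0, 2, size=m))              # last m winning actions
outcomes = []

for _ in range(steps):
    h = int("".join(map(str, history)), 2)              # encode history as an index
    best = scores.argmax(axis=1)                         # each agent's highest-scoring strategy
    actions = strategies[np.arange(N), best, h]          # every agent plays its best strategy
    winner = int(actions.sum() < N / 2)                  # the minority action wins
    scores += (strategies[:, :, h] == winner)            # reward strategies that predicted it
    history = history[1:] + [winner]
    outcomes.append(2 * actions.sum() - N)               # aggregate action ~ price-move proxy

print("std of aggregate action:", np.std(outcomes))
```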

The next step is to estimate the heterogeneity of the agent population itself. We choose to use a recursive optimization scheme similar to a Kalman Filter; however, we will also enforce inequality constraints on the estimates so that we cannot have a negative number of agents of a given type playing the game. Suppose xk is the vector at time-step k representing the heterogeneity among the N types of agents in terms of their strategies.

On the other hand, staying constrained to a probability space removes one degree of freedom from our system. We allow data to come in real time, and then make an estimate for that time given all the information from the past. The Kalman Filter holds a minimal amount of information in its memory at each time, yielding a relatively cheap computational cost for solving the optimization problem. In addition, the Kalman Filter can make a forecast n steps ahead and provides a covariance structure concerning this forecast. The Kalman Filter is a predictor-corrector system; that is to say, it makes a prediction and, upon observation of real data, perturbs the prediction slightly as a correction, and so forth.

It would be a tough modeling problem to choose another matrix. The variable zk represents the measurement (also called the observation). Hk is a matrix that relates the state space and the measurement space by transforming a vector in the state space to the appropriate vector in the measurement space. This is actually just the expectation of the outer product of the state prediction error with itself.

This essentially tells us how much we prefer our new observed measurement over our state prediction. If we look carefully at the following equation, we are essentially taking a weighted sum of our state prediction with the Kalman Gain multiplied by the measurement residual. This is just the expectation of the outer product of the state estimate error with itself. The covariance matrices throughout the Kalman Filter give us a way to measure the uncertainty of our state prediction, state estimate, and measurement residual.
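The standard, unconstrained predictor-corrector step that this discussion builds on is sketched below. The matrices F, H, Q, R and the toy dimensions are illustrative assumptions; the constrained variant discussed next adds the projection onto the constraint set on top of this recursion.

```python
# Sketch of one standard Kalman Filter predict/correct step.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    # --- predict ---
    x_pred = F @ x                                   # state prediction
    P_pred = F @ P @ F.T + Q                         # covariance of the prediction error
    # --- correct ---
    y = z - H @ x_pred                               # measurement residual (innovation)
    S = H @ P_pred @ H.T + R                         # residual covariance
    K = P_pred @ H.T @ np.linalg.inv(S)              # Kalman gain: weight given to the residual
    x_new = x_pred + K @ y                           # weighted sum of prediction and residual
    P_new = (np.eye(len(x)) - K @ H) @ P_pred        # updated estimate covariance
    return x_new, P_new

# Toy usage: 3 agent types, one scalar measurement per time-step.
F, H = np.eye(3), np.ones((1, 3))
Q, R = 0.01 * np.eye(3), np.array([[0.1]])
x, P = np.full(3, 1 / 3), np.eye(3)
x, P = kalman_step(x, P, z=np.array([0.4]), F=F, H=H, Q=Q, R=R)
print(x)
```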

Here we simply provide the equations of the Kalman Filter without derivation. For a detailed description of the Kalman Filter, see Ref. We now introduce a generalization for nonlinear equality constraints followed by an extension to inequality constraints. We present the nonlinear case for further completeness here. We now rephrase the problem we would like to solve, using the superscript c to denote constrained.

The diagonal elements of Rkc represent the variance of each element of vkc. As we did for the Kalman Filter, we will state the equations here. The interested reader is referred to Refs. Notice that we allowed the equality constraints to be nonlinear. A stronger form of these equations can be found in Refs. Further, if our equality constraints are in fact independent of j, we only need to calculate the constrained Hk,j once for each k.

This also implies that the pseudo-inverse in Eq. However, this method allows us to incorporate equality constraints. An active constraint is simply a constraint that we treat as an equality constraint. We will ignore any inactive constraint when solving our optimization problem. After solving the problem, we then check if our solution lies in the space given by the inequality constraints. For the next iteration, this set of constraints will be the new active constraints.

We formulate the problem in the same way as before, keeping Eqs. However, we replace Eq. Although we keep Eqs. Now we solve the equality-constrained problem, consisting of the equality constraints and the active inequality constraints (which we treat as equality constraints), using Eqs.

These constraints will now become active for the next iteration. Fig.: summary of the recursive method for predicting the heterogeneity of the multi-agent population; shown is a situation with 5 types of agents, where each type has more than one strategy. Taking the dot product of the frequencies over the agents and their decisions, we arrive at our prediction for the measurement. We then allow the recursion into the optimization technique. We also assume that initial conditions are provided.

We do not perturb the error covariance matrix from Eq. Under the assumption that our model is a well-matched model for the data, enforcing inequality constraints as dictated by the model should only make our estimate better. Having a slightly larger covariance matrix is better than having an overly optimistic one based on a bad choice for the perturbation [10].

In the future, we will investigate how to perturb this covariance matrix correctly. In our application, our constraints are only to keep each element of our measure positive. Hence we have no equality constraints — only inequality constraints. However, we needed to provide the framework to work with equality constraints before we could make the extension to inequality constraints.

However, in our application we are not provided with this information a priori, so we would like to estimate it. We will present one possible method here, which matches the process noise and measurement noise to the past measurement residual process [11]. We estimate Rk by taking a window of size Wk (picked in advance for statistical smoothing) and time-averaging the measurement noise covariance based on the measurement residual process and the past states.

It is rare that one would expect a cross-correlation in the process noise. As N grows, not only does our state space grow linearly, but our covariance space grows quadratically. We quickly reach areas where we may no longer be in a computationally feasible region. If we were interested in simultaneously allowing all possible pairs of strategies, our vectors and matrices for these computations would have a dimension that would not be of reasonable complexity, especially in situations where real-time computations are needed.

In such situations, we propose selecting a subset of the full set of strategies uniformly at random, and choosing these as the only set that could be in play for the time-series. We can then do this a number of times and average over the predictions and their covariances. We would hope that this would cause a smoothing of the predictions and remove outlier points.

In addition, we might notice certain periods that are generally more predictable by doing this, which we call pockets of predictability. Similarly, we can calculate our best estimate of the predicted covariance for the measurement residual. Also, note that we chose equal weights when calculating the averages; we could alternatively have chosen non-equal weights had we developed a system for deciding on the weights.

In fact, it could be the case that the run provides much information — it is just that the predictions always tend to be biased in one direction or the other. So what we might like to do is remove bias from the system. We can model this bias as lying in the state space, the measurement space, or some combination of elements of either or both.

In the bottom left we have the zero matrix, so the bias term does not depend on the state xk, and in the bottom right we have the identity matrix, indicating that the bias is updated by itself exactly at each time. We also generally assume no noise in the bias term and keep its noise covariance at 0 as well. Of course, this can be changed easily enough if the reader would like to model the bias with some noise. We do not claim that this is a good model for the time-series, but it does contain some of the characteristics we might expect to see in this time-series. Since the size of this computation would not be tractable, we take a random subset of 5 of these types and use these 5 as the only possible types of agents to play the game.
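A small sketch of the block structure described at the start of this paragraph is given below: the state is augmented with a constant measurement-bias term, with a zero block in the bottom left and an identity block in the bottom right of the transition matrix. The sizes and the placement of the bias in the measurement equation are assumptions for illustration.

```python
# Sketch: augmenting the state with a constant measurement-bias term so the same Kalman
# recursion estimates the bias alongside the agent frequencies.
import numpy as np

n_state, n_meas = 5, 1
F = np.eye(n_state)                                  # original state transition
H = np.ones((n_meas, n_state))                       # original measurement matrix

F_aug = np.block([[F, np.zeros((n_state, n_meas))],
                  [np.zeros((n_meas, n_state)), np.eye(n_meas)]])   # bias evolves as itself
H_aug = np.hstack([H, np.eye(n_meas)])               # measurement sees state plus bias

Q_aug = np.zeros((n_state + n_meas, n_state + n_meas))
Q_aug[:n_state, :n_state] = 0.01 * np.eye(n_state)   # no process noise on the bias term

print(F_aug.shape, H_aug.shape, Q_aug.shape)
```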

We perform such runs, each time choosing 5 types at random. In addition, we allow for a single bias-removal term. We could have many more terms for the bias, but we only use 1 in order to limit the growth of the state space. For the analysis of how well our forecasts perform, we calculate the residual log returns and plot these. We perform such runs, which we average over using the method described above. Despite the one-parameter bias removal, we still see a general bias in the data, without which the residuals would look much cleaner; perhaps further bias removal would readily achieve this.

Perhaps a more complex bias model would remove this. When coupled with an underlying market model that better suits the time-series under analysis, these techniques could provide useful insight into the composition of multi-trader populations across a wide range of markets. We have also provided a framework for dealing with markets containing a very large number of active agents.


As a result, rapid development and spread of electronic trading systems occurred all over the world. Econophysics [1-6] emerged during the same period as the spread of ICT. It seems to be not a chance but a necessity that these occurrences were coincident. Since ICT reduces the uncertainty of human activities that is due to lack of information, human activities which once were not observed become detectable.

Physics needs high-accuracy data in order to establish theoretical concepts. The supply of data about human activities and the demand for a theory of human behavior thus met. This analysis provides insights about both the perception and the action of the market participants in the foreign exchange market. Its turnover was estimated at more than one trillion US dollars.

The foreign exchange market is open for 24 hours on weekdays and is closed on Saturdays and Sundays. The market participants in the foreign exchange market trade almost entirely through the electronic broking systems (the EBS and the Reuters systems). In this article, data provided by CQG Inc. are analysed. The source of these data is the EBS, whose terminal computers are used by more than 2,000 traders in dealing rooms. The market participants in the foreign exchange market adopt the two-way quotation, where both buyers and sellers quote both buying and selling rates at the same time.

Therefore one cannot estimate excess demand from the quotation records. As shown in Table 1, the data record almost the same numbers of ask quotes and bid quotes. The dates, rates, and indicators marking ask quotes (A) and bid quotes (B) are recorded. The more common such information becomes, the more quickly trading can be conducted. It is analogous to a control system used when airplanes are operated. In this article, quantitative methods based on the tick data are introduced and an intuitive interpretation of the results is given.

Computing the behavioral frequency provides a reduction of massive information. Market prices are determined by the minds of the market participants, but minds are unobservable. Human behavior is observable and may be related to the minds. We will present the appropriateness and availability of measuring behavioral frequencies through model analysis and empirical analysis. If two tick frequencies vary similarly, then they are expected to have a similar generating mechanism, that is, similar behavior of the market participants.

Utilizing the similarity between instantaneous phases is useful for characterizing the similarity of two signals from the viewpoint of synchronization. If the power spectra are similar, then their generating processes seem to be similar. This measure is an application of the relative entropy, which is widely used in information theory, information geometry, phonetic analysis, and so on, to a normalized power spectrum.

Agent-based Model: In this section an agent-based model composed of double-threshold agents is introduced [10]. Pseudo tick frequencies are produced by computer simulation, and the relation between the similarity of two tick frequencies and the agent parameters is investigated. The i-th market participant interprets information xi(t).

One of the simplest functional forms is a linear function of xi(t) plus a noise. Here the uncertainty of the interpretation is assumed to be described as an additive Gaussian noise. It is assumed that the feeling has a one-to-one relation to the interpretation; namely, the feeling can be identified with the interpretation. The market participants judge whether they can accept their feeling or not. According to the Granovetter model [11], if the inner state of an agent exceeds a threshold value, the agent decides nonlinearly to take an action. Applying this mechanism to the market participants' behavior, the investment attitude can be modeled.

In order to distinguish three actions, at least two thresholds are needed. The information xi(t) perceived by the i-th market participant at time t can be divided into endogenous information (a weighted moving average of returns) and exogenous information (news). The larger the absolute values of the thresholds are, the less active the market participants become. Therefore cij(x, y) is a monotonically decreasing function of x and y. Since the exogenous information si(t) is unpredictable, it seems reasonable to describe it as a random variable which obeys the normal distribution. Of course this assumption can be weakened; for example, colored noises can be applied to aij(t).
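A stripped-down sketch of this double-threshold rule is given below: each agent forms a noisy interpretation of a moving average of returns plus Gaussian news, and buys, sells, or stays out depending on its two thresholds. All parameter values and the simple excess-demand feedback are illustrative assumptions, not the calibrated model of the chapter.

```python
# Sketch of the double-threshold agent decision rule: buy above theta_buy, sell below
# theta_sell, otherwise wait; the number of buy/sell decisions is the pseudo tick frequency.
import numpy as np

rng = np.random.default_rng(7)
n_agents, n_steps, window = 50, 500, 10
theta_buy = rng.uniform(0.5, 2.0, n_agents)          # upper (buy) thresholds
theta_sell = -rng.uniform(0.5, 2.0, n_agents)        # lower (sell) thresholds
returns = list(rng.normal(scale=0.5, size=window))
ticks = []

for t in range(n_steps):
    endo = np.mean(returns[-window:])                 # endogenous info: moving average of returns
    news = rng.normal(size=n_agents)                  # exogenous info: Gaussian "news"
    feeling = endo + news + 0.1 * rng.normal(size=n_agents)   # noisy interpretation
    buy = feeling > theta_buy
    sell = feeling < theta_sell
    ticks.append(buy.sum() + sell.sum())              # quotes submitted this step
    excess = (buy.sum() - sell.sum()) / n_agents
    returns.append(excess)                            # excess demand feeds back into returns

print("mean tick frequency per step:", np.mean(ticks))
```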

Since the four kinds of similarity measures provide almost the same results, the similarity of the behavioral parameters of the market participants is related to the similarity of the behavioral frequencies regardless of the similarity measure used. It is concluded that the behavioral frequencies tend to be similar if the behavioral parameters are similar. Conversely, when the behavioral frequencies are similar, one may infer that the behavioral parameters of the market participants are similar. The similarities among currency pairs are calculated by using four similarity measures from three aspects: (1) macroscopic time scale, (2) microscopic time scale, and (3) networks of currency pairs.

The similarity structure depends on the time zone. As a result, it was found that the similarity structure of the tick frequencies varies dynamically with the rotation of the Earth and that the four kinds of methods show the same tendency. Hence the similarity of the tick frequencies is associated with the behavioral parameters of the market participants.

In order to avoid these risks, constructive utilization of the tick data should be made sophisticated in order to avert the emergence of market crashes. If all the market participants can understand their situation, then mass panic due to lack of information may be averted.

References: Takayasu H (Ed.); Sato AH, Frequency analysis of tick quotes on foreign currency markets and the double-threshold agent model, Physica A; Sato AH, Characteristic time scales of tick quotes on foreign currency markets; Sato AH, Frequency analysis of tick quotes on the foreign exchange market and agent-based modeling: a spectral distance approach. The information is available at the EBS homepage.

Weighted Networks at the Polish Market

Modeling the behavior of economic agents is a challenging issue that has also been studied from a network point of view. In several cases scaling laws for network characteristics have been observed. In the present study we consider relations between companies in Poland, taking into account the common branches they belong to. It is clear that companies belonging to the same branch compete for similar customers, so the market induces connections between them. On the other hand, two branches can be related by companies acting in both of them. A bipartite graph of companies and branches has been constructed as in Fig.

Fig.: bipartite graph of companies and branches. In the bipartite graph we have two kinds of objects, branches and companies, where Nb is the total number of branches and Nf is the total number of companies. The largest capacity of a branch in our database was for construction executives; the second largest was building materials.

Let B(i) be the set of branches a given company i belongs to. Similarly, a branch network has been constructed, where nodes are branches and an edge represents a connection if at least one company belongs to both branches. Fig.: companies network on the left, branches network on the right.

In the branches network the link weight is the number of companies that are active in the same pair of branches; formally, it is the cardinality of the intersection of the sets Z(A) and Z(B), where Z(A) is the set of companies belonging to branch A and Z(B) is the set of companies belonging to branch B. One can notice the existence of edges with large weights, some of which are presented in Table 1. Fig.: weight distribution in the branches network.
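A small sketch of both projections is given below: the company-company weight is |B(i) ∩ B(j)| (shared branches) and the branch-branch weight is |Z(A) ∩ Z(B)| (companies active in both branches). The tiny example data are invented for illustration only.

```python
# Sketch: weighted projections of a bipartite company-branch graph.
from itertools import combinations

branches_of = {                                      # company -> set of branches B(i)
    "c1": {"construction", "materials"},
    "c2": {"construction"},
    "c3": {"materials", "transport"},
}

# Invert the mapping to get Z(A): branch -> set of companies.
companies_of = {}
for comp, brs in branches_of.items():
    for b in brs:
        companies_of.setdefault(b, set()).add(comp)

company_edges = {(i, j): len(branches_of[i] & branches_of[j])
                 for i, j in combinations(sorted(branches_of), 2)
                 if branches_of[i] & branches_of[j]}
branch_edges = {(a, b): len(companies_of[a] & companies_of[b])
                for a, b in combinations(sorted(companies_of), 2)
                if companies_of[a] & companies_of[b]}

print(company_edges)   # e.g. ('c1', 'c2'): 1 shared branch
print(branch_edges)    # e.g. ('construction', 'materials'): 1 shared company
```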

The weight distribution in the companies network does not follow a power law, and in a limited range it shows an exponential behavior. We have observed a power-law scaling in the branches network. We suppose that the mechanism of this behaviour is similar in both cases. We have observed that the decrease is faster with w0 than in the branches networks. This dependence is presented in Fig.: dependence of entropy on the average node degree; circles represent branches networks and X-marks represent companies networks.

Figure 9 shows that such a simplistic approach gives a very good approximation of the real entropy value. We can conclude that the width of the distribution is the main source of entropy changes in our systems. Link weights in both networks are very heterogeneous, and the corresponding link weight distribution in the branches network follows a power law. This results in a recovery of hidden scaling relations present in the network. We have found the distribution width to be a crucial factor for the entropy value.

Bilateral trade relationships at the international level between pairs of countries in the world give rise to the notion of the International Trade Network (ITN). Our observation is that the ITN also has a scale-invariant structure, like many other real-world networks.

The volume of trade between two countries may be considered as a measure of the strength of mutual economic dependence between them. In the language of graph theory this strength is known as the weight associated with the link [9]. While a simple graphical representation of a network already gives much information about its structure, it has been observed recently that in real-world networks like the Internet and the world-wide airport network the links have widely varying weights, and their distribution as well as their evolution yield much insight into the dynamical processes involved in these networks [10-13].

Recently a few papers have been published on the analysis of the ITN. In a recent paper [12] we have studied the ITN as an example of weighted networks. Analysis of the ITN data over a period of 53 years, available in [15], has led to the recognition of several universal features. In particular, the strength of a node, which is the total volume of trade of a country in a year, depends non-linearly on its Gross Domestic Product (GDP). In addition, a number of crucial features observed from real-data analysis have been qualitatively reproduced in a non-conservative dynamic model of international trade, using the well-known Gravity model of economics and the social sciences as the starting point [16].

The annual trade between two countries i and j is described by four different quantities, expij, expji, impij and impji, measured in units of million dollars in the data available on the website [15]. In fact the trade volumes had grown almost systematically over the years. Looking at the available data, a few general observations can be made. A few high-income [14] countries trade with many other countries in the world.


These countries form the large-degree hubs of the network. In the other limit, a large number of low-income countries make economic transactions with only a few other countries. Moreover, a rich club of a few top rich countries actually trades among itself a major fraction of the total volume of international trade. A huge variation of the volume of bilateral trade is observed, ranging from a fraction of a million dollars to a million million dollars.

There are a large number of links with very small weights, and this number gradually decreases to a few links with very large weights. The tail of the distribution consists of links with very large weights corresponding to mutual trades among very few high-income countries [14]. The variation of the ratio of wmax and W is shown in Fig. The average weight per link has also grown almost systematically. Again, the total world trade W has grown over the years.

The degree k of a node is the number of other countries with which this country has trade relationships. We have studied the degree distributions, each averaged over ten successive ITNs, decade by decade. The plots are given in Fig.
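Given a weighted adjacency matrix of bilateral trade volumes, the degree and strength definitions used above reduce to a few lines; the sketch below uses a tiny invented matrix in place of the trade tables referenced in [15].

```python
# Sketch: degree (number of trading partners) and strength (total trade volume) of each
# country in a weighted ITN adjacency matrix w, where w[i, j] is the annual trade
# between countries i and j in million dollars.
import numpy as np

w = np.array([[0.0, 120.0, 0.5, 0.0],
              [120.0, 0.0, 30.0, 2.0],
              [0.5, 30.0, 0.0, 0.0],
              [0.0, 2.0, 0.0, 0.0]])   # symmetric, invented trade volumes

degree = (w > 0).sum(axis=1)            # k: number of trading partners
strength = w.sum(axis=1)                # s: total volume of trade of a country
total_trade = w.sum() / 2               # W: total world trade (each link counted once)

for i, (k, s) in enumerate(zip(degree, strength)):
    print(f"country {i}: degree = {k}, strength = {s:.1f}")
print("total world trade W =", total_trade, ", w_max/W =", w.max() / total_trade)
```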



Such a region is completely absent in the earliest decade. We plot these quantities in Fig. No power-law variation is observed for the earliest decade's plot. For the next decades, however, power laws over small regions are observed, whose slopes gradually decrease to 1. Consider a process which starts from N nodes but with no links. Links are then inserted between pairs of nodes with a probability proportional to the weight of the link, since a large-weight link is more likely to be occupied than a small-weight link.

Back cover copy: Econophysicists have recently been quite successful in modelling and analysing various financial systems like trading, banking, stock and other markets. The statistical behaviour of the underlying networks in these systems has also been identified and characterised recently.

This book reviews current econophysics research on the structure and functioning of these complex financial network systems. Leading researchers in the respective fields report on their recent research and review contemporary developments. The book also includes comments and debates on the latest issues arising out of these.

