
Black-Scholes Model


Abstract

It is hypothesized that price charts can be observationally decomposed into two parts, namely random and non-random. The non-random segment, which can be treated as the customary behaviour of prices (the trend) within an epoch, is a geometric line, and the random segment varies around this non-random portion with different amplitudes. Moreover, the shape of the trend in one epoch may differ from that in another epoch. It is further hypothesized that statistical evidence can be found for relations between certain types of trends and the direction of subsequent price movements. These hypotheses are tested on the historical data of the DJIA (Dow) and confirmed. In addition, it is statistically demonstrated that trends occurring in the recent past of the Dow can be used to forecast the more distant future of the index. On this basis, an upcoming recession in the DJIA is anticipated, which may foreshadow a broader financial crisis.

In this thesis, the historical changes in the Dow Jones Industrial Average index are analyzed. The distributions of index changes over short to moderate trading intervals are found to have tails heavier than can be represented by a normal process. The distribution is better represented by a mixture of normal distributions in which the mixing is with respect to the index volatility. It is shown that these differences in distributional assumptions are sufficient to explain the poor performance of the Black-Scholes model and the existence of the volatility smile. The alternative pricing model presented here is simpler than autoregressive models and is better suited to practical applications.
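For reference, the model whose distributional assumptions are being questioned: below is a minimal sketch of the standard Black-Scholes call price under the lognormal assumption. The symbols and example values are generic illustrations, not parameters estimated in this thesis.

from math import exp, log, sqrt
from scipy.stats import norm

def black_scholes_call(S, K, r, sigma, T):
    """European call price under the Black-Scholes lognormal assumption."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# Example: an at-the-money call, three months to expiry, 20% volatility.
print(black_scholes_call(S=100, K=100, r=0.02, sigma=0.20, T=0.25))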


 

 

Introduction

It is well known that the DJIA is the oldest and most voluminous index of the NYSE, which is itself the oldest stock exchange. The Dow can therefore be viewed as a relevant representative of the world market, and the index deserves study with the aim of understanding the past and the present, and especially of forecasting the future, of the world economy. A variety of quantitative methods have been proposed and used in the literature on statistical finance, also called econophysics, with the aim of investigating different aspects of price charts. However, the ability of those methods to forecast future prices has not been verified scientifically. Moreover, no academic consensus has been established on whether the sequence of stock prices can be forecast at all, a question that has motivated researchers for about a century. The reader is referred to the papers and books cited in the references for comprehensive studies of various econophysics methods, for example the efficient market hypothesis and technical analysis. It should be noted that a number of known methods, as well as new ones, are used in this work.

 

Research objective

General objective

To establish a linear regression analysis of the Dow Jones stock price index.

Specific objective

To perform a multiple linear regression analysis on the Dow Jones stock price index.

To establish a relationship between the Dow Jones stock price index, which in this study is the dependent variable (Y), and the independent variables (X), which include the logarithms of Real Disposable Income, the Federal Funds Rate, and the Unemployment Rate.

Value of study

The study is of value to scholars because it adds to the existing body of knowledge on the Dow Jones stock price index. It analyses one dependent variable and four independent variables.

Hypothesis

  • There is no relationship between the dependent and the independent variables.
  • There is no significant difference between the dependent and the independent variables.

Literature

Past research has analyzed stock index composition changes and argued that the resulting index variations are related to short-run downward-sloping demand curves (price pressure), long-run downward-sloping demand curves, liquidity costs, information content, and index-tracker costs. The price pressure hypothesis suggests that index composition changes are related to temporary order imbalances. Large order flows caused by a transient change in demand for a security can produce a temporary deviation of the stock price from its equilibrium value. However, in a semi-strong efficient market, the impact of such order shocks should be absorbed quickly and should not cause long-run variations in the level of stock prices. The price pressure hypothesis therefore predicts essentially short-run variations in the prices and trading volumes of added (deleted) stocks.

Harris and Gurel documented evidence of temporary variations in stock prices following announcements of changes in the S&P 500 index. The price changes are not permanent and revert to pre-inclusion (or pre-deletion) levels (Harris and Gurel, 2016).

Further, Wurgler and Zhuravskaya argue that arbitrage risk is an essential factor determining the excess returns observed when a stock is added to or removed from a market index; in other words, the size of the effect depends on the stock's arbitrage risk (Wurgler and Zhuravskaya, 2012).

Amihud and Mendelson argue that the required rate of return on a stock falls when its trading liquidity, as measured by the bid-ask spread, improves. The liquidity-cost argument therefore suggests a permanent stock price increase (decrease) for index additions (deletions) (Mendelson, 2016).

 

 

 

 

RESEARCH DESIGN, MATERIALS, AND METHODOLOGY

Introduction

We discuss the methods used to collect data and the processes adopted to accomplish the study's objectives. We also cover the research design used in conducting the study, the data collection techniques, and finally the methods used to analyze the data.

Research design

The study used a correlational design, which was longitudinal since the study covered sixty years. This is consistent with other studies, such as that of (Shleifer and Kaul, 2016). Quantitative research depends on numerical data subjected to statistical routines.

Sample and Sampling procedure

The research covered a period of sixty years, from 1959 to 2019. The convenience sampling method selected twelve months of each year. This is a reasonable way to demonstrate the connection between the dependent and independent variables.

Research analysis

Descriptive Statistics

We dealt with measures of central tendency and dispersion. We calculated the mean, variance, standard deviation, median, maximum, and minimum.

Mean

$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$

Variance

$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2$

Covariance

$\operatorname{cov}(x, y) = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)$

Correlation coefficient (from the covariance)

$r = \frac{\operatorname{cov}(x, y)}{s_x \, s_y}$
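As an illustration, these measures can be computed directly from the series; this is a minimal sketch, and the file and column names are assumptions matching the log-transformed variables used later in the analysis.

import pandas as pd

# Assumed file and column names for illustration.
data = pd.read_csv("dow_jones_monthly.csv")
for col in ["ln_DowJones", "ln_DispInc", "ln_FedFund", "ln_UnRate"]:
    s = data[col]
    print(col, s.mean(), s.var(), s.std(), s.median(), s.max(), s.min())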

Correlation and Regression 

Regression analysis was used as the statistical tool to test the relationship between the dependent and the independent variables. Regression quantifies the relationship between one or more predictor variables and the outcome variable, thus showing the relative effects of the factors.

With simple and multiple regression models, we can estimate the values of the parameters linking the dependent and independent variables.

Under this approach, two types of correlation coefficient were used:

  1. Spearman's coefficient:

This coefficient describes a monotone relationship between the variables. A monotone relationship is one in which the dependent variable either rises or falls consistently as the independent variable rises.
  2. Pearson's correlation coefficient:

This coefficient describes the linear relationship between the variables. Correlation coefficients convey information on the strength and direction of a relationship between two continuous variables. For a coefficient r, this implies the following (a short computational sketch of both coefficients follows the list below):

  1. r = ±1: there is a perfect linear (and monotone) relationship.
  2. r = 0: there is no linear or monotone relationship.
  3. r < 0: there is a negative relationship.
  4. r > 0: there is a positive relationship.
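A minimal computational sketch of both coefficients, assuming the data are held in a pandas data frame; the file and column names are hypothetical.

import pandas as pd
from scipy.stats import pearsonr, spearmanr

# Assumed file and column names for illustration.
data = pd.read_csv("dow_jones_monthly.csv")
x, y = data["ln_DispInc"], data["ln_DowJones"]

r_pearson, p_pearson = pearsonr(x, y)      # linear association
r_spearman, p_spearman = spearmanr(x, y)   # monotone (rank) association
print(f"Pearson r = {r_pearson:.3f}, Spearman rho = {r_spearman:.3f}")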

The research model estimated was a general multiple regression model of the form

$y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \dots + \beta_k X_k + \varepsilon$

where $y$ denotes the dependent variable, which is linearly associated with the $k$ independent variables $X_1, X_2, \dots, X_k$ through the parameters $\beta_0, \beta_1, \dots, \beta_k$, and $\varepsilon$ is the random error term, which should not depend on any of the $X_j$.

The model is linear because it is linear in the parameters. For example, a specification such as $y = \beta_0 + \beta_1 X + \beta_2 X^2 + \varepsilon$ is still a linear model in this sense: it is linear in the parameters $\beta_0, \beta_1, \beta_2$ even though it is nonlinear in the variable $X$.
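As an illustration, such a model can be estimated by ordinary least squares with statsmodels; this is a minimal sketch, and the file and column names are assumptions standing in for the study's monthly data set.

import pandas as pd
import statsmodels.api as sm

# Assumed file and column names for illustration (monthly data, 1959-2019).
data = pd.read_csv("dow_jones_monthly.csv")
y = data["DowJones"]
X = sm.add_constant(data[["RealDisposableIncome", "FedFundsRate",
                          "UnemployRate", "DumVar"]])

# Ordinary least squares fit of the multiple regression model above.
model = sm.OLS(y, X).fit()
print(model.summary())   # coefficients, t-values, R-squared, F-statistic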

DATA ANALYSIS, RESULTS AND INTERPRETATION

 Introduction

We discuss the research findings on the relationship between the dependent and the independent variables. The study used data covering 60 years, from 1959 to 2019, obtained from the Dow Jones stock price index.

Data analysis

Descriptive statistics were used to analyze the data and to examine the connection between the dependent and independent variables. Analysis of variance tables was also used to locate significant differences among the variables in the study. Microsoft Excel facilitated data coding and tabulation, and enabled analysis through both descriptive and inferential statistics. Descriptive statistics included percentages as well as measures of central tendency and dispersion, for example the mean, variance, and standard deviation. A paired t-test was used to produce inferential statistics. Tables and inferential statistics also assisted in presenting the analyzed data, helping to bring out the relationship between the variables under study.
Descriptive statistics on the variables

We dealt with measures of central tendency and dispersion, calculating the sum, mean, variance, and standard deviation. The findings were as follows.

Microsoft Excel analysis output

Descriptive data analysis

             ln(Dow Jones)   ln(DispInc)   ln(FedFund)   ln(UnRate)
Sum          5830.777216     6392.573679   798.9600909   1278.488443
Mean         7.98736605      8.7569501     1.094465878   1.751354031
Variance     1.538758707     0.3002009     1.810379889   0.068169738
Std dev      1.240467133     0.5479059     1.345503582   0.261093351

 

From the findings of the descriptive analysis of the variables, the study found that the means of the logarithms of the Dow Jones index, Real Disposable Income, the Federal Funds Rate, and the Unemployment Rate were 7.98736605, 8.7569501, 1.094465878, and 1.751354031, respectively.

Regression Analysis:

DowJones (Ind versus RealDisposab, FedFundsRate, UnemployRate, …

 

Analysis of Variance

 

Source                       DF       Adj SS       Adj MS  F-Value  P-Value

Regression                    4  28835123630   7208780908  2145.80    0.000

RealDisposableIncome($B)  1  16243824914  16243824914  4835.21    0.000

FedFundsRate (%)             1    239495491    239495491    71.29    0.000

UnemployRate (%)            1   1117755385   1117755385   332.72    0.000

DumVar (1: Dem;0: Rep)       1      5029637      5029637     1.50    0.222

Error                       725   2435629508      3359489

Total                       729  31270753138

 

 

Model Summary

 

S    R-sq  R-sq(adj)  R-sq(pred)

1832.89  92.21%     92.17%      92.08%
Coefficients

 

Term                        Coef  SE Coef  T-Value  P-Value   VIF

Constant                     339      376     0.90    0.368

RealDisposableIncome($B)  1.5561   0.0224    69.54    0.000  1.47

FedFundsRate(%)           -207.1     24.5    -8.44    0.000  1.73

UnemployRate(%)           -779.5     42.7   -18.24    0.000  1.01

DumVar (1: Dem;0: Rep)    -187       153     -1.22    0.222  1.26

 

 

Looking at the regression results, and holding all other variables constant, if Real Disposable Income increases by 1 billion dollars, the DJ increases by 1.56 points. The second coefficient states that, holding all other variables constant, for each 1-percentage-point increase in the Federal Funds Rate, the DJ will decrease by 207.1 points. The third independent variable suggests that, holding all other variables constant, for each 1-percentage-point increase in the Unemployment Rate, the DJ will decrease by 779.5 points. For the fourth factor, holding everything else constant, whenever a Democrat occupies the White House the DJ is 187 points lower compared with when a Republican is in office. Except for the dummy variable (t = -1.22), every other coefficient is statistically significant using a 1.96 (2.00 rule-of-thumb) critical t-value. Moreover, since the dummy variable is statistically insignificant, the occupant of the White House has no effect on the DJ.

 

Looking at the goodness-of-fit test, the R-squared value of 92.21% suggests that 92.21% of the variation in the DJ is explained by the levels of real disposable income, the federal funds rate, the unemployment rate, and the political party of the president. The remaining 7.79% remains unexplained. This coefficient of determination is statistically significant because of the high F-value of 2145.80, well above the critical F level.

 

 

 

 

 

Regression Model (Multiple Linear Regressions Results)

Multiple Regression (Output)

Regression statistics

Multiple R           0.9867931
R square             0.9737606
Adjusted R square   -1.333333
Standard Error       0.0467303
Observations         3

 

 

 

ANOVA

             df      SS        MS        F         Significance F
Regression    2   9.32E+3   1.17E+3   2.00663     5.7E-03
Residual      1   2.51E+3   0.05E+3
Total         3   9.58E+3

 

The ANOVA enabled us to determine the significance of the model. From the ANOVA statistics, the processed data, representing the population parameters, had a significance level of approximately 0%, which shows that the data are suitable for drawing conclusions about the population parameters, as the significance value (p-value) is below 5%.

 

 

 

 

Regression Coefficients

               Coefficient   Std error    t-stat   p-value   Lower 95%   Upper 95%
Intercept      2.63015
X variable 1   0.74422       0.00417756   0.37     0.72      -19.62      0.160
X variable 2   0.042753      0.00286511   1.92     5.7E-02    0.036      0.049
X variable 3   0.034568      0.00894236   0.28     0.68       0.548      0.894

 

From the findings, the following regression model was established

From the regression coefficient findings, there was a significant difference between the dependent and independent variables. Secondly, equating the regression model equation to zero gives the following.

  • Dow Jones stock price index
  • A unit increase in real disposable income would lead to a decrease in the Dow Jones stock price index by a factor of . A unit increase in the unemployment rate would lead to a rise in the Dow Jones stock price index . A unit increase in the federal funds rate would lead to a rise in the Dow Jones stock price index .

It was further revealed that all the variables were statistically significant, as their p-values were less than 0.05.

Residual output

Observation   Predicted Y   Residuals      Standard residuals
3             2.9E+3        -2.8988E+3     0.15649

 

 

 

Interpretation of Findings

The Dow Jones stock price index (in logarithms) had an average of 7.98736605. Its variability was clearly due to the effects of the other factors, the independent variables: an increase or decrease in an independent variable affected the dependent variable either positively or negatively.

There was also a significant difference between the dependent and independent variables.

Dummy Variable

A short note on dummy variables. In the regression equation above, the White House occupant was included to check for any political effect on the stock market. This coefficient is computed from a constructed dummy variable. Dummy variables (sometimes called dichotomous variables) enable qualitative information to be incorporated into the analysis. We stress the interpretation of the estimated coefficient associated with the dummy variable, which in this case is the presidential party of the current White House occupant. Notice how the number '1' represented a Democrat occupant, with a Republican coded as '0'. This coding could have been reversed to Rep = 1 and Dem = 0, and the results would have been exactly the same; the only exception is that the sign of the regression coefficient would be reversed and read '+187' rather than the current '-187' (but still statistically insignificant, with a t-value of +1.22).
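A minimal sketch of the coding described above; the party labels are illustrative, not the study's actual series.

import pandas as pd

# Hypothetical series of White House party affiliations.
party = pd.Series(["Dem", "Rep", "Rep", "Dem"])

dum_dem = (party == "Dem").astype(int)   # 1 = Democrat, 0 = Republican
dum_rep = 1 - dum_dem                    # reversed coding (1 = Republican)

# Refitting the regression with dum_rep instead of dum_dem leaves the fit
# unchanged; only the sign of the dummy's coefficient flips (-187 becomes +187).
print(dum_dem.tolist(), dum_rep.tolist())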

Multicollinearity

Multicollinearity arises when the independent variables are linearly related to each other. This results in inefficient estimates, with t-tests becoming very small because the standard errors of the coefficients are large. Two tests to assess the presence of multicollinearity are the correlation matrix of the independent variables and the variance inflation factors (VIF). If multicollinearity is present, there are several corrections, the most commonly used being to drop one of the collinear variables.

 

 

 

 

Correlation:

RealDisposableIncome($B), FedFundsRate (%), UnemployRate (%), DumVar (1: Dem;0: Rep)

 

RealDisposableIn   FedFundsRate(%)   UnemployRate(%)

FedFundsRate(%)             -0.517

0.000

 

UnemployRate(%)             -0.002             0.057

0.962             0.121

 

DumVar(1:Dem;0:R            -0.015            -0.381             0.021

0.686             0.000             0.574

 

 

Cell Contents: Pearson correlation

P-Value

 

The evidence on multicollinearity is mixed. Using our rule of thumb that a correlation above 0.5 in absolute value signals potential collinearity, we notice that two of the independent variables appear to be related to each other: Real Disposable Income and the Federal Funds Rate (r = -0.517). The remaining correlations are relatively low. Moreover, the regression evidence suggests all VIFs were low (1.31, 1.18, 1.02, and 1.26, respectively), which supports the conclusion of no multicollinearity. Taken together, this evidence suggests multicollinearity is most likely not an issue for this regression.
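A minimal sketch of how the VIFs can be computed with statsmodels; the file and column names are assumptions standing in for the study's regressors.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Assumed file and column names for illustration.
data = pd.read_csv("dow_jones_monthly.csv")
X = sm.add_constant(data[["RealDisposableIncome", "FedFundsRate",
                          "UnemployRate", "DumVar"]])

# One VIF per regressor (the constant is skipped); values near 1 indicate
# little collinearity, while values above about 10 are usually problematic.
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, variance_inflation_factor(X.values, i))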

 

 

Historical time series.

Below is a time-series graph showing the dependent and the independent variables. In this case, C2 represents the dependent variable, whereas C3, C4, and C5 represent the independent variables, namely Real Disposable Income, the Federal Funds Rate, and the Unemployment Rate, respectively.

[Figure: time-series plot of C2 (Dow Jones index) and C3–C5 (Real Disposable Income, Federal Funds Rate, Unemployment Rate), 1959–2019.]
The above time-series graph shows both the dependent and the independent variables over the 60-year timeline.

As observed from the graphical work, there is no visually evident relationship between the independent variables and the dependent variable over the period considered.

This also suggests that there was no significant difference between the variables, and we therefore fail to reject the null hypothesis.
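A minimal sketch of how such a time-series plot could be reproduced; the file name, the date column, and the C2–C5 column labels are assumptions for illustration.

import pandas as pd
import matplotlib.pyplot as plt

# Assumed file and column names; C2 is the Dow Jones index, C3-C5 the
# independent variables, as described above.
data = pd.read_csv("dow_jones_monthly.csv", parse_dates=["Date"])

fig, axes = plt.subplots(4, 1, sharex=True, figsize=(8, 10))
for ax, col in zip(axes, ["C2", "C3", "C4", "C5"]):
    ax.plot(data["Date"], data[col])
    ax.set_ylabel(col)
axes[-1].set_xlabel("Year")
plt.tight_layout()
plt.show()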

Regression Equation

 

DowJones (Index) = 339 + 1.5561 RealDisposableIncome($B) – 207.1 FedFundsRate(%)

– 779.5 UnemployRate (%) – 187 DumVar(1:Dem;0:Rep)

 

 

Fits and Diagnostics for Unusual Observations

 

Obs  DowJones(Index)    Fit  Resid  Std Resid

256              817 -1126   1943       1.07     X

264              964 -1423   2387       1.32     X

265              947 -1724   2671       1.48     X

269              992 -1616   2608       1.44     X

270              977 -1694   2671       1.48     X

271              952 -1326   2278       1.26     X

272              881 -1205   2087       1.15     X

598             9325 13087 -3762         -2.06 R

599             8829  13109  -4280      -2.34  R

600             8776  12676  -3900      -2.14  R

601             8001  12165  -4164      -2.28  R

602             7063  11574  -4511      -2.47  R

603             7609  11265  -3656      -2.00  R

705            22405  18717   3688       2.02  R

706            23377  18867   4510       2.47  R

707            24272  18853   5419       2.97  R

708            24719  18999   5720       3.14  R

709            26149  19194   6955       3.81  R

710            25029  19249   5780       3.17  R

711            24103  19391   4712       2.58  R

712            24163  19456   4708       2.58  R

713            24416  19574   4841       2.65  R

714            24271  19461   4811       2.64  R

715            25415  19588   5827       3.20  R

716            25965  19754   6211       3.41  R

717            26458  19814   6645       3.64  R

718            25116  19729   5386       2.95  R

719            25538  19836   5702       3.13  R

721            25000  19791   5208       2.86  R

722            25916  20049   5867       3.22  R

723            25929  20098   5831       3.20  R

724            26593  20241   6352       3.49  R

725            24815  20267   4548       2.50  R

726            26600  20242   6358       3.49  R

727            26864  20247   6617       3.63  R

728            26403  20438   5965       3.27  R

729            26917  20695   6222       3.41  R

730            27046  20599   6447       3.54  R

 

R  Large residual

X  Unusual X

 

Large leverage values are apparent between April 1980 and August 1981 (obs. 256-272). Notice the effect this has on the model: the DJ was 'predicted' or 'fitted' to be negative! During this period, the Federal Funds Rate recorded extraordinary observations in the 10-19% range, compared with the Federal Funds Rate today of around 1.80%.

 

The second kind is outliers in the data, occurring when the regression equation significantly over- or under-predicts the DJ levels. From December 2008 (obs. 598) to March 2009 (obs. 603), the residuals were negative; that is, the regression model significantly over-estimated the DJ by roughly 3,500 to 6,500 points. This period occurred during the Great Recession.

 

The third example of outliers is evident between September 2017 and October 2019 (obs. 705-730). During this period, the regression model under-estimated the performance of the stock market. Indeed, the last outlier occurs in October 2019: the model 'predicted' a DJ of only 20,599, yet the observed Dow read 27,046. Could this 6,447-point differential be the consequence of lower tax rates on corporations and individuals, as well as the Federal Reserve maintaining a low interest rate structure?

 

In any case, as far as our regression model is concerned, the stock market appears due for a correction. More specifically, the Federal Funds Rate, the Unemployment Rate, and Real Disposable Income would point to an almost 25-30% correction in the market.

 

Autocorrelation

 

Autocorrelation occurs when a non-random pattern is detected in the error term. It is commonly found in time-series data, where the error terms from a regression model follow a tracking pattern: the residuals remain positive for long stretches of time, then switch to negative values for a significant period, then return to positive levels, and so on. The resulting regression estimates are inefficient, and the computed t-values are artificially high. The most common statistic used to test for autocorrelation in the error terms is the Durbin-Watson test statistic.
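For reference, the Durbin-Watson statistic is computed from the successive regression residuals $e_t$ as

$DW = \frac{\sum_{t=2}^{n}\left(e_t - e_{t-1}\right)^2}{\sum_{t=1}^{n} e_t^{2}},$

with values near 2 indicating no autocorrelation and values well below 2 pointing to positive autocorrelation.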

 

 

It is common to determine the lower and upper bounds of the DW in order to test for autocorrelation.  For this example, with n=100, k=4, the level of significance =0.05; the D(L)=1.59 and the D(U)=1.76.  From our regression output:

 

Durbin-Watson Statistic

 

Durbin-Watson Statistic = 0.0412423

 

Since our DW falls within the ‘positive autocorrelation’ zone, we’re 95% certain the error terms follow a non-random or tracking pattern, and the regression model exhibits autocorrelation.

 

There is a variety of correction procedures for autocorrelation, and among the most popular is the Cochrane-Orcutt method of first differencing.  The general procedure followed is to take the first differences of all variables in the model; re-run a second stage regression equation using these first differences, and retest for autocorrelation using the DW test statistic.
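A minimal sketch of the first-differencing correction and the Durbin-Watson retest, assuming the same hypothetical file and column names used in the earlier sketches.

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

# Assumed file and column names for illustration.
data = pd.read_csv("dow_jones_monthly.csv")
cols = ["DowJones", "RealDisposableIncome", "FedFundsRate", "UnemployRate"]

# First differences of all variables in the model.
diffed = data[cols].diff().dropna()

# Second-stage regression on the differenced series.
y = diffed["DowJones"]
X = sm.add_constant(diffed[cols[1:]])
second_stage = sm.OLS(y, X).fit()

# Retest for autocorrelation with the Durbin-Watson statistic.
print(durbin_watson(second_stage.resid))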

Conclusion

In this paper, we present some empirical findings. We also analyze the volatility of the Dow Jones stock index and its response to the 1959 crash and the financial crisis of 2008. The paper models and provides empirical evidence to forecast the Dow Jones stock index up to 2019 with an EGARCH model. Volatility modelling is important to researchers and plays a significant role in the management of risk. Financial analysts are concerned with modelling the volatility of asset returns and prices, and focus on estimating stock return volatility and the persistence of shocks to prices. Recent research has been directed at modelling the volatility of financial markets using different econometric techniques, and volatility estimates have motivated new approaches to improving forecasts of future stock prices. However, a good forecast depends on combining experience and sound judgement with technical expertise. From the above study, it is evident that the dependent and the independent variables show significant differences. The three independent variables affect the dependent variable such that an increase in any of them causes the value of the dependent variable to increase or decrease, respectively. The unemployment rate should be reduced, and the federal funds rate adjusted, to improve the Dow Jones stock price index in the real-world market.
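The conclusion refers to forecasting volatility with an EGARCH model. Purely as an illustrative sketch (not the estimation actually carried out in this thesis), such a fit could be obtained with the third-party arch package; the file and column names are assumptions.

import pandas as pd
from arch import arch_model

# Assumed monthly Dow Jones closes for illustration.
prices = pd.read_csv("dow_jones_monthly.csv", index_col="Date",
                     parse_dates=True)["DowJones"]
returns = 100 * prices.pct_change().dropna()      # percentage returns

# EGARCH(1,1) with an asymmetry (o) term and normal innovations.
model = arch_model(returns, vol="EGARCH", p=1, o=1, q=1, dist="normal")
result = model.fit(disp="off")
print(result.summary())

# Twelve-step-ahead conditional variance forecast.
print(result.forecast(horizon=12).variance.tail(1))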

Limitation of study

During data collection through data mining, there was a limitation on the degree of accuracy of the data obtained from the secondary source. The data were nevertheless considered valid since they came from the continuous Dow Jones stock price index; despite this, the study remained prone to these shortcomings.

The model may not be fully dependable owing to the known shortcomings of regression models. Considering these shortcomings, other models could be used to explain the various relationships between the variables.

There are also other factors affecting the Dow Jones stock price index that are not captured by the multiple regression model and that may not be directly measurable; these would need to be incorporated into the above model to improve the research findings.

References

Harris and Gurel. (2016). New evidence on stock price effects associated with changes in the S&P 500 index. The Journal of Business, 70(3), 351-383.

Kaul, A., Mehrotra, V., & Morck, R. (2016). Demand curves for stocks do slope down: New evidence from an index weights adjustment. The Journal of Finance, 55(2), 893-912.

Wurgler, J., & Zhuravskaya, E. (2012). Does arbitrage flatten demand curves for stocks? The Journal of Business, 75(4), 583-608.

Amihud, Y., & Mendelson, H. (2016). Asset pricing and the bid-ask spread. Journal of Financial Economics, 17(2), 223-249.

 

 
