
Data Handling and Decision Making



Task 1

Current Financial and Non-Financial Data

Sony Corporation is the organization examined in this report on data handling and decision making. Sony operates as a business and holds itself accountable for its decision processes. This section explains the main data sources of Sony and how data is aggregated, processed, stored, and taken into account when decisions are made for the organization. Sony uses both financial and non-financial data in its decision making (Jorion & Tarmor 2001). The financial data used by the corporation comprises its financial statements: the yearly balance sheets, annual income statements, annual cash flow statements, and statements of changes in shareholders’ equity (Zhu & Zong 2012). Data extracted from the organization’s balance sheet reveals trends in its assets and liabilities. Data sourced from the income statement mainly depicts its profitability (Bhimani & Langfield-Smith 2007), while the operating, financing, and investing activities are highlighted in Sony’s cash flow statement. Hence, the organization combines data from these financial statements to make important decisions regarding its valuation and worth. In addition, the corporation uses non-financial data, including information on governance and corporate social responsibility (Orens & Lybaert 2010). This form of data has proved crucial to the company’s policy decisions.


Data Integrity and Identified Gaps

Data integrity involves maintaining and ensuring the consistency of data throughout its life cycle (Amorosi 2019). It has become one of the essential considerations in designing, implementing, and using any system that stores, processes, or retrieves data. Data integrity professionals are responsible for identifying gaps in electronic records, including data that has been misrecorded or misreported. The data integrity professional is obligated to identify and interview the workers and managers who influence deviations in financial reports (Webb 2019). Most corporations are eager to close data integrity gaps by using data integrity checks. Validations of the accuracy, completeness, and timeliness of data improve the level of trust placed in it (Moiin & Pruscino 2001). Evaluating whether control gaps exist in the company determines the control type and validation required, and ultimately gives an early broad picture of the scenario that is unfolding (Gxp 2019). It is important for corporations to draw conclusions from a detailed process flow of their systems, and those systems need to be thoroughly assessed.
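The accuracy, completeness, and timeliness checks described above can be sketched as simple validation functions. The record fields, sample values, and thresholds below are illustrative assumptions, not Sony’s actual schema:

```python
from datetime import date

# Hypothetical financial records; field names are illustrative only.
records = [
    {"account": "cash", "amount": 1_250_000, "reported": date(2019, 3, 31)},
    {"account": "inventory", "amount": None, "reported": date(2019, 3, 31)},
]

def completeness_gaps(rows, required=("account", "amount", "reported")):
    """Return rows where a required field is missing (a data integrity gap)."""
    return [r for r in rows if any(r.get(f) is None for f in required)]

def timeliness_gaps(rows, as_of, max_age_days=365):
    """Return rows whose reporting date is older than the allowed window."""
    return [r for r in rows if (as_of - r["reported"]).days > max_age_days]

flagged = completeness_gaps(records)
stale = timeliness_gaps(records, as_of=date(2019, 6, 30))
```

Checks like these can run whenever records enter the system, so gaps are surfaced before they reach financial reports.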

Map Between Business Functions and Data Sources

The map supports the substantial exploration of data and visualizes the links between data sources and business functions (Fiaz et al. 2016). The capture stage detects multi-structured data and retains the inputs in raw form for iterative analysis. The process stage parses the data into a storable format and reviews and analyses it using tools such as Pig and Hive to unearth its value (Lydia & Swarup 2019). Labelling the data and its type of information is instrumental for discovery. Pre-computed statistics and data groupings are also applied to facilitate the analysis. The exchange stage uses visualization tools that enable reviewers of the data to draw key insights. The EDW (enterprise data warehouse) stage involves matching that allows systems and software to exchange details meaningfully, reimburse claims, and handle other ordered data (Aziz et al. 2014). Data mapping happens in many settings with various distinct standards, sets, and terminologies (Luján-Mora et al. 2004).

Task 2

Data Flows between the Organization and Its Stakeholders

It was established that the organizational decision-making process of Sony is confined to financial statements and other conventional non-financial data sources. These data sources have allowed the corporation to make the decisions that have sustained it to the present day (Junita & Magdalena 2014). Other organizations in the same industry augment their conventional data sources with big data to derive more effective decisions. It is therefore suggested that the corporation should use its customer data to draw insights on the tastes and preferences of consumers in the market.

Stakeholders are at the heart of Sony Corporation. Data flows between the company and these stakeholders play a significant part in the decision-making process, and they can be grasped through the stakeholders’ concerns. The most important data flow is observed between the organization, its investors, and its owners: investors are focused on return on investment and income, while the owners of Sony are concerned with social goals, longevity, growth, and profitability (Ayuso et al. 2014). These concerns yield the most significant information that helps Sony make strategic decisions. In the hierarchy, clients, suppliers, and creditors form the second most significant group of stakeholders for Sony Corporation. The concerns of the creditors involve the credit score and liquidity, while clients’ interests focus on the value, care, and quality of the products or services (Wei et al. 2016). Data flows between these stakeholders and the organization help to make tactical decisions that involve the implementation of vital decisions (Pérez Carrillo 2007). The lowest-ranked in the hierarchy are the community, employees, and government. The topics that the government deals with are taxation, legislation, truthful reporting, and positive and negative externalities. The community is concerned with conservation of the environment and job opportunities. The workers are concerned with gaining recognition in the workplace, compensation, appreciation, and improving rates of pay. The information derived from the topics affecting the stakeholders is crucial in making the operational decisions of the organization.

Approach to the Improvement in Data Integrity

Data integrity is the notion that data remains reliable for the needs of the organization and precise throughout its application (Schuha et al. 2019). Hence, for the corporation to use big data for customer insights, it is essential for it to uphold data integrity. The customer data gathered online constitutes big data, which implies that it would be analyzed by computers (Wang et al. 2019). In this sense, validating the information systems of Sony would be appropriate for enhancing data integrity, together with deploying error-detection software, enforcing audit trails, and maintaining backup and recovery processes (Wolsk et al. 2019). In addition, the corporation should ensure that it trains its personnel, carries out frequent internal audits at many levels to examine its controls and procedures, and adheres to a software upgrade lifecycle (Jaya et al. 2019).
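One simple way to support the error detection and audit trails mentioned above is to fingerprint each record so that any silent edit becomes detectable. This is a minimal sketch with an illustrative record, not a description of Sony’s actual controls:

```python
import hashlib
import json

def fingerprint(record):
    """Stable SHA-256 digest of a record; a changed digest reveals an edit."""
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

# Illustrative record only; sort_keys makes the digest order-independent.
original = {"year": 2019, "gross_profit": 2_402_491_000}
baseline = fingerprint(original)

tampered = dict(original, gross_profit=2_502_491_000)
```

Storing the baseline digest in an audit log lets a later check compare `fingerprint(record)` against it and flag tampering.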

Data Protection and Ethical Assurance Requirements

It is unavoidable that issues of data protection and ethical assurance will arise because Sony uses customer data to obtain insights about the products and services to offer in the market (Ethics 2019). From an ethical point of view, gathering data from the masses without seeking their permission poses an ethical compliance problem (Research 2019). Hence, the suggested data analytics framework for the organization must avoid such ethical non-compliance (Stahl 2004; Vanclay et al. 2013).

Task 3

A statement of Strategic Decisions

Many strategic decisions are implemented by Sony Corporation across its areas of operations management. First, the management of Sony applies strategic decisions such as the generic strategy of differentiation, which stresses optimal profitability along with principles of sustainability in product design (O’Farrell et al. 1992). Second, strategic decisions are applied by Sony’s management to quality management (Murray 1988). The operations management policy of the organization stresses the attainment of and support for kando, which is presented in the vision and mission statements. Third, strategic decisions are applied to the process and capacity design of Sony (Bertola & Teixeira 2003). The operations managers apply strategic decisions to maximize the use of capacity in production facilities, thus maximizing production in every facility. Fourth, Sony applies strategic decisions in its location strategy to decrease operating expenses and maximize revenues. In this context, the marketing mix of Sony entails retail chains in densely populated areas, including urban centers (Allen & Helms 2006). This facet of the marketing mix helps optimize the productivity of the organization’s sales operations. Fifth, Sony applies strategic decisions in the design of layout and strategy to optimize human resources and information. Sixth, the management of Sony applies strategic decisions in supply chain management to uphold sufficient supply to support the present operations and growth of the business, particularly in high-growth developing markets.

The financial data that constitutes the financial statements helps in making decisions. The financial statements record activities, though they do not offer an evaluation of the data. Despite their significant role, the financial statements do not offer an assessment of the accounting outcomes. Many measurements and evaluations require familiarity with the numbers in order to use the information in the various financial statements for financial decision making. Clearly, not all the information essential to making rational financial decisions is readily available. Financial ratios help in making decisions about the liquidity, solvency, profitability, and productivity of Sony Corporation (Schmidgall & DeFranco 2016; Adam 2014; Megaladevi 2015). The financial statements reflect historical activities yet inform future decisions; this is possible by drawing inferences from the trends of the distinct ratios instead of the actual historic numbers. The non-financial data of Sony Corporation also helps in making decisions. Non-financial information is routinely used for making policy decisions and for informing resource allocation in Sony Corporation.
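The liquidity and solvency ratios mentioned above can be computed directly from statement figures. The balances below are illustrative assumptions, not Sony’s actual reported amounts:

```python
def current_ratio(current_assets, current_liabilities):
    """Liquidity: ability to cover short-term obligations."""
    return current_assets / current_liabilities

def debt_to_equity(total_liabilities, shareholders_equity):
    """Solvency: reliance on debt relative to owners' funds."""
    return total_liabilities / shareholders_equity

# Illustrative balances only.
liquidity = current_ratio(5_000_000, 2_500_000)   # 2.0: assets cover liabilities twice
solvency = debt_to_equity(9_000_000, 4_500_000)   # 2.0: debt is twice the equity
```

Tracking these ratios across years, rather than the raw balances, is what supports the trend-based inferences described above.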

Strategic Decision Based On Data Analytics

Sony Corporation needs to make a strategic decision on the product design it intends to release to the market. The organization must design products such as mobile phones to satisfy the consumer (Boothroyd 1994; Collinson 1993). It is important that the company applies social media analytics to gain insight into the needs and demands of its clients. The strategic decision will increase the competitive advantage of the organization in the smartphone market (Gershon & Kanayama 2002). Being a first mover in creating a certain product will make Sony competitive in the market.

Identification of a Specific Dataset

To support the decision to produce product designs that satisfy the client, a specific dataset from the Sony website was chosen (https://www.sony.net/SonyInfo/IR/library/historical/).

Task 4

Business-Related Information and Source of Data

The dataset represents the financial information of Sony Corporation from the financial year 2017 to 2019. The financial information of the organization consisted of the balance sheets, income statements, and cash flow statements. The information was expressed in US dollars ($). The balance sheets listed the liabilities, assets, and owners’ equity during the financial years 2017, 2018, and 2019. The cash flow statements indicated the inflows and outflows of cash at Sony Corporation during those years. The income statements indicate the revenues, expenses, and profit that the organization generated during the financial years 2017, 2018, and 2019. The financial data was obtained from the financial statements in Sony Corporation’s annual report.

Data Filtering, Cleaning and Preparation

There are various steps for data filtering, cleaning, and preparation in SPSS. Data preparation forms the basis for statistical analysis of the data. It involves reviewing the questionnaires to discard unacceptable ones that are partially complete or show little variance. It also entails editing the data to rectify incomplete, inconsistent, ambiguous, and indecipherable answers. Coding involves assigning numerical codes to answers that do not already have them. The preparation of data also involves transcribing it to make it accessible for processing. Data cleaning forms an intrinsic part of the data science process, and clean data is essential for carrying out data analytics for Sony Corporation. Data cleaning involves repetitive cycles of screening, diagnosing, treatment, and documentation (Zhang et al. 2003; Vassiliadis et al. 2000; Sikora 2004; Rahm & Do 2000). It consists primarily of implementing strategies that prevent errors before they occur. Screening entails systematically searching for suspect features in questionnaires and databases. The diagnosis stage involves establishing the state of the defective financial data. The treatment stage involves deleting, editing, or leaving the financial data unchanged; these stages of data cleaning require an understanding of the sources of error likely during data collection. The documentation stage records the detection of errors, the alterations made, and the checks performed, and enables a return to a previous value when necessary.
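The screening step of the cleaning cycle above can be sketched as a scan that flags missing and implausible values for later diagnosis. The field names and sample rows are illustrative assumptions:

```python
def screen(rows):
    """Screening step: flag missing values and negative amounts for diagnosis."""
    issues = []
    for i, row in enumerate(rows):
        for field, value in row.items():
            if value is None:
                issues.append((i, field, "missing"))
        amount = row.get("amount")
        if isinstance(amount, (int, float)) and amount < 0:
            issues.append((i, "amount", "negative"))
    return issues

# Illustrative raw records with two deliberate defects.
raw = [
    {"year": 2017, "amount": 7_603_250_000},
    {"year": 2018, "amount": None},
    {"year": 2019, "amount": -5},
]
problems = screen(raw)
```

Each flagged tuple (row index, field, defect) then feeds the diagnosis and treatment stages, and the list itself serves as documentation of what was found.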

Data filtering is the elimination of rows to remove irrelevant information; it isolates the ‘signal’ of the variables to be modeled (Stephanedes & Chassiakos 1993; Su & Khoshgoftaar 2009; DeBettencourt et al. 2005). In SPSS, the first step is to click the ‘Data’ menu and then choose ‘Select Cases’, which opens the case selection and filtering dialog. The appropriate radio button is clicked to specify the data to choose for the analysis; a random sample of cases can also be requested here. Next, the variable appropriate for filtering the data is selected. When the filtering criteria have been chosen, the radio button should indicate that unselected cases are filtered. The data editor screen then shows the unselected cases with a slash through the row number. Lastly, to remove the filter later, the analyst should return to ‘Select Cases’ and choose the ‘All Cases’ radio button.
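The SPSS ‘Select Cases’ workflow above can be mirrored in plain Python, where unselected rows are set aside rather than marked with a slash. The rows use the sales figures from the dataset discussed later in this report:

```python
rows = [
    {"year": 2017, "sales": 7_603_250_000},
    {"year": 2018, "sales": 8_543_982_000},
    {"year": 2019, "sales": 8_665_687_000},
]

def select_cases(data, condition):
    """Split rows into selected and unselected, like SPSS 'Select Cases'."""
    selected = [r for r in data if condition(r)]
    unselected = [r for r in data if not condition(r)]
    return selected, unselected

# Keep only the years with sales above $8 billion.
kept, dropped = select_cases(rows, lambda r: r["sales"] > 8_000_000_000)
```

Passing a different condition, or recombining `kept` and `dropped`, corresponds to changing the filter or choosing ‘All Cases’ again.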

Task 5

Data Mining Procedures

Data mining is a logical process of searching through information to discover the most significant data (Isaac & Dixon 2017). The goal of the process is to find patterns that were previously unknown. A data analyst can then use these patterns to solve many issues (Shaweta 2014). Big data analytics helps to grasp information across a diversity of measurements and to order it. The functional components of the data mining process for Sony Corporation’s dataset are demonstrated below.

Problem definition is the first phase of data mining. In this phase, the client at Sony Corporation liaises with domain specialists to define the issue, deduce the goals of the undertaking, differentiate the key actors, and review existing answers to the issue (Thusoo et al. 2010). The objectives of the task should then be transformed into Data Mining and Knowledge Discovery (DMKD) objectives and might include an initial selection of prospective data mining tools (Sen & Sinha 2005; Revels & Nussbaumer 2013; SAPinsider 2010).

Data comprehension forms the second phase of data mining. This stage involves gathering the data and selecting the information that will be needed, including its format and size. Once this foundation exists, the various features of the data can be examined for relevance (Fawcett & Provost 2015). The value of the information is reviewed against the DMKD objectives. It is important to review aspects such as missing values and the credibility of prior expectations.

Data preparation is the third phase of data mining. The success of the entire knowledge discovery process depends on it, and it consumes nearly half of the whole project effort (Foreman 2013). During this phase, the administrators select the information that will be used as input to data mining (Prabhu 2006; Oracle 2002). It might involve assessing data, linking it, and establishing its significance, as well as cleaning the data, for instance by reviewing the integrity of information records or adjusting for noise (Finlay 2014). The cleansed data can then be processed further by selecting attributes and applying calculations that reduce dimensionality, by deriving new attributes, and by addressing gaps in the information. The result is a new set of records satisfying the explicit information needs of the next phase (Boyd & Crawford 2012). The data preparation for the organization is represented by the diagram below.

The data transformation phase involves operations that affect the success of the mining process. Smoothing is done to remove noise from the data (Inmon et al. 2008; Madden 2016). Aggregation applies operations that roll the weekly financial data up into annual financial data. Generalization replaces low-level data with higher-level concepts with the assistance of concept hierarchies. Normalization scales attribute data up or down (Makkes et al. 2013). Attribute construction derives new attributes from a specified set of attributes that are significant for data mining (Demchenko et al. 2013). The outcome of this process is a final dataset that can be utilized for modeling.
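The transformation operations above can be sketched as small functions: a moving average for smoothing, a sum for aggregation, and min-max scaling for normalization. The sales figures come from the dataset used in this report:

```python
def aggregate(values):
    """Aggregation: roll lower-level figures (e.g. weekly) up to a total."""
    return sum(values)

def min_max_normalize(values):
    """Normalization: rescale attribute values into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def moving_average(values, window=3):
    """Smoothing: simple moving average to damp noise."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

sales = [7_603_250_000, 8_543_982_000, 8_665_687_000]  # FY2017-FY2019
scaled = min_max_normalize(sales)
```

After these transforms, the attributes are on comparable scales, which is what the modeling phase described next expects.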

In the modeling stage, data patterns are determined using mathematical models. Based on the business objectives, appropriate modeling techniques are selected for the dataset (Inmon 1996; Hobbs et al. 2005). A test scenario is created to review the validity and quality of the model. The model is then run on the financial dataset of Sony Corporation (SAS 2016). The outcomes are assessed by the stakeholders to guarantee that the model satisfies the objectives of data mining.

In the evaluation phase, the results of the model are assessed against the business objectives set in the earlier phase, during which new business requirements might be raised. The outcomes of the model should be assessed (Prescott 2014). New business requirements may arise because of the new patterns found in the outcomes or from other aspects, since understanding the business is an iterative process in data mining. A go or no-go decision must be made in this phase before shifting to the deployment stage.

In the deployment stage, the data mining discoveries are shipped into daily business operations. The knowledge discovered during the data mining process ought to be made accessible to non-technical stakeholders (Arif & Mujtaba 2015; Ghost et al. 2015). An in-depth deployment plan covering the shipping, maintenance, and monitoring of the data mining discoveries is created (McKinsey Global Institute 2016). A final project report is then written with the lessons learned and key experiences gained during the project. This helps to enhance the business policy of the organization.

Descriptive Statistics of Business Performance

Descriptive data analysis forms the starting point for in-depth statistical analysis and provides researchers with a general perspective of the data. Descriptive statistics were used to demonstrate the business performance of Sony Corporation. The data on sales and gross profits were used as operational variables for assessing the business performance of the organization. A proper understanding of cause-and-effect associations can help the organization redefine its approaches and expand its client base over time. This descriptive assessment provides the analysis team with helpful insights into the data.

The table shows the descriptive statistics of Sony Corporation. The gross profit of the organization in 2017, 2018, and 2019 was $1,940,096,000, $2,313,560,000, and $2,402,491,000 respectively. There are therefore three observations in the sample of gross profits over the three years. In these outcomes, the mean of the gross profits is $2,218,715,667, and the median is $2,313,560,000. The data appears to be skewed to the left, which explains why the mean is smaller than the median. The standard deviation is $245,354,579.5. With normally distributed data, the majority of observations fall within 3 standard deviations on either side of the mean.

Gross Profits

Mean                       2,218,715,667
Standard Error             141,655,532.5
Median                     2,313,560,000
Mode                       N/A
Standard Deviation         245,354,579.5
Sample Variance            6.01989E+16
Kurtosis                   undefined (n = 3)
Skewness                   -1.479586582
Range                      462,395,000
Minimum                    1,940,096,000
Maximum                    2,402,491,000
Sum                        6,656,147,000
Count                      3
Confidence Level (95.0%)   609,494,563.7
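The headline figures in the table can be reproduced with Python’s statistics module from the three gross profit values:

```python
import statistics

# Gross profits for FY2017-FY2019, as reported in the table above.
gross_profits = [1_940_096_000, 2_313_560_000, 2_402_491_000]

mean = statistics.mean(gross_profits)
median = statistics.median(gross_profits)
stdev = statistics.stdev(gross_profits)   # sample standard deviation
skewed_left = mean < median               # mean below median suggests left skew
```

The same calls applied to the sales figures reproduce the second table.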

The table shows the descriptive statistics of gross sales at Sony Corporation. The gross sales of the organization in 2017, 2018, and 2019 were $7,603,250,000, $8,543,982,000, and $8,665,687,000 respectively. Thus, there are three observations in the sample of gross sales over the three years. In these outcomes, the mean of the gross sales is $8,270,973,000 and the median is $8,543,982,000. The data appears to be skewed to the left, which explains why the mean is smaller than the median. The standard deviation is $581,458,107.1. With normally distributed data, the majority of observations fall within 3 standard deviations on either side of the mean.

Sales

Mean                 8,270,973,000
Standard Error       335,704,994.7
Median               8,543,982,000
Mode                 N/A
Standard Deviation   581,458,107.1
Sample Variance      3.38094E+17
Kurtosis             undefined (n = 3)
Skewness             -1.647073782
Range                1,062,437,000
Minimum              7,603,250,000
Maximum              8,665,687,000
Sum                  24,812,919,000
Count                3


Organizational Forecast Report Based On Inferential Data Analysis

The output below shows the regression of the operational variable of gross sales on year.

SUMMARY OUTPUT

Regression Statistics
Multiple R           0.998378
R Square             0.996758
Adjusted R Square    0.496758
Standard Error       5.78E+08
Observations         3

ANOVA
             df   SS         MS         F            Significance F
Regression   1    2.05E+20   2.05E+20   614.9276534  0.025658594
Residual     2    6.68E+17   3.34E+17
Total        3    2.06E+20

            Coefficients   Standard Error   t Stat     P-value      Lower 95%     Upper 95%
Intercept   0              N/A              N/A        N/A          N/A           N/A
year        4098685        165284.7         24.79773   0.001622251  3387522.868   4809848
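Because the intercept in the output above is constrained to zero, the ‘year’ coefficient is the through-origin least-squares slope, sum(x·y) / sum(x²), and it can be reproduced from the sales figures:

```python
# Gross sales for FY2017-FY2019, as reported in the dataset above.
years = [2017, 2018, 2019]
sales = [7_603_250_000, 8_543_982_000, 8_665_687_000]

# With the intercept forced to zero, the least-squares slope reduces to
# sum(x*y) / sum(x^2), which is what the regression output reports for 'year'.
slope = sum(x * y for x, y in zip(years, sales)) / sum(x * x for x in years)
```

Forcing the intercept to zero is a strong assumption for data of this kind, which is one reason the Adjusted R Square in the output looks inconsistent with the R Square.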

The figure shows the bar chart of the gross sales of Sony Corporation in the financial years 2017, 2018, and 2019.

The table below shows the ANOVA test of the gross sales for the financial years 2017, 2018, and 2019.

Anova: Two-Factor Without Replication

SUMMARY     Count   Sum    Average   Variance
7.6E+09     1       2017   2017      #DIV/0!
8.54E+09    1       2018   2018      #DIV/0!
8.67E+09    1       2019   2019      #DIV/0!
year        3       6054   2018      1

ANOVA
Source of Variation   SS   df   MS      F       P-value   F crit
Rows                  2    2    1       65535   #NUM!     #NUM!
Columns               0    0    65535   65535   #NUM!     #NUM!
Error                 0    0    65535
Total                 2    2


Evaluation of Statistical Models

The specific statistical model to be adopted in the data mining procedure is a logistic regression model, which forecasts a dependent variable by assessing its association with one or more independent variables (Allison 2005; Witten & Frank 2002; Phyu 2009; Ge et al. 2017; Holzinger & Jurisica 2014; Lison 2015; Witten et al. 2005). For instance, logistic regression could be used to forecast the financial performance of Sony Corporation in terms of liquidity, solvency, profitability, and productivity.
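As a minimal sketch of the technique, logistic regression can be fitted by gradient descent on a toy, clearly separable dataset. The inputs and labels below are illustrative assumptions, not Sony data:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit p(y=1|x) = sigmoid(w*x + b) with per-sample gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x   # gradient of the log-loss w.r.t. w
            b -= lr * (p - y)       # gradient of the log-loss w.r.t. b
    return w, b

# Toy data: x is a scaled performance ratio, y = 1 if a target was met.
xs = [0.2, 0.4, 0.6, 0.8, 1.0, 1.2]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)

p_high = sigmoid(w * 1.1 + b)   # ratio well above the boundary
p_low = sigmoid(w * 0.3 + b)    # ratio well below the boundary
```

In practice a library implementation would be used, but the sketch shows the core idea: the fitted curve maps a financial ratio to a probability of meeting a performance target.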

The resulting analytical model can be constructed using many input criteria. In the context of financial performance, the model could consider aspects such as current assets, owners’ equity, and current liabilities. Data analysts use logistic regression to approximate the likelihood of events, such as evaluating the association between characteristics and the probability of a certain result. Sony Corporation can use the insights from the model to improve its business strategies in order to decrease expenses and increase revenues and profits.

Multivariate analysis of variance (MANOVA) is essentially an ANOVA with multiple dependent variables. ANOVA tests for differences in means between two or more groups, whereas MANOVA tests for differences between two or more vectors of means (French et al. 2008; Keselman et al. 1998; Warne 2014).

Task 6

In the modern age, consumers generate enormous volumes of data daily, and businesses want to gather, store, and analyze it for hyper-personalization and for sending relevant offers. Data visualization is an essential ingredient for bringing the power of big data into the mainstream. It is difficult to see the behavior of data when it is left in a database, but an obvious picture of it can be obtained using charts or graphs (Fiaz et al. 2016; Wang et al. 2015; Simon 2014; Keim et al. 2013). Data visualization enables the interpretation of data and lets the analyst vary distinct variables to observe their impacts, and technology is increasingly making it easy to do so. The most appropriate data visualizations of business performance are the ones that expose something new about the underlying patterns and associations in the data. Data visualization is characterized by demonstrating the big picture rapidly and simply, ready for further action (Ali et al. 2016; Gorodov & Gubarev 2013).

Summary data analysis is used to compute the key indicators, including the yearly operating income, yearly net profit, and annual consumption costs, which are the figures that most concern decision-makers. From the headline range of the dashboard, Sony Corporation garnered total revenue of 278 million, and the dashboard allows percentages to be shown, so the profit margin and consumption cost ratio are displayed automatically.

Development trend analysis is often used to visually compare the financial operating data of Sony Corporation over a continuous interval and to grasp the trend of the organization’s financial performance in the financial years 2017, 2018, and 2019. It makes it possible to analyze whether there is an abnormality in the increase or decrease of the data, and it can assist the enterprise in forecasting its future financial state and judging its development prospects. A line chart will demonstrate the organization’s development trend: the horizontal axis represents the day, month, or year, and the vertical axis represents the indicators, including costs and operating income.

The pie chart and column chart show the gross sales of the organization from financial year 2017 to 2019. The gross sales of the organization were $7,603,250,000, $8,543,982,000, and $8,665,687,000 in the financial years 2017, 2018, and 2019 respectively. This implies that gross sales in the company are increasing. The increase in sales over the past three financial years is significant to investors because it indicates that demand for the products and services of Sony Corporation will keep increasing in the future. It is important to differentiate between acquisitive and organic sales growth, and growth rates vary by company size and industry.
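In the absence of the original chart, a minimal text rendering of the sales bars can be sketched from the same figures:

```python
sales = {2017: 7_603_250_000, 2018: 8_543_982_000, 2019: 8_665_687_000}

def text_bar_chart(data, width=40):
    """Render a horizontal bar chart in plain text, scaled to the largest value."""
    peak = max(data.values())
    lines = []
    for label, value in sorted(data.items()):
        bar = "#" * round(width * value / peak)
        lines.append(f"{label} | {bar} {value / 1e9:.2f}B")
    return "\n".join(lines)

chart = text_bar_chart(sales)
```

The lengthening bars make the year-on-year growth in gross sales immediately visible, which is the point of the visualization discussed above.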

The diagram below is a radar chart illustrating the gross profits and sales of the organization. The greater the gross profit margin, the better: a high gross profit margin demonstrates that the organization is succeeding in administering its cost of sales, and that it has enough left over to cover financing, operating, and other expenses. The gross profit margin might be enhanced by increasing the sales price or cutting the cost of sales. However, such measures might have negative impacts, including a reduction in sales volumes because of the increased prices, or low product quality due to the decreased expenses. The gross profit margin should also be comparatively stable except where there is a substantial change to the business model of Sony Corporation.
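The gross profit margins discussed above can be computed directly from the gross profit and sales figures reported earlier:

```python
# Gross profit and sales for FY2017-FY2019, from the descriptive tables above.
gross_profit = {2017: 1_940_096_000, 2018: 2_313_560_000, 2019: 2_402_491_000}
sales = {2017: 7_603_250_000, 2018: 8_543_982_000, 2019: 8_665_687_000}

# Gross profit margin = gross profit / sales, per financial year.
margins = {year: gross_profit[year] / sales[year] for year in sales}
```

The margin rises from roughly 25.5% in 2017 to 27.7% in 2019, consistent with the stability and improvement described above.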


Recommendations

Business analytics is evolving into a powerful tool for organizations such as Sony Corporation to enhance their business performance in terms of client retention, customer service, and acquisition (Cheng et al. 2012; Evans & Lindner 2012; Wixom et al. 2013; Sharma et al. 2014). Predictive analytics assists in predicting financial performance in terms of liquidity, solvency, productivity, and profitability over the next five years (Liu 2014; Gunasekaran et al. 2017; Hazen et al. 2016; Gunasekaran et al. 2018).

It is important to define the objectives. To get the most out of their data teams, corporations should define their objectives before starting the analysis, establish a strategy that follows the needs of Sony Corporation rather than the hype, and describe clear key performance indicators (KPIs). Even though there are many KPIs that one could select from, it is suggested to avoid overdoing it and to focus on the most significant ones within the industry (Alwaer et al. 2010; Bhatti et al. 2014; Shahin & Mahbod 2007).

It is important to analyse and understand the dataset of Sony Corporation. That might seem obvious, but it bears mentioning: after establishing the frame of the questions to answer, one must read through the collected data to obtain the meaningful insights that lead to data-driven business decisions. User feedback is a significant tool for conducting more in-depth analyses of the client experience and deriving actionable insights, and to do this successfully it is important to have context. By assessing the responses in the open comments of the feedback form, it becomes easier to understand why the company is not performing well after deriving the operational variables of gross profits and sales for financial years 2017, 2018, and 2019.

It is recommended to present the data in a meaningful way. Digging out and gleaning insights is good, but managing to tell the story of the discoveries and convey their message is better. It is important to ensure that the insight does not remain untapped and gathering dust, and that it will be applied to future decision making. With the assistance of data visualization software such as Microsoft Excel, no specialist is required to build and customize a powerful online dashboard that tells the data story and helps the team and management make appropriate data-driven business decisions. An overview presented on a financial dashboard will also summarize the business performance of Sony Corporation. With the top KPIs, including income statements, gross profit margins, and operating expense ratios, the dashboard will allow quick decision making while focusing on real-time data.

When deploying a data source, it is vital to choose the server or cluster as the deployment target. When a data source is deployed to a single server, WebLogic Server creates one instance of the data source on that server, including its pool of database connections (Barnes et al. 2010; White et al. 2008; Mountjoy & Chugh 2004). When a data source is deployed to a cluster, WebLogic Server creates an instance of the data source on every server in the cluster (Jacobs 2003; Nyberg et al. 2004).
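
The deployment target described above is recorded declaratively in the domain configuration. A sketch of such an entry, assuming hypothetical resource and cluster names, might look like the following; targeting the cluster is what causes an instance of the data source to be created on every server in it.

```xml
<!-- Hypothetical excerpt from a WebLogic domain config.xml. The <target>
     element chooses where the data source is deployed: a cluster name here
     deploys it to every server in that cluster, a single server name to
     that server only. Names below are placeholders. -->
<jdbc-system-resource>
  <name>SonyReportingDS</name>
  <target>ReportingCluster</target>
  <descriptor-file-name>jdbc/SonyReportingDS-jdbc.xml</descriptor-file-name>
</jdbc-system-resource>
```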


Bibliography

Adam, M.H.M., 2014. Evaluating the Financial Performance of Banks using financial ratios-A case study of Erbil Bank for Investment and Finance. European Journal of Accounting Auditing and Finance Research, 2(6), pp.162-177.

Ali, S.M., Gupta, N., Nayak, G.K. and Lenka, R.K., 2016, December. Big data visualization: Tools and challenges. In 2016 2nd International Conference on Contemporary Computing and Informatics (IC3I) (pp. 656-660). IEEE.

Allen, R.S. and Helms, M.M., 2006. Linking strategic practices and organizational performance to Porter’s generic strategies. Business Process Management Journal.

Allison, L., 2005. Models for machine learning and data mining in functional programming. Journal of Functional Programming, 15(1), pp.15-32.

Alwaer, H. and Clements-Croome, D.J., 2010. Key performance indicators (KPIs) and priority setting in using the multi-attribute approach for assessing sustainable intelligent buildings. Building and environment, 45(4), pp.799-807.

Amorosi, F., 2019. Data Integrity Assurance as Key Factor for Surviving Corporate Audits and Regulatory Inspections. Global Provider, Local Solutions in Your Language, PQE (October 2017), p. 8.

Arif, M., and Mujtaba, G., 2015. A survey: data warehouse architecture. International journal of hybrid information technology. Vol. 8, No. 5, pp. 349-356.

Ayuso, S., Rodríguez, M.A., García-Castro, R. and Ariño, M.A., 2014. Maximizing stakeholders’ interests: An empirical analysis of the stakeholder approach to corporate governance. Business & society, 53(3), pp.414-439.

Aziz, A.A., Jusoh, J.A., Hassan, H., Idris, W., Rizhan, W.M., Zulkifli, M., Putra, A., Yusof, M. and Anuwar, S., 2014. A Framework For Educational Data Warehouse (Edw) Architecture Using Business Intelligence (Bi) Technologies. Journal of Theoretical & Applied Information Technology, 69(1).

Barnes, T., Messinger, A., Parkinson, P., Ganesh, A., Shegalov, G., Narayan, S. and Kareenhalli, S., 2010, March. Logging last resource optimization for distributed transactions in Oracle WebLogic server. In Proceedings of the 13th International Conference on Extending Database Technology (pp. 651-656).

Bertola, P. and Teixeira, J.C., 2003. Design as a knowledge agent: How design as a knowledge process is embedded into organizations to foster innovation. Design Studies, 24(2), pp.181-194.

Bhatti, M.I., Awan, H.M. and Razaq, Z., 2014. The key performance indicators (KPIs) and their impact on overall organizational performance. Quality & Quantity, 48(6), pp.3127-3143.

Bhimani, A. and Langfield-Smith, K., 2007. Structure, formality and the importance of financial and non-financial information in strategy development and implementation. Management accounting research, 18(1), pp.3-31.

Boothroyd, G., 1994. Product design for manufacture and assembly. Computer-Aided Design, 26(7), pp.505-520.

Boyd, D., & Crawford, K. 2012. Critical questions for big data: Provocations for a cultural, technological, and scholarly phenomenon. Information, communication & society, 15(5), 662-679.

Chen, H., Chiang, R.H. and Storey, V.C., 2012. Business intelligence and analytics: From big data to big impact. MIS quarterly, pp.1165-1188.

Collinson, S., 1993. Managing product innovation at sony: the development of the data discman: Cutting-edge technologies. Technology Analysis & Strategic Management, 5(3), pp.285-306.

DeBettencourt, J., Yamane, S. and Lara, M., SERVICE INTEGRITY Inc, 2005. Techniques for filtering data from a data stream of a web services application. U.S. Patent Application 10/649,160.

Demchenko, Y., Ngo, C. & Membrey, P., 2013. Architecture Framework and Components for the Big Data Ecosystem. System and Network Engineering Group, [online]. University of Amsterdam. Available at http://www.uazone.org/demch/worksinprogress/sne-2013-02-techreport-bdaf-draft02.pdf  Retrieved on 1 December, 2016

Ethics, 2019. Data protection and research ethics – University of Leicester. [Online] Available at <https://www2.le.ac.uk/services/research-data/create-data/dp-ethics>[Accessed 8 Mar. 2019].

Evans, J.R. and Lindner, C.H., 2012. Business analytics: the next frontier for decision sciences. Decision Line, 43(2), pp.4-6.

Fawcett, T. and Provost, F., 2013. Data science for business, O’Reilly Media Inc.

Fiaz, A.S., Asha, N., Sumathi, D. and Navaz, A.S., 2016. Data visualization: Enhancing big data more adaptable and valuable. International Journal of Applied Engineering Research, 11(4), pp.2801-2804.

Finlay, S., 2014. Predictive analytics, data mining and big data: Myths, misconceptions and methods. Springer.

Foreman, J.W., 2013. Data smart: Using data science to transform information into insight. John Wiley & Sons.

French, A., Macedo, M., Poulsen, J., Waterson, T. and Yu, A., 2008. Multivariate analysis of variance (MANOVA).

Ge, Z., Song, Z., Ding, S.X. and Huang, B., 2017. Data mining and analytics in the process industry: The role of machine learning. Ieee Access, 5, pp.20590-20616.

Gershon, R.A. and Kanayama, T., 2002. The sony corporation: A case study in transnational media management. International Journal on Media Management, 4(2), pp.105-117.

Ghosh, R., Halder, S., and Sen., S., 2015. An integrated approach to deploy data warehouse in business intelligence environment. The Third International Conference on Computer, Communication, Control and Information Technology (IEEE C3IT).

Gorodov, E.Y.E. and Gubarev, V.V.E., 2013. Analytical review of data visualization methods in application to big data. Journal of Electrical and Computer Engineering, 2013.

Gunasekaran, A., Papadopoulos, T., Dubey, R., Wamba, S.F., Childe, S.J., Hazen, B. and Akter, S., 2017. Big data and predictive analytics for supply chain and organizational performance. Journal of Business Research, 70, pp.308-317.

Gunasekaran, A., Yusuf, Y.Y., Adeleye, E.O. and Papadopoulos, T., 2018. Agile manufacturing practices: the role of big data and business analytics with multiple case studies. International Journal of Production Research, 56(1-2), pp.385-397.

GxP, 2019. Data Integrity in a GxP-regulated Environment – Pauwels Consulting. [Online] Available at: <https://www.slideshare.net/PauwelsConsulting/data-integrity-in-a-gxpregulated-environment-pauwels-consulting-academy>[Accessed 8 Mar. 2019].

Hazen, B.T., Skipper, J.B., Ezell, J.D. and Boone, C.A., 2016. Big data and predictive analytics for supply chain sustainability: A theory-driven research agenda. Computers & Industrial Engineering, 101, pp.592-598.

Hobbs, L., Hillson, S., Lawande, S., Smith, P., 2005. Oracle Database 10g Data Warehousing, Elsevier.

Holzinger, A. and Jurisica, I., 2014. Knowledge discovery and data mining in biomedical informatics: The future is in integrative, interactive machine learning solutions. In Interactive knowledge discovery and data mining in biomedical informatics (pp. 1-18). Springer, Berlin, Heidelberg.

Inmon W., Strauss D., Neushloss G., 2008. DW 2.0. The Architecture for the next generation of data warehousing, Morgan Kaufmann.

Inmon, W. H., 1996. Building the data warehouse, New York ea: John Wiley & Sons.

Isaac W. and Dixon A., 2017. Why big-data analysis of police activity is inherently biased. The Conversation. [Online] Available at https://theconversation.com/why-big-data-analysis-of-police-activity-is-inherently-b… [Accessed 17/3/2019].

Jacobs, D., 2003, January. Distributed Computing with BEA WebLogic Server. In CIDR.

Jaya, M., Sidi, F., Ishak, I., Affendey, L., And Jabar, M., 2019. A Review Research in Data Quality in Achieving High Data Quality Within Organization. Theoretical Journal and Applied Information Technology, 95(12), pp. 1-11.

Jorion, P. and Talmor, E., 2001. Value relevance of financial and non financial information in emerging industries: The changing role of web traffic data. London Business School Accounting Subject Area, (021).

Junita, I. and Magdalena, N., 2014. Maximizing stakeholders value through Good University Governance (GUG) in higher education institutions.

Keim, D., Qu, H. and Ma, K.L., 2013. Big-data visualization. IEEE Computer Graphics and Applications, 33(4), pp.20-21.

Keselman, H.J., Huberty, C.J., Lix, L.M., Olejnik, S., Cribbie, R.A., Donahue, B., Kowalchuk, R.K., Lowman, L.L., Petoskey, M.D., Keselman, J.C. and Levin, J.R., 1998. Statistical practices of educational researchers: An analysis of their ANOVA, MANOVA, and ANCOVA analyses. Review of educational research, 68(3), pp.350-386.

Lison, P., 2015. An introduction to machine learning. Language Technology Group: Edinburgh, UK.

Liu, Y., 2014. Big data and predictive business analytics. The Journal of Business Forecasting, 33(4), p.40.

Luján-Mora, S., Vassiliadis, P. and Trujillo, J., 2004, November. Data mapping diagrams for data warehouse design with UML. In International Conference on Conceptual Modeling (pp. 191-204). Springer, Berlin, Heidelberg.

Lydia, E.L. and Swarup, M.B., 2015. Big data analysis using hadoop components like flume, mapreduce, pig and hive. International Journal of Science, Engineering and Computer Technology, 5(11), p.390.

Madden, S., 2016. Mesa takes data warehousing to new heights, Communications of the ACM, 59, 7, pp 116.

Makkes, M., Ngo, C., Demchenko, Y., Strijkers, R., Meijer, R. & Laat, C., 2013. Defining Intercloud Federation Framework for Multi-provider Cloud Services Integration. The Fourth International Conference on Cloud Computing, GRIDs, and Virtualization (Cloud Computing 2013), Valencia, Spain 27 May-1 June 2013.

McKinsey Global Institute, 2016. The age of analytics: competing in a data-driven world [online]. McKinsey & Company Global Institute. Available at http://www.mckinsey.com/business-functions/mckinsey-analytics/our-insights/the-age-o…  Retrieved 10 January 2017.

Megaladevi, P., 2015. A study on financial performance analysis of the select Paper Company with special reference to Tamil Nadu Newsprint and papers limited. International Journal of Recent Research Aspects, 2(4), pp.22-24.

Moiin, H. and Pruscino, A., Oracle America Inc, 2001. Data integrity and availability in a distributed computer system. U.S. Patent 6,192,483.

Mountjoy, J. and Chugh, A., 2004. WebLogic: The Definitive Guide: Development, Deployment & Maintenance. O’Reilly Media, Inc.

Murray, A.I., 1988. A contingency view of Porter’s “generic strategies”. Academy of management review, 13(3), pp.390-400.

Nyberg, G., Patrick, R., Bauerschmidt, P., McDaniel, J. and Mukherjee, R., 2004. Mastering BEA WebLogic Server: best practices for building and deploying J2EE applications. John Wiley & Sons.

O’Farrell, P., Hitchens, D. and Moffat, L., 1992. Does strategy matter? An analysis of generic strategies and performance in business service firms. Business Strategy Review, 3(1), pp.71-87.

Oracle, 2002. Oracle9i data warehousing guide. Available at: https://docs.oracle.com/cd/B10500_01/server.920/a96520/title.htm  [Accessed 24 January 2017].

Orens, R. and Lybaert, N., 2010. Determinants of sell‐side financial analysts’ use of non‐financial information. Accounting and Business Research, 40(1), pp.39-53.

Pérez Carrillo, E., 2007. Corporate Governance: Shareholders’ Interests and Other Stakeholders’ Interests. Corporate Ownership & Control, 4(4).

Phyu, T.N., 2009, March. Survey of classification techniques in data mining. In Proceedings of the International MultiConference of Engineers and Computer Scientists (Vol. 1, pp. 18-20).

Prabhu, C.S.R., 2006. Data Warehousing. Concepts, Techniques, Products and Applications, Prentice Hall.

Prescott, M. E., 2014. Big data and competitive advantage at Nielsen. Management decision, Vol. 52, Issue 3, pp. 573-601

Rahm, E. and Do, H.H., 2000. Data cleaning: Problems and current approaches. IEEE Data Eng. Bull., 23(4), pp.3-13.

Research, E., 2019. Research ethics and data protection. Records Management and Information Security, p. 1.

Revels, M., and Nussbaumer, H., 2013. Data mining and data warehousing in the airline industry, Academy of Business Research Journal, 3, pp69-82

SAPinsider, 2010. Federated Vs. Centeralized Vs. De-centeralized Data warehouse. Blog by Dr Berg. Available at http://sapinsider.wispubs.com/Assets/Blogs/2010/June/Federated-Vs-dot-Centeralized-Vs-dot-De-centeralized-Data-warehouse  [Accessed 24 January 2017].

SAS, 2016. Big Data. What it is and why it matters. [online]. Available at http://www.sas.com/en_us/insights/big-data/what-is-big-data.html  Retrieved 10 January 2017.

Schmidgall, R.S. and DeFranco, A., 2016. How to best use financial ratios in benchmarking and decision making in clubs: Review of the decade 2003–2012. International Journal of Hospitality & Tourism Administration, 17(2), pp.179-197.

Schuha, G., Potentea, T., Thomasa, C. and Brambring, F., 2019. Improving Data Integrity in Production Control. [Online] Available at: <https://www.researchgate.net/publication/270981447_Improving_Data_Integrity_in_Production_Control>[Accessed 8 Mar. 2019].

Sen, A., and Sinha, A., 2005. A comparison of data warehousing methodologies, Communications of the ACM, 48, 3, pp79-84.

Shahin, A. and Mahbod, M.A., 2007. Prioritization of key performance indicators. International Journal of Productivity and Performance Management.

Sharma, R., Mithas, S. and Kankanhalli, A., 2014. Transforming decision-making processes: a research agenda for understanding the impact of business analytics on organisations.

Shaweta, 2014. A review on designing of distributed data warehouse and new trends in distributed data warehousing, International Journal of Computer Science and Information Technologies, 5(2), pp.1692-1695.

Sikora, M., 2004. Data cleaning and transformation-the first stage of data mining process. Studia Informatica, 25(2), pp.127-136.

Simon, P., 2014. The visual organization: Data visualization, big data, and the quest for better decisions. John Wiley & Sons.

Stahl, B.C., 2004. Responsibility for information assurance and privacy: A problem of individual ethics?. Journal of Organizational and End User Computing (JOEUC), 16(3), pp.59-77.

Stephanedes, Y.J. and Chassiakos, A.P., 1993. Application of filtering techniques for incident detection. Journal of transportation engineering, 119(1), pp.13-26.

Su, X. and Khoshgoftaar, T.M., 2009. A survey of collaborative filtering techniques. Advances in artificial intelligence, 2009.

Thusoo, A. et al., 2010. “Data warehousing and analytics infrastructure at Facebook” Proc. of the 2010 ACM SIGMOD International Conference on Management of Data, pp. 1013-1020. Available at: http://borthakur.com/ftp/sigmodwarehouse2010.pdf [Accessed 21 May 2018]

Vanclay, F., Baines, J.T. and Taylor, C.N., 2013. Principles for ethical research involving humans: ethical professional practice in impact assessment Part I. Impact Assessment and Project Appraisal, 31(4), pp.243-253.

Vassiliadis, P., Vagena, Z., Skiadopoulos, S., Karayannidis, N. and Sellis, T., 2000. ARKTOS: A tool for data cleaning and transformation in data warehouse environments. IEEE Data Eng. Bull., 23(4), pp.42-47.

Wang, L., Wang, G. and Alexander, C.A., 2015. Big data and visualization: methods, challenges and technology progress. Digital Technologies, 1(1), pp.33-38.

Wang, R., Storey, V. and Firth, C., 2019. An Analysis of Data Quality Research Framework. IEEE Transactions on Data Engineering Knowledge, Vol. 7(No 4), pp. 1-18.

Warne, R.T., 2014. A Primer on Multivariate Analysis of Variance (MANOVA) for Behavioral Scientists. Practical Assessment, Research & Evaluation, 19.

Webb, T., 2019. Data Integrity for Computer Systems: 10 Steps to Compliance. Bayer Animal Health, August 2017, pp. 1-25.

Wei, C.C., Fu, C.C. and Wei, C.S., 2016. Satisfaction maximizing model for managing project stakeholders. Research Journal of Business and Management, 3(4), pp.314-322.

White, S., Alves, A. and Rorke, D., 2008, July. WebLogic event server: a lightweight, modular application server for event processing. In Proceedings of the second international conference on Distributed event-based systems (pp. 193-200).

Witten, I.H. and Frank, E., 2002. Data mining: practical machine learning tools and techniques with Java implementations. Acm Sigmod Record, 31(1), pp.76-77.

Witten, I.H., Frank, E. and Hall, M.A., 2005. Practical machine learning tools and techniques. Morgan Kaufmann, p.578.

Wixom, B.H., Yen, B. and Relich, M., 2013. Maximizing Value from Business Analytics. MIS Quarterly Executive, 12(2).

Wolsk, M., Howard, L. and Richardson, J., 2019. A Trust Framework for Online Research Data Services. Information Services, Griffith University, pp. 1-16.

Zhang, S., Zhang, C. and Yang, Q., 2003. Data preparation for data mining. Applied artificial intelligence, 17(5-6), pp.375-381.

Zhu, D.X. and Zong, S., 2012. Towards the Use of both Financial and Non-financial Data for Decision Making: A Conceptual Framework for Federated Database Systems. International Journal of Business and Social Science, 3(2).
