Different sources of secondary and primary data; differences in scales, sampling, and questionnaires in research

Introduction

There are different methodological approaches and tools to use in research. It is important to take field notes during research, as they act as discussion notes that bring in new ideas. Research is used to establish or confirm facts and to reaffirm the results of other people's work. This essay describes different sources of secondary and primary data and the differences in scales, sampling, and questionnaires in research. Observational research, or field research, is a type of correlational research in which a researcher observes a behavior or phenomenon without making contact. Observational research can be classified into different varieties depending on the extent to which the researcher controls the environment, and it is most common in marketing. The types include non-participant observation and participant observation. Where a researcher decides to be a non-participant, he or she does not intervene in the environment and thus studies it as it naturally occurs. In participant observation, the researcher joins the group, mostly by taking part in the activities being carried out. This helps the researcher…
Research method and technique of data collection

A dissertation or research project is one of the requirements for a doctoral degree. For this assignment, I will review Glyde's (2019) dissertation, entitled Developing a Framework to Improve Ethical Decision-Making in Financial Institutions. The author used a specific research approach to conduct his work. Thus, in this paper, I will present the research method and technique of data collection, and I will also argue for the justification of these tools.

Research question and proposed research method

The research question Glyde (2019) chose is: What are the life experiences of U.S. financial institution employees that could assist organizational development and equip managers with improved ethical decision making in financial institutions? Based on the research question, the author used web-conference-supported semi-structured interviews guided by open-ended questions to reveal new experiences and themes. The author considered three research approaches: quantitative, qualitative, and mixed methods. He found the quantitative and mixed-method approaches unfavorable due to the form of data collection. For example, data were collected using open-ended questions, while the quantitative…
Data packets

Data packets usually travel to and from numbered network ports linked with specific IP addresses and endpoints, using the TCP or UDP transport-layer protocols. All ports are potentially at risk of attack; no port is natively secure. The risk comes from the version of the service running on the port and whether it has been configured correctly. There are 65,535 TCP ports and another 65,535 UDP ports, and here we will focus on some of the diciest ones. TCP 21 connects FTP servers to the internet. FTP servers carry various vulnerabilities, such as anonymous authentication capabilities, directory traversals, and cross-site scripting, making port 21 an ideal target. While some network ports make suitable entry points for attackers, others make better escape routes. TCP/UDP port 53, used for DNS, provides an exit strategy: once criminal hackers within the network have their prize, all they need to do to get it out the door is use readily available software that turns the data into DNS traffic. The more commonly used a port is, the easier it can be to sneak attacks in…
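To make this concrete, here is a minimal sketch (assuming Python and a host one is authorized to test; the host and port list below are illustrative choices, not taken from the text) that probes whether a few commonly attacked TCP ports accept connections:

```python
# A minimal sketch, not a full port scanner: probe a few commonly
# attacked TCP ports on a host you are authorized to test.
# The host and port list here are illustrative assumptions.
import socket

RISKY_TCP_PORTS = {21: "FTP", 22: "SSH", 23: "Telnet", 53: "DNS", 445: "SMB"}

def probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unreachable
        return False

if __name__ == "__main__":
    host = "127.0.0.1"  # replace with a host you own or may scan
    for port, service in RISKY_TCP_PORTS.items():
        state = "open" if probe(host, port) else "closed/filtered"
        print(f"{service:>6} (TCP {port}): {state}")
```

An open result alone does not prove a vulnerability; as noted above, the risk depends on the service version and its configuration.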
Putting Big Data analytics to work

Hello Adriaan Stander,

Good post. The procedures for determining the validity of a regression model comprise comparing the model's predictions and coefficients with theory, collecting new data to check the model's predictions, comparing outcomes with the model's theoretical calculations, and splitting or cross-validating the data, where part of the data is used to estimate the model's coefficients and the remaining portion is used to measure the model's prediction accuracy. It is also notable that data splitting is an effective procedure for model validation when collecting new data for model testing is not practical. The Duplex algorithm, developed by Kennard, is a good recommendation for dividing the data into an estimation set and a prediction set when there is no visible variable, such as time, to use as a basis for data splitting (Ludwig et al., 2015). …
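As a minimal sketch of validation by data splitting (assuming scikit-learn and synthetic data; a simple random split stands in for the Duplex algorithm mentioned above, which is not part of scikit-learn):

```python
# Validation by data splitting: fit coefficients on an estimation set,
# then measure prediction accuracy on a held-out prediction set.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real dataset.
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)

# Random 70/30 split (the Duplex algorithm would choose the sets
# more deliberately, to cover the predictor space evenly).
X_fit, X_hold, y_fit, y_hold = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = LinearRegression().fit(X_fit, y_fit)
print(f"R^2 on held-out prediction set: "
      f"{r2_score(y_hold, model.predict(X_hold)):.3f}")
```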
Data Mining Use Cases and Business Analytics Applications

Hello Cindy Odipo,

Great content. Regression models are frequently used for prediction, parameter estimation, and control. Often, the person using the model is not its initial developer; therefore, before the model is released to users, its validity needs to be assessed. Two distinct concerns arise: model adequacy and model validation. Model adequacy is an amalgamation of residual analysis, lack-of-fit testing, searching for high-leverage or overly influential observations, and other internal investigations of how well the regression model fits the present data. Model validation, on the other hand, focuses on examining whether or not the model will function as desired in its intended application (Hofmann et al., 2013). It is therefore worth noting that a model that fits the data well is also likely to be successful in its final application.

Reference

Hofmann, M. and Klinkenberg, R. (2013) RapidMiner: Data…
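A minimal sketch of the residual-analysis part of adequacy checking (assuming scikit-learn and matplotlib with synthetic data; this illustrates the general technique, not the cited authors' procedure):

```python
# Model adequacy via residual analysis: plot residuals against fitted
# values. An adequate fit scatters randomly around zero; funnels or
# curves suggest heteroscedasticity or a missing model term.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=300, n_features=3, noise=15.0, random_state=1)
model = LinearRegression().fit(X, y)
residuals = y - model.predict(X)

plt.scatter(model.predict(X), residuals, s=10)
plt.axhline(0, color="red", linewidth=1)
plt.xlabel("Fitted values")
plt.ylabel("Residuals")
plt.title("Residuals vs. fitted values")
plt.show()
```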
Raven Property Management Data Security Proposal

Background of the company

Raven is a limited liability company that governs the operation of a chain of retail supermarkets. A great deal of sensitive and confidential information is therefore entrusted to the company to keep safe. The data stored in its databases on behalf of loyal clients includes the names of directors, health status, passwords, account statements, statutory tax compliance accounts, and the creditworthiness of the retail companies. This kind of data calls for extra care when being handled by Raven to avoid possible loss, malicious attack, or even tampering. Responsibility for the security of this information therefore lies with a Security Consultant, in collaboration with other teams of staff within the organization (Safa & Von, 2016). Although the company has put security measures in place to control access to data, as explained above, there are still vulnerabilities and threats to customers' data, given that the field of technology is advancing day by day. For instance, they have not…
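One baseline control for the stored passwords mentioned above (a generic sketch using only the Python standard library; this is an assumption about good practice, not Raven's actual implementation) is to keep salted hashes rather than plain text:

```python
# Sketch: store customer passwords as salted PBKDF2 hashes, never as
# plain text, so a database leak does not directly expose credentials.
# This is a generic illustration, not Raven's actual practice.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, hash) for storage; the plain password is discarded."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, stored)  # constant-time compare

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("wrong guess", salt, stored))                   # False
```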
Data-driven Innovation at the Grupo Pellas SER Company

Reasons for BI System Implementation at SER

From the case study, SER decided to implement the BI system for several reasons. Primarily, the company sought to enhance its business operations by leveraging its existing systems with the new technology. Throughout its existence, SER had accumulated over 120 years of disorganized and unsystematic information that it could use to improve its decision-making processes (Zamora & Barahona, 2016). Historical data is essential in determining such elements as the cost trends of production, resource needs, and demand patterns for its various offerings. Notably, the emergence of big data and modern analytical capabilities enables organizations to draw statistics from such areas as transactions, communication, and other elements across the whole supply chain (Jeble, Kumari, & Patil, 2018). In this regard, decisions can be based on evidence from such past elements, which in many cases tend to be repetitive or highly similar. Additionally, SER implemented the BI system to keep track of its production processes by evaluating key deliverables and relevant influencing factors, and to gain the potential to…
Summary of the pitfalls of data-driven decisions, written by Megan MacGarvie and Kristina McElheran

Chapter 15

This chapter is about the pitfalls of data-driven decisions, written by Megan MacGarvie and Kristina McElheran. They state that managers are still likely to be vulnerable to an array of pitfalls when applying data to back up their hard decisions, even if they apply impressively large data sets, careful statistical methods, and the best analytics tools. There are different traps in the decision-making process, stemming from the fact that individuals do not carefully process every piece of data in every decision; they instead depend on simplified procedures. Megan and Kristina have provided three primary cognitive traps that normally skew decision making, even when people use the best data. The first cognitive trap, according to Megan and Kristina, is the confirmation trap. One is likely to fall into this trap when one pays more attention to findings that align with prior beliefs and ignores other facts and patterns in the data. This can happen even with small data…
Data Warehouse Design

Q1. A Comprehensive Understanding of the Concepts, Purposes, Architectures, Evolution, and Benefits of Data Warehouses

A data warehouse is a collection of business information and data derived from operational systems as well as external data sources. A data warehouse is created to support business decisions by permitting the consolidation, analysis, and reporting of data at diverse aggregate levels (Ross et al., 2014). Data is transferred into the DW through extraction, transformation, and loading. The concepts of data warehousing involve the integration of data stores and of logical, physical, and conceptual models in an effort to support business objectives and the information needs of the end user (Ross et al., 2014). Generating a DW entails mapping data between sources and targets, then capturing the details of that information in a metadata repository. The data warehouse offers a single, comprehensive source of historical and current information. There are three types of DW: the enterprise data warehouse, the data mart, and the operational data store. The enterprise DW is an integrated warehouse that offers decision support across an enterprise (Patel & Patel, 2012). It provides a unified method…
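As a minimal sketch of the extract-transform-load step described above (illustrative assumptions: a CSV file stands in for an operational source, SQLite for the warehouse, and the file and column names are hypothetical):

```python
# A toy ETL pipeline: extract rows from a source file, transform them
# into a conformed shape, and load them into a warehouse fact table.
# File name, column names, and table schema are hypothetical.
import csv
import sqlite3

def extract(path: str) -> list[dict]:
    """Read raw rows from the operational source (here, a CSV file)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    """Clean and conform the data: normalize names, cast amounts."""
    return [(r["order_id"], r["customer"].strip().title(), float(r["amount"]))
            for r in rows]

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Write conformed rows into the warehouse fact table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales_fact "
                 "(order_id TEXT PRIMARY KEY, customer TEXT, amount REAL)")
    conn.executemany("INSERT OR REPLACE INTO sales_fact VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect("warehouse.db")
load(transform(extract("orders.csv")), conn)
```

In a real DW, the source-to-target mappings used in transform() would be recorded in the metadata repository mentioned above.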
Interviews as a valuable tool for obtaining data in qualitative research

When undertaking qualitative research, three broad categories of data collection are utilized: interviews, participant observation, and personal documents. While all three instruments can enable a researcher to collect relevant data from participants, interviews are preferred in the current study as the most suitable tool. Interviews are a valuable tool for obtaining data in qualitative research because they allow researchers to interact with the participants and to observe non-verbal cues that may provide additional context about the phenomenon being investigated (Kothari, 2004). In this study, an unstructured interview technique will be utilized to enable an open, in-depth discussion of the research topic. This approach allows the researcher to comprehend the complexity of the situation without imposing any prior categorization. By selecting interviews as the technique for collecting data, the researcher aims to gain a more in-depth understanding of the participants' constructions through conversations as well as the language they use to advance different discourses. The most critical strength of this instrument is that…