Postpositivism
Positivism is a paradigm in which data are quantified, hypotheses are tested, and the aims are applied (Muijs, 2010). Reality is regarded as separate from the researcher, and the phenomena under study can be directly observed (Wimmer and Dominick, 2013). The positivist paradigm is linked with the view that social reality is objective and external. Axiologically, positivist research takes an etic stance, with the researcher maintaining objectivity towards the data collected. Epistemologically, accumulated data and facts constitute sufficient knowledge (Wahyuni, 2012). Positivism therefore seeks to resolve critical practical issues; it explores relationships through statistical analysis and searches for law-like generalizations.
The positivist paradigm thus developed as a technique that applies scientific methods to social phenomena and behavior. It is supported by empirical data and logical reasoning, with the supporting evidence presented as indisputable and authenticated (Creswell, 2013). Nevertheless, several limitations of the positivist paradigm make it inappropriate for exploratory research (Lederman and Abell, 2014). Many of these weaknesses stem from its reliance on naïve realism and purely empirical data. As a result, several modifications have been made to the positivist paradigm in contemporary research, establishing the post-positivist paradigm.
Unlike the positivist paradigm, the post-positivist paradigm recognizes that individual values and culture affect the researcher’s perceptions and, in turn, the research itself. Cultural values and personal opinions can influence the researcher both positively and negatively. The post-positivist paradigm therefore tries to reduce the negative weight of the researcher’s perceptions; in this respect, the researcher identifies with the participants and reflects on their collective beliefs and values.
This research relies primarily on the positivist approach as its research philosophy, while employing specific elements of the interpretive approach. The paradigm is used to investigate the extent to which particular factors, and the interrelationships among them, affect the effective adoption and implementation of cloud computing in e-government services, through empirical testing of a larger model. The sample is illustrative of the study population, which facilitates the generalization of results so that they can be used in future studies in the same area, or in a different context with similar circumstances. In past IS studies, research following the positivist approach has enabled researchers to achieve their aims and objectives. A positivist stance was therefore adopted, and quantitative data were collected through a survey of 887 participants drawn from public government organizations. The study also included an exploratory phase to narrow down its scope. The researcher starts from the theory developed within this research paradigm and then narrows it down to more specific hypotheses that can be tested. These are the reasons for selecting the positivist approach as the primary paradigm for the research.
The interpretivist paradigm
The interpretivist research paradigm is common in social science research and insists on the need to explore the meanings underlying, or embedded in, human behavior. It abides by the principle of the hermeneutic circle, in which shared meaning is developed through iterative cycles of data interpretation (Bryman, 2008). It therefore provides in-depth knowledge of and insights into the data. It is also advantageous when striving to understand societal issues and problems, applying the principle of contextualization to each case (Mora et al., 2012). Contextualization implies that every society is unique and has evolved over time, so each organization needs to be understood and studied in its own context.
The interpretivist paradigm considers concepts that are absent from the positivist paradigm, such as freedom of choice and consciousness. It holds that the world can be interpreted through reasoning and that not every finding is evidence-based; for example, people’s perceptions of reality can only be understood via a qualitative approach. Interpretivist research tries to obtain information from people’s understanding, comprehension, and experience of an event. A significant criticism of this paradigm therefore relates to validity, reliability, and generalization in social research (Marcen et al., 2013). Other criticisms are associated with the effect of researcher bias on the findings and the interference of the researcher in the daily lives of the research participants (Kura, 2012). It is, however, a paradigm that is frequently adopted for research topics relating to culture, language use, and human interaction. From this perspective, the researcher identifies with the participants and reflects on their shared beliefs and values.
As interpretivist research design involves in-depth investigation of a subject, the most commonly used methods in interpretivist research are case studies and interviews (Lederman and Abell, 2014). This paradigm is thus of greatest advantage when seeking to comprehend human behavior and phenomena (Al-Sherry, 2008; Alfarraj, 2013). It is usually associated with qualitative data; in the present case, such data were collected via in-depth interviews conducted to study e-government system development in the context of the KSA. The current thesis investigates the perceptions of staff employed in some of the Saudi government ministries who are, and have been, actively involved in e-government implementation.
In this research, the interpretive method was combined with other paradigms so that the results delivered are rich and dependable. An interpretive approach was taken in which qualitative data were collected through a series of eight structured interviews with executives and senior IT managers in public-sector government organizations operating in different circumstances, who have decision-making responsibilities within their government, together with an analysis of documents concerning plans and policies for the implementation of e-government services and applications. These methods are designed to be exploratory; they seek to offer an understanding of the phenomenon in its actual context. The research also requires a comprehensive conceptual outline; therefore, a detailed theoretical model was defined and proposed to explore and investigate the aspects that affect the adoption and utilization of e-government. Merging the two approaches was the most appropriate design for this research, since it allows the researcher to develop a comprehensive and systematic framework and to gather as much data as possible from the participants, both to test and validate the proposed model and to explore and understand the factors that influence the adoption of cloud computing in e-government services.
Mixed-Method Research Approach
The aim of this research is to investigate the factors that influence the adoption of cloud computing in e-government services, by defining the psychological, technical, and social factors, moderated by socio-cultural values. The study adopts a variety of theories and models to measure the variables and verify the hypotheses, which is characteristic of a quantitative research methodology. Additionally, this research aims to generalize the research outcomes to the broader population. E-government is a new initiative for the country and, as such, is at an early stage of development; it lacks the depth and detail of independent research, owing to the shortage of knowledge and of citizens’ experience of using the system. Saudi society is conservative by nature, with an emphasis on segregation by gender. This influenced the strategy of the research: direct contact was avoided in the data-collection process, as otherwise the study would have been biased towards one gender only. The quantitative approach was therefore chosen as an appropriate methodology for this research.
Stratified sampling is a probability sampling technique in which the population is first separated into homogeneous segments (strata) according to a particular characteristic of the community; a simple random sample is then selected from each segment (stratum). These samples from the various strata are then combined into a single sample (Daniel, 2011). This approach uses available information about the population (e.g., total population, nationalities, etc.) before selecting the sample, in order to make the sampling more efficient (Schutt, 2006).
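A minimal sketch of this procedure is given below, assuming a hypothetical respondent list stratified by a 'nationality' attribute; the population, stratum sizes, and sampling fraction are illustrative only.

```python
# Sketch of stratified sampling: split into strata, draw a simple random
# sample from each stratum, then pool the stratum samples into one sample.
import random

population = (
    [{"id": i, "nationality": "Saudi"} for i in range(700)]
    + [{"id": i, "nationality": "Non-Saudi"} for i in range(700, 1000)]
)

def stratified_sample(population, key, fraction, seed=42):
    rng = random.Random(seed)
    strata = {}
    for unit in population:                      # group units into homogeneous strata
        strata.setdefault(unit[key], []).append(unit)
    sample = []
    for units in strata.values():                # simple random sample within each stratum
        n = max(1, round(len(units) * fraction))
        sample.extend(rng.sample(units, n))
    return sample

sample = stratified_sample(population, key="nationality", fraction=0.1)
print(len(sample))  # roughly 10% of each stratum, combined into one sample
```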
Systematic random sampling is a variant of simple random sampling. The first element is chosen at random from a list or sequential file, and then every nth element is selected. It is a convenient method for drawing a random sample when the population elements are arranged sequentially. It is particularly useful when the elements are not listed (i.e., there is no sampling frame) but are instead represented by folders in filing cabinets.
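The sketch below illustrates the same idea with hypothetical "folders" standing in for sequentially arranged population elements.

```python
# Sketch of systematic random sampling: a random starting element,
# then every k-th element thereafter.
import random

def systematic_sample(elements, sample_size, seed=42):
    k = max(1, len(elements) // sample_size)   # sampling interval
    start = random.Random(seed).randrange(k)   # random first element within the interval
    return elements[start::k][:sample_size]

folders = [f"folder_{i:03d}" for i in range(1, 501)]  # e.g. folders in filing cabinets
print(systematic_sample(folders, sample_size=50))
```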
p of Close Fit (PCLOSE)
This measure provides a single test of the null hypothesis that the RMSEA equals .05, which is what is called a close-fitting model (Kenny, 2015). If PCLOSE is not greater than .05, the RMSEA is greater than .05, indicating a lack of close fit (Arbuckle, 2013). So, when p is less than .05, it is concluded that the model’s fit is worse than close-fitting.
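The decision rule described above can be summarised in a short sketch; the threshold of .05 and the example values are those quoted in this chapter.

```python
# Illustrative PCLOSE decision rule: PCLOSE tests H0 that RMSEA <= .05 (close fit).
def interpret_pclose(pclose, alpha=0.05):
    if pclose > alpha:
        return "Fail to reject H0: the model may be close-fitting (RMSEA <= .05)."
    return "Reject H0: the fit is worse than close-fitting (RMSEA > .05)."

print(interpret_pclose(1.000))  # e.g. the PClose value reported later in this chapter
print(interpret_pclose(0.012))
```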
Thematic Data Analysis
Yin (2013) explains that data analysis involves categorizing, examining, and tabulating findings in order to address the research questions. The analytical process for a qualitative approach can draw on various methods, such as thematic analysis, grounded theory, content analysis, and narrative analysis. Thematic and content analysis are classified as classic analytical approaches for qualitative research (Vaismoradi et al., 2016). Thematic analysis “is a method for identifying, analysing and reporting patterns (themes) within data. It minimally organizes and describes your data set in (rich) detail” (Braun and Clarke, 2006, p.6). It involves identifying context from the data and key characteristics during a coding process, from which meaning is elicited.
Both thematic and content analysis are interpretive and describe the participants’ perspectives. However, some researchers believe that thematic analysis is the most appropriate method of analysis, employing a lower level of inference and interpretation (Vaismoradi et al., 2016). In narrative analysis, by contrast, the researcher presents the findings and interprets the participants’ words. This approach is limited, however, in that it specifies how the findings are presented rather than the method of analysis as a means of deriving reasonable meaning from the results.
The current research analyzed the data using the tools of thematic analysis. In this method, the information collected from interviews and other qualitative methods is analyzed on the basis of specific themes drafted from the research context. Perspectives are derived and arranged thematically for the purpose of drawing out themes from the findings. These themes are informed by a deep and significant understanding of the information collected and then need to be analyzed with the help of the conceptual framework and the literature review, so that they can be framed. According to Fereday and Muir-Cochrane (2006), thematic analysis is a form of pattern recognition within the data, in which emerging themes become the categories for analysis.
The thematic analysis model refers to a three-step process comprising data reduction, data display, and drawing conclusions (Ibrahim, 2012). These stages highlight the essential data collected and help the researcher to achieve the study objectives. During the data-reduction phase, the researcher focuses on reducing or discarding the collected data so that only the data relevant to achieving the study objectives remain. In the current research, the information collected from secondary sources, such as books, journals, and online articles, was reviewed first, and then the relevant data were selected and organized.
The stages in the process of thematic analysis thus include data reduction, evaluation of the themes, data display, drawing conclusions, and interpretation. Data reduction is the first phase of data analysis: the data are sharpened so that they become more focused, unnecessary information is discarded, and the data are organized. During data reduction, tables and codes are assigned to the data, from which conclusions are later drawn. Another essential step in the thematic analysis is the evaluation of the themes. The themes are evaluated so that no information is discarded if it is relevant to the research study or contributes to the completeness of the data. Validating the themes is critical to the success of the research, as this adds accuracy and reliability to the study.
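A minimal sketch of the coding and data-reduction step is given below; the interview excerpts, codes, and theme labels are hypothetical and serve only to illustrate how coded material is grouped under themes for display.

```python
# Hypothetical interview excerpts assigned codes (data reduction),
# then grouped under themes (data display) before conclusions are drawn.
excerpts = [
    ("P1", "We worry about where citizen data is stored.", "data_security"),
    ("P2", "Budget approval delayed the cloud migration.", "cost"),
    ("P3", "Staff need training before the new portal launches.", "skills"),
]

themes = {
    "Trust and security": ["data_security"],
    "Organisational readiness": ["cost", "skills"],
}

for theme, codes in themes.items():
    quotes = [text for _, text, code in excerpts if code in codes]
    print(theme, "->", quotes)
```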
Reliability and Validity Analysis
Reliability and validity are the most vital evaluations of any form of social research. Such criteria become even more critical when using a positivist research methodology, where there is a need to ensure that, when measuring personal attitudes or employing a rating scale, respondents’ views are measured consistently and accurately (Collis and Hussey, 2009). Reliability concerns whether the research is repeatable and would achieve the same results; validity concerns the integrity of the study’s conclusions and whether they are well founded.
Standardized factor loading
For most SEM programs, including AMOS and LISREL, the default option is maximum likelihood estimation. The difficulty with unstandardized loadings is that they provide limited diagnostic information. Standardized loadings are therefore examined instead, because they are useful and are required to calculate the estimates of reliability and discriminant validity. Hair et al. (2010) suggested that the minimum threshold for standardized factor loadings (regression weights) should be 0.5.
Construct reliability
Construct reliability is also an indicator of convergent validity. The rule of thumb is that reliability estimates should be 0.7 or above to indicate good reliability; 0.70 is considered the minimum threshold for construct reliability, although this rule does not apply to exploratory research. A construct with high reliability implies that internal consistency exists and indicates that all of the measures used consistently represent the same latent construct.
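The sketch below shows the usual composite-reliability calculation (Fornell and Larcker, 1981), applied to hypothetical standardized loadings for a single construct; the loading values are illustrative only.

```python
# CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances),
# where each error variance is 1 - loading^2 for standardized loadings.
def composite_reliability(loadings):
    sum_l = sum(loadings)
    sum_err = sum(1 - l ** 2 for l in loadings)
    return sum_l ** 2 / (sum_l ** 2 + sum_err)

print(round(composite_reliability([0.72, 0.78, 0.81, 0.69]), 3))  # ~0.838, above the 0.70 threshold
```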
The average variance extracted (AVE)
The variance extracted from an item is the square of its standardized factor loading and represents how much of the variation in the item is explained by the latent factor. In CFA, the average variance extracted (AVE) is a summary indicator of convergence (Hair et al., 2010). AVE is calculated as the mean variance extracted across the items loading on a construct (Fornell and Larcker, 1981). An AVE of 0.5 or higher is considered a good rule of thumb to suggest adequate convergence.
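A short sketch of this calculation, using the same hypothetical loadings as above, is as follows.

```python
# AVE = mean of the squared standardized loadings of a construct's items.
def average_variance_extracted(loadings):
    return sum(l ** 2 for l in loadings) / len(loadings)

print(round(average_variance_extracted([0.72, 0.78, 0.81, 0.69]), 3))  # ~0.565, above the 0.50 rule of thumb
```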
Convergent validity
Convergent validity is the extent to which the indicators (items) of a particular construct converge, or share a high proportion of variance in common (Hair et al., 2010). Anderson and Gerbing (1988) suggested three tests to evaluate convergent validity empirically. Higher values of the AVE indicate that the items are indeed representative of the latent construct, and an average variance extracted (AVE) of at least 0.50 provides support for convergent validity.
Discriminant validity
Discriminant validity is the extent to which a construct is distinct from other constructs. High discriminant validity thus provides evidence that a construct is unique and captures phenomena that other measures do not. Hair et al. (2010) and Anderson and Gerbing (1988) suggested a rigorous test of discriminant validity: comparing the squared correlation estimate of any two constructs (latent variables) with the average variance extracted (AVE) values of those two constructs. The AVE estimates should be higher than the squared correlation estimate.
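The comparison can be sketched as follows; the AVE values and the inter-construct correlation are hypothetical.

```python
# Fornell-Larcker style check: both AVEs must exceed the squared correlation
# between the two constructs for discriminant validity to hold.
def discriminant_validity(ave_a, ave_b, correlation_ab):
    squared_corr = correlation_ab ** 2
    return ave_a > squared_corr and ave_b > squared_corr

print(discriminant_validity(ave_a=0.56, ave_b=0.61, correlation_ab=0.48))  # True: 0.23 < both AVEs
```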
Descriptive Statistics
Tabachnick and Fidell (2007) explain that descriptive statistics are variables, or combinations of variables, used to describe samples of subjects. Charts, tables, and other graphical forms are therefore often used to display compact summaries of data from various tests, producing the variance, mean, mode, percentage, and frequency. For this study, these descriptive statistics will describe the respondents’ data in terms of e-government acceptance, e-government intention, Internet usage patterns, and demographic profiles.
Descriptive statistics contribute to explaining ambiguities in the data (Lovie, 1986, p.165). The following section therefore examines some descriptive statistics before more advanced analyses, such as regression tests and correlational analysis, are adopted. A review of the literature revealed that Internet experience, level of education, gender, and age had a critical influence on whether individuals would adopt new technology. This study therefore evaluated the characteristics of a diverse set of respondents to reveal whether demographic variables could influence the adoption of e-government in Saudi Arabia, taking into account how cultural differences affect this.
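The sketch below shows how such descriptive statistics can be produced with pandas on a small hypothetical extract of the survey data; the column names and values are illustrative, not the study's actual variables.

```python
# Summary statistics and percentage frequencies for a hypothetical extract.
import pandas as pd

df = pd.DataFrame({
    "gender": ["M", "F", "F", "M", "F"],
    "age_group": ["18-25", "26-35", "26-35", "36-45", "18-25"],
    "egov_intention": [4, 5, 3, 4, 5],   # 5-point Likert item
})

print(df["egov_intention"].describe())                  # mean, std, quartiles
print(df["gender"].value_counts(normalize=True) * 100)  # percentage frequencies
```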
5.4.1 Confirmatory factor analysis (CFA)
Confirmatory factor analysis (CFA) is applied to test the theoretical pattern of variable loadings; it indicates how well the specification of the factors matches reality (the actual data). It does not assign variables to factors; rather, the factors are determined by the researcher, based on the theory being tested, before any results are obtained. CFA enables the researcher to accept or reject the theory being studied (Hair et al., 2010).
When undertaking a CFA, it is necessary to assess unidimensionality, convergent validity, and discriminant validity (Awang, 2015). The CFA has to be performed for all latent constructs before modelling their interrelationships in SEM. Unidimensionality should be established first, before assessing convergent and discriminant validity and reliability. Unidimensionality means that the measurement items have acceptable factor loadings on the latent construct, namely 0.60 and above (Awang, 2015; Hu and Bentler, 1999), as sketched below.
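A minimal sketch of this screening step follows; the item names and loading values are hypothetical and only illustrate how items falling below the 0.60 cut-off would be flagged for deletion.

```python
# Flag items whose standardized loadings fall below the 0.60 unidimensionality cut-off.
loadings = {"RA1": 0.74, "RA2": 0.68, "RA3": 0.81, "RA4": 0.41, "AW3": 0.52}

to_delete = [item for item, value in loadings.items() if value < 0.60]
print(to_delete)  # ['RA4', 'AW3'] would be removed before re-running the CFA
```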
Part 2:
The majority of the factor loadings for the items in Figure 5.6 achieved the recommended value of 0.60. However, the results also revealed several items below 0.60, which must be deleted from the model before proceeding to the next stage of the analysis. Awang (2015) argued that, to ensure the unidimensionality of a measurement model, any item with a low factor loading of less than 0.60 should be deleted.
Standardized factor loadings (called standardized regression weights in AMOS) indicate the strength of association between indicators (scale items) and a latent variable. Standardized estimates are constrained to range between -1.0 and +1.0, which is why they are interpreted and reported by most researchers. All factor loading estimates exceeded the minimum threshold of 0.5 (Hair et al., 2010), except RA4 and AW3. The standardized loading estimates for all factors are shown in Table 5.13.
Model Validity Measures
According to Steiger and Lind (1980), the RMSEA is used to measure the discrepancy per degree of freedom. Gerbing and Anderson (1992) showed that the CFI can be regarded as a stable descriptive measure of model fit. In addition, to compare the absolute fit of a specified model with the absolute fit of the independence model, three fit indices are recommended: the CFI, the TLI, and the GFI. Jaccard and Wan (1996) recommended applying at least three fit tests to assess overall model fit. However, it was decided to report additional indices to increase the reliability and accuracy of the measurement model fit assessment.
In this study, the value of chi-square (χ2) is 1476.5 with 824 degrees of freedom and a probability value of less than 0.001. However, the chi-square (χ2) and p-value test of absolute model fit is over-sensitive to sample size. The study therefore also used χ2 over degrees of freedom, as this is considered an adequate measure. The χ2 over degrees of freedom ratio is recommended to be within the range of 1 to 3 (Gefen, 2000; Chin and Todd, 1995), and the ratio for this study meets this recommendation, with CMIN/DF = 1476.5/824 ≈ 1.79. The model fit indices of the measurement model, reported in the following section, show a good overall fit.
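The normed chi-square quoted above follows directly from the reported values:

```python
# Normed chi-square (CMIN/DF) computed from the reported chi-square and df.
chi_square = 1476.5
degrees_of_freedom = 824
print(round(chi_square / degrees_of_freedom, 3))  # ~1.792, within the recommended 1-3 range
```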
5.4.2.1 Convergent Validity
To establish convergent validity, the model fit must be adequate and the average variance extracted (AVE) must exceed 0.50 (Hair et al., 2010). AVE verifies whether the answers given by different respondents to the question statements are sufficiently correlated with the respective latent variables. For acceptable model fit, Awang (2015) and Gaskin and Lim (2016) recommend a chi-square/degrees of freedom ratio (CMIN/DF) between 1 and 5, a comparative fit index (CFI) ≥ 0.90, a standardized root mean square residual (SRMR) ≤ 0.08, and a root mean square error of approximation (RMSEA) ≤ 0.06. Table 5.14 provides the results of the model fit measures. The values in the table indicate that the model is not an adequate fit: although CMIN/DF = 2.050, CFI = 0.918, RMSEA = 0.043, and PClose = 1.000 are within the recommended levels, SRMR = 0.100 exceeds the 0.08 threshold. The CFA results therefore confirm that this model is not accepted for further analysis.
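The cut-off checks applied to Table 5.14 can be sketched as follows, using the thresholds of Awang (2015) and Gaskin and Lim (2016) quoted above and the reported fit values.

```python
# Check each reported fit measure against its recommended cut-off.
fit = {"CMIN/DF": 2.050, "CFI": 0.918, "SRMR": 0.100, "RMSEA": 0.043}

checks = {
    "CMIN/DF": 1 <= fit["CMIN/DF"] <= 5,
    "CFI": fit["CFI"] >= 0.90,
    "SRMR": fit["SRMR"] <= 0.08,
    "RMSEA": fit["RMSEA"] <= 0.06,
}
print(checks)  # SRMR fails, so this measurement model is not accepted
```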
Moreover, the AVE results for all constructs, as illustrated in Table 5.14, have not achieved the standard minimum required level of 0.50.
5.4.2.2 Discriminant Validity
Discriminant validity estimates the extent to which a construct is distinct from other constructs. Hair et al. (2010) and Anderson and Gerbing (1988) suggested a rigorous test of discriminant validity: comparing the squared correlation estimates of any two constructs (latent variables) with the average variance extracted (AVE) values of those two constructs. The variance extracted estimates should be greater than the squared correlation estimates. The results for this study revealed no issues with discriminant validity (see Table 5.15). The researcher used the cross-loadings matrix and the Fornell-Larcker criterion (Hair et al., 2013). It can therefore be stated that the results of the measurement CFA model provide acceptable convergent and discriminant validity of the scales.
5.4.2.4 Reliability and Construct Validity
The reliability and construct validity tests were undertaken using composite reliability (CR) and Cronbach’s alpha. This study used both tests to ensure the reliability of the data before conducting the analysis. Hair et al. (2006, p. 137) define reliability as “an assessment of the degree of consistency between multiple measurements of a variable.” Reliability thus indicates the dependability and consistency of research findings. In the same context, according to Hair et al. (2006), the reliability of any survey instrument is achieved if repeated applications of the instrument give consistent results. Testing the instrument with the same individuals at two different points in time is one means of assessing consistency: if the responses to a particular instrument do not differ much from one time to the next, the instrument is considered reliable; otherwise, it is considered unreliable. For this study, reliability was calculated by measuring the internal consistency of the full scale. Internal consistency is usually assessed by calculating the coefficient alpha, also called Cronbach’s alpha. The lowest acceptable limit for Cronbach’s alpha is 0.70 (Hair et al., 2006). Churchill (1979) argues that coefficient alpha should be the first measure one calculates to assess the quality of an instrument. In addition, the reliability of the scales was tested using composite reliability (Bagozzi and Yi, 1988); for each of the constructs (or factors) in the research model, the composite reliability should exceed 0.70.
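A short sketch of the Cronbach’s alpha calculation for one scale is given below, using a small hypothetical item-response matrix (rows are respondents, columns are scale items); the figures are illustrative only.

```python
# Cronbach's alpha = (k / (k - 1)) * (1 - sum of item variances / variance of scale totals).
import numpy as np

responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
])

k = responses.shape[1]                                 # number of items
item_variances = responses.var(axis=0, ddof=1).sum()   # sum of item variances
total_variance = responses.sum(axis=1).var(ddof=1)     # variance of the scale totals
alpha = (k / (k - 1)) * (1 - item_variances / total_variance)
print(round(alpha, 3))  # compare against the 0.70 lower bound (Hair et al., 2006)
```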
The results are shown in Table 5.16, which presents the reliability and construct validity tests. Chin et al. (2003) claimed that CR is more accurate than Cronbach’s alpha because it does not assume that the loadings or error terms of the items are equal. The tests of CR and Cronbach’s alpha require standard minimum thresholds of 0.60 and 0.70, respectively, to be met.
The average variance extracted (AVE) is used to measure the variation explained by the latent variables relative to random measurement error (Netemeyer, Johnston, and Burton, 1990). AVE estimates of 0.5 or higher are considered a good rule of thumb (Hair et al., 2010; Fornell and Larcker, 1981). The estimates for this study ranged from 0.5 to 0.77, which meets or exceeds the suggested minimum threshold (Table 5.16). Therefore, all of the results verified the convergent validity of the scales in the measurement model. The following equation was used for the calculation of AVE, as suggested by Hair et al. (2010).
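The standard form of this equation, as given by Fornell and Larcker (1981) and Hair et al. (2010), is AVE = (λ1² + λ2² + … + λn²) / n, where λi denotes the standardized factor loading of item i and n is the number of items loading on the construct.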
The other criterion for discriminant validity that has to be met is the Maximum Shared Squared Variance (MSV). Hair et al. (2010) recommend that the MSV must be less than the AVE (MSV < AVE); otherwise, discriminant validity is not established.
Convergent Validity
To establish convergent validity, as mentioned before, the model fit must be adequate and the average variance extracted (AVE) must exceed 0.50. Furthermore, when the sample size of a study is greater than 250, the normed chi-square should be below 3 to indicate model fit (Hair et al., 2010). In this study, the sample size was 838 participants, and the normed chi-square value was 1.758, as shown in Table 5.17, which is within the recommended level.
The results of the model fit measures in Table 5.17 indicate that the model fits well: all measures, CFI = 0.953, SRMR = 0.036, and RMSEA = 0.030, have achieved the required levels. In addition, the AVE results for all constructs, as illustrated in Table 5.17, have reached the standard minimum required level of 0.50.
Discriminant Validity
As stated earlier, to establish discriminant validity, three criteria must be met: the Fornell-Larcker test, the Maximum Shared Squared Variance (MSV), and the Average Shared Squared Variance (ASV) (Gaskin, 2016a; Hair et al., 2010). In this study, discriminant validity was assessed by comparing the absolute values of the correlations between the constructs with the square root of the average variance extracted (AVE) for each construct. When the correlations are lower than the square root of the AVE of a construct, the constructs are said to have discriminant validity (Fornell and Larcker, 1981). As shown in Table 5.18, all square roots of the AVEs (the diagonal cells) are greater than the correlations between constructs, which confirms adequate discriminant validity.
5.4.3.3 Reliability and Construct Validity
This section presents the results of the reliability and construct validity tests. The reliability tests of CR and Cronbach’s alpha met the standard minimum thresholds of 0.60 and 0.70, respectively, confirming the reliability of the measurement model. This indicates acceptable internal consistency and affirms that all of the items employed in the model are essentially free from error (Hair et al., 2010). In terms of convergent and discriminant validity and reliability, the measures shown in Table 5.19 below indicate that convergent and discriminant validity have met the required levels.
5.4.3.3.1 Construct Reliability (Composite)
Construct reliability, like Cronbach’s alpha, is used to ensure that the variables in each construct are internally consistent. The reliability coefficients for the study constructs range between 0.75 and 0.80 (see Table 5.21). The constructs of this study are therefore internally consistent, being above the commonly accepted estimate of 0.70 (Hair et al., 2010; Nunnally, 1975).
Table 5.21 below shows the composite reliabilities for all constructs in the measurement model. As the values of composite reliability for all constructs in the measurement model are greater than 0.7, the researcher can move forward with formulating and running the final structural/path model in the subsequent sections.