Influential factors for technology-enhanced learning: professionals’ views

Patrick Schweighofer (Department of Information Technology & Business Informatics, University of Applied Science CAMPUS 02, Graz, Austria)
Doris Weitlaner (Department of Information Technology & Business Informatics, University of Applied Science CAMPUS 02, Graz, Austria)
Martin Ebner (Department of Educational Technology, Graz University of Technology, Graz, Austria)
Hannes Rothe (Department of Educational Technology, Freie Universität Berlin, Berlin, Germany)

Journal of Research in Innovative Teaching & Learning

ISSN: 2397-7604

Article publication date: 26 February 2019

Issue publication date: 3 December 2019


Abstract

Purpose

The literature includes several studies that define different critical success factors (CSF) which have to be considered to support the implementation of technology-enhanced learning (TEL) approaches. An analysis of such studies revealed that (1) regional differences seem to determine the CSF for TEL approaches, (2) certain CSF are relevant for TEL approaches in general, and (3) professionals in higher education determine which influential factors they consider when implementing TEL approaches. Thus, the question arises: in general, which influential factors do professionals in Austrian and German institutions of higher education actually consider when implementing TEL approaches?

Design/methodology/approach

The study follows a quantitative research approach based on survey data.

Findings

The results show that certain influential factors seem to be generally important, such as those concerning learning success or motivation. However, the outcome of the study also indicated that moderating variables such as experience and personal relevance affect the professionals’ choices.

Originality/value

The originality and value of the study lie in its approach to identifying generally important influential factors for the implementation of TEL approaches in Austrian and German institutions of higher education.

Citation

Schweighofer, P., Weitlaner, D., Ebner, M. and Rothe, H. (2019), "Influential factors for technology-enhanced learning: professionals’ views", Journal of Research in Innovative Teaching & Learning, Vol. 12 No. 3, pp. 268-294. https://doi.org/10.1108/JRIT-09-2017-0023

Publisher

Emerald Publishing Limited

Copyright © 2019, Patrick Schweighofer, Doris Weitlaner, Martin Ebner and Hannes Rothe

License

Published in Journal of Research in Innovative Teaching & Learning. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

Different synonyms are used in research to refer to digital technologies that support human learning (e.g. computer-assisted instruction, educational technology, educational computing, e-learning, distributed learning and more) (Chan et al., 2006). One of these terms is technology-enhanced learning (TEL), which Goodyear and Retalis (2010) described as an attractive, broadly defined term because it includes all technologies that help make learning more effective, efficient and enjoyable. If this definition is accepted, the list of relevant technologies becomes quite long and includes technologies that have been developed for and are intentionally deployed in learning situations (e.g. mobile learning approaches, game-based learning approaches, interactive learning videos and more). Furthermore, even software that is needed to present information via the WWW, e-mail and mobile phones can facilitate learning and is, therefore, part of TEL (Dror, 2008).

Goodyear and Retalis (2010) provide a clear overview that describes situations in which technologies can be used as part of TEL. Technologies can be:

  • media for accessing and studying learning material;

  • media for learning through inquiry;

  • media for learning through communication and collaboration;

  • media for learning through construction;

  • used for learners’ assessments; and

  • used to improve digital and multimedia literacy.

Currently, TEL approaches are widespread in higher education, and the number of TEL approaches implemented is still increasing in German-speaking countries, in part because of the so-called digital natives. Many researchers claim that students born between 1980 and 1994, the digital natives (Prensky, 2001), demand technological support in their learning scenarios because they are used to using it in their daily lives (Pedró, 2006; Redecker, 2009; Noguera Fructuoso, 2015; Schweighofer et al., 2015). However, others have criticized this statement, pointing out the lack of convincing evidence that supports this claim (Bekebrede et al., 2011; Bullen and Morgan, 2011; Margaryan et al., 2011). Regardless of the reason for the spread of TEL approaches, it is indisputable that these approaches have become widespread in higher education. For instance, the results of a survey conducted in Austria showed that all 49 higher education institutions that responded use some kind of TEL approach (Bratengeyer et al., 2016). Therefore, the implementation of TEL approaches needs to follow an instructional design (ID) (Bullen and Morgan, 2011), since using technology to support learning processes is complex (Goodyear and Retalis, 2010).

ID or instructional system design (ISD) is a procedure used to develop learning experiences and environments, applying learning strategies to make the acquisition of knowledge and skills more efficient, effective and appealing (Merrill et al., 1996). The origins of ID lie in behaviorism and systems thinking (Rogers, 2002; Branch and Merrill, 2012). It is related to TEL in that professionals combine ID and instructional media, which include various technologies (e.g. computers, the internet, mobile devices and social media), to accomplish their goals (Reiser, 2001). By now, a wide variety of instructional design models are available (Branch and Merrill, 2012) that can provide assistance during this design process (e.g. the model created by Dick et al. (2015)).

According to Branch and Merrill (2012), practically all of these models are related to the ADDIE model, although the origin of this model is unknown (Molenda, 2003). ADDIE stands for analyze, design, develop, implement and evaluate, which are the process steps of a systematic product development concept. ADDIE itself is, therefore, not a specific, fully elaborated model but rather a structure that is applied through different approaches (Branch and Merrill, 2012). These approaches define which influential factors must be considered during the process steps (cf. Ghirardini, 2011; Lohr, 1998).

However, it is not enough to merely consider relevant influential factors related to ID; it is also necessary to consider critical success factors (CSF) in order to be successful with a TEL approach. In general, CSF are characteristics, conditions or variables that can decide whether a company has success or not (Leidecker and Bruno, 1984). In addition, according to Pinto and Slevin (1987), considering such factors can also improve the chances of success of project implementations, and Cheawjindakarn et al. (2012) furthermore pointed out that identifying CSF is especially important in educational institutions. They defined CSF for online distance learning as factors that must be taken care of in order to be successful, more efficient and more effective. This study applies the same definition of CSF to TEL approaches.

In order to show the variety of CSF for TEL approaches and provide a selection of such CSF mentioned in the literature, a search in Google Scholar was performed using the search term “critical success factors” in combination with the terms “technology-enhanced learning,” “e-learning” and “online learning,” and a selection of studies identifying preferably diverse CSF was made. Table I contains this brief selection of studies and their identified CSF. The list is not intended to be exhaustive.

In five of these studies (Frydenberg, 2002; Joo et al., 2011; McGill et al., 2014; Selim, 2007; Sun et al., 2008), CSF were identified for e-learning and online courses in general. Frydenberg (2002) presented a literature review and McGill et al. (2014) described the results of a survey of authors who had written articles about e-learning initiatives. The remaining three articles reported the results of surveys in which students identified CSF. The results of these five studies were quite diverse: no CSF occurred in all five studies, and greater consensus existed only among CSF related to acceptance. These CSF were: perceived usefulness and ease of use (Joo et al., 2011); students like the innovation, innovation is easy for students to use, and innovation is easy for teachers to use (McGill et al., 2014); and perceived usefulness and perceived ease of use (Sun et al., 2008). Regional differences could be a possible reason that such diverse results were obtained. Richter and Pawlowski (2015) described differences related to e-learning between Germany and South Korea. For instance, they pointed out differences regarding the teacher’s role, the value of errors and the preferred learning style. Furthermore, Paulsen (2003) also discussed regional differences in Europe with regard to the use of learning management systems.

In the remaining studies, CSF were identified that address more specific TEL approaches or specific aspects of TEL approaches such as blended learning, m-learning or technical infrastructure aspects in particular.

As mentioned before, this list of CSF is just a brief selection intended to show the variety of CSF for TEL approaches. However, prior to this study, Schweighofer and Ebner (2015) conducted a vast literature review, analyzing 4,567 publications, to identify factors that are potentially influential when implementing TEL approaches, and clustered these factors into 20 categories with 76 subcategories. Table II shows how the discussed CSF fit into these categories. Additionally, the table includes the subcategories of each category, to further define the category, as well as the number of articles in which influential factors of this category were identified.

The categorization revealed that, although the results stated in the publications were diverse, some similarities could also be identified. For example, most of the CSF identified were course-related, but other categories also show a high count of CSF. Furthermore, only two categories (demographic differences and requirements on teachers) did not include CSF from the reviewed literature. Therefore, it seems that, although different CSF are relevant for different TEL approaches, some CSF are relevant for TEL approaches in general.

Finally, only two of the selected studies (Cochrane, 2010; Soong et al., 2001) investigated the teachers’ points of view. However, in higher education professional practice, the teaching staff or their management decide which TEL approaches will be implemented and which influential factors will be considered while doing so. Furthermore, the amount of time needed to consider these factors during the implementation is also important. If the teaching staff or their management do not believe that it is important to consider an influential factor, or if they do not want to spend time considering such a factor, the fact that the students consider it a CSF will be irrelevant since the decision makers will not investigate or consider this factor.

In summary, the insights led to three assumptions and conclusions regarding the research question:

  1. Regional differences determine the CSF for TEL approaches. Therefore, the research question should be regionally restricted.

  2. Some CSF are relevant for TEL approaches in general. Therefore, in order to identify these CSF, the research question should not be restricted to specific TEL approaches.

  3. Professionals in higher education determine which influential factors they will consider when implementing TEL approaches. Therefore, the research question should address the opinion of these professionals.

The purpose of this study is bipartite. First, the aim is to broadly show which influential factors are important for professionals in Austrian and German institutions of higher education and, additionally, how much time these professionals would spend considering these factors. This information would support the implementation of TEL approaches, since it would allow the development of methods that include the important influential factors professionals would actually consider during the development process. Additionally, the study should reveal influential factors that are important to professionals even though they do not want to spend much time considering them. For such factors, methods that are easy to use and require little time must be developed.

Second, the study should identify influential factors that professionals do not consider as important. This information can help researchers identify fields in which more explanatory work is needed to emphasize the importance of these influential factors.

Finally, based on these conclusions and purposes, this study was conducted in an attempt to answer the following research question:

RQ1. Which influential factors would professionals in Austrian and German institutions of higher education consider when implementing technology-enhanced learning approaches?

The remainder of this paper consists of a description of the research method used, followed by the survey results, a discussion of these results and, finally, a conclusion.

Research design

The overall research project applied a cross-sectional design, which is the most commonly used survey design. This means that the units of analysis were studied either at one point in time or within a short time frame (Straits and Singleton, 2011). Furthermore, it should be noted that the research project had two different target groups, both of which received the same survey. Although this paper focuses solely on the second group (institutions of higher education), for the sake of accuracy the following sections describe the research design for the overall research project.

Overall description of the study

The study population included employees of Styrian enterprises operating in either the manufacturing or services industry (according to the classification of the Austrian statistical office, Statistik Austria, 2015) as well as employees of schools (from the secondary level upwards), university extensions and institutions of higher education located in the German-speaking DACH region (Germany, Austria and Switzerland). Due to the broad nature of the study population, an online survey seemed to be an appropriate and resource-saving method for collecting empirical data. An electronic mailing list and a suitable questionnaire were prepared. For the Austrian data, the mailing list was compiled with reference to a marketing database, which includes information obtained from the Austrian commercial register and the association for the protection of creditors. To obtain data from Germany and Switzerland, publicly available lists of German and Swiss institutions of higher education were examined and contact e-mail addresses were manually identified. These two data sets were supplemented by information from the personal contact lists of three of the authors, which led to an electronic mailing list with approximately 19,000 entries. Approximately 1,300 of these entries belonged to the group of institutions of higher education located in the German-speaking DACH region.

Survey

The elaboration of the questionnaire was an iterative process. A first draft was prepared based on the 20 categories of influential factors identified by Schweighofer and Ebner (2015) (see Table II) and the question of interest. After multiple cycles of linguistic refinement, an online version of the questionnaire was created using the EFS Survey software. Pilot tests further improved the questionnaire, resulting in an adapted and shortened version based on the testers’ comments. In summary, the final instrument consists of three blocks that use the 20 categories of influential factors identified by Schweighofer and Ebner (2015) as survey items: the categories considered to date, along with a satisfaction rating (or reasons for omission if not considered); a general assessment of each category’s importance and the amount of time the respondents would spend considering factors of these categories; and the top five categories as ranked by each respondent, based on their considered importance and time investment (see Appendix). In order to limit the scope of interpretation, the labels of the 20 categories of influential factors included keyword descriptions in parentheses (see Table II).

The survey was available online in two versions: a personalized version and an anonymous version. In this way, it was possible to handle personal contacts individually and to use the system’s bulk mail function. Personal contacts were directly mailed informative letters containing a web link to the personalized version of the questionnaire, stating the research purpose and assuring the confidentiality of their contact information. In the remaining cases, the contacts were additionally asked to forward the message to teachers, lecturers and other interested parties; here, the web link referred to the anonymized version of the questionnaire, with the risk that individuals could respond multiple times and bias the results. Since it was possible to monitor the response behavior of each personal contact, reminders were sent to non-respondents one month after the initial mail was sent out. Of course, these measures could not be used in the other cases as it was not possible to either track who responded or determine whether the contacts forwarded the invitation.

Once the questionnaire responses had been collected, the data from 319 fully completed questionnaires, gathered between March and April 2015, were analyzed. During pre-processing, incomplete data sets as well as data from schools, university extensions and Switzerland (due to insufficient group sizes) were excluded. The final data set included 276 records. As described, this paper focuses on the subpopulation from the field of higher education, which further reduced the sample to 120 usable questionnaires. Table III shows the descriptive statistics for this data set; this information was entered at the top of the questionnaire by the respondents. These descriptive statistics and the following calculations were performed using IBM SPSS Statistics 22.0.

Results

In the following section, the results of the analyses performed are presented based on the data obtained through the survey. The outcome shows which influential factors are considered to be important by professionals in higher education and how much time they would invest to consider these factors during the implementation. Therefore, the survey included the following six questions that directly addressed these issues (also see Appendix).

Q1. How important (unrelated to your position) is the consideration of the following aspects during the development of TEL approaches or when using technology to support learning and teaching for you?

Q2. How much time would you spend (in your position) when considering the following aspects during the development of TEL approaches or when using technology to support learning and teaching?

Q3. Please choose those five aspects from the following list which are most important for you (unrelated to your position) during the development of TEL approaches or when using technology to support learning and teaching.

Q4. Now arrange these aspects in descending order based on their priority.

Q5. Please choose those five aspects from the following list on which you (in your position) would spend the most time during the development of TEL approaches or when using technology to support learning and teaching.

Q6. Now arrange these aspects in descending order based on the expenditure of time.

The combined results for all questions about the importance of the influential factors appear in Tables IV and V, and the results about the time invested to consider these factors appear in Tables VI and VII. The relevant frequencies and medians for the data are shown in these tables; the median is reported because it is considered the best measure of central tendency for ordinal-scaled variables and is usually not as strongly affected by outliers and skewed data as the mean (Quatember, 2011).
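As a purely illustrative aside (the study itself used IBM SPSS Statistics 22.0), the following minimal Python sketch shows how such frequency distributions and medians can be derived from ordinal responses; the response values are hypothetical and not taken from the survey data.

```python
import pandas as pd

# Hypothetical ordinal responses for one category on the survey's 4-point
# scale (1 = very unimportant ... 4 = very important).
responses = pd.Series([4, 3, 3, 4, 2, 3, 4, 4, 1, 3])

# Relative frequency (in percent) of each response value.
frequencies = (responses.value_counts(normalize=True).sort_index() * 100).round(1)
print(frequencies)

# The median is reported as the measure of central tendency because the
# data are ordinal and the median is robust to outliers and skew.
print("median:", responses.median())
```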

As can be seen in Tables IV and V, the categories learning success and teachers’ teaching aspects appear to include the most important influential factors on the basis of the results of this study (median=4.0). In addition, more than 90 percent of the participants considered the categories motivational aspects, learners’ learning aspects, learners’ requirements, course-related aspects and requirements on teachers to be (very) important[1]. The fact that these categories were most frequently ranked among the top five categories in terms of importance (with the exception of requirements on teachers) confirms these results. More than 70 percent of the respondents ranked the category learning success among the top five results. However, at the same time, at least 5 percent of the respondents considered these categories to be (very) unimportant. The category of influential factors with the least importance is social aspects (median=2.0) according to this survey. More than 50 percent of the respondents considered this category to be (very) unimportant. In addition, at least a third of the respondents deemed the categories business aspects, mind-set and feelings before TEL and demographic differences to be (very) unimportant. Nevertheless, at least some respondents ranked these four categories among the top five most important categories and, at least once, each was ranked first. More than 14 percent of the respondents thought that these categories were very important as well.

With regard to time investment, Tables VI and VII show that the respondents would spend the most time considering the influential factors of the category learning success. Specifically, 82.5 percent of the respondents thought they would spend much or even very much time considering influential factors in this category. At least two-thirds of the respondents also thought they would spend the same amount of time on the categories motivational aspects, support processes, learners’ learning aspects, course-related aspects, requirements on teachers and teachers’ teaching aspects. Again, all these categories except requirements on teachers were ranked among the top five categories, which confirms these findings. The respondents indicated that they would spend the most time considering factors in the category learning success (41.8 percent ranked this category in first place). However, at least 5 percent of the respondents indicated they would spend very little time considering these categories. According to the data, the respondents would spend the least amount of time (more than two-thirds said they would spend only (very) little time) considering factors in the categories mind-set and feelings before TEL and demographic differences. Despite these results, these categories were also ranked among the top five by a few participants, each was ranked in first place at least once, and more than 6 percent of the respondents indicated they would spend very much time considering factors in these categories.

In general, the results indicate that all categories seem to be important for professionals in higher education since, for each category (except social aspects), more than 50 percent of the respondents – and, in part, much more than 50 percent – stated that the category was (very) important. In addition, more than 50 percent of the respondents indicated they would invest (a great deal of) time considering factors in most categories. However, less than a third would spend (a great deal of) time considering factors in the categories mind-set and feelings before TEL and demographic differences.

Discussion

Importance and time investment

The results of the present study show that influential factors of the categories learning success, teachers’ teaching aspects, motivational aspects, learners’ learning aspects, course-related aspects and requirements on teachers were identified as the most important by the 120 respondents. The respondents also tended to spend more time considering factors in these categories. CSF in five of these categories also appear in the literature investigated (see Table II). Thus, it can be concluded that professionals in higher education consider influential factors in the six categories mentioned as generally important when implementing TEL approaches, independent of the approach itself. Therefore, it can generally be suggested that the following questions be considered when implementing TEL approaches:

  • How does the approach increase learning success, effectiveness and efficiency?

  • How does the approach fit the teachers’ teaching style?

  • How does the approach increase learners’ motivation and engagement?

  • How does the approach fit learners’ learning behavior?

  • How does the approach fit the purpose of the course and the teaching discipline?

  • What are the requirements on teachers to use the approach?

It must be noted, however, that the reviewed literature did not mention influential factors in the category requirements on teachers and a factor in the category learning success appeared only once. Although the literature review was not exhaustive, especially with regard to factors in the category learning success, this result was surprising since this category was identified as the most important one in the present study. In addition, Schweighofer and Ebner (2015) emphasized the importance of this category because it contained by far the largest number of factors identified in their literature review.

Although the six categories mentioned above appear to be most important, in general, all categories were ranked as important by a majority of the respondents. Even categories like demographic differences or social aspects, which showed the weakest average results, were deemed very important by at least a small group of the respondents.

Bloom’s Taxonomy (Bloom et al., 1956), which describes different levels of knowledge acquisition, provides one possible explanation for the results in the category social aspects. Collaborative learning scenarios, like collaborative tagging, can be used to attain higher levels of applied and metacognitive knowledge in the hierarchy (Bateman et al., 2007). It seems that factors in the category social aspects, which are especially relevant in collaborative learning scenarios (Schweighofer and Ebner, 2015), are therefore more important in courses that address these higher levels. The majority of the respondents may not address these high levels because they do not use collaborative learning scenarios, which would lead to the low (on average) importance assigned to the category social aspects. However, those who rated this category as very important might address such levels and may use such scenarios. Further research is necessary to verify this assumption.

A possible explanation for the varying results in the category demographic differences is the fact that these factors are only relevant if there are any demographic differences to be considered. It is obviously not important to consider demographic differences in courses that have a homogeneous group of students (with similar age, socioeconomic status and cultural and/or ethnic backgrounds). Therefore, it can be assumed that these factors were relevant to only a minority of the respondents. To confirm this assertion, additional data needs to be gathered, and further research has to be conducted.

In summary, it can be concluded that, in addition to factors in the six most important categories, depending on varying circumstances such as the course, its goals and students, different influential factors are relevant and should be considered when implementing TEL approaches. This was also emphasized by White (2007), who claimed that CSF vary and local circumstances need to be identified in order to use existing strengths.

Influence on importance and time investments

On the basis of this conclusion, it was necessary to explore which circumstances could affect the assessments of importance and time investment in the given data set. Consequently, the initially proposed analysis was extended to include potential moderators. It was hypothesized that the importance of the influential factors was rated according to the personality of the respondent. This idea is based on that of Keller and Karau (2013), who conducted a survey with 250 students to investigate how the “Big Five” personality dimensions (extraversion, agreeableness, conscientiousness, emotional stability and openness to experience) influenced five specific types of online course impressions (engagement, value to career, overall evaluation, anxiety/frustration and preference for online courses).

Whether professionals in higher education consider a factor to be influential seems to be a personal decision, and several moderators affect such decisions. On the one hand, positive experiences based on decisions made in the past may influence subsequent decisions in similar situations (Juliusson et al., 2005). On the other hand, negative experiences can also affect future decisions, since people try to avoid making decisions that led to negative experiences (Sagi and Friedland, 2007).

Another moderator that potentially affects decisions is personal commitment, because people tend to invest more time and effort if they feel personally committed to a decision (Juliusson et al., 2005). Furthermore, according to Acevedo and Krueger (2004), feelings of personal relevance affect decisions as well: if people believe something matters, they tend to invest more time in it.

Finally, demographic characteristics and environmental circumstances such as age or socioeconomic status can be important moderators of the decision process as well. This is in line with the influence of experiences because, for instance, people have had different experiences depending on their age and/or socioeconomic status which, as stated before, can affect their decisions (de Bruin et al., 2007; Finucane et al., 2005).

In conclusion, a personal decision, such as whether an influential factor will be considered, can be affected by experiences, by personal commitment and relevance, and by demographic characteristics and environmental circumstances. Based on these deliberations, various correlation analyses were conducted. In order to determine whether a correlation was strong, the classification scheme of Brosius (1999) was used: 0 to 0.199 very weak, 0.2 to 0.399 weak, 0.4 to 0.599 substantial, 0.6 to 0.799 strong and 0.8 to 1 very strong. The following three sections show the results of these analyses. It should be noted that a multiple regression would have been preferable, but the (sub-)sample size and the conditions for the regression analysis with regard to the predictors and criteria prevented its use.
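To make the classification scheme concrete, the following Python sketch applies the Brosius (1999) thresholds quoted above to a correlation coefficient; the function name and the example are illustrative only, and the original analysis was carried out in SPSS.

```python
def classify_correlation(r: float) -> str:
    """Classify the strength of a correlation coefficient according to the
    scheme of Brosius (1999): very weak, weak, substantial, strong, very strong."""
    r = abs(r)
    if r < 0.2:
        return "very weak"
    if r < 0.4:
        return "weak"
    if r < 0.6:
        return "substantial"
    if r < 0.8:
        return "strong"
    return "very strong"

# Example: the correlation reported below for business aspects (r = 0.520)
print(classify_correlation(0.520))  # -> substantial
```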

Influence of experiences

As part of the survey, the respondents were asked whether they had had experience with TEL in teaching. As can be seen in Table III, 97 respondents had such experience. These 97 respondents were also asked whether they had already considered the factors belonging to the 20 categories. Based on the answers received, Cramér’s V was calculated to determine the influence on the importance of the categories and on the time people would spend considering factors in these categories. Cramér’s V is a measure of the degree of association between two nominal variables with two or more levels and is equal to the Phi coefficient in the case of a 2×2 contingency table. It was used in the analysis because it is robust where nominal and ordinal data are present.
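As an illustration of how Cramér’s V can be obtained from a contingency table, the following Python sketch uses scipy; the helper function and table values are hypothetical and do not reproduce the study’s data, which were analyzed with SPSS.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(table: np.ndarray) -> float:
    """Cramér's V for an r x c contingency table of two nominal variables;
    for a 2x2 table it equals the Phi coefficient."""
    chi2, _, _, _ = chi2_contingency(table, correction=False)
    n = table.sum()
    k = min(table.shape) - 1  # smaller dimension minus one
    return float(np.sqrt(chi2 / (n * k)))

# Hypothetical 2x4 table: rows = factors of a category already considered
# (no/yes), columns = rated importance (1 = very unimportant ... 4 = very important).
table = np.array([[5, 12, 8, 2],
                  [1, 6, 35, 28]])
print(round(cramers_v(table), 3))
```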

The evidence indicates that, apart from a few exceptions, significant weak or substantial correlations were observed between the importance of a category and whether someone had already considered the influential factors in this category. All these correlations indicate that someone who has already considered factors of a category believes it is still important to consider these influential factors. In this study, the highest correlations observed were: business aspects (r=0.520, p<0.01), social aspects (r=0.445, p<0.01) and cognitive aspects (r=0.440, p<0.01).

In addition, the calculation revealed many significant weak or substantial correlations between the amount of time the respondents would currently spend considering factors of a category and whether they had considered factors of this category in the past. Again, the findings indicate that someone who had already considered factors of a category would spend more time considering factors in this category. The only exception was the category mind-set and feelings before TEL (r=0.408, p<0.01): here, the findings revealed that respondents who had already considered the category in the past would spend less time doing so now.

Overall, the findings indicate that experiences affect human behavior such that the factors that have been considered previously are judged to be more important, and more time is invested during the consideration of these factors. Moreover, both positive (Juliusson et al., 2005) and negative (Sagi and Friedland, 2007) experiences in the past had the potential to determine personal decisions such as whether an influential factor would be important in the future. Based on these findings and the correlations described above, the respondents seem to have had predominantly positive experiences while considering influential factors in the past and, hence, indicated that they would be willing to invest time to consider these factors in the future. In contrast, with regard to the category mind-set and feelings before TEL, it could be inferred that the respondents could have had negative experiences in the past while considering these factors and, as a result, indicated that they did not want to invest much time to consider these either now or in the future. However, the strength of this claim requires further investigation. First, additional research on this topic could prove whether the assumptions are accurate. Second, if these assumptions are supported by evidence, additional research could identify positive and negative experiences and provide professionals in higher education with better methods that they can use to consider these factors in the future.

Influence of personal commitment and relevance

To determine whether personal commitment and judged relevance influenced the results, Spearman’s rank correlation was calculated between the importance of a category of influential factors and the time people would spend considering factors in this category. Furthermore, this analysis also made it possible to evaluate whether the importance of a category affected the time invested in considering factors in other categories.
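For illustration, a minimal Python sketch of such a Spearman rank correlation between two ordinal ratings is shown below; the paired ratings are hypothetical and the actual computations were performed in SPSS.

```python
from scipy.stats import spearmanr

# Hypothetical paired ordinal ratings for one category: rated importance
# (1-4) and the amount of time the respondent would invest (1-4).
importance = [4, 3, 4, 2, 3, 4, 1, 3, 4, 2]
time_spent = [4, 3, 3, 2, 4, 4, 2, 3, 4, 1]

rho, p_value = spearmanr(importance, time_spent)
print(f"rho = {rho:.3f}, p = {p_value:.3f}")
```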

The results revealed that significant weak to strong positive correlations existed between the importance of a category and the time people were willing to spend considering factors in this category. This is valid for the entire set, meaning that the more important a category was considered to be, the more time the person was willing to spend considering factors in this category. In this context, the highest correlations observed were: demographic differences (r=0.646, p<0.01), mind-set and feelings before TEL (r=0.616, p<0.01) and social aspects (r=0.605, p<0.01).

These three categories also displayed significant, substantial correlations with each other. The more important the category demographic differences was ranked, the more time the respondents were willing to spend considering the categories mind-set and feelings before TEL (r=0.501, p<0.01) and social aspects (r=0.412, p<0.01). Likewise, the more important the category mind-set and feelings before TEL was ranked, the more time the respondents were willing to spend considering the categories demographic differences (r=0.499, p<0.01) and social aspects (r=0.448, p<0.01). Finally, the more important social aspects were ranked, the more time the respondents were willing to spend considering the category demographic differences (r=0.448, p<0.01).

Overall, many of the substantial to strong correlations that were identified reveal that the more important a category was ranked by the respondent, the more time the respondent was willing to spend considering factors in this category. Therefore, it can be inferred that personal commitment and judged relevance influence the results.

Furthermore, the evidence indicated that the categories that were considered important only by a small group seem to be interrelated. For instance, respondents who thought that it was important to consider factors in the category demographic differences also tended to spend more time considering factors in the category social aspects. This supposition needs to be investigated further; however, it can be assumed that collaborative learning scenarios are often used in courses with demographic differences, whereby the factors in the category social aspects would become more relevant.

Influence of demographic characteristics and environmental circumstances

Finally, Spearman’s rank correlation (only for ordinal scales) and Cramér’s V were used to analyze whether the descriptive data (see Table III) affect the importance of each category and the time respondents would spend considering factors in each category.

Concerning importance, correlations were calculated between the descriptive variables and the importance of each of the categories (Q1), the choice of the top five categories (Q3) and the ranking within the top five categories (Q4). The analysis showed that the first two areas of interest were related to the descriptive variables by only a few significant but weak to very weak correlations. With regard to the ranking within the top five, stronger correlations could be identified, but none were significant because the relevant data sets for these high correlations were too small (n=2–22).

With regard to the time invested, the calculation revealed similar correlations. It can be concluded that the tested demographic characteristics and environmental circumstances did not affect the ranked importance of the categories or the time respondents would be willing to spend considering factors in these categories. It seems that other characteristics (e.g. the course subject, the level of knowledge acquisition and the heterogeneity of the students) should be tested in the future to verify whether these circumstances influence relevance and, therefore, perceived importance.

Limitations of the study

This contribution has several recognizable limitations. The subsample of 120 respondents included in this study represented only a small fraction of the defined subpopulation. High non-response rates can bias the data set (Frohlich, 2002), which can jeopardize the generalizability and validity of the results of the study. However, it is difficult to test for the presence of this bias, as this would have required the collection of a comprehensive data set from all professionals at all Austrian and German institutions of higher education. Such information theoretically might have been obtained from marketing databases or through manual searches. In the present context, however, commercial databases display certain weaknesses: they cannot guarantee the absolute accuracy of the data because complete coverage of the institutions defined in the population does not exist, and the number of employees indicated includes administrative, scientific and teaching staff but frequently does not include lecturers who give classes on a part-time basis. For these reasons, it was not economically possible to precisely calculate the size of the (sub)population or to compare the basic characteristics of the population with those of the present sample. Because an electronic mailing strategy was used to disseminate the questionnaires, the composition of the sample may have shifted in favor of the authors’ contact coverage. Moreover, because the survey was carried out in part anonymously, the possibility that certain respondents responded multiple times cannot be completely ruled out, although no apparent abnormalities in the data set were detected. Finally, surveys by definition rely on the participants’ ability to read the questions and select responses on the questionnaire by themselves, without interference from the researcher. Thus, the empirical data may have suffered from different cognitive biases such as acquiescence or social desirability. This especially concerns the interpretation of the 20 categories, which is why the questionnaire provided additional keyword descriptions for them.

Conclusions and future work

The purpose of this study was to identify which influential factors professionals in Austrian and German institutions of higher education consider important and how much time these professionals would invest in considering these factors when implementing TEL approaches. The broadly defined term TEL was intentionally used in order to target influential factors that are considered important independent of the specific approach taken.

The results revealed that influential factors in the six categories learning success, teachers’ teaching aspects, motivational aspects, learners’ learning aspects, course-related aspects and requirements on teachers were generally considered important by the 120 professionals. In addition, at least a small group of respondents considered the factors of each of the remaining categories important too, and different circumstances seemed to affect this importance.

Therefore, the given data set was analyzed again to identify potential moderating effects. The findings revealed that experiences and personal relevance seemed to strongly affect how professionals in higher education chose which influential factors they were willing to consider. However, the tested descriptive variables (i.e. age, type of organization, country, function in organization, time in position, field of work and experience with TEL in general) did not influence these choices.

For this reason, future research should focus on the relationships between different moderating variables and relevant influential factors and should identify which factors are relevant under given circumstances. A qualitative research approach using interviews with several experts as well as teachers seems to be an appropriate research method for this purpose. This approach can also address some limitations of the given study, such as the possibility of different interpretations of the survey items. Relevant circumstances with an impact on the relevance of influential factors could include the course subject, the addressed level of Bloom’s Taxonomy, the available technical infrastructure, experience with and knowledge of TEL approaches, the number of students and more.

Based on this research, the development of a methodology that better supports the implementation of TEL approaches is possible. This methodology should include two steps: the identification of relevant circumstances and the identification of relevant influential factors that should be considered under the given circumstances. Using such a methodology should be more efficient, since only relevant influential factors will be considered during the implementation of TEL approaches. Additionally, the implemented TEL approaches should be more successful, since the relevant CSF would have been considered. For example, the results of this study indicated that it is generally not very important to consider demographic differences in German or Austrian institutions of higher education; however, if a course involves a highly heterogeneous group of learners, this factor might become a CSF and should be considered.

Finally, the new methodology should be tested in several case studies at different higher education institutions in Germany and Austria in order to verify the value of this approach.

Table I. Selection of studies identifying CSF

Source Description CSF
Alsabawy et al. (2016) A survey conducted with 720 students of online courses at an Australian university to determine the impact of IT infrastructure services and IT quality on perceptions of usefulness (1) IT infrastructure services, (2) system quality, (3) information quality
Chen et al. (2013) A survey conducted with 306 students of courses using web-based language learning at a university in Taiwan to test hypotheses based on the social cognitive theory in order to determine how different factors influence learners’ satisfaction (1) system characteristics, (2) possibilities of interaction
Cochrane (2010) Feedback (qualitative and quantitative) from students and teachers participating in three mobile learning projects at a higher educational institute in New Zealand to identify CSF for mobile learning (1) the importance of the pedagogical integration of the technology into the course assessment, (2) lecturer modeling of the pedagogical use of the tools, (3) the need for regular formative feedback from lecturers to students, (4) the appropriate choice of mobile devices and software to support the pedagogical model underlying the course
Frydenberg (2002) A literature review on quality standards for e-learning in the USA (1) executive commitment, (2) technological infrastructure, (3) student services, (4) design and development, (5) instruction and instructor services, (6) program delivery, (7) financial health, (8) legal and regulatory requirements, (9) program evaluation
Henrich and Sieber (2009) Lessons learned from using different approaches to enhance courses about information retrieval with technology at a university in Germany to identify CSF for TEL approaches (1) concept, (2) creation, (3) maintenance, (4) utilization, (5) participation
Joo et al. (2011) A survey with 709 students of online courses at a South Korean online university to test hypotheses in order to determine how different factors influence learning satisfaction (1) teaching presence, (2) cognitive presence, (3) perceived usefulness and ease of use
McGill et al. (2014) A survey conducted of 70 authors of articles about e-learning initiatives, conducted to identify factors which influence the success, continuation or sustainability of e-learning initiatives (1) students like the innovation, (2) innovation is easy for students to use, (3) innovation is consistent with approach to teaching, (4) technology is sufficiently mature/stable, (5) management supports e-learning, (6) innovation improves student learning, (7) technology is inexpensive, (8) innovation is easy for teachers to use, (9) technology is up to date
Selim (2007) A survey conducted with 538 students at a university in the United Arab Emirates to identify important factors for successful e-learning (1) instructor’s attitude toward and control of the technology, (2) instructor’s teaching style, (3) student motivation and technical competency, (4) student interactive collaboration, (5) e-learning course content and structure, (6) ease of on-campus internet access, (7) effectiveness of information technology infrastructure, (8) university support of e-learning activities
Soong et al. (2001) Interviews with instructors, a survey conducted with students and the analysis of archival records (logs) in order to identify CSF for using online learning resources at a university in Singapore (1) human factors pertaining to the instructors, (2) the instructors’ and students’ technical competency, (3) the instructors’ and students’ mind-set (about learning), (4) the level of collaboration intrinsic in the course, (5) the level of perceived IT infrastructure and technical support
Stacey and Gerbie (2008) Based on the literature and personal practices at two universities in New Zealand and Australia, the article describes success factors for blended learning (1) institutional success factors (e.g. needs), (2) success factors regarding teachers (e.g. workload, fears), (3) success factors regarding students (e.g. readiness, expectations), (4) pedagogic considerations (e.g. course design)
Sun et al. (2008) A survey conducted with 295 students at two universities in Taiwan to test literature-based hypotheses in order to determine how different factors influence learners’ satisfaction in e-learning courses (1) learner computer anxiety, (2) instructor attitude toward e-learning, (3) e-learning course flexibility, (4) e-learning course quality, (5) perceived usefulness, (6) perceived ease of use, (7) diversity in assessments

Table II. Categorization of CSF

Influential factors identified by Schweighofer and Ebner (2015) Discussed CSF
Acceptance aspects (technology acceptance) (identified in 36 articles) (1) utilization (Henrich and Sieber, 2009), (2) perceived usefulness and ease of use (Joo et al., 2011), (3) students like the innovation, (4) innovation is easy for students to use, (5) innovation is easy for teachers to use (McGill et al., 2014), (6) perceived usefulness, (7) perceived ease of use (Sun et al., 2008)
Business aspects (benefits, organizational influences, cost reduction, difficulties, effectiveness, value, ethics, quality assurance) (identified in 38 articles) (1) information quality (Alsabawy et al., 2016), (2) executive commitment, (3) financial health, (4) legal and regulatory requirements (Frydenberg, 2002), (5) management supports e-learning, (6) technology is inexpensive (McGill et al., 2014), (7) institutional success factors (e.g. needs) (Stacey and Gerbie, 2008), (8) e-learning course quality (Sun et al., 2008)
Cognitive aspects (cognition, attention) (identified in 39 articles) (1) cognitive presence (Joo et al., 2011)
Course-related aspects (delivery mode, relevance, purpose, provided time, teaching discipline, course design) (identified in 13 articles) (1) lecturer modeling of the pedagogical use of the tools (Cochrane, 2010), (2) design and development, (3) program delivery, (4) program evaluation (Frydenberg, 2002), (5) concept, (6) maintenance (Henrich and Sieber, 2009), (7) e-learning course content and structure (Selim, 2007), (8) pedagogic considerations (e.g. course design) (Stacey and Gerbie, 2008), (9) diversity in assessments, (10) e-learning course flexibility, (11) e-learning course quality (Sun et al., 2008)
Demographic differences (age, gender differences, students’ background, socioeconomic status, cultural background, ethnical background) (identified in 27 articles)
Influence from prior knowledge and experience (experience, knowledge level, digital competence) (identified in 39 articles) (1) student motivation and technical competency (Selim, 2007), (2) the instructors’ and students’ technical competency (Soong et al., 2001)
Instruction aspects (instructions effectiveness, instructional strategies, instructional design, instruction influence) (identified in 20 articles) (1) instruction and instructor services (Frydenberg, 2002)
Learners’ learning aspects (adaptive learning, approaches, behavior, learning strategy, goal orientation, out-of-school learning resources, learning process, participation, interaction) (identified in 102 articles) (1) possibilities of interaction (Chen et al., 2013), (2) participation (Henrich and Sieber, 2009), (3) student interactive collaboration (Selim, 2007), (4) success factors regarding students (e.g. readiness, expectations) (Stacey and Gerbie, 2008)
Learners’ requirements (identity issues, learners’ readiness, learners’ preferences) (identified in 24 articles) (1) success factors regarding students (e.g. readiness, expectations) (Stacey and Gerbie, 2008)
Learning success (learning outcomes, learning effectiveness, learning reflection, learning efficiency) (identified in 186 articles) (1) innovation improves student learning (McGill et al., 2014)
Mind-set and feelings before TEL (conceptions, expectations, beliefs) (identified in 23 articles) (1) perceived usefulness and ease of use (Joo et al., 2011), (2) the instructors’ and students’ mind-set (about learning) (Soong et al., 2001), (3) learner computer anxiety, (4) perceived usefulness, (5) perceived ease of use (Sun et al., 2008)
Mind-set and feelings during TEL (attitude, perceptions, perspectives, satisfaction, emotions) (identified in 102 articles) (1) instructor’s attitude toward and control of the technology (Selim, 2007), (2) learner computer anxiety, (3) instructor attitude toward e-learning (Sun et al., 2008)
Motivational aspects (intention, engagement, motivation) (identified in 74 articles) (1) student motivation and technical competency (Selim, 2007), (2) the level of collaboration intrinsic in the course (Soong et al., 2001)
Requirements on teachers (identified in 3 articles)
Self-regulation aspects (self-regulated learning, computer/internet self-efficacy) (identified in 36 articles) (1) e-learning course flexibility (Sun et al., 2008)
Social aspects (social competence) (identified in 4 articles) (1) the level of collaboration intrinsic in the course (Soong et al., 2001)
Support processes (support, feedback) (identified in 27 articles) (1) the need for regular formative feedback from lecturers to students (Cochrane, 2010), (2) student services (Frydenberg, 2002), (3) university support of e-learning activities (Selim, 2007)
Teachers’ teaching aspects (teachers’ self-reflection, teachers behavior, teaching style, teaching strategy, teaching performance) (identified in 12 articles) (1) teaching presence (Joo et al., 2011), (2) innovation is consistent with approach to teaching (McGill et al., 2014), (3) instructor’s teaching style (Selim, 2007), (4) human factors pertaining to the instructors (Soong et al., 2001), (5) success factors regarding teachers (e.g. workload, fears) (Stacey and Gerbie, 2008)
Technical infrastructure aspects (accessibility, reliability, time of availability, available infrastructure, learning environment) (identified in 18 articles) (1) IT infrastructure services, (2) system quality (Alsabawy et al., 2016), (3) system characteristics (Chen et al., 2013), (4) technological infrastructure (Frydenberg, 2002), (5) ease of on-campus internet access, (6) effectiveness of information technology infrastructure (Selim, 2007), (7) the level of perceived IT infrastructure and technical support (Soong et al., 2001)
Technology-related aspects (technology integration, technology usage) (identified in 30 articles) (1) the appropriate choice of mobile devices and software to support the pedagogical model underlying the course, (2) the importance of the pedagogical integration of the technology into the course assessment (Cochrane, 2010), (3) creation (Henrich and Sieber, 2009), (4) technology is sufficiently mature/stable, (5) technology is up to date (McGill et al., 2014), (6) instructor’s attitude toward and control of the technology (Selim, 2007)

Table III. Overview of descriptive statistics

| Item | Category | n | % |
| --- | --- | --- | --- |
| Age (years) | <25 | 1 | 0.8 |
|  | 25–30 | 10 | 8.3 |
|  | 31–40 | 33 | 27.5 |
|  | 41–50 | 35 | 29.2 |
|  | >50 | 41 | 34.2 |
| Organization | University of applied science | 59 | 49.2 |
|  | University | 45 | 37.5 |
|  | College of education | 16 | 13.3 |
| Country/state | Germany | 28 | 23.3 |
|  | Austria (only Styria) | 62 | 51.7 |
|  | Austria (without Styria) | 30 | 25.0 |
| Function in university of applied science | Head of degree program | 6 | 5.0 |
|  | Full-time lecturer/teacher | 24 | 20.0 |
|  | Part-time lecturer/teacher | 16 | 13.3 |
|  | Responsible for online learning and/or didactics | 9 | 7.5 |
|  | Other function | 4 | 3.3 |
| Function in university | Rector, faculty director, head of institute | 6 | 5.0 |
|  | Professor, assistant professor, lecturer | 19 | 15.8 |
|  | Responsible for online learning and/or didactics | 7 | 5.8 |
|  | Other function | 13 | 10.8 |
| Function in college of education | Rector, faculty director, head of institute | 3 | 2.5 |
|  | Professor, assistant professor, lecturer | 11 | 9.2 |
|  | Responsible for online learning and/or didactics | 1 | 0.8 |
|  | Other function | 1 | 0.8 |
| Time in position (years) | <1 | 5 | 4.2 |
|  | 1–3 | 25 | 20.8 |
|  | 4–6 | 31 | 25.8 |
|  | 7–10 | 21 | 17.5 |
|  | >10 | 38 | 31.7 |
| Field of work | Natural science | 15 | 12.5 |
|  | Engineering and technology | 43 | 37.5 |
|  | Agricultural, medical and health science | 6 | 5.0 |
|  | Social sciences | 51 | 42.5 |
|  | Humanities | 36 | 30.0 |
| Experiences with TEL in teaching | Yes | 97 | 80.8 |
|  | No | 23 | 19.2 |
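The percentage column is consistent with the full sample of N = 120 respondents (the age, organization and TEL-experience categories each sum to 120). As a minimal illustration of that arithmetic, the following Python sketch recomputes the age shares; the variable names are ours, and the counts are copied from the rows above:

```python
# Share of each age category among the N = 120 respondents (counts taken from the table above).
N = 120
age_counts = {"<25": 1, "25-30": 10, "31-40": 33, "41-50": 35, ">50": 41}

for category, n in age_counts.items():
    share = n / N * 100
    print(f"{category}: {share:.1f}%")  # 0.8, 8.3, 27.5, 29.2, 34.2 - matching the table
```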

Frequency distributions of importance

| Category | Very unimportant (1) n | % | Unimportant (2) n | % | Important (3) n | % | Very important (4) n | % | Median |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Acceptance aspects | 2 | 1.7 | 25 | 20.8 | 60 | 50.0 | 33 | 27.5 | 3.0 |
| Business aspects | 14 | 11.7 | 37 | 30.8 | 51 | 42.5 | 18 | 15.0 | 3.0 |
| Cognitive aspects | 2 | 1.7 | 14 | 11.7 | 64 | 53.3 | 40 | 33.3 | 3.0 |
| Course-related aspects | 1 | 0.8 | 9 | 7.5 | 55 | 45.8 | 55 | 45.8 | 3.0 |
| Demographic differences | 12 | 10.0 | 41 | 34.2 | 45 | 37.5 | 22 | 18.3 | 3.0 |
| Influence from prior knowledge and experience | 3 | 2.5 | 33 | 27.5 | 60 | 50.0 | 24 | 20.0 | 3.0 |
| Instruction aspects | 4 | 3.3 | 28 | 23.3 | 50 | 41.7 | 38 | 31.7 | 3.0 |
| Learners’ learning aspects | 1 | 0.8 | 5 | 4.2 | 62 | 51.7 | 52 | 43.3 | 3.0 |
| Learners’ requirements | 2 | 1.7 | 7 | 5.8 | 66 | 55.0 | 45 | 37.5 | 3.0 |
| Learning success | 2 | 1.7 | 6 | 5.0 | 40 | 33.3 | 72 | 60.0 | 4.0 |
| Mind-set and feelings before TEL | 8 | 6.7 | 45 | 37.5 | 50 | 41.7 | 17 | 14.2 | 3.0 |
| Mind-set and feelings during TEL | 6 | 5.0 | 30 | 25.0 | 56 | 46.7 | 28 | 23.3 | 3.0 |
| Motivational aspects | 2 | 1.7 | 8 | 6.7 | 52 | 43.3 | 58 | 48.3 | 3.0 |
| Requirements on teachers | 1 | 0.8 | 9 | 7.5 | 63 | 52.5 | 47 | 39.2 | 3.0 |
| Self-regulation aspects | 2 | 1.7 | 26 | 21.7 | 57 | 47.5 | 35 | 29.2 | 3.0 |
| Social aspects | 14 | 11.7 | 48 | 40.0 | 33 | 27.5 | 25 | 20.8 | 2.0 |
| Support processes | 2 | 1.7 | 13 | 10.8 | 51 | 42.5 | 54 | 45.0 | 3.0 |
| Teachers’ teaching aspects | 2 | 1.7 | 7 | 5.8 | 50 | 41.7 | 61 | 50.8 | 4.0 |
| Technical infrastructure aspects | 3 | 2.5 | 10 | 8.3 | 56 | 46.7 | 51 | 42.5 | 3.0 |
| Technology-related aspects | 5 | 4.2 | 34 | 28.3 | 52 | 43.3 | 29 | 24.2 | 3.0 |
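The medians in the last column summarize responses on the four-point ordinal importance scale. The following Python sketch (assuming the frequencies above are the complete set of responses per category; variable names are ours, not the authors’) shows how such a median can be recomputed from a single row of the table:

```python
import numpy as np

# Frequencies for "Learning success" on the importance scale, taken from the table above
# (1 = very unimportant, 2 = unimportant, 3 = important, 4 = very important).
counts = {1: 2, 2: 6, 3: 40, 4: 72}

# Expand the frequency table into individual ordinal responses and take the median.
responses = np.repeat(list(counts.keys()), list(counts.values()))
print(np.median(responses))  # 4.0, matching the reported median for "Learning success"
```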

Frequency distributions of rank among the top five categories regarding importance

| Category | Voted into top five n | % | Rank 1 n | % | %T5 | Rank 2 n | % | %T5 | Rank 3 n | % | %T5 | Rank 4 n | % | %T5 | Rank 5 n | % | %T5 | Median |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Acceptance aspects | 22 | 18.3 | 4 | 3.3 | 18.2 | 1 | 0.8 | 4.5 | 5 | 4.2 | 22.7 | 5 | 4.2 | 22.7 | 7 | 5.8 | 31.8 | 4.0 |
| Business aspects | 14 | 11.7 | 2 | 1.7 | 14.3 | 0 | 0.0 | 0.0 | 4 | 3.3 | 28.6 | 4 | 3.3 | 28.6 | 4 | 3.3 | 28.6 | 4.0 |
| Cognitive aspects | 21 | 17.5 | 1 | 0.8 | 4.8 | 5 | 4.2 | 23.8 | 6 | 5.0 | 28.6 | 3 | 2.5 | 14.3 | 6 | 5.0 | 28.6 | 3.0 |
| Course-related aspects | 47 | 39.2 | 13 | 10.8 | 27.7 | 7 | 5.8 | 14.9 | 11 | 9.2 | 23.4 | 9 | 7.5 | 19.1 | 7 | 5.8 | 14.9 | 3.0 |
| Demographic differences | 13 | 10.8 | 4 | 3.3 | 30.8 | 0 | 0.0 | 0.0 | 2 | 1.7 | 15.4 | 3 | 2.5 | 23.1 | 4 | 3.3 | 30.8 | 4.0 |
| Influence from prior knowledge and experience | 11 | 9.2 | 2 | 1.7 | 18.2 | 2 | 1.7 | 18.2 | 4 | 3.3 | 36.4 | 1 | 0.8 | 9.1 | 2 | 1.7 | 18.2 | 3.0 |
| Instruction aspects | 18 | 15.0 | 2 | 1.7 | 11.1 | 4 | 3.3 | 22.2 | 2 | 1.7 | 11.1 | 1 | 0.8 | 5.6 | 9 | 7.5 | 50.0 | 4.5 |
| Learners’ learning aspects | 62 | 51.7 | 16 | 13.3 | 25.8 | 20 | 16.7 | 32.3 | 8 | 6.7 | 12.9 | 12 | 10.0 | 19.4 | 6 | 5.0 | 9.7 | 2.0 |
| Learners’ requirements | 40 | 33.3 | 3 | 2.5 | 7.5 | 12 | 10.0 | 30.0 | 11 | 9.2 | 27.5 | 7 | 5.8 | 17.5 | 7 | 5.8 | 17.5 | 3.0 |
| Learning success | 85 | 70.8 | 34 | 28.3 | 40.0 | 19 | 15.8 | 22.4 | 15 | 12.5 | 17.6 | 12 | 10.0 | 14.1 | 5 | 4.2 | 5.9 | 2.0 |
| Mind-set and feelings before TEL | 9 | 7.5 | 1 | 0.8 | 11.1 | 2 | 1.7 | 22.2 | 1 | 0.8 | 11.1 | 2 | 1.7 | 22.2 | 3 | 2.5 | 33.3 | 4.0 |
| Mind-set and feelings during TEL | 12 | 10.0 | 0 | 0.0 | 0.0 | 1 | 0.8 | 8.3 | 2 | 1.7 | 16.7 | 5 | 4.2 | 41.7 | 4 | 3.3 | 33.3 | 4.0 |
| Motivational aspects | 51 | 42.5 | 16 | 13.3 | 31.4 | 14 | 11.7 | 27.5 | 8 | 6.7 | 15.7 | 12 | 10.0 | 23.5 | 1 | 0.8 | 2.0 | 2.0 |
| Requirements on teachers | 30 | 25.0 | 6 | 5.0 | 20.0 | 2 | 1.7 | 6.7 | 5 | 4.2 | 16.7 | 10 | 8.3 | 33.3 | 7 | 5.8 | 23.3 | 4.0 |
| Self-regulation aspects | 21 | 17.5 | 3 | 2.5 | 14.3 | 5 | 4.2 | 23.8 | 4 | 3.3 | 19.0 | 3 | 2.5 | 14.3 | 6 | 5.0 | 28.6 | 3.0 |
| Social aspects | 17 | 14.2 | 3 | 2.5 | 17.6 | 2 | 1.7 | 11.8 | 5 | 4.2 | 29.4 | 5 | 4.2 | 29.4 | 2 | 1.7 | 11.8 | 3.0 |
| Support processes | 38 | 31.7 | 2 | 1.7 | 5.3 | 9 | 7.5 | 23.7 | 7 | 5.8 | 18.4 | 8 | 6.7 | 21.1 | 12 | 10.0 | 31.6 | 4.0 |
| Teachers’ teaching aspects | 38 | 31.7 | 4 | 3.3 | 11.4 | 3 | 2.5 | 8.6 | 5 | 4.2 | 14.3 | 9 | 7.5 | 25.7 | 14 | 11.7 | 40.0 | 4.0 |
| Technical infrastructure aspects | 35 | 29.2 | 4 | 3.3 | 10.5 | 8 | 6.7 | 21.1 | 11 | 9.2 | 28.9 | 8 | 6.7 | 21.1 | 7 | 5.8 | 18.4 | 3.0 |
| Technology-related aspects | 16 | 13.3 | 0 | 0.0 | 0.0 | 4 | 3.3 | 25.0 | 4 | 3.3 | 25.0 | 1 | 0.8 | 6.3 | 7 | 5.8 | 43.8 | 3.5 |

Note: The percentages in the columns labeled “%T5” are computed relative to the number of respondents who voted the category into their top five (the “Voted into top five” column), not relative to all 120 respondents
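As a worked check of that definition, using the “Learning success” row (a minimal Python sketch; the variable names are ours):

```python
# "Learning success" was voted into the top five by 85 of the 120 respondents;
# 34 of those 85 placed it at rank 1.
respondents, voted_top5, rank1 = 120, 85, 34

print(round(rank1 / respondents * 100, 1))  # 28.3 -> the "%" column for rank 1
print(round(rank1 / voted_top5 * 100, 1))   # 40.0 -> the "%T5" column for rank 1
```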

Frequency distributions according to time invested

| Category | Very little (1) n | % | Little (2) n | % | Much (3) n | % | Very much (4) n | % | Median |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Acceptance aspects | 22 | 18.3 | 43 | 35.8 | 37 | 30.8 | 18 | 15.0 | 2.0 |
| Business aspects | 42 | 35.0 | 37 | 30.8 | 29 | 24.4 | 12 | 10.0 | 2.0 |
| Cognitive aspects | 13 | 10.8 | 41 | 34.2 | 47 | 39.2 | 19 | 15.8 | 3.0 |
| Course-related aspects | 7 | 5.8 | 17 | 14.2 | 47 | 39.2 | 19 | 15.8 | 3.0 |
| Demographic differences | 44 | 36.7 | 55 | 45.8 | 11 | 9.2 | 10 | 8.3 | 2.0 |
| Influence from prior knowledge and experience | 25 | 20.8 | 54 | 45.0 | 28 | 23.3 | 13 | 10.8 | 2.0 |
| Instruction aspects | 12 | 10.0 | 37 | 30.8 | 44 | 36.7 | 27 | 22.5 | 3.0 |
| Learners’ learning aspects | 10 | 8.3 | 27 | 22.5 | 59 | 49.2 | 24 | 20.0 | 3.0 |
| Learners’ requirements | 12 | 10.0 | 43 | 35.8 | 41 | 34.2 | 24 | 20.0 | 3.0 |
| Learning success | 6 | 5.0 | 15 | 12.5 | 50 | 41.7 | 49 | 40.8 | 3.0 |
| Mind-set and feelings before TEL | 34 | 28.8 | 52 | 43.3 | 26 | 21.7 | 8 | 6.7 | 2.0 |
| Mind-set and feelings during TEL | 23 | 19.2 | 53 | 44.2 | 32 | 26.7 | 12 | 10.0 | 2.0 |
| Motivational aspects | 11 | 9.2 | 26 | 21.7 | 49 | 40.8 | 34 | 28.3 | 3.0 |
| Requirements on teachers | 9 | 7.5 | 31 | 25.8 | 50 | 41.7 | 30 | 25.0 | 3.0 |
| Self-regulation aspects | 21 | 17.5 | 49 | 40.8 | 34 | 28.3 | 16 | 13.3 | 2.0 |
| Social aspects | 25 | 20.8 | 48 | 40.0 | 38 | 31.7 | 9 | 7.5 | 2.0 |
| Support processes | 14 | 11.7 | 18 | 15.0 | 59 | 49.2 | 29 | 24.2 | 3.0 |
| Teachers’ teaching aspects | 6 | 5.0 | 22 | 18.3 | 48 | 40.0 | 44 | 36.7 | 3.0 |
| Technical infrastructure aspects | 25 | 20.8 | 27 | 22.5 | 46 | 38.3 | 22 | 18.3 | 3.0 |
| Technology-related aspects | 25 | 20.8 | 40 | 33.3 | 44 | 36.7 | 11 | 9.2 | 2.0 |

Frequency distributions of rank among the top five categories regarding time investment

| Category | Voted into top five n | % | Rank 1 n | % | %T5 | Rank 2 n | % | %T5 | Rank 3 n | % | %T5 | Rank 4 n | % | %T5 | Rank 5 n | % | %T5 | Median |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Acceptance aspects | 23 | 19.2 | 2 | 1.7 | 8.7 | 6 | 5.0 | 26.1 | 1 | 0.8 | 4.3 | 5 | 4.2 | 21.7 | 9 | 7.5 | 39.1 | 4.0 |
| Business aspects | 13 | 10.8 | 1 | 0.8 | 7.7 | 2 | 1.7 | 15.4 | 2 | 1.7 | 15.4 | 3 | 2.5 | 23.1 | 6 | 5.0 | 46.2 | 4.0 |
| Cognitive aspects | 20 | 16.7 | 1 | 0.8 | 5.0 | 2 | 1.7 | 10.0 | 5 | 4.2 | 25.0 | 9 | 7.5 | 45.0 | 3 | 2.5 | 15.0 | 4.0 |
| Course-related aspects | 41 | 34.2 | 8 | 6.7 | 19.5 | 12 | 10.0 | 29.3 | 10 | 8.3 | 24.4 | 8 | 6.7 | 19.5 | 3 | 2.5 | 7.3 | 3.0 |
| Demographic differences | 11 | 9.2 | 4 | 3.3 | 36.4 | 1 | 0.8 | 9.1 | 2 | 1.7 | 18.2 | 2 | 1.7 | 18.2 | 2 | 1.7 | 18.2 | 3.0 |
| Influence from prior knowledge and experience | 12 | 10.0 | 2 | 1.7 | 16.7 | 4 | 3.3 | 33.3 | 2 | 1.7 | 16.7 | 1 | 0.8 | 8.3 | 3 | 2.5 | 25.0 | 2.5 |
| Instruction aspects | 22 | 18.3 | 4 | 3.3 | 18.2 | 2 | 1.7 | 9.1 | 6 | 5.0 | 27.3 | 5 | 4.2 | 22.7 | 5 | 4.2 | 22.7 | 3.0 |
| Learners’ learning aspects | 63 | 52.5 | 17 | 14.2 | 27.0 | 16 | 13.3 | 25.4 | 10 | 8.3 | 15.9 | 11 | 9.2 | 17.5 | 9 | 7.5 | 14.3 | 2.0 |
| Learners’ requirements | 40 | 33.3 | 5 | 4.2 | 12.5 | 15 | 12.5 | 37.5 | 11 | 9.2 | 27.5 | 3 | 2.5 | 7.5 | 6 | 5.0 | 15.0 | 2.5 |
| Learning success | 79 | 65.8 | 33 | 27.5 | 41.8 | 17 | 14.2 | 21.5 | 9 | 7.5 | 11.4 | 5 | 4.2 | 6.3 | 15 | 12.5 | 19.0 | 2.0 |
| Mind-set and feelings before TEL | 8 | 6.7 | 1 | 0.8 | 12.5 | 1 | 0.8 | 12.5 | 3 | 2.5 | 37.5 | 3 | 2.5 | 37.5 | 0 | 0.0 | 0.0 | 3.0 |
| Mind-set and feelings during TEL | 15 | 12.5 | 1 | 0.8 | 6.7 | 2 | 1.7 | 13.3 | 4 | 3.3 | 26.7 | 2 | 1.7 | 13.3 | 6 | 5.0 | 40.0 | 4.0 |
| Motivational aspects | 41 | 34.2 | 12 | 10.0 | 29.3 | 8 | 6.7 | 19.5 | 8 | 6.7 | 19.5 | 7 | 5.8 | 17.1 | 6 | 5.0 | 14.6 | 3.0 |
| Requirements on teachers | 28 | 23.3 | 7 | 5.8 | 25.0 | 3 | 2.5 | 10.7 | 5 | 4.2 | 17.9 | 9 | 7.5 | 32.1 | 4 | 3.3 | 14.3 | 3.0 |
| Self-regulation aspects | 22 | 18.3 | 0 | 0.0 | 0.0 | 3 | 2.5 | 13.6 | 6 | 5.0 | 27.3 | 10 | 8.3 | 45.5 | 3 | 2.5 | 13.6 | 4.0 |
| Social aspects | 18 | 15.0 | 2 | 1.7 | 11.1 | 2 | 1.7 | 11.1 | 6 | 5.0 | 33.3 | 5 | 4.2 | 27.8 | 3 | 2.5 | 16.7 | 3.0 |
| Support processes | 42 | 35.0 | 5 | 4.2 | 11.9 | 7 | 5.8 | 16.7 | 11 | 9.2 | 26.2 | 8 | 6.7 | 19.0 | 11 | 9.2 | 26.2 | 3.0 |
| Teachers’ teaching aspects | 55 | 45.8 | 9 | 7.5 | 16.4 | 14 | 11.7 | 25.5 | 11 | 9.2 | 20.0 | 16 | 13.3 | 29.1 | 5 | 4.2 | 9.1 | 3.0 |
| Technical infrastructure aspects | 30 | 25.0 | 3 | 2.5 | 10.0 | 3 | 2.5 | 10.0 | 4 | 3.3 | 13.3 | 5 | 4.2 | 16.7 | 15 | 12.5 | 50.0 | 4.5 |
| Technology-related aspects | 17 | 14.2 | 3 | 2.5 | 17.6 | 1 | 0.8 | 5.9 | 4 | 3.3 | 23.5 | 3 | 2.5 | 17.6 | 6 | 5.0 | 35.3 | 4.0 |

Note: As above, the percentages in the columns labeled “%T5” are computed relative to the number of respondents who voted the category into their top five (the “Voted into top five” column), not relative to all 120 respondents

Note

1. Includes important and very important.

References

Acevedo, M. and Krueger, J.I. (2004), “Two egocentric sources of the decision to vote: the voter’s illusion and the belief in personal relevance”, Political Psychology, Vol. 25 No. 1, pp. 115-134.

Alsabawy, A.Y., Cater-Steel, A. and Soar, J. (2016), “Determinants of perceived usefulness of e-learning systems”, Computers in Human Behavior, Vol. 64, November, pp. 843-858.

Bateman, S., Brooks, C., McCalla, G. and Brusilovsky, P. (2007), “Applying collaborative tagging to e-learning”, paper presented in Banff, Alberta.

Bekebrede, G., Warmelink, H. and Mayer, I.S. (2011), “Reviewing the need for gaming in education to accommodate the net generation”, Computers & Education, Vol. 57 No. 2, pp. 1521-1529.

Bloom, B.S., Engelhart, M.D., Furst, E.J., Hill, W.H. and Krathwohl, D.R. (1956), Taxonomy of Educational Objectives: The Classification of Educational Goals: Handbook I: Cognitive Domain, David McKay Company, New York, NY.

Branch, R.M. and Merrill, M.D. (2012), “Chapter 2: characteristics of instructional design models”, in Reiser, R.A. (Ed.), Trends and Issues in Instructional Design and Technology, 3rd ed., international ed., Pearson, Boston, MA., pp. 8-16.

Bratengeyer, E., Steinbacher, H.-P. and Friesenbichler, M. (2016), “Die österreichische Hochschul-E-Learning-Landschaft: Studie zur Erfassung des Status quo der E-Learning-Landschaft im tertiären Bildungsbereich hinsichtlich Strategie, Ressourcen, Organisation und Erfahrungen”.

Brosius, F. (1999), SPSS 8.0: Professionelle Statistik unter Windows, Nachdruck, MITP-Verlag, Bonn.

Bullen, M. and Morgan, T. (2011), “Digital learners not digital natives”, La Cuestión Universitaria, Vol. 7, pp. 60-68.

Chan, T.-W., Roschelle, J., Hsi, S., Kinshuk, Sharples, M., Brown, T., Patton, C., Cherniavsky, J., Pea, R., Norris, C., Soloway, E., Balacheff, N., Scardamalia, M., Dillenbourg, P., Looi, C.-K., Milrad, M. and Hoppe, U. (2006), “One-to-one technology-enhanced learning: an opportunity for global research collaboration”, Research and Practice in Technology Enhanced Learning, Vol. 1 No. 1, pp. 3-29.

Cheawjindakarn, B., Suwannatthachote, P. and Theeraroungchaisri, A. (2012), “Critical success factors for online distance learning in higher education: a review of the literature”, Creative Education, Vol. 3, Supplement, pp. 61-66.

Chen, Y.-C., Yeh, R.C., Lou, S.-J. and Lin, Y.-C. (2013), “What drives a successful web-based language learning environment? An empirical investigation of the critical factors influencing college students’ learning satisfaction”, Procedia – Social and Behavioral Sciences, Vol. 103, November, pp. 1327-1336.

Cochrane, T.D. (2010), “Exploring mobile learning success factors”, Research in Learning Technology, Vol. 18 No. 2, pp. 133-148.

de Bruin, W.B., Parker, A.M. and Fischhoff, B. (2007), “Individual differences in adult decision-making competence”, Journal of Personality and Social Psychology, Vol. 92 No. 5, pp. 938-956.

Dick, W., Carey, L. and Carey, J.O. (2015), The Systematic Design of Instruction, 8th ed., Pearson, Boston, MA.

Dror, I.E. (2008), “Technology enhanced learning: the good, the bad, and the ugly”, Pragmatics & Cognition, Vol. 16 No. 2, pp. 215-223.

Finucane, M.L., Mertz, C.K., Slovic, P. and Schmidt, E.S. (2005), “Task complexity and older adults’ decision-making competence”, Psychology and Aging, Vol. 20 No. 1, pp. 71-84.

Frohlich, M.T. (2002), “Techniques for improving response rates in OM survey research”, Journal of Operations Management, Vol. 20 No. 1, pp. 53-62.

Frydenberg, J. (2002), “Quality standards in e-learning: a matrix of analysis”, International Review of Research in Open and Distance Learning, Vol. 3 No. 2.

Ghirardini, B. (2011), E-learning Methodologies: A Guide for Designing and Developing e-Learning Courses, Food and Agriculture Organization of the United Nations, Rome.

Goodyear, P. and Retalis, S. (2010), “Learning, technology and design”, in Goodyear, P. and Retalis, S. (Eds), Design Patterns and Pattern Languages, Technology-Enhanced Learning, Sense Publishers, Rotterdam, Boston, MA and Taipei, pp. 1-27.

Henrich, A. and Sieber, S. (2009), “Blended learning and pure e-learning concepts for information retrieval: experiences and future directions”, Information Retrieval, Vol. 12 No. 2, pp. 117-147.

Joo, Y.J., Lim, K.Y. and Kim, E.K. (2011), “Online university students’ satisfaction and persistence: examining perceived level of presence, usefulness and ease of use as predictors in a structural model”, Computers & Education, Vol. 57 No. 2, pp. 1654-1664.

Juliusson, E.A., Karlsson, N. and Gärling, T. (2005), “Weighing the past and the future in decision making”, European Journal of Cognitive Psychology, Vol. 17 No. 4, pp. 561-575.

Keller, H. and Karau, S.J. (2013), “The importance of personality in students’ perceptions of the online learning experience”, Computers in Human Behavior, Vol. 29 No. 6, pp. 2494-2500.

Leidecker, J.K. and Bruno, A.V. (1984), “Identifying and using critical success factors”, Long Range Planning, Vol. 17 No. 1, pp. 23-32.

Lohr, L. (1998), “Using ADDIE to design a web-based training interface”, paper presented March 10-14, Washington, DC.

McGill, T.J., Klobas, J.E. and Renzi, S. (2014), “Critical success factors for the continuation of e-learning initiatives”, Internet and Higher Education, Vol. 22, July, pp. 24-36.

Margaryan, A., Littlejohn, A. and Vojt, G. (2011), “Are digital natives a myth or reality?: University students’ use of digital technologies”, Computers & Education, Vol. 56 No. 2, pp. 429-440.

Merrill, M.D., Drake, L., Lacy, M.J., Pratt, J. and ID2 Research Group (1996), “Reclaiming instructional design”, Educational Technology, Vol. 36 No. 5, pp. 5-7.

Molenda, M. (2003), “In search of the elusive ADDIE model”, Performance Improvement, Vol. 42 No. 5, pp. 34-36.

Noguera Fructuoso, I. (2015), “How millennials are changing the way we learn: the state of the art of ICT integration in education”, Revista Iberoamericana de Educación a Distancia, Vol. 18 No. 1, pp. 45-65.

Paulsen, M.F. (2003), “Experiences with learning management systems in 113 European institutions”, Educational Technology & Society, Vol. 6 No. 4, pp. 134-148.

Pedró, F. (2006), “The new millennium learners: challenging our views on ICT and learning”, available at: www.oecd.org/edu/ceri/38358359.pdf

Pinto, J.K. and Slevin, D.P. (1987), “Critical factors in successful project implementation”, IEEE Transactions on Engineering Management, Vol. 34 No. 1, pp. 22-27.

Prensky, M. (2001), “Digital natives, digital immigrants part 1”, On the Horizon, Vol. 9 No. 5.

Quatember, A. (2011), Statistik ohne Angst vor Formeln: Das Studienbuch für Wirtschafts- und Sozialwissenschaftler, Studium Economic BWL, Vol. 3, aktualisierte Aufl., Pearson Studium, München.

Redecker, C. (2009), “Review of learning 2.0 practices: study on the impact of web 2.0 innovations on education and training in Europe”, The European Report on Science and Technology Indicators, 1, Amt für amtliche Veröffentlichungen der Europäischen Gemeinschaften, Luxemburg.

Reiser, R.A. (2001), “A history of instructional design and technology. Part I: a history of instructional media”, Educational Technology Research and Development, Vol. 49 No. 1, pp. 53-64.

Richter, T. and Pawlowski, J.M. (2015), “The need for standardization of context metadata for e-learning environments”, in Lee, T. (Ed.), Proceedings of the e-ASEM Conference, Open University Korea, Seoul.

Rogers, P.L. (2002), “Chapter I: teacher-designers: how teachers use instructional design in real classrooms”, in Rogers, P.L. (Ed.), Designing Instruction for Technology-Enhanced Learning, Idea Group Publ, Hershey, PA, pp. 1-17.

Sagi, A. and Friedland, N. (2007), “The cost of richness: the effect of the size and diversity of decision sets on post-decision regret”, Journal of Personality and Social Psychology, Vol. 93 No. 4, pp. 515-524.

Schweighofer, P. and Ebner, M. (2015), “Aspects to be considered when implementing technology-enhanced learning approaches: a literature review”, Future Internet, Vol. 7 No. 1, pp. 26-49.

Schweighofer, P., Grünwald, S. and Ebner, M. (2015), “Technology enhanced learning and the digital economy. A literature review”, International Journal of Innovation in the Digital Economy, Vol. 6 No. 1, pp. 50-62.

Selim, H.M. (2007), “Critical success factors for e-learning acceptance: Confirmatory factor models”, Computers & Education, Vol. 49 No. 2, pp. 396-413.

Soong, M.H.B., Chan, H.C., Chua, B.C. and Loh, K.F. (2001), “Critical success factors for on-line course resources”, Computers & Education, Vol. 36 No. 2, pp. 101-120.

Stacey, E. and Gerbic, P. (2008), “Success factors for blended learning”, paper presented at ASCILITE, Melbourne.

Statistik Austria (2015), “Structural business statistics”, available at: www.statistik.at/web_en/statistics/Economy/enterprises/structural_business_statistics/index.html (accessed August 7, 2015).

Straits, B.C. and Singleton, R.A. Jr (2011), Social Research: Approaches and Fundamentals, Oxford University Press, New York, NY.

Sun, P.-C., Tsai, R.J., Finger, G., Chen, Y.-Y. and Yeh, D. (2008), “What drives a successful e-Learning? An empirical investigation of the critical factors influencing learner satisfaction”, Computers & Education, Vol. 50 No. 4, pp. 1183-1202.

White, S. (2007), “Critical success factors for e-learning and institutional change – some organisational perspectives on campus-wide e-learning”, British Journal of Educational Technology, Vol. 38 No. 5, pp. 840-850.

Acknowledgements

The authors declare that they have no conflict of interest.

Corresponding author

Patrick Schweighofer is the corresponding author and can be contacted at: patrick.schweighofer@campus02.at

About the authors

Patrick Schweighofer currently works as Deputy Head of the degree program Information Technology & Business Informatics at the University of Applied Science CAMPUS 02 in Graz, Austria. In February 2012, he received a Master’s Degree in Information Technology and IT-Marketing and, since June 2013, he has been working toward his dissertation on the subject of structured implementation of technology-enhanced learning approaches at the Graz University of Technology.

Doris Weitlaner earned her MSc in Software Development – Economy at the Graz University of Technology and is currently employed as Research and Teaching Assistant at the Information Technologies & Business Informatics department of the CAMPUS 02 University of Applied Sciences in Graz, Austria. She mainly works on projects that address the topics of service engineering and data science with a special focus on quantitative research, multivariate data analyses and process management.

Martin Ebner is currently Head of the Department of Educational Technology at the Graz University of Technology and, in this role, is responsible for all university-wide e-learning activities. He holds an associate professorship in media informatics and also works as a senior researcher at the Institute for Information Systems and Computer Media. His research focuses strongly on e-learning, mobile learning, learning analytics, social media and the use of Web 2.0 technologies for teaching and learning. Martin gives a number of lectures in this area as well as workshops and talks at international conferences. To view publications as well as information about further research activities, please visit his website: http://martinebner.at

Hannes Rothe is currently a postdoctoral Research Associate at the Department of Information Systems at the Freie Universität Berlin. Since 2013, he has coordinated Entrepreneurship Education through the project Entrepreneurial Network University at the Freie Universität Berlin and Charité Universitätsmedizin. He holds a Doctoral Degree in business administration and performs research on the development of IT services. His research has been published in international conference proceedings and journals in the areas of service engineering and educational technologies. To view publications and information about additional research, please visit his website: http://hannesrothe.de
