E-learning is becoming a mainstream part of postgraduate medical education, yet the results of evaluation studies are contradictory in terms of efficiency and effectiveness. This thesis is concerned with the question of which factors influence the success of postgraduate medical e-learning (PGMeL). PGMeL is frequently evaluated on learning aims (knowledge, skills, or attitude/behavior), but little attention is given to the instructional design (ID) of the e-learning. ID is the link between the science of how people learn and daily practice: a process for designing instruction based on empirical principles. In other words, it covers the way the e-learning is designed, which technical aspects are used, which pedagogical elements are incorporated, and so on. The features of e-learning that provide potential for action are called affordances, and it is these affordances, the result of the ID, on which this thesis focuses.
Chapter one describes the current landscape of PGMeL evaluation studies. It takes the reader along a path introducing the development of the ‘e’ of e-learning (the history of the internet and its influence on e-learning) and the most prominent learning models used in PGMeL. This chapter does not aim to provide in-depth insight into learning and all its psychological models, but introduces the basics of cognitive load theory (CLT), multimedia learning, and adult learning theory. These three models help to give insight into the theoretical rationale behind certain ID choices in PGMeL. The development of the internet has had a great influence on the possible affordances of e-learning. Aspects such as communication with peers, which adult learning theory suggests is beneficial, became possible because of the evolution of the internet. After covering these elements of e-learning in the introduction, we present the five research questions that this thesis attempts to answer. The first concerns which instruments, or outcomes, are currently used to evaluate PGMeL. The insight gained into means of evaluation allows us to ask which indicators in the current literature can set best-practice norms for PGMeL. These indicators from the literature further raise the question of which are acknowledged and which are missing according to the main stakeholders, namely users, educational experts, and content creators. Having created a list of indicators, we then ask how they can be used to evaluate and design PGMeL. Finally, once such e-learning is created, we ask which factors influence implementation, and how a team can prepare for successful implementation.
Chapter two describes a systematic review aiming to identify and compare the outcomes and methods used to evaluate PGMeL. The initial search identified 5,973 articles, of which 418 were used for our analysis. The thematic analysis performed showed that the most frequently used learning aims of PGMeL are knowledge, skills, and attitude/behavior. Twelve instruments were used to evaluate a specific outcome, such as laparoscopic surgery skills. Only 4% (19/418) of the papers used an ID model or theory for the creation or evaluation of the e-learning. The most frequently used models were Kirkpatrick’s hierarchy, Gagné’s instructional design, the Heidelberg inventory, Kern’s curriculum development steps, and scales based on CLT. Chapter two provides short introductions to all these models, none of which is specifically aimed at medical postgraduates. This study shows that, apart from the learning aim, the most evaluated aspects are satisfaction, motivation, efficiency, and usefulness. Design models are rarely used and, when they are, are not aimed at PGMeL. This shows that the current literature has not yet reached any form of consensus about which aspects of PGMeL should be evaluated. There seems to be a great need for an evaluation tool that evaluates ID rather than learning aims alone, and which is properly constructed and validated.
Chapter three reports on an integrative review of the current literature, performed with the aim of seeking out and identifying quality indicators for PGMeL. The search resulted in 11,093 papers, and a selection based on titles, abstracts, and full texts resulted in 36 papers being used for the analysis. These papers yielded 72 unique indicators, which were organized into six domains: content, preparation, design/system, communication, assessment, and maintenance. We called this model the Postgraduate Medical E-learning model (postgraduate ME model); it is partially based on the ISO/IEC 19796 standard and draws on cognitive load principles. Although most evaluation studies focus on the content of PGMeL, this study shows that content is only a part of the educational experience. The five other domains focus on ID, placing emphasis on the importance of evaluating this aspect as well.
Chapter four further explores the needs and expectations of learners, educational experts in postgraduate medical education, and commercial e-learning designers. Three focus group discussions with these stakeholders were performed, and the verbatim transcripts of the recordings were analyzed using King’s template analysis. Initially, 34 items arose, which were placed into an initial template based on the ME model of chapter three. The final template consisted of three domains of positive influencers (motivators, learning enhancers, and real-world translation) and three negative parameters (barriers, learning discouragers, and poor preparation). The interpretation of these domains revealed three general subjects which form the basis of PGMeL: motivate, learn, and apply. These domains provide a foundation for educational tools, and the individual categories can be adapted to fit the target audience. So far, however, these individual categories had not been validated for PGMeL, and the domains had not been proven useful in the daily practice of creating e-learning.
Chapter five continues with the items identified in chapters three and four. We performed a Delphi procedure with a group of 13 international educational experts and 10 experienced users. The Delphi started with 57 items resulting from the literature study (chapter three) and the focus group discussions (chapter four). Consensus was reached when a rate of agreement of more than two-thirds was achieved. After two rounds, 72 items had been addressed, of which 37 were accepted. These items were divided into the same three domains as in chapter four: motivate, learn, and apply. The 37 items from this chapter could now be used to create an ID model and evaluation instrument for PGMeL.
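The two-thirds consensus rule above can be made concrete with a small illustrative sketch (not part of the thesis; the function name and vote counts are hypothetical). With the 23-member panel of chapter five, an item needs at least 16 agreeing votes, since 15/23 falls below two-thirds and 16/23 exceeds it:

```python
# Illustrative sketch of a Delphi consensus check: an item is accepted
# when the rate of agreement is more than two-thirds of the panel.
from fractions import Fraction

def reaches_consensus(agree: int, panel: int,
                      threshold: Fraction = Fraction(2, 3)) -> bool:
    """Return True when agreement strictly exceeds the threshold."""
    return Fraction(agree, panel) > threshold

# Panel of 23 (13 experts + 10 experienced users):
print(reaches_consensus(16, 23))  # True  (16/23 ≈ 0.696 > 2/3)
print(reaches_consensus(15, 23))  # False (15/23 ≈ 0.652 < 2/3)
```

Using exact fractions rather than floating-point division avoids borderline rounding errors when agreement is close to the threshold.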
Chapter six aims to create an empirical ID model for PGMeL and compare it with the existing evaluation models discussed in chapter two. Analogous to the intervention mapping model, we arranged the 37 items from chapter five into eight chronological steps, proposed as building blocks that aim to guide creators rather than prescribe to them. The eight steps are (1) who, why, what; (2) educate; (3) real-world translation; (4) technology; (5) team; (6) budget; (7) time and timeline; and (8) evaluation. When comparing these steps to other models such as ADDIE, 4C/ID, Kern’s model, Gagné’s nine events, ASSURE, and the models of Merrill and Kemp, none was as complete, and none was aimed at medical postgraduates. Chapter six presents the first evidence- and theory-based ID model aimed at PGMeL. Although certain steps are robust and have a deep theoretical background in current research (such as Educate), others (such as Budget) have barely been touched upon and should be investigated more thoroughly so that proper guidelines can be provided for them as well.
Chapter seven discusses the next part of ID, namely implementation. We performed a series of 10 semi-structured interviews with experienced e-learning creators, after which we carried out a thematic analysis to name and describe categories and themes. Although this was not the objective of the study, the participants stressed the importance of a definition of ‘success’. Associated with this definition were: reaching your target audience, achieving learning aims, satisfying your audience, and maintaining continuity. The thematic analysis revealed 11 categories, which were divided into three groups named after the people who influence them the most, yielding creator-, organization-, and learner-dependent factors. The first theme (creator-dependent factors) contained the categories of learning aim, pedagogical strategies, content expertise, evaluation, and motivational pathway. The second (organization-dependent factors) encompassed management support, resources, and culture. The last (learner-dependent factors) consisted of technology, motivators/barriers, and value. We compared these factors with two different innovation models (Rogers’ diffusion of innovation and Kotter’s eight steps of change management) and found an (incomplete) overlap between them; however, the factors of this chapter remain unique to PGMeL. Future studies can both evaluate the use of these innovation models in creating PGMeL and assess the effect of the organizational categories in greater depth.
Chapter eight follows the AMEE (Association for Medical Education in Europe) seven-step process for creating an evaluation instrument for medical education. In five steps, this study aimed to create and validate a survey that we called the MEES. The first step was creating the survey from the 37 items from chapter five, followed by testing readability and question interpretation. The third step was adjusting, rewriting, and translating the survey. This was followed by gathering completed surveys from three international e-learning modules, after which we finally evaluated the usefulness, understandability, and added value of this survey through focus group discussions with the e-learning creators. A total of 158 responses led to three focus group discussions with a total of ten participants. The usefulness of the MEES was much appreciated, understandability was good, and added value was high. Four items needed additional explanation by the authors, and a Creators’ Manual was created at their request. We briefly discussed the number of responses needed and concluded that more is better; ultimately, however, one has to work with what is available. The next steps would be to see whether improvement can be measured by using the MEES, and to continue working on its understandability in different languages and cultural groups.
Chapter nine demonstrates the effect of evaluating just one of the 37 items of chapter five, namely interactivity. This chapter aims to evaluate face-to-face information provision in patient counselling for prenatal screening compared to interactive and non-interactive video information provision. We performed a prospective, non-inferiority, cluster-randomized, controlled trial comparing these three groups. One hundred and forty-one women were included, randomized, and analyzed. The baseline characteristics were comparable. The intervention group (digital information provision) was non-inferior to the control group (face-to-face) in satisfaction. The knowledge grade was significantly higher in the intervention group, and the duration of the counselling following the information provision was significantly shorter in the video group (16 versus 23 minutes). This also implies a cost benefit from this type of information provision. When we compared the interactive with the non-interactive video group, there was no difference in outcomes. This is in line with other studies, which suggest that instructional videos benefit most from a combination of segmenting, practice, and pauses. This chapter demonstrated that the added value of e-learning will most probably be found not in single affordances, but in the combination of many.
Chapter ten combines the previous research findings to answer the research questions from chapter one. Each question is answered, and the answer is put into the perspective of this thesis or the current literature. Placing the findings of this thesis in the perspective of the current view on PGMeL, we propose that e-learning is not the future: it is now, and its use will only increase. Furthermore, it must now be accepted that e-learning, if designed properly, is non-inferior to other forms of education, and that ID is fundamental to the future evaluation of the effectiveness and efficiency of PGMeL. PGMeL should be theory-based and aimed at the most specific target audience available. This thesis builds on the fundamentals of motivate, learn, and apply, which should be complemented by specific affordances based on learning theory and aimed at the right target audience. We discuss the effect of learning theories on these affordances and suggest that the development of learning theory should take into account the lifestyle of the learner.
Subsequently, the strengths and limitations of this thesis are discussed. The strengths lie in the rigorous methodology which led to the identification of the 37 items in chapter five. Other strengths are the involvement of learners from the beginning of this exploration, and the focus on a specific target audience: medical postgraduates. The limitations pertain to the relatively generic nature of motivate, learn, and apply, and the lack of cross-cultural validation. E-learning spreads very easily, but there is no evidence as to whether the findings of this thesis also apply in non-Western cultures. The practical implications of this thesis can be found in chapter six (the eight-step ID model), chapter seven (the 11 implementation factors), and chapter eight (the MEES). We hope that any medical educator who plans to create e-learning aimed at medical postgraduates can use these chapters to design their e-learning, create it, and evaluate their initial design. Finally, we discuss the future of e-learning, discussing less how we will make use of digital media (as we already do) and more what we will teach in a world of exponentially increasing digitalization and automation.