DeSchryver (2014) presents a notable study on using web-mediated knowledge synthesis for higher-order thinking and for a new ecology of learning. He describes web-mediated knowledge synthesis and higher-order thinking as deep learning paradigms tied to creativity, knowledge integration, and problem-solving skills. He argues that using web-mediated knowledge synthesis in our daily lives can foster a new ecology of learning that coincides with technological knowledge, pedagogical knowledge, and content knowledge (TPACK). He urges teachers not to view the TPACK components as separate elements in web-mediated environments. He argues that “there is also a need for scholarly efforts to better understand the skills necessary to engage in higher, or generative, thinking in web-mediated environments” (p. 4). This is an important argument because web-mediated environments are not well-structured domains. That is, web-mediated environments are ill-structured domains and thus require the acquisition of knowledge, self-regulatory skills, and critical thinking skills coupled with an ecology-of-learning ethos. Finally, he argues that good educational research starts with a good literature review, theoretical framework, and foci.


DeSchryver (2014) did a brilliant job of designing the elements of the web-mediated knowledge synthesis and its conceptual framework. Both comprise a taxonomy of knowledge variables and synthesis. The taxonomy of synthesis is grounded in a comprehensive review of the literature and builds on the work of renowned scholars. The literature review defines and operationalizes the multiple variables noted throughout the taxonomy of synthesis. The references are scholarly and peer reviewed, and the content knowledge of the conceptual framework is grounded in cognitive and learning theories. For example, he conceptualizes higher-order thinking through generative synthesis. The information is reliable because DeSchryver (2014) theorizes and operationalizes a complex phenomenon, and he leaves the door open for future research on web-mediated knowledge synthesis.


The DeSchryver (2014) study provides an understanding of grounded research in a qualitative research design. The conceptual framework fosters higher-order thinking in educational technology. Higher-order thinking helps the learner develop problem-solving and critical thinking skills in a web-mediated environment. The study demonstrates how to use a taxonomy of technological knowledge, pedagogical knowledge, and content knowledge (TPACK) in an online environment. The study, furthermore, reiterates that the TPACK components are not mutually exclusive and work together in an ill-structured domain or web-mediated environment. Finally, the study helps my understanding of knowledge synthesis in a web-mediated environment.

DeSchryver, M. (2014). Higher order thinking in an online world: Toward a theory of web-mediated knowledge synthesis. Teachers College Record, 116(12), 1-44.


Problem Statement

Statement of Problem

Erhel and Jamet (2013) undertook a quantitative study with a survey questionnaire. The researchers explored how educational games could enhance learners’ cognitive capabilities, focusing on digital game-based learning (DGBL) as an affordance. Erhel and Jamet (2013) measured the effects of DGBL under learning and entertainment conditions. The study was titled “Digital Game-Based Learning: Impact of Instructions and Feedback on Motivation and Learning Effectiveness.” As such, the researchers sought to examine how DGBL could be deployed effectively and cognitively under learning, motivational, and instructional conditions. In the literature review, other DGBL and serious game researchers agreed that digital learning games have what it takes to become an effective learning medium (Hung et al., 2012; Kay, 2012; Yadav et al., 2011). In general, researchers could conduct more qualitative or quantitative studies and pinpoint how deep learning could take place using a digital learning game. It is no secret that K-12 and adult learners spend many hours playing digital games for entertainment. Two good research questions could be as follows: 1) How could DGBL be used as a deep learning method while also fostering intrinsic motivation? And 2) How can DGBL be transformed from serious game playing during leisure time to deep learning with motivational effects? In both research questions, motivation would be the exogenous and controlling variable.

Additionally, Erhel and Jamet (2013) reported on problems across the DGBL and serious game literature and contended that many other researchers have studied and demonstrated the effects of DGBL in an unsystematic fashion in terms of learning and motivational effects. Erhel and Jamet (2013) argued that DGBL “benefits have never been systematically demonstrated” (p. 156). Erhel and Jamet (2013) therefore presented an arguable case for undertaking DGBL research and exploring the effects of deep learning and motivation in DGBL systematically. The researchers sought to explore the following variables systematically: learning instruction, entertainment instruction, performance goals, mastery goals, and intrinsic motivation.

Statement of Needs

Erhel and Jamet (2013) postulated that DGBL may yield cognitive benefits under learning and entertainment conditions. The researchers, however, argued that DGBL needs a systematic review of the affordances it offers learners. The researchers further argued that DGBL could be a digital medium that brings about learning benefits with motivational effects in a learning environment. Consequently, the researchers conducted a comprehensive review of the literature and cited the benefits and effects that DGBL may offer learners in terms of motivation and engagement. The comprehensive literature review revealed the pros and cons of DGBL as a learning medium (Berliner, 2002; Hung et al., 2012; Kay, 2012). For instance, one scholar argued that DGBL was more beneficial in comparison to the traditional classroom or conventional environments. On the contrary, other scholars highlighted cons regarding the benefits of DGBL in a learning environment. Furthermore, other scholars reported that DGBL was a weak digital medium with minimal motivational benefits. One scholar additionally questioned the benefits of DGBL and argued it “imposes considerable constraints that make it extremely difficult to integrate deep content, strategies, and skills” (Erhel & Jamet, 2013, p. 157). Consequently, a myriad of pros and cons about DGBL was highlighted across the literature in terms of learning effects and benefits. Noticeably, the literature review did not compare DGBL to the learning effects and benefits of other digital mediums, for example, video games or digital storytelling. Thus, a good research question could be “How does DGBL compare to other digital mediums in terms of learning effects and benefits?”

Therefore, as noted above, the studies of DGBL were fragmented across the literature and posited many confounding factors in terms of cognitive lessons and strategies. As a result, Erhel and Jamet (2013) conducted a systematic review and quantitative study of DGBL in terms of its effects on cognitive learning and processing. The researchers claimed that “no one has so far subjected the [DGBL] games’ instructions to scientific scrutiny…. even though they are a fundamental feature of DGBL” (Erhel & Jamet, 2013, p. 158). The latter further justified the need to undertake this study in terms of scientific scrutiny and the effects of DGBL as a serious game medium.

Statement of Researchability

Erhel and Jamet’s (2013) literature review revealed many confounding factors and contradictory arguments about the pros and cons of DGBL. DGBL additionally added to the long-standing debate over whether technologies make us smarter (see Clark, 1983, 1994; Kozma, 1994). DGBL, in terms of deep learning benefits, would be a researchable topic on its own merit. Further research on DGBL would add to the body of scholarly knowledge in terms of cognitive benefits. The literature on DGBL would require additional qualitative and quantitative research in educational technology, especially regarding learning benefits in serious game environments (SGEs). Moreover, design-based DGBL studies would also add significantly to the body of scholarly knowledge. Additional DGBL studies would provide empirical data and quality evidence on the effects of DGBL in terms of deep cognitive learning. Therefore, DGBL would be a researchable topic and would be beneficial to educational technology and research.

Literature Review

Statement of Conceptual Framework

Erhel and Jamet (2013) did not present a conceptual framework in a single section. In fact, the researchers never mention the term conceptual framework. Locating the conceptual framework, in my opinion, was like building a complex jigsaw puzzle with scattered or missing pieces. However, the researchers did connect the title of the study and the context of the abstract and body of the study to the general discussion and results. Notably, Table 1 (the description of motivational components) comprised the dependent and independent variables and measurements that surround a conceptual framework. Table 1, for instance, highlighted the protocols and the dependent variables (performance goals, mastery goals, and intrinsic motivation) that were assessed against the independent variable (learning vs. entertainment instruction). In my opinion, Table 1 could be renamed from Description of Components to Conceptual Framework.

Erhel and Jamet (2013) “sought to identify the conditions under which DGBL is most effective, by analyzing the effects of two different types of instructions (learning instruction vs. entertainment instruction)” (p. 157). Erhel and Jamet (2013) stated that, as the study scaffolded, the “only variation would be [the] instructions given for a digital learning game called ASTRA” (p. 158). ASTRA, the intervention, was administered to adults aged 18 to 26, not adolescents. Using a defined conceptual framework, future studies of DGBL using the ASTRA intervention could be applied to other segments of the population, such as adolescents, adult learners, learners of lower socioeconomic status, and learners with disabilities.

Statement of Relevance to Theory

Erhel and Jamet (2013), as well as other scholars, argued that developing educational technology and theory is difficult because it requires a comprehensive understanding of complicated variations that are technologically, contextually, methodologically, and pedagogically interconnected (see Mishra & Koehler, 2013). Some scholars have even argued that educational research is theoretically the hardest science (see Berliner, 2002). As such, Erhel and Jamet (2013) categorized DGBL as an educational technology with learning and motivational benefits in a learning and instructional environment. Other scholars, noted in the literature review, saw DGBL as having relevance to theory, pedagogy, methodology, and practice. It may well be arguable that DGBL could fit into the constructivist theory debate on learning and instruction.

According to the literature review, scholars have examined DGBL through flow theory, cognitive theory, and cognitive load theory. For instance, scholars of DGBL “have looked at the relevance of flow theory [that is] the immediate subjective experience that occurs when an individual engages in an activity” (Erhel & Jamet, 2013, p. 157). Another relevant theory is cognitive theory, which distinguishes “rote learning (i.e., surface learning) and meaningful learning (i.e., deep learning)” (Erhel & Jamet, 2013, p. 158) in cognitive processing. A third relevant theory is cognitive load theory, which concerns the “effort engaged by learners in information processing…a key component of learning performance” (Erhel & Jamet, 2013, p. 164). In addition, DGBL has relevance to theory in terms of intrinsic motivation, the effects of pedagogy/andragogy, and future research methodologies. In summary, DGBL could fit into many cognitive and learning theories and theoretical perspectives.

Statement of Relevance to References and Conclusion

Erhel and Jamet (2013) did a careful job of selecting scholarly articles. After careful evaluation of the reference section, it was noted that the references were scholarly and similar in content and context; the journals, books, and other materials were likewise scholarly. The references aligned logically with the conclusion, and the researchers’ aims aligned logically with the research question. For example, the researchers aimed in Experiment 1 “to ascertain whether the effects of instructions…would also manifest themselves during DGBL” (p. 158). They aimed in Experiment 2 “to determine the presence of KCR feedback in DGBL…learning strategies induced by instruction” (p. 162). The overarching research question was “is deep learning compatible with serious games” (p. 165). The research question was in alignment with the aim of the study. That is, in both experiments, the researchers showed that the effects of instruction and the KCR intervention fostered deep learning with DGBL and serious games.

Erhel and Jamet (2013) connected the references to the conclusion by undertaking a comprehensive literature review coupled with quantitative research and a survey questionnaire. The survey questionnaire used a 7-point Likert scale. In the data analysis, the researchers backed up their claims with evidence of the learning and motivating effects of DGBL. In the findings, the researchers addressed DGBL in terms of learning and entertainment conditions. Even though the study connected the references to the conclusions, in my opinion the study was not generalizable or transferable to the general population. The sample was too narrow and focused on only a very small segment of the population, that is, undergraduate students. Further research would be required to connect the references to the conclusion for generalizability and transferability.

Statement of Relevance to Problem Investigated

On a good note, Erhel and Jamet (2013) addressed the problems of the study throughout the general discussion. For example, across the two designed experiments, the researchers posited that learners in Experiment 2 did much better with a combination of knowledge of correct response (KCR) feedback and learning instructions. The KCR feedback intervention in Experiment 2 elicited deeper cognitive processing than in Experiment 1, where KCR feedback was not provided. Erhel and Jamet (2013) therefore argued that participants given KCR feedback in DGBL in Experiment 2 performed much better on reading comprehension than those in Experiment 1, who received no KCR feedback. Finally, Erhel and Jamet (2013) related KCR feedback to enhanced self-regulatory cognitive learning and self-efficacy.

In Experiment 1, the researchers posited that the learners did much better on comprehension with learning instructions and deeper cognitive processing. The researchers, however, noted a problem with paraphrase-type questions, which was not the case with inference-type questions. With regard to inference questions, the ASTRA environment elicited adverse responses. This dichotomy would have to be elucidated in future studies, according to the researchers. Another dichotomy was the fear of failure present in Experiment 1 but absent in Experiment 2, which would also have to be elucidated in future studies. In addition, the researchers highlighted three problems in the experiments. The first was the ASTRA intervention, which, according to the researchers, afforded minimal interaction with the participants. The second was the high scores on the tests, which could be construed as research bias. The third was the choice of methodology, which raised internal and external validity issues as well as generalizability issues.

Statement of Critique of the Research Questions and Hypotheses

Erhel and Jamet (2013) did not present the research questions or hypotheses in one place. The research questions and hypotheses were implied and scattered throughout the study, in my opinion. As stated above, a reader must dig deep into the study to recognize them. Critically speaking, locating the research questions and hypotheses required considerable intellectual energy. That effort could deter readers, who must assess a great deal of research across media; a reader could easily overlook a study such as this one when it does not clearly articulate its research questions or hypotheses in one place.

Beyond the paraphrase-type and inference-type questions, Erhel and Jamet (2013) clearly articulated that “one of the objectives of the present study was to answer the question ‘Is deep learning compatible with serious games’” (p. 164). The researchers did a good job of addressing this research question, and the results were clearly announced in the data analysis. The hypotheses, again, were not stated in one place; they were scattered throughout the paper and implied. For instance, Erhel and Jamet (2013) stated, “the ANOVA showed that the participants in the learning instruction condition performed significantly better than those in the entertainment instruction condition” (p. 163). This was the first hypothesis. Arguably, the second hypothesis was implied when participants in the entertainment instruction group performed better on comprehension than those in the learning instruction group; this hypothesis was the thrust of Experiment 1. Arguably, the third hypothesis was noted when participants given an entertainment instruction performed significantly more poorly on comprehension than those given the learning instruction. The fourth hypothesis was noted when, in the presence of KCR feedback, learners in the entertainment instruction condition performed significantly better on comprehension than those in the learning instruction condition. The fifth hypothesis was denoted when the researchers articulated that KCR feedback prompted the participants to process the DGBL content more deeply. Finally, the last implied hypothesis, in my opinion, was noted when the researchers argued that neither experiment revealed any effect of instruction on responses to the paraphrase-type questions, even though deep learning results in better memory storage. In summary, the researchers stated, “future research is needed to test [all hypotheses]” (p. 165). As stated earlier, and critically speaking, the research questions and hypotheses should have been clearly stated in one place.

Research Design and Data Analysis

Statement of Critique of the Methods and Research Design

Erhel and Jamet (2013) did a phenomenal job with the research design and survey questionnaire. That is, the researchers undertook a quantitative study as their research design, and their methodological approach used design-based experiments and a survey questionnaire. The researchers conducted two different experiments to measure the effects on cognitive information and cognitive processing in DGBL, analyzing the effects of a learning condition and an entertainment condition. With this approach, the researchers adopted a “value-added perspective,” meaning they incorporated ASTRA as an intervention. ASTRA, a digital learning game, had proven to be effective in other digital video game studies.

In Experiment 1, a total of 46 undergraduate students participated in a 35- to 40-minute experiment. The type of instruction (learning vs. entertainment) was the manipulated independent variable; the dependent variables included comprehension and motivation scores. The participants were introduced to four different diseases across five parts. The experiment took place in six different rooms, each with a computer, and consisted of six phases. The first phase was a pretest with conditions, and the second phase was putting on headphones and following the ASTRA commands. The third phase was a test about the diseases comprising 15 questions: 12 were performance related, and 3 were related to performance-goal avoidance. A 7-point Likert scale was used to measure the level of agreement. After completing the test about the diseases, the participants completed an additional 8 questions comprising paraphrase-type and inference-type items.
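To make the questionnaire scoring concrete, the following is a minimal sketch, assuming hypothetical responses and item groupings (the actual instrument and item wording belong to the original study), of how 7-point Likert responses could be aggregated into the two motivation subscales described above:

```python
# Hypothetical sketch: aggregating 7-point Likert responses into subscale
# means, mirroring the item structure described above (12 performance items,
# 3 goal-avoidance items). Data and groupings are illustrative only.
import statistics

# One participant's responses to 15 motivation items (values 1-7).
responses = [6, 5, 7, 4, 6, 5, 6, 7, 5, 6, 4, 5, 2, 3, 2]

performance_items = range(0, 12)  # 12 performance-related items
avoidance_items = range(12, 15)   # 3 performance-goal-avoidance items

performance_mean = statistics.mean(responses[i] for i in performance_items)
avoidance_mean = statistics.mean(responses[i] for i in avoidance_items)

print(f"Performance goals subscale mean: {performance_mean:.2f}")
print(f"Goal-avoidance subscale mean: {avoidance_mean:.2f}")
```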

With regard to Experiment 2, the researchers sought to test whether knowledge of correct response (KCR) feedback in the quizzes and learning game could improve the learning and entertainment conditions. A total of 44 undergraduates participated in Experiment 2. The test administration and protocols were the same as in Experiment 1, as were the independent and dependent variables. Again, both experiments were well designed, in my opinion. Even so, the study faces threats to external and internal validity; in my opinion, the sample was too small and narrow to be generalizable.

Statement of Critique of Replicability

Erhel and Jamet (2013) stated, “it would be well worth replicating our study with more immersive and interactive material…until they thought they had achieved sufficient learning outcomes” (p. 165). There were multiple areas of replicability in the study. For instance, the question remained open as to whether DGBL invokes paraphrasing or memorization, and as to why there was fear of failure in Experiment 1 but not in Experiment 2. In addition, the question remained open as to whether other interventions like ASTRA could invoke cognitive learning and processing. The study could also be replicated with adolescents as well as adult learners past the age of 30 or so. Likewise, this study was rich in future research questions and potential for replication.

Statement of Critique of the Data Analysis and Results

Erhel and Jamet (2013) analyzed the learning conditions using analysis of variance (ANOVA), Levene’s test, the nonparametric Mann-Whitney test, mean scores, and standard deviations. The data analysis matched the research design, which combined a quantitative experiment with a survey questionnaire.
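To illustrate this analytic toolkit, here is a minimal sketch using invented comprehension scores for two instruction groups (not the study’s data), showing how an ANOVA, Levene’s test, and a Mann-Whitney test might be run in practice:

```python
# Illustrative sketch with invented data: comparing two instruction groups
# using the tests named above. Group sizes loosely mirror Experiment 1
# (46 participants); none of this is the study's actual data or code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
learning = rng.normal(loc=7.5, scale=1.5, size=23)       # hypothetical scores
entertainment = rng.normal(loc=6.5, scale=1.5, size=23)  # hypothetical scores

# Levene's test checks homogeneity of variances before running the ANOVA.
lev_stat, lev_p = stats.levene(learning, entertainment)

# One-way ANOVA (with two groups, equivalent to an independent-samples t test).
f_stat, anova_p = stats.f_oneway(learning, entertainment)

# Nonparametric alternative if variance or normality assumptions fail.
u_stat, mw_p = stats.mannwhitneyu(learning, entertainment)

print(f"Levene: W={lev_stat:.2f}, p={lev_p:.3f}")
print(f"ANOVA:  F={f_stat:.2f}, p={anova_p:.3f}")
print(f"Mann-Whitney: U={u_stat:.2f}, p={mw_p:.3f}")
```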

In Experiment 1, the researchers allowed pretest participants to score up to 6 points; only one pretest participant was excluded from the study. On the pretest, the ANOVA did not reveal any significant differences between the groups. Levene’s test on the recall quizzes showed no significant differences between the groups. On the knowledge questionnaire, Levene’s test showed significance on paraphrase-type questions and no significant differences on inference-type questions. On the motivation questionnaire, the ANOVA showed no significant differences among the groups.

In Experiment 2, the researchers excluded four participants because of prior expertise. Pretest scores in Experiment 2 failed to show any significant differences among the groups. Levene’s test on the recall quiz scores showed the variances to be equal among the groups, with no significant differences. In addition, the researchers reported no problems of homogeneity among the knowledge questionnaire scores.

Erhel and Jamet (2013) could not back up some significant claims with quality evidence. In fact, some of the claims ran in direct opposition to expectation. For example, the researchers stated, “contrary to our expectations, type of instruction had no significant effect on paraphrase type questions” (p. 161). In other words, the researchers failed to observe the expected effects of instruction and motivation in DGBL. The researchers, however, did find the opposite result on inference-type questions. The researchers stated, “contrary to our expectations, the participants ratings of the performance goal avoidance did not differ significantly between the learning instructions” (p. 161). Erhel and Jamet (2013) thus failed to find an effect on the motivation items, which was also a flaw in the quality of evidence. With these flaws in the quality of evidence, the study faces threats to external validity and generalizability.

Implications of Results

Statement of Critique of the Limitations of the Study

Erhel and Jamet (2013) articulated several limitations to the study. The researchers reported and admitted that in the ASTRA game environment, “learners had few opportunities to interact with the material apart from selecting the right behavior to adopt toward an elderly person with one of the diseases and completing the quizzes” (p. 165). Secondly, the researchers reported that “learners seldom received feedback correcting their comprehension errors” (p. 165). Thirdly, according to the researchers, the methodology of choice was questionable; the researchers admitted that obtaining online questions as a methodology was not the best choice. Fourth, the small sample was a limitation. Fifth, the choice of ASTRA as the intervention was also a limitation. Subsequently, the study was not generalizable or transferable to other study groups, and it had limitations with external and internal validity. On a good note, the study was positive on construct validity. Also, according to the data analysis, the study was positive in identifying the dependent variables (comprehension and motivation measures) and assessing them against the manipulated independent variable (type of instruction).

Statement of Critique of the Authors’ Conclusion

Erhel and Jamet (2013) were all over the map in terms of a definitive and arguable conclusion. For instance, the researchers uncovered minimal effects in the motivation construct. Additionally, their findings stood in direct opposition to the DGBL motivational effects reported by other scholars; that is, other seminal work suggested that DGBL influences intrinsic motivation.

Erhel and Jamet (2013) end section 3.3 with a “Discussion and Conclusion,” yet they end the overall study with only a General Discussion. The researchers end the study paradoxically, in my opinion. Conceptually speaking, I argue, a conclusion is essential in scholarly research. A conclusion connects all the methodological concepts together in unison. For example, a conclusion discusses the title and how it connects to the references, reference data, and tables and figures. A conclusion discusses the abstract and how it connects to the purpose of the study and the research questions. A conclusion, for instance, ends with reference to the protocol: a summary of the introduction, a summary of the literature review, a summary of how the study fits into the body of scholarly knowledge, and a rationale as to why the study is useful to the field. Finally, a conclusion articulates the results of the study and formulates an analysis of the data, an analysis of the arguments, and the quality of evidence that connects to the results. In summary, all the constructs and variables were noted throughout the study but scattered unsystematically. As a result, a replication of the study could mop up the paradoxes and lead to better conclusions.

Statement of Critique of Future Practice

Erhel and Jamet (2013) reported on the practices of DGBL in terms of learning, benefits, and motivation. For instance, the researchers admitted that the present study opens several avenues for future research in DGBL in terms of learning and motivation. The researchers also highlighted contradictory findings across both experiments. For example, neither experiment revealed any effect of instruction on the paraphrase questions in terms of memorization. Perhaps future research could reveal effects on the paraphrase questions.

Erhel and Jamet (2013), according to the literature review, highlighted contradictory outcomes in terms of DGBL being a serious game. For instance, the researchers articulated that “the results of studies [comparing] serious game environments (SGEs) with conventional media are still highly contradictory” (p. 157). The researchers also highlighted that “the learning instruction appeared to have generated a greater fear of failure than the entertainment instruction did” (p. 163). The fear-of-failure concern could arise in other serious games as well, so further research could counteract or remedy this area of concern. The researchers also reported flaws in the outcomes. For instance, “contrary to our expectations, the participants ratings of the performance goal avoidance items [did not] differ significantly between the learning instruction” (p. 165). In addition, the researchers “failed to observe any effect of instruction memorization quality” (p. 165). Therefore, future practice and replication could resolve the concerns and issues above.



Berliner, D. C. (2002). Educational research: The hardest science of all. Educational Researcher, 31(8), 18-20.

Erhel, S., & Jamet, E. (2013). Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness. Computers & Education, 67, 156-167.

Hung, C. M., Hwang, G. J., & Huang, I. (2012). A Project-based digital storytelling approach to improving students’ learning motivation, problem-solving competence and learning achievement. Educational Technology & Society, 15(4), 368–379.

Kay, R. H. (2012). Exploring the use of video podcasts in education: A comprehensive review of the literature. Computers in Human Behavior, 28, 820-831.

Schwartz, D. L., & Hartman, K. (2007). It is not television anymore: Designing digital video for learning and assessment. In R. Goldman, R. Pea, B. Barron, & S. J. Derry (Eds.), Video research in the learning sciences (pp. 349-366). Mahwah, NJ: Lawrence Erlbaum Associates.

Spiro, R. J., & DeSchryver, M. (2009). Constructivism: When it’s the wrong idea and when it’s the only idea. In S. Tobias & T. Duffy (Eds.), Constructivist theory applied to instruction: Success or failure. Mahwah, NJ: Lawrence Erlbaum.

Yadav, A., Phillips, M. M., Lundeberg, M. A., Koehler, M. J., Hilden, K. H., & Dirkin, K. H. (2011). If a picture is worth a thousand words is video worth a million? Differences in affective and cognitive processing of video and text cases. Journal of Computing in Higher Education, 23(1), 15-37.




Rogers-Shaw, Carr-Chellman, and Choi (2018) undertake a qualitative study on Universal Design for Learning (UDL). UDL is an adult learning architecture that focuses on collaboration, inclusion, and accessibility in online and distance classroom environments. The key words of the study are UDL, accessibility, online learning, epistemology, and adult learners. The researchers assert that UDL is a holistic approach that offers online and distance education to all adult learners, including learners of lower socioeconomic status, students with cultural or language barriers, and students with disabilities. No learner is left behind in UDL. As such, UDL focuses on the importance of learning in terms of “learner’s needs and desires through the inclusion of real-life tasks and an understanding of the importance of flexibility” (p. 21). UDL allows all learners a forum for various ways of approaching learning, knowledge acquisition, and self-regulation through collaboration and communication in an online and distance learning environment. UDL is a teaching-learning strategy that engages and motivates all teachers and learners to seek deep learning and to enhance cognitive skills through social and communicative interaction. The researchers also relate UDL to such topics as justice, epistemology, and practice. With justice, the researchers associate UDL with Section 504 of the Rehabilitation Act. With epistemology, UDL makes the learning experience more holistic and learner-centered. Finally, with practice, UDL helps to recognize and associate patterns with expression and engagement.


In evaluating this study, I think it was a useful source of information because the researchers’ focus was on using UDL as an inclusive methodology to represent, express, and engage all adult learners pursuing higher education in online or distance education. UDL does not discriminate based on age, sex, creed, color, or religion; it provides an online learning platform for all adult learners. UDL, in general, could help researchers overcome barriers of discrimination in adult learning through flexibility and the practice of different ways of online learning, such as audio, oral, graphic, or visual means. The overall goal of this source was to include all adult learners and to motivate them to pursue continuous, lifelong learning in a knowledge economy via online and distance learning.


Even though the study lacks a methodology or research design, Rogers-Shaw et al. (2018) did a scholarly job of applying UDL to a course description and demonstrating the effects of collaboration and communication in a teaching-learning transaction. This study is highly reflective because it leaves no learner behind: it views scholarship through the lens of learners of lower socioeconomic status and learners with disabilities, and it treats the curriculum as a one-size-fits-all design for all learners. The study would be replicable if it included participants in an experimental design-based study with learners from all walks of life. For example, a researcher could set up an experimental design-based study integrating students from academia, students of lower socioeconomic status, and students with disabilities, as sketched below. The independent variable would be the type of instruction, and the dependent variables would be the learning effects. The data analysis would compare the learning benefits both collectively and individually. The optimal outcome would be that each student, given a fair chance at critical literature and perspective, would achieve the same learning outcome regardless of age, creed, color, or race.
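As a concrete illustration of the proposed design, here is a minimal sketch, with hypothetical participant strata, identifiers, and condition labels (none of which come from the study), of how stratified random assignment to instruction conditions could be set up so every background is represented in each group:

```python
# Hypothetical sketch: stratified random assignment for the proposed
# replication. Strata, participant IDs, and condition names are invented.
import random

random.seed(7)  # fixed seed so the illustrative assignment is repeatable

strata = {
    "academia": [f"A{i}" for i in range(1, 13)],
    "lower_ses": [f"L{i}" for i in range(1, 13)],
    "disability": [f"D{i}" for i in range(1, 13)],
}
conditions = ["udl_instruction", "conventional_instruction"]

assignment = {condition: [] for condition in conditions}
for stratum, participants in strata.items():
    shuffled = participants[:]
    random.shuffle(shuffled)
    # Alternate assignment within each stratum so both conditions
    # receive an equal share of every background.
    for i, participant in enumerate(shuffled):
        assignment[conditions[i % len(conditions)]].append((stratum, participant))

for condition, members in assignment.items():
    print(condition, "->", len(members), "participants")
```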

Rogers-Shaw, C., Carr-Chellman, D., & Choi, J. (2018). Universal design for learning: Guidelines for accessible online instruction. Adult Learning, 29(1), 20-31.



Forest and Kimmel (2016) undertake a qualitative study of critical literacy performances of learners in an online, asynchronous graduate-level children’s literature course. The research methodology is content analysis. The aim of the study is twofold: (1) to evaluate a graduate-level children’s literature course through the lens of critical thinking and perspective, and (2) to embed critical literacy and perspective in a children’s literature course and examine the behavioral and learning effects of discussing children’s books in an online, asynchronous environment. The research question is “In what ways do graduate students in an online, asynchronous children’s literature course engage in critical literacy during discussion of children’s books?” (p. 284). The key words in the study are critical literacy, online teaching, book discussion, and children’s literature.

Forest and Kimmel (2016) argue that “library educators must take responsibility for incorporating critical literacy practices in the preparation and continuing education of school and public librarians” (p. 283). The researchers describe critical literacy in terms of social interaction, social collaboration, and accessibility in an online, asynchronous environment. They postulate that critical literacy offers students and learners multitudinous options for knowledge acquisition and knowledge transfer in such an environment. In the Forest and Kimmel (2016) study, students were graded on their participation in an online, asynchronous graduate-level children’s literature course. The grading format of the course kept the students engaged and motivated throughout the study.


Forest and Kimmel (2016) presented a useful study through the lens of critical literacy and perspective in an online, asynchronous graduate-level children’s literature course. In analyzing and evaluating the methods, the researchers did a phenomenal job with the research design, adding empirical data to the body of scholarly knowledge. The researchers selected 40 graduate-level students enrolled in a children’s literature course during the summer of 2014. In groups of 4 to 5, the students analyzed the content of four children’s books and then discussed the books in an online, asynchronous chat room. Each student was given alternating roles for discussing their content knowledge and knowledge acquisition from the children’s books throughout the semester, and each was graded on their knowledge transfer and information sharing about the content of each book. The research questions were modeled on a similar study that found learning effects in critical literacy and perspective. In the data analysis, the researchers analyzed 36 transcripts from the chat room and coded the data into five phases of content analysis. In the outcomes, students highlighted key variables such as gender, social class, race, stereotypes, politics, and justice. The study was reliable and transferable to the population of students studying library science in such a course. The researchers demonstrated that critical literacy and perspective can foster deep thinking and social interaction in an online, asynchronous graduate-level children’s literature course.
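To make the coding step more concrete, the following is a minimal sketch, with invented transcripts and code labels (the study’s actual five-phase scheme and categories are its own), of how coded discussion transcripts could be tallied across categories:

```python
# Hypothetical sketch: tallying content-analysis codes across discussion
# transcripts. Transcripts and code labels are invented for illustration.
from collections import Counter

# Each transcript is represented by the list of codes assigned to it.
coded_transcripts = [
    ["gender", "social_class", "stereotypes"],
    ["race", "politics", "justice", "gender"],
    ["social_class", "stereotypes"],
    # ...one entry per analyzed transcript (the study analyzed 36)
]

code_counts = Counter(code
                      for transcript in coded_transcripts
                      for code in transcript)

for code, count in code_counts.most_common():
    print(f"{code}: coded {count} time(s)")
```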


Forest and Kimmel’s (2016) work on critical literacy and perspective is scholarly and well written for the library sciences in an online, asynchronous graduate-level children’s literature course. In fact, critical literacy may also be applied to the other sciences in online, asynchronous undergraduate, graduate, and postgraduate environments. Perhaps critical literacy could even be applied to elementary and secondary education in an online, asynchronous course using the same format. What is more, the constant collaboration, communication, and interaction around critical literature and perspective is a model platform for online, asynchronous learning environments.

I find the Forest and Kimmel (2016) study very relevant to future research because of the interaction and role playing among the participants in an online, asynchronous graduate-level course. For instance, critical literature and perspective could be applied to curricula in educational technology or leadership in an online, asynchronous course. For example, the instructor could assign peer-reviewed, scholarly articles to students, have the students play different roles, and examine the learning effects in an online, asynchronous graduate-level course.


Forest, D., & Kimmel, S. (2016). Critical literacy performances in online literature discussions. Journal of Education for Library & Information Science, 57(4), 283-294.



Nelson et al. (2009) conducted research on how teachers and students could deploy Web 2.0 and technological, pedagogical, and content knowledge (TPACK) for learning in the classroom environment. In educational technology and research, the researchers explored how Web 2.0 and the TPACK framework could add to scholarly knowledge. The researchers looked at why integrating Web 2.0 and TPACK would foster students’ engagement in networking and learning from global Internet users. The researchers, finally, made a strong case that real teaching and learning occurred when teachers integrated Web 2.0 and TPACK in the classroom environment.

Nelson et al. (2009) argued that technology standing alone was useless and offered no real learning capabilities. For learning to take place in the classroom, the researchers postulated that Web 2.0 and TPACK must go together. The researchers also introduced e-learning affordances, such as Google Docs, e-books, digital storytelling, and blogs, for deployment and learning in the classroom environment. The researchers, finally, closed with case studies on how, why, and when Web 2.0 and TPACK could be integrated to produce effective outcomes in the classroom environment.


Nelson et al. (2009) articulated and presented an overview of how and why educational technologies, such as blogs, Google Docs, digital storytelling, and e-books, could foster collaboration in the classroom environment and could build on the TPACK framework. The researchers claimed that these educational technologies, coupled with TPACK, would provide “students opportunities to generate and document classroom content, activities, experience and reflections” (p. 83).

Compared to other scholarly TPACK studies, Nelson et al. (2009) is in danger because it lacks an abstract, a comprehensive literature review, a research design, a data analysis, and a conclusion. The study, in my opinion, does convey the affordances of content knowledge but lacks the affordances of pedagogical knowledge and technological knowledge. As such, this research paper is anecdotal and informational at best; it is not generalizable or reliable in terms of external validity. On a good note, the study is replicable and could add to the body of scholarly knowledge with an appropriate research design or method.


Nelson et al. (2009) builds on the Mishra and Koehler (2006) TPACK framework for theoretical grounding. The researchers attempted to capture, and then adapt, the essential TPACK qualities in teachers to enhance learning in a classroom environment. Since the study lacks a methodological approach, I do not find it helpful in my area of research. However, I did find the TPACK framework to be of the utmost interest in terms of my area of research in educational technology. TPACK, finally, is a game changer in terms of educational technology and research.

Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017-1054.

Nelson, J., Christopher, A., & Mims, C. (2009). TPACK and Web 2.0: Transformation of teaching and learning. TechTrends: Linking Research & Practice to Improve Learning, 53(5), 80-85.