Problem Statement

Statement of Problem

Erhel and Jamet (2013) undertook a quantitative study, titled "Digital Game-Based Learning: Impact of Instructions and Feedback on Motivation and Learning Effectiveness," that combined experiments with a survey questionnaire. The researchers explored how educational games could enhance learners' cognitive capabilities, focusing on digital game-based learning (DGBL) as an affordance, and measured the effects of DGBL under learning and entertainment instruction conditions. As such, the researchers sought to establish how DGBL could be effectively and cognitively deployed under learning, motivating, and instructional conditions. In the literature review, other DGBL and serious-game researchers agreed that digital learning games have what it takes to become an effective learning medium (Hung et al., 2012; Kay, 2012; Yadav et al., 2011). In general, researchers could conduct more qualitative or quantitative studies to pinpoint how deep learning takes place in a digital learning game. It is no secret that K-12 and adult learners spend many hours playing digital games for entertainment. Two good research questions could be as follows: 1) How could DGBL be used as a deep learning method while also fostering intrinsic motivation? 2) How can DGBL be transformed from serious game play during leisure time into deep learning with motivational effects? In both research questions, motivation would be the exogenous, controlling variable.

Additionally, Erhel and Jamet (2013) reported on problems across the DGBL and serious-game literature, contending that other researchers had studied and demonstrated the effects of DGBL unsystematically in terms of learning and motivational outcomes. Erhel and Jamet (2013) argued that DGBL's "benefits have never been systematically demonstrated" (p. 156). The researchers therefore presented a defensible case for undertaking DGBL research and exploring the effects of deep learning and motivation in DGBL systematically. They sought to examine the following variables: learning instruction, entertainment instruction, performance goals, mastery goals, and intrinsic motivation.

Statement of Needs

Erhel and Jamet (2013) postulated that DGBL may yield cognitive benefits under both learning and entertainment conditions. The researchers argued, however, that DGBL needs a systematic review of the affordances it offers learners, and that DGBL could be a digital medium that brings learning benefits with motivational effects to a learning environment. Consequently, the researchers conducted a comprehensive review of the literature and cited the benefits and effects that DGBL may offer learners in terms of motivation and engagement. The review revealed both pros and cons of DGBL as a learning medium (Berliner, 2002; Hung et al., 2012; Kay et al., 2011). For instance, one scholar argued that DGBL was more beneficial than traditional or conventional classroom environments. Other scholars, on the contrary, highlighted drawbacks of DGBL in a learning environment, and still others reported that DGBL was a weak digital medium with minimal motivational benefits. One scholar questioned the benefits of DGBL outright, arguing that it "imposes considerable constraints that make it extremely difficult to integrate deep content, strategies, and skills" (Erhel & Jamet, 2013, p. 157). In short, the literature presented a myriad of pros and cons regarding DGBL's learning effects and benefits. Noticeably, the literature review did not compare DGBL's learning effects and benefits to those of other digital media, such as video games or digital storytelling. Thus, a good research question could be: how does DGBL compare to other digital media in terms of learning effects and benefits?

Therefore, as noted above, studies of DGBL were fragmented across the literature and posed many confounding factors in terms of cognitive lessons and strategies. As a result, Erhel and Jamet (2013) conducted a systematic review and quantitative study of DGBL's effects on cognitive learning and processing. The researchers claimed that "no one has so far subjected the [DGBL] games' instructions to scientific scrutiny…even though they are a fundamental feature of DGBL" (Erhel & Jamet, 2013, p. 158). This further justified the need to undertake the study in terms of scientific scrutiny of DGBL as a serious-game medium.

Statement of Researchability

Erhel and Jamet's (2013) literature review revealed many confounding factors and contradictory arguments about the pros and cons of DGBL. DGBL also adds to the long-standing debate over whether technologies make us smarter (see Clark, 1983, 1994; Kozma, 1994). DGBL, in terms of deep learning benefits, is a researchable topic on its own merit, and further research would add to the body of scholarly knowledge on cognitive benefits. The literature on DGBL warrants additional qualitative and quantitative research in educational technology, especially on learning benefits within serious game environments (SGEs). Design-based DGBL studies would also add significantly to the body of scholarly knowledge, providing empirical data and quality evidence on the effects of DGBL on deep cognitive learning. Therefore, DGBL is a researchable topic whose study would benefit educational technology and research.

Literature Review

Statement of Conceptual Framework

Erhel and Jamet (2013) did not present a conceptual framework in a single place; in fact, the researchers never mention the term conceptual framework. Locating it, in my opinion, was like assembling a complex jigsaw puzzle with scattered or missing pieces. The researchers did, however, connect the title of the study and the context of the abstract to the body of the study, the general discussion, and the results. Notably, Table 1 (the description of motivational components) comprised the dependent and independent variables and measurements that would constitute a conceptual framework. Table 1, for instance, listed the protocols and dependent variables (performance goals, mastery goals, and intrinsic motivation) that were assessed against the independent variable (learning vs. entertainment instruction). In my opinion, Table 1 could be renamed from Description of Components to Conceptual Framework.

Erhel and Jamet (2013) "sought to identify the conditions under which DGBL is most effective, by analyzing the effects of two different types of instructions (learning instruction vs. entertainment instruction)" (p. 157). As the study unfolded, the researchers stated that the "only variation would be [the] instructions given for a digital learning game called ASTRA" (p. 158). ASTRA, the intervention, was given to adults aged 18 to 26, not adolescents. With a defined conceptual framework, future studies of DGBL using the ASTRA intervention could be extended to other segments of the population, such as adolescents, older adult learners, learners of lower socioeconomic status, and learners with disabilities.

Statement of Relevance to Theory

Erhel and Jamet (2013), like other scholars, argued that developing educational technology and theory is difficult because it requires a comprehensive understanding of complicated variations that are technologically, contextually, methodologically, and pedagogically interconnected (see Mishra & Koehler, 2013). Some scholars have even argued that educational research is theoretically the hardest science (see Berliner, 2002). As such, Erhel and Jamet (2013) categorized DGBL as an educational technology with learning and motivational benefits in a learning and instructional environment. Other scholars noted in the literature review saw DGBL as relevant to theory, pedagogy, methodology, and practice. It is arguable that DGBL could fit into the constructivist debate over learning and instruction.

According to the literature review, scholars have considered DGBL relevant to flow theory, cognitive theory, and cognitive load theory. For instance, scholars of DGBL "have looked at the relevance of flow theory [that is] the immediate subjective experience that occurs when an individual engages in an activity" (Erhel & Jamet, 2013, p. 157). Also relevant is cognitive theory, which distinguishes "rote learning (i.e., surface learning) and meaningful learning (i.e., deep learning)" (Erhel & Jamet, 2013, p. 158) in cognitive processing. A third relevant theory is cognitive load theory, concerning the "effort engaged by learners in information processing…a key component of learning performance" (Erhel & Jamet, 2013, p. 164). In addition, DGBL has relevance to theory in terms of intrinsic motivation, effects on pedagogy and andragogy, and future research methodologies. In summary, DGBL could fit into many cognitive and learning theories and theoretical perspectives.

Statement of Relevance to References and Conclusion

Erhel and Jamet (2013) selected their scholarly sources carefully. After evaluating the reference section, I noted that the references were scholarly and similar in content and context, as were the journals, books, and other materials. The references aligned logically with the conclusion, and the researchers' aims aligned logically with the research question. For example, the aim of experiment 1 was "to ascertain whether the effects of instructions…would also manifest themselves during DGBL" (p. 158), and the aim of experiment 2 was "to determine the presence of KCR feedback in DGBL…learning strategies induced by instruction" (p. 162). The overarching research question was "is deep learning compatible with serious games" (p. 165), and it aligned with the aim of the study. That is, in both experiments the researchers showed that the effects of instruction and the KCR intervention produced deep learning with DGBL and serious games.

Erhel and Jamet (2013) connected the references to the conclusion through a comprehensive literature review coupled with quantitative experiments and a survey questionnaire measured on a 7-point Likert scale. In the data analysis, the researchers backed their claims with evidence of the learning and motivational effects of DGBL, and in the findings they addressed DGBL in terms of the learning and entertainment conditions. Even though the study connected the references to the conclusions, in my opinion it was not generalizable or transferable to the general population: the sample was too narrow, drawn from only a very small segment of the population, namely undergraduate students. Further research would be required to connect the references to the conclusion with generalizability and transferability.

Statement of Relevance to Problem Investigated

On a good note, Erhel and Jamet (2013) addressed the problems of the study throughout the general discussion. For example, across the two experiments, the researchers posited that learners in experiment 2 did much better with the combination of knowledge of correct response (KCR) feedback and learning instructions. The KCR feedback intervention in experiment 2 elicited deeper cognitive processing than in experiment 1, where KCR feedback was not provided. Erhel and Jamet (2013) therefore argued that participants given KCR feedback in DGBL in experiment 2 performed much better on reading comprehension than those in experiment 1, who received none. Finally, Erhel and Jamet (2013) related KCR feedback to enhanced self-regulated cognitive learning and self-efficacy.

In experiment 1, the researchers posited that learners given the learning instruction did much better on comprehension through deeper cognitive processing. The researchers, however, noted a problem with the paraphrase-type questions that did not arise with the inference-type questions; with regard to the inference questions, the ASTRA environment elicited adverse responses. This dichotomy, according to the researchers, would have to be elucidated in future studies, as would a second dichotomy: fear of failure appeared in experiment 1 but not in experiment 2. In addition, the researchers highlighted three problems with the experiments. The first was the ASTRA intervention, which afforded minimal interaction with the participants. The second was the high scores on the tests, which could be construed as research bias. The third was the choice of methodology, which raised external and internal validity issues as well as generalizability issues.

Statement of Critique Research Questions and Hypotheses

Erhel and Jamet (2013) did not present the research questions or hypotheses in one place; in my opinion, they were implied and scattered throughout the study. As stated above, a reader must dig deep into the study to identify them, which takes considerable intellectual energy. That effort could be extraneous, because learners must assess a great deal of research in the media, and a reader could easily overlook a study such as this one when its research questions and hypotheses are not clearly articulated in one place.

Beyond the paraphrase-type and inference-type questions, Erhel and Jamet (2013) clearly articulated that "one of the objectives of the present study was to answer the question 'Is deep learning compatible with serious games'" (p. 164). The researchers did a good job of addressing this research question, and the results were clearly reported in the data analysis. The hypotheses, again, were not stated in one place but were scattered throughout the paper and implied. For instance, Erhel and Jamet (2013) stated, "the ANOVA showed that the participants in the learning instruction condition performed significantly better than those in the entertainment instruction condition" (p. 163); this was the first hypothesis. Arguably, the second hypothesis was implied when participants in the entertainment instruction group performed better on comprehension than those in the learning instruction group; this was the thrust of experiment 1. The third hypothesis was noted when participants given the entertainment instruction performed significantly more poorly on comprehension than those given the learning instruction. The fourth was that, in the presence of KCR feedback, learners in the entertainment instruction condition performed significantly better on comprehension than those in the learning instruction condition. The fifth was that KCR feedback prompted participants to process the DGBL content more deeply. Finally, the last hypothesis, implied in my opinion, was that neither experiment would reveal any effect of instruction on responses to the paraphrase-type questions, even though deep learning results in better memory storage. In summary, the researchers stated that "future research is needed to test [all hypotheses]" (p. 165). As stated earlier, and critically speaking, the research questions and hypotheses should have been clearly stated in one place.

Research Design and Data Analysis

Statement of Critique Methods and Research Design

Erhel and Jamet (2013) did a phenomenal job with the research design and survey questionnaire. The researchers undertook a quantitative study, and their methodological approach used design-based experiments and a survey questionnaire. They ran two different experiments to measure the effects of cognitive information and cognitive processing in DGBL, analyzing two types of conditions: the learning condition and the entertainment condition. With this approach, the researchers adopted the "value-added perspective," meaning they incorporated ASTRA as an intervention. ASTRA, a digital learning game, had already been shown effective relative to other digital video games.

In experiment 1, a total of 46 undergraduate students participated in a 35- to 40-minute experiment. Instruction was the manipulated independent variable, with two levels: the learning condition and the entertainment condition. The participants were introduced to four different diseases, each presented in five parts. The experiment took place in six different rooms, each equipped with a computer, and consisted of six phases. The first phase was a pretest under the assigned condition; in the second, participants put on headphones and followed the ASTRA commands. The third phase was a 15-question test about the diseases: 12 questions were performance related and 3 were performance-goal-avoidance related. A 7-point Likert scale measured level of agreement. After completing the disease test, the participants answered an additional 8 questions comprising paraphrase-type and inference-type questions.

In experiment 2, the researchers strove to show that a knowledge of correct response (KCR) intervention in the quizzes and learning game could improve learning under the learning or entertainment condition. A total of 44 undergraduates participated. The test administration and protocols were the same as in experiment 1, as were the independent and dependent variables. Both experiments were well designed, in my opinion; even so, the study faces threats to external and internal validity. In my opinion, the sample was too small and narrow to be generalizable.

Statement of Critique Replicability

Erhel and Jamet (2013) stated, "it would be well worth replicating our study with more immersive and interactive material…until they thought they had achieved sufficient learning outcomes" (p. 165). There were multiple avenues for replication. For instance, the question remains open whether DGBL invokes paraphrasing or memorization, and why there was fear of failure in experiment 1 but not in experiment 2. In addition, it remains open whether interventions other than ASTRA could invoke cognitive learning and processing. The study could also be replicated with adolescents, as well as with adult learners past the age of 30 or so. Likewise, this study was rich in future research questions and potential for replication.

Statement of Critique Data Analysis and Results

Erhel and Jamet (2013) analyzed the learning conditions using analysis of variance (ANOVA), Levene's test, the nonparametric Mann-Whitney test, mean scores, and standard deviations. The data analysis matched the research design, which was quantitative with a survey questionnaire.
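The analysis pipeline described above can be sketched in a few lines of SciPy. The group scores below are invented placeholders, not Erhel and Jamet's data, and the variable names are my own; this is only an illustration of how the named tests fit together, under the assumption of two instruction groups.

```python
# Hypothetical sketch of the reported statistical pipeline using SciPy.
# The scores are randomly generated placeholders, NOT the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
learning = rng.normal(5.5, 1.0, 23)       # hypothetical learning-condition scores
entertainment = rng.normal(4.8, 1.0, 23)  # hypothetical entertainment-condition scores

# Descriptive statistics: mean and standard deviation per condition.
desc = {name: (g.mean(), g.std(ddof=1))
        for name, g in [("learning", learning), ("entertainment", entertainment)]}

# Levene's test checks homogeneity of variance between the two groups.
lev_stat, lev_p = stats.levene(learning, entertainment)

# One-way ANOVA compares the group means (with two groups it is
# equivalent to an independent-samples t test).
f_stat, anova_p = stats.f_oneway(learning, entertainment)

# Nonparametric Mann-Whitney test as a fallback when variance or
# normality assumptions are doubtful.
u_stat, mw_p = stats.mannwhitneyu(learning, entertainment)

for name, (m, sd) in desc.items():
    print(f"{name}: M = {m:.2f}, SD = {sd:.2f}")
print(f"Levene p = {lev_p:.3f}, ANOVA p = {anova_p:.3f}, Mann-Whitney p = {mw_p:.3f}")
```

In practice, Levene's result would guide the choice between the ANOVA and the Mann-Whitney test for each score, which mirrors the combination of tests the researchers report.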

In experiment 1, the researchers allowed pretest participants to score up to 6 points; only one participant was excluded from the study. On the pretest, the ANOVA did not reveal any significant differences between the groups. On the recall quizzes, Levene's test showed no significant differences between the groups. On the knowledge questionnaire, Levene's test showed significance on the paraphrase-type questions and no significant differences on the inference-type questions. On the motivation questionnaire, the ANOVA showed no significant differences among the groups.

In experiment 2, the researchers excluded four participants because of prior expertise. Pretest scores failed to show any significant differences among the groups. On the recall quiz scores, Levene's test found the variances equal among the groups and not significant. In addition, the researchers reported no problems of homogeneity among the knowledge questionnaire scores.

Erhel and Jamet (2013) could not back up some significant claims with quality evidence; in fact, some results ran directly counter to expectation. For example, the researchers stated, "contrary to our expectations, type of instruction had no significant effect on paraphrase type questions" (p. 161). In other words, the researchers failed to observe the expected effects of instruction and motivation in DGBL, although they did find the opposite for inference-type questions. The researchers also stated, "contrary to our expectations, the participants ratings of the performance goal avoidance did not differ significantly between the learning instructions" (p. 161), and they failed to find an effect on the motivation items. These were flaws in the quality of evidence, and with them the report faces threats to external validity and generalization.

Implication of Results

Statement of Critique Limitations of Study

Erhel and Jamet (2013) articulated the main limitations of the study. First, the researchers admitted that in the ASTRA game environment, "learners had…few opportunities to interact with the material apart from selecting the right behavior to adopt toward an elderly person with one of the diseases and completing the quizzes" (p. 165). Second, they reported that "learners seldom received feedback correcting their comprehension errors" (p. 165). Third, the methodology of choice was questionable; the researchers admitted that administering the questions online was not the best choice. Fourth, the small sample was a limitation, and fifth, the choice of ASTRA as the intervention was as well. Consequently, the study was not generalizable or transferable to other study groups, and it had limitations regarding external and internal validity. On a good note, the study was strong on construct validity. Also, according to the data analysis, the study clearly identified the dependent variables and assessed them against the manipulated independent variable (learning vs. entertainment instruction).

Statement of Critique Author’s Conclusion

Erhel and Jamet (2013) were all over the map in terms of a definitive, arguable conclusion. For instance, the researchers uncovered minimal effects on the motivation construct, placing them in direct opposition to the DGBL motivational effects reported by other scholars; that is, other seminal work suggested that DGBL influences intrinsic motivation.

Erhel and Jamet (2013) end section 3.3 with a "Discussion and Conclusion," yet end the overall study with only a General Discussion. In my opinion, this is a paradoxical way to end the study. Conceptually speaking, I argue that a conclusion is essential in scholarly research: it connects all the methodological concepts in unison. A conclusion discusses how the title connects to the references, the reference data, and the tables and figures; how the abstract connects to the purpose of the study and the research questions; and it ends with reference to the protocol, that is, a summary of the introduction, a summary of the literature review, a summary of how the study fits into the body of scholarly knowledge, and a rationale for why the study is useful to the field. Finally, a conclusion articulates the results of the study and formulates an analysis of the data, the arguments, and the quality of evidence connecting to the results. In summary, all the constructs and variables were noted throughout the study but scattered unsystematically. As a result, a replication of the study could mop up the paradoxes and lead to better conclusions.

Statement of Critique Future Practice

Erhel and Jamet (2013) reported on the practice of DGBL in terms of learning, benefits, and motivation. The researchers admitted that the present study opens several avenues for future research on DGBL learning and motivation, and they highlighted contradictory results across both experiments. For example, neither experiment revealed any effect of instruction on the paraphrase-type questions in terms of memorization; perhaps future research could reveal such effects.

Erhel and Jamet (2013), according to the literature review, highlighted contradictory outcomes regarding DGBL as a serious game. For instance, the researchers articulated that "the results of studies comparing serious game environments (SGEs) with conventional media are still highly contradictory" (p. 157). The researchers also highlighted that "the learning instruction appeared to have generated a greater fear of failure than the entertainment instruction did" (p. 163). Fear of failure could be a concern in other serious games as well, so further research could counteract or remedy this area of concern. The researchers also reported flaws in their results: for instance, "contrary to our expectations, the participants ratings of the performance goal avoidance items differ significantly between the learning instruction" (p. 165), and the researchers "failed to observe any effect of instruction memorization quality" (p. 165). Therefore, future practice and replication could resolve the concerns and issues above.



References

Berliner, D. C. (2002). Educational research: The hardest science of all. Educational Researcher, 31(8), 18-20.

Erhel, S., & Jamet, E. (2013). Digital game-based learning: Impact of instructions and feedback on motivation and learning effectiveness. Computers & Education, 67, 156-167.

Hung, C. M., Hwang, G. J., & Huang, I. (2012). A project-based digital storytelling approach to improving students' learning motivation, problem-solving competence and learning achievement. Educational Technology & Society, 15(4), 368-379.

Kay, R. H. (2012). Exploring the use of video podcasts in education: A comprehensive review of the literature. Computers in Human Behavior, 28, 820-831.

Schwartz, D. L., & Hartman, K. (2007). It is not television anymore: Designing digital video for learning and assessment. In R. Goldman, R. Pea, B. Barron, & S. J. Derry (Eds.), Video research in the learning sciences (pp. 349-366). Mahwah, NJ: Lawrence Erlbaum Associates.

Spiro, R. J., & DeSchryver, M. (2009). Constructivism: When it's the wrong idea and when it's the only idea. In S. Tobias & T. Duffy (Eds.), Constructivist instruction: Success or failure? Mahwah, NJ: Lawrence Erlbaum.

Yadav, A., Phillips, M. M., Lundeberg, M. A., Koehler, M. J., Hilden, K. H., & Dirkin, K. H. (2011). If a picture is worth a thousand words is video worth a million? Differences in affective and cognitive processing of video and text cases. Journal of Computing in Higher Education, 23(1), 15-37.

