Enhancing digital competence through story-based learning: a massive open online course (MOOC) approach

Sivakorn Malakul (Department of Technology, Institute for the Promotion of Teaching Science and Technology, Bangkok, Thailand)
Cheeraporn Sangkawetai (Department of Technology, Institute for the Promotion of Teaching Science and Technology, Bangkok, Thailand)

Journal of Research in Innovative Teaching & Learning

ISSN: 2397-7604

Article publication date: 19 November 2024


Abstract

Purpose

This study investigated a story-based learning MOOC’s effectiveness in enhancing digital competence.

Design/methodology/approach

A quasi-experimental design with 5,501 participants enrolled in a developed MOOC course was assessed through pretests, formative assessments and posttests. K-means clustering, using the Self-Efficacy in Digital Competence Scale (SDCS), was employed to classify experimental and control groups and analyze differences in perceived competence across age groups (10s–60s).

Findings

Learners’ digital competence significantly improved (p < 0.001) after the MOOC, demonstrating knowledge and skill gains across various domains. The highest SDCS domain was communication and collaboration, while the lowest was digital content creation. Additionally, the SDCS data showed higher self-efficacy in the 20–40s age group and lower in the 10, 50 and 60s.

Research limitations/implications

The findings suggest a gap in learners’ digital content creation competence. Additional content could be incorporated to bridge this gap. This study supports story-based learning MOOCs for promoting digital competence.

Originality/value

This research contributes to the field by developing and evaluating a MOOC with story-based learning to explore learners’ digital competence and its relationship with age.

Citation

Malakul, S. and Sangkawetai, C. (2024), "Enhancing digital competence through story-based learning: a massive open online course (MOOC) approach", Journal of Research in Innovative Teaching & Learning, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/JRIT-04-2024-0091

Publisher: Emerald Publishing Limited

Copyright © 2024, Sivakorn Malakul and Cheeraporn Sangkawetai

License

Published in Journal of Research in Innovative Teaching & Learning. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Global agencies such as the OECD (2021), UNESCO (2022), and the European Commission (2021) emphasize the growing importance of digital competence. They stress that access to digital resources, literacy skills, and competence are essential for adapting to an increasingly digital society and are fundamental for future workforce readiness. To prepare “digital citizens”, the education sector is encouraged to promote digital competence, extending beyond formal education to lifelong learning (ISTE, 2022).

Traditionally, promoting digital competence has emphasized ICT skills and Digital Literacy (DL) during early development (Røkenes and Krumsvik, 2014; Peters et al., 2022). However, according to the European Commission (2021) and UNESCO (2022), digital competence encompasses a broader range of skills beyond DL, including foundational Computer Science (CS) skills such as systematic thinking, problem solving, and programming. Digital competence also involves the ability to critically evaluate digital content, understand digital privacy and security, and use digital tools for collaboration and communication (Vuorikari et al., 2022). Thus, developing new learning curricula that align with these global agency frameworks and cover CS, ICT, and DL comprehensively could enhance digital citizenship more effectively.

Digital competence promotion can be integrated into school and university education for students and extended to lifelong learning (Basantes-Andrade et al., 2022; Gordillo et al., 2019; Pradubwate et al., 2020; Yelubay et al., 2022). A popular approach to lifelong learning is online education through platforms like MOOCs. MOOCs provide an accessible and flexible learning environment that can accommodate a wide range of learners from various backgrounds and age groups. Since digital skills often involve awareness and problem solving, presenting examples through animated storytelling can vividly illustrate concepts and strengthen learners' understanding (Au et al., 2016). Furthermore, age has been identified as a significant factor influencing how individuals perceive and engage with digital skills, with younger learners typically reporting higher levels of self-perceived digital competence than older learners (Teo et al., 2014).

Story-based learning using animated videos offers several advantages over other teaching methods. It helps clarify complex concepts (Wells, 1986), enhances long-term memory (Banister and Ryan, 2001), increases interest and motivation (Naul and Liu, 2019), promotes problem-solving and critical thinking skills (Mott and Lester, 2006; Rowe et al., 2011), and fosters learning engagement (Smeda et al., 2014). Prior research has also demonstrated the effectiveness of visual-text elements in story-based animated videos, with improved learning comprehension and reduced cognitive load (Malakul and Park, 2023). Story-based learning can create emotional connections with the content, making it more relatable and memorable for learners. Animated videos specifically leverage visual and auditory stimuli to engage multiple learning channels, thereby enhancing cognitive processing and retention (Mayer, 2014).

In addition to promoting digital competence, self-efficacy influences learners' engagement and success in digital learning environments. Self-efficacy, as defined by Bandura (1997), refers to an individual’s belief in their capability to successfully perform a specific task or achieve a desired outcome. High self-efficacy has been shown to enhance learners’ confidence, perseverance, and willingness to tackle challenges (Hatlevik et al., 2014). Therefore, it is important to examine how self-efficacy affects the development of digital competence. Understanding the relationship between these factors can help educators create more effective strategies to support learners in online platforms.

Despite extensive research, a gap remains regarding the impact of story-based learning in MOOCs on enhancing digital skills. To address this gap, this study developed the “Coding for All” (CFA) course, which applies story-based learning within a MOOC platform specifically designed to promote digital competence. This study explores how story-based learning through animated videos could facilitate learners' digital competence and presents both digital competence development and self-efficacy among learners, providing a comprehensive understanding of how these interact and influence each other.

By examining these aspects, the present study contributes valuable insights into the development and implementation of a self-paced learning course that fosters digital competence and self-efficacy. Incorporating frameworks such as UNESCO’s Media and Information Literacy alongside DigComp can ensure a comprehensive approach to digital literacy, addressing all facets of digital competence (UNESCO, 2022; Vuorikari et al., 2022). This study highlights the potential of story-based learning in digital education and emphasizes the importance of integrating self-efficacy considerations into curriculum development to support lifelong learning and digital citizenship.

2. Literature review

2.1 Digital competence and the DigComp framework for citizens

Digital competence, also known as digital literacy or digital skills, refers to the ability to effectively navigate, understand, and use digital technologies for personal, social, and professional purposes. The European Commission operationalized digital competence based on a 2006 Council Recommendation, leading to the development of the DigComp framework (Ferrari et al., 2013). First published in 2013, DigComp serves as a blueprint to enhance citizens' digital competence, support policymakers in formulating relevant policies, and plan targeted education and training initiatives (Vuorikari et al., 2016). The framework encompasses areas such as Information and Data Literacy (ID), Communication and Collaboration (CC), Digital Content Creation (DC), Safety (SF), and Problem Solving (PS) (Vuorikari et al., 2022). Additionally, UNESCO’s Media and Information Literacy framework complements DigComp by emphasizing media and information literacy, enhancing the understanding of the role and functions of media in democratic societies (UNESCO, 2022).

Educational institutions and educators have embraced the DigComp framework to guide the development of digital competence in curricula, teaching methodologies, and assessment practices, ensuring learners' acquisition of essential digital skills and knowledge for success in a digital society (Rolf et al., 2019). The adoption of the DigComp framework has extended beyond European countries to other nations (Yoon, 2022), including Thailand, where it has been implemented as a crucial competency for assessing digital skills (Phuapan et al., 2016) or information technology competence (Siddoo et al., 2017) in educational research.

While the DigComp framework has been widely utilized, there is room for research on its application in developing digital competence through story-based learning. This study addresses this gap by adapting DigComp to assess the impact of story-based learning, thereby enhancing our understanding of effective educational strategies for developing digital skills.

Building on the foundational understanding of digital competence and the DigComp framework, it is essential to explore the role of self-efficacy in digital competence development.

2.2 Self-efficacy in digital competence

According to Bandura (1997), “self-efficacy” refers to an individual’s belief in their capability to successfully perform a specific task or achieve a desired outcome. It is a key component of social cognitive theory, emphasizing the reciprocal interaction between personal factors, environmental influences, and behavior (Bandura, 1986, 1997). Perceived self-efficacy serves as the foundation for developing digital competence (Zhang et al., 2023), as individuals with high self-efficacy exhibit greater confidence and perseverance when facing challenges (Hatlevik et al., 2014).

Previous studies have evaluated self-perceived digital competence in ICT and digital device utilization (Aesaert et al., 2017; Ulfert-Blank and Schmidt, 2022; Wang and Zhao, 2021). However, these studies may not fully encompass rapid technological advancements such as artificial intelligence (AI). The Digital Skills Assessment Tool from Europass (2023), based on the DigComp 2.1 framework (Carretero et al., 2017), reflects individuals' self-perceived capabilities for each task or skill. Studies using DigComp 2.1 revealed differences in perceptions of digital competence across domains and age groups (Jiménez-Hernández et al., 2020; Khan and Vuopala, 2019; Kreuder et al., 2024). Moreover, previous research often focuses on working-age groups without clearly defined age brackets or relies on generational categorizations, potentially overlooking age-specific differences in digital competence development (Khan and Vuopala, 2019; Kreuder et al., 2024). Comparing self-efficacy in digital competence among learners taught with different methods, including story-based learning, could enhance understanding of effective strategies for developing digital competence. By examining the self-efficacy of learners using story-based learning alongside other methods, valuable insights into optimal teaching practices could be gained.

Having established the importance of self-efficacy in digital competence, this study now examines the impact of MOOCs as a significant avenue for enhancing digital skills.

2.3 MOOC courses in developing digital competency

The emergence of Massive Open Online Courses (MOOCs) has provided open access to educational courses and materials, offering flexibility and interactive elements to promote learner engagement and collaboration (Nasongkhla et al., 2015). MOOCs have become widely available due to the influence of the open educational resources (OER) movement, which began with initiatives like the MIT OpenCourseWare project in 2001 (Liyanagunawardena et al., 2013). While MOOCs have been instrumental in fostering digital competence and skills development in various countries (Basantes-Andrade et al., 2022; Gordillo et al., 2019; Yelubay et al., 2022), there is still a need to understand the specific impact of story-based learning within MOOCs on digital competence.

The integration of MOOCs in educational and professional contexts has shown significant enhancements in digital competencies, including motivational, technological, cognitive, and ethical dimensions (Gordillo et al., 2019). Moreover, the utilization of MOOCs for employee development has proven effective in promoting digital competence and improving training management (Edelsbrunner et al., 2022). In Thailand, MOOC platforms like Thai MOOC and university platforms have been implemented for educational technology and innovation training, covering topics such as technology-enabled learning management, online communication, and digital media creation (Thai MOOC, 2023; Pradubwate et al., 2020). Exploring the role of MOOCs, specifically those employing story-based learning, can provide valuable insights into optimizing digital skills training through online platforms.

With an understanding of how MOOCs contribute to digital competency development, it is pertinent to examine the effectiveness of story-based learning, particularly through animated videos, in this educational context.

2.4 Story-based learning with animated videos

Story-based learning refers to the delivery of a narrative or sequence of events as media to learners. This instructional approach involves using words, images, videos, games, or other mediums to captivate and engage learners. According to Bruner (1991), narratives serve as the primary mechanism through which humans structure their understanding of reality. They function both as cognitive structures and as forms of communication, aiding in framing and comprehending perceptions of the world. The sequencing of narratives establishes cause-and-effect relationships between events, while the chosen point of view conveys thoughts and feelings to the audience (Abbott, 2011).

Previous studies have highlighted several advantages of utilizing stories and narratives in learning, such as clarifying complex or abstract concepts, facilitating the assimilation of new ideas, and establishing a clear path to understanding (Wells, 1986). Story-based learning has also been associated with increased long-term memory (Banister and Ryan, 2001), enhanced learning gains and self-efficacy (McQuiggan et al., 2008), heightened immersion and motivation (Naul and Liu, 2019), and improved learning engagement and problem-solving skills (Mott and Lester, 2006; Rowe et al., 2011). Narrative animated videos, combining words, pictures, and audio to facilitate learning, align with Mayer’s Cognitive Theory of Multimedia Learning (2014) by engaging both visual and auditory channels, promoting deeper cognitive processing.

In addition, earlier studies on animated videos (Berney and Bétrancourt, 2016; Liu and Elms, 2019) and motion graphic animation (Hanif, 2020; Malakul and Park, 2023; Smeda et al., 2014; Taylor et al., 2017) have consistently demonstrated significant positive impacts on learning experiences and digital literacy, contributing to increased engagement, heightened interest, and improved learning outcomes. However, the impact of story-based learning on digital competence across different populations and learning platforms has not been thoroughly explored. Understanding this impact can help enhance the effectiveness of educational design and meet learners' needs more effectively.

Understanding the impact of the developed course on learners' digital competence requires investigating various domains of digital competence development and self-efficacy. The research questions for this study are as follows:

RQ1.

How does learners' digital competence change after completing the CFA course?

RQ2.

Which digital competence domains vary after the CFA course?

RQ3.

How does digital competence vary by age group after the CFA course?

The insights gained from this research would contribute to the design of more effective educational programs that not only enhance digital competence but also foster a sense of confidence and self-efficacy among learners.

3. Methods

3.1 Research design

This study employed a quasi-experimental design with a comparison group to investigate the impact of a story-based learning MOOC on self-efficacy in digital competence throughout the developed course (Cohen et al., 2007). K-means clustering, a machine-learning technique from the data mining process, was used to determine the experimental and control groups (Ahmad and Dey, 2007; Rachwał et al., 2023; Rodvaree et al., 2024), with SDCS scores serving as the basis for group classification. K-means clustering is widely recognized for its effectiveness in partitioning data into distinct groups by minimizing within-cluster variation and maximizing between-cluster differences (Aldenderfer and Blashfield, 1984; Everitt et al., 2011).

Participants with higher SDCS scores were assigned to the control group, while those with lower scores were placed in the experimental group. This method was chosen to ensure a clear comparison between participants with varying levels of digital competence, aligning with the guidelines for cluster analysis used in social sciences (Lorr, 1983). Learning performance was measured at three time points: before the intervention (pretest), during the intervention (formative assessments after each learning module), and after the intervention (posttest). Additionally, learners' self-efficacy in digital competence was assessed at the end of the course.

To compare digital competence across different age groups, the learners were divided into age categories (i.e. 10, 20, 30, 40, 50, and 60s) for comparative analysis. This age division reflects differences in digital skill development, with younger learners generally starting to engage with technology, while older groups (50–60s) tend to use technology in more specific, practical ways (Kreuder et al., 2024; Staddon, 2020; Teo et al., 2014).
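
As a minimal illustration of this grouping step, the following Python sketch bins ages into the decade categories used here. The DataFrame, column name, and toy values are hypothetical assumptions; the paper does not specify the tooling used for this step.

```python
# Hedged sketch: binning learners' ages into decade groups (10s-60s).
# The "age" column and toy values are illustrative assumptions only.
import pandas as pd

learners = pd.DataFrame({"age": [14, 23, 37, 41, 55, 63]})

bins = [10, 20, 30, 40, 50, 60, 120]   # 10s ... 50s, with over-60 grouped as 60s
labels = ["10s", "20s", "30s", "40s", "50s", "60s"]
learners["age_group"] = pd.cut(learners["age"], bins=bins, labels=labels, right=False)
print(learners)
```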

3.2 Participants

The study participants were learners enrolled in the “Coding for All” (CFA) course on Thai MOOC from January 31, 2023, to March 31, 2024. Data were collected from 12,427 learners, but the analysis focused only on those who completed the course and consented to share personal information and learning records anonymously. The final dataset for analysis comprised 5,501 participants. The minimum sample size was calculated using G*Power software version 3.1 (Faul et al., 2007). Demographic characteristics of the participants are presented in Table 1.
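
Since G*Power is a standalone GUI tool, an equivalent computation can be approximated in Python; the sketch below is purely illustrative, and the effect size, alpha, power, and test family are assumptions, as the paper does not report the parameters used.

```python
# Hedged sketch of a minimum-sample-size computation analogous to G*Power.
# All parameters below are assumed for illustration, not taken from the paper.
from statsmodels.stats.power import FTestAnovaPower

analysis = FTestAnovaPower()
n_total = analysis.solve_power(effect_size=0.25,  # assumed medium effect size
                               alpha=0.05,        # assumed significance level
                               power=0.95,        # assumed statistical power
                               k_groups=2)        # experimental vs control
print(f"Minimum total sample size: {n_total:.0f}")
```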

3.3 Materials and instruments

3.3.1 The CFA course on Thai MOOC

The Coding for All (CFA) course on Thai MOOC was designed to cover fundamental learning content in three modules: Computer Science (CS), Information and Communication Technology (ICT), and Digital Literacy (DL). Table 2 shows the mapping of learning content to the domains of digital competence. This mapping ensures that each module effectively addresses the key competencies outlined in the DigComp 2.2 framework (Vuorikari et al., 2022), providing a comprehensive approach to digital skill development. The integration of these competencies within the CFA course ensures that learners receive a well-rounded education that covers not only theoretical aspects but also practical applications relevant to current technological trends, particularly AI literacy. Regarding content creation, the course materials were developed through a collaborative process involving twenty-one specialists in computer science, computer engineering, and information technology. All learning contents and materials were reviewed by three external experts, professors in educational technology and computer science from different universities, to ensure accuracy and consistency.

The course duration was three hours, including two hours of story-based animated videos and one hour of learning activities (e.g. self-practice, reading materials, learning exercises, and tests). Each video lesson lasted approximately 3–8 min and used motion graphics animation to present what-if scenarios, immersing the audience in captivating narratives. The CS module focused on systematic thinking and problem-solving, the ICT module demonstrated technology’s impact on life and work, and the DL module emphasized ethical and safe technology use. The videos included questions to engage viewers and encourage critical thinking (see Figure 1 for an example of the animated videos).

3.3.2 Learning performance

(1) Pretest: Before starting the course, learners were given a pretest that consisted of six multiple-choice questions. This pretest aimed to assess the learners' baseline knowledge and comprehension across all three modules of the course (i.e. CS, ICT, and DL). The questions were designed to align with the learning objectives for each of these domains, ensuring that the pretest measured key foundational concepts in a manner consistent with the course content. The purpose of the pretest was to establish a baseline of learners' initial level of understanding and provide a reference point for measuring progress throughout the course.

(2) Formative assessments: At the end of each of the three learning modules in CS, ICT, and DL, formative assessments were conducted. These assessments consisted of five multiple-choice questions per module, designed to evaluate learners' comprehension of the specific content covered. Each assessment was carefully crafted to reflect the learning objectives of the respective module, ensuring alignment with the course content in each of the three domains. In total, there were 15 questions across all modules. Formative assessments were designed to demonstrate the changes in learners' knowledge during the learning process. This provided valuable insights for this study by illustrating learning progress and reinforcing key concepts, contributing to a deeper understanding of knowledge development.

(3) Posttest: Upon completion of the course, a summative assessment was conducted to evaluate learners' overall knowledge and skills. The posttest consisted of 15 multiple-choice items, providing a comprehensive review of the course content. Like the pretest and formative assessments, the posttest was designed to align with the learning objectives of all three modules (i.e. CS, ICT, and DL). Although the specific questions in the pretest and posttest differed, both were carefully constructed to assess the same core competencies and learning objectives. The questions covered a range of topics, ensuring that learners' proficiency and mastery of the material were measured consistently across all domains. The posttest results were compared with pretest scores to measure knowledge gains and determine the effectiveness of the instructional strategies employed.

Each test score accounted for 100% of the learners' evaluation in each respective assessment. All tests were reviewed by three external experts—professors in educational technology and computer science from different universities—to ensure content accuracy, alignment with course objectives, and consistency between pretest and posttest questions. The KR-20 reliability test was conducted to measure the internal consistency of the assessments. The results showed a coefficient of 0.68 for the pretest, 0.72 for the formative assessments, and 0.74 for the posttest (Kuder and Richardson, 1937). These reliability values are considered acceptable according to established guidelines (Nunnally and Bernstein, 1994; Taber, 2017). The detailed pretest and posttest questions are provided in the Appendix.
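
For reference, the KR-20 coefficient reported above can be computed directly from a matrix of dichotomously scored (0/1) item responses. The sketch below is illustrative only; the toy response matrix is not the study's data.

```python
# Hedged sketch of the KR-20 internal-consistency computation for 0/1 items.
# Rows are learners, columns are items; the matrix is a toy stand-in.
import numpy as np

responses = np.array([[1, 0, 1, 1, 0, 1],
                      [1, 1, 1, 0, 1, 1],
                      [0, 0, 1, 1, 0, 0],
                      [1, 1, 0, 1, 1, 1]])

k = responses.shape[1]                         # number of items
p = responses.mean(axis=0)                     # proportion answering each item correctly
q = 1 - p                                      # proportion answering incorrectly
total_var = responses.sum(axis=1).var(ddof=1)  # variance of learners' total scores

kr20 = (k / (k - 1)) * (1 - (p * q).sum() / total_var)
print(f"KR-20 = {kr20:.2f}")
```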

3.3.3 Self-efficacy in Digital Competence Scale (SDCS)

The SDCS was designed based on the DigComp 2.2 framework, incorporating selected questions from the Digital Skills Assessment Tool by Europass of the European Union (Europass, 2023) and participants’ competences from DigComp 2.2: The Digital Competence Framework for Citizens guidebook (Vuorikari et al., 2022). The SDCS items assess competency levels ranging from fundamental to advanced, as referenced in the framework, across five domains: Information and data literacy (ID), Communication and collaboration (CC), Digital content creation (DC), Safety (SF), and Problem solving (PS). All selected questions were translated into Thai and edited while preserving their original meaning. The scale employs a four-point Likert-type format, comprising 24 items that assess digital competence at four levels: 0 (I don’t know how to do it), 1 (I can do it with help), 2 (I can do it on my own), and 3 (I can do it with confidence and, if needed, I can support/guide others). To evaluate this instrument’s quality, all question items were validated by three external experts who confirmed their consistency. Additionally, the overall scale demonstrated excellent reliability (Taber, 2017), as indicated by a Cronbach’s alpha coefficient of 0.95.
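
Because the SDCS uses polytomous (0-3) items, its reliability is summarized with Cronbach's alpha rather than KR-20. A minimal sketch of that computation follows; the randomly generated response matrix is a placeholder, not the study's data.

```python
# Hedged sketch: Cronbach's alpha for the 24-item, 4-level (0-3) SDCS.
# The response matrix is randomly generated for illustration only; random
# data yields alpha near zero, whereas the real SDCS responses gave 0.95.
import numpy as np

items = np.random.default_rng(0).integers(0, 4, size=(100, 24))  # toy 0-3 responses

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)        # variance of each item
total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scale scores

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```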

3.4 Data collection and analysis

All participants followed the learning process and completed the pretest, formative assessments, posttest, and the SDCS on the Thai MOOC platform, which operates on the Open edX system (Thai MOOC, 2023). The collected data were analyzed using IBM SPSS Statistics software version 29. K-means clustering was employed to classify participants into control and experimental groups. This method was selected for its ability to minimize within-group variance and maximize between-group differences, making it ideal for analyzing quantitative measures like self-efficacy in digital competence. Its simplicity and efficiency also make it suitable for large datasets, such as those found in MOOCs (Aldenderfer and Blashfield, 1984; Everitt et al., 2011).
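
As an illustration, the following minimal Python sketch reproduces the logic of this classification step; it is not the authors' exact pipeline, and the toy sdcs matrix stands in for the real 5,501 × 5 matrix of domain scores.

```python
# Hedged sketch of the K-means grouping step; not the authors' exact pipeline.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)
sdcs = rng.uniform(0, 3, size=(500, 5))  # toy SDCS scores (ID, CC, DC, SF, PS)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(sdcs)
labels = kmeans.labels_
print(f"Silhouette score: {silhouette_score(sdcs, labels):.2f}")

# Mirror the grouping rule from Section 3.1: the cluster with the higher mean
# SDCS becomes the control group; the other becomes the experimental group.
control_cluster = int(np.argmax(kmeans.cluster_centers_.mean(axis=1)))
groups = np.where(labels == control_cluster, "control", "experimental")
```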

This study applied three different nonparametric statistical analyses. First, Friedman’s analysis of variance was used to assess learning progress from the pretest through the formative assessments across the three modules (CS, ICT, and DL), as well as the posttest, and to examine digital competence within each domain (Friedman, 1940). Second, the Mann-Whitney U test was conducted to analyze differences between the experimental and control groups for each assessment (i.e. pretest, formative assessment, and posttest) (Mann and Whitney, 1947). Third, the Kruskal-Wallis analysis of variance was performed to compare the posttest scores and the SDCS scores across the age groups (10, 20, 30, 40, 50, and 60s) (Kruskal and Wallis, 1952).
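
A hedged sketch of these three tests (plus the Wilcoxon signed-rank post hoc used in Section 4) with SciPy follows; the score arrays are placeholders, and the actual analysis was run in IBM SPSS Statistics v29.

```python
# Hedged sketch of the nonparametric tests named above, using SciPy.
# All score arrays are randomly generated placeholders.
import numpy as np
from scipy.stats import friedmanchisquare, kruskal, mannwhitneyu, wilcoxon

rng = np.random.default_rng(1)
pretest, formative, posttest = (rng.uniform(0, 1, 100) for _ in range(3))

# Friedman's ANOVA: repeated measures across pretest, formative, and posttest
f_stat, f_p = friedmanchisquare(pretest, formative, posttest)

# Mann-Whitney U: experimental vs control group on a single assessment
u_stat, u_p = mannwhitneyu(posttest[:50], posttest[50:])

# Kruskal-Wallis: posttest (or SDCS) scores across several age groups
h_stat, h_p = kruskal(posttest[:30], posttest[30:60], posttest[60:])

# Wilcoxon signed-rank with Bonferroni correction for paired post hoc tests
w_stat, w_p = wilcoxon(pretest, posttest)
print(f"Bonferroni-adjusted p: {min(w_p * 3, 1.0):.4f}")  # 3 pairwise comparisons
```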

Comparing the experimental and control groups, along with different age groups, enhances the study’s reliability. By analyzing learning progress across both groups, this approach provides stronger evidence of the effectiveness of the MOOC course with story-based learning in improving digital competence. The data analysis demonstrates how digital competence develops over time, highlighting differences in progress between learners exposed to the story-based learning intervention and those following traditional methods, across various age groups.

4. Results

4.1 Cluster analysis

K-means clustering was employed using SDCS data in five domains (i.e. ID, CC, DC, SF, and PS) to classify the control and experimental groups for data analysis. The clustering model was optimized based on the highest silhouette value. The SDCS data were classified into two clusters, with a silhouette score of 0.67; a silhouette score above 0.50 is considered acceptable for classifying groups using the K-means clustering method (Rachwał et al., 2023). In addition, the R² value of 0.66 meets the criteria for a moderate effect size (Cohen et al., 2007). Evaluating the cluster analysis’s accuracy using 5-fold cross-validation yielded a result of 73.20% (Rodvaree et al., 2024). The classification resulted in two clusters: 1,465 participants with lower SDCS (Mdn = 2.04, M = 2.61, S.D. = 0.33) were assigned to the experimental group, and 4,036 participants with higher SDCS (Mdn = 3.00, M = 2.91, S.D. = 0.14) were assigned to the control group.

4.2 Learning performance

A Friedman’s analysis of variance revealed significant differences in learning performance across the pretest, the formative assessments from the three modules (i.e. CS, ICT, and DL), and the posttest (χ2(2) = 4863.76, p < 0.001). Wilcoxon signed-rank tests with Bonferroni correction were conducted for post hoc comparisons. The pretest scores (Mdn = 0.50) were significantly lower than both the formative assessments (Mdn = 0.80, z = −49.0, p < 0.001) and the posttest scores (Mdn = 0.93, z = −127.0, p < 0.001). Formative assessment scores were also significantly lower than posttest scores (z = −78.0, p < 0.001). Consistent with the order of mean ranks (pretest: 1.41, formative assessments: 1.90, posttest: 2.68), these findings suggest improvement in learning performance throughout the course.

The test results for the experimental group (lower SDCS) and control group (upper SDCS) revealed that both groups showed improved learning performance when comparing pretest and posttest scores. Table 3 shows the descriptive statistics of the experimental and control groups. The Mann-Whitney U tests indicated significant differences between the experimental and control groups for all assessments: pretest (χ2(1) = 14.38, p < 0.001), formative assessments (χ2(1) = 34.87, p < 0.001), and posttest (χ2(1) = 31.58, p < 0.001). Figure 2 illustrates the box plots of learning performance in the experimental and control groups.

Learning performance across age groups was evaluated using Kruskal-Wallis H tests. No significant difference was found in pretest scores (H(5) = 6.79, p = 0.24). However, significant differences were observed in formative assessments (H(5) = 18.83, p < 0.01) and posttest scores (H(5) = 238.34, p < 0.001). The mean ranks of posttest scores generally increased with age: 10s (2218.15), 20s (2543.09), 30s (2843.06), 40s (3042.50), 50s (3265.00), and 60s (3204.50).

As shown in Table 4, post hoc analysis with Mann-Whitney U tests (adjusted alpha = 0.01 using Bonferroni correction) revealed significant differences in posttest scores between most age groups (ps < 0.001). Specifically, the 60s age group did not differ significantly from the 20, 30, 40, or 50s groups (ps > 0.05). Similarly, the 50s group did not differ significantly from the 40s group (ps > 0.05). Figure 3 depicts the distribution of learning performance scores across age groups.

4.3 Self-efficacy in digital competence between domains

A Friedman analysis of variance revealed significant differences in SDCS scores across the five domains, χ2(4) = 392.30, p < 0.001. Wilcoxon signed-rank tests were conducted for post hoc analysis. Results indicated that CC had the highest mean score (2.73), followed by ID (2.69), SF (2.68), PS (2.67), and DC (2.65). CC scores were significantly higher than those of all other domains (p < 0.001). Additionally, DC scores were significantly lower than SF (p < 0.001), PS (p < 0.001), and ID (p < 0.001). However, no significant difference was found between ID and SF (p = 0.091). See Figure 4 for the mean score order of the SDCS domains.

4.4 Self-efficacy in digital competence between learners’ age groups

The Kruskal-Wallis H test revealed significant differences in SDCS across age groups (mean ranks: 10s = 2155.54, 20s = 2856.45, 30s = 3010.96, 40s = 2842.91, 50s = 2468.62, and 60s = 2578.40; H(5) = 199.03, p < 0.001). As shown in Table 5, post hoc comparisons using Mann-Whitney U tests with a Bonferroni-adjusted alpha of 0.01 showed that learners in their 10s reported significantly lower SDCS than those in their 20, 30, 40, and 50s (ps < 0.01). Learners in their 50s also reported significantly lower SDCS than those in their 20, 30, and 40s (ps < 0.001). However, no significant differences were found between learners in their 20, 30, and 40s. Interestingly, after applying the Bonferroni correction, the initial significant difference between the 60s age group and the other groups disappeared. Figure 5 demonstrates the mean ranks of SDCS scores by age group.

5. Discussion

5.1 RQ1: how does learners' digital competence change after completing the CFA course?

Comparing the mean ranks of the pretest scores, the formative assessments from the three modules (i.e. CS, ICT, and DL), and the posttest scores indicates that learners improved their digital competence during the course, as reflected by higher scores in the formative assessments compared to the pretest. Additionally, the posttest scores exhibited a higher mean rank than both the pretest and the formative assessments. The experimental group (lower SDCS) and the control group (upper SDCS) both showed parallel positive changes, with the control group consistently achieving higher scores across all stages (i.e. pretest, formative assessment, and posttest) than the experimental group. The posttest scores of learners from different age groups (i.e. 10, 20, 30, 40, 50, and 60s) indicated a significant difference in mean rank totals across the age groups. As learners' age increased, their posttest scores consistently demonstrated higher performance levels. The present study’s results are in line with prior research conducted by Basantes-Andrade et al. (2022), Edelsbrunner et al. (2022), Pradubwate et al. (2020), and Yelubay et al. (2022), highlighting the positive impact of MOOC-based online lessons on the development of learners' digital competence. The observed improvements in learning outcomes align with previous studies indicating that learners benefit from story-based learning (McQuiggan et al., 2008) and animated videos (Berney and Bétrancourt, 2016; Taylor et al., 2017; Hanif, 2020; Liu and Elms, 2019), resulting in enhanced learning outcomes and technological competence (Smeda et al., 2014). These findings further strengthen the evidence that learners improved their digital competence in terms of knowledge and skills during the CFA course.

5.2 RQ2: which digital competence domains vary after the CFA course?

This study examined learners' self-efficacy in digital competence using the Self-efficacy in Digital Competence Scale (SDCS). The results showed significant differences across the five DigComp 2.2 framework domains (Vuorikari et al., 2022) of CC, ID, SF, PS, and DC. Learners reported the highest confidence in CC, followed by ID, SF, and PS. DC had the lowest reported confidence.

Through the cluster analysis, it was found that 73.37% of learners in the control group, classified as upper SDCS, had scores at the “Advanced” level, as defined by Europass (2023), indicating a high level of confidence in their digital competence and the ability to guide others. In contrast, learners in the experimental group, classified as lower SDCS, had scores ranging from “Beginner to Intermediate.”

In addition, the results of SDCS in this course align with the findings of Castaño-Muñoz et al. (2016), indicating that learners who engage in MOOCs and complete them tend to possess higher levels of digital skills. This is further supported by the study conducted by Romero-Rodriguez et al. (2020), which suggests that the initial level of digital competence of a participant in a MOOC can serve as a valid predictor of their likelihood to complete it. Therefore, this suggests that learners of this course felt relatively more confident in their abilities related to communication and collaboration, safety, problem solving, and information and data literacy, compared to digital content creation.

The high confidence in communication and collaboration likely reflects learners' familiarity with communication apps, which are heavily used in Thailand (Kemp, 2023). This confidence may also reflect how the COVID-19 outbreak accelerated the shift toward online working styles (Dahiya et al., 2021). Conversely, the lower confidence in digital content creation may be related to concerns about copyright issues (Vuorikari et al., 2022).

5.3 RQ3: how does digital competence vary by age group after the CFA course?

The statistical analysis results indicated that there were indeed differences in self-efficacy perceptions in digital competence across age groups. The digital competence of the age groups can also be divided into two levels: lower competence, which includes learners in their 10, 50, and 60s, and upper competence, which includes learners in their 20, 30, and 40s. According to Kreuder et al. (2024), there are significant age-related differences in digital competence, highlighting the need for appropriate educational strategies. The findings of this study align with earlier research (Khan and Vuopala, 2019), which found that Generation Z (i.e. 10s) had a lower level of digital competence proficiency than Generation Y or Millennials (i.e. 20 and 30s), and that Millennials have higher proficiency than Generation X (i.e. 40 and 50s) and Boomers (60s). In Thailand, it was found that younger learners, who perceive themselves as more technologically competent, tend to have higher levels of e-learning acceptance than older learners (Teo et al., 2014).

When considering the self-efficacy levels of learners based on age groups, it was observed that the perceived digital self-efficacy of learners in the 40–50s age group tended to decrease, in contrast to the observed increase in learning performance. This trend is consistent with research showing that older individuals, particularly those in their 50 and 60s, tend to use technology in more targeted and specific ways, rather than for general purposes (Staddon, 2020). This finding suggests that learners in this age group may have had lower confidence in their skills compared to the actual knowledge and skills they developed throughout the course, relative to learners from other age groups who studied the same course. These results echo research on the accuracy and bias of ICT self-efficacy, which found that learners, on average, overestimate their ICT competencies (Aesaert et al., 2017). Also, learners with lower actual abilities tend to overestimate their competence, while those with higher abilities underestimate theirs (Ehrlinger et al., 2008). This suggests that younger age groups may focus on foundational skills, whereas older adults require advanced digital training to bridge the competence gap (Kreuder et al., 2024).

5.4 Practical implications

Overall, the results of this study provide strong support for the effectiveness of the CFA course in enhancing learners' digital competence through story-based learning with animated videos. The course’s learning materials, aligned with the DigComp 2.2 framework (Vuorikari et al., 2022), contributed significantly to promoting digital citizenship. The study’s findings also indicate that learners need further support in digital content creation, which could be addressed by incorporating additional content in this domain.

Furthermore, the Self-Efficacy in Digital Competence Scale (SDCS) results revealed that learners in their 10s had lower levels of self-efficacy in digital competence compared to other age groups, a pattern also reflected in their posttest scores. This highlights the importance of promoting self-efficacy in digital competence among learners in their 10, 50, and 60s. While learners in their 50 and 60s demonstrated high levels of learning, their self-perceived assessments indicated lower confidence in their digital skills.

Understanding how self-efficacy influences digital competence development can provide deeper insights into effective educational strategies. Implementing age-specific support mechanisms can help cater to the unique needs of different age groups, thereby enhancing their self-efficacy and overall digital competence. Additionally, the use of story-based learning has been shown to significantly engage learners by making content more relatable and memorable. This method can help maintain learners' interest and motivation, which is crucial for sustained learning outcomes (Hwang et al., 2023).

5.5 Limitations

The present study had some limitations that should be acknowledged. Firstly, the analysis was conducted on individuals who completed the course, which represents a subset of the overall learner population. Future research could benefit from including learners who did not complete the course to better understand factors related to course dropout and to provide a more comprehensive view of learner experiences. Those who complete a MOOC often possess higher levels of digital competence, which may not fully reflect the diversity of the broader population (Castaño-Muñoz et al., 2016; Romero-Rodriguez et al., 2020).

Secondly, while the age distribution of registered learners was somewhat imbalanced—particularly with fewer young learners under 10 and elderly learners over 60 compared to those in their 30s—statistical analyses were still feasible and provided meaningful insights into the digital competence of the learners. However, future studies with more balanced age groups would offer a more representative understanding of digital competence across all age ranges.

5.6 Suggestions for future study

The findings of this study offer valuable insights and provide recommendations for future research and curriculum development. Firstly, it is important to consider incorporating self-efficacy assessment alongside learning performance evaluation in the course. Currently, learners are not informed about their level of self-efficacy, which can impact their motivation and self-confidence. By providing learners with information about their self-efficacy levels, based on Bandura’s self-efficacy theory (Bandura, 1997), they can be better equipped to engage in digital self-improvement and further enhance their digital skills.

Secondly, future curriculum development could prioritize expanding digital content creation. This would allow individuals to gain confidence in creating and using digital multimedia creatively, as well as in programming and developing applications. By incorporating activities and resources that encourage programming skills, learners can enhance their ability to create original and impactful multimedia content while also developing proficiency in application development. It is important to provide guidance and promote responsible and ethical use of digital content, programming principles, and intellectual property rights throughout the curriculum.

Lastly, further research can delve into exploring the underlying factors contributing to these age-related differences in self-efficacy perceptions and investigating strategies to enhance self-efficacy across all age groups.

6. Conclusion

In conclusion, this study demonstrates that completing the CFA course with story-based learning in a MOOC platform leads to an enhancement in learners' digital competence. The results show a clear improvement in digital competence, as reflected by higher scores in formative assessments and posttests compared to the pretest. By utilizing k-means clustering, participants were effectively grouped into experimental and control groups based on their self-efficacy in digital competence. This technique allowed for a more precise comparison between learners with varying competence levels, ensuring that the analysis captured meaningful differences between the groups.

The study also highlights differences in digital competence across age groups, with learners in their 10, 50, and 60s showing lower levels of competence, while those in their 20, 30, and 40s demonstrate higher levels. These findings suggest that digital competence development varies significantly with age, indicating the importance of addressing the needs of different age groups.

Furthermore, the study suggests the need to incorporate self-efficacy assessments regularly throughout the course and expand content related to digital content creation, which appeared as a weaker area of self-efficacy among learners. Overall, the CFA course is one approach that can enhance learners' digital competence. This study highlights differences in digital competence across age groups, providing insights into skill development.

Figures

Figure 1. Story-based animated videos in CFA course

Figure 2. Learning performance in the course between lower and upper SDCS

Figure 3. Learning performance in the course by age groups

Figure 4. SDCS sorted by mean rank

Figure 5. The mean rank of the posttest and the SDCS by age groups

Demographic characteristics (N = 5,501)

Characteristics | Frequency | Percent
Gender: Male | 1,545 | 28.09
Gender: Female | 3,929 | 72.42
Gender: Others | 27 | 0.49
Age group: 10s (10–19) | 842 | 15.31
Age group: 20s (20–29) | 1,463 | 26.60
Age group: 30s (30–39) | 1,479 | 26.89
Age group: 40s (40–49) | 1,187 | 21.58
Age group: 50s (50–59) | 501 | 9.11
Age group: 60s (over 60) | 29 | 0.53
Qualification: Undergraduate | 1,153 | 20.96
Qualification: Bachelor’s degree | 3,453 | 62.77
Qualification: Master’s degree | 874 | 15.89
Qualification: Doctoral degree | 21 | 0.38
Occupation: Student | 1,673 | 30.41
Occupation: Graduate student | 138 | 2.51
Occupation: Government/state enterprise employee | 1,042 | 18.94
Occupation: Teacher/educator | 2,529 | 45.97
Occupation: Private employee | 67 | 1.22
Occupation: Entrepreneur | 13 | 0.24
Occupation: Unemployed | 39 | 0.71
SDCS level: Lower (experimental group) | 1,465 | 26.63
SDCS level: Upper (control group) | 4,036 | 73.37

Source(s): Authors’ own work

Learning outlines of the CFA course

Module | Unit | Description | Digital competence
1. CS | 1.1 From coding to thinking | Analyze the order of thinking, action, and planning before starting work | DC, PS
1. CS | 1.2 Choose the right and decide wisely | Design solutions with multiple options, then choose and decide on the best option | DC, PS
1. CS | 1.3 Technology decoding | Learn how computer-based decision-making works and analyze the cause of a problem with a flowchart | DC, PS
1. CS | 1.4 Get to know AI technology | Get to know the history of AI technology, types of AI, and usage examples | ID
2. ICT | 2.1 Media ecosystem | The diverse media ecosystem in today’s society | ID, CC
2. ICT | 2.2 Cashless society, a comfy life | Use online financial services safely | SF
2. ICT | 2.3 Working together | Use online services for creating communications, presentations, and collaborations with others | CC, PS
2. ICT | 2.4 Lift up life with data | Analyze and use information to improve the quality of life | ID, PS
2. ICT | 2.5 Financial planning with technology | Use online expense accounting applications to analyze and manage personal finances systematically | ID, PS
3. DL | 3.1 Digital intelligence quotient (DQ) | Digital intelligence quotient to become a digital citizen | CC, SF
3. DL | 3.2 Fact or fake: check for sure! | Evaluate information and sources to select accurate information | ID, CC
3. DL | 3.3 Should I share this? | Identify how to properly use copyrighted works without infringing on intellectual property | CC, DC
3. DL | 3.4 Cyberbullying | How to deal with problems caused by using social media | CC, SF
3. DL | 3.5 Cognitive biases of information in digital media | Analyze, compare, and make decisions based on rational thinking; identify or characterize the reasons for failure in a given situation | ID, PS
3. DL | 3.6 AI literacy | Analyze discrepancies in information generated by the technology and discern the accuracy of the content presented | ID, CC, SF

Source(s): Authors’ own work

Descriptive statistic of experimental and control groups

Group | Assessment | Median | Mean | S.D.
Experimental (lower SDCS) | Pretest | 0.50 | 0.54 | 0.28
Experimental (lower SDCS) | Formative assessments | 0.73 | 0.72 | 0.21
Experimental (lower SDCS) | Posttest | 0.93 | 0.85 | 0.20
Control (upper SDCS) | Pretest | 0.67 | 0.57 | 0.27
Control (upper SDCS) | Formative assessments | 0.80 | 0.75 | 0.18
Control (upper SDCS) | Posttest | 0.93 | 0.89 | 0.15

Source(s): Authors’ own work

Pairwise comparison of the posttest score between age groups

Age group pair | U | SE | z
10s vs 20s | −324.95 | 65.95 | −4.93***
10s vs 30s | −624.92 | 65.82 | −9.49***
10s vs 40s | −824.35 | 68.70 | −12.00***
10s vs 50s | −1,046.85 | 86.03 | −12.17***
10s vs 60s | −986.35 | 287.95 | −3.43**
20s vs 30s | −299.97 | 56.22 | −5.34***
20s vs 40s | −499.41 | 59.56 | −8.39***
20s vs 50s | −721.90 | 78.92 | −9.15***
20s vs 60s | −661.41 | 285.91 | −2.31
30s vs 40s | −199.43 | 59.41 | −3.36*
30s vs 50s | −421.93 | 78.81 | −5.35***
30s vs 60s | −361.44 | 285.88 | −1.26
40s vs 50s | −222.50 | 81.23 | −2.74
40s vs 60s | −162.00 | 286.56 | −0.57
60s vs 50s | 60.50 | 291.20 | 0.21

Note(s): *** ps < 0.001, ** ps < 0.01, * ps < 0.05

Source(s): Authors’ own work

Pairwise comparison of SDCS between age groups

Age group pair | U | SE | z
10s vs 20s | −700.91 | 66.16 | −10.59***
10s vs 30s | −855.42 | 66.03 | −12.96***
10s vs 40s | −687.37 | 68.91 | −9.97***
10s vs 50s | −313.08 | 86.30 | −3.63**
10s vs 60s | −422.85 | 288.87 | −1.46
20s vs 30s | −154.51 | 56.40 | −2.74
20s vs 40s | 13.54 | 59.75 | 0.23
20s vs 50s | 387.83 | 79.17 | 4.90***
20s vs 60s | 278.05 | 286.82 | 0.97
30s vs 40s | 168.05 | 59.60 | 2.82
30s vs 50s | 542.33 | 79.06 | 6.86***
30s vs 60s | 432.56 | 286.79 | 1.51
40s vs 50s | 374.29 | 81.49 | 4.59***
40s vs 60s | 264.52 | 287.47 | 0.92
60s vs 50s | −109.77 | 292.13 | −0.38

Note(s): *** ps < 0.001, ** ps < 0.01

Source(s): Authors’ own work

Authors’ contributions: Sivakorn Malakul: Conceptualization, Methodology, Investigation, Software, Formal analysis, Writing - Original Draft, Writing - Review and Editing, Project administration. Cheeraporn Sangkawetai: Methodology, Validation, Investigation, Resources, Writing - Review and Editing, Supervision.

Declaration of competing interest: The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: This research project was supported by the Institute for the Promotion of Teaching Science and Technology (IPST), Ministry of Education.

Ethics declaration and consent to participate: The authors conducted and reported the research in this article in accordance with The Ethical Guidelines for Research on Human Subjects in Thailand (Second Edition) BE 2564. They declare that all participants in this study provided informed consent. All data collection and processing were performed anonymously. All participants acknowledged providing personal information under the protection of Thailand’s Personal Data Protection Act BE 2562.

Data availability: Data will be made available upon reasonable request.

Supplementary material

The supplementary material for this article can be found online.

References

Abbott, H.P. (2011), The Cambridge Introduction to Narrative, Cambridge University Press, Cambridge.

Aesaert, K., Voogt, J., Kuiper, E. and van Braak, J. (2017), “Accuracy and bias of ICT self-efficacy: an empirical study into students' over- and underestimation of their ICT competences”, Computers in Human Behavior, Vol. 75, pp. 92-102, doi: 10.1016/j.chb.2017.05.010.

Ahmad, A. and Dey, L. (2007), “A k-mean clustering algorithm for mixed numeric and categorical data”, Data and Knowledge Engineering, Vol. 63 No. 2, pp. 503-527, doi: 10.1016/j.datak.2007.03.016.

Aldenderfer, M. and Blashfield, R. (1984), Cluster Analysis, SAGE Publications, doi: 10.4135/9781412983648.

Au, C.H., Lam, S., Fung, L. and Xu, X. (2016), “Using animation to develop a MOOC on information security”, 2016 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), pp. 365-369, doi: 10.1109/IEEM.2016.7797898.

Bandura, A. (1986), Social Foundations of Thought and Action: A Social Cognitive Theory, Prentice-Hall, New Jersey.

Bandura, A. (1997), Self-efficacy: The Exercise of Control, Freeman, San Francisco.

Banister, F. and Ryan, C. (2001), “Developing science concepts through story-telling”, School Science Review, Vol. 83 No. 302, pp. 75-83.

Basantes-Andrade, A., Cabezas-González, M., Casillas-Martín, S., Naranjo-Toro, M. and Benavides-Piedra, A. (2022), “NANO-MOOCs to train university professors in digital competences”, Heliyon, Vol. 8 No. 6, e09456, doi: 10.1016/j.heliyon.2022.e09456.

Berney, S. and Bétrancourt, M. (2016), “Does animation enhance learning? A meta-analysis”, Computers and Education, Vol. 101, pp. 150-167, doi: 10.1016/j.compedu.2016.06.005.

Bruner, J. (1991), “The narrative construction of reality”, Critical Inquiry, Vol. 18 No. 1, pp. 1-21, doi: 10.1086/448619.

Carretero, S., Vuorikari, R., Punie, Y. and European Commission, Joint Research Centre (2017), “DigComp 2.1: the digital competence framework for citizens with eight proficiency levels and examples of use”, Publications Office of the European Union, doi: 10.2760/00963.

Castaño-Muñoz, J., Kreijns, K., Kalz, M. and Punie, Y. (2016), “Does digital competence and occupational setting influence MOOC participation? Evidence from a cross-course survey”, Journal of Computing in Higher Education, Vol. 29 No. 1, pp. 28-46, doi: 10.1007/s12528-016-9123-z.

Cohen, L., Manion, L. and Morrison, K. (2007), Research Methods in Education, 6th ed., Routledge, London.

Dahiya, S., Rokanas, L.N., Singh, S., Yang, M. and Peha, J.M. (2021), “Lessons from internet use and performance during Covid-19”, Journal of Information Policy, Vol. 11, pp. 202-221, doi: 10.5325/jinfopoli.11.2021.0202.

Edelsbrunner, S., Steiner, K., Schön, S., Ebner, M. and Leitner, P. (2022), “Promoting digital skills for Austrian employees through a MOOC: results and lessons learned from design and implementation”, Education Sciences, Vol. 12 No. 2, p. 89, doi: 10.3390/educsci12020089.

Ehrlinger, J., Johnson, K., Banner, M., Dunning, D. and Kruger, J. (2008), “Why the unskilled are unaware: further explorations of (absent) self-insight among the incompetent”, Organizational Behavior and Human Decision Processes, Vol. 105 No. 1, pp. 98-121, doi: 10.1016/j.obhdp.2007.05.002.

Europass (2023), “Digital skills assessment tool”, available at: https://europa.eu/europass/digitalskills/ (accessed 15 January 2023).

European Commission (2021), “Digital education initiatives | European Education Area”, available at: https://education.ec.europa.eu/focus-topics/digital-education/about

Everitt, B.S., Landau, S., Leese, M. and Stahl, D. (2011), Cluster Analysis, 5th ed., John Wiley & Sons, New Jersey.

Faul, F., Erdfelder, E., Lang, A.-G. and Buchner, A. (2007), “G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences”, Behavior Research Methods, Vol. 39 No. 2, pp. 175-191, doi: 10.3758/bf03193146.

Ferrari, A., Punie, Y. and Brečko, B. (2013), “DIGCOMP: a framework for developing and understanding digital competence in Europe”, Final report, doi: 10.2788/52966.

Friedman, M. (1940), “A comparison of alternative tests of significance for the problem of m rankings”, The Annals of Mathematical Statistics, Vol. 11 No. 1, pp. 86-92, doi: 10.1214/aoms/1177731944.

Gordillo, A., López-Pernas, S. and Barra, E. (2019), “Effectiveness of MOOCs for teachers in safe ICT use training”, Comunicar, Vol. 27 No. 61, pp. 103-112, doi: 10.3916/c61-2019-09.

Hanif, M. (2020), “The development and effectiveness of motion graphic animation videos to Improve primary school students' sciences learning outcomes”, International Journal of Instruction, Vol. 13 No. 4, pp. 247-266, doi: 10.29333/iji.2020.13416a.

Hatlevik, O.E., Ottestad, G. and Throndsen, I. (2014), “Predictors of digital competence in 7th grade: a multilevel analysis”, Journal of Computer Assisted Learning, Vol. 31 No. 3, pp. 220-231, doi: 10.1111/jcal.12065.

Hwang, G., Zou, D. and Wu, Y.-X. (2023), “Learning by storytelling and critiquing: a peer assessment-enhanced digital storytelling approach to promoting young students' information literacy, self-efficacy, and critical thinking awareness”, Educational Technology Research and Development, Vol. 71 No. 3, pp. 1079-1103, doi: 10.1007/s11423-022-10184-y.

ISTE (2022), Digital Citizenship in Education, available at: https://www.iste.org/areas-of-focus/digital-citizenship

Jiménez-Hernández, D., González-Calatayud, V., Torres-Soto, A., Martínez Mayoral, A. and Morales, J. (2020), “Digital competence of future secondary school teachers: differences according to gender, age, and branch of knowledge”, Sustainability, Vol. 12 No. 22, 9473, doi: 10.3390/su12229473.

Kemp, S. (2023), “Digital 2023: Thailand”, Datareportal, available at: https://datareportal.com/reports/digital-2023-thailand

Khan, F. and Vuopala, E. (2019), “Digital competence assessment across generations”, International Journal of Digital Literacy and Digital Competence, Vol. 10 No. 2, pp. 15-28, doi: 10.4018/ijdldc.2019040102.

Kreuder, A., Frick, U., Rakoczy, K. and Schlittmeier, S.J. (2024), “Digital competence in adolescents and young adults: a critical analysis of concomitant variables, methodologies and intervention strategies”, Humanities and Social Sciences Communications, Vol. 11 No. 1, 48, doi: 10.1057/s41599-023-02501-4.

Kruskal, W.H. and Wallis, W.A. (1952), “Use of ranks in one-criterion variance analysis”, Journal of the American Statistical Association, Vol. 47 No. 260, pp. 583-621, doi: 10.1080/01621459.1952.10483441.

Kuder, G.F. and Richardson, M.W. (1937), “The theory of the estimation of test reliability”, Psychometrika, Vol. 2 No. 3, pp. 151-160, doi: 10.1007/BF02288391.

Liu, C. and Elms, P. (2019), “Animating student engagement: the impacts of cartoon instructional videos on learning experience”, Research in Learning Technology, Vol. 27, doi: 10.25304/rlt.v27.2124.

Liyanagunawardena, T.R., Adams, A.A. and Williams, S.A. (2013), “MOOCs: a systematic study of the published literature 2008-2012”, The International Review of Research in Open and Distributed Learning, Vol. 14 No. 3, p. 202, doi: 10.19173/irrodl.v14i3.1455.

Lorr, M. (1983), Cluster Analysis for Social Scientists, Jossey-Bass, California.

Malakul, S. and Park, I. (2023), “The effects of using an auto-subtitle system in educational videos to facilitate learning for secondary school students: learning comprehension, cognitive load, and satisfaction”, Smart Learning Environments, Vol. 10, 4, doi: 10.1186/s40561-023-00224-2.

Mann, H.B. and Whitney, D.R. (1947), “On a test of whether one of two random variables is stochastically larger than the other”, The Annals of Mathematical Statistics, Vol. 18 No. 1, pp. 50-60, doi: 10.1214/aoms/1177730491.

Mayer, R.E. (2014), “Cognitive theory of multimedia learning”, in Mayer, R.E. (Ed.), The Cambridge Handbook of Multimedia Learning, 2nd ed., Cambridge University Press, Cambridge, doi: 10.1017/CBO9781139547369.005.

McQuiggan, S.W., Rowe, J.P., Lee, S. and Lester, J.C. (2008), “Story-based learning: the impact of narrative on learning experiences and outcomes”, in Woolf, B.P., Aïmeur, E., Nkambou, R. and Lajoie, S. (Eds), Intelligent Tutoring Systems, Springer Berlin Heidelberg, pp. 530-539, doi: 10.1007/978-3-540-69132-7_56.

Mott, B.W. and Lester, J.C. (2006), “Narrative-centered tutorial planning for inquiry-based learning environments”, in Ikeda, M., Ashley, K.D. and Chan, T. (Eds), Intelligent Tutoring Systems, Springer Berlin Heidelberg, pp. 675-684, doi: 10.1007/11774303_67.

Nasongkhla, J., Thammetar, T. and Chen, S.-H. (2015), “Thailand OERs and MOOCs country report”, in MOOCs and Educational Challenges around Asia and Europe, Vol. 121.

Naul, E. and Liu, M. (2019), “Why story matters: a review of narrative in serious games”, Journal of Educational Computing Research, Vol. 58 No. 3, doi: 10.1177/0735633119859904.

Nunnally, J.C. and Bernstein, I.H. (1994), Psychometric Theory, 3rd ed., McGraw-Hill, New York.

OECD (2021), 21st-Century Readers: Developing Literacy Skills in a Digital World, PISA, OECD Publishing, Paris, doi: 10.1787/a83d84cb-en.

Peters, M., Elasri Ejjaberi, A., Jesús Martínez, M. and Fabregues, S. (2022), “Teacher digital competence development in higher education: overview of systematic reviews”, Australasian Journal of Educational Technology, Vol. 38 No. 3, pp. 122-139, doi: 10.14742/ajet.7543.

Phuapan, P., Viriyavejakul, C. and Pimdee, P. (2016), “An analysis of digital literacy skills among Thai university seniors”, International Journal of Emerging Technologies in Learning (IJET), Vol. 11 No. 3, p. 24, doi: 10.3991/ijet.v11i03.5301.

Pradubwate, R., Pheeraphan, N., Sirawong, N. and Trirat, N. (2020), “Characteristics and learning behavior of active learners on SWU-MOOC”, Proceedings of the 2020 11th International Conference on E-Education, E-Business, E-Management, and E-Learning, pp. 158-162, doi: 10.1145/3377571.3377603.

Rachwał, A., Popławska, E., Gorgol, I., Cieplak, T., Pliszczuk, D., Skowron, Ł. and Rymarczyk, T. (2023), “Determining the quality of a dataset in clustering terms”, Applied Sciences, Vol. 13 No. 5, 2942, doi: 10.3390/app13052942.

Rodvaree, P., Suyoteetanarat, P., Naknut, K., Sinthupinyo, S. and Malakul, S. (2024), “A new clustering personality for higher education selection based on multiple intelligences”, 2024 IEEE International Conference on Cybernetics and Innovations (ICCI), pp. 1-6, doi: 10.1109/ICCI60780.2024.10532481.

Røkenes, F.M. and Krumsvik, R.J. (2014), “Development of student teachers' digital competence in teacher education – a literature review”, Nordic Journal of Digital Literacy, Vol. 9 No. 4, pp. 250-280, doi: 10.18261/issn1891-943x-2014-04-03.

Rolf, E., Knutsson, O. and Ramberg, R. (2019), “An analysis of digital competence as expressed in design patterns for technology use in teaching”, British Journal of Educational Technology, Vol. 50 No. 6, pp. 3361-3375, doi: 10.1111/bjet.12739.

Romero-Rodriguez, L.M., Ramirez-Montoya, M.S. and Gonzalez, J.R.V. (2020), “Incidence of digital competences in the completion rates of MOOCs: case study on energy sustainability courses”, IEEE Transactions on Education, Vol. 63 No. 3, pp. 183-189, doi: 10.1109/te.2020.2969487.

Rowe, J.P., Shores, L.R., Mott, B.W. and Lester, J.C. (2011), “Integrating learning, problem solving, and engagement in narrative-centered learning environments”, International Journal of Artificial Intelligence in Education, Vol. 21 Nos 1-2, pp. 115-133, doi: 10.3233/JAI-2011-019.

Siddoo, V., Sawattawee, J., Janchai, W. and Yodmongkol, P. (2017), “Exploring the competency gap of IT students in Thailand: the employers’ view of an effective workforce”, Journal of Technical Education and Training, Vol. 9 No. 2, pp. 1-15.

Smeda, N., Dakich, E. and Sharda, N. (2014), “The effectiveness of digital storytelling in the classrooms: a comprehensive study”, Smart Learning Environments, Vol. 1 No. 1, 6, doi: 10.1186/s40561-014-0006-3.

Staddon, R.V. (2020), “Bringing technology to the mature classroom: age differences in use and attitudes”, International Journal of Educational Technology in Higher Education, Vol. 17 No. 1, 11, doi: 10.1186/s41239-020-00184-4.

Taber, K.S. (2017), “The use of Cronbach's alpha when developing and reporting research instruments in science education”, Research in Science Education, Vol. 48 No. 6, pp. 1273-1296, doi: 10.1007/s11165-016-9602-2.

Taylor, M., Marrone, M., Tayar, M. and Mueller, B. (2017), “Digital storytelling and visual metaphor in lectures: a study of student engagement”, Accounting Education, Vol. 27 No. 6, pp. 552-569, doi: 10.1080/09639284.2017.1361848.

Teo, T., Ruangrit, N., Khlaisang, J., Thammetar, T. and Sunphakitjumnong, K. (2014), “Exploring e-learning acceptance among university students in Thailand: a national survey”, Journal of Educational Computing Research, Vol. 50 No. 4, pp. 489-506, doi: 10.2190/ec.50.4.c.

Thai MOOC (2023), Thailand Cyber University, Ministry of Higher Education, Science, Research and Innovation, available at: https://thaimooc.org/ (accessed 16 January 2023).

Ulfert-Blank, A.-S. and Schmidt, I. (2022), “Assessing digital self-efficacy: review and scale development”, Computers and Education, Vol. 191, 104626, doi: 10.1016/j.compedu.2022.104626.

UNESCO (2022), “About media and information literacy”, available at: https://www.unesco.org/en/communication-information/media-information-literacy/about

Vuorikari, R., Punie, Y., Carretero, S. and Van den Brande, L. (2016), DigComp 2.0: The Digital Competence Framework for Citizens, Publications Office of the European Union, doi: 10.2791/11517.

Vuorikari, R., Kluzer, S. and Punie, Y. (2022), DigComp 2.2: The Digital Competence Framework for Citizens – With New Examples of Knowledge, Skills and Attitudes, Publications Office of the European Union, doi: 10.2760/115376.

Wang, Q. and Zhao, G. (2021), “ICT self-efficacy mediates most effects of university ICT support on preservice teachers' TPACK: evidence from three normal universities in China”, British Journal of Educational Technology, Vol. 52 No. 6, pp. 2319-2339, doi: 10.1111/bjet.13141.

Wells, G. (1986), The Meaning Makers: Children Learning Language and Using Language to Learn, Heinemann, New Hampshire.

Yelubay, Y., Dzhussubaliyeva, D., Moldagali, B., Suleimenova, A. and Akimbekova, S. (2022), “Developing future teachers' digital competence via massive open online courses (MOOCs)”, Journal of Social Studies Education Research, Vol. 13 No. 2, pp. 170-195, available at: http://www.jsser.org/index.php/jsser/article/view/4197

Yoon, S.H. (2022), “Gender and digital competence: analysis of pre-service teachers' educational needs and its implications”, International Journal of Educational Research, Vol. 114, 101989, doi: 10.1016/j.ijer.2022.101989.

Zhang, Z., Maeda, Y., Newby, T., Cheng, Z. and Xu, Q. (2023), “The effect of preservice teachers' ICT integration self-efficacy beliefs on their ICT competencies: the mediating role of online self-regulated learning strategies”, Computers and Education, Vol. 193, 104673, doi: 10.1016/j.compedu.2022.104673.

Acknowledgements

The authors would like to thank all contributors involved in developing the “Coding for All” course and the experts who reviewed the research instruments. We also extend our gratitude to the Thai MOOC platform, operated by Thailand Cyber University, for hosting and distributing the course.

Corresponding author

Sivakorn Malakul is the corresponding author and can be contacted at: sivakorn.mk@gmail.com

About the authors

Sivakorn Malakul completed his M.Ed. in Educational Technology at Korea University and his B.Ed. in Computer Education at Chulalongkorn University. His academic interests include AI in education and computer science education. He currently promotes the teaching of computing science in K-12 at the Institute for the Promotion of Teaching Science and Technology, Ministry of Education (Thailand).

Cheeraporn Sangkawetai earned her Ph.D. in Learning Innovation and Technology from King Mongkut’s University of Technology Thonburi, Thailand. She is the head of the technology department at the Institute for the Promotion of Teaching Science and Technology, Ministry of Education (Thailand). She specializes in computing science education and coding in K–12 classrooms.
