Meryl P. Gardner, Roberta Michnick Golinkoff, Kathy Hirsh‐Pasek and Daniel Heiney‐Gonzalez
Abstract
Purpose
The purpose of this paper is to gain insight into how both characteristics of toys and marketer‐provided cues influence parents' perceptions of advertised toys and their ideas of what life skills are important for their children's future well‐being and success.
Design/methodology/approach
Data were collected with a 2 (toy encourages structured play vs toy encourages unstructured play) × 2 (ad mentions “brain development” vs ad mentions “child development”) experimental design involving four advertisements for a hypothetical toy.
Findings
Parents recognized that the toy encouraging unstructured play had many benefits. Relative to parents who saw an ad with a “child development” appeal, those who saw an ad with a “brain development” appeal rated social and intellectual development as less important for their children.
Practical implications
Findings support the idea that manufacturers can and should continue to develop toys that encourage relatively unstructured play; such toys are both appreciated by parents and valued by experts. The findings also support eliminating “brain talk” from advertising; such messages do not enhance parents' evaluations of toys and detract from the value parents place on social and intellectual development.
Social implications
By designing toys that encourage the kinds of play most beneficial to children and promoting them with advertisements free of “brain” language, marketers can support children's development and parents' values.
Originality/value
This paper provides insights into the effects of toy and ad characteristics on parents' perceptions of toys and what is important for their children.
Abstract
Purpose
This study, by critically analyzing material from multiple sources, aims to provide an overview of the evaluation tools available for children's educational apps. To realize this objective, a systematic literature review was conducted, searching all English-language literature published after January 2010 in multiple electronic databases and internet sources. Because the databases are constructed differently, various combinations of search strings were used; the results were then cross-referenced to discard duplicate references and retain those that met the inclusion criteria.
Design/methodology/approach
The present study was conducted according to the methods provided by Khan et al. (2003) and Thomé et al. (2016). The procedure comprised four stages: planning the review, identifying relevant studies in the literature, critically analyzing the literature, and summarizing and interpreting the findings (Figure 1). In addition, the well-known PRISMA checklist was followed as recommended (Moher et al., 2015).
Findings
These review results reveal that, although several evaluation tools exist, most are not considered adequate to help teachers and parents evaluate the pedagogical affordances of educational apps correctly and easily. Indeed, most of these tools are considered outdated. With the emergence of new issues such as the General Data Protection Regulation, the quality criteria and methods for assessing children's products need to be continuously updated and adapted (Stoyanov et al., 2015). Some of these tools might be considered good beginnings, but their “limited dimensions make generalizable considerations about the worth of apps” (Cherner, Dix and Lee, 2014, p. 179). Thus, there is a strong need for effective evaluation tools to help parents and teachers when choosing educational apps (Callaghan and Reich, 2018).
Research limitations/implications
Even though this work was performed following systematic mapping guidelines, threats to the validity of the presented results still exist. Although custom search strings containing a rich collection of terms were used to search for papers, potentially relevant publications may have been missed by the advanced search. It is recommended that at least two different reviewers independently review titles, abstracts and later full papers for exclusion (Thomé et al., 2016); in this study, only one reviewer – the author – selected the papers and conducted the review. In the case of a single researcher, Kitchenham (2004) recommends that the single reviewer discuss included and excluded papers with an expert panel. Following this recommendation, the researcher discussed the inclusion and exclusion procedure with an expert panel of two professionals with research experience from the Department of (removed for blind review). To deal with publication bias, the researcher, in conjunction with the expert panel, used the search strategies identified by Kitchenham (2004), including grey literature, conference proceedings and communication with experts working in the field to identify any unpublished literature.
Practical implications
The purpose of this study was not to advocate any particular evaluation tool. Instead, the study aims to make parents, educators and software developers aware of the various evaluation tools available and to focus on their strengths, weaknesses and credibility. This study also highlights the need for standardized app evaluation (Green et al., 2014) via reliable tools that allow anyone interested to evaluate apps with relative ease (Lubniewski et al., 2018). Parents and educators need a reliable, fast and easy-to-use tool for evaluating educational apps that is more than a general guideline (Lee and Kim, 2015). A new generation of evaluation tools could also serve as a reference for software developers and designers creating educational apps with real educational value.
Social implications
The results of this study point to the necessity of creating new, research-based evaluation tools, in the form of rubrics or checklists, to help educators and parents choose apps with real educational value.
Originality/value
To date, no systematic review has been published summarizing the available app evaluation tools. This study, by critically analyzing material from multiple sources, aims to provide an overview of the evaluation tools available for children's educational apps.