Towards data-driven software engineering skills assessment

Jun Lin (The Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY), Nanyang Technological University, Singapore, and Alibaba-NTU Singapore Joint Research Institute, Singapore)
Han Yu (School of Computer Science and Engineering, Nanyang Technological University (NTU), Singapore)
Zhengxiang Pan (Interdisciplinary Graduate School, Nanyang Technological University (NTU), Singapore)
Zhiqi Shen (School of Computer Science and Engineering, Nanyang Technological University (NTU), Singapore)
Lizhen Cui (School of Computer Science and Technology, Shandong University, China)

International Journal of Crowd Science

ISSN: 2398-7294

Article publication date: 11 October 2018

Issue publication date: 29 November 2018


Abstract

Purpose

Today’s software engineers often work in teams to develop complex software systems. Successful software engineering in practice therefore requires team members to possess not only sound programming skills, such as analysis, design, coding and testing, but also soft skills, such as communication, collaboration and self-management. However, existing examination-based assessments are often inadequate for quantifying students’ soft skill development. The purpose of this paper is to explore alternative ways of assessing software engineering students’ skills through a data-driven approach.

Design/methodology/approach

In this paper, the exploratory data analysis approach is adopted. Leveraging the proposed online agile project management tool – Human-centred Agile Software Engineering (HASE) – a study was conducted in 2014 involving 21 Scrum teams comprising over 100 undergraduate software engineering students working on multi-week coursework projects.

Findings

During this study, students performed close to 170,000 software engineering activities, which were logged by HASE. By analysing the collected activity trajectory data set, the authors demonstrate the potential of this new research direction to enable software engineering educators to quantifiably understand their students’ skill development and take a proactive approach in helping them improve their programming and soft skills.

Originality/value

To the best of the authors’ knowledge, no previous studies using software engineering activity data to assess software engineers’ skills have been published.

Citation

Lin, J., Yu, H., Pan, Z., Shen, Z. and Cui, L. (2018), "Towards data-driven software engineering skills assessment", International Journal of Crowd Science, Vol. 2 No. 2, pp. 123-135. https://doi.org/10.1108/IJCS-07-2018-0014

Publisher: Emerald Publishing Limited

Copyright © 2018, Jun Lin, Han Yu, Zhengxiang Pan, Zhiqi Shen and Lizhen Cui.

License

Published in International Journal of Crowd Science. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

Most hiring managers in software companies understand that a successful member of the software engineering team needs to be strong in both programming skills (e.g. software design, coding and testing skills) and soft skills (e.g. communication, collaboration and self-management skills). Programming skills can be gauged, at least in part, from students’ performance in examinations and programming contests. Soft skills are much harder to assess, especially during the limited time given in job interviews. Although the concept of these skills can be taught, the ability to apply them consistently in practice can only be acquired through one’s own experience.

In tertiary education institutions, software engineering students are often assessed by a combination of examinations and coursework projects. Many educators have realized the limitations of examinations in assessing students’ practical skills. Thus, coursework projects often serve as an opportunity for students to both practice and demonstrate their skills. However, as an instructor may face tens or even hundreds of students in a semester, it is not practical for him or her to learn the strengths and weaknesses of each student’s skills in detail through observation. Technologies that can objectively quantify students’ skill development are needed to enable instructors to proactively and effectively help each student.

With the emergence of systems capable of collecting personal behaviour trajectory big data (Heymann and Garcia-Molina, 2011), data-driven analysis of people’s characteristics over time is changing how students’ performance can be measured. Some funding agencies are starting to support research in data-driven student assessment technologies to complement traditional examination scores. For example, the Ministry of Education in Singapore has started an initiative to build technological solutions capable of holistically assessing students’ twenty-first-century competencies (e.g. critical thinking and self-directed learning skills).

Following a similar line of thinking, in this paper, we explore how software development behaviour data can be used to assess students’ programming and soft skills. As agile software development (ASD) involves more human factors reflecting developers’ personal characteristics than plan-driven methodologies (Cockburn and Highsmith, 2001), we focus on tracking students’ activities in the ASD process. For this purpose, we conducted a 12-week study involving 125 undergraduate software engineering students from Beihang University, Beijing, China. The students self-organized into 21 ASD teams of five to seven persons. Each team developed one software system of significant complexity following the Scrum ASD method as part of their course requirements. Examples of the coursework projects include “A Personal Healthy Living App”, “A Social Network App for Senior Citizens” and “An Activity Tracking App for the Elderly”.

Students in this study carried out software engineering activities at various stages of the Scrum methodology in our online agile project management (APM) tool – the Human-centred Agile Software Engineering (HASE) platform (www.linjun.net.cn/hase/) (Lin et al., 2014). HASE mainly supports activities during the sprint planning and sprint review/retrospective phases. Such activities include proposing tasks; estimating the priority, difficulty and time required for each task; deciding how to allocate tasks; gathering collaboration information; reviewing the timeliness and quality of completed tasks; and providing feedback on individual team members’ moods at different points during a sprint. During the study, students logged 169,137 ASD activities in the HASE platform. By analysing the collected data set to reflect students’ programming skills, collaboration and mood stability, we demonstrate the potential of this research direction and discuss its implications for software engineering education.

2. Related work

To the best of our knowledge, no previous studies using software engineering activity data to assess software engineers’ skills have been published. Nevertheless, as skills assessment has always been an important problem, other methods have been applied in attempts to address it.

In 2004, Kitchenham et al. advocated evidence-based software engineering (EBSE), modelled on the practice of evidence-based medicine (Kitchenham et al., 2004). They called for a technological platform for tracking and analysing important factors in software engineering, such as skills factors and life-cycle factors, and analysed its potential benefits. Nevertheless, that work aimed to produce methods that support the development of high-quality software through objective analysis of performance-related indicators. Although similar in principle to our work, it does not specify how EBSE can assess the skills demonstrated by software engineers or how such insights can be used to improve software engineering education.

In 2011, Salleh et al. presented the results of a systematic literature review concerning agile pair programming effectiveness (Salleh et al., 2011). The paper analysed compatibility factors, such as the feel-good, personality and skill-level factors, and their effect on pair programming effectiveness. Four metrics were used in the analysis:

  1. academic performance;

  2. technical productivity;

  3. program/design quality; and

  4. learning satisfaction.

As the study was not focused on assessment, its general findings are not directly applicable to skills assessment. Nevertheless, it did point towards the importance of soft skills in software engineering.

In 2014, Lin et al. started to track personal performance data with APM tools to study task allocation-related decision-making under Scrum (Lin et al., 2014). That study used the same research techniques as reported in this paper. However, it focused on analysing students’ programming skills and did not consider soft skills such as collaboration and mood.

3. Study design

In this section, we present our research approach and the metrics adopted in our analysis.

3.1 Research approach

We use the HASE APM platform to unobtrusively track the student participants’ activities in the Scrum ASD process, including their decision-making, collaboration, task assignment and mood. The platform provides six main features to support APM that cover the sprint planning and sprint review/retrospective phases:

  1. Registration: To build user profiles, HASE requires registrants to specify their self-assessed competence levels in different areas of expertise such as familiarity with specific programming languages, system design methodologies and user interface (UI) design tools. This information will only be used to compute an initial assessment for a user in the absence of peer ratings or performance data. Once data from these relatively more objective sources become available, the user’s self-assessment will be excluded from the assessment result.

  2. Team and role management: HASE supports the creation of teams, the selection of product owners and stakeholders into the teams and the assignment of different roles within a team (e.g. programmers and UI designers).

  3. Task management: Task information, including the task description, the skills required and the person who proposed each task, is displayed for all team members to view. The difficulty value of each task $T$ is recorded using an 11-point Likert scale (Likert, 1932), with 0 denoting “extremely easy” and 10 denoting “extremely hard”. Each team member inputs his or her estimated difficulty value for each task into the HASE platform, which then uses the average as the task’s difficulty value ($D_T$). The students were asked to take into account both the technical challenge and the amount of effort required when judging the difficulty of a task. The priority value of each task is recorded in the same way, using an 11-point Likert scale with 0 denoting “extremely low priority” and 10 denoting “extremely high priority”; the HASE platform uses the average priority value for the task.

  4. Sprint planning: HASE records the teams’ decisions on which tasks are assigned to which team member during each sprint. Once assigned, the status of a task becomes “Assigned”. The assignee $i$ inputs his or her confidence value ($Conf_T^i$) for each task $T$ on an 11-point Likert scale, with 0 denoting “not confident at all” and 10 denoting “extremely confident”. Each team member also inputs the estimated time required to complete each task (in days); the HASE platform uses the average estimated time to generate the task’s deadline ($T_T^{est}$). Apart from the primary assignee, multiple students can collaboratively work on a task. The collaborator information for each task is also recorded by HASE.

  5. Sprint review/retrospective: Once a task is completed, the assignee changes its status in the HASE platform to “Completed”. This action triggers HASE to record the actual number of days ($T_T^{act}$) used to complete the task. HASE also provides functions for team members to peer review the quality ($Qual_T$) of each completed task $T$. The quality of a completed task is recorded using an 11-point Likert scale, with 0 representing “extremely low quality” and 10 representing “extremely high quality”. The average quality rating for each task is used by HASE as the final quality rating for that task.

  6. Team morale monitoring: During the sprint planning meeting, team members can report their current mood values in the HASE platform. A person $i$’s mood at the beginning of a sprint $t$ ($m_i^{begin}(t)$) is represented on a five-point Likert scale, with 1 representing “very low” and 5 representing “very high”. During the sprint review/retrospective meeting, each task assignee $i$ can report his or her mood after completing a task at the end of sprint $t$ ($m_i^{end}(t)$) using the same five-point Likert scale.

The input data required from ASD teams by the HASE platform result from students’ activities following the Scrum methodology. In this way, users of HASE can behave as if they were using any APM tool, without expending additional effort to help with data collection. Thus, the data collection process remains unobtrusive to the participants. Over the 12-week coursework project, the HASE platform collected 169,137 behaviour trajectory records related to software engineering activities from the 125 students who participated in this study.
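
To make this concrete, the minimal sketch below (in Python) shows how an APM tool in the style of HASE might aggregate team members’ Likert-scale estimates into the average difficulty ($D_T$), priority and estimated-time values described above; the class and field names are our own illustrative choices, not HASE’s actual schema.

```python
from dataclasses import dataclass, field
from statistics import mean

# Illustrative sketch of per-task estimate aggregation in the style of
# HASE; class and field names are hypothetical, not HASE's actual schema.
@dataclass
class Task:
    description: str
    difficulty_votes: list = field(default_factory=list)  # 0-10 per member
    priority_votes: list = field(default_factory=list)    # 0-10 per member
    est_days_votes: list = field(default_factory=list)    # days per member

    @property
    def difficulty(self) -> float:  # D_T: average reported difficulty
        return mean(self.difficulty_votes)

    @property
    def priority(self) -> float:    # average reported priority
        return mean(self.priority_votes)

    @property
    def est_days(self) -> float:    # T_T^est: average estimated days
        return mean(self.est_days_votes)

task = Task("Implement login UI",
            difficulty_votes=[6, 7, 5],
            priority_votes=[9, 8, 9],
            est_days_votes=[3, 4, 3])
print(task.difficulty, task.priority, task.est_days)  # 6, ~8.67, ~3.33
```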

3.2 Metrics

In this paper, we adopt the exploratory data analysis (EDA) approach (Tukey, 1977) to analyse the collected data. EDA is an approach for analysing data sets to summarize their main characteristics, often with visual methods, and is primarily aimed at understanding what can be learnt from the data beyond formal modelling or hypothesis testing. We use the following metrics to facilitate our analysis:

  • Technical productivity ($\mu_i$): It refers to the average amount of workload a student $i$ can complete during a sprint. In this study, we use the task difficulty value as an indicator of the workload of a task, as the task difficulty values reported by students denote both the technical challenge and the amount of effort required to complete the task.

  • Competence ($Comp_i$): It refers to the probability that a student $i$ can complete a task assigned to him or her with satisfactory quality before the stipulated deadline. In this paper, the outcome of a task needs to achieve an average quality rating higher than 5 out of 10 to be considered as having satisfactory quality. This metric is similar to a student’s reputation. Thus, we adopt a reputation computation model – the beta reputation model (Jøsang et al., 2007) – which is widely used in the fields of online services, artificial intelligence and network communications (Pan et al., 2009; Yu et al., 2010, 2011; Liu et al., 2013; Yu et al., 2013a). It is calculated as follows (a code sketch follows after this list):

    (1) $Comp_i = \frac{\alpha_i + 1}{(\alpha_i + 1) + (\beta_i + 1)} \in (0, 1)$

    where $\alpha_i$ and $\beta_i$ are calculated as:

    (2) $\alpha_i = \sum_{T \in \emptyset(i)} 1[T_T^{act} - T_T^{est} \leq 0 \text{ and } Qual_T > 5] \cdot D_T$

    (3) $\beta_i = \sum_{T \in \emptyset(i)} 1[T_T^{act} - T_T^{est} > 0 \text{ and } Qual_T \leq 5] \cdot D_T$

    The indicator function $1[\text{condition}]$ in equations (2) and (3) equals 1 if “condition” is true; otherwise, it equals 0. $\emptyset(i)$ denotes the set of tasks $i$ has worked on up to the current point in time. The “+1” terms in the numerator and denominator of equation (1) are Laplace smoothing terms (Wang and Singh, 2007), which ensure that if $i$ has no previous track record, $Comp_i$ evaluates to 0.5, indicating maximum uncertainty about $i$’s performance.

  • Team morale (begin) ($M_j^{begin}(t)$): It refers to the average of the mood values reported by members of team $j$ during the sprint planning meeting of sprint $t$.

  • Team morale (end) ($M_j^{end}(t)$): It refers to the average of the mood values reported by members of team $j$ during the sprint review/retrospective meeting of sprint $t$.
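
The following minimal sketch illustrates how the competence metric of equations (1)-(3) could be computed from completed-task records. The record format – a tuple of (actual days, estimated days, quality rating, difficulty) per task – is an assumption for illustration, not the HASE log schema.

```python
# Hedged sketch of the beta-reputation competence score, equations (1)-(3).
# Each task record is assumed to be (actual_days, estimated_days, quality,
# difficulty); this format is illustrative, not HASE's actual schema.
def competence(tasks):
    """Competence score in (0, 1); evaluates to 0.5 with no track record."""
    # alpha: difficulty-weighted successes (on time AND quality > 5)
    alpha = sum(d for act, est, qual, d in tasks if act - est <= 0 and qual > 5)
    # beta: difficulty-weighted failures (late AND quality <= 5)
    beta = sum(d for act, est, qual, d in tasks if act - est > 0 and qual <= 5)
    # "+1" terms are the Laplace smoothing of equation (1)
    return (alpha + 1) / ((alpha + 1) + (beta + 1))

print(competence([]))                                # 0.5: maximum uncertainty
print(competence([(2, 3, 8, 6.0), (5, 4, 4, 3.0)]))  # ~0.64: one success, one failure
```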

4. Results and analysis

Our EDA identified certain personal characteristics that may become useful markers for assessing students’ skills in the future. Figure 1 shows the participants’ competence scores versus their productivity scores at the end of the study. It can be observed that the participants’ performances in terms of these metrics are quite distinguishable. In general, participants who demonstrated high competence also tended to be able to handle the high workloads allocated to them (r = 0.7443, p < 0.01). One participant achieved significantly higher competence and productivity scores than the rest of the participants.

Collaboration is generally regarded as a useful way to improve the effectiveness and efficiency of a software team. Figure 2 shows a heat map of the number of collaborators per task each participant had in each of the 12 weeks. The lighter the colour of a point in the figure, the more collaborators per task the participant had in that particular week. The colour scale mapping colour gradients to the actual number of collaborators per task is shown on the right-hand side of the figure. Participants are ranked according to their average number of collaborators per task per week; those shown at the bottom of the figure ranked highest among their peers. It can be observed that this metric can clearly distinguish the behaviours of different participants.

Stability of mood is a sign of one’s maturity and self-management skills. Figure 3 shows a heat map of the intra-week mood change, computed as $\Delta m_i(t) = m_i^{end}(t) - m_i^{begin}(t) \in (-5, 5)$ for each week, over the 12 weeks. In all, 102 of the 125 participants provided valid reports of their $m_i^{begin}(t)$ and $m_i^{end}(t)$ values. The colour scale mapping colour gradients to the intra-week mood change is shown on the right-hand side of the figure. Participants are ranked according to their average intra-week mood change per week over the 12 weeks; those shown at the bottom of the figure ranked highest among their peers. It can be observed that this metric distinguishes the behaviours of different participants quite clearly. The mood of those who ranked high on this metric tends to increase at the end of a week after a sprint of development. As their mood at the beginning of the week also tends to be high, the increments are generally small; thus, their mood remains relatively stable throughout a sprint. Those who ranked low on this metric (top part of the figure) tend to have large negative mood swings, especially towards the end of the study.
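
As a small illustration of this metric, the sketch below computes each participant’s average intra-week mood change and the ranking used in Figure 3; the data layout and participant names are hypothetical.

```python
# Sketch of the intra-week mood change metric behind Figure 3. Weekly
# begin/end mood reports (1-5) per participant are assumed to be stored
# as parallel lists; this layout is illustrative.
def avg_mood_change(begin, end):
    """Mean of delta m_i(t) = m_i^end(t) - m_i^begin(t) over all weeks."""
    deltas = [e - b for b, e in zip(begin, end)]
    return sum(deltas) / len(deltas)

reports = {
    "student_a": ([4, 4, 5], [4, 5, 5]),  # high, stable mood
    "student_b": ([4, 3, 4], [2, 2, 1]),  # large negative swings
}
# Ascending order corresponds to the top-to-bottom ordering of the heat
# map: the highest average mood change appears at the bottom.
ranking = sorted(reports, key=lambda s: avg_mood_change(*reports[s]))
print(ranking)  # ['student_b', 'student_a']
```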

To explore whether the assessment of participants’ skills may help us identify students who are good at hands-on software engineering but do not stand out in examinations, we construct a skills score that aggregates the effects of competence, productivity, collaboration and mood stability into one scalar measurement. In this study, the skills score $S_{skills}(i)$ for a participant $i$ is computed as:

(4) $S_{skills}(i) = \frac{\left(S_{\mu_i} + S_{Comp_i} + S_{Col_i}\right)\left(1 - \left|S_{\Delta m_i}\right|\right)}{3} \times 100$

where $S_{\mu_i} \in [0, 1]$, $S_{Comp_i} \in [0, 1]$, $S_{Col_i} \in [0, 1]$ and $S_{\Delta m_i} \in (-1, 1)$ are the normalized scores for $i$ in terms of productivity, competence, collaboration and mood stability, respectively ($S_{skills}(i) \in [0, 100]$).
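
As a minimal sketch, assuming the reconstruction of equation (4) above and component scores already normalized to their stated ranges, the skills score can be computed as follows; the input values are illustrative only.

```python
# Sketch of equation (4). Inputs are assumed to be pre-normalized:
# s_mu, s_comp, s_col in [0, 1]; s_dm (mood change) in (-1, 1).
def skills_score(s_mu: float, s_comp: float, s_col: float, s_dm: float) -> float:
    """Aggregate skills score in [0, 100]."""
    return (s_mu + s_comp + s_col) * (1 - abs(s_dm)) / 3 * 100

print(skills_score(0.8, 0.7, 0.6, 0.1))  # ~63.0 for these illustrative inputs
```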

Figure 4 plots the participants’ skills scores against their examination scores for the software engineering subject in the same semester. The examination paper used was the standard end-of-semester software engineering examination from Beihang University, designed by the professors in charge of the course and reviewed by the university examination board. It can be observed that, according to their exam scores, the participants’ performance clustered in the range of 80 to 100 marks, enabling almost all of them to achieve a grade of A or A+. However, their skills scores spread from as low as 10 marks to as high as 80 marks, making their performance more distinguishable than their exam scores do. The skills scores have only a weak positive correlation with the exam scores (r = 0.2129, p < 0.05). Furthermore, the top three best-performing participants in terms of skills scores achieved only average exam scores among their peers, and many participants with high exam scores achieved low skills scores.

We acknowledge that there may be other ways to compute the skills score, and we refrain from claiming that our current formulation is the most effective. Nevertheless, the results show that the data-driven skills score can potentially distinguish the performance of software engineering students better than examination-based assessments.

Figure 5 shows the distribution of students’ average self-reported mood values during the sprint planning meeting at the start of each sprint. The colour scale represents the average self-reported mood values. The average mood value is 3.86 out of 5. The correlation between students’ mood during the sprint planning meetings and their competence values is r = 0.0025 (p = 0.9394), indicating no statistically significant correlation. The correlation between students’ mood during the sprint planning meetings and their technical productivity values is r = 0.1505 (p < 0.01), indicating a statistically significant, albeit weak, positive correlation.

Figure 6 shows the distribution of students’ average self-reported mood values during the sprint review/retrospective meeting at the end of each sprint. The colour scale represents the average self-reported mood values. The average mood value is 3.80 out of 5, slightly lower than at the beginning of the sprint. The correlation between students’ mood during the sprint review/retrospective meetings and their competence values is r = 0.0148 (p = 0.5946), indicating no statistically significant correlation. The correlation between students’ mood during the sprint review/retrospective meetings and their technical productivity values is r = 0.4207 (p < 0.01), indicating a statistically significant positive correlation. Therefore, based on these analyses, team members with high technical productivity tend to have high morale, especially at the end of a sprint after completing the tasks allocated to them.
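
For readers who wish to reproduce this style of analysis, the following sketch computes a Pearson correlation coefficient and its p-value with SciPy’s pearsonr; the arrays are illustrative stand-ins, not the study’s data.

```python
# Sketch of the Pearson correlation analysis reported above. The mood and
# productivity arrays are illustrative stand-ins for per-student averages.
from scipy.stats import pearsonr

mood_end = [4.2, 3.5, 3.9, 4.8, 3.1, 4.4]      # avg mood at sprint review (1-5)
productivity = [7.1, 4.0, 5.2, 8.3, 3.5, 6.9]  # avg completed workload per sprint

r, p = pearsonr(mood_end, productivity)
print(f"r = {r:.4f}, p = {p:.4f}")  # correlation is significant if p < 0.05
```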

5. Implications

By providing a technological platform for the longitudinal tracking of software engineers’ behaviour trajectory data related to software development, we open up new possibilities for different parties involved in software engineering, namely, researchers, educators and practitioners.

5.1 Implications for software engineering researchers

The availability of large software engineering behaviour data sets will present new challenges to researchers to develop new analytics techniques. With detailed information on each user’s demographics, skill indicator values over time, detailed interactions with the software engineering tools provided, interactions with other team members and decisions made, the high dimensionality of the data sets makes it a challenge to identify which feature, or combination of features, can form an accurate predictor of a given behaviour of interest. Machine learning (Anzai, 1992) can be leveraged to develop useful techniques for this purpose.

However, before this step can happen, additional efforts are needed to complement the behaviour data sets with labelled meta-data on what the observed behaviour patterns mean. This can potentially be achieved by conducting follow-up interview-based studies with the participants through carefully designed questionnaires once unique behaviour patterns have been identified. This also further opens up the research question on how to present the behaviour pattern data in a human-interpretable format to facilitate the interviews.

5.2 Implications for software engineering educators

Software engineering educators may be a viable source of knowledge in the effort to build up a repository of meta-data for the behaviour patterns obtained by the tracking platform. As they frequently interact with students who may be using the proposed tracking platform, they can potentially provide insights into the meanings of the behaviour patterns. The challenge here is for software engineering researchers to provide tools that enable educators who are willing to contribute meta-data for the behaviour patterns to do so with ease. Techniques from the field of crowdsourcing (Doan et al., 2011) may offer a starting point for such an effort.

Once new techniques for automatically assessing a student’s skill development based on his or her behaviour patterns are developed, new forms of real-time personalized interventions may become available to educators. The simplest possibility is for the system to alert course instructors to students who may require help in specific areas. By mining the behaviour patterns of many students and cross-checking them against academic performance, or even employment prospects if such data are available, the system may be able to suggest behaviour trajectories that are most beneficial for students from different backgrounds, thereby making data-driven personalized software engineering training possible. The envisioned behaviour data tracking platform can potentially turn software engineering education into a test bench for open science and enable a more adaptive and individualized learning experience.

5.3 Implications for software engineering practitioners

The behaviour data are tracked automatically and unobtrusively by the APM tool. The peer review-related functions represent activities that an ASD team member already performs when following ASD practice. Overall, the proposed APM tool-based behaviour data tracking approach does not require software engineers to incur additional overhead. Meanwhile, the data analytics functions provide ASD teams with insights into detailed team dynamics and performance information that can be useful for decision-making. Furthermore, with the behaviour data as input, automatic context-aware software engineering task allocation decision support mechanisms (Lin, 2013) become a distinct possibility. These can be based on similar mechanisms from the field of crowdsourcing (Yu et al., 2013b, 2013c, 2015).

6. Discussions and future work

In this paper, we explore a novel data-driven approach to assess software engineering students’ skills. Different from traditional interview/internship-based methods, our study is based on participants’ ASD activity trajectory data collected unobtrusively during normal ASD processes through our HASE APM platform. This type of data objectively reflects developers’ ASD activities and performance at fine granularity.

As the data collection and analytics technologies further develop, software engineering students may eventually perform all coursework activities in a technology platform capable of unobtrusively collecting their behaviour data and continuously assessing a wide range of their skills over time. In this way, the students’ practical skill development can be monitored by their instructors so that pedagogical methods can be personalized to help individual students in specific areas. Such a tool will enable software engineering educators to have a quantifiable way of understanding their students’ skill development and take a proactive approach in helping them develop programming and soft skills. The skills scores may, one day, be part of a student’s academic profile and be taken into consideration by industry recruiters to help companies identify well-rounded software engineering talents suitable for their teams.

From this study, we see the start of a series of research and applications in data-driven software engineering skills assessment. In future research, we plan to conduct surveys and interviews to understand in more depth how students collaborate. We will continue using the HASE platform to collect agile programming activity data over subsequent semesters and expand our data collection effort to include more universities, so as to investigate the possible effects of socio-cultural factors. More fine-grained data, such as the time each student spent on a task and a breakdown of how that time was used, will also be collected in future versions of the HASE platform.

Figures

Figure 1. Students’ competence versus productivity

Figure 2. Average number of collaborators per task

Figure 3. Intra-week mood variation

Figure 4. Participants’ skills score versus their examination score

Figure 5. Students’ average morale before a sprint

Figure 6. Students’ average morale after a sprint

References

Anzai, Y. (1992), Pattern Recognition and Machine Learning, Academic Press, Cambridge, MA.

Cockburn, A. and Highsmith, J. (2001), “Agile software development, the people factor”, Computer, Vol. 34 No. 11, pp. 131-133.

Doan, A., Ramakrishnan, R. and Halevy, A.Y. (2011), “Crowdsourcing systems on the world-wide web”, Communications of the ACM, Vol. 54 No. 4, pp. 86-96.

Heymann, P. and Garcia-Molina, H. (2011), “Turkalytics: analytics for human computation”, in Proceedings of the 20th International Conference on World Wide Web (WWW’11), pp. 477-486.

Jøsang, A., Ismail, R. and Boyd, C. (2007), “A survey of trust and reputation systems for online service provision”, Decision Support Systems (DSS), Vol. 43 No. 2, pp. 618-644.

Kitchenham, B.A., Dyba, T. and Jorgensen, M. (2004), “Evidence-based software engineering”, in Proceedings of the 26th International Conference on Software Engineering (ICSE’04), pp. 273-281.

Likert, R. (1932), “A technique for the measurement of attitudes”, Archives of Psychology, Vol. 22 No. 140.

Lin, J. (2013), “Context-aware task allocation for distributed agile team”, in Proceedings of the 28th IEEE/ACM International Conference on Automated Software Engineering (ASE’13), pp. 758-761.

Liu, S., Yu, H., Miao, C. and Kot, A.C. (2013), “A fuzzy logic based reputation model against unfair ratings”, in Proceedings of the 12th International Conference on Autonomous Agents and Multi-agent Systems (AAMAS’13), pp. 821-828.

Lin, J., Yu, H., Shen, Z. and Miao, C. (2014), “Studying task allocation decisions of novice agile teams with data from agile project management tools”, in Proceedings of the 29th IEEE/ACM International Conference on Automated Software Engineering (ASE’14), pp. 689-694.

Pan, L., Meng, X., Shen, Z. and Yu, H. (2009), “A reputation pattern for service oriented computing”, in Proceedings of the 7th International Conference on Information, Communications and Signal Processing (ICICS’09).

Salleh, N., Mendes, E. and Grundy, J. (2011), “Empirical studies of pair programming for CS/SE teaching in higher education: a systematic literature review”, IEEE Transactions on Software Engineering (TSE), Vol. 37 No. 4, pp. 509-525.

Tukey, J.W. (1977), Exploratory Data Analysis, Addison-Wesley, Boston.

Wang, Y. and Singh, M.P. (2007), “Formal trust model for multiagent systems”, in Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI’07), pp. 1551-1556.

Yu, H., Shen, Z., Leung, C., Miao, C. and Lesser, V.R. (2013b), “A survey of multi-agent trust management systems”, IEEE Access, Vol. 1 No. 1, pp. 35-50.

Yu, H., Liu, S., Kot, A.C., Miao, C. and Leung, C. (2011), “Dynamic witness selection for trustworthy distributed cooperative sensing in cognitive radio networks”, in Proceedings of the 13th IEEE International Conference on Communication Technology (ICCT’11), pp. 1-6.

Yu, H., Miao, C., An, B., Leung, C. and Lesser, V.R. (2013a), “A reputation management approach for resource constrained trustee agents”, in Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI’13), pp. 418-424.

Yu, H., Miao, C., Shen, Z., Leung, C., Chen, Y. and Yang, Q. (2015), “Efficient task Sub-delegation for crowdsourcing”, in Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI-15), pp. 1305-1311.

Yu, H., Shen, Z., Miao, C. and An, B. (2013c), “A reputation-aware decision making approach for improving the efficiency of crowdsourcing systems”, in Proceedings of the 12th International Conference on Autonomous Agents and Multi-agent Systems (AAMAS’13), pp. 1315-1316.

Yu, H., Shen, Z., Miao, C., Leung, C. and Niyato, D. (2010), “A survey of trust and reputation management systems in wireless communications”, Proceedings of the IEEE, Vol. 98 No. 10, pp. 1755-1772.


Acknowledgements

This research is supported by the National Research Foundation, Prime Minister’s Office, Singapore under its IDM Futures Funding Initiative; Interdisciplinary Graduate School, NTU; and the Lee Kuan Yew Post-Doctoral Fellowship Grant.

Corresponding author

Jun Lin can be contacted at: junlin@ntu.edu.sg

About the authors

Jun Lin is a Research Fellow at the Joint NTU-UBC LILY Research Centre, Nanyang Technological University, Singapore; a Research Scientist at the Alibaba-NTU Singapore Joint Research Institute; and an Adjunct Associate Professor at College of Software, Beihang University, China. He holds a PhD degree from the School of Computer Science and Engineering, Nanyang Technological University, Singapore. His current research interests include LoRaWAN, blockchain, Internet of Things, crowd science, software engineering and AI technologies.

Han Yu is a Nanyang Assistant Professor (NAP) at the School of Computer Science and Engineering, Nanyang Technological University, Singapore. He has been a Visiting Scholar at the Department of Computer Science and Engineering, Hong Kong University of Science and Technology (HKUST) from 2017 to 2018. Between 2015 and 2018, he held the prestigious Lee Kuan Yew Post-Doctoral Fellowship (LKY PDF) at the Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY). His research focuses on artificial intelligence-powered crowd-computing.

Zhengxiang Pan is a PhD student at Interdisciplinary Graduate School, Nanyang Technological University, Singapore. His research focuses on human–computer interaction, crowdsourcing and social mobilization.

Zhiqi Shen is a Senior Scientist at the School of Computer Science and Engineering, Nanyang Technological University, Singapore. He obtained his BSc degree in Computer Science and Technology from Peking University, MEng in Computer Engineering from Beijing University of Technology and PhD from Nanyang Technological University. His research interests include artificial intelligence, software agents, multi-agent systems, goal-oriented modelling, agent-oriented software engineering, semantic web/grid, e-Learning, bioinformatics and bio-manufacturing, agent augmented interactive media, game design and interactive storytelling.

Lizhen Cui is a Professor at the School of Computer Science and Technology, Shandong University, China. His research interests include big data management and big data analytics, big data artificial intelligence, service computing and collaborative computing, software architecture and technology in cloud.
